Lesson Archives

  1. ABSTRACT Most, if not all, software projects cannot implement all of their requirements within the given time and available resources. Hence Requirements Prioritization (RP) is needed to define priorities under the available resources and constraints. It could be claimed that the RP process represents the heart of software systems development, as failure to choose the right requirements during the requirements elicitation phase, or for release planning, can cause projects to be challenged or to fail. Many prioritization techniques are available in the literature for prioritizing software requirements. However, most of them work well on a small number of requirements; when the number of requirements and stakeholders’ preferences increases, many of these techniques suffer from shortcomings such as scalability, uncertainty, time consumption, and complexity. In addition, most of these techniques do not take into consideration the effect of a project’s goals on the final ranking of alternatives. In this paper, we propose a new RP technique based on goal weights to reduce the problems of time consumption, scalability and complexity. We evaluate our RP technique through case studies and compare the results with other available RP techniques. In this paper we present the results of a comparison with the Fuzzy Analytical Hierarchy Process (FAHP).
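
For illustration only, a minimal sketch of goal-weighted prioritization, not the paper's algorithm: each requirement is scored by the weighted sum of its contributions to the project goals. The goal names, weights, and contribution scores below are hypothetical inputs that would normally come from stakeholders.

```python
# Hypothetical goal weights agreed with stakeholders (sum to 1.0).
goal_weights = {"time_to_market": 0.5, "quality": 0.3, "cost_reduction": 0.2}

# Contribution of each requirement to each goal on an arbitrary 0-9 scale.
contributions = {
    "REQ-1": {"time_to_market": 7, "quality": 3, "cost_reduction": 5},
    "REQ-2": {"time_to_market": 2, "quality": 9, "cost_reduction": 1},
    "REQ-3": {"time_to_market": 5, "quality": 4, "cost_reduction": 8},
}

def rank_requirements(goal_weights, contributions):
    """Score each requirement as the weighted sum of its goal contributions."""
    scores = {
        req: sum(goal_weights[g] * c for g, c in contrib.items())
        for req, contrib in contributions.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_requirements(goal_weights, contributions))
```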

  2. ABSTRACT Software development productivity is one of the major and vital aspects that impact the software industry and the time to market of many software products. Although many studies have been conducted to improve productivity measurements within the software engineering research domain, productivity is still an issue in the current software development industry because not all impacting factors and their relationships are known. This paper sheds light on some of these factors and assesses their impact as seen by a random sample of industrial software SMEs. It also elaborates the main best practices that help improve software productivity, based on real industrial projects. The resulting list of factors and best practices can be used to guide further productivity analysis and can be taken as a basis for building improved and more optimized productivity models. The paper also identifies some of the challenges in productivity measurement and recommends a set of best practices that can serve as a basis for productivity measurement and estimation models.

  3. ABSTRACT Software reliability plays an integral part in the software development process. The growth in the use of IT in today’s interconnected world necessitates the production of reliable software systems, considering the potential loss and damage due to failure. Several software reliability growth models exist to predict the reliability of software systems. The non-homogeneous Poisson process (NHPP) is a probabilistic model for determining software reliability. The application of order statistics is a relatively new technique for estimating software reliability for time-domain data based on an NHPP with a distribution model. This paper presents the Burr Type III model as a software reliability growth model and derives the expressions for an efficient reliability function using order statistics. The parameters are estimated using the maximum likelihood (ML) estimation procedure. Live data sets are analysed and the results are presented.
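
For reference, a minimal sketch of how these ingredients typically fit together under standard NHPP assumptions, where a is the expected total fault content and c, k are the Burr Type III shape parameters; the paper's own derivation via order statistics may differ in detail.

```latex
% Sketch under standard NHPP assumptions; a = expected total fault content,
% c, k = Burr Type III shape parameters. The paper's order-statistics
% derivation may differ in detail.
\begin{align*}
  F(t) &= \left(1 + t^{-c}\right)^{-k}, \qquad t > 0,\ c > 0,\ k > 0
    && \text{(Burr Type III CDF)} \\
  m(t) &= a\,F(t) = a\left(1 + t^{-c}\right)^{-k}
    && \text{(NHPP mean value function)} \\
  \lambda(t) &= m'(t) = a\,c\,k\,t^{-(c+1)}\left(1 + t^{-c}\right)^{-(k+1)}
    && \text{(failure intensity)} \\
  R(x \mid t) &= \exp\!\left\{-\left[m(t+x) - m(t)\right]\right\}
    && \text{(conditional reliability)}
\end{align*}
```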

  4. ABSTRACT The main goal of this paper is to demonstrate how an ontology can be used to support the full path from requirements to code in model-driven development. The proposed approach first demonstrates how an ontology can be utilized in the initial step of requirements specification in an intuitive way, and then how the requirements generated with ontology guidance can be used as the basis for transformations to code via Analysis, Detailed Design, and Platform-Specific Models. The requirements are generated in the form of features suitable for Feature Driven Development (FDD). Models are generated according to a particular ontological style, including the selection of appropriate design patterns for these models.

  5. ABSTRACT Embedded SQL queries have become an integral part of modern web applications, and many SQL complexity metrics have been proposed, but the validation of these metrics remains an open question. In this study we validate SQL complexity metrics for adaptive maintenance effort (DLOC) estimation. We find that, unlike classic complexity characteristics, SQL complexity metrics retain a good correlation with DLOC as the minimum function length grows, for small and medium source code changes. We propose a linear regression model for estimating the degree to which application business logic is mixed with SQL query compiling logic (SQL Mix Complexity). The SELECT operator count, in combination with CCN and Halstead's Difficulty, correlates well with DLOC for methods longer than 2000 symbols, for small and medium source code changes.
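
A minimal sketch of the kind of regression involved, assuming an ordinary least-squares fit of DLOC on the metrics named in the abstract; the coefficients and metric values below are invented examples, not the paper's fitted model.

```python
import numpy as np

# Columns: SELECT operator count, cyclomatic complexity (CCN), Halstead Difficulty.
# All values are made-up illustrations.
X = np.array([
    [3, 12, 18.5],
    [1,  5,  7.2],
    [6, 20, 31.0],
    [2,  8, 11.4],
], dtype=float)
dloc = np.array([140.0, 35.0, 260.0, 70.0])   # observed changed lines per method

# Add an intercept column and solve the least-squares problem.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, dloc, rcond=None)
print("intercept and coefficients:", coef)

# Predicted DLOC for a new method with 4 SELECTs, CCN 10, Difficulty 15.0.
new_method = np.array([1.0, 4, 10, 15.0])
print("predicted DLOC:", new_method @ coef)
```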

  6. ABSTRACT Software development has always been characterized by metrics. One of the greatest challenges for software developers lies in predicting the development effort for a software system, which depends on developer abilities, size, complexity and other metrics. Several algorithmic cost estimation models, such as Boehm’s COCOMO, Albrecht's Function Point Analysis, Putnam’s SLIM and ESTIMACS, are available, but every model has its own pros and cons in estimating development cost and effort. The most common reason is that the project data available in the initial stages of a project is often incomplete, inconsistent, uncertain and unclear. In this paper, a Bayesian probabilistic model is explored to overcome the problems of uncertainty and imprecision, resulting in an improved process of software development effort estimation. The paper considers a software estimation approach using six key cost drivers of the COCOMO II model. The selected cost drivers are the inputs to the system. The concept of a Fuzzy Bayesian Belief Network (FBBN) is introduced to improve the accuracy of the estimation. Results show that the values of MMRE (Mean Magnitude of Relative Error) and PRED obtained by means of the FBBN are much better than the MMRE and PRED of the Fuzzy COCOMO II models. The validation of the results was carried out on the NASA-93 dem COCOMO II dataset.
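
For readers unfamiliar with the accuracy measures named here, a small sketch of the standard MMRE and PRED definitions follows; the effort values are invented, whereas the paper evaluates on the NASA-93 COCOMO II data.

```python
def mmre(actual, estimated):
    """Mean Magnitude of Relative Error."""
    mres = [abs(a - e) / a for a, e in zip(actual, estimated)]
    return sum(mres) / len(mres)

def pred(actual, estimated, level=0.25):
    """PRED(l): fraction of estimates whose relative error is within the level."""
    mres = [abs(a - e) / a for a, e in zip(actual, estimated)]
    return sum(1 for m in mres if m <= level) / len(mres)

# Illustrative effort values in person-months (not from the NASA-93 dataset).
actual    = [120.0, 300.0, 48.0, 840.0]
estimated = [110.0, 360.0, 52.0, 700.0]
print("MMRE:", mmre(actual, estimated))
print("PRED(0.25):", pred(actual, estimated))
```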

  7. ABSTRACT The accuracy of classification rules learned in data mining is affected by the learning algorithm used and by the availability of the whole training set in main memory during the learning process. In this paper, we propose a combination of data reduction techniques based on attribute relevancy, data abstraction, and data generalization. We also propose a hybrid classification algorithm based on a decision tree and a genetic algorithm. The decision tree, as a greedy algorithm, handles generalization, where each learned rule is covered by a large number of examples in the training set ("large-scope rules"). The genetic algorithm handles specialization in the training set, where a small number of examples cover each of the learned rules ("small-scope rules").
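
A minimal sketch of the coverage-based division of labour described above, under the assumption that leaf support counts decide which examples go to the specialization stage: a decision tree is trained first, and examples falling into sparsely populated leaves are set aside for the genetic algorithm (the GA itself is not sketched here). The threshold of 10 and the dataset are arbitrary choices.

```python
from collections import Counter
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(random_state=0).fit(X, y)

leaf_ids = tree.apply(X)       # leaf index reached by each training example
coverage = Counter(leaf_ids)   # how many examples support each leaf

# Examples in well-supported leaves are covered by "large-scope rules";
# the rest are routed to a separate specialization stage (a GA in the paper).
small_scope = [i for i, leaf in enumerate(leaf_ids) if coverage[leaf] < 10]
large_scope = [i for i, leaf in enumerate(leaf_ids) if coverage[leaf] >= 10]

print(f"{len(large_scope)} examples covered by large-scope rules")
print(f"{len(small_scope)} examples routed to the specialization (GA) stage")
```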

  8. ABSTRACT We make use of a well-known data structure consisting of two linear arrays to represent the Component Interaction Graph (CIG), and we have experimented with some possible CIGs for a Component Based Software (CBS) system to show the quantitative characteristics of the dependencies and to understand the ways in which these dependencies can be managed or minimized. We have developed a tool, 'CIGIET', for this purpose. An understanding of the interconnections of components is also desirable for maintenance purposes. Based on the observations, we suggest some guidelines for designing a CBS for functionality along with maintainability. This work attempts to provide an initial background for meaningful studies related to the concept of 'Design for Maintainability'.
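
One plausible reading of the "two linear arrays" representation (the abstract does not specify the exact layout) is a compressed adjacency form: an offset array and a target array, from which dependency counts such as fan-in and fan-out fall out directly. The component graph below is invented for illustration.

```python
# Hypothetical component interaction graph: component -> components it depends on.
interactions = {
    "UI":      ["Auth", "Orders"],
    "Auth":    ["DB"],
    "Orders":  ["DB", "Billing"],
    "Billing": ["DB"],
    "DB":      [],
}

components = list(interactions)
index = {c: i for i, c in enumerate(components)}

# Two linear arrays: offsets[i]..offsets[i+1] delimits the targets of component i.
offsets, targets = [0], []
for c in components:
    targets.extend(index[d] for d in interactions[c])
    offsets.append(len(targets))

def fan_out(c):
    i = index[c]
    return offsets[i + 1] - offsets[i]

def fan_in(c):
    return targets.count(index[c])

for c in components:
    print(f"{c}: fan-out={fan_out(c)}, fan-in={fan_in(c)}")
```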

  9. ABSTRACT Meeting stakeholders' requirements and expectations is one of the critical aspects on which any software organization in a market-driven environment focuses, and on which it spends a great deal of effort and expense to maximize the satisfaction of its stakeholders. Therefore, identifying the contents of a software product release becomes one of the critical decisions for software product success. Requirements prioritization refers to the activity through which the release contents that maximize stakeholder satisfaction can be identified [8]. This paper illustrates a value-oriented requirements prioritization approach for software product management. The technique proposed in this paper is based on the Hierarchical Cumulative Voting (HCV) and Value-Oriented Prioritization (VOP) techniques. The proposed technique, Value-Oriented HCV (VOHCV), addresses the weakness of HCV by selecting the best candidate requirements for each release not only on the basis of stakeholder-perceived value, as HCV does, but also in terms of the associated anticipated cost, technical risk, relative impact and market-related aspects. The VOHCV also addresses the weakness of VOP by supporting not only a flat requirements structure, as VOP does, but also a hierarchical structure. By this means VOHCV inherits the strengths of both VOP and HCV and addresses their weaknesses while selecting the best candidate release requirements, to maximize stakeholders' value and satisfaction [11].
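
An illustrative sketch only, not the paper's exact VOHCV formula: hierarchical cumulative voting propagates normalized points down the requirements tree, and a VOP-style adjustment then discounts each leaf requirement by its cost and risk. All group names, point allocations, costs and risks below are invented.

```python
# Points assigned by stakeholders at the top level of the hierarchy.
groups = {
    "Security": 60,
    "Usability": 40,
}
# Leaf requirements: (parent group, points within group, cost, risk) -- all invented.
requirements = {
    "REQ-A": ("Security", 70, 5, 2),
    "REQ-B": ("Security", 30, 2, 1),
    "REQ-C": ("Usability", 100, 3, 4),
}

group_total = sum(groups.values())

def priority(name):
    group, pts, cost, risk = requirements[name]
    group_w = groups[group] / group_total
    leaf_w = pts / sum(p for g, p, *_ in requirements.values() if g == group)
    value = group_w * leaf_w              # HCV-style down-propagated weight
    return value / (cost + risk)          # simple value-oriented discounting

for name in sorted(requirements, key=priority, reverse=True):
    print(name, round(priority(name), 4))
```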

  10. ABSTRACT This paper presents the Software Architecture Risk Assessment (SARA) Tool, which demonstrates the process of risk assessment at the software architecture level. The prototype tool accepts different types of input that define a software architecture. It parses these input files and produces quantitative metrics that are used to estimate the required risk factors. The final result of this process is to discern the potentially high-risk components in the software system. By combining data acquired from domain experts with measures obtained from Unified Modeling Language (UML) artifacts, the SARA Tool can be used at the architecture development phase, the design phase, or the implementation phase of the software development process to improve the quality of the software product.
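
As a rough illustration of how such risk factors are commonly composed at the architecture level (this may differ from SARA's exact computation), a normalized complexity measure derived from UML artifacts can be combined with a severity rating supplied by a domain expert. The component names and values below are invented.

```python
# name: (complexity metric derived from UML models, expert severity in [0, 1]).
components = {
    "Controller": (34.0, 0.95),
    "Scheduler":  (21.0, 0.60),
    "Logger":     (6.0,  0.10),
}

max_cpx = max(cpx for cpx, _ in components.values())

def risk_factor(name):
    cpx, severity = components[name]
    return (cpx / max_cpx) * severity    # normalized complexity x severity

# Components with the highest risk factors are candidates for closer review.
for name in sorted(components, key=risk_factor, reverse=True):
    print(f"{name}: risk = {risk_factor(name):.2f}")
```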