Vol. 8, No. 1
ABSTRACT Software development has always been characterized by metrics. One of the greatest challenges for software developers lies in predicting the development effort for a software system, which depends on developer abilities, size, complexity and other factors. Several algorithmic cost estimation models, such as Boehm's COCOMO, Albrecht's Function Point Analysis, Putnam's SLIM and ESTIMACS, are available, but every model has its own pros and cons in estimating development cost and effort. The most common reason is that the project data available in the initial stages of a project is often incomplete, inconsistent, uncertain and unclear. In this paper, a Bayesian probabilistic model is explored to overcome the problems of uncertainty and imprecision, resulting in an improved process of software development effort estimation. The paper considers a software estimation approach using six key cost drivers of the COCOMO II model; the selected cost drivers are the inputs to the system. The concept of a Fuzzy Bayesian Belief Network (FBBN) is introduced to improve the accuracy of the estimation. Results show that the values of MMRE (Mean Magnitude of Relative Error) and PRED obtained by means of the FBBN are much better than the MMRE and PRED of Fuzzy COCOMO II models. The results were validated on the NASA-93 dem COCOMO II dataset.
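The MMRE and PRED accuracy measures mentioned in the abstract have standard definitions, sketched below; the sample effort values are hypothetical, not taken from the NASA-93 dataset.

```python
# Standard accuracy measures for effort-estimation models:
#   MRE      = |actual - estimated| / actual, per project
#   MMRE     = mean MRE over all projects
#   PRED(25) = fraction of projects with MRE <= 0.25
# The effort values below are hypothetical, for illustration only.

def mre(actual, estimated):
    """Magnitude of Relative Error for a single project."""
    return abs(actual - estimated) / actual

def mmre(actuals, estimates):
    """Mean Magnitude of Relative Error over all projects."""
    errors = [mre(a, e) for a, e in zip(actuals, estimates)]
    return sum(errors) / len(errors)

def pred(actuals, estimates, level=0.25):
    """PRED(25): fraction of projects estimated within 25% of actual."""
    hits = sum(1 for a, e in zip(actuals, estimates) if mre(a, e) <= level)
    return hits / len(actuals)

actual_effort = [120.0, 60.0, 300.0, 45.0]     # person-months (hypothetical)
estimated_effort = [110.0, 75.0, 310.0, 90.0]

print(round(mmre(actual_effort, estimated_effort), 3))   # lower is better
print(pred(actual_effort, estimated_effort))             # higher is better
```

A lower MMRE and a higher PRED(25) indicate a more accurate estimation model, which is the comparison criterion the abstract applies to FBBN versus Fuzzy COCOMO II.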
ABSTRACT Embedded SQL queries have become an integral part of modern web applications, and many SQL complexity metrics have been proposed, but the validation of these metrics remains an open question. In this study we validate SQL complexity metrics for estimating adaptive maintenance effort (DLOC). We find that, unlike classic complexity characteristics, SQL complexity metrics retain a good correlation with DLOC as the minimum function length grows, for small and medium source code changes. We propose a linear regression model for estimating the degree of mixing between application business logic and SQL query compilation logic (SQL Mix Complexity). The SELECT operator count, in combination with CCN and Halstead's Difficulty, correlates well with DLOC for methods longer than 2000 symbols, for small and medium source code changes.
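The kind of metric validation the abstract describes reduces to correlating a metric with DLOC and fitting a simple regression. Below is a minimal sketch of that procedure for a single metric (SELECT operator count); all data points are invented for illustration and are not the study's data.

```python
# Sketch of validating a SQL complexity metric against adaptive
# maintenance effort (DLOC = changed lines of code): compute the
# Pearson correlation and fit a one-variable least-squares line.
# All data points below are hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def fit_line(xs, ys):
    """Least-squares slope and intercept for DLOC ~ a * metric + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

select_count = [1, 2, 3, 5, 8]     # SELECT operators per method (hypothetical)
dloc = [12, 20, 31, 52, 80]        # changed lines of code (hypothetical)

r = pearson(select_count, dloc)
a, b = fit_line(select_count, dloc)
print(round(r, 3), round(a, 2), round(b, 2))
```

A multi-variable version of the same fit, adding CCN and Halstead's Difficulty as predictors, corresponds to the regression model the abstract proposes.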
ABSTRACT The main goal of this paper is to demonstrate how an ontology can be used to support the full path from requirements to code in model-driven development. The proposed approach first demonstrates how an ontology can be utilized in the initial step of requirements specification in an intuitive way, and then how the requirements generated under ontology guidance can be used as the basis for transformations to code via Analysis, Detailed Design, and Platform-Specific Models. The requirements are generated in the form of features suitable for Feature Driven Development (FDD). Models are generated according to a particular ontological style, including the selection of appropriate design patterns for these models.
ABSTRACT Software reliability plays an integral part in the software development process. The growth in the use of IT in today's interconnected world demands the production of reliable software systems, considering the potential loss and damage due to failure. Several software reliability growth models exist to predict the reliability of software systems. The non-homogeneous Poisson process (NHPP) is a probabilistic model for determining software reliability. The application of order statistics is a relatively new technique for estimating software reliability from time domain data based on an NHPP with a distribution model. This paper presents the Burr Type III model as a software reliability growth model and derives expressions for an efficient reliability function using order statistics. The parameters are estimated using the maximum likelihood (ML) estimation procedure. Live data sets are analysed and the results are presented.
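As background for the model named in the abstract, the following is a sketch of the standard relations, assuming the two-parameter Burr Type III distribution is used as the failure-time distribution of an NHPP growth model with expected total failure count $a$; the paper's own derivation via order statistics may differ in detail.

```latex
% Burr Type III cumulative distribution function (shape parameters c, k > 0):
F(t) = \left(1 + t^{-c}\right)^{-k}, \qquad t > 0.

% NHPP mean value function with expected total failures a:
m(t) = a\,F(t) = a\left(1 + t^{-c}\right)^{-k}.

% Software reliability: probability of no failure in (s, s + x]:
R(x \mid s) = \exp\!\bigl[-\bigl(m(s + x) - m(s)\bigr)\bigr].
```

The parameters $a$, $c$ and $k$ are then estimated by maximum likelihood from the observed failure times, as the abstract states.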