Vol. 2, No. 1
ABSTRACT Agile software development methodologies have introduced best practices into software development. However, these practices must be adopted and monitored continuously to maximize their benefits. Our research focuses on adaptability, suitability, and a software maturity model, the Agile Maturity Model (AMM), for agile software development environments. This paper introduces a process of adaptability assessment, suitability assessment, and an improvement framework for assessing and improving agile best practices. We have also developed a web-based automated tool that supports the assessment of adaptability, suitability, and improvement of agile practices.
ABSTRACT Software reliability models are very useful for estimating the probability of software failure over time. Several software reliability growth models (SRGMs) have been proposed; however, none has proven to perform well across different project characteristics. The variability in predictive accuracy appears to stem mainly from the unrealistic assumptions embedded in each model, and no single model has yet been shown to be sufficiently trustworthy in most or all applications. Genetic Algorithms (GAs) offer a way to overcome these modeling uncertainties by combining multiple models, using multiple objective functions, to achieve the best generalization performance. Because the objectives conflict, no single design can be considered best with respect to all of them. In this paper, experiments were conducted to test these hypotheses, the predictive capability of an ensemble of models optimized with a multi-objective GA was evaluated, and the results were compared with those of traditional models.
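For background (this sketch is illustrative and is not taken from the paper): classic SRGMs such as Goel-Okumoto and the delayed S-shaped model predict the expected number of faults detected by time t, and an ensemble combines several such models with weights that a multi-objective GA could search over. The model parameters and weights below are arbitrary assumptions:

```python
import math

# Mean value functions of two classic SRGMs:
#   Goel-Okumoto:      m(t) = a * (1 - exp(-b*t))
#   Delayed S-shaped:  m(t) = a * (1 - (1 + b*t) * exp(-b*t))
# a = total expected faults, b = fault detection rate (values assumed).

def goel_okumoto(t, a=100.0, b=0.05):
    return a * (1 - math.exp(-b * t))

def delayed_s_shaped(t, a=100.0, b=0.05):
    return a * (1 - (1 + b * t) * math.exp(-b * t))

def ensemble(t, weights=(0.6, 0.4)):
    """Weighted combination of model predictions. In the setting the
    abstract describes, a multi-objective GA would search for weights
    that trade off conflicting error objectives (e.g., fitting error
    vs. prediction error); the weights here are placeholders."""
    w1, w2 = weights
    return w1 * goel_okumoto(t) + w2 * delayed_s_shaped(t)

print(ensemble(50.0))
```

A GA would evolve the weight vector (and possibly the model parameters) against several accuracy objectives at once, yielding a Pareto front of ensembles rather than a single best design.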
ABSTRACT Software maintenance is an integral part of the software life cycle and constitutes the largest fraction of its total cost: around 50-80 percent of the total lifecycle cost is consumed by maintenance of evolving systems. Systems with poor maintainability are thus difficult to modify and costly to maintain. This difficulty arises from the impact on the system components in which new requirements/goals will be implemented; these new goals result in the modification of existing components and the creation of new ones. In this paper, we present the foundation for a new Hybrid-Based Maintainability Impact Analysis (HBMIA) methodology for assessing the impact of the new goals selected for implementation on new and existing system components. HBMIA uses not only the system's history but also benefits from experts' experience, balancing historical data against expert data according to the organization's maturity and the experts' experience with the system components. A case study is performed to demonstrate the added value of the proposed HBMIA.
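To illustrate the kind of balancing the abstract mentions (this is a generic sketch, not the actual HBMIA weighting scheme, which is defined in the paper): a single maturity-driven weight can blend a history-based impact estimate with an expert's estimate.

```python
def impact_score(historical, expert, maturity):
    """Blend a historical change-impact estimate with expert judgment.

    historical, expert: impact estimates in [0, 1] (assumed scale).
    maturity: organizational maturity in [0, 1]; a more mature
    organization has more trustworthy historical data, so history
    receives more weight. The linear blend is illustrative only.
    """
    return maturity * historical + (1 - maturity) * expert

print(impact_score(0.8, 0.4, maturity=0.75))
```

With high maturity the score tracks the historical data; with low maturity it leans on the expert's estimate instead.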
ABSTRACT Information Systems (IS) research is based upon data collected by means of questionnaires, interviews, and observation. Inexperienced researchers find questionnaires and interviews attractive as data-gathering methods, but many have discovered that drafting a good questionnaire is not simple, because the answers obtained are often superficial and negatively affect research quality. This paper explores the Repertory Grid technique as an alternative method for gathering meaningful data. A hybrid model combining the questionnaire and Repertory Grid techniques is also presented: the model uses a questionnaire as the primary data-gathering technique, and the acquired data are then automatically transferred to the Repertory Grid. The proposed model can be considered an improvement over the Repertory Grid technique because it solves several of its problems, such as the inability to name all the scales in the grid, the size limitation of the grid (removed in the current model), and the requirement that the expert use all elements and constructs in the grid without being able to omit any.
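For readers unfamiliar with the technique (this sketch is generic background, not the paper's hybrid model): a repertory grid rates a set of elements against bipolar constructs on a numeric scale. All names and ratings below are invented for illustration:

```python
# Minimal repertory grid: elements (the things being compared) are rated
# on bipolar constructs, here on a 1-5 scale where 1 is the left pole
# and 5 is the right pole.
grid = {
    "elements": ["System A", "System B", "System C"],
    "constructs": [
        ("easy to use", "hard to use"),
        ("reliable", "unreliable"),
    ],
    # ratings[construct_index][element_index]
    "ratings": [
        [2, 4, 3],
        [1, 5, 2],
    ],
}

def element_profile(grid, name):
    """Return one element's ratings across all constructs."""
    i = grid["elements"].index(name)
    return [row[i] for row in grid["ratings"]]

print(element_profile(grid, "System B"))  # ratings of System B per construct
```

Analyses of such grids typically compare element profiles or cluster constructs; the hybrid model described in the abstract would populate a grid like this automatically from questionnaire answers.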
ABSTRACT Frameworks provide large-scale reuse by supplying a skeleton structure for similar applications. However, the generality a framework may have makes it fairly complex, hard to understand, and thus hard to reuse. This paper defines and analyzes two types of frameworks, tight and loose, and then proposes a framework development methodology that leads to loose frameworks. We try to answer the question "which one (tight or loose) has what benefits over the other?" by drawing on our experience developing loose and tight frameworks for application sets in the Environment for Unit Testing (EUT) domain.