Lesson Archives

  1. ABSTRACT Information Systems (IS) research is based upon data collected by means of questionnaires, interviews, and observation. Inexperienced researchers find questionnaires and interviews attractive as data gathering methods. Many researchers have discovered, however, that drafting a good questionnaire is not simple: the answers it yields are often superficial and impact negatively on research quality. This paper explores the Repertory Grid technique as an alternative method for gathering meaningful data. A hybrid model combining the questionnaire technique with the Repertory Grid technique is also presented. The model uses a questionnaire as the primary data gathering technique; the acquired data are then automatically transferred to the Repertory Grid. The proposed model improves on the Repertory Grid because it solves several of its problems: the inability to name all the scales in the grid, the size limitation of the grid (which is removed in the current model), and the requirement that the expert use all elements and constructs in the grid without the option to leave some of them out.
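
A Repertory Grid rates a set of elements against bipolar constructs. The following is a minimal sketch of that structure only; the element and construct names are illustrative, and the hybrid questionnaire-transfer step described in the abstract is not modeled here.

```python
class RepertoryGrid:
    """Rate each element against each bipolar construct on a 1-5 scale."""

    def __init__(self):
        self.elements = []    # the things being compared (e.g. systems, tools)
        self.constructs = []  # (left pole, right pole) pairs elicited from the expert
        self.ratings = {}     # (element, construct index) -> score

    def add_element(self, name):
        self.elements.append(name)

    def add_construct(self, left_pole, right_pole):
        self.constructs.append((left_pole, right_pole))

    def rate(self, element, construct_idx, score):
        assert 1 <= score <= 5, "ratings use a 1-5 scale"
        self.ratings[(element, construct_idx)] = score

# Hypothetical example: two systems rated on one construct.
grid = RepertoryGrid()
grid.add_element("System A")
grid.add_element("System B")
grid.add_construct("easy to use", "hard to use")
grid.rate("System A", 0, 2)
grid.rate("System B", 0, 4)
```

In the hybrid model the abstract describes, questionnaire answers would populate such a grid automatically rather than being entered by hand.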

  2. ABSTRACT Software maintenance has become an integral part of the software life cycle and constitutes the largest fraction of its total cost: around 50-80 percent of the total lifecycle cost is consumed by maintenance of evolving systems. Thus, systems with poor maintainability are difficult to modify and more costly to maintain. This difficulty arises from the impact on the system components in which new requirements/goals will be implemented; these new goals result in the modification of existing components and the creation of new ones. In this paper, we present the foundation for a new Hybrid-Based Maintainability Impact Analysis (HBMIA) methodology for assessing the impact of the new goals selected for implementation on the system's new and existing components. HBMIA not only uses the system's history but also benefits from expert experience. HBMIA balances the system's historical data against expert data based on the organization's maturity and the experts' experience with the system components. A case study is performed to demonstrate the added value of the proposed HBMIA.
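
The balancing idea can be pictured as a weighted blend of a history-based impact score and an expert estimate. The linear weighting below is an assumption for illustration only; the paper's actual HBMIA formula may differ.

```python
def combined_impact(historical_score, expert_score, maturity):
    """Blend a history-based impact score with an expert estimate.

    maturity in [0, 1] is an assumed proxy for the organization's maturity:
    organizations with rich, reliable change history lean on historical data,
    while immature ones lean on expert judgment.
    """
    assert 0.0 <= maturity <= 1.0
    return maturity * historical_score + (1 - maturity) * expert_score

# A mature organization trusts its change history more than opinion.
print(combined_impact(historical_score=0.8, expert_score=0.4, maturity=0.75))
```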

  3. ABSTRACT Software reliability models are very useful for estimating the probability of software failure over time. Several software reliability growth models (SRGMs) have been proposed; however, none of them has proven to perform well across different project characteristics. The variability in predictive accuracy seems mainly due to the unrealistic assumptions in each model, and no single model available has yet been shown to be sufficiently trustworthy in most or all applications. Genetic Algorithms can provide a solution by overcoming the uncertainties in the modeling. This depends on combining multiple models, using multiple objective functions, to achieve the best generalization performance; the objectives are conflicting, and no design exists that can be considered best with respect to all objectives. In this paper, experiments were conducted to confirm these hypotheses. The predictive capability of the ensemble of models optimized using a multi-objective GA was then evaluated. Finally, the results were compared with traditional models.
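
As a sketch of the ensemble idea, two classic SRGMs (the Goel-Okumoto exponential model and the Yamada delayed S-shaped model, both standard in the literature) can be combined with weights of the kind a multi-objective GA might evolve. The parameter values and weights below are illustrative assumptions, not fitted results from the paper.

```python
import math

def goel_okumoto(t, a, b):
    """Goel-Okumoto expected cumulative failures: m(t) = a(1 - e^(-bt))."""
    return a * (1 - math.exp(-b * t))

def delayed_s_shaped(t, a, b):
    """Yamada delayed S-shaped model: m(t) = a(1 - (1 + bt)e^(-bt))."""
    return a * (1 - (1 + b * t) * math.exp(-b * t))

def ensemble(t, w1, w2):
    """Weighted combination of the two models.

    A multi-objective GA would search over (w1, w2, model parameters)
    against conflicting objectives, e.g. fit error vs. predictive stability.
    """
    return (w1 * goel_okumoto(t, a=100, b=0.05)
            + w2 * delayed_s_shaped(t, a=100, b=0.05))

print(ensemble(10.0, 0.6, 0.4))
```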

  4. ABSTRACT Agile software development methodologies have introduced best practices into software development. However, those practices must be adopted and monitored continuously to maximize their benefits. Our research has focused on adaptability, suitability, and a software maturity model, the Agile Maturity Model (AMM), for agile software development environments. This paper introduces a process of adaptability assessment, suitability assessment, and an improvement framework for assessing and improving agile best practices. We have also developed a web-based automated tool to support the assessment of adaptability, suitability, and improvement for agile practices.

  5. ABSTRACT Great interest is currently paid to the web services paradigm. One of the most important problems related to it is the automatic composition of web services. Several frameworks have been proposed to achieve this novel goal; the most recent and richest framework (model) is the Colombo model. However, even for experienced developers, working with Colombo formalisms is low-level, very complex, and time-consuming. We propose to use UML (Unified Modeling Language) to model services and service composition in Colombo. By using UML, the web service developer deals with high-level graphical UML models, avoiding the difficulties of working with the low-level and complex details of Colombo. To be able to use Colombo's automatic composition algorithm, we propose to represent Colombo by a set of related XML document types that can form the basis of a Colombo language. Moreover, we propose transformation rules between UML and the proposed Colombo XML documents. Colombo's automatic composition algorithm can then be applied to build a composite service that satisfies a given user request. A prototypical implementation of the proposed approach was developed using Visual Paradigm for UML.

  6. ABSTRACT This paper discusses the lessons learned from building a model for effort estimation. The model focuses on minimizing effort variance by enhancing the adjustments made to functional sizing techniques. Special focus was placed on the adjustment factors that reflect the application's complexity and the actual environment in which the application will be implemented. We introduced the idea of grouping the adjustment factors to simplify the adjustment process and to ensure more consistency in the adjustments. We have also studied, in depth, how the quality of requirements impacts effort estimation, and we introduced requirements quality as an adjustment factor in our proposed model. Our study concentrates on Egyptian companies, with the objective of enhancing effort estimation in these companies. We have learned a number of lessons that are discussed in the paper, and from them we derive a group of recommendations for other researchers aiming to build similar estimation models.
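
To make the adjustment idea concrete: the standard IFPUG value adjustment factor is 0.65 + 0.01 times the sum of fourteen general system characteristic degrees (each 0-5). The requirements-quality penalty added below, and its linear form, are illustrative assumptions, not the paper's actual model.

```python
def value_adjustment_factor(degrees):
    """IFPUG-style VAF from 14 general system characteristics, each rated 0-5."""
    assert len(degrees) == 14 and all(0 <= d <= 5 for d in degrees)
    return 0.65 + 0.01 * sum(degrees)

def adjusted_size(unadjusted_fp, degrees, req_quality):
    """Adjusted size = UFP * VAF, further scaled by requirements quality.

    req_quality in (0, 1]: poorer requirements inflate the estimate
    (a linear penalty, assumed here purely for illustration).
    """
    assert 0 < req_quality <= 1
    return unadjusted_fp * value_adjustment_factor(degrees) * (2 - req_quality)

# Average complexity (all degrees = 3) with perfect requirements:
print(adjusted_size(100, [3] * 14, req_quality=1.0))
```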

  7. ABSTRACT Defect management is one of the major issues prevailing in the software industry. The main objective of defect management is to achieve complete customer satisfaction, and one of the most important steps towards total customer satisfaction is the generation of nearly zero-defect products. Both industry and research are continuously working in this direction. This work is an empirical analysis of data from several leading software companies. The paper shows the defect distribution in small, medium, and large categories of projects. It reveals the existence of various types of defect patterns spanning the major phases of software development. It further analyses the most common root causes and the percentages in which they contribute to the occurrence of the defect patterns. Awareness of defect patterns enables the capture of the majority of defects close to the defect inception point. This knowledge reduces defect injection and helps the software industry develop nearly zero-defect products.

  8. ABSTRACT We present a comprehensive test case generation technique from UML models. We use the features of the UML 2.0 sequence diagram, including conditions, iterations, asynchronous messages, and concurrent components. In our approach, test cases are derived from analysis artifacts such as use cases, their corresponding sequence diagrams, and the constraints specified across all these artifacts. We construct a Use case Dependency Graph (UDG) from the use case diagram and a Concurrent Control Flow Graph (CCFG) from the corresponding sequence diagrams for test sequence generation. We focus testing on sequences of messages among the objects of use case scenarios. Our testing strategy derives test cases using the full predicate coverage criterion. The proposed test case generation technique can be used for integration and system testing, accommodating the object message and condition information associated with the use case scenarios. The test cases thus generated are suitable for detecting synchronization faults, use case and message dependency faults, object interaction faults, and operational faults. Finally, we analyse and compare our approach with existing approaches based on other coverage criteria through an example.
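
The core of test sequence generation over a graph like the CCFG is path enumeration. The sketch below enumerates message-sequence paths through a small hypothetical control flow graph; the graph and node names are illustrative and do not reproduce the paper's UDG/CCFG construction or its predicate coverage analysis.

```python
def all_paths(graph, start, end, path=None):
    """Depth-first enumeration of all simple paths from start to end."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # keep paths simple (no revisiting a node)
            paths.extend(all_paths(graph, nxt, end, path))
    return paths

# A branch in a sequence diagram yields two message paths to cover:
cfg = {"start": ["check"], "check": ["accept", "reject"],
       "accept": ["end"], "reject": ["end"]}
for p in all_paths(cfg, "start", "end"):
    print(" -> ".join(p))
```

Each enumerated path corresponds to one test sequence; full predicate coverage would additionally vary the truth values of the conditions guarding each branch.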

  9. ABSTRACT Object relational database (ORDB) management systems represent a set of interrelated objects using reference and collection attributes. The static class-level design metrics for ORDB design dictate the structural complexity of the object-relational schema, whereas the dynamic nature of the ORDB demands object-level assessment using runtime metrics, which help determine the functional complexity of the database. Runtime coupling and cohesion metrics are deemed appropriate for measuring object-level behavioral complexity. In this work we use runtime cohesion and coupling metrics to study the dynamic behavior of ORDBs by measuring the behavioral complexity of their objects. Experiments on sample ORDB schemas are conducted using statistical analysis and correlation clustering techniques to assess the dynamic behavior of the objects in a real-time relational environment. The results indicate that object behavior in an ORDB directly influences database performance. Future work pertaining to this research forms the concluding note.
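
A runtime (dynamic) coupling measure is typically derived from an execution trace of inter-object messages. The sketch below counts messages per object pair; the trace format and the object names are assumptions, and the paper's exact metric definitions may differ.

```python
from collections import Counter

def runtime_coupling(trace):
    """Count messages exchanged between distinct object pairs at runtime.

    trace: list of (caller_object, callee_object) message events observed
    during execution. Self-messages are excluded from coupling.
    """
    return Counter((src, dst) for src, dst in trace if src != dst)

# Hypothetical trace: an order object collaborating with related objects.
trace = [("order", "customer"), ("order", "item"),
         ("order", "item"), ("item", "stock")]
coupling = runtime_coupling(trace)
print(coupling[("order", "item")])   # -> 2
```

Higher counts indicate pairs whose runtime interaction, per the abstract's findings, is likely to influence database performance.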

  10. ABSTRACT This research is concerned with the problem of software flexibility. Specifically, it addresses the problem of managing change in workflow management systems. A large change in business requirements naturally leads to a large change in the supporting software. However, a small change in business requirements may also lead to a huge change in the supporting software; this is a result of software systems being built with no consideration of flexibility. The suggested solution is based on separating activities from execution rules. Activities are implemented as a set of loosely coupled services, and services can be replaced when necessary. The execution sequence may be changed without the need to rewrite or reconstruct a given workflow. The work presented here is based upon ongoing research into software application flexibility, focusing on building flexible workflow engines.
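
A minimal sketch of the separation the abstract describes: activities are replaceable services, and the execution sequence is held as data, so a rule change edits a list rather than code. The service names and the order-processing example are illustrative, not from the paper.

```python
# Activities as independent, replaceable services.
def receive(order):
    order["status"] = "received"
    return order

def approve(order):
    order["status"] = "approved"
    return order

def ship(order):
    order["status"] = "shipped"
    return order

# Registry of loosely coupled services; any entry can be swapped out.
SERVICES = {"receive": receive, "approve": approve, "ship": ship}

def run_workflow(sequence, order):
    """Execute services in the order given by the (editable) rule list."""
    for step in sequence:
        order = SERVICES[step](order)
    return order

# Changing the business rule means editing data, not rewriting the workflow:
rules = ["receive", "approve", "ship"]
print(run_workflow(rules, {"id": 1})["status"])   # -> shipped
```

Reordering or shortening `rules` changes the workflow's behavior with no change to the engine or the services themselves.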