Lesson Archives

  1. ABSTRACT Performance is a persistent quality attribute of any software system. Software performance engineering (SPE) encompasses efforts to describe and improve the performance of systems in the early stages of development. Multi-Agent Systems (MAS) are composed of autonomous entities called agents, which cooperate to solve complex distributed problems. However complex the system, its quality is an important parameter to be addressed. In this paper we propose an algorithm for predicting the performance of software systems using an Artificial Neural Network (ANN) approach. The algorithm is a new attempt at performance engineering of MAS. We have used ANN models for size estimation of the software (the representative workload), which is an important parameter for assessing performance in the early stages of software development. Another significant contribution is the assessment of performance based on data gathered during the feasibility study. The ANN models are trained and validated on different data sets. The algorithm is validated against the static properties of the RETSINA architecture. A case study on MAS is considered and the results are obtained using the validated ANN model.
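The core idea above, training a neural model to map an early-stage feature to a size estimate, can be sketched at its smallest scale as a single neuron fit by gradient descent. This is only an illustrative reduction of the approach: the paper trains full ANN models on feasibility-study data, and the feature and sample values below are hypothetical, not taken from the paper.

```python
# Minimal sketch: one neuron (y = w*x + b) trained by gradient descent
# to estimate software size from an early-stage workload feature.

def train_neuron(xs, ys, lr=0.01, epochs=5000):
    """Fit y ~ w*x + b by minimizing mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Hypothetical training data: an early workload feature (e.g. number of
# agent interactions) versus observed size in KLOC.
xs = [2, 4, 6, 8, 10]
ys = [5.1, 9.8, 15.2, 19.9, 25.0]

w, b = train_neuron(xs, ys)
predicted_size = w * 12 + b   # size estimate for a new system
```

A real model would use more features, hidden layers, and a held-out validation set, as the abstract's "trained and validated for different data sets" implies.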

  2. ABSTRACT Software cost estimation is the process of predicting the amount of time (effort) required to build a software system. The primary reason for cost estimation is to enable the client or the developer to perform a cost-benefit analysis. Effort estimates are expressed in person-months, which can be translated into actual dollar cost. The accuracy of the estimate depends on the amount of accurate information available about the final product: a specification with uncertainty represents a range of possible final products, not one precisely defined product. The inputs for effort estimation are the size of the project and cost driver parameters. A number of models have been proposed to relate software size to effort, but no model predicts effort consistently and effectively, and accurate software effort estimation remains a challenge in the software industry. In this paper a Particle Swarm Optimization (PSO) technique is proposed which operates on data sets clustered using the K-means algorithm. PSO is employed to generate the parameters of the COCOMO model for each cluster of data values. The clusters and effort parameters are then used to train a neural network with the back-propagation technique for classification of data. The model has been tested on the COCOMO 81 dataset, and the results have been compared with the standard COCOMO model as well as a neuro-fuzzy model. It is concluded from the results that neural networks, with efficient tuning of parameters by PSO operating on clusters, generate better results and can therefore function efficiently on even larger data sets.
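The PSO-per-cluster step can be sketched as follows: the basic COCOMO model predicts effort as a*(KLOC^b), and a small swarm searches for (a, b) minimizing the mean magnitude of relative error (MMRE) on one cluster of historical points. This is a minimal sketch, not the paper's implementation; the data points below are hypothetical (generated near the standard organic-mode parameters a=2.4, b=1.05), and the real work clusters the COCOMO 81 data with K-means first and feeds the results to a back-propagation network.

```python
import random

# Basic COCOMO effort model: effort = a * (KLOC ** b) person-months.
def cocomo_effort(kloc, a, b):
    return a * kloc ** b

def pso_tune(data, swarm=20, iters=200, seed=1):
    """Minimal PSO over (a, b) minimizing MMRE on one cluster of
    (size, effort) points. Inertia/acceleration constants are the
    common textbook choices, not values from the paper."""
    rng = random.Random(seed)
    def mmre(p):
        a, b = p
        return sum(abs(cocomo_effort(k, a, b) - e) / e for k, e in data) / len(data)
    pos = [[rng.uniform(1, 4), rng.uniform(0.8, 1.3)] for _ in range(swarm)]
    vel = [[0.0, 0.0] for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=mmre)
    for _ in range(iters):
        for i in range(swarm):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if mmre(pos[i]) < mmre(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=mmre)
    return gbest, mmre(gbest)

# Hypothetical cluster of (KLOC, actual effort) pairs.
cluster = [(10, 27.0), (25, 70.5), (50, 146.0), (100, 302.1)]
(a, b), err = pso_tune(cluster)
```

Running PSO separately per cluster is what lets the tuned (a, b) track local structure in the data that a single global COCOMO calibration would average away.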

  3. ABSTRACT Program slicing, originally introduced by Mark Weiser, is useful in program debugging, automatic parallelization, software maintenance, program integration, etc. It is a method for automatically decomposing programs: by analyzing their data flow and control flow, it reduces a program to a minimal form, called a "slice", which still produces the behavior of interest. Interprocedural slicing is the slicing of multi-procedure programs. In this paper a new method (the IP algorithm) is introduced for the interprocedural static slicing of structured programs. The most time-consuming part of interprocedural slicing methods is the computation of transitive dependences (i.e., summary edges) arising from procedure calls. Horwitz et al. [8] introduced an algorithm based on attribute grammars for computing summary edges. Reps et al. [7] and Istvan [9] defined improved algorithms for computing the summary edges representing interprocedural dependences at procedure calls. In this paper we discuss the improved interprocedural slicing (IP) algorithm, which is faster than the previous algorithms and takes less memory.
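The data-flow half of slicing can be shown on a toy scale: walk the program backwards from the slicing criterion, collecting every statement whose defined variable is still "relevant". This sketch handles only straight-line code with data dependences; real slicers (including the IP algorithm above) also track control dependences and, interprocedurally, the summary edges the abstract discusses.

```python
# Toy backward slicer. Each statement is (defined_var, used_vars).
def backward_slice(stmts, criterion_var):
    """Return indices of statements the criterion transitively depends on."""
    relevant = {criterion_var}
    in_slice = []
    for i in range(len(stmts) - 1, -1, -1):   # walk backwards
        defined, used = stmts[i]
        if defined in relevant:
            relevant.discard(defined)
            relevant.update(used)   # its inputs become relevant
            in_slice.append(i)
    return sorted(in_slice)

# Example program:
#   0: a = 1          1: b = 2
#   2: c = a + b      3: d = 42        4: result = c * a
program = [("a", set()), ("b", set()), ("c", {"a", "b"}),
           ("d", set()), ("result", {"c", "a"})]
slice_lines = backward_slice(program, "result")   # -> [0, 1, 2, 4]
```

Statement 3 (`d = 42`) is dropped because nothing in the slice reads `d`; that deletion-while-preserving-behavior is exactly what makes a slice a "minimal form".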

  4. ABSTRACT Software quality metrics are crucial in software development. As a result, many metrics suites have been proposed by researchers and practitioners to measure and assess software quality. Most of the existing suites capture a specific dimension of quality, such as class size, polymorphism, or coupling and cohesion, and produce results that hesitate between evaluating the entire system and evaluating each item individually; many of them were also developed before the emergence of UML. In addition, they focus on certain features of object-oriented metrics and ignore others. Our proposed work is a unified hybrid metrics suite (HCLKM) developed for evaluating the design of object-oriented software in the early UML design phases. It covers important aspects and dimensions of software quality and provides an early indicator of the correctness of the development track, at a time when changes are nearly costless. To assess the correctness of the results, a custom metrics-extraction tool was developed which operates on UML design models and the corresponding XMI files to ensure independence of the results. Two examples from different disciplines are used for illustration: a Laboratory Certification System and the MIDAS microarray tools.

  5. ABSTRACT Generally, in the software development process, security is added as an afterthought, which may not assure the complete security of the system. Security should instead be built in as part of the software development process. This is possible with quantified values for the parameters under assessment for assuring security; hence, we suggest quantified values for the security metrics as well. In this paper, a security analysis has been carried out for the ETL (Extraction, Transformation and Loading) process, and the security metrics are quantified. A framework for secure ETL processes is suggested, together with a methodology for assessing the security of the system in the early stages. The framework can be applied to any phase of the ETL process. We validate the framework first using the static model of the data extraction process in ETL, and later for the dynamic model using simulation. We consider two security metrics: the vulnerability index and the security index. A simulation tool, SeQuanT, which quantifies the security of a system in a general security context, has been developed and is discussed. We have also carried out a sensitivity analysis for the security metrics. The results show the level of security in the system and the number of security requirements to be considered to achieve the required level of security.

  6. ABSTRACT The high-level contribution of this paper is to illustrate the development of generic solution strategies to remove software security vulnerabilities that can be identified using automated source code analysis tools on software programs (developed in Java). We use the Source Code Analyzer and Audit Workbench automated tools, developed by HP Fortify Inc., for our testing purposes. We present case studies involving a file writer program embedded with features for password validation, and connection-oriented server socket programs, to discover, analyze the impact of, and remove the following software security vulnerabilities: (i) Hardcoded Password, (ii) Empty Password Initialization, (iii) Denial of Service, (iv) System Information Leak, (v) Unreleased Resource, (vi) Path Manipulation, and (vii) Resource Injection. For each of these vulnerabilities, we describe the potential risks of leaving them unattended in a software program, and provide solutions (including code snippets in Java) that can be incorporated to remove them. The proposed solutions are generic in nature and can be suitably modified to correct such vulnerabilities in software developed in any other programming language.
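The remediation pattern for the first two vulnerabilities listed (Hardcoded Password and Empty Password Initialization) is to keep the secret out of the source entirely and fail fast if it is missing. The paper's own snippets are in Java; the sketch below shows the same pattern in Python, and the variable name `DB_PASSWORD` is an illustrative choice, not one from the paper.

```python
import os

def get_db_password():
    """Fetch the secret from the environment instead of a source literal."""
    password = os.environ.get("DB_PASSWORD")
    if not password:
        # Refuse to fall back to a default literal -- that would
        # reintroduce the Hardcoded / Empty Password vulnerabilities.
        raise RuntimeError("DB_PASSWORD is not set")
    return password
```

Static analyzers flag hardcoded credentials precisely because a literal in source survives in version control and in every distributed binary, where it can neither be rotated nor revoked.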

  7. ABSTRACT Software testing is performed to validate that the software under test meets all requirements. With the increase in software development platforms, developers may commit errors which, if not exercised by appropriate test cases, can lead to false confidence in software testing. In this paper, we propose that building quality source code documentation can help in predicting such errors. To validate this proposal, we performed an initial study and found that if software is well documented, a tester can predict the set of errors developers are likely to commit and hence select better test cases that target those faults. From this study, it has been observed that proper code documentation can help in selecting appropriate test cases from the candidate test cases and can lead to more effective software testing.

  8. ABSTRACT Software vulnerability is a weakness that can be exploited to gain access to the code, making the software highly insecure. To make software secure, vulnerabilities must be identified and corrected. As identifying weaknesses manually in large programs is time-consuming, the process needs to be automated. This paper discusses a tool called SecCheck developed to identify vulnerabilities in Java code. The tool takes Java source files as input, stores each line in memory, and scans it to find vulnerabilities. A warning message is displayed when a vulnerability is found. The tool can detect critical software vulnerabilities not found by most other tools, and it also calculates the Degree of Insecurity, a metric defined in this paper. SecCheck has been used to calculate the Degree of Insecurity for two classes of programs: one written by experienced Java programmers and the other by students. The experimental results are discussed.
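The line-scanning approach described above can be sketched as follows: load the source, match each line against known risky patterns, and emit warnings. Both the pattern list and the ratio computed here are illustrative stand-ins; SecCheck's actual checks and its Degree of Insecurity metric are defined in the paper, not reproduced in the abstract.

```python
# Hypothetical risky-pattern table for Java source (illustrative only).
RISKY_PATTERNS = {
    "Runtime.getRuntime().exec": "possible command injection",
    "new Random(": "insecure random number generator",
    "printStackTrace": "system information leak",
}

def scan(source_lines):
    """Return (line_number, message) warnings for each flagged line."""
    warnings = []
    for lineno, line in enumerate(source_lines, start=1):
        for pattern, message in RISKY_PATTERNS.items():
            if pattern in line:
                warnings.append((lineno, message))
    return warnings

def insecurity_ratio(source_lines):
    """Illustrative stand-in metric: warnings per line of code
    (NOT the paper's Degree of Insecurity definition)."""
    return len(scan(source_lines)) / max(len(source_lines), 1)
```

Naive substring matching like this is cheap but produces false positives (e.g. a pattern inside a comment); production scanners work on tokens or abstract syntax trees instead.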

  9. ABSTRACT This paper analyzes the change history of various software systems to understand their evolutionary behavior with respect to the types of changes performed over a period of time. The main objectives of this research are: (a) What types of changes are most likely to occur in a software system during its evolution? (b) Is there any pattern in the types of changes performed over time in a system? An automated keyword-based categorization technique is applied to the textual descriptions of the commit records of the software systems to categorize change activities into types such as Adaptive, Corrective, Perfective, Enhancement, and Preventive. The study finds that corrective changes are the most frequent and preventive changes the least frequent in the software systems analyzed here.
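The keyword-based categorization step can be sketched directly: map each maintenance type to a word list, assign a commit to the first type whose keywords appear in its message, and tally the counts. The keyword lists below are illustrative guesses; the paper's actual keyword sets are not given in the abstract.

```python
from collections import Counter

# Hypothetical keyword lists per maintenance type (illustrative only).
KEYWORDS = {
    "Corrective": ["fix", "bug", "error", "crash"],
    "Adaptive": ["port", "platform", "migrate", "upgrade"],
    "Perfective": ["performance", "optimize", "speed"],
    "Enhancement": ["add", "feature", "implement", "support"],
    "Preventive": ["refactor", "cleanup", "restructure"],
}

def categorize(message):
    """Assign a commit message to the first matching maintenance type."""
    words_in_msg = set(message.lower().split())
    for category, words in KEYWORDS.items():
        if words_in_msg & set(words):     # whole-word match, not substring
            return category
    return "Unclassified"

log = ["Fix crash on empty input", "Add support for UTF-8",
       "Refactor parser module", "fix off-by-one error"]
counts = Counter(categorize(m) for m in log)   # Corrective: 2, ...
```

Matching whole words rather than substrings matters here: a substring match would file "Add support ..." under Adaptive because "support" contains "port".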

  10. ABSTRACT Various factors affect the impact of agile practices on the continuous delivery of software projects. This is a major reason why projects perform differently, some failing and some succeeding, when they implement agile practices in various environments. This is not helped by the fact that many projects work within a limited budget while project plans also change, putting them under pressure to meet deadlines when they fall behind in their planned work. This study investigates the impact of pair programming (PP), customer involvement, QA ability, pair testing (PT) and test-driven development (TDD) on the pre-release and post-release quality of software projects, using system dynamics, in an environment blighted by schedule pressure. The model is validated using results from a completed medium-sized software project. Statistical results suggest that the impact of PP on the pre-release quality of the software is insignificant, while TDD and customer involvement both have significant effects on pre-release quality. Results also show that both PT and QA ability have a significant impact on the post-release quality of the software.