Vol 6 No.1

Papers

  1. ABSTRACT The high-level contribution of this paper is to illustrate the development of generic solution strategies to remove software security vulnerabilities that can be identified by automated source code analysis tools in software programs (developed in Java). We use the Source Code Analyzer and Audit Workbench automated tools, developed by HP Fortify Inc., for our testing purposes. We present case studies involving a file writer program embedded with features for password validation, and connection-oriented server socket programs, to discover, analyze the impact of, and remove the following software security vulnerabilities: (i) Hardcoded Password, (ii) Empty Password Initialization, (iii) Denial of Service, (iv) System Information Leak, (v) Unreleased Resource, (vi) Path Manipulation, and (vii) Resource Injection. For each of these vulnerabilities, we describe the potential risks of leaving them unattended in a software program and provide the solutions (including code snippets in Java) that can be incorporated to remove them. The proposed solutions are generic in nature and can be suitably modified to correct such vulnerabilities in software developed in any other programming language.
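     As a hedged illustration of the kind of remediation described above (the paper's own code snippets are not reproduced here), the Java sketch below shows one common way to address the Hardcoded Password and Unreleased Resource vulnerabilities: the credential is read from an external source instead of a string literal, and the file handle is released with try-with-resources. The environment variable name APP_PASSWORD and the output file name are hypothetical.

         import java.io.FileWriter;
         import java.io.IOException;

         public class SafeFileWriterDemo {
             public static void main(String[] args) throws IOException {
                 // Avoid Hardcoded Password: read the credential from the environment
                 // (or a protected configuration store) rather than a string literal.
                 String password = System.getenv("APP_PASSWORD"); // hypothetical variable name
                 if (password == null || password.isEmpty()) {
                     throw new IllegalStateException("Credential not configured");
                 }

                 // Avoid Unreleased Resource: try-with-resources closes the writer
                 // even if an exception is thrown while writing.
                 try (FileWriter writer = new FileWriter("output.txt")) {
                     writer.write("password validated\n");
                 }
                 // To avoid a System Information Leak, callers should log a generic
                 // message instead of printing stack traces to end users.
             }
         }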

  2. ABSTRACT Security is generally added as an afterthought in the software development process, which may not assure the complete security of the system. Security should instead be built in as part of the software development process. This is possible only with quantified values for the parameters under assessment for assuring security; hence, we suggest quantifying the security metrics as well. In this paper, a security analysis is carried out for the ETL (Extraction, Transformation and Loading) process and the security metrics are quantified. A framework for secure ETL processes is proposed, along with a methodology for assessing the security of the system in the early stages; the framework can be applied to any phase of the ETL process. We validate the framework first using the static model of the data extraction process in ETL and later for the dynamic model using simulation. We consider two security metrics: the vulnerability index and the security index. A simulation tool, SeQuanT, which quantifies the security of a system in a general security context, has been developed and is discussed. We have also carried out a sensitivity analysis for the security metrics. The results show the level of security in the system and the number of security requirements to be considered to achieve the required level of security.
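     The abstract does not state the formulas behind the two metrics, so the Java sketch below only illustrates an assumed, ratio-based reading of them: the vulnerability index as the fraction of identified vulnerabilities left unmitigated, and the security index as its complement. The class and method names are hypothetical and are not taken from SeQuanT.

         // Illustrative only: assumed ratio-based definitions, not the paper's formulas.
         public class EtlSecurityMetrics {

             /** Fraction of identified vulnerabilities that remain unmitigated. */
             static double vulnerabilityIndex(int identified, int mitigated) {
                 if (identified == 0) {
                     return 0.0; // no known vulnerabilities in this phase
                 }
                 return (double) (identified - mitigated) / identified;
             }

             /** Complement of the vulnerability index, in [0, 1]. */
             static double securityIndex(int identified, int mitigated) {
                 return 1.0 - vulnerabilityIndex(identified, mitigated);
             }

             public static void main(String[] args) {
                 // Example: 10 vulnerabilities identified in the extraction phase, 7 mitigated.
                 System.out.println("Vulnerability index: " + vulnerabilityIndex(10, 7));
                 System.out.println("Security index: " + securityIndex(10, 7));
             }
         }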

  3. ABSTRACT Software quality metrics are crucial in software development. Consequently, many metrics suites have been proposed by researchers and practitioners to measure and assess software quality. Most existing suites capture a specific dimension of quality, such as class size, polymorphism, coupling, or cohesion, and their results waver between evaluating the entire system and evaluating each item individually; moreover, many of them were developed before the emergence of UML. They also focus on certain features of object-oriented metrics and ignore others. Our proposed work is a unified hybrid metrics suite (HCLKM) for evaluating the design of object-oriented software in the early UML design phases; it covers important aspects and dimensions of software quality and provides an early indicator of the correctness of the development track, at a time when changes are nearly costless. To assess the correctness of the results, a custom metrics extraction tool was developed that operates on UML design models and the corresponding XMI files to ensure the results are tool-independent. Two examples from different disciplines are used for illustration: a Laboratory Certification System and the MIDAS microarray tools.
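     As a rough sketch of how a metrics extraction tool can operate on exported XMI (this is not the HCLKM tool itself), the Java example below uses the standard XML parser to count the classes in a model as one input to a size-oriented design metric. The file name model.xmi is a placeholder, and the packagedElement/xmi:type names assume a UML 2.x / XMI 2.x export, which can differ between modeling tools.

         import java.io.File;
         import javax.xml.parsers.DocumentBuilderFactory;
         import org.w3c.dom.Document;
         import org.w3c.dom.Element;
         import org.w3c.dom.NodeList;

         public class XmiClassCounter {
             public static void main(String[] args) throws Exception {
                 Document doc = DocumentBuilderFactory.newInstance()
                         .newDocumentBuilder()
                         .parse(new File("model.xmi")); // placeholder path

                 // Count UML classes; assumes XMI 2.x element and attribute names.
                 NodeList elements = doc.getElementsByTagName("packagedElement");
                 int classCount = 0;
                 for (int i = 0; i < elements.getLength(); i++) {
                     Element e = (Element) elements.item(i);
                     if ("uml:Class".equals(e.getAttribute("xmi:type"))) {
                         classCount++;
                     }
                 }
                 System.out.println("Classes in model: " + classCount);
             }
         }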

  4. ABSTRACT Program slicing, originally introduced by Mark Weiser, is useful in program debugging, automatic parallelization, software maintenance, program integration, etc. It is a method for automatically decomposing programs by analyzing their data flow and control flow, reducing the program to a minimal form, called a “slice”, that still produces the behavior of interest. Interprocedural slicing is the slicing of multi-procedure programs. In this paper, a new algorithm (the IP algorithm) is introduced for the interprocedural static slicing of structured programs. The most time-consuming part of interprocedural slicing methods is the computation of transitive dependences (i.e., summary edges) due to procedure calls. Horowitz et al. [8] introduced an algorithm based on attribute grammars for computing summary edges. Reps et al. [7] and Istavan [9] defined an improved algorithm for computing summary edges representing interprocedural dependences at procedure calls. In this paper, we discuss the improved interprocedural slicing (IP) algorithm, which is faster than the previous algorithms and takes less memory space.
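     To make the underlying idea concrete, the Java sketch below treats backward slicing as reachability over an explicit dependence graph: the slice for a criterion is the set of statements it transitively depends on. This is a generic illustration, not the IP algorithm or its summary-edge computation, and the toy statements are hypothetical.

         import java.util.ArrayDeque;
         import java.util.Deque;
         import java.util.HashMap;
         import java.util.HashSet;
         import java.util.List;
         import java.util.Map;
         import java.util.Set;

         public class BackwardSliceDemo {

             /** Backward slice: every statement the criterion transitively depends on. */
             static Set<String> backwardSlice(Map<String, List<String>> dependsOn, String criterion) {
                 Set<String> slice = new HashSet<>();
                 Deque<String> worklist = new ArrayDeque<>();
                 worklist.push(criterion);
                 while (!worklist.isEmpty()) {
                     String stmt = worklist.pop();
                     if (slice.add(stmt)) { // first visit: follow its dependences
                         for (String dep : dependsOn.getOrDefault(stmt, List.of())) {
                             worklist.push(dep);
                         }
                     }
                 }
                 return slice;
             }

             public static void main(String[] args) {
                 // Toy dependence graph: each statement maps to the statements it depends on.
                 Map<String, List<String>> dependsOn = new HashMap<>();
                 dependsOn.put("s4: print(sum)", List.of("s1: sum = 0", "s2: sum = sum + i"));
                 dependsOn.put("s2: sum = sum + i", List.of("s1: sum = 0", "s3: i = i + 1"));

                 // Slicing criterion: the value of sum printed at s4.
                 System.out.println(backwardSlice(dependsOn, "s4: print(sum)"));
             }
         }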

  5. ABSTRACT Software cost estimation is the process of predicting the amount of time (effort) required to build a software system. The primary reason for cost estimation is to enable the client or the developer to perform a cost-benefit analysis. Effort estimates are expressed in person-months, which can be translated into actual dollar cost. The accuracy of the estimate depends on the amount of accurate information available about the final product: a specification with uncertainty represents a range of possible final products, not one precisely defined product. The inputs for effort estimation are the size of the project and the cost driver parameters. A number of models have been proposed to relate software size to effort, but no model consistently and effectively predicts effort, and accurate software effort estimation remains a challenge in the software industry. In this paper, a Particle Swarm Optimization (PSO) technique is proposed that operates on data sets clustered using the K-means algorithm. PSO is employed to generate the parameters of the COCOMO model for each cluster of data values. The clusters and effort parameters are then used to train a neural network with the back-propagation technique for classification of the data. The model has been tested on the COCOMO 81 dataset, and the results have been compared with the standard COCOMO model as well as the neuro-fuzzy model. It is concluded from the results that neural networks, with parameters efficiently tuned by PSO operating on clusters, generate better results and can therefore function efficiently on ever larger data sets.
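     For reference, the basic COCOMO relationship that PSO would tune per cluster has the well-known form Effort = a * (KLOC)^b, giving effort in person-months. The Java sketch below evaluates it with the standard COCOMO 81 organic-mode constants (a = 2.4, b = 1.05); in the proposed approach these would be replaced by cluster-specific values found by PSO. The project size used is hypothetical.

         public class CocomoEffort {

             /** Basic COCOMO: effort in person-months from size in KLOC. */
             static double effort(double kloc, double a, double b) {
                 return a * Math.pow(kloc, b);
             }

             public static void main(String[] args) {
                 // Standard COCOMO 81 organic-mode constants; the paper's PSO would
                 // instead search for (a, b) separately for each K-means cluster.
                 double a = 2.4;
                 double b = 1.05;
                 double kloc = 32.0; // hypothetical project size in KLOC

                 System.out.printf("Estimated effort: %.1f person-months%n", effort(kloc, a, b));
             }
         }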