Software testing is a well-defined phase of the software development life cycle. Functional ("black box") testing and structural ("white box") testing are two test case design methods commonly used by software developers. A lesser-known method is risk-based testing, which takes into account the probability that a portion of code will fail, as determined by its complexity. For object oriented programs, we propose a methodology for identifying risk-prone classes.

Risk-based testing is a highly effective technique for finding and fixing the most important problems as quickly as possible. Risk can be characterized by a combination of two factors: the severity of a potential failure event and the probability of its occurrence. Risk can be quantified by the equation
Risk = Σ p(Ei) * c(Ei), i = 1, 2, ..., n,
where n is the number of unique failure events, Ei are the possible failure events, p(Ei) is the probability of event Ei, and c(Ei) is its cost.
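As a small worked example of the equation, the sketch below totals the risk for three hypothetical failure events; the event names, probabilities, and costs are invented purely for illustration:

```python
# Minimal sketch of the risk equation: Risk = sum over i of p(Ei) * c(Ei).
# The failure events, probabilities, and costs are hypothetical values
# chosen only to illustrate the arithmetic.
failure_events = {
    "E1_data_corruption": (0.02, 500_000),  # p(E1), c(E1) in dollars
    "E2_ui_crash":        (0.10,  10_000),  # p(E2), c(E2)
    "E3_slow_response":   (0.30,   1_000),  # p(E3), c(E3)
}

risk = sum(p * c for p, c in failure_events.values())
print(f"Risk = {risk:,.0f}")  # 0.02*500000 + 0.10*10000 + 0.30*1000 = 11,300
```

Note that a rare but expensive event (E1) can dominate the total, which is why neither probability nor severity alone is a sufficient basis for allocating test effort.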
Risk-based testing focuses on analyzing the software and deriving a test plan weighted toward the areas most likely to experience a problem with the highest impact. This may look like a daunting task, but once it is broken down into its parts, a systematic approach makes it quite manageable.
The severity factor c(Ei) of the risk equation depends on the nature of the application and is determined by domain analysis. For some projects severity might be measured in terms of mission criticality, for others in terms of financial loss, and for still others in terms of safety. Severity assessment requires expert knowledge of the application as well as a thorough understanding of the potential costs of various failures. Musa addresses ways to estimate the severity of software module failures in his discussion of "Operational Profiles" in his book, Software Reliability Engineering.
Both severity and probability of failure are needed before risk-based test planning can proceed. Severity assessment is not addressed here, however, because it involves so much application-specific knowledge. Instead we confine the remainder of the discussion to the other crucial part of the risk equation, assessing the likelihood of component failures, p(Ei), and we suggest a way to capture the information directly from the source code, independent of domain knowledge.
The task we address is to determine how likely it is that each part of a software system will fail. Studies have shown that more complex code has a higher incidence of errors. Cyclomatic complexity, for example, is an established criterion for identifying and ranking the complexity of source code. Using metrics to predict module failures might therefore seem to be a simple matter of sorting modules by complexity, then using the complexity rankings in conjunction with the severity assessments from the domain risk analysis described above to identify which modules should get the most attention. But restricting the focus to ranking module complexity is an over-simplification, and we may fail to detect some very risk-prone code. Experience has shown that object oriented programming, in particular, can result in deceptively low values for common complexity metrics. The hierarchical nature of object oriented code calls for a multivariate approach to measuring complexity.
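As a concrete illustration of the single-metric approach just described, the following minimal sketch ranks the functions in a Python source file by an approximate cyclomatic complexity (one plus the number of decision points). The counting rule is a common simplification assumed here for illustration; it is not the SATC's tooling:

```python
import ast
import sys

# Node types that introduce a decision point. Counting branch points + 1
# is a common approximation of McCabe's cyclomatic complexity; nested
# functions are folded into their parent's count in this simple sketch.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(func: ast.FunctionDef) -> int:
    return 1 + sum(isinstance(node, DECISION_NODES)
                   for node in ast.walk(func))

def rank_by_complexity(source: str) -> list[tuple[str, int]]:
    tree = ast.parse(source)
    funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    return sorted(((f.name, cyclomatic_complexity(f)) for f in funcs),
                  key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    with open(sys.argv[1]) as fh:
        for name, cc in rank_by_complexity(fh.read()):
            print(f"{cc:4d}  {name}")
```

Ranking on one metric this way is exactly the over-simplification warned about above: in object oriented code, inherited and delegated behavior can leave each individual method looking deceptively simple.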
We narrow the topic further and focus specifically on object oriented software. The Software Assurance Technology Center (SATC) at NASA Goddard Space Flight Center has identified and applied a set of six metrics for object oriented source code. These metrics have been used in the evaluation of many NASA projects and empirically derived guidelines have been developed for their interpretation. In this paper, we will identify and discuss the interpretation and application of these metrics.
The purpose of the metrics information is to identify the classes at highest risk for error. While there is insufficient data to make precise ranking determinations, there is enough information to justify additional testing of those classes that exceed the recommended values. Then, combining module risk evaluation with expert criticality estimates, we have the two components needed for determining risk by class. Allocating testing resources based on these two factors, severity and likelihood of failures, amounts to risk-based testing.
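To illustrate how these two components could be combined, the sketch below flags classes whose metric values exceed recommended limits and weights the result by an expert severity estimate. The metric names follow the Chidamber-Kemerer suite and the threshold values are invented placeholders; the actual SATC metric set and benchmark values are not reproduced in this text:

```python
from dataclasses import dataclass

# Hypothetical thresholds in the spirit of empirically derived guidelines.
# The real SATC benchmarks are not given here; these are placeholders.
THRESHOLDS = {"wmc": 20, "rfc": 50, "cbo": 5, "dit": 6, "noc": 10, "lcom": 70}

@dataclass
class ClassMetrics:
    name: str
    metrics: dict[str, int]  # measured metric values for the class
    severity: float          # expert criticality estimate, 0.0 - 1.0

def exceeded(cm: ClassMetrics) -> int:
    """Number of metrics that exceed their recommended value."""
    return sum(cm.metrics.get(m, 0) > limit for m, limit in THRESHOLDS.items())

def risk_ranking(classes: list[ClassMetrics]) -> list[tuple[str, float]]:
    # The fraction of exceeded benchmarks serves as a crude likelihood
    # proxy, weighted by severity -- a per-class stand-in for p(Ei) * c(Ei).
    scored = [(c.name, (exceeded(c) / len(THRESHOLDS)) * c.severity)
              for c in classes]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Example with two hypothetical classes: a critical parser and a dialog.
classes = [
    ClassMetrics("TelemetryParser", {"wmc": 35, "rfc": 80, "cbo": 9}, 0.9),
    ClassMetrics("AboutDialog",     {"wmc": 25, "rfc": 60, "cbo": 7}, 0.1),
]
for name, score in risk_ranking(classes):
    print(f"{score:.2f}  {name}")
```

In this example both classes exceed the same number of benchmarks, but the severity weighting directs the testing effort to the mission-critical parser rather than the cosmetic dialog.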
Object oriented software metrics can be used in combination to identify classes that are most likely to pose problems for a project. The SATC has used the data collected from thousands of object oriented classes to determine a set of benchmarks that are effective when used simultaneously in identifying potential problems. When problematic classes are also identified by domain experts as critical to the success of the project, testing can be allocated to mitigate risk. Risk-based testing will allow developers to find and fix the most important software problems earlier in the test phase.
Dr. Linda H. Rosenberg is the Division Chief responsible for the Software Assurance Technology Center (SATC) at Goddard Space Flight Center, NASA. The SATC's primary responsibilities are in the areas of metrics, assurance tools and techniques, risk management, and outreach programs. Although she oversees all work areas, Dr. Rosenberg's area of expertise is metrics. The emphasis of her work with project managers is the application of metrics to evaluate the quality of development products. Dr. Rosenberg holds a Ph.D. in Computer Science, an M.E.S. in Computer Science, and a B.S. in Mathematics.
Contact Dr. Rosenberg by e-mail at Linda.Rosenberg@gsfc.nasa.gov or visit the SATC Web site at satc.gsfc.nasa.gov.