


Software Reliability Assessment Methods
There are many quantitative and qualitative methods that can be used for assessing software reliability. They may be grouped under the following three classifications [22]:

- Classification I: software metrics
- Classification II: software reliability models
- Classification III: analytical methods
Each of the above classifications is described in the following subsections.

Classification I: Software Metrics

These metrics may simply be described as quantitative indicators of the degree to which a software item/process possesses a specified attribute. Often, software metrics are employed for determining the status of a trend in a software development process, as well as for determining the risk of going from one phase to another. Two software metrics considered quite useful for assessing, directly or indirectly, software reliability are presented in the following subsections.

Design Phase Measure

This metric is concerned with determining the degree of reliability growth during the design phase. The measure requires establishing necessary defect severity classifications and possesses some ambiguity, since a low value may mean either a poor review process or a good product. The metric is expressed by [12,14,22]

$$CDR_d = \frac{\sum_{i=1}^{k}\delta_i}{y} \qquad (6.14)$$

where $CDR_d$ is the cumulative defect ratio for design, $k$ is the number of reviews, $\delta_i$ is the number of unique defects at or above a stated severity level discovered in the $i$th design review, and $y$ is the total number of source lines of design statement in the design phase, expressed in thousands. Additional information on this metric is available in Refs. [12,14,22].

Code and Unit Test Phase Measure

This metric is concerned with assessing software reliability during the code and unit test phase and is expressed by [12,14,22]

$$CDR_c = \frac{\sum_{i=1}^{k}\alpha_i}{P} \qquad (6.15)$$

where $CDR_c$ is the cumulative defect ratio for code, $k$ is the number of reviews, $\alpha_i$ is the number of unique defects at or above a stated severity level discovered in the $i$th code review, and $P$ is the total number of source lines of code reviewed, expressed in thousands.

Classification II: Software Reliability Models

There are many software reliability models, and they may be grouped under four categories, as shown in Figure 6.2 [12,14,22,27-29]. Category I (i.e., fault seeding) includes those software reliability models that determine the number of faults in the program at zero time via seeding of extraneous faults. Two main assumptions associated with the models belonging to this category are as follows:
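Both Classification I metrics above reduce to the same computation: a sum of unique qualifying defect counts over a series of reviews, divided by the size of the reviewed material in thousands of source lines. A minimal Python sketch, with illustrative function and variable names and made-up review data:

```python
# Cumulative defect ratio, as used for both the design-phase metric (CDR_d)
# and the code/unit-test-phase metric (CDR_c): the sum of unique defects
# found at or above a stated severity level across k reviews, divided by
# the reviewed size in thousands of source lines.
def cumulative_defect_ratio(defects_per_review, ksloc):
    """defects_per_review: unique qualifying defects found in each review.
    ksloc: thousands of source lines (design statements or code) reviewed."""
    if ksloc <= 0:
        raise ValueError("ksloc must be positive")
    return sum(defects_per_review) / ksloc

# Illustrative data: three design reviews over 12,000 lines of design statements.
cdr_d = cumulative_defect_ratio([14, 9, 4], 12.0)
print(cdr_d)  # 2.25 defects per thousand lines
```

As the text notes, a low ratio by itself is ambiguous; it must be read alongside evidence about the quality of the review process.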
Mills model is an example of the models belonging to this category [30]. Category II (i.e., failure count) includes those software reliability models that count the number of failures/faults taking place in stated time intervals. Three main assumptions associated with the models belonging to this category are as follows:
FIGURE 6.2 Categories of software reliability models.

The Musa model [31] and the Shooman model [32] are examples of the models belonging to this category. Category III (i.e., times between failures) includes those software reliability models that provide time-between-failures estimations. Four main assumptions associated with the models belonging to this category are as follows:
The Jelinski and Moranda model [33] and the Schick and Wolverton model [28] are examples of the models belonging to this category. Category IV (i.e., input domain based) includes those software reliability models that determine the software/program reliability under the condition that the test cases are sampled randomly from a known operational distribution of inputs to the software/program. Three main assumptions associated with the models belonging to this category are as follows:
The Nelson model [34] and the Ramamoorthy and Bastani model [35] are examples of the models belonging to this category. One model each from Categories I and II is presented below.

Mills Model

The basis for this model is that an assessment of the faults remaining in a software program can be made through a seeding process that assumes a homogeneous distribution of a representative group of faults. Prior to the initiation of the seeding process, a fault analysis is needed for determining the expected types of faults in the code, as well as their relative frequency of occurrence. An identification of seeded and unseeded faults is made during the reviews or testing process, and the discovery of indigenous and seeded faults allows an assessment of remaining faults for the type of fault under consideration. However, it is to be noted that the value of this measure can only be calculated if the seeded faults are discovered. The maximum likelihood of the unseeded faults is expressed by [22,30]

$$MLUF = \frac{NSF \times NUFU}{NSFF} \qquad (6.16)$$

where $MLUF$ is the maximum likelihood of the unseeded faults, $NSF$ is the number of seeded faults, $NUFU$ is the number of unseeded faults uncovered, and $NSFF$ is the number of seeded faults found. Thus, the number of unseeded faults still remaining in a software program under consideration is expressed by

$$\theta = MLUF - NUFU \qquad (6.17)$$

where $\theta$ is the number of unseeded faults still remaining in the software program under consideration.

Example 6.3

Assume that a software program was seeded with 30 faults and that, during the testing process, 60 faults of the same type were found. The breakdown of the faults found was 35 unseeded faults and 25 seeded faults. Calculate the number of unseeded faults still remaining in the software program.

By substituting the stated data values into Equation (6.16), we get

$$MLUF = \frac{(30)(35)}{25} = 42$$
By inserting the above resulting value and the stated data value into Equation (6.17), we get

$$\theta = 42 - 35 = 7$$
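The Mills estimate of Example 6.3 can be reproduced in a few lines of Python (the function name is illustrative):

```python
# Mills fault-seeding estimate: maximum-likelihood count of unseeded
# (indigenous) faults, Equation (6.16), minus those already uncovered,
# Equation (6.17), gives the faults still remaining.
def mills_remaining_faults(seeded, unseeded_found, seeded_found):
    mluf = seeded * unseeded_found / seeded_found  # Equation (6.16)
    return mluf - unseeded_found                   # Equation (6.17)

# Example 6.3: 30 seeded faults; 35 unseeded and 25 seeded faults found.
print(mills_remaining_faults(30, 35, 25))  # 7.0
```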
This means that 7 unseeded faults still remain in the software program.

Musa Model

The basis for this model is the premise that reliability assessments in the time domain can only be based upon actual execution time, as opposed to calendar/elapsed time. The main reason for this is that only during the execution process does a software program become exposed to failure-provoking stress. Two of the main assumptions associated with the Musa model are as follows [12,31]:
The net number of corrected faults is defined by [12,14,31]

$$k = m\left[1 - e^{-ct/(mT_m)}\right] \qquad (6.18)$$

where $k$ is the net number of corrected faults, $m$ is the initial number of faults, $t$ is the time, $T_m$ is the mean time to failure at the beginning of the test, and $c$ is the testing compression factor, defined as the average ratio of the failure detection rate during test to the failure detection rate during normal use of the software program under consideration. Mean time to failure, $MTTF$, increases exponentially with execution time and is expressed by

$$MTTF = T_m e^{ct/(mT_m)} \qquad (6.19)$$
Thus, the reliability at operational time $t$ is

$$R(t) = e^{-t/MTTF} \qquad (6.20)$$
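The execution-time relationships above can be collected into a small helper module; a minimal sketch under the stated assumptions, with illustrative function names:

```python
import math

# Musa model quantities: m = initial number of faults, tm = MTTF at the
# start of testing (hours), c = testing compression factor, t = execution time.
def corrected_faults(m, tm, c, t):
    # Equation (6.18): net number of corrected faults after execution time t.
    return m * (1.0 - math.exp(-c * t / (m * tm)))

def mttf(m, tm, c, t):
    # Equation (6.19): MTTF grows exponentially with execution time.
    return tm * math.exp(c * t / (m * tm))

def reliability(t_op, current_mttf):
    # Equation (6.20): reliability over an operational period t_op.
    return math.exp(-t_op / current_mttf)

# Before any testing (t = 0) no faults are corrected and MTTF equals tm.
print(corrected_faults(400, 5.0, 3, 0.0), mttf(400, 5.0, 3, 0.0))  # 0.0 5.0
```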
From the above relationships, we obtain the number of failures, $\Delta k$, that must occur for improving mean time to failure from, say, $MTTF_1$ to $MTTF_2$ [36]:

$$\Delta k = mT_m\left[\frac{1}{MTTF_1} - \frac{1}{MTTF_2}\right] \qquad (6.21)$$
The additional execution time required to experience $\Delta k$ is expressed by

$$\Delta t = \frac{mT_m}{c}\ln\left(\frac{MTTF_2}{MTTF_1}\right) \qquad (6.22)$$

Example 6.4

Assume that a newly developed software program is estimated to have approximately 400 errors. Also, at the beginning of the testing process, the recorded mean time to failure is 5 hours. Determine the amount of testing time required for reducing the remaining errors to 25 if the value of the testing compression factor is 3. Also, estimate the reliability over a 150-hour operational period.

By substituting the given data values into Equation (6.21), we obtain

$$400 - 25 = (400)(5)\left[\frac{1}{5} - \frac{1}{MTTF_2}\right]$$
Rearranging the above relationship yields

$$MTTF_2 = 80 \text{ hours}$$
By inserting the above result and the other specified data values into Equation (6.22), we get

$$\Delta t = \frac{(400)(5)}{3}\ln\left(\frac{80}{5}\right) = 1848.39 \text{ hours}$$
Thus, for the given and calculated values, from Equation (6.20) we obtain

$$R(150) = e^{-150/80} = 0.1533$$
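The figures in Example 6.4 can be verified directly from Equations (6.20) through (6.22); a short check in Python (variable names are illustrative):

```python
import math

m, tm, c = 400, 5.0, 3.0   # initial errors, starting MTTF (h), compression factor
mttf_1 = tm                # MTTF at the beginning of testing
delta_k = m - 25           # faults to correct so that 25 errors remain
# Equation (6.21) rearranged for MTTF_2:
mttf_2 = 1.0 / (1.0 / mttf_1 - delta_k / (m * tm))
# Equation (6.22): additional execution (testing) time required.
delta_t = (m * tm / c) * math.log(mttf_2 / mttf_1)
# Equation (6.20): reliability over a 150-hour operational period.
r = math.exp(-150.0 / mttf_2)
print(round(mttf_2, 1), round(delta_t, 2), round(r, 3))  # 80.0 1848.39 0.153
```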
Thus, the required testing time is 1848.39 hours, and the reliability of the software for the specified operational period is 0.1533.

Classification III: Analytical Methods

There are a number of analytical methods that can be used for assessing software reliability. Two of these methods are failure modes and effect analysis (FMEA) and fault tree analysis (FTA). Both of these methods are quite commonly used for assessing the reliability of hardware, and they can equally be used for assessing the reliability of software. Both the FMEA and FTA methods are described in Chapter 4.
