IEEE 1633:2008 pdf free download – IEEE Recommended Practice on Software Reliability

3.1 General
Software is a complex intellectual product. Inevitably, some errors are made during requirements formulation as well as during designing, coding, and testing the product. The development process for high-quality software includes measures that are intended to discover and correct faults resulting from these errors, including reviews, audits, screening by language-dependent tools, and several levels of test. Managing these errors involves describing the errors, classifying the severity and criticality of their effects, and modeling the effects of the remaining faults in the delivered product, thereby working with designers to reduce the number and criticality of their errors.
NOTE—The IEEE standard for classifying errors and other anomalies is IEEE Std 1044™-1993 [B25].
Dealing with faults costs money. It also impacts development schedules and system performance (through increased use of computer resources such as memory, CPU time, and peripherals). Consequently, there can be too much as well as too little effort spent dealing with faults. The system engineer (along with management) can use reliability estimation and assessment to understand the current status of the system and make tradeoff decisions.
3.2 Basic concepts
Clause 3 describes the basic concepts involved in SRE and addresses the advantages and limitations of SR prediction and estimation. The basic concept of reliability predictions and assessments of electronic systems and equipment is described in IEEE Std 1413™-1998 [B28]. The objective of IEEE Std 1413-1998 [B28] is to identify the required elements for an understandable, credible reliability prediction, which will provide the users of the prediction sufficient information to evaluate the effective use of the prediction results. A reliability prediction should have sufficient information concerning inputs, assumptions, and uncertainty, such that the risk associated with using the prediction results can be understood.
There are at least two significant differences between hardware reliability and SR. First, software does not fatigue or wear out. Second, due to the accessibility of software instructions within computer memories, any line of code can contain a fault that, upon execution, is capable of producing a failure.
An SR model specifies the general form of the dependence of the failure process on the principal factors that affect it: fault introduction, fault removal, and the operational environment. The failure rate (failures per unit time) of a software system is generally decreasing due to fault identification and removal, as shown in Figure 1. At a particular time, it is possible to observe a history of the failure rate of the software. SR modeling is done to estimate the form of the curve of the failure rate by statistically estimating the parameters associated with the selected model. The purpose of this measure is twofold: 1) to estimate the extra execution time during test required to meet a specified reliability objective, and 2) to identify the expected reliability of the software when the product is released.
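To make the parameter-estimation step concrete, the sketch below fits one commonly used SR model, the Goel-Okumoto exponential NHPP with failure intensity lam(t) = a*b*exp(-b*t), to a set of failure times by maximum likelihood, and then computes the extra execution time needed to drive the intensity down to a stated objective. This is a minimal illustration only: the model choice, the function names, and the sample failure times are assumptions for this example, not prescriptions from the standard.

    import math

    def fit_goel_okumoto(times, T):
        """Maximum-likelihood fit of the Goel-Okumoto NHPP model
        m(t) = a*(1 - exp(-b*t)) to failure times observed on [0, T].
        Returns the estimates (a, b)."""
        n = len(times)
        s = sum(times)

        # With a profiled out as a = n / (1 - exp(-b*T)), the MLE for b solves
        #   n/b - n*T*exp(-b*T) / (1 - exp(-b*T)) - sum(t_i) = 0.
        def g(b):
            e = math.exp(-b * T)
            return n / b - n * T * e / (1.0 - e) - s

        # A root exists only when failures cluster early (sum(t_i) < n*T/2),
        # i.e., when the data actually exhibit reliability growth.
        lo, hi = 1e-9, 1.0
        while g(hi) > 0.0:            # extend the bracket until g changes sign
            hi *= 2.0
        for _ in range(100):          # bisection on g(lo) > 0 > g(hi)
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if g(mid) > 0.0 else (lo, mid)
        b = 0.5 * (lo + hi)
        a = n / (1.0 - math.exp(-b * T))
        return a, b

    def extra_time_to_objective(a, b, T, lam_obj):
        """Additional execution time beyond T until the fitted intensity
        lam(t) = a*b*exp(-b*t) falls to the objective lam_obj."""
        t_star = math.log(a * b / lam_obj) / b
        return max(0.0, t_star - T)

    # Hypothetical failure times (execution hours) from a 100-hour test run.
    times = [3, 6, 10, 17, 25, 33, 44, 57, 73, 92]
    a, b = fit_goel_okumoto(times, T=100.0)
    lam_now = a * b * math.exp(-b * 100.0)
    print(f"expected total faults a = {a:.1f}, current intensity = {lam_now:.4f}/h")
    print(f"extra hours to reach 0.010 failures/h: "
          f"{extra_time_to_objective(a, b, 100.0, 0.010):.0f}")

Plain bisection keeps the sketch dependency-free; in practice one would use a numerical library root-finder and compare several candidate models against the observed failure data before trusting any single estimate.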
3.3 Limitations of software reliability assessment and prediction
SR models can both assess and predict reliability. The former deals with measuring past and current reliability; the latter provides forecasts of future reliability. The word “prediction” is not intended in the common dictionary sense of foretelling future events, particularly failures, but rather as an estimate of the probabilities of future events. Both assessment and prediction need good data if they are to yield good forecasts. Good data implies accuracy (the data are accurately recorded at the time the events occurred) and pertinence (the data relate to an environment equivalent to the environment for which the forecast is to be valid). A negative example with respect to accuracy is restricting failure report counts to those reports that are completely filled out, because they may represent a biased sample of the total reports. A negative example with respect to pertinence would be the use of data from early test runs at an uncontrolled workload to forecast the results of a later test executed under a highly controlled workload.
