Reliability largely defines the end result of a software application. In an era of cut-throat competition, a product must not only provide the required functionality but also offer additional benefits to its end users. Software development is a tedious and time-consuming process, and so is testing. Ensuring a reliable software product should therefore be the ultimate goal of any model an organisation adopts.
Software reliability can thus be defined as one attribute of software quality, alongside aspects such as functionality, usability, performance, capability, maintainability and documentation. As a software application grows in size, it is bound to become more complex, and estimates such as its cost become difficult to make. Over the years, many models have therefore evolved that offer efficient estimation techniques.
Software Reliability Models:
With the rise in demand for software reliability models, these models are categorised, based on their nature, as follows -
Prediction Models - This modelling technique relies on historical data: previous data are analysed to draw conclusions about the reliability of the software being built. These models are usually constructed before development and the regular test phase begin. A few prediction models are listed below -
Musa Model: It is used to predict the failure rate before system testing begins. This model rests on the assumption that the failure rate of a software product depends on the number of faults it contains.
Putnam Model: It estimates the time and effort required to complete a software project of a particular size, using a mathematical equation that relates size, schedule and process productivity:
Effort = [Size / (Productivity × Time^(4/3))]^3 × B
Size refers to the product size.
Productivity is the process productivity of the organisation.
B is a scaling factor, a function of the project size.
Effort is the total effort in person-years.
Time refers to the total time allotted for the project, in years.
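As a rough illustration, the Putnam equation can be coded directly. The sketch below is illustrative only; the function name and sample parameter values are assumptions, not part of any standard library:

```python
def putnam_effort(size, productivity, time_years, b):
    """Putnam estimate: Effort = [Size / (Productivity * Time^(4/3))]^3 * B.

    size:         product size (e.g. lines of code)
    productivity: process productivity of the organisation
    time_years:   total schedule allotted to the project, in years
    b:            scaling factor, a function of the project size
    Returns the total effort in person-years.
    """
    return (size / (productivity * time_years ** (4.0 / 3.0))) ** 3 * b
```

Note how sensitive the effort is to the schedule: because Time sits inside a term that is cubed, compressing the schedule inflates the effort estimate sharply.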
Estimation Models - These models use current data from the ongoing software development effort and do not take the conceptual development phases into account. They can produce an estimate at any point in time.
Apart from the aforementioned models, there are some other reliability models, which are:
Geometric Model - This model is a variant of the Jelinski-Moranda (J-M) model. It assumes that the times between failures are exponentially distributed, with a hazard rate that decreases geometrically as faults are removed.
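A minimal sketch of this idea, assuming the common geometric (Moranda) form in which the hazard rate drops by a constant factor phi after each fault removal (the names below are illustrative):

```python
def geometric_hazard(d, phi, i):
    """Hazard rate between the (i-1)-th and i-th failure: d * phi**(i - 1).

    d:   initial hazard rate (at i = 1)
    phi: constant ratio, 0 < phi < 1, by which the rate shrinks per fix
    i:   failure index, starting at 1
    """
    return d * phi ** (i - 1)
```

Since the times between failures are exponentially distributed, the expected time to the next failure is simply 1 / geometric_hazard(d, phi, i), which grows as faults are removed.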
S-Shaped Model - This model states that the time between failures depends on the time at which the failure occurred, and that fault mitigation takes place immediately when a failure is observed.
L-V (Littlewood-Verrall) Reliability Growth Model - This model relies on the fact that correcting a fault requires writing a piece of code.
Exponential Failure Time Model - This model calculates the number of remaining faults. It comprises two other models, namely the Poisson and the Binomial model. These models assume a hazard function defined in terms of the number of remaining faults; the failure function rises exponentially.
Execution Time Model - It assumes that faults occur independently. The failure intensity is directly proportional to the number of faults remaining in the program, and the fault-correction rate is proportional to the failure-occurrence rate.
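The proportionality between failure intensity and remaining faults can be sketched as follows, assuming the linear form lambda(mu) = lambda0 * (1 - mu / nu0) commonly associated with Musa's basic execution-time model (the names are illustrative):

```python
def failure_intensity(lambda0, mu, nu0):
    """Failure intensity after mu failures have been experienced.

    lambda0: initial failure intensity (failures per unit execution time)
    mu:      expected number of failures experienced so far
    nu0:     total number of failures expected in infinite time
    The intensity falls linearly as faults are found and corrected,
    i.e. it remains proportional to the faults still remaining.
    """
    return lambda0 * (1.0 - mu / nu0)
```

For example, once half of the expected failures have been experienced, the intensity has dropped to half of its initial value.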
Hyper-Exponential Model - The underlying idea here is that different portions of a program experience different failure rates; the failure rate varies with the behavioural patterns of the different parts.
Logarithmic Poisson Model - The model follows a logarithmic approach: when failures occur, the failure intensity decreases exponentially with the expected number of failures experienced. The number of failures expected over a period of time is thus a logarithmic function, hence the name Logarithmic Poisson.
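This behaviour is often written via the Musa-Okumoto mean value function mu(t) = (1/theta) * ln(lambda0 * theta * t + 1); the sketch below assumes that form, and the function names are illustrative:

```python
import math

def expected_failures(lambda0, theta, t):
    """Expected number of failures by execution time t (Musa-Okumoto form).

    lambda0: initial failure intensity
    theta:   failure-intensity decay parameter
    t:       elapsed execution time
    """
    return math.log(lambda0 * theta * t + 1.0) / theta

def intensity(lambda0, theta, t):
    """Failure intensity at time t; it decays as failures accumulate."""
    return lambda0 / (lambda0 * theta * t + 1.0)
```

Under this form, intensity(t) equals lambda0 * exp(-theta * expected_failures(t)), which is exactly the "decreases exponentially" behaviour described above, while the expected failure count grows only logarithmically in time.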
Software Reliability Engineering is a discipline concerned with measuring and improving reliability. The models discussed above provide a systematic, quantitative approach to discovering defects and failures in a timely manner.