
Full Paper, Int. J. on Recent Trends in Engineering and Technology, Vol. 7, No. 1, July 2012

A Methodology for Measuring Web Software Reliability

M.Ravichandran1 and A.V. Ramani2

1 Associate Professor of Computer Science, SRMV College of Arts and Science, Coimbatore, India. Email: ravichandransrkv@gmail.com
2 HOD of Computer Science, SRMV College of Arts and Science, Coimbatore, India. Email: avvramani@yahoo.com

Software reliability is, in effect, a measure of the degree to which software operates reliably. Software failures are caused by design defects, and the input determines whether those defects lead to a failure. If the input profile used in a reliability test differs from the input profile of practical operation, the failure times observed in testing will not be consistent with the failure times seen in practice, so the data acquired from such a test cannot be used for reliability estimation by a time-domain reliability estimation model [11]. Two basic types of software reliability models exist: input-domain reliability models (IDRMs) and time-domain software reliability growth models (SRGMs) [3, 4]. IDRMs can provide a snapshot of a Web site's current reliability. For example, if a total of f failures are observed over n workload units, the estimated reliability R according to the Nelson model [5], one of the most widely used IDRMs [3], is:
R = (n - f) / n = 1 - f/n = 1 - r

Abstract - Web applications are basic needs of business and communication, and are accessed through the Internet. The main problem with a web application is assessing its reliability, and assessing the reliability of web application software differs from that of other application software. This paper proposes new metrics and a methodology for assessing web software reliability. The methodology was analyzed on four web sites.

Index Terms - Software reliability; web applications; Student t distribution; standard deviation.

I. INTRODUCTION

The rise of the Internet has brought dramatic changes in commerce and communication. Internet distribution and e-commerce have created new preferences, expectations, and challenges for business, education, industry, and the general public. In recent years, the World Wide Web has continued to drive enormous growth in the user population. Because of this increasing reliance on the Internet, malfunctions of a Web site can jeopardize business opportunities and even company reputations [10]. Compared with traditional software, Web applications have several special properties [2, 12]. Firstly, because of the easy accessibility of information, Web applications have a huge user population, which places high demands on server performance and on the ability to handle concurrent transactions. Secondly, the architecture requires Web applications to fit heterogeneous and autonomous environments. Thirdly, Web applications mainly focus on information search and indexing, so they have weaker functions but quicker update rates in their contents and techniques than traditional software. Thus, additional effort is needed in Web testing [8]. One of the problems in web software testing is assessing web software reliability [6].

II. SOFTWARE RELIABILITY

Software reliability [13] is defined as the probability that the software works without failure under given conditions over a period of time. This probability is a function of the software input, the system operation, and the defects in the software, because the input determines whether the defects (if any exist) are encountered in operation [11].

Where r is the failure rate, which is also often used to characterize reliability. When a usage time t_i is available for each workload unit i, the summary reliability measure, mean time between failures (MTBF) [3], can be calculated as:

MTBF = (1/f) Σ_i t_i

When the usage time t_i is not available, the number of workload units can be used as a rough time measure [3]. In this case:

MTBF = n / f
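As a check on these definitions, the Nelson estimate and both MTBF variants can be sketched in a few lines of Python. This is a minimal sketch; the function names and sample numbers are ours, not the paper's.

```python
# Nelson-model reliability estimate and MTBF, per the formulas above.

def nelson_reliability(n, f):
    """R = (n - f) / n = 1 - f/n for f failures over n workload units."""
    return (n - f) / n

def failure_rate(n, f):
    """r = f / n = 1 - R."""
    return f / n

def mtbf_with_times(usage_times, f):
    """MTBF = (1/f) * sum(t_i) when per-unit usage times are known."""
    return sum(usage_times) / f

def mtbf_without_times(n, f):
    """MTBF = n / f when usage times are unavailable."""
    return n / f

print(nelson_reliability(1000, 40))   # 0.96
print(failure_rate(1000, 40))         # 0.04
print(mtbf_without_times(1000, 40))   # 25.0
```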

We will use R_i to represent the reliability of the atomic Web service C_i, and P_i to represent the probability of the precondition of atomic Web service C_i. For a composite Web service C with the structure sequence(C_1; C_2; ...; C_n), the reliability R of the composite Web service can be obtained by [4]:

R = Π_{i=1}^{n} P_i R_i
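The product formula for a sequential composition can be sketched as follows; the numbers are hypothetical and serve only to illustrate the formula.

```python
# R = product of P_i * R_i for sequence(C_1; ...; C_n), where P_i is
# the precondition probability and R_i the reliability of atomic
# service C_i.
from math import prod

def sequence_reliability(preconds, reliabilities):
    return prod(p * r for p, r in zip(preconds, reliabilities))

# Three atomic services in sequence (hypothetical values):
P = [1.00, 0.90, 0.95]
R = [0.99, 0.98, 0.97]
print(sequence_reliability(P, R))
```

Because every factor is at most 1, the composite reliability can only decrease as more atomic services are chained in sequence.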

SRGMs are characterized by a mean value function (MVF) [7], which gives the expected number of faults detected as a function of the testing time [1]. The MVF of the Goel-Okumoto (GO) model is:

2012 ACEEE DOI: 01.IJRTET.7.1.70

M(t) = a(1 - e^(-bt))

The parameters of the GO model are a, which denotes the number of faults that will eventually be detected, and b, which denotes the constant fault detection rate. The larger the value of b, the faster the number of faults detected approaches a, which occurs as t goes to infinity. The MVF of the Weibull model [9] is:

M(t) = a(1 - e^(-(t/b)^c))

where a represents the number of faults that will eventually be detected, while b and c are the scale and shape parameters respectively.

Some tools are available to measure software reliability. For example, the log analysers Analog and FastStats only analyse access logs for common HTML errors. The JMeter performance testing tool can be used to create and execute test plans. The Google Webmaster tool reports the types of errors in a website. However, no tool analyses the errors for the purpose of assessing web reliability.

III. METRICS OF SOFTWARE RELIABILITY

The metrics of software reliability can be classified into three categories: server-side metrics, client-side metrics, and network metrics. The server-side metrics include availability of servers, availability of databases, availability of server-side software, website monitoring, and email notifications. The client-side metrics include availability of web application software, browsers, network software, and hardware. The network metrics include the type of Internet connection, the Internet service provider, and the network traffic.

IV. METHODOLOGY

Statistical methods are extremely helpful in formulating and testing problems and in developing new theories. If the number of accesses to a web page is very low, the success rate and error rate give the web reliability. It is calculated by using the following formula.

i) First, assume the hypothesis that there is no difference between the expected value of the errors and the calculated value of the errors.
The expected value of the errors is the assumed number of errors, which is zero.

ii) Then a suitable significance level has to be set up. The significance level, expressed as a percentage such as 5 percent, is the probability of rejecting the hypothesis when it is true. When the hypothesis is tested at the 5 percent level, the statistician makes a wrong decision about 5 percent of the time; testing at the 1 percent level reduces the chance of a false judgment to 1 in 100 occasions.

iii) Here fewer than 30 types of errors are taken for testing, so the Student t distribution can be selected for the test.

iv) To test the significance of the errors, the following statistic is calculated:
t = (X̄ - μ) / (s / √n)

Where X̄ is the calculated mean of the errors, μ is the expected mean of the errors, n is the number of types of errors, and s is the standard deviation of the error types, calculated using the formula:

s = √( Σ(X - X̄)² / (n - 1) )
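Steps i)-iv) above can be sketched with the standard library alone. The error counts below are hypothetical, chosen only to exercise the formulas.

```python
# One-sample t statistic, t = (mean - mu) / (s / sqrt(n)), with mu = 0
# (the expected number of errors) and s the sample standard deviation.
from math import sqrt

def t_statistic(errors, mu=0.0):
    n = len(errors)
    mean = sum(errors) / n                                    # X-bar
    s = sqrt(sum((x - mean) ** 2 for x in errors) / (n - 1))  # std. dev.
    return (mean - mu) / (s / sqrt(n))

counts = [2, 0, 1, 3, 0, 1]   # errors observed per error type (hypothetical)
print(round(t_statistic(counts), 3))
```

Because μ is zero under the hypothesis, the statistic reduces to the sample mean scaled by its standard error.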

The t value gives the probability integral of the t distribution, which has a different value for each degree of freedom. If the calculated value of t is below the table value t 0.05, it is concluded that there is no difference from our initial assumption, so the system is reliable. If the calculated value of t exceeds the table value t 0.05, the system is not reliable. The user can customize the percentage of error that falls in the acceptable region, depending on the definition of the problem.

V. ANALYSIS OF WEBSITES

The websites of Nail Soft Company, Brindhavan CBSE School, Rangatex and SRMV College were taken for analysis. The errors that occurred in these websites were stored in a separate log file. Table I shows the types of errors that occurred and the number of times each occurred in the four websites. From Table I, the total number of accesses to the websites, the number of successful accesses, the number of failed accesses, the percentage success rate and the percentage failure rate are calculated and given in Table II. Table II shows that the success rate varies from 62.08 to 86.56 and the failure rate varies from 13.43 to 37.91, i.e. the websites have a high failure rate, which suggests that these websites are not reliable. Another statistical measure, the mode, is calculated; it gives the error that repeats most often. The database error is highest in Nail Soft, while the software error is highest in Brindhavan CBSE, Rangatex and SRMV College. This infers that these errors must be rectified at the earliest. Hypothesis testing is another method to test the reliability. Here, at first it is assumed that the expected errors, which are zero, and the calculated

Where X is the total number of successful accesses to a web page, Y is the total number of errors in accessing a web page, and Z is the total number of accesses to a web page. If the success rate is high and the error rate is very low, it is concluded that the web system is reliable. The mode is a measure that shows the error needing immediate attention. These measures are not suitable when the number of accesses to a web page is large and the decision depends on many types of errors. In that case, a statistical testing technique called hypothesis testing can be used. Here, the number of types of errors taken for reliability testing is fewer than 30, so the Student t distribution can be used; if the number of types of errors were greater than thirty, the normal distribution would have to be used. The following is the procedure.
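As an aside, the rate computation just described amounts to a pair of simple percentages. A minimal sketch follows; the variable names X, Y and Z are taken from the text, while the access counts are hypothetical.

```python
# Success rate = X/Z * 100 and error rate = Y/Z * 100, where X is the
# number of successful accesses, Y the number of errors, and Z the
# total number of accesses to a web page.
def success_rate(X, Z):
    return 100.0 * X / Z

def error_rate(Y, Z):
    return 100.0 * Y / Z

print(success_rate(8656, 10000))  # 86.56
print(error_rate(1344, 10000))    # 13.44
```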



VI. CONCLUSION

New metrics for software reliability are given in this paper, and a hypothesis testing technique is suggested for assessing software reliability. Four websites were analyzed using this technique. The technique compares each type of error with the expected error instead of comparing the total number of errors with the expected number of errors. Since an acceptable error percentage can be fixed in the technique, it is well suited for assessing software reliability.

REFERENCES




errors are the same. Only six types of errors are taken for the calculations. Table III shows the various t statistics. The mean, standard deviation and t statistic are calculated for all four websites. The degrees of freedom is five, and the t table value at the 5% level is 2.015. The calculated value of t is lower than the table value of t for all websites, so the hypothesis is accepted. This means there is no difference between the expected errors and the actual (calculated) errors, so we can accept that the four websites are reliable with 5% error. The initial estimates of the success rate and failure rate gave the result that the websites are not reliable, but the hypothesis testing statistic gives the result that the websites are reliable with 5% error. Of the six types of errors taken for the calculations, only two occurred many times; the remaining errors are near the expected level, so the hypothesis testing statistic shows that the websites are reliable. Since no system is a hundred percent perfect, the hypothesis testing method is a good measure for testing software reliability.
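The decision rule applied in this section, with six error types (df = 5) and the table value t 0.05 = 2.015 quoted above, can be sketched as follows. The error counts are hypothetical; only the critical value comes from the paper.

```python
# Accept the hypothesis (site judged reliable) when the calculated
# t statistic stays below the 5% table value for df = 5.
from math import sqrt

T_TABLE_5PCT_DF5 = 2.015  # table value used in the paper

def is_reliable(errors, mu=0.0, t_crit=T_TABLE_5PCT_DF5):
    n = len(errors)
    mean = sum(errors) / n
    s = sqrt(sum((x - mean) ** 2 for x in errors) / (n - 1))
    t = (mean - mu) / (s / sqrt(n))
    return abs(t) < t_crit

print(is_reliable([1, 0, 2, 0, 1, 0]))   # t = 2.0 < 2.015, prints True
print(is_reliable([5, 5, 5, 4, 5, 5]))   # t = 29.0, prints False
```

The first sample sits just inside the acceptance region, illustrating how close the paper's borderline cases can be to the critical value.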

[1] A. Goel and K. Okumoto, "Time-Dependent Error-Detection Rate Model for Software Reliability and Other Performance Measures," IEEE Trans. Rel., vol. 28, no. 3, Aug. 1979, pp. 206-211.
[2] C. Kallepalli and J. Tian, "Measuring and Modeling Usage and Reliability for Statistical Web Testing," IEEE Trans. Software Eng., vol. 27, no. 11, pp. 1023-1036, Nov. 2001.
[3] R. Manjula and Eswar Anand Sriram, "Reliability Evaluation of Web Applications from Click-stream Data," International Journal of Computer Applications, vol. 9, no. 5, November 2010.
[4] Ning Huang, Dong Wang and Xiaoguang Jia, "Fast Abstract: An Algebra-Based Reliability Prediction Approach for Composite Web Services," 19th International Symposium on Software Reliability Engineering, 1071-9458/08, 2008.
[5] J. Offutt, "Quality Attributes of Web Applications," IEEE Software, vol. 19, no. 2, pp. 25-32, Mar. 2002.
[6] M. Ravichandran and A. V. Ramani, "An Analysis of Web Software Reliability," World Congress on Information and Communication Technologies, Mumbai, December 11-14, 2011.
[7] Swapna S. Gokhale, "Software Reliability Model with Bathtub-Shaped Fault Detection Rate," 978-1-4244-8856-8/11, 2011.
[8] F. A. Torkey, Arabi Keshk, Taher Hamza and Amal Ibrahim, "A New Methodology for Web Testing," IEEE, 1-4244-1430-X/07, 2007.
[9] W. Weibull, "A Statistical Distribution Function of Wide Applicability," Journal of Applied Mechanics, vol. 18, 1951, pp. 293-297.
[10] Wen-Li Wang and Mei-Huei Tang, "User-Oriented Reliability Modeling for a Web System," Proceedings of the 14th International Symposium on Software Reliability Engineering (ISSRE'03), 1071-9458/03, 2003.
[11] Xinlei Zhou, Jia Lin and Xing Song, "The Test and Estimation Method of Software Reliability Based on State Analysis," IEEE, 978-4244-4905-7/09, 2009.
[12] L. Xu, B. W. Xu, and Z. Q. Chen, "Survey of Web Testing," Computer Science (in Chinese), 30(3): 100-104, 2003.
[13] GB/T 11457-2006, Information Technology, Software Engineering Terminology.