
Different types of servers:

A server is a computer or device on a network that manages network resources. For example, a file server is a computer and storage device dedicated to storing files; any user on the network can store files on the server. A print server is a computer that manages one or more printers, and a network server is a computer that manages network traffic. Servers are often dedicated, meaning that they perform no tasks other than their server tasks. On multiprocessing operating systems, however, a single computer can execute several programs at once, so a server in this case could refer to the program that manages resources rather than to the entire computer.

What is Server Platform?


The term is often used synonymously with operating system. A platform is the underlying hardware or software for a system, and is thus the engine that drives the server.

Server types
Application Servers
Sometimes referred to as a type of middleware, application servers occupy a large chunk of computing territory between database servers and the end user, and they often connect the two. Middleware is software that connects two otherwise separate applications. For example, a number of middleware products link a database system to a Web server. This allows users to request data from the database using forms displayed in a Web browser, and it enables the Web server to return dynamic Web pages based on the user's requests and profile. The term middleware is used to describe separate products that serve as the glue between two applications. It is, therefore, distinct from import and export features that may be built into one of the applications. Middleware is sometimes called plumbing because it connects two sides of an application and passes data between them. Common middleware categories include:

* TP monitors
* DCE environments
* RPC systems
* Object Request Brokers (ORBs)
* Database access systems
* Message passing systems

Audio/Video Servers
Audio/Video servers bring multimedia capabilities to Web sites by enabling them to broadcast streaming multimedia content. Streaming is a technique for transferring data such that it can be processed as a steady and continuous stream. Streaming technologies are becoming increasingly important with the growth of the Internet because most users do not have fast enough access to download large multimedia files quickly. With streaming, the client browser or plug-in can start displaying the data before the entire file has been transmitted.

For streaming to work, the client side receiving the data must be able to collect the data and send it as a steady stream to the application that is processing the data and converting it to sound or pictures. This means that if the streaming client receives the data more quickly than required, it needs to save the excess data in a buffer. If the data doesn't arrive quickly enough, however, the presentation of the data will not be smooth. There are a number of competing streaming technologies emerging. For audio data on the Internet, the de facto standard is Progressive Networks' RealAudio.
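The buffering behavior described above can be sketched as a toy model (this is an illustrative analogy, not any real streaming client's code): chunks that arrive ahead of playback are held in a buffer, and playback stalls when the buffer runs dry.

```python
from collections import deque

class StreamBuffer:
    """Toy model of a streaming client's playout buffer: chunks that
    arrive faster than playback needs them are held until consumed."""

    def __init__(self):
        self.buffer = deque()

    def receive(self, chunk):
        # Excess data is saved in the buffer rather than discarded.
        self.buffer.append(chunk)

    def play_next(self):
        # Playback stalls (returns None) if data hasn't arrived in time.
        return self.buffer.popleft() if self.buffer else None

player = StreamBuffer()
for chunk in ("frame1", "frame2", "frame3"):  # network delivers ahead of playback
    player.receive(chunk)

played = [player.play_next() for _ in range(4)]
print(played)  # the 4th read stalls because no data has arrived yet
```

A real client would fill the buffer from the network on a separate thread while playback drains it, but the invariant is the same: smooth playback requires the buffer never to empty.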

Chat Servers
Chat servers enable a large number of users to exchange information in an environment similar to Internet newsgroups, but with real-time discussion capabilities. Real time means occurring immediately. The term is used to describe a number of different computer features. For example, real-time operating systems are systems that respond to input immediately. They are used for such tasks as navigation, in which the computer must react to a steady flow of new information without interruption. Most general-purpose operating systems are not real-time because they can take a few seconds, or even minutes, to react. Real time can also refer to events simulated by a computer at the same speed that they would occur in real life. In graphics animation, for example, a real-time program would display objects moving across the screen at the same speed that they would actually move.

Fax Servers
A fax server is an ideal solution for organizations looking to reduce incoming and outgoing telephone resources but that need to fax actual documents.

FTP Servers
One of the oldest of the Internet services, File Transfer Protocol makes it possible to move one or more files securely between computers while providing file security and organization as well as transfer control.

Groupware Servers
A GroupWare server is software designed to enable users to collaborate, regardless of location, via the Internet or a corporate Intranet and to work together in a virtual atmosphere.

IRC Servers
An option for those seeking real-time capabilities, Internet Relay Chat consists of various separate networks (or "nets") of servers that allow users to connect to each other via an IRC network.

List Servers
List servers offer a way to better manage mailing lists, whether they are interactive discussions open to the public or one-way lists that deliver announcements, newsletters, or advertising.

Mail Servers

Almost as ubiquitous and crucial as Web servers, mail servers move and store mail over corporate networks via LANs and WANs and across the Internet.

News Servers
News servers act as a distribution and delivery source for the thousands of public newsgroups currently accessible over the USENET news network. USENET is a worldwide bulletin board system that can be accessed through the Internet or through many online services. USENET contains more than 14,000 forums, called newsgroups, that cover every imaginable interest group, and it is used daily by millions of people around the world.

Proxy Servers
Proxy servers sit between a client program (typically a Web browser) and an external server (typically another server on the Web) to filter requests, improve performance, and share connections.
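The two main jobs mentioned above, filtering requests and improving performance via caching, can be sketched in a few lines. This is a minimal in-memory model, not a working network proxy; the `fetch` callable stands in for the real upstream connection.

```python
class CachingProxy:
    """Toy proxy: filters blocked hosts and caches responses so repeated
    requests are served without contacting the external server."""

    def __init__(self, fetch, blocked=()):
        self.fetch = fetch            # callable that contacts the real server
        self.blocked = set(blocked)
        self.cache = {}
        self.upstream_calls = 0

    def request(self, url):
        host = url.split("/")[2]      # 'http://host/path' -> 'host'
        if host in self.blocked:      # filtering
            return "403 Forbidden"
        if url not in self.cache:     # cache miss: go upstream
            self.upstream_calls += 1
            self.cache[url] = self.fetch(url)
        return self.cache[url]        # cache hit: served locally, faster

proxy = CachingProxy(fetch=lambda url: f"<html>{url}</html>",
                     blocked={"ads.example.com"})
first = proxy.request("http://www.example.com/index.html")
second = proxy.request("http://www.example.com/index.html")  # from cache
denied = proxy.request("http://ads.example.com/banner.gif")
print(proxy.upstream_calls)  # prints 1; the repeat request never left the proxy
```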

Telnet Servers
A Telnet server enables users to log on to a host computer and perform tasks as if they're working on the remote computer itself.

Web Servers
At its core, a Web server serves static content to a Web browser by loading a file from disk and sending it across the network to the user's Web browser. This entire exchange is mediated by the browser and the server talking to each other using HTTP.
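The core loop just described, mapping a requested path to a file on disk and returning its bytes (or a 404 when the file is missing), can be sketched as a plain function. This is a simplified model of what a static Web server does per request, not a complete server; a real one would also handle sockets, headers, and path sanitization.

```python
import os
import tempfile

def serve_static(docroot, path):
    """Resolve the requested path under the document root and return
    (status, body), the essence of static Web serving."""
    full = os.path.join(docroot, path.lstrip("/"))
    if not os.path.isfile(full):
        return 404, b"Not Found"
    with open(full, "rb") as f:
        return 200, f.read()  # body is what gets sent back over HTTP

# Demo with a throwaway document root.
docroot = tempfile.mkdtemp()
with open(os.path.join(docroot, "index.html"), "wb") as f:
    f.write(b"<h1>hello</h1>")

status, body = serve_static(docroot, "/index.html")
missing_status, _ = serve_static(docroot, "/nope.html")
print(status, missing_status)  # 200 404
```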

Difference between client/server applications and web applications:

1) Client/server applications follow a two-tier architecture, whereas web applications follow a three-tier or n-tier architecture.
2) Client/server applications do not use a Web server; web applications do.
3) Client/server applications are used mainly on intranets, whereas web applications are used on the Internet.
4) Security issues are fewer in client/server applications because there are fewer users; in web applications, security issues are greater because there are many more end users.
5) Performance is not a major consideration in client/server applications, but it is in web applications.

Projects are broadly divided into two types:

* 2-tier applications
* 3-tier applications

CLIENT/SERVER TESTING

This type of testing is usually done for 2-tier applications (usually developed for a LAN). Here we have a front end and a back end.

The application launched on the front end has forms and reports that monitor and manipulate data. Examples: applications developed in VB, VC++, Core Java, C, C++, D2K, PowerBuilder, etc. The back end for these applications would be MS Access, SQL Server, Oracle, Sybase, MySQL, or Quadbase.

The tests performed on these types of applications would be:
- User interface testing
- Manual support testing
- Functionality testing
- Compatibility testing and configuration testing
- Intersystem testing

WEB TESTING

This is done for 3-tier applications (developed for the Internet, an intranet, or an extranet). Here we have a browser, a web server, and a DB server.

The applications accessible in the browser would be developed in HTML, DHTML, XML, JavaScript, etc. (we monitor through these applications). Applications for the web server would be developed in Java, ASP, JSP, VBScript, JavaScript, Perl, ColdFusion, PHP, etc. (all the manipulations are done on the web server with the help of these programs). The DB server would run Oracle, SQL Server, Sybase, MySQL, etc. (all data is stored in the database available on the DB server).

The tests performed on these types of applications would be:
- User interface testing
- Functionality testing
- Security testing
- Browser compatibility testing
- Load/stress testing
- Interoperability testing / intersystem testing
- Storage and data volume testing

Choosing a test automation framework

Basing an automated testing effort on using only a capture tool such as IBM Rational Robot to record and play back test cases has its drawbacks. Running complex and powerful tests is time consuming and expensive when using only a capture tool. Because these tests are created ad hoc, their functionality can be difficult to track and reproduce, and they can be costly to maintain. A better choice for an automated testing team that's just getting started might be to use a test automation framework, defined as a set of assumptions, concepts, and practices that constitute a work platform or support for automated testing. In this article I'll attempt to shed a little light on a handful of the test automation frameworks I'm familiar with -- specifically, test script modularity, test library architecture, keyword-driven/table-driven testing, data-driven testing, and hybrid test automation. I won't evaluate which framework is better or worse but will just offer a description and a demonstration of each and, where appropriate, some tips on how to implement it in the IBM Rational toolset.

The Test Script Modularity Framework

The test script modularity framework requires the creation of small, independent scripts that represent modules, sections, and functions of the application-under-test. These small scripts are then used in a hierarchical fashion to construct larger tests, realizing a particular test case. Of all the frameworks I'll review, this one should be the simplest to grasp and master. It's a well-known programming strategy to build an abstraction layer in front of a component to hide the component from the rest of the application. This insulates the application from modifications in the component and provides modularity in the application design. The test script modularity framework applies this principle of abstraction or encapsulation in order to improve the maintainability and scalability of automated test suites. To demonstrate the use of this framework, I'll automate a simple test case for the Windows Calculator program (see Figure 1) to test the basic functions (add, subtract, divide, and multiply).

Figure 1. The Windows Calculator

At the bottom level of the script hierarchy are the individual scripts testing addition, subtraction, multiplication, and division. As examples, the first script that follows tests addition and the second script tests subtraction.

    Sub Main
        Window Set Context, "Caption=Calculator", ""
        '5
        PushButton Click, "ObjectIndex=10"
        '+
        PushButton Click, "ObjectIndex=20"
        '6
        PushButton Click, "ObjectIndex=14"
        '=
        PushButton Click, "ObjectIndex=21"
        '11
        Result = LabelVP (CompareProperties, "Text=11.", "VP=Object Properties")
    End Sub

    Sub Main
        Window Set Context, "Caption=Calculator", ""
        '20
        PushButton Click, "ObjectIndex=11"
        PushButton Click, "ObjectIndex=8"
        '-
        PushButton Click, "ObjectIndex=19"
        '10
        PushButton Click, "ObjectIndex=7"
        PushButton Click, "ObjectIndex=8"
        '=
        PushButton Click, "ObjectIndex=21"
        '10
        Result = LabelVP (CompareProperties, "Text=10.", "VP=Object Properties")
    End Sub

The two scripts at the next level of the hierarchy would then be used to represent the Standard view and the Scientific view available from the View menu. As the following script for the Standard view illustrates, these scripts would contain calls to the scripts we built previously.

    'Test Script Modularity Framework
    'Script for Standard View
    Sub Main
        'Test Add Functionality
        CallScript "Test Script Mod Framework - Add"
        'Test Subtract Functionality
        CallScript "Test Script Mod Framework - Subtract"
        'Test Divide Functionality
        CallScript "Test Script Mod Framework - Divide"
        'Test Multiply Functionality
        CallScript "Test Script Mod Framework - Multiply"
    End Sub

And finally, the topmost script in the hierarchy would be the test case to test the different views of the application.

    'Test Script Modularity Framework
    'Top level script - represents test case
    Sub Main
        'Test the Standard View
        CallScript "Test Script Mod Framework - Standard"
        'Test the Scientific View
        CallScript "Test Script Mod Framework - Scientific"
    End Sub

From this very simple example you can see how this framework yields a high degree of modularization and adds to the overall maintainability of the test suite. If a control gets moved on the Calculator, all you need to change is the bottom-level script that calls that control, not all the test cases that test that control.

The Test Library Architecture Framework

The test library architecture framework is very similar to the test script modularity framework and offers the same advantages, but it divides the application-under-test into procedures and functions instead of scripts. This framework requires the creation of library files (SQABasic libraries, APIs, DLLs, and such) that represent modules, sections, and functions of the application-under-test. These library files are then called directly from the test case script.

To demonstrate the use of this framework, I'll automate the same test case as above but use an SQABasic library. The library contains a function to perform the operations. Following are the header file (.sbh) and the library source file (.sbl).

    'Header File
    'Test Library Architecture Framework
    'Functions Library
    Declare Sub StandardViewFunction BasicLib "Functions Library" (OperandOne As Integer, _
                                                                   OperandTwo As Integer, _
                                                                   Operation As String)

    'Library Source File
    'Test Library Architecture Framework
    'Functions Library
    Sub StandardViewFunction (OperandOne As Integer, _
                              OperandTwo As Integer, _
                              Operation As String)
        'Click on first operand
        Select Case OperandOne
            Case 0
                PushButton Click, "ObjectIndex=8"
            Case 1
                PushButton Click, "ObjectIndex=7"
            Case 2
                PushButton Click, "ObjectIndex=11"
            Case 3
                PushButton Click, "ObjectIndex=15"
            Case 4
                PushButton Click, "ObjectIndex=6"
            Case 5
                PushButton Click, "ObjectIndex=10"
            Case 6
                PushButton Click, "ObjectIndex=14"
            Case 7
                PushButton Click, "ObjectIndex=5"
            Case 8
                PushButton Click, "ObjectIndex=9"
            Case 9
                PushButton Click, "ObjectIndex=13"
        End Select

        'Click on the operation
        Select Case Operation
            Case "+"
                PushButton Click, "ObjectIndex=8"
            Case "-"
                PushButton Click, "ObjectIndex=7"
            Case "*"
                PushButton Click, "ObjectIndex=11"
            Case "/"
                PushButton Click, "ObjectIndex=15"
        End Select

        'Click on second operand
        Select Case OperandTwo
            Case 0
                PushButton Click, "ObjectIndex=8"
            Case 1
                PushButton Click, "ObjectIndex=7"
            Case 2
                PushButton Click, "ObjectIndex=11"
            Case 3
                PushButton Click, "ObjectIndex=15"
            Case 4
                PushButton Click, "ObjectIndex=6"
            Case 5
                PushButton Click, "ObjectIndex=10"
            Case 6
                PushButton Click, "ObjectIndex=14"
            Case 7
                PushButton Click, "ObjectIndex=5"
            Case 8
                PushButton Click, "ObjectIndex=9"
            Case 9
                PushButton Click, "ObjectIndex=13"
        End Select

        '=
        PushButton Click, "ObjectIndex=21"
    End Sub

Using this library, the following test case script can be written.

    'Test Library Architecture Framework
    'Test Case script
    '$Include "Functions Library.sbh"
    Sub Main
        'Test the Standard View
        Window Set Context, "Caption=Calculator", ""

        'Test Add Functionality
        StandardViewFunction 3, 4, "+"
        Result = LabelVP (CompareProperties, "Text=7.", "VP=Add")

        'Test Subtract Functionality
        StandardViewFunction 3, 2, "-"
        Result = LabelVP (CompareProperties, "Text=1.", "VP=Sub")

        'Test Multiply Functionality
        StandardViewFunction 4, 2, "*"
        Result = LabelVP (CompareProperties, "Text=8.", "VP=Mult")

        'Test Divide Functionality
        StandardViewFunction 10, 5, "/"
        Result = LabelVP (CompareProperties, "Text=2.", "VP=Div")
    End Sub

From this example, you can see that this framework also yields a high degree of modularization and adds to the overall maintainability of the test suite. Just as in test script modularity, if a control gets moved on the Calculator, all you need to change is the library file, and all test cases that call that control are updated.
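The library-architecture idea, a single shared function that knows how to drive the application while test cases stay declarative, can also be sketched in Python. This is an illustrative analogy, not SQABasic: `standard_view_function` plays the role of the library routine, and `FakeCalculatorUI` is a hypothetical stand-in for the application-under-test.

```python
# "Library file": a single helper that knows how to drive the application.
# If a control moves, only this function changes.
def standard_view_function(ui, operand_one, operand_two, operation):
    ui.click(str(operand_one))
    ui.click(operation)
    ui.click(str(operand_two))
    ui.click("=")
    return ui.display()

class FakeCalculatorUI:
    """Hypothetical stand-in for the Calculator under test."""
    def __init__(self):
        self.keys = []
    def click(self, key):
        self.keys.append(key)
    def display(self):
        a, op, b, _ = self.keys[-4:]  # last expression entered
        result = {"+": int(a) + int(b), "-": int(a) - int(b),
                  "*": int(a) * int(b), "/": int(a) // int(b)}[op]
        return str(result)

# "Test case script": calls the library instead of clicking controls itself.
ui = FakeCalculatorUI()
results = [standard_view_function(ui, 3, 4, "+"),
           standard_view_function(ui, 3, 2, "-"),
           standard_view_function(ui, 4, 2, "*"),
           standard_view_function(ui, 10, 5, "/")]
print(results)  # ['7', '1', '8', '2']
```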

The Keyword-Driven or Table-Driven Testing Framework

Keyword-driven testing and table-driven testing are interchangeable terms that refer to an application-independent automation framework. This framework requires the development of data tables and keywords, independent of the test automation tool used to execute them and of the test script code that "drives" the application-under-test and the data. Keyword-driven tests look very similar to manual test cases. In a keyword-driven test, the functionality of the application-under-test is documented in a table, as well as in step-by-step instructions for each test. If we were to map out the actions we perform with the mouse when we test our Windows Calculator functions by hand, we could create the following table. The "Window" column contains the name of the application window where we're performing the mouse action (in this case, they all happen to be in the Calculator window). The "Control" column names the type of control the mouse is clicking. The "Action" column lists the action taken with the mouse (or by the tester). And the "Arguments" column names a specific control (1, 2, 3, 5, +, -, and so on).

    Window       Control      Action          Arguments
    Calculator   Menu                         View, Standard
    Calculator   Pushbutton   Click           1
    Calculator   Pushbutton   Click           +
    Calculator   Pushbutton   Click           3
    Calculator   Pushbutton   Click           =
    Calculator                Verify Result   4
    Calculator                Clear
    Calculator   Pushbutton   Click           6
    Calculator   Pushbutton   Click           -
    Calculator   Pushbutton   Click           3
    Calculator   Pushbutton   Click           =
    Calculator                Verify Result   3
This table represents one complete test; more can be made as needed in order to represent a series of tests. Once you've created your data table(s), you simply write a program or a set of scripts that reads in each step, executes the step based on the keyword contained in the Action field, performs error checking, and logs any relevant information. This program or set of scripts would look similar to the pseudocode below:

    Main Script / Program
        Connect to data tables.
        Read in row and parse out values.
        Pass values to appropriate functions.
        Close connection to data tables.

    Menu Module
        Set focus to window.
        Select the menu pad option.
        Return.

    Pushbutton Module
        Set focus to window.
        Push the button based on argument.
        Return.

    Verify Result Module
        Set focus to window.
        Get contents from label.
        Compare contents with argument value.
        Log results.
        Return.

From this example you can see that this framework requires very little code to generate many test cases. The data tables are used to generate the individual test cases while the same code is reused. The IBM Rational toolset can be extended using interactive file reads, queries, or datapools, or you can use other tools (freeware, other development tools, and such) along with IBM Rational tools in order to build this type of framework.
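A minimal driver of this kind can be sketched in a few lines of Python. This is an assumption-laden illustration, not IBM Rational code: `FakeCalculator` is a hypothetical application stub, and only the Pushbutton, Verify Result, and Clear "modules" are implemented.

```python
# Keyword-driven driver sketch: each table row is parsed and dispatched
# to a small handler keyed on the Action column.
def run_keyword_test(table, app):
    log = []
    for window, control, action, argument in table:
        if action == "Click":                 # Pushbutton module
            app.press(argument)
        elif action == "Verify Result":       # Verify Result module
            ok = app.display() == argument
            log.append((window, "PASS" if ok else "FAIL"))
        elif action == "Clear":
            app.clear()
    return log

class FakeCalculator:
    """Hypothetical application-under-test stub."""
    def __init__(self):
        self.expr = ""
    def press(self, key):
        if key == "=":
            self.result = str(eval(self.expr))  # toy evaluation only
        else:
            self.expr += key
    def display(self):
        return self.result
    def clear(self):
        self.expr = ""

table = [
    ("Calculator", "Pushbutton", "Click", "1"),
    ("Calculator", "Pushbutton", "Click", "+"),
    ("Calculator", "Pushbutton", "Click", "3"),
    ("Calculator", "Pushbutton", "Click", "="),
    ("Calculator", "",           "Verify Result", "4"),
    ("Calculator", "",           "Clear", ""),
    ("Calculator", "Pushbutton", "Click", "6"),
    ("Calculator", "Pushbutton", "Click", "-"),
    ("Calculator", "Pushbutton", "Click", "3"),
    ("Calculator", "Pushbutton", "Click", "="),
    ("Calculator", "",           "Verify Result", "3"),
]
log = run_keyword_test(table, FakeCalculator())
print(log)  # [('Calculator', 'PASS'), ('Calculator', 'PASS')]
```

Note that adding a new test means adding rows, not code, which is the point the text makes about generating many test cases from very little script.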

The Data-Driven Testing Framework

Data-driven testing is a framework in which test input and output values are read from data files (datapools, ODBC sources, CSV files, Excel files, DAO objects, ADO objects, and such) and are loaded into variables in captured or manually coded scripts. In this framework, variables are used for both input values and output verification values. Navigation through the program, reading of the data files, and logging of test status and information are all coded in the test script. This is similar to table-driven testing in that the test case is contained in the data file and not in the script; the script is just a "driver," or delivery mechanism, for the data. Unlike in table-driven testing, though, the navigation data isn't contained in the table structure. In data-driven testing, only test data is contained in the data files. The IBM Rational toolset has native data-driven functionality when using the SQABasic language and the IBM Rational datapool structures. To demonstrate the use of this framework, we'll test the order form from the test sample application Classics A (see Figure 2).

Figure 2. Order form from the sample application Classics A

If we record data entry into this window, we get the following:

    'Data Driven Framework
    'Test Case Script
    Sub Main
        'Make An Order
        Window Set Context, "Name=frmOrder", ""

        'Card Number
        EditBox Click, "Name=txtCreditCard", "Coords=16,9"
        InputKeys "3333444455556666"

        'Expiration Date
        EditBox Click, "Name=txtExpirationDate", "Coords=6,7"
        InputKeys "3333444455556666"

        'Place Order
        PushButton Click, "Name=cmdOrder"

        'Confirmation Screen
        Window SetContext, "Name=frmConfirm", ""
        PushButton Click, "Name=cmdOK"
    End Sub

We can use datapools to set up test cases that test valid and invalid credit card numbers and expiration dates. The datapool shown in Figure 3, for example, would be for a test case that would test the date field.

Figure 3. Sample datapool for a test case that would test the date field

If we modify the script to accept this data, we get the following:

    'Data Driven Framework
    'Test Case Script
    '$Include "SQAUTIL.SBH"
    Sub Main
        Dim Result As Integer
        Dim DatapoolHandle As Long
        Dim DatapoolReturnValue As Variant

        'Open the datapool
        DatapoolHandle = SQADatapoolOpen("OrderFormDP")
        '...Add error checking....

        'Loop through the datapool
        While SQADatapoolFetch(DatapoolHandle) = sqaDpSuccess
            'Open Order Form
            Window SetContext, "Name=frmMain", ""
            PushButton Click, "Name=cmdOrder"
            Window SetContext, "Name=frmOrder", ""

            'Card Number
            Result = SQADatapoolValue(DatapoolHandle, "Credit Card Number", DatapoolReturnValue)
            '...Add error checking....
            EditBox Click, "Name=txtCreditCard", "Coords=16,9"
            '...Clear Value....
            InputKeys DatapoolReturnValue

            'Expiration Date
            Result = SQADatapoolValue(DatapoolHandle, "Expiration Date", DatapoolReturnValue)
            '...Add error checking....
            '...Clear Value...
            EditBox Click, "Name=txtExpirationDate", "Coords=6,7"
            InputKeys DatapoolReturnValue

            'Place Order
            Result = SQADatapoolValue(DatapoolHandle, "Order", DatapoolReturnValue)
            If UCase(DatapoolReturnValue) = "YES" Then
                PushButton Click, "Name=cmdOrder"
                'Confirmation Screen
                Window SetContext, "Name=frmConfirm", ""
                PushButton Click, "Name=cmdOK"
            Else
                PushButton Click, "Name=cmdCancel"
            End If
        Wend 'Go fetch next row

        'Close datapool
        Result = SQADatapoolClose(DatapoolHandle)
        '...Add error checking....
    End Sub

I had to add SQABasic commands to manipulate the datapools. I also added a While loop to allow for the processing of each row in the datapool. I should also mention the use of the SQABasic command UCase within the If...Then statement. UCase makes its argument (in this case, the datapool return value) all uppercase. This way the comparisons aren't case sensitive, making the code more robust. This framework tends to reduce the overall number of scripts you need in order to implement all of your test cases, and it offers the greatest flexibility when it comes to developing workarounds for bugs and performing maintenance. Much like table-driven testing, data-driven testing requires very little code to generate many test cases. This framework is very easy to implement using the IBM Rational toolset, and there's a lot of detailed documentation available with how-tos and examples.
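The same fetch-a-row, run-the-logic loop can be sketched generically with a CSV "datapool". This is a toy illustration, not the Rational datapool API: the column names and the `validate_order` rule are invented for the example, and the `.upper()` call mirrors the case-insensitive comparison that UCase provides in SQABasic.

```python
import csv
import io

# Inline stand-in for a datapool file; each row is one test case.
DATAPOOL = """CreditCard,Expiration,Order
3333444455556666,12/25,YES
1111,01/20,NO
"""

def validate_order(card, expiration, place_order):
    """Hypothetical order-form logic standing in for the application."""
    if place_order != "YES":
        return "CANCELLED"
    return "ORDERED" if len(card) == 16 else "REJECTED"

outcomes = []
for row in csv.DictReader(io.StringIO(DATAPOOL)):  # "fetch" each datapool row
    outcomes.append(validate_order(row["CreditCard"],
                                   row["Expiration"],
                                   row["Order"].upper()))  # case-insensitive, like UCase
print(outcomes)  # ['ORDERED', 'CANCELLED']
```

Adding a third test case is a one-line edit to the data, with no change to the driver, which is exactly the economy the text attributes to data-driven testing.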

The Hybrid Test Automation Framework

The most commonly implemented framework is a combination of all of the above techniques, pulling from their strengths and trying to mitigate their weaknesses. This hybrid test automation framework is what most frameworks evolve into over time and multiple projects. Figure 4 gives you an idea of how you could combine the approaches of the different frameworks within the IBM Rational toolset.

Figure 4. A hybrid test automation framework


Roundup

I've described five test automation frameworks that an automated testing team might consider using instead of relying only on a capture tool. You can use just one or a combination of the frameworks. You can implement modularity by nesting test scripts and using the SQABasic library files to implement functions and procedures. You can use datapools to implement whichever data-driven technique you choose, or you can extend Robot to work with other types of data stores. The trick is to use the best framework(s) for the job, and the only way to figure that out is to jump in and start using them.

In J2EE, application modules are packaged as EAR, JAR, and WAR files based on their functionality.

JAR: EJB modules, which contain enterprise java bean class files and an EJB deployment descriptor, are packaged as JAR files with a .jar extension.

WAR: Web modules, which contain Servlet class files, JSP files, supporting files, and GIF and HTML files, are packaged as JAR files with a .war (web archive) extension.

EAR: All of the above files (.jar and .war) are packaged as a JAR file with a .ear (enterprise archive) extension and deployed to an Application Server.

Mobile Ad Serving Mechanics - Admob


One of the most frequent questions that we get from technical folks is: how is mobile ad serving different from online ad serving? There are many things that are different between these two worlds. Each has a different mix of markup languages, different user interface constraints, different browser capabilities, different types of networking infrastructure, protocol quirks, etc. The difference that's often most surprising to engineers, however (especially ones familiar with online ad serving), is the basic mechanism by which ads are requested and delivered. In the online world the browser is most often responsible for both fetching and rendering ads. The content owner modifies the markup of their content to include placeholders for ads. The placeholders are often called tags and take the form of a bit of JavaScript code or a clickable img tag. When the browser loads a document containing ad tags, it executes some JavaScript and/or starts an image load in order to fetch each ad. In the mobile world the browser is most often only responsible for rendering ads. The content owner typically modifies their application servers to request ads from a mobile ad server. The application server copies the returned ad(s) into the appropriate place(s) in the content that it's preparing to send back to the mobile phone. When the phone receives the page that it requested, the document already contains ad markup ready to be rendered. The diagram below illustrates a single mobile request for a page with ads.

1. A user navigates to a new page, causing the mobile device to send a request to the carrier gateway asking it to retrieve the specified content.
2. The carrier gateway forwards the request on to the content owner's application server via HTTP.
3. If the requested content contains an AdMob ad, the content owner's application server bundles up some information about the request context and sends an HTTP ad request to AdMob.
4. AdMob parses the ad request, finds all ads that could possibly match the request, ranks the ads, selects the top-ranked ad, and sends the ad back in the markup language appropriate to the requesting device.
5. The content owner's application server copies the returned ad into the appropriate spot and returns the content + ad(s) to the carrier gateway.

6. The carrier gateway forwards the content to the user's mobile device.

This isn't to say that every mobile ad request uses this request and delivery mechanism. Some mobile ad requests look an awful lot like their online brethren. For instance, AdMob has a special version of our ad request code for the iPhone. The iPhone code is a bit of JavaScript that gets installed within content markup and makes the browser responsible for both fetching and rendering the ad.
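The server-side splice in steps 3 through 5 can be sketched abstractly. This is a conceptual illustration only: the placeholder convention, function names, and returned markup are all invented for the example, and `fetch_ad` stands in for the real HTTP round trip to the ad server.

```python
# Server-to-server ad insertion sketch: the application server fetches the
# ad itself and splices it into the page before anything reaches the phone.
AD_PLACEHOLDER = "<!-- ad -->"

def fetch_ad(request_context):
    """Stand-in for the HTTP ad request to the ad server (steps 3 and 4)."""
    return '<a href="http://example.com/ad">Sample ad</a>'

def render_page(template, request_context):
    ad_markup = fetch_ad(request_context)               # app server -> ad server
    return template.replace(AD_PLACEHOLDER, ad_markup)  # step 5: splice ad in

template = "<html><body><p>Content</p><!-- ad --></body></html>"
page = render_page(template, {"device": "feature-phone"})
print(AD_PLACEHOLDER in page)  # False; the ad is already embedded
```

By the time the page leaves the application server, the placeholder is gone and the browser has nothing left to fetch, which is the contrast with browser-side JavaScript tags.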

In the last post we quickly sketched the basic mechanism by which most mobile content and service providers fetch ads from AdMob using our server-to-server ad request API. We also mentioned that we support ad requests made directly from mobile browsers via JavaScript, a mechanism that should be immediately familiar to those with experience running ads online. We didn't go into details about this second way of requesting AdMob ads, not because we don't love JavaScript ad requests or the more-capable handsets that support them. Quite the contrary. If we had our way, every handset in the world, right now, would be as capable as the iPhone and its brethren.

The big question that we didn't resolve last post is: why do you need a server-to-server ad request API in the first place? A fine question indeed, and one with a multi-faceted answer. Before we dive into the answer, though, let's start by throwing out a non-factor. Our server-to-server ad request API isn't a technological differentiator. Our ad business is all about optimally connecting advertisers and mobile consumers. How the ad gets requested is more or less immaterial. We have a couple of ad request mechanisms now. We will surely add more in the future. For instance, if the mobile ecosystem collectively switched over to browsers whose primary scripting language was Haskell and used VRML for content markup, we would probably ship a Haskell ad request API and VRML-tailored ad units. Proud as we might be of such a request API written for the programming tool of choice of discriminating hackers, it still wouldn't be a technological differentiator.

Now back to the question: why do we currently need a server-to-server API for mobile ad serving? First and foremost, the vast majority of handsets on our mobile network don't support JavaScript. JavaScript-capable handsets are growing in popularity, but the price points of these devices right now place a rate limiter on their worldwide adoption. Within a scant few years, Moore's Law will result in mobile devices that are even more powerful than today's überhandsets and that are at the same time cheap enough for most of the world's population to afford. That promise is one of the reasons we're so excited about mobile. Today, however, we still need a server-to-server ad request mechanism.

Second, but still crucial, today's mobile data networks are nowhere near as fast and reliable as the broadband networks most of us use for our day-to-day Internet activities. The last hop on these networks (the wireless one out to the handset) is typically the slowest and least reliable link. Consequently, we need to use this resource very carefully. With JavaScript ad requests originating from the phone, we essentially have an extra round trip over this last hop. Because of this, ads may load slower or might get dropped altogether. That can in turn slow down rendering of a site's main content or even leave rendered pages in an unreadable state. Neither is an awesome experience for the user. On the other hand, with server-to-server ad requests for text ads, there is no ad request round trip over the wireless hop to the handset. The ad is already bundled with the content payload. Ad retrieval takes place over the Internet, typically over networks whose slowest link is substantially faster than residential broadband. This saves the handset from having to make a DNS lookup and from opening and subsequently closing a TCP connection to the ad server to retrieve the ad. Over a slow wireless network, eliminating these steps can be a huge performance win and results in a superior user experience.

Difference between Priority & Severity Priority: Customer importance, Business oriented, how quick bug should be fixed Severity: Functionality The exact definition of severity is project-specific. Here is one that is reasonable for many projects, however:
y y y y y

Enhancement: New features Low: Improvement to existing code, e.g. performance enhancement, or problems with an easy workaround Normal: Broken or missing functionality High: Problems causing crashes, loss of data, severe performance problems or excessive resource use. Blocker: Problems that prevent testing or development work

High Severity & Low Priority: A severe bug that crashes the software only once in a blue moon for 1% of users is low priority; it may not be fixed because the crash is very infrequent, or because it occurs on a version/platform/feature low on the vendor's support list. If the same bug is found in production, it becomes high priority.
Low Severity & High Priority: A start-up splash screen with your company logo backwards and the name misspelled is purely a cosmetic problem (low severity). However, most companies would treat it as a high-priority bug.
High Priority: a mishandled error condition that forces every user to re-enter a portion of the input every time. We also raise the priority of a bug from low to high if the build release date is near.
Other examples:
* High priority, low severity: spelling the name of the company president wrong
* Low priority, high severity: year-end processing breaks (because it's six more months until year end)
* High priority, high severity: the application won't start
* Low priority, low severity: a spelling error in the documentation; occasionally the screen is slightly misdrawn, requiring a refresh
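The examples above can be captured as a small triage rule. This is a toy sketch, not any standard algorithm: the enum values mirror the severity list earlier in this section, while the thresholds (2% crash frequency) and the `customer_facing` flag are invented for illustration:

```python
from enum import IntEnum

class Severity(IntEnum):
    """Technical impact of the bug (values mirror the list above)."""
    ENHANCEMENT = 0
    LOW = 1
    NORMAL = 2
    HIGH = 3
    BLOCKER = 4

class Priority(IntEnum):
    """Business urgency: how quickly the bug should be fixed."""
    LOW = 0
    NORMAL = 1
    HIGH = 2

def triage(severity, crash_frequency=0.0, customer_facing=False):
    """Toy triage rule: severity alone does not decide priority;
    frequency and business visibility do (thresholds are made up)."""
    if severity >= Severity.HIGH and crash_frequency < 0.02 and not customer_facing:
        return Priority.LOW    # severe but rare: high severity, low priority
    if severity <= Severity.LOW and customer_facing:
        return Priority.HIGH   # cosmetic but embarrassing: low severity, high priority
    return Priority.NORMAL

# The "once in a blue moon" crash vs. the backwards splash screen:
print(triage(Severity.HIGH, crash_frequency=0.01).name)       # LOW
print(triage(Severity.LOW, customer_facing=True).name)        # HIGH
```

The point of the sketch is only that the two axes are independent inputs to a business decision, which is why the same severity can map to different priorities.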
Quality Control vs Quality Assurance: The Difference Between Them
Many people, including some quality professionals, do not know the difference between quality control and quality assurance. The two terms are often used interchangeably, yet they differ in both meaning and purpose. The main points of quality control vs. quality assurance are given here.

Approach
Quality assurance is based on a process approach. It ensures that processes and systems are developed and adhered to in such a way that the deliverables are of good quality. The process is meant to produce defect-free goods or services, which means being right the first time with little or no rework. Quality control is a product-based approach. It checks whether the deliverables satisfy the quality requirements and the specifications of the customers. Depending on the results, quality control personnel take suitable corrective action.

Sequence
One of the major points of quality control vs. QA is that quality assurance is done before starting a project, whereas quality control begins once the product has been manufactured. During the quality assurance process, the requirements of the customers are defined. Based on those requirements, the processes and systems are established and documented. All this is done to ensure that the requirements of the customers are met stringently. After the product is manufactured, the quality control process begins. Based on the customer requirements and the standards developed during quality assurance, quality control personnel check whether the manufactured product meets all those requirements. So quality assurance is a proactive, preventive process that avoids defects, whereas quality control is a corrective process that identifies defects in order to fix them.

Activities
Most quality assurance activities are performed by managers, customers, and third-party auditors. These activities include process documentation, establishing standards, developing checklists, conducting internal and external audits, failure mode and effects analysis, and training. Engineers, inspectors, and supervisors on the shop floor perform quality control activities, which include receiving inspection, in-process inspection, final inspection, etc.

Interdependence
Quality control and quality assurance are largely interdependent.
The quality assurance department relies heavily on feedback from the quality control department. For example, if there are recurrent problems with product quality, the quality control department reports to quality assurance personnel that something in the process or system is causing the quality problems. The quality assurance department then determines the root cause and changes the process to ensure that such issues do not recur. Similarly, the quality control department follows the guidelines and standards established by the quality assurance department to check whether deliverables meet the quality requirements. Hence both departments are essential to maintaining good quality in the deliverables. Although quality control and quality assurance are different processes, their strong interdependence makes it difficult to pinpoint the differences between them; there is a very thin line separating the two functions. Moreover, in some organizations one department performs both, which adds to the confusion between quality control and quality assurance.
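The QC-to-QA feedback loop described above can be sketched in a few lines. Everything here is hypothetical: the spec, the tolerance band, and the 5% defect-rate threshold are invented purely to show QC checking products while QA watches the pattern of failures:

```python
# Hypothetical example: QC checks finished parts against a spec;
# QA reviews the pattern of QC failures to decide whether the
# *process* needs fixing. Spec and thresholds are made up.
SPEC = {"length_mm": (99.5, 100.5)}   # assumed tolerance band

def qc_inspect(part):
    """Quality control: product-based check after manufacturing."""
    lo, hi = SPEC["length_mm"]
    return lo <= part["length_mm"] <= hi

def qa_review(parts, max_defect_rate=0.05):
    """Quality assurance: recurrent QC failures flag the process, not the part."""
    defects = sum(1 for p in parts if not qc_inspect(p))
    rate = defects / len(parts)
    return "investigate process" if rate > max_defect_rate else "process ok"

batch = [{"length_mm": 100.0}, {"length_mm": 101.2}, {"length_mm": 99.9}]
print(qa_review(batch))  # 1 defect in 3 exceeds 5% -> "investigate process"
```

Note the division of labor: `qc_inspect` is corrective (one product at a time, after the fact), while `qa_review` is preventive (it triggers a change to the process itself).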

SMOKE Vs SANITY
I have gathered a few points about the difference between smoke and sanity testing from the responses of two software testing groups. I have added the points below.

However, my experience of executing smoke and sanity testing has been the following:
Smoke test: When a build is received, a smoke test is run to ascertain whether the build is stable and can be considered for further testing. Smoke testing can be done to test the stability of any interim build, and can be executed for platform qualification tests.
Sanity testing: Once a new build with minor revisions is obtained, instead of doing a thorough regression, a sanity test is performed to ascertain that the build has indeed fixed the reported issues and that no further issues have been introduced by the fixes. It is generally a subset of regression testing: a group of test cases related to the changes made to the app is executed. Generally, when multiple cycles of testing are executed, sanity testing may be done during the later cycles, after thorough regression cycles.


Smoke
Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach in which all areas of the application are tested without going too deep. A smoke test is scripted, either as a written set of tests or as an automated test. A smoke test is designed to touch every part of the application in a cursory way: it is shallow and wide. Smoke testing is conducted to ensure that the most crucial functions of a program work, without bothering with finer details (as in build verification).

Sanity
A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.

A sanity test is usually unscripted. It is used to determine whether a small section of the application still works after a minor change. Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing. Some also describe sanity testing as verifying whether the requirements are met, checking features at a high level.

Smoke testing is a normal health check-up of a build of an application before taking it into in-depth testing.
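In practice, teams often implement the smoke/sanity split by tagging test cases and running a subset per build. The sketch below is a minimal, hypothetical stdlib-only registry (the tests and tags are invented; real projects would typically use their test framework's marker/tag feature instead):

```python
# Minimal sketch of tag-based test selection: a wide-but-shallow "smoke"
# subset or a narrow "sanity" subset can be run instead of the full suite.
# The registered tests are placeholders for a hypothetical application.
REGISTRY = []

def tagged(*tags):
    """Decorator that registers a test function under one or more tags."""
    def wrap(fn):
        REGISTRY.append((fn, set(tags)))
        return fn
    return wrap

@tagged("smoke")
def test_app_starts():
    assert True  # cursory check: the build launches at all

@tagged("smoke", "sanity")
def test_login():
    assert 2 + 2 == 4  # placeholder for a shallow login check

@tagged("sanity")
def test_login_fix_regression():
    assert "user".upper() == "USER"  # deeper check around the area just changed

def run(tag):
    """Execute every registered test carrying the given tag; return their names."""
    selected = [fn for fn, tags in REGISTRY if tag in tags]
    for fn in selected:
        fn()
    return [fn.__name__ for fn in selected]

print(run("smoke"))   # ['test_app_starts', 'test_login']
print(run("sanity"))  # ['test_login', 'test_login_fix_regression']
```

The smoke subset touches every area shallowly on each new build, while the sanity subset digs into just the areas affected by the latest fixes, mirroring the shallow-and-wide vs. narrow-and-deep distinction above.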

Types of testing
* Unit testing
* Functional testing
* System testing
* Integration testing
* Database testing
* GUI testing
* Compatibility testing
* Performance testing
* Load testing
* Stress testing
* Scalability testing
* Volume testing
* Security testing
* Protocol testing
* Recovery testing
* Cross-browser testing
* Ad hoc testing
* Mutation testing
* Localization testing
* Cluster testing
* Smoke testing
* Sanity testing
* Regression testing
* Alpha testing
* Beta testing
* User acceptance testing
* White-box testing
* Black-box testing
* Client/server testing
* Web services testing
