
Definitions of Software Testing

Software testing is the process of creating, implementing and evaluating tests. Testing measures software quality: it can find faults, and when those faults are removed, software quality improves. Testing is executing a program with the intent of finding errors, faults and failures. IEEE terminology: an examination of the behavior of a program by executing it on sample data sets.

Why is Software Testing Important?


1. To discover defects.
2. To avoid users detecting problems.
3. To build confidence in the quality of the software (testing can never prove the complete absence of faults).
4. To learn about the reliability of the software.
5. To avoid being sued by customers.
6. To ensure that the product works as the user expects.
7. To stay in business.
8. To detect defects early, which helps reduce the cost of fixing them.

Why start testing Early?


Introduction: You have probably heard, and read in blogs, that "testing should start early in the life cycle of development". In this chapter we will discuss, very practically, why testing should start early.

Fact one: Let's start with the regular software development life cycle:

When the project is planned

First we've got a planning phase: needs are expressed, people are contacted, meetings are booked. Then the decision is made: we are going to do this project. After that, analysis is done, followed by code and build. Only now is it your turn: you can start testing.

Do you think this is what is going to happen? Dream on. This is what's going to happen:

This is what actually happens when the project executes:


Planning, analysis and code build will take more time than planned. That would not be a problem if the total project time were prolonged accordingly. Forget it: most likely you will have to perform the tests in a few days. The deadline is not going to be moved at all: promises have been made to customers, and project managers will lose their bonuses if they deliver past the deadline.

Fact Two:
The earlier you find a bug, the cheaper it is to fix it.

Price of Buggy Code

If you find a bug during requirements determination, fixing it is about 50 times cheaper (!) than finding the same bug in testing, and about 100 times cheaper (!) than finding it after going live. This is easy to understand: if you find the bug in the requirements definitions, all you have to do is change the text of the requirements. If you find the same bug in final testing, analysis and code build have already taken place, and much more effort has gone into building something that nobody wanted. Conclusion: start testing early! This is what you should do:

Testing should be planned for each phase


- Make testing part of each phase in the software life cycle.
- Start test planning the moment the project starts.
- Start finding bugs the moment the requirements are defined.
- Keep doing that during the analysis and design phases.
- Make sure testing becomes part of the development process.
- Make sure all test preparation is done before you start final testing. If you only start then, your testing is going to be crap!

Test Design Techniques:


- Black Box Testing
- White Box Testing (including its approaches)
- Gray Box Testing

Black Box Testing


Black box testing tests the correctness of the functionality with the help of inputs and outputs. The user does not require knowledge of the software code. Black box testing is also called functionality testing.

It attempts to find errors in the following categories:


- Incorrect or missing functions.
- Interface errors.
- Errors in data structures or external database access.
- Behavior- or performance-based errors.
- Initialization or termination errors.
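As a sketch of the black-box approach, the hypothetical function below is tested purely through inputs and expected outputs, with no reference to its internal logic. The discount rule and the function name are assumptions for illustration, not from the text:

```python
# Black-box sketch: test a hypothetical discount function only via its
# inputs and outputs -- no knowledge of its internal code is assumed.

def apply_discount(order_total):
    """System under test (a stand-in implementation for illustration)."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return order_total * 0.9 if order_total >= 100 else order_total

# Each case pairs an input with the output the specification promises.
cases = [
    (50, 50),       # below threshold: no discount
    (100, 90.0),    # at threshold: 10% off
    (200, 180.0),   # above threshold: 10% off
]

for order_total, expected in cases:
    actual = apply_discount(order_total)
    assert actual == expected, f"input {order_total}: got {actual}"

# Invalid input should be rejected (initialization/termination category).
try:
    apply_discount(-1)
    assert False, "negative total should have been rejected"
except ValueError:
    pass
```

Note that every check above compares observed output to specified output; nothing depends on how `apply_discount` is written inside.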

White Box Testing:


White box testing tests the internal program logic. It is also called structural testing. The user does require knowledge of the software code.

Purpose

- Testing all loops
- Testing basis paths
- Testing conditional statements
- Testing data structures
- Testing logic errors
- Testing incorrect assumptions

Structure = 1 entry + 1 exit, with certain constraints, conditions and loops. Logic errors and incorrect assumptions are most likely to be made while coding for special cases, so we need to ensure these execution paths are tested.

Approaches / Methods / Techniques for White Box Testing:

Basis Path Testing (Cyclomatic Complexity, the McCabe method)

- Measures the logical complexity of a procedural design.
- Provides a flow-graph notation to identify independent paths of processing.
- Once paths are identified, tests can be developed for loops and conditions.
- The process guarantees that every statement will be executed at least once.
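The McCabe metric is V(G) = E - N + 2 for a flow graph with E edges and N nodes, and it equals the number of binary decisions plus one. A sketch for a hypothetical flow graph of a function with one if and one while:

```python
# Cyclomatic complexity from a flow graph: V(G) = E - N + 2.
# The graph below is hypothetical: a function with one 'if' and one 'while'.

edges = [
    ("start", "if"),     # entry into the decision
    ("if", "then"),      # condition true
    ("if", "while"),     # condition false
    ("then", "while"),
    ("while", "body"),   # loop taken
    ("body", "while"),   # back edge
    ("while", "end"),    # loop exit
]
nodes = {n for edge in edges for n in edge}

E, N = len(edges), len(nodes)
v_g = E - N + 2
print(f"E={E}, N={N}, V(G)={v_g}")  # E=7, N=6, V(G)=3

# V(G) also equals the number of binary decisions plus one:
assert v_g == 2 + 1  # one 'if' + one 'while', plus 1
```

V(G) = 3 means three linearly independent paths exist, so a basis-path test set for this graph needs three test cases.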

Structure Testing:

- Condition testing: all logical conditions contained in the program module should be tested.
- Data flow testing: selects test paths according to the location of definitions and uses of variables.
- Loop testing:
  o Simple loops
  o Nested loops
  o Concatenated loops
  o Unstructured loops
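Simple-loop testing can be sketched as exercising a loop at zero, one, a typical number, and the maximum number of iterations, plus one past the maximum. The summing function below is a hypothetical example, not from the text:

```python
# Sketch of simple-loop testing: drive a loop through its iteration
# boundaries. sum_first is a hypothetical loop under test.

def sum_first(values, n):
    """Sum the first n items of values (the loop under test)."""
    total = 0
    for i in range(n):
        total += values[i]
    return total

data = [10, 20, 30, 40, 50]

assert sum_first(data, 0) == 0            # skip the loop entirely
assert sum_first(data, 1) == 10           # exactly one pass
assert sum_first(data, 3) == 60           # a typical number of passes
assert sum_first(data, len(data)) == 150  # the maximum number of passes

# One past the maximum should fail loudly, not corrupt data silently.
try:
    sum_first(data, len(data) + 1)
    assert False, "expected an IndexError past the maximum"
except IndexError:
    pass
```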

Gray Box Testing

- It is a combination of both black box and white box testing.
- It is platform independent and language independent.
- It is used to test embedded systems.
- Functionality and behavioral parts are tested.
- The tester should have knowledge of both the internals and externals of the function.
- If you know something about how the product works on the inside, you can test it better from the outside.

Gray box testing is especially important with Web and Internet applications, because the Internet is built around loosely integrated components that connect via relatively well-defined interfaces. Unless you understand the architecture of the Net, your testing will be skin deep.


Test Design Techniques


The purpose of test design techniques is to identify test conditions and test scenarios from which effective and efficient test cases can be written. Using test design techniques is a far better approach than picking test cases out of the air, and it helps achieve high test coverage. In this post, we will discuss the following:

1. Black box test design techniques
   - Specification based
   - Experience based

2. White box (structural) test design techniques

Black box testing techniques

These include specification-based and experience-based techniques. They use external descriptions of the software, including specifications, requirements and design, to derive test cases. These tests can be functional or non-functional, though usually functional. The tester does not need any knowledge of the internal structure or code of the software under test.

Specification-based techniques:
- Equivalence partitioning
- Boundary value analysis
- Use case testing
- Decision tables
- Cause-effect graphing
- State transition testing
- Classification tree method
- Pair-wise testing
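As a sketch, here is how the first two techniques derive test inputs for a hypothetical "age must be between 18 and 60" rule (the rule and the function name are assumptions for illustration):

```python
# Equivalence partitioning and boundary value analysis for a
# hypothetical rule: age is valid when 18 <= age <= 60.

LOW, HIGH = 18, 60

def is_valid_age(age):
    """Specification under test (stand-in implementation)."""
    return LOW <= age <= HIGH

# Equivalence partitions: one representative value per class is enough,
# because every member of a class should behave the same way.
partitions = {
    "below range (invalid)": 10,
    "in range (valid)":      35,
    "above range (invalid)": 70,
}

# Boundary values: the edges of each partition, where defects cluster.
boundaries = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

for label, value in partitions.items():
    print(label, value, is_valid_age(value))

assert [is_valid_age(b) for b in boundaries] == [
    False, True, True, True, True, False
]
```

Three partition representatives plus six boundary values give nine strong test inputs instead of testing all possible ages.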

From the ISTQB syllabus, common features of specification-based techniques: models, either formal or informal, are used for the specification of the problem to be solved, the software or its components. From these models, test cases can be derived systematically.

Experience-based techniques:

- Error guessing
- Exploratory testing

Read the Unscripted Testing Techniques/Approaches section for both of the above. From the ISTQB syllabus, common features of experience-based techniques: the knowledge and experience of people are used to derive the test cases; that is, the knowledge of testers, developers, users and other stakeholders about the software, its usage and its environment, and knowledge about likely defects and their distribution.

White-box techniques
Also referred to as structure-based techniques. These are based on the internal structure of the component; the tester must have knowledge of the internal structure or code of the software under test. Structural or structure-based techniques include:
- Statement testing
- Condition testing
- LCSAJ testing (Linear Code Sequence And Jump)
- Path testing
- Decision/branch testing

From the ISTQB syllabus, common features of structure-based techniques: information about how the software is constructed (for example, code and design) is used to derive the test cases. The extent of coverage of the software can be measured for existing test cases, and further test cases can be derived systematically to increase coverage.

Art of Test case writing:


Objective and importance of a test case:
- The basic objective of writing test cases is to ensure complete test coverage of the application. The most extensive effort in preparing to test software is writing test cases.
- They give better reliability in estimating the test effort.
- They improve productivity during test execution by reducing the understanding time during execution.
- Writing effective test cases is a skill, achieved by experience and in-depth study of the application for which the test cases are being written.
- Documenting the test cases prior to test execution ensures that the tester does the homework and is prepared for the attack on the application under test.
- Breaking down the test requirements into test scenarios and test cases helps testers avoid missing certain test conditions.

What is a test case? It is the smallest unit of testing. A test case is a detailed procedure that fully tests a feature or an aspect of a feature. Whereas the test plan describes what to test, a test case describes how to perform a particular test. A test case has components that describe an input, an action or event, and an expected response, to determine whether a feature of an application is working correctly. Test cases must be written by a team member who thoroughly understands the function being tested.

Elements of a test case. Every test case must have the following details (the anatomy of a test case):
- Test case ID
- Requirement # / section
- Objective: what is to be verified?
- Assumptions and prerequisites
- Steps to be executed
- Test data (if any): variables and their values
- Expected result
- Status: Pass or Fail, with details on the defect ID and proof (output files, screenshots; optional)
- Comments
Any CMMi company will have defined templates and standards to be adhered to while writing test cases.
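The anatomy above can be captured as a structured record. A sketch, with field names following the elements in the text and all values hypothetical:

```python
# A test case as a structured record. Field names mirror the anatomy
# described in the text; the sample values are hypothetical.

from dataclasses import dataclass

@dataclass
class TestCase:
    test_case_id: str
    requirement: str
    objective: str
    prerequisites: list
    steps: list
    test_data: dict
    expected_result: str
    status: str = "Not Run"   # becomes Pass/Fail after execution
    comments: str = ""

tc = TestCase(
    test_case_id="TC-001",
    requirement="REQ-4.2 / Login section",
    objective="Verify login with a valid username and password",
    prerequisites=["User account exists", "Application is reachable"],
    steps=["Navigate to the login page",
           "Enter the username and password",
           "Click on OK button"],
    test_data={"username": "demo_user", "password": "demo_pass"},
    expected_result="The application displays the account summary page",
)

assert tc.status == "Not Run"
print(tc.test_case_id, "-", tc.objective)
```

Keeping test cases in a uniform structure like this is what makes company-wide templates and tool imports practical.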

Language to be used in test cases:
1. Use simple and easy-to-understand language.
2. Use the active voice while writing test cases. For example:
   - Click on OK button
   - Enter the data in screen1
   - Choose option1
   - Navigate to the account summary page
3. Use words like Verify / Validate to start a sentence in a test case description (especially for checking the GUI). For example:
   - Validate the fields available in the _________ screen/tab.
4. Use words like is/are and the present tense for expected results. For example:
   - The application displays the account information screen.
   - An error message is displayed on entering special characters.

Fault, Error & Failure:


Fault: a condition that causes the software to fail to perform its required function.

Error: the difference between the actual output and the expected output.

Failure: the inability of a system or component to perform a required function according to its specification.

IEEE definitions:
- Failure: external behavior is incorrect.
- Fault: a discrepancy in code that causes a failure.
- Error: the human mistake that caused the fault.

Note: "error" is the developer's terminology; "bug" is the tester's.

Unscripted Testing Techniques/Approaches

Error Guessing

Why can one tester find more errors than another tester in the same piece of software? More often than not this is down to a technique called error guessing. To be successful at error guessing, a certain level of knowledge and experience is required. A tester can then make an educated guess at where potential problems may arise. This could be based on the tester's experience with a previous iteration of the software, or simply a level of knowledge in that area of technology.

This test case design technique can be very effective at pinpointing potential problem areas in software. It is often applied by creating a list of potential problem areas/scenarios, then producing a set of test cases from it. This approach can often find errors that would otherwise be missed by a more structured testing approach.

As an example of the error guessing method, imagine you had a software program that accepted a ten-digit customer code and was designed to accept only numerical data. Here are some test case ideas that could be considered error guessing:
1. Input of a blank entry
2. Input of more than ten digits
3. Input of a mixture of numbers and letters
4. Input of identical customer codes

What we are effectively trying to do when designing error guessing test cases is to think about what could have been missed during the software design. This approach should only be used to complement an existing formal test method, and should not be used on its own, as it cannot be considered a complete form of testing software.
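The four error-guessing ideas for the ten-digit customer code can be sketched as executable checks. The validator below is a hypothetical stand-in, not a real API:

```python
# Error-guessing checks for a hypothetical ten-digit, numeric-only
# customer code validator.

def is_valid_customer_code(code, existing_codes=()):
    """Accept exactly ten digits; reject blanks, letters, duplicates."""
    return (len(code) == 10
            and code.isdigit()
            and code not in existing_codes)

existing = {"1234567890"}

assert not is_valid_customer_code("")                      # 1. blank entry
assert not is_valid_customer_code("12345678901")           # 2. more than ten digits
assert not is_valid_customer_code("12345abcde")            # 3. digits mixed with letters
assert not is_valid_customer_code("1234567890", existing)  # 4. duplicate code

assert is_valid_customer_code("0987654321", existing)      # sanity: a valid code passes
```

Each assertion targets a gap a designer might plausibly have missed, which is exactly the guess an experienced tester is making.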

Exploratory Testing: This type of testing is normally governed by time. It consists of using tests based on a test charter that contains test objectives. It is most effective when there are few or no specifications available. It should only really be used to assist with, or complement, a more formal approach. It can basically ensure that major functionality is working as expected without fully testing it.

Ad-hoc Testing: This type of testing is considered to be the most informal, and by many the least effective. Ad-hoc testing is simply making up the tests as you go along. Often it is used when there is only a very small amount of time to test something. A common mistake in ad-hoc testing is not documenting the tests performed and the test results; even when this information is included, more often than not additional information is not logged, such as software versions, dates, test environment details, etc. Ad-hoc testing should only be used as a last resort, but if careful consideration is given to its usage it can prove beneficial. If you have a very small window in which to test something, consider the following points:
1. Take some time to think about what you want to achieve.
2. Prioritize functional areas to test if under a strict amount of testing time.
3. Allocate time to each functional area when you want to test the whole item.
4. Log as much detail as possible about the item under test and its environment.
5. Log as much as possible about the tests and the results.

Random Testing: A tester normally selects test input data from what is termed an input domain in a structured manner. Random testing is simply when the tester selects data from the input domain randomly. For random testing to be effective, there are some important open questions to consider:
1. Is random data sufficient to prove the module meets its specification when tested?
2. Should random data only come from within the input domain?
3. How many values should be tested?
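A minimal sketch of the random approach, reusing the hypothetical ten-digit customer code idea from the error guessing section (the validator is an assumption, not a real API):

```python
# Random testing sketch: draw inputs randomly from the input domain of a
# hypothetical ten-digit, numeric-only customer code.

import random

def is_valid_customer_code(code):
    """Stand-in validator: exactly ten digits."""
    return len(code) == 10 and code.isdigit()

random.seed(42)  # fixed seed so a failing run can be reproduced

for _ in range(1000):
    # A random value from inside the valid input domain, zero-padded.
    code = str(random.randint(0, 10**10 - 1)).zfill(10)
    assert is_valid_customer_code(code)

# Note: drawing only from inside the input domain never exercises invalid
# input -- which is exactly open question 2 above, and one reason random
# testing alone is rarely sufficient.
```

Seeding the generator is the practical answer to reproducibility: without it, a random failure may be impossible to investigate.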

As you can tell, there is little structure involved in Random Testing. In order to avoid dealing with the above questions, a more structured Black-box Test Design could be implemented instead. However, using a random approach could save valuable time and resources if used in the right circumstances. There has been much debate over the effectiveness of using random testing techniques over some of the more structured techniques. Most experts agree that using random test data provides little chance of producing an effective test. There are many tools available today that are capable of selecting random test data from a specified data value range. This approach is especially useful when it comes to tests associated at the system level. You often find in the real world that Random Testing is used in association with other structured techniques to provide a compromise between targeted testing and testing everything.

V-model is the basis of structured testing


You will find out this is a great model!


The left side shows the classic software life cycle; the right side shows the verification and validation for each phase.

Analyze user requirements: End users express their wish for a solution to one or more problems they have. In testing, you have to start preparing your user tests at this moment! You should hold test preparation sessions with your acceptance testers. Ask them what cases they want to test. It might help you find good test cases if you interview end users about the everyday cases they work on. Ask them about the difficulties they meet in their everyday work now.

Give feedback about the results of this preparation (hand over the list of real-life cases and the questions) to the analyst team. Or, even better, invite the analyst team to the test preparation sessions; they will learn a lot!

System requirements: One or more analysts interview end users and other parties to find out what is really wanted. They write down what they found, and usually this is reviewed by the development/technical team, end users and third parties. In testing, you can start now by breaking the analysis down into 'features to test'. A 'feature to test' can have only two answers: 'pass' or 'fail'. One analysis document will yield a number of features to test; later this will be extremely useful in your quality reporting! Look for inconsistencies and things you don't understand in the analysis documents. There's a good chance that if you don't understand it, neither will the developers. Feed your questions and remarks back to the analyst team. This is a second review delivered by testing, in order to find bugs as early as possible!

Let's discuss the left side of the V-model:
- Global and detailed design: Development translates the analysis documents into a technical design.
- Code / build: Developers program the application and build it.
Note: in the classic waterfall software life cycle, testing would come at the end of the life cycle. The V-model is a little different: we have already added some testing review to it.

The right side shows the different testing levels:
- Component and component integration testing: These are the tests development performs to make sure that all the issues of the technical and functional analysis are implemented properly.
- Component testing (unit testing): Every time a developer finishes a part of the application, he should test it to see whether it works properly.
- Component integration testing: Once a set of application parts is finished, a member of the development team should test whether the different parts do what they have to do. Once these tests pass successfully, system testing can start.

- System and system integration testing: At this testing level we check whether the features to test, distilled from the analysis documents, are realised properly. Best results are achieved when these tests are performed by professional testers.
- System testing: At this level each part (use case, screen description) is tested separately.
- System integration testing: Different parts of the application are now tested together to examine the quality of the application. This is an important (but sometimes difficult) step. Typical things to test: navigation between different screens, background processes started from one screen producing a certain output (a PDF, a database update), consistency in the GUI, and so on. System integration testing also involves testing the interfaces with other systems. For example, if you have a web shop, you will probably have to test whether the integrated online payment service works. These interface tests are usually not easy to realise, because you will have to make arrangements with parties outside the project group.
- Acceptance testing: Here real users (the people who will have to work with it) validate whether this application is what they really wanted. This comic explains why end users need to accept the application:

This is what the client actually needed :-(

During the project, a lot of interpretation has to be done. The analyst team has to translate the wishes of the customer into text. Development has to translate these into program code. Testers have to interpret the analysis to make the features-to-test list. Tell somebody a phrase. Make him tell this phrase to another person, and this person to another one... Do this 20 times, and you'll be surprised how much the phrase has changed!

This is exactly the same phenomenon you see in software development! Let the end users test the application with the real cases you listed in the test preparation sessions. Ask them to use real-life cases! And, instead of getting angry, listen when they tell you that the application is not doing what it should do. They are the people who will suffer the application's shortcomings for the next couple of years. They are your customer!

V Model to W Model | W Model in SDLC Simplified


We have already discussed that the V-model is the basis of structured testing. However, there are a few problems with the V-model. The V-model represents a one-to-one relationship between the documents on the left-hand side and the test activities on the right. This is not always correct: system testing depends not only on the functional requirements but also on the technical design and architecture. In addition, a couple of testing activities are not explained in the V-model at all. This is a major omission: the V-model does not support the broader view of testing as a continuous activity throughout the software development life cycle. Paul Herzlich introduced the W-model. The W-model covers those testing activities that are skipped in the V-model, and it illustrates that testing starts from day one of project initiation. In the picture below, the first V shows all the phases of the SDLC and the second V validates each phase. In the first V, every activity is shadowed by a test activity, whose specific purpose is to determine whether the objectives of that activity have been met and the deliverable meets its requirements. The W-model presents a standard development life cycle with every development stage mirrored by a test activity: on the left-hand side, the deliverable of a development activity (for example, writing requirements) is accompanied by a test activity (testing the requirements), and so on.

Fig 1: W Model

Fig 2: Each phase is verified/validated. The dotted arrows show that every phase in brown is validated/tested by a phase in sky blue. Now, in the above figure:

- Point 1 refers to building the test plan and test strategy.
- Point 2 refers to scenario identification.
- Points 3 and 4 refer to test case preparation from the specification document and design documents.
- Point 5 refers to review of test cases and updating them as per the review comments.

As you can see, the above five points cover static testing.

Point 6 refers to the various dynamic testing methodologies (unit/integration testing, path testing, equivalence partitioning, boundary value analysis, specification-based testing, security testing, usability testing, performance testing). After this, there are regression test cycles and then user acceptance testing.

Conclusion: the V-model only shows the dynamic test cycles, but the W-model gives a broader view of testing. The connection between the various test stages and the basis for each test is clear in the W-model (which is not the case in the V-model).

The Testing Mindset:


A professional tester approaches a product with the mindset that the product is already broken: it has bugs, and it is their job to find them. They assume the application under test is inherently defective, and their job is to illuminate the defects. This approach is essential in testing. Designers and developers approach software with an optimism based on the assumption that the changes they make are the correct solution to a particular problem. But they are just that: assumptions. Without being proved, they are no more correct than guesses. Developers often overlook fundamental ambiguities in specification documents in order to complete the project, or fail to identify them when they see them. Those ambiguities are then built into the code and represent a bug when compared to the end user's needs. By taking a skeptical approach, the tester offers a balance. A good professional tester:

- Takes nothing at face value.
- Always asks the question "why?"
- Seeks to drive out certainty where there is none.
- Seeks to illuminate the darker parts of the project with the light of inquiry.

Sometimes this attitude can cause friction with the development team. But developers can be testers too! If they can adopt this state of mind for a certain portion of the project, they can deliver excellent quality and reduce the cost of the project. Identifying the need for the testing mindset is the first step towards a successful test approach and strategy.

Must read: Testing Isn't About Learning. It Is About Thinking.

Practical interview questions on Software Testing - Part 1


1. On what basis do we assign priority and severity to a bug? Give an example of high priority with low severity, and of high severity with low priority.

Priority is usually assigned by the team leader or business analyst; severity is assigned by the reporter of the bug. For example: high severity - hardware bugs, application crashes; low severity - user interface bugs; high priority - an error message not appearing on time, calculation bugs; low priority - wrong alignment, etc.

2. What do you mean by reproducing a bug? If the bug is not reproducible, what is the next step?

If you find a defect - for example, you click a button and the corresponding action does not happen - it is a bug. If the developer is unable to observe this behaviour, he will ask us to reproduce the bug. In another scenario, if the client complains about a defect in production, we will have to reproduce it in the test environment. If the bug is not reproducible by the developer, it is assigned back to the reporter, or a meeting (formal or informal, such as a walkthrough) is arranged in order to reproduce it. Sometimes bugs are inconsistent; in that case we can mark them as inconsistent and temporarily close them with the status "working fine now".

3. What is the responsibility of a tester when a bug arrives at the time of testing? Explain.

First check the status of the bug, then check whether the bug is valid or not, then forward the bug to the team leader, and after confirmation forward it to the developer concerned. If we cannot reproduce it, it is not reproducible, in which case we do further testing around it; if we still cannot see it, we close it and hope it never comes back.

4. How can we design test cases from requirements? Do the requirements represent the exact functionality of the AUT?

Of course, the requirements should represent the exact functionality of the AUT. First of all, analyze the requirements very thoroughly in terms of functionality. Then choose a suitable test case design technique (black box design techniques such as specification-based test cases, functional test cases, equivalence class partitioning (ECP), boundary value analysis (BVA), error guessing and cause-effect graphing) for writing the test cases. With these concepts you should design test cases that are capable of finding defects. Read: Art of Test case writing.

5. How do you launch test cases in Quality Center (TestDirector), and where are they saved?

You create the test cases in the Test Plan tab and link them to the requirements in the Requirements tab. Once the test cases are ready, change their status to Ready, go to the Test Lab tab, create a test set, add the test cases to it, and run them from there. For automation, in Test Plan create a new automated test, launch the tool, create the script, save it, and run it from the Test Lab the same way as the manual test cases. The test cases are stored via the Test Plan tab, or more precisely in TestDirector's database (TestDirector is now referred to as Quality Center).

6. How is the traceability of a bug followed?

The traceability of a bug can be followed in several ways:
1. Mapping the functional requirement scenarios (FS doc) - test case IDs - failed test cases (bugs).
2. Mapping between requirements (RS doc) - test case IDs - failed test cases.
3. Mapping between the test plan (TP doc) - test case IDs - failed test cases.
4. Mapping between business requirements (BR doc) - test case IDs - failed test cases.
5. Mapping between high-level design (design doc) - test case IDs - failed test cases.
Usually the traceability matrix maps between the requirements, client requirements, functional specification, test plan and test cases.

7. What is the difference between a use case, a test case and a test plan?

Use case: prepared by the business analyst in the Functional Requirement Specification (FRS); it captures the steps given by the customer. Test case: prepared by the test engineer based on the use cases from the FRS to check the functionality of an application thoroughly. Test plan: the team lead prepares the test plan; in it he defines the scope of the test, what to test and what not to test, the schedule, what to test using automation, etc.
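The document-to-test-case-to-bug mappings that make up a traceability matrix can be sketched as a small nested mapping (all IDs below are hypothetical):

```python
# A minimal traceability matrix: requirement IDs -> test case IDs -> results.
# All identifiers are hypothetical, for illustration only.

traceability = {
    "REQ-001": {"TC-001": "Pass", "TC-002": "Fail"},
    "REQ-002": {"TC-003": "Pass"},
    "REQ-003": {},  # no test cases yet: a coverage gap
}

# Failed test cases trace back to the requirement they break.
open_bugs = {req: [tc for tc, result in cases.items() if result == "Fail"]
             for req, cases in traceability.items()}

# Requirements with no test cases are uncovered.
uncovered = [req for req, cases in traceability.items() if not cases]

assert open_bugs["REQ-001"] == ["TC-002"]
assert uncovered == ["REQ-003"]
```

Walking the matrix in either direction answers both traceability questions: which requirement a failing test breaks, and which requirements have no tests at all.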

Concept of Complete Testing | Exhaustive testing is impossible:


It is not unusual to find people making claims such as "I have exhaustively tested the program." Complete, or exhaustive, testing means there are no undiscovered faults at the end of the test phase: all problems must be known at the end of complete testing. For most systems, complete testing is near impossible, for the following reasons:

- The domain of possible inputs of a program is too large to be completely used in testing a system. There are both valid inputs and invalid inputs, and the program may have a large number of states. There may be timing constraints on the inputs; that is, an input may be valid at a certain time and invalid at other times. An input value which is valid but not properly timed is called an inopportune input.
- The design issues may be too complex to test completely. The design may include implicit design decisions and assumptions; for example, a programmer may use a global variable or a static variable to control program execution.
- It may not be possible to create all possible execution environments of the system. This becomes more significant when the behaviour of the software system depends on the real, outside world, such as weather, temperature, altitude, pressure, and so on.
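The size of the input domain is easy to underestimate. A back-of-the-envelope sketch for a single hypothetical ten-character field shows why exhaustive testing is impractical:

```python
# Why the input domain is too large to test completely: a single field of
# ten case-sensitive alphanumeric characters already allows 62^10 inputs.

alphabet = 26 + 26 + 10          # a-z, A-Z, 0-9
field_length = 10
combinations = alphabet ** field_length

print(f"{combinations:,} possible inputs")  # 839,299,365,868,340,224

# Even at a (generous) million tests per second, exhausting this one
# field alone would take tens of thousands of years:
seconds = combinations / 1_000_000
years = seconds / (60 * 60 * 24 * 365)
print(f"about {years:,.0f} years")  # about 26,614 years
```

And this covers only the valid characters of one field: invalid characters, other fields, program states, and input timing multiply the domain further.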

Testing Limitations:

- You cannot test a program completely.
- We can only test against the system requirements:

- May not detect errors in the requirements. - Incomplete or ambiguous requirements may lead to inadequate or incorrect testing.

- Exhaustive (total) testing is impossible in the present scenario.
- Time and budget constraints normally require very careful planning of the testing effort: a compromise between thoroughness and budget.
- Test results are used to make business decisions about release dates.
- Even if you do find the last bug, you'll never know it.
- You will run out of time before you run out of test cases.
- You cannot test every path, every valid input, or every invalid input.

How and When Testing Starts:


For the betterment, reliability and performance of an Information System, it is always better to involve the Testing team right from the beginning of the Requirement Analysis phase. The active involvement of the testing team gives the testers a clear vision of the functionality of the system, from which we can expect a better-quality, error-free product.

Once the Development Team-lead analyzes the requirements, he prepares the System Requirement Specification and the Requirement Traceability Matrix. After that he schedules a meeting with the Testing Team (the Test Lead and the Testers chosen for that project) and explains the Project, the total schedule of modules, Deliverables and Versions. The involvement of the Testing team starts from here. The Test Lead prepares the Test Strategy and the Test Plan, which is the schedule for the entire testing process. Here he plans when each phase of testing (Unit Testing, Integration Testing, System Testing, User Acceptance Testing) will take place.

Generally, organizations follow the V Model for their development and testing. After analyzing the requirements, the Development Team prepares the System Requirement Specification, Requirement Traceability Matrix, Software Project Plan, Software Configuration Management Plan, Software Measurements/Metrics Plan and Software Quality Assurance Plan, and moves to the next phase of the Software Life Cycle, i.e., Design. Here they prepare some important documents: the Detailed Design Document, Updated Requirement Traceability Matrix, Unit Test Cases Document (prepared by the Developers if there are no separate White-box testers), Integration Test Cases Document, System Test Plan Document, and Review and SQA Audit Reports for all Test Cases.

After preparation of the Test Plan, the Test Lead distributes the work to the individual testers (white-box testers and black-box testers). The testers' work starts from this stage: based on the Software Requirement Specification/Functional Requirement Document, they prepare Test Cases using a standard Template or an Automation Tool and send them to the Test Lead for review. Once the Test Lead approves them, they prepare the Test Environment/Test Bed, which is used specifically for testing. Typically the Test Environment replicates the client-side system setup. Now we are ready for testing.

While the testing team works on the Test Strategy, Test Plan and Test Cases, the Development team works in parallel on their individual modules. Three or four days before the First Release, they give an interim Release to the Testing Team, who deploy that software on the Test Machine; the actual testing starts here. The Testing Team also handles configuration management of Builds. The team then tests against the Test Cases already prepared and reports bugs in a Bug Report Template or an automation tool (depending on the organization), tracking each bug by changing its status at every stage.

Once Cycle #1 testing is done, the testers submit the Bug Report to the Test Lead, who discusses the issues with the Development Team-lead; the developers then work on those bugs and fix them. After all the bugs are fixed, they release the next build. Cycle #2 testing starts at this stage: we run all the Test Cases again and check whether all the bugs reported in Cycle #1 are fixed. Here we also do regression testing, meaning we check whether the changes in the code have caused any side effects in the already-tested code. We repeat the same process until the Delivery Date.

Generally we document 4 Cycles' worth of information in the Test Case Document. At the time of Release there should not be any high-severity, high-priority bugs. Of course, the product may still have some minor bugs, which will be fixed in the next iteration or release (generally called Deferred bugs). At the end of Delivery, the Test Lead and the individual testers prepare some reports. Sometimes the Testers also participate in Code Reviews, which is static testing: they check the code against a checklist of historical logical errors, indentation and proper commenting. The testing team is also responsible for keeping track of Change Management in order to deliver a qualitative, bug-free product.
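The cycle-over-cycle flow described above, where the full set of test cases is re-run after a fix to confirm that the change has no side effects, is exactly what an automated regression suite captures. A minimal sketch of the idea (the `discount` function and its "Cycle #1 bug" are hypothetical examples, not from the text):

```python
# Minimal regression-style suite: after a fix in Cycle #2, the same
# assertions from Cycle #1 are re-run to confirm nothing else broke.

def discount(price, percent):
    """Apply a percentage discount. (Hypothetical function under test;
    imagine the Cycle #1 bug report was a wrongly applied percentage.)"""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

def run_regression_suite():
    # Cases written in Cycle #1 -- they stay in the suite permanently.
    assert discount(100, 10) == 90.0
    assert discount(100, 0) == 100.0
    # Case added in Cycle #2 for the bug reported in Cycle #1.
    assert discount(50, 100) == 0.0
    return "all regression checks passed"

print(run_regression_suite())
```

The key point is that fixed-bug cases are never deleted: each cycle's suite is the union of all earlier cycles' cases, which is what makes side effects of new changes visible.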

Requirement Specification document Review Guidelines and Checklists:


To prepare effective test cases, testers and QA engineers should review the software specification documents carefully and raise as many queries as they can. The purpose of a Software Requirement Specification Review is to uncover problems that are hidden within the specification document; this is a part of defect prevention. Such problems often lead to incorrect implementation of the software. The following guidelines for a detailed specification review are therefore suggested:

1. Always review the specification document with the entire testing team, and discuss each point with team members.
2. While reviewing the specification document, look carefully for vague terms like "ordinarily", "most", "mostly", "some", "sometimes", "often" and "usually", and ask for clarification.
3. Many times list values are given but not completed. Look for terms like "etc.", "and so forth", "and so on" and "such as", and be sure all the items/list values are understood.
4. When doing the spec review, make sure stated ranges do not contain unstated/implicit assumptions. For example: "The range of the Number field is from 10 to 100." But is it decimal? Ask for clarification.
5. Also take care with vague terms like "skipped", "eliminated", "handled", "rejected" and "processed"; these can be interpreted in many ways.
6. Take care with unclear pronouns, e.g. "The ABC module communicates with the XYZ module and its value is changed to 1." Whose value, the ABC module's or the XYZ module's?
7. Whenever a scenario/condition is described in a paragraph, draw a picture of it in order to understand it and try to determine the expected result. If the paragraph is too long, break it into multiple steps; it will be easier to understand.
8. If the specification document describes a scenario that involves calculations, work through the calculations with at least two examples.
9. If any point of the specs is not clear, get your queries resolved by the Business Analyst or Product Manager as soon as possible.
10. If any described scenario is complex, try to break it into points.
11. If there is any open issue (under discussion) in the specs, sometimes to be resolved by the client, keep track of those issues.
12. Always go through the revision history carefully.
13. After the specs are signed off and finalized, if any change comes in, examine the impacted areas.
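Guideline 4's range example translates directly into boundary-value test data once the ambiguity is resolved. The sketch below assumes one possible answer to the review query, namely that the clarified spec says "integers only, 10 to 100 inclusive"; the validator is a hypothetical stand-in for the field under test:

```python
# Boundary-value cases for a Number field specified as "10 to 100".
# Assumption (the clarification the review would ask for):
# integers only, both bounds inclusive.

def is_valid_number(value):
    return isinstance(value, int) and 10 <= value <= 100

# The classic boundary set: bounds plus just-outside values.
assert is_valid_number(10)        # lower bound: valid
assert is_valid_number(100)       # upper bound: valid
assert not is_valid_number(9)     # just below: invalid
assert not is_valid_number(101)   # just above: invalid
# The exact question the review raised: under this reading,
# decimals are rejected.
assert not is_valid_number(55.5)
```

Notice that the last assertion is only decidable because the query was answered; had the spec stayed ambiguous, two testers could legitimately write contradictory test cases for 55.5.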

Role of a tester in defect prevention and defect detection:


Some testers (especially beginners) often get confused by this question: "What is the role of a tester in Defect Prevention and Defect Detection?" In this post we will discuss the role of a tester in these two phases: how testers can help prevent more defects in the Defect Prevention phase, and how they can detect more bugs in the Defect Detection phase.

Defect prevention: In defect prevention, developers play an important role. In this phase developers perform activities such as code reviews/static code analysis, unit testing, etc. Testers are also involved in defect prevention by reviewing specification documents. Studying a specification document is an art: while studying it, testers encounter various queries, and many times the requirement document gets changed or updated as a result of those queries. Developers often neglect primary ambiguities in specification documents in order to complete the project, or fail to identify them when they see them. Those ambiguities are then built into the code and represent a bug when compared to the end user's needs. This is how testers help in defect prevention. We will discuss "How to review the specification document?" in a separate post.

Defect detection: In defect detection, the role of a tester includes implementing the most appropriate approach/strategy for testing, preparing and executing effective test cases, and conducting the necessary tests, such as exploratory testing, functional testing, etc. To increase the defect detection rate, the tester should have a complete understanding of the application. Ad hoc/exploratory testing should go on in parallel with test case execution, as a lot of bugs can be found through that means.

