
Software Quality

DEFINITION

Software quality is the degree of conformance to explicit or implicit requirements and expectations.

Explanation:

- Explicit: clearly defined and documented
- Implicit: not clearly defined and documented but indirectly suggested
- Requirements: business/product/software requirements
- Expectations: mainly end-user expectations

Note: Some people tend to accept quality as compliance with only explicit requirements and not implicit requirements. We tend to think of such people as lazy.

Definition by IEEE

- The degree to which a system, component, or process meets specified requirements.
- The degree to which a system, component, or process meets customer or user needs or expectations.

Definition by ISTQB

- quality: The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.
- software quality: The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs.

As with any definition, the definition of software quality is varied and debatable. Some say that quality cannot be defined; some say that it can be defined, but only in a particular context; some even state confidently that quality is simply the absence of bugs. Whatever the definition, it is true that quality is something we all aspire to. Software quality has many dimensions (see Dimensions of Quality below). In order to ensure software quality, we undertake Software Quality Assurance and Software Quality Control.

Dimensions of Quality

- Accessibility: The degree to which software can be used comfortably by a wide variety of people, including those who require assistive technologies like screen magnifiers or voice recognition.
- Compatibility: The suitability of software for use in different environments like different Operating Systems, Browsers, etc.
- Concurrency: The ability of software to service multiple requests to the same resources at the same time.
- Efficiency: The ability of software to perform well or achieve a result without wasted energy, resources, effort, time or money.
- Functionality: The ability of software to carry out the functions as specified or desired.
- Installability: The ability of software to be installed in a specified environment.
- Localizability: The ability of software to be used in different languages, time zones, etc.
- Maintainability: The ease with which software can be modified (adding features, enhancing features, fixing bugs, etc.).
- Performance: The speed at which software performs under a particular load.
- Portability: The ability of software to be transferred easily from one location to another.
- Reliability: The ability of software to perform a required function under stated conditions for a stated period of time without any errors.
- Scalability: The measure of software's ability to increase or decrease in performance in response to changes in its processing demands.
- Security: The extent of protection of software against unauthorized access, invasion of privacy, theft, loss of data, etc.
- Testability: The ability of software to be easily tested.
- Usability: The degree of software's ease of use.

When someone says "This software is of a very high quality," you might want to ask, "In which dimension of quality?"
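To make the question concrete, consider how a single dimension, say performance, can be turned from a vague claim into a measurable check. The following is a minimal sketch in Python; the search_catalog function and the 50 ms budget are hypothetical, purely for illustration:

    import time
    import unittest

    def search_catalog(query):
        """Hypothetical function under test; stands in for any operation
        whose speed we care about."""
        return [item for item in ("apple", "apricot", "banana") if query in item]

    class PerformanceDimensionTest(unittest.TestCase):
        def test_search_responds_within_budget(self):
            # "High quality" in the performance dimension, stated as a
            # measurable claim: the search must answer within 50 ms.
            start = time.perf_counter()
            search_catalog("ap")
            elapsed = time.perf_counter() - start
            self.assertLess(elapsed, 0.050)

    if __name__ == "__main__":
        unittest.main()

A similarly measurable statement can be written for most of the dimensions above, which is what makes the question "In which dimension?" answerable at all.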


Software Quality Assurance


Software Quality Assurance (SQA) is a set of activities for ensuring quality in software engineering processes (that ultimately result in quality in software products). It includes the following activities:

- Process definition and implementation
- Auditing
- Training

The quality management system under which the software system is created is normally based on one or more of the following models/standards:

- CMMI
- Six Sigma
- ISO 9000

Note: There are many other models/standards for quality management but the ones mentioned above are the most popular. Software Quality Assurance encompasses the entire software development life cycle and the goal is to ensure that the development and/or maintenance processes are continuously improved to produce products that meet specifications/requirements. The process of Software Quality Control (SQC) is also governed by Software Quality Assurance (SQA).

SQA is generally shortened to just QA.

Software Quality Control


Software Quality Control (SQC) is a set of activities for ensuring quality in software products. It includes the following activities:

- Reviews
  o Requirement Review
  o Design Review
  o Code Review
  o Deployment Plan Review
  o Test Plan Review
  o Test Cases Review
- Testing
  o Unit Testing
  o Integration Testing
  o System Testing
  o Acceptance Testing

Software Quality Control is limited to the Review/Testing phases of the Software Development Life Cycle and the goal is to ensure that the products meet specifications/requirements. The process of Software Quality Control (SQC) is governed by Software Quality Assurance (SQA). While SQA is oriented towards prevention, SQC is oriented towards detection. Note: Some people assume that QC means just Testing and fail to consider Reviews; this should be discouraged.

Verification vs Validation
Verification and Validation: Definition, Differences, Details:

The terms Verification and Validation are frequently used in the software testing world, but the meanings of those terms are mostly vague and debatable. You will encounter (or have encountered) all kinds of usages and interpretations of those terms, and it is our humble attempt here to distinguish between them as clearly as possible.

Definition
- Verification: The process of evaluating work products (not the actual final product) of a development phase to determine whether they meet the specified requirements for that phase.
- Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified business requirements.

Objective
- Verification: To ensure that the product is being built according to the requirements and design specifications. In other words, to ensure that work products meet their specified requirements.
- Validation: To ensure that the product actually meets the user's needs, and that the specifications were correct in the first place. In other words, to demonstrate that the product fulfills its intended use when placed in its intended environment.

Question
- Verification: Are we building the product right?
- Validation: Are we building the right product?

Evaluation Items
- Verification: Plans, Requirement Specs, Design Specs, Code, Test Cases
- Validation: The actual product/software

Activities
- Verification: Reviews, Walkthroughs, Inspections
- Validation: Testing

It is entirely possible that a product passes when verified but fails when validated. This can happen when, say, a product is built as per the specifications, but the specifications themselves fail to address the user's needs.

Trust but Verify. Verify but also Validate.
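To see how a product can verify but not validate, consider this small hypothetical Python sketch: the function faithfully implements its written specification, yet the specification itself misstates what users need.

    # Hypothetical (flawed) spec: "rank_results must return scores sorted ascending."
    def rank_results(scores):
        return sorted(scores)  # faithfully implements the written spec

    spec_expected = [1, 2, 3]  # what the spec asks for
    user_expected = [3, 2, 1]  # what users actually need: best results first

    result = rank_results([3, 1, 2])

    # Verification: the work product meets its specified requirement.
    print("verified:", result == spec_expected)   # True - built the product right

    # Validation: the product does not fulfill its intended use.
    print("validated:", result == user_expected)  # False - not the right product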

Software Development Life Cycle


Software Development Life Cycle, or Software Development Process, defines the steps/stages/phases in the building of software. There are various kinds of software development models like:

- Waterfall model
- Spiral model
- Iterative and incremental development (like the Unified Process and the Rational Unified Process)
- Agile development (like Extreme Programming and Scrum)

Models are evolving with time, and the development life cycle can vary significantly from one model to the other. It is beyond the scope of this particular article to discuss each model. However, each model comprises all or some of the following phases/activities/tasks.

SDLC IN SUMMARY

- Project Planning
- Requirements Development
- Estimation
- Scheduling
- Design
- Coding
- Test Build/Deployment
- Unit Testing
- Integration Testing
- User Documentation
- System Testing
- Acceptance Testing
- Production Build/Deployment
- Release
- Maintenance

SDLC IN DETAIL

- Project Planning
  o Prepare
  o Review
  o Rework
  o Baseline
  o Revise [if necessary] >> Review >> Rework >> Baseline
- Requirements Development [Business Requirements and Software/Product Requirements]
  o Develop
  o Review
  o Rework
  o Baseline
  o Revise [if necessary] >> Review >> Rework >> Baseline
- Estimation [Size / Effort / Cost]
  o <same as the activities/tasks mentioned for Project Planning>
- Scheduling
  o <same as the activities/tasks mentioned for Project Planning>
- Designing [High Level Design and Detail Design]
  o <same as the activities/tasks mentioned for Requirements Development>
- Coding
  o Code
  o Review
  o Rework
  o Commit
  o Recode [if necessary] >> Review >> Rework >> Commit

- Test Builds Preparation/Deployment
  o Build/Deployment Plan
    - Prepare
    - Review
    - Rework
    - Baseline
    - Revise [if necessary] >> Review >> Rework >> Baseline
  o Build/Deploy
- Unit Testing
  o Test Plan
    - Prepare
    - Review
    - Rework
    - Baseline
    - Revise [if necessary] >> Review >> Rework >> Baseline
  o Test Cases/Scripts
    - Prepare
    - Review
    - Rework
    - Baseline
    - Execute
    - Revise [if necessary] >> Review >> Rework >> Baseline >> Execute
- Integration Testing
  o <same as the activities/tasks mentioned for Unit Testing>
- User Documentation
  o Prepare
  o Review
  o Rework
  o Baseline
  o Revise [if necessary] >> Review >> Rework >> Baseline
- System Testing
  o <same as the activities/tasks mentioned for Unit Testing>
- Acceptance Testing [Internal Acceptance Test and External Acceptance Test]
  o <same as the activities/tasks mentioned for Unit Testing>
- Production Build/Deployment
  o <same as the activities/tasks mentioned for Test Build/Deployment>
- Release
  o Prepare
  o Review
  o Rework
  o Release
- Maintenance
  o Recode [Enhance software / Fix bugs]
  o Retest
  o Redeploy
  o Rerelease

Notes:

- The life cycle mentioned here is NOT set in stone, and each phase does not necessarily have to be implemented in the order mentioned.
- Though SDLC uses the term "Development", it does not focus just on the coding tasks done by developers but incorporates the tasks of all stakeholders, including testers.

- There may still be many other activities/tasks which have not been specifically mentioned above, like Configuration Management.

No matter what, it is essential that you clearly understand the software development life cycle your project is following. One issue that is widespread in many projects is that software testers are involved much later in the life cycle, due to which they lack visibility and authority (which ultimately compromises software quality).

Definition of Test
Working in the software industry, we encounter the word TEST many times. Though we have our own specific meaning for the word TEST, we have collected here some definitions of the word as provided by various dictionaries, along with other tidbits. The word TEST can be a noun, a verb or an adjective, but the definitions here are only of the noun form.

DEFINITION OF TEST

- Google Dictionary: A Test is a deliberate action or experiment to find out how well something works.
- Cambridge Advanced Learner's Dictionary: A Test is an act of using something to find out whether it is working correctly or how effective it is.
- The Free Dictionary by Farlex: A Test is a procedure for critical evaluation; a means of determining the presence, quality, or truth of something.
- Merriam-Webster: A Test is a critical examination, observation, or evaluation.
- Dictionary.com: A Test is the means by which the presence, quality, or genuineness of anything is determined.

- WordWeb: A Test is trying something to find out about it.
- Longman Dictionary of Contemporary English: A Test is a process used to discover whether equipment or a product works correctly, or to discover more about it.

ETYMOLOGY OF TEST

Online Etymology Dictionary: TEST: late 14c., "small vessel used in assaying precious metals," from O.Fr. test, from L. testum "earthen pot," related to testa "piece of burned clay, earthen pot, shell" (cf. L. testudo "tortoise") and textere "to weave" (cf. Lith. tistas "vessel made of willow twigs"; see texture). The sense of "trial or examination to determine the correctness of something" is recorded from 1594. The verb in this sense is from 1748. The connecting notion is ascertaining the quality of a metal by melting it in a pot.

SYNONYMS OF TEST

If the word TEST has been nauseating you because it is so overused, try the following synonyms:

Analysis, Assessment, Attempt, Check, Confirmation, Evaluation, Examination, Experiment, Inquiry, Inspection, Investigation, Scrutiny, Trial, Verification

Software Testing Myths


Just as every field has its myths, so does the field of Software Testing. Software testing myths have arisen primarily due to the following:

- Lack of authoritative facts.
- Evolving nature of the industry.
- General flaws in human logic.

Some of the myths are explained below, along with their related facts:

1. MYTH: Quality Control = Testing.
   FACT: Testing is just one component of software quality control. Quality Control also includes other activities, such as Reviews.
2. MYTH: The objective of Testing is to ensure a 100% defect-free product.
   FACT: The objective of testing is to uncover as many defects as possible. Identifying all defects and getting rid of them is impossible.
3. MYTH: Testing is easy.
   FACT: Testing can be difficult and challenging (sometimes even more so than coding).
4. MYTH: Anyone can test.
   FACT: Testing is a rigorous discipline and requires many kinds of skills.
5. MYTH: There is no creativity in testing.
   FACT: Creativity can be applied when formulating test approaches, when designing tests, and even when executing tests.
6. MYTH: Automated testing eliminates the need for manual testing.
   FACT: 100% test automation cannot be achieved. Manual testing, to some level, is always necessary.
7. MYTH: When a defect slips through, it is the fault of the Testers.
   FACT: Quality is the responsibility of all members/stakeholders of a project, including developers.
8. MYTH: Software Testing does not offer opportunities for career growth.
   FACT: Gone are the days when users had to accept whatever product was dished out to them, no matter what the quality. With the abundance of competing software and increasingly demanding users, the need for software testers to ensure high quality will continue to grow.

Management folk tend to have a special affinity for myths, so it will be your responsibility as a software tester to convince them that they are wrong. Tip: Put forward your arguments with relevant examples and numbers to validate your claims.

Agile Testing
This article on Agile Testing assumes that you already understand Agile software development methodology (Scrum, Extreme Programming, or another flavor of Agile). It also discusses the idea at a high level and does not give you the specifics.

VERY SHORT DEFINITION

Agile testing is a method of software testing that follows the principles of agile software development.

MANIFESTO FOR AGILE SOFTWARE TESTING

This is adapted from agilemanifesto.org, and it might look a little silly to copy everything from there and just replace the term "development" with "testing", but here it is for your refreshment. You need to realize, however, that the term "development" means coding, testing and all the other activities that are necessary in building valuable software.

We are uncovering better ways of testing software by doing it and helping others do it. Through this work we have come to value:

- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan

AGILE TESTING VALUES EXPLAINED

Individuals and interactions over processes and tools: This means that flexible people and communication are valued over rigid processes and tools. However, this does not mean that agile testing ignores processes and tools. In fact, agile testing is built upon very simple, strong and reasonable processes, like the process of conducting the daily meeting or preparing the daily build. Similarly, agile testing attempts to leverage tools, especially for test automation, as much as possible. Nevertheless, it needs to be clearly understood that it is the testers who drive those tools, and the output of the tools depends on the testers (not the other way round).

Working software over comprehensive documentation: This means that functional and usable software is valued over comprehensive but unusable documentation. Though this is more directed at upfront requirement specifications and design specifications, it can be true for test plans and test cases as well. Our primary goal is the act of testing itself, not any elaborate documentation merely pointing toward that goal. However, it is always best to have the necessary documentation in place so that the picture is clear and the picture remains with the team if/when a member leaves.

Customer collaboration over contract negotiation: This means that the client is engaged frequently and kept closely in touch with the progress of the project (not through complicated progress reports but through working pieces of software). This does put some extra burden on the customer, who has to collaborate with the team at regular intervals (instead of just waiting till the end of the contract, hoping that deliveries will be made as promised). But this frequent engagement ensures that the project is heading in the right direction and not toward the building of a frog when a fish is expected.

Responding to change over following a plan: This means accepting changes as natural and responding to them without being afraid of them. It is always nice to have a plan beforehand, but it is not very nice to stick to a plan, at whatever cost, even when the situation has changed. Let's say you write a test case, which is your plan, assuming a certain requirement. Now, if the requirement changes, you do not lament the waste of your time and effort. Instead, you promptly adjust your test case to validate the changed requirement. And, of course, only a FOOL would try to run the same old test case on the new software and mark the test as FAIL.

PRINCIPLES BEHIND THE AGILE MANIFESTO

Behind the Agile Manifesto are the following principles, which some agile practitioners unfortunately fail to understand or implement. We urge you to go through each principle and digest it thoroughly if you intend to embrace Agile Testing. Each original principle ("We follow these principles") is paired below with its re-written version for software testers ("What it means for Software Testers").

1. Original: Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
   For testers: Our highest priority is to satisfy the customer through early and continuous delivery of high-quality software.
2. Original: Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
   For testers: Welcome changing requirements, even late in testing. Agile processes harness change for the customer's competitive advantage.
3. Original: Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
   For testers: Deliver high-quality software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
4. Original: Business people and developers must work together daily throughout the project.
   For testers: Business people, developers, and testers must work together daily throughout the project.
5. Original: Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
   For testers: Build test projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
6. Original: The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
   For testers: The most efficient and effective method of conveying information to and within a test team is face-to-face conversation.
7. Original: Working software is the primary measure of progress.
   For testers: Working high-quality software is the primary measure of progress.
8. Original: Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
   For testers: Agile processes promote sustainable development and testing. The sponsors, developers, testers, and users should be able to maintain a constant pace indefinitely.
9. Original: Continuous attention to technical excellence and good design enhances agility.
   For testers: Continuous attention to technical excellence and good test design enhances agility.
10. Original: Simplicity, the art of maximizing the amount of work not done, is essential.
    For testers: Simplicity, the art of maximizing the amount of work not done, is essential.
11. Original: The best architectures, requirements, and designs emerge from self-organizing teams.
    For testers: The best architectures, requirements, and designs emerge from self-organizing teams.
12. Original: At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
    For testers: At regular intervals, the test team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

THE RECURRING QUESTION

So what happens to all the traditional software testing methods and artifacts? Do we throw them away?

THE ANSWER

Naaah! You will still need all those software testing methods and artifacts (but at varying scales of priority and necessity). You will, however, need to completely throw away that traditional attitude and embrace the agile attitude.

Unit Testing
Unit Testing Definition, Analogy, Method, Tasks, Details, Benefits, Tips:

DEFINITION

Unit Testing is a level of the software testing process where individual units/components of a software/system are tested. The purpose is to validate that each unit of the software performs as designed.

A unit is the smallest testable part of software. It usually has one or a few inputs and usually a single output. In procedural programming, a unit may be an individual program, function, procedure, etc. In object-oriented programming, the smallest unit is a method, which may belong to a base/super class, abstract class or derived/child class. (Some treat a module of an application as a unit. This is to be discouraged, as there will probably be many individual units within that module.) Unit testing frameworks, drivers, stubs, and mock or fake objects are used to assist in unit testing.

METHOD

Unit Testing is performed by using the White Box Testing method.

When is it performed?

Unit Testing is the first level of testing and is performed prior to Integration Testing.

Who performs it?

Unit Testing is normally performed by software developers themselves or by their peers. In rare cases, it may also be performed by independent software testers.
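As an illustration, here is a minimal sketch of a unit test written with Python's built-in unittest framework; the apply_discount function is a hypothetical unit under test:

    import unittest

    def apply_discount(price, percent):
        """Hypothetical unit under test: returns the price reduced by percent."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(200.0, 10), 180.0)

        def test_zero_discount_leaves_price_unchanged(self):
            self.assertEqual(apply_discount(99.99, 0), 99.99)

        def test_invalid_percent_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()

Each test exercises the unit in isolation, so a failing test points directly at the unit that broke.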

TASKS

- Unit Test Plan
  o Prepare
  o Review
  o Rework
  o Baseline
- Unit Test Cases/Scripts
  o Prepare
  o Review
  o Rework
  o Baseline
- Unit Test
  o Perform

BENEFITS

- Unit testing increases confidence in changing/maintaining code. If good unit tests are written and if they are run every time any code is changed, the likelihood of any defects due to the change being promptly caught is very high. If unit testing is not in place, the most one can do is hope for the best and wait till the test results at higher levels of testing are out. Also, if code is already made less interdependent to make unit testing possible, the unintended impact of changes to any code is smaller.
- Code is more reusable. In order to make unit testing possible, code needs to be modular. This means that code is easier to reuse.
- Development is faster. How? If you do not have unit testing in place, you write your code and perform that fuzzy developer test (you set some breakpoints, fire up the GUI, provide a few inputs that hopefully hit your code, and hope that you are all set). If you have unit testing in place, you write the test, write the code and run the tests. Writing tests takes time, but that time is compensated for by how quickly the tests run: you need not fire up the GUI and provide all those inputs. And, of course, unit tests are more reliable than fuzzy developer tests.
- Development is faster in the long run too. How? The effort required to find and fix defects found during unit testing is peanuts in comparison to those found during system testing or acceptance testing. The cost of fixing a defect detected during unit testing is lower than that of defects detected at higher levels. Compare the cost (time, effort, destruction, humiliation) of a defect detected during acceptance testing, or, say, when the software is live.
- Debugging is easy. When a test fails, only the latest changes need to be debugged. With testing at higher levels, changes made over the span of several days/weeks/months need to be debugged.
- Code is more reliable. Why? I think there is no need to explain this to a sane person.

TIPS

- Find a tool/framework for your language.
- Do not create test cases for everything; some things will be handled by themselves. Instead, focus on the tests that impact the behavior of the system.
- Isolate the development environment from the test environment.
- Use test data that is close to that of production.
- Before fixing a defect, write a test that exposes the defect. Why? First, you will later be able to catch the defect if you do not fix it properly. Second, your test suite is now more comprehensive. Third, you will most probably be too lazy to write the test after you have already fixed the defect.
- Write test cases that are independent of each other. For example, if a class depends on a database, do not write a case that interacts with the database to test the class. Instead, create an abstract interface around that database connection and implement that interface with a mock object (see the sketch after this list).
- Aim at covering all paths through the unit. Pay particular attention to loop conditions.
- Make sure you are using a version control system to keep track of your code as well as your test cases.
- In addition to writing cases to verify the behavior, write cases to ensure the performance of the code.
- Perform unit tests continuously and frequently.
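The database tip above can be sketched with unittest.mock from the Python standard library; the UserStore interface and the send_reminder unit are hypothetical:

    import unittest
    from unittest import mock

    class UserStore:
        """Abstract interface wrapped around the real database connection."""
        def get_email(self, user_id):
            raise NotImplementedError

    def send_reminder(store, user_id):
        """Hypothetical unit under test: depends only on the interface,
        never on a live database."""
        return f"Reminder sent to {store.get_email(user_id)}"

    class SendReminderTest(unittest.TestCase):
        def test_reminder_uses_address_from_store(self):
            fake_store = mock.Mock(spec=UserStore)  # mock implementing the interface
            fake_store.get_email.return_value = "user@example.com"

            self.assertEqual(send_reminder(fake_store, 42),
                             "Reminder sent to user@example.com")
            fake_store.get_email.assert_called_once_with(42)

    if __name__ == "__main__":
        unittest.main()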

ONE MORE REASON

Let's say you have a program comprising two units, and the only test you perform is system testing (you skip unit and integration testing). During testing, you find a bug. Now, how will you determine the cause of the problem?

- Is the bug due to an error in unit 1?
- Is the bug due to an error in unit 2?
- Is the bug due to errors in both units?
- Is the bug due to an error in the interface between the units?
- Is the bug due to an error in the test or test case?

Unit testing is often neglected, but it is, in fact, the most important level of testing.

Definition by ISTQB

- unit testing: See component testing.
- component testing: The testing of individual software components.

Integration Testing
Integration Testing Definition, Analogy, Method, Tasks, Details, Approaches, Tips:

DEFINITION

Integration Testing is a level of the software testing process where individual units are combined and tested as a group.

The purpose of this level of testing is to expose faults in the interaction between integrated units. Test drivers and test stubs are used to assist in Integration Testing.

Note: The definition of a unit is debatable, and it could mean any of the following:

1. the smallest testable part of a software
2. a module, which could consist of many of (1)
3. a component, which could consist of many of (2)

ANALOGY

During the process of manufacturing a ballpoint pen, the cap, the body, the tail and clip, the ink cartridge and the ballpoint are produced separately and unit tested separately. When two or more units are ready, they are assembled and Integration Testing is performed. For example, whether the cap fits into the body or not.

METHOD

Any of the Black Box Testing, White Box Testing, and Gray Box Testing methods can be used. Normally, the method depends on your definition of "unit".

TASKS

- Integration Test Plan
  o Prepare
  o Review
  o Rework
  o Baseline
- Integration Test Cases/Scripts
  o Prepare
  o Review
  o Rework
  o Baseline
- Integration Test
  o Perform

When is Integration Testing performed?

Integration Testing is performed after Unit Testing and before System Testing.

Who performs Integration Testing?

Either Developers themselves or independent Testers perform Integration Testing.

APPROACHES

- Big Bang is an approach to Integration Testing where all or most of the units are combined together and tested at one go. This approach is taken when the testing team receives the entire software in a bundle. So what is the difference between Big Bang Integration Testing and System Testing? Well, the former tests only the interactions between the units while the latter tests the entire system.
- Top Down is an approach to Integration Testing where top-level units are tested first and lower-level units are tested step by step after that. This approach is taken when top-down development is followed. Test Stubs are needed to simulate lower-level units, which may not be available during the initial phases (see the sketch after this list).
- Bottom Up is an approach to Integration Testing where bottom-level units are tested first and upper-level units step by step after that. This approach is taken when bottom-up development is followed. Test Drivers are needed to simulate higher-level units, which may not be available during the initial phases.
- Sandwich/Hybrid is an approach to Integration Testing which is a combination of the Top Down and Bottom Up approaches.
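As an illustration of the Top Down approach, here is a minimal Python sketch in which a test stub stands in for a lower-level pricing unit that has not been built yet; all names are hypothetical:

    import unittest

    class PricingServiceStub:
        """Test stub: simulates a lower-level unit that is not yet available
        by returning canned answers."""
        def quote(self, item):
            return {"book": 12.5, "pen": 1.5}[item]

    def checkout_total(pricing_service, items):
        """Top-level unit under integration test: its own logic is real, but it
        talks to the lower-level pricing unit through the stub."""
        return sum(pricing_service.quote(item) for item in items)

    class CheckoutIntegrationTest(unittest.TestCase):
        def test_total_combines_quotes_from_pricing_unit(self):
            total = checkout_total(PricingServiceStub(), ["book", "pen", "pen"])
            self.assertEqual(total, 15.5)

    if __name__ == "__main__":
        unittest.main()

In the Bottom Up approach the roles are reversed: the lower-level unit is real, and a test driver plays the part of the higher-level caller.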

TIPS

- Ensure that you have a proper Detail Design document where interactions between each unit are clearly defined. In fact, you will not be able to perform Integration Testing without this information.
- Ensure that you have a robust Software Configuration Management system in place. Otherwise, you will have a tough time tracking the right version of each unit, especially if the number of units to be integrated is huge.
- Make sure that each unit is first unit tested before you start Integration Testing.
- As far as possible, automate your tests, especially when you use the Top Down or Bottom Up approach, since regression testing is important each time you integrate a unit, and manual regression testing can be inefficient.

Definition by ISTQB

- integration testing: Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems. See also component integration testing, system integration testing.
- component integration testing: Testing performed to expose defects in the interfaces and interaction between integrated components.
- system integration testing: Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet).

System Testing
System Testing Definition, Analogy, Method, Tasks, Details:

DEFINITION

System Testing is a level of the software testing process where a complete, integrated system/software is tested. The purpose of this test is to evaluate the system's compliance with the specified requirements.

ANALOGY

During the process of manufacturing a ballpoint pen, the cap, the body, the tail, the ink cartridge and the ballpoint are produced separately and unit tested separately. When two or more units are ready, they are assembled and Integration Testing is performed. When the complete pen is integrated, System Testing is performed.

METHOD

Usually, the Black Box Testing method is used.

TASKS

- System Test Plan
  o Prepare
  o Review
  o Rework
  o Baseline
- System Test Cases
  o Prepare
  o Review
  o Rework
  o Baseline
- System Test
  o Perform

When is it performed?

System Testing is performed after Integration Testing and before Acceptance Testing.

Who performs it?

Normally, independent Testers perform System Testing.

Definition by ISTQB

system testing: The process of testing an integrated system to verify that it meets specified requirements.
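As an illustration of the black-box character of System Testing, here is a minimal Python sketch that drives a fully integrated system only through its public HTTP interface, using just the standard library. The endpoint, credentials, and BASE_URL are hypothetical, and the sketch assumes the system under test is already deployed there:

    import json
    import unittest
    from urllib import request

    # Assumption: the complete, integrated system is deployed and reachable here.
    BASE_URL = "http://localhost:8080"  # hypothetical test environment

    class LoginSystemTest(unittest.TestCase):
        """Black-box check: no knowledge of the code, only of the specified
        requirement ("a valid login returns a token")."""

        def test_login_returns_token_for_valid_user(self):
            payload = json.dumps({"user": "alice", "password": "secret"}).encode()
            req = request.Request(f"{BASE_URL}/login", data=payload,
                                  headers={"Content-Type": "application/json"})
            with request.urlopen(req) as resp:
                self.assertEqual(resp.status, 200)
                self.assertIn("token", json.load(resp))

    if __name__ == "__main__":
        unittest.main()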

Acceptance Testing
Acceptance Testing Definition, Analogy, Method, Tasks, Details:

DEFINITION

Acceptance Testing is a level of the software testing process where a system is tested for acceptability. The purpose of this test is to evaluate the system's compliance with the business requirements and assess whether it is acceptable for delivery.

ANALOGY

During the process of manufacturing a ballpoint pen, the cap, the body, the tail and clip, the ink cartridge and the ballpoint are produced separately and unit tested separately. When two or more units are ready, they are assembled and Integration Testing is performed. When the complete pen is integrated, System Testing is performed. Once System Testing is complete, Acceptance Testing is performed so as to confirm that the ballpoint pen is ready to be made available to the end-users.

METHOD

Usually, the Black Box Testing method is used in Acceptance Testing. The testing does not usually follow a strict procedure and is not scripted; it is rather ad-hoc.

TASKS

- Acceptance Test Plan
  o Prepare
  o Review
  o Rework
  o Baseline
- Acceptance Test Cases/Checklist
  o Prepare
  o Review
  o Rework
  o Baseline
- Acceptance Test
  o Perform

When is it performed?

Acceptance Testing is performed after System Testing and before making the system available for actual use.

Who performs it?

- Internal Acceptance Testing (also known as Alpha Testing) is performed by members of the organization that developed the software but who are not directly involved in the project (Development or Testing). Usually, it is the members of Product Management, Sales and/or Customer Support.
- External Acceptance Testing is performed by people who are not employees of the organization that developed the software.
  o Customer Acceptance Testing is performed by the customers of the organization that developed the software. They are the ones who asked the organization to develop the software for them. [This is the case when the software is not owned by the organization that developed it.]
  o User Acceptance Testing (also known as Beta Testing) is performed by the end users of the software. They can be the customers themselves or the customers' customers.
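Because Acceptance Testing is often checklist-driven rather than scripted, the sketch below shows one hypothetical way an acceptance checklist might be recorded and evaluated in Python; the checklist items are illustrative only:

    # Minimal sketch of an acceptance checklist turned into a script.
    # In practice, the outcomes come from actually exercising the business
    # workflows, often in an ad-hoc fashion, with a checklist as the guide.

    ACCEPTANCE_CHECKLIST = [
        "A new customer can register with a valid email address",
        "A registered customer can place an order for a ballpoint pen",
        "The customer receives an order confirmation",
    ]

    def run_checklist(results):
        """results maps each checklist item to True (accepted) or False."""
        for item in ACCEPTANCE_CHECKLIST:
            print(f"[{'PASS' if results.get(item) else 'FAIL'}] {item}")
        return all(results.get(item) for item in ACCEPTANCE_CHECKLIST)

    if __name__ == "__main__":
        observed = {item: True for item in ACCEPTANCE_CHECKLIST}  # recorded outcomes
        print("System accepted for delivery:", run_checklist(observed))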

Definition by ISTQB

acceptance testing: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.
