
SOFTWARE SYSTEM QUALITY

LECTURE # 9

SOFTWARE TESTING - II
BLACK BOX TESTING

7th December, 2017 Dr. Ali Javed


Contact Information

 Instructor: Dr. Ali Javed


Assistant Professor
Department of Software Engineering
U.E.T Taxila

 Email: ali.javed@uettaxila.edu.pk
 Contact No: +92-51-9047747
 Office hours:
 Monday, 11:00AM - 01:00PM, Office # 7



Course Information

 Course Name: Software System Quality

 Course Code: SE-5001



BLACK BOX TESTING TECHNIQUES

 Equivalence Class Partitioning Testing


 Boundary Value Testing
 Fuzz Testing
 Omission Testing
 Comparison Testing
 End to End Testing
 Localization Testing
 Globalization Testing
 Integration Testing
 Sandwich Testing
 Security Testing
 Null Case Testing
 Volume Testing
 Load Testing
 Stress Testing
 Documentation Testing
 Smoke Testing
 Usability Testing
 Exploratory Testing
 Button Press Testing
 State Transition Testing
 Installation Testing
 Acceptance Testing
 Alpha Testing
 Beta Testing
Equivalence Class Partitioning Testing
 Equivalence Partitioning divides the input domain of a program into classes of data from which
test cases can be derived

 An ideal test case single-handedly uncovers a class of errors (e.g., incorrect processing of all character data) that might otherwise require many test cases to be executed before the general error is observed.

 Equivalence Partitioning strives to define a test case that uncovers classes of errors, thereby reducing the total number of test cases that must be developed.

 An equivalence class represents a set of valid or invalid states for input conditions.

 Define one or a couple of test cases for each class


 Test cases that cover valid equivalence classes
 Test cases that cover at most one invalid equivalence class

[Figure: equivalence classes partitioning the input/output domain]
Equivalence Class Partitioning Testing
 Identify equivalence classes

 Input, output: clue from requirement


 Equivalence classes are of 2 types: valid and invalid
 Example: identify the equivalence classes for this requirement: “if a pupil has a total score >= 75, he will pass the exam; otherwise he will fail (total score is an integer)”

[Figure: total score → system → pass / fail / error message]


Equivalence Class Partitioning Testing
 Identify equivalence classes

Condition            Valid equivalence classes    Invalid equivalence classes
Total score          1. >= 75                     2. < 75   3. Null   4. String
Result of the exam   5. Pass   6. Fail            7. Error message


Equivalence Class Partitioning Testing
 Define test cases for both valid and invalid equivalence classes

 Example: write test cases for “if a pupil has a total score >= 75, he will pass the exam; otherwise he will fail”, using equivalence partitioning.

Condition            Valid equivalence class    Invalid equivalence class
Total score          1. >= 75                   2. < 75   3. Null   4. String
Result of the exam   5. Pass   6. Fail          7. Error message

Test cases (input class, expected result class):
• 1, 5
• 2, 6
• 3, 7
• 4, 7
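To make the table concrete, here is a minimal sketch in Python of one test per equivalence class. The `exam_result` function is a hypothetical implementation of the requirement, written only so the test cases have something to run against.

```python
def exam_result(total_score):
    """Hypothetical implementation of the pupil requirement under test:
    integers >= 75 pass, other integers fail, anything else is rejected."""
    if not isinstance(total_score, int):
        return "error message"
    return "pass" if total_score >= 75 else "fail"

# One representative test case per equivalence class:
assert exam_result(80) == "pass"              # classes 1, 5
assert exam_result(40) == "fail"              # classes 2, 6
assert exam_result(None) == "error message"   # classes 3, 7
assert exam_result("abc") == "error message"  # classes 4, 7
```

Any representative of a class would do equally well; 80 and 40 stand in for the whole >= 75 and < 75 classes.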


Equivalence Class Partitioning Testing
 Strong Equivalence Partitioning

 Weak Equivalence Partitioning

 Traditional Equivalence Partitioning



Boundary Value Testing
 (Specific case of Equivalence Class Partitioning Testing)

 Boundary value analysis leads to a selection of test cases that exercise bounding values. This technique was developed because a great number of errors tend to occur at the boundary of the input domain rather than at its center.

 Concentrate on cases at the extreme ends of each


equivalence class.

 Guidelines for BVA are as follows:

 If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b and with values just above and below a and b.

[Figure: boundary values at the edges of an equivalence class]


Value Selection in Boundary Value Analysis

 The basic idea in boundary value analysis is to select input variable values at their:

 Minimum
 Just above the minimum

 A nominal value

 Just below the maximum

 Maximum



Boundary Value Testing
Example:
• “if a pupil has a total score >= 75, he will pass the exam; otherwise he will fail (total score must be an integer)”

Condition            Valid equivalence classes    Invalid equivalence classes
Total score          1. >= 75                     2. < 75   3. Null   4. String
Result of the exam   5. Pass   6. Fail            7. Error message

Test cases: 1. (1, 5)   2. (2, 6)   3. (3, 7)   4. (4, 7)

Data to test:
1a. 75 → pass
1b. 76 → pass
2. 74 → fail
3. Null → error message
4a. "A" → error message
4b. "I am a tester of EW and I love this job" → error message
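A minimal sketch of the corresponding boundary value tests, reusing the hypothetical `exam_result` implementation from the equivalence partitioning example. The range 0..100 is an assumed valid score range, since the requirement only fixes the pass mark at 75.

```python
def exam_result(score):  # same hypothetical implementation as before
    if not isinstance(score, int):
        return "error message"
    return "pass" if score >= 75 else "fail"

# Boundary values around the pass mark, plus the assumed extremes:
boundary_cases = [
    (74, "fail"),   # just below the boundary
    (75, "pass"),   # the boundary itself
    (76, "pass"),   # just above the boundary
    (0, "fail"),    # assumed minimum score
    (100, "pass"),  # assumed maximum score
]
for score, expected in boundary_cases:
    assert exam_result(score) == expected
```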
Fuzz Testing
 Fuzz testing or fuzzing is a software testing technique, often automated or
semi-automated, that involves providing invalid or unexpected data to the
inputs of a computer program. The program is then monitored for exceptions
such as crashes or failing built-in code assertions or for finding
potential memory leaks.

 The term originates from a 1988 class project at the University of Wisconsin, although similar techniques have long been used in the field of quality assurance, where they are referred to as robustness testing or negative testing.
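A minimal fuzzing sketch: random strings are thrown at a parser, and anything other than a clean rejection is recorded as a potential defect. The assumption that the function under test signals bad input with ValueError is ours, chosen so the harness has something to check.

```python
import random

def fuzz(parse, trials=1000, max_len=20, seed=0):
    """Feed random strings to `parse`; collect unexpected crashes."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        data = "".join(chr(rng.randrange(256))
                       for _ in range(rng.randrange(max_len)))
        try:
            parse(data)
        except ValueError:
            pass                  # clean rejection: expected behaviour
        except Exception as exc:  # crash or failed assertion: a fuzzing find
            failures.append((repr(data), exc))
    return failures

# Example run against Python's int() as a stand-in parser:
print(len(fuzz(int)))  # 0 -- int() rejects garbage cleanly with ValueError
```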


Omission Testing [1]
 Omission Testing (also called Missing Case Testing):

 Exposes defects caused by input cases (scenarios) that the developer forgot to handle or did not anticipate

 A study by Sherman on a released Microsoft product reported that 30% of client-reported defects were caused by missing cases.

 Other studies show that an average of 22 to 54% of all client-reported defects are caused by missing cases.

 Can you identify any missing case for the pupil example?


Omission Testing
Example:
• The case of floating-point input (an invalid class) is missing from the current test cases.

Condition            Valid equivalence classes    Invalid equivalence classes
Total score          1. >= 75                     2. < 75   3. Null   4. String   8. Float
Result of the exam   5. Pass   6. Fail            7. Error message

Test cases: 1. (1, 5)   2. (2, 6)   3. (3, 7)   4. (4, 7)   5. (8, 7)

Data to test:
1a. 75 → pass
1b. 76 → pass
2. 74 → fail
3. Null → error message
4a. "A" → error message
4b. "I am a tester of EW and I love this job" → error message
5. 75.5 → error message
Null Case Testing [1]
 Null Testing: (a specific case of Omission Testing, but triggers defects extremely often)
 Exposes defects triggered by no data or missing data.
 Often triggers defects because developers create programs to act upon data; they don't think of the case where the input may not contain specific data types

 Example: X, Y coordinate missing for drawing various shapes in Graphics editor.

 Example: Blank file names

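A short sketch of the missing cases as executable checks, again against the hypothetical `exam_result` implementation used earlier. The float input covers the omission case identified above; None and a blank string cover the null cases.

```python
def exam_result(score):  # same hypothetical implementation as before
    if not isinstance(score, int):
        return "error message"
    return "pass" if score >= 75 else "fail"

assert exam_result(75.5) == "error message"  # omission case: float input
assert exam_result(None) == "error message"  # null case: no data at all
assert exam_result("") == "error message"    # null case: blank input
```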


Comparison Testing[2-3]
 There are some situations in which the reliability of software is absolutely critical. In such
applications redundant hardware and software are often used to minimize the possibility of
error

 When redundant software is developed, separate software engineering teams develop independent versions of an application using the same specification

 In such situations each version can be tested with the same test data to ensure that all provide
identical output

 Then all versions are executed in parallel with real time comparison of results to ensure
consistency

 These independent versions form the basis of black box testing technique called comparison
testing or back-to-back testing



Comparison Testing[2-3]
 If the output from each version is the same, it is
assumed that all implementations are correct

 If the output is different, each of the applications is investigated to determine whether a defect in one or more versions is responsible for the difference

 Comparison testing is not foolproof: if the specification from which all versions have been developed is in error, all versions will likely reflect the error

 In addition, if each of the independent versions produces identical but incorrect results, comparison testing will fail to detect the error
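A minimal back-to-back sketch: two illustrative, independently written versions of the same pass/fail rule are driven with identical test data and their outputs compared. Both implementations are invented for the example.

```python
def version_a(score):                 # team A's implementation
    return "pass" if score >= 75 else "fail"

def version_b(score):                 # team B's independent implementation
    return "fail" if score < 75 else "pass"

# Same test data for every version; flag any disagreement.
for score in range(0, 101):
    a, b = version_a(score), version_b(score)
    assert a == b, f"versions disagree at input {score}: {a!r} vs {b!r}"
print("all versions agree on the shared test data")
```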


End to End Testing[4]
 End-to-end testing is a methodology used to test whether the flow
of an application is performing as designed from start to finish.

 End-to-end testing involves ensuring that the integrated components of an application function as expected.

 This is basically exercising an entire “workflow”. Although System


Testing is similar, in System Testing you do not have to complete the
entire workflow. [13]

 For example, a simplified end-to-end testing of an email


application might involve logging in to the application, getting into
the inbox, opening and closing the mail box, composing,
forwarding or replying to email, checking in the sent items and
logging out of the application.



End to End Testing[5-6]
 Unlike System Testing, End-to-End Testing not only validates the software system under test but also checks its integration with external interfaces. Hence the name “End-to-End”.

 End-to-End Testing is usually executed after functional and system testing. It uses production-like data and a test environment to simulate real-time settings. End-to-End Testing is also called Chain Testing.


Globalization Testing [7-8]
 Globalization Testing is the software testing process for checking if the software
can work properly in various culture/locale settings using every type of
international input.



Localization Testing [7-8]
 Localization is the process of customizing a software application that was originally designed for a domestic
market so that it can be released in foreign markets.

 This process involves translating all native language strings to the target language and customizing the GUI
so that it is appropriate for the target market

 Localization testing checks how well the build has been translated into a particular target language (e.g.,
Japanese product for Japanese user).

 We should invite local staff to help with localization testing by checking the quality of the translation as well.

 Common bugs found from this testing


 Cannot display the correct format
 Functionality is broken



Integration Testing
 Integration testing (sometimes called Integration and Testing, abbreviated
"I&T") is the phase in software testing in which individual software modules
are combined and tested as a group.

 It occurs after unit testing and before system and validation testing.

 Integration Testing Types


 Big Bang Integration
 Incremental Integration
 Top-down
 Bottom-up
 Sandwich
 Modified Sandwich



Big Bang Integration
 In this approach, all or most of the developed modules are coupled together to form a
complete software system or major part of the system and then used for integration
testing.
 The Big Bang method is very effective for saving time in the integration testing process. However, the major drawback of Big Bang integration is that it is hard to find the actual location of an error.


Big Bang Integration
 Not Advised
 Requires both Stubs and Drivers to test the independent components
 Need Many Drivers
 Need Many Stubs
 Hard to isolate faults



Incremental Integration
TOP-DOWN INTEGRATION

 Top Down Testing is an approach to integration testing where the top-level integrated modules are tested first, and each branch of the module hierarchy is tested step by step until the end of the related module.

 Top down integration is performed in a series of steps:

1. The main control module is used as a test driver, and stubs are substituted for all components directly subordinate to the main module.

2. Depending on the integration approach selected (depth-first or breadth-first), subordinate stubs are replaced one at a time with actual components.

3. Tests are conducted as each component is integrated.

4. On completion of each set of tests, another stub is replaced with the actual component.

5. Regression testing may be conducted to make sure that new errors have not been introduced.
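A minimal sketch of step 1: the real main control module is exercised while its subordinate component is still a stub returning canned data. All names are invented for illustration.

```python
def fetch_score_stub(pupil_id):
    """Stub for a subordinate component: canned answers, no real I/O."""
    return {"p1": 80, "p2": 60}[pupil_id]

def main_control(pupil_id, fetch_score=fetch_score_stub):
    """The real top-level module under test."""
    return "pass" if fetch_score(pupil_id) >= 75 else "fail"

assert main_control("p1") == "pass"
assert main_control("p2") == "fail"
# Later, fetch_score_stub is replaced by the real component (step 2)
# and the same tests are re-run (steps 3-5).
```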


Incremental Integration
BOTTOM UP INTEGRATION
 Bottom Up Testing is an approach to integration testing where the lowest-level components are tested first, then used to facilitate the testing of higher-level components. The process is repeated until the component at the top of the hierarchy is tested.

 This approach is helpful only when all or most of the modules of the same development level are
ready.



Incremental Integration
BOTTOM UP INTEGRATION
 Bottom up integration is performed in a series of steps:

1. Low level components are combined into clusters.


2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.

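A minimal sketch of steps 1 to 3: two low-level components (invented for the example) are combined into a cluster, and a throwaway driver coordinates test input and output until the real upper layer exists.

```python
def parse_score(raw):                 # low-level component A
    return int(raw)

def grade(score):                     # low-level component B
    return "pass" if score >= 75 else "fail"

def cluster_driver(cases):
    """Throwaway control program that tests the A+B cluster."""
    for raw, expected in cases:
        assert grade(parse_score(raw)) == expected

cluster_driver([("80", "pass"), ("74", "fail")])
# Once an upper layer exists, it replaces this driver and the clusters
# are combined moving upward in the program structure (step 4).
```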


Sandwich Testing
 Sandwich Testing is an approach to combine top down testing with bottom
up testing.
 The system is viewed as having three layers
 A target layer in the middle
 A layer above the target
 A layer below the target
 Testing converges at the target layer
 How do you select the target layer if there are more than 3 layers?



Sandwich Testing
[Figure: sandwich testing example. Bottom-up tests (Test E, Test F → Test B, E, F; Test G → Test D, G) and the top-down test (Test A → Test A, B, C, D) converge on the combined Test A, B, C, D, E, F, G.]


Sandwich Testing

 Advantage: top and bottom layer tests can be done in parallel
 Drawback: does not test the individual subsystems and their interfaces thoroughly before integration

Solution: the modified sandwich testing strategy


Modified Sandwich Testing
 Test in parallel:
 Middle layer with drivers and stubs
 Top layer with stubs
 Bottom layer with drivers
 Test in parallel:
 Top layer accessing middle layer (top layer replaces drivers)
 Bottom accessed by middle layer (bottom layer replaces stubs).



Modified Sandwich Testing
 Allows upper-level components to be tested before merging them
with others



Load Testing
 Load testing is the process of putting demand on a system or device and measuring its
response. Load testing is performed to determine a system’s behavior under both
normal and anticipated peak load conditions.
 It helps to identify the maximum operating capacity of an application as well as any
bottlenecks and determine which element is causing degradation.

 Example: Using automation software to simulate 500 users logging into a web site and performing end-
user activities at the same time.
 Example: Typing at 120 words per minute for 3 hours into a word processor.

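A minimal sketch of the first example: a pool of threads stands in for concurrent users, and the measured wall-clock time is the system's response to that demand. `simulated_user` is a placeholder for a real end-user session.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_user(user_id):
    """Placeholder for one user session (log in, act, log out)."""
    time.sleep(0.01)                  # stands in for real requests
    return user_id

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(simulated_user, range(500)))
elapsed = time.perf_counter() - start
print(f"{len(results)} simulated users finished in {elapsed:.2f}s")
```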


Stress Testing
 Stress testing is a form of testing that is used to
determine the stability of a given system or entity.

 It involves testing beyond normal operational


capacity, often to a breaking point, in order to
observe the results.

 In stress testing you continually put excessive load on


the system until the system crashes

 The system is repaired and the stress test is


repeated until a level of stress is reached that is
higher than expected to be present at a customer
site.



Volume Testing
 Volume testing refers to testing a software application with a certain amount of data. This amount
can, in generic terms, be the database size or it could also be the size of an interface file that is
the subject of volume testing.
 For example, if you want to volume test your application with a specific database size, you will
expand your database to that size and then test the application's performance on it. Another
example could be when there is a requirement for your application to interact with an interface
file; this interaction could be reading and/or writing on to/from the file.



Recovery Testing [4]
 In software testing, recovery testing is the activity of testing how quickly and how well an application is able to recover from crashes, hardware failures and other similar problems.

 Recovery testing is the forced failure of the software in a variety of ways to verify that recovery
is properly performed.

 Examples of recovery testing:

 While an application is running, suddenly restart the computer, and afterwards verify the application's data integrity.
 While an application is receiving data from a network, unplug the connecting cable. After some time, plug
the cable back in and analyze the application's ability to continue receiving data from the point at which
the network connection disappeared.
 Restart the system while a browser has a definite number of sessions. Afterwards, check that the browser is
able to recover all of them.



Documentation Testing [1]
 Exposes defects in the content and access of on-line
user manuals (Help files) and content of training
manuals.

 The Testing Group tests that all Help files appear on


the screen when selected.

 On-Line documentation is a Landmark requirement for


product release

 Documentation testing can be approached in two


phases:

 The 1st phase, Review and Inspection, examines the documents for editorial clarity.

 The 2nd phase, Live Test, uses the documentation in conjunction with the actual program.


Exploratory Testing
 Exploratory testing is an approach to software testing that is concisely described as
simultaneous learning, test design and test execution. Exploratory software testing is a
powerful and fun approach to testing.
 The essence of exploratory testing is that you learn while you test, and you design
your tests based on what you are learning
 Exploratory testing is a method of manual testing.
 The testing is dependent on the tester's skill of inventing test cases and finding
defects. The more the tester knows about the product and different test methods, the
better the testing will be.



Button Press Testing [1]
 Button Press Testing: (Landmark testing term, not industry standard)
 Exposes functionality defects by methodically pressing every widget (pull
down menu, pop ups, drop down lists, buttons, icons, etc.) in the program.



Regression Testing [11]
 Exposes defects in code that should not have changed.

 Re-executes some or all existing test cases to exercise code that was tested in a previous release
or previous test cycle.

 Performed when previously tested code has been re-linked such as when:

 Ported to a new operating system


 A fix has been made to a specific part of the code.

 Studies show that:

 The probability of changing the program correctly on the first try is only 50% if the change
involves 10 or fewer lines of code.
 The probability of changing the program correctly on the first try is only 20% if the change
involves around 50 lines of code.



Progressive VS Regressive VS Re-Test

 When testing new code, you are performing “progressive testing.”

 When testing a program to determine if a change has introduced errors in the unchanged code,
you are performing “regression testing.”

 Re-test: retesting means testing only a certain part of an application again, without considering how the change affects other parts or the whole application.

 All black box test design methods apply to both progressive and regressive testing. Eventually, all
your “progressive” tests should become “regression” tests.

 The Testing Group performs a lot of Regression Testing because most Landmark development
projects are adding enhancements (new functionality) to existing programs. Therefore, the existing
code (code that did not change) must be regression tested.



Smoke Testing
 Smoke testing is non-exhaustive software testing, ascertaining that the most crucial functions of a
program work, but not bothering with finer details.
 The term comes to software testing from a similarly basic type of hardware testing, in which the
device passed the test if it didn't catch fire the first time it was turned on. A daily build and smoke
test is among industry best practices promoted by the IEEE (Institute of Electrical and Electronics
Engineers).
 Smoke testing is done to determine whether the build can be accepted for thorough software testing. Basically, it is done to check the stability of the build received for software testing.

 In the software industry, smoke testing is a shallow-and-wide approach whereby all areas of the application are tested without going into too much depth.
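A minimal smoke-test sketch: touch only the most crucial entry points, wide but shallow, and accept or reject the build on that basis. The client object and paths are stand-ins invented for the example.

```python
class StubClient:
    """Stand-in for an HTTP client talking to the build under test."""
    def get(self, path):
        class Response:
            status_code = 200          # canned answer for the sketch
        return Response()

def smoke_test(client):
    for path in ("/", "/login", "/health"):  # only the crucial functions
        status = client.get(path).status_code
        assert status < 500, f"{path} returned {status}: reject the build"
    return "build accepted for thorough testing"

print(smoke_test(StubClient()))
```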


Sanity Test [9-10]
 In software development, the sanity test determines whether
it is reasonable to proceed with further testing.
 Software sanity tests are commonly conflated with smoke
tests. A smoke test determines whether it is possible to
continue testing, as opposed to whether it is reasonable.
 A software smoke test determines whether the program
launches and whether its interfaces are accessible and
responsive (for example, the responsiveness of a web page
or an input button).
 If the smoke test fails, it is impossible to conduct a sanity test.
 If the sanity test fails, it is not reasonable to attempt more
rigorous testing.
 Both sanity tests and smoke tests are ways to avoid wasting
time and effort by quickly determining whether an
application is too flawed to continue detailed testing.



State Transition Testing
 Exposes defects triggered by moving from one program state to another.

 Example: in the case of ATM machine software, consider the various operations of the ATM, such as “Withdraw Cash”, “Balance Inquiry” and “Transfer Cash”, as different states; defects that arise when moving from the Menu Selection state to the Withdraw Cash state fall under state transition testing


State Transition Testing
 A state transition model has four basic parts:
 The states that the software may occupy (open/closed or funded/insufficient funds);
 The transitions from one state to another (not all transitions are allowed);
 The events that cause a transition (withdrawing money, closing a file);
 The actions that result from a transition (an error message, or being given your cash).

 Electronic clock example

 A simple electronic clock has four modes: display time, adjust time, display date and adjust date

 The change-mode button switches between display time and display date

 The reset button switches from display time to adjust time, or from display date to adjust date

 The set button returns from adjust time to display time, or from adjust date to display date
[Figure: state transition diagram for the electronic clock example]
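The clock's behaviour can be written down as a transition table and tested edge by edge. A minimal sketch, with state and button names invented to match the description above:

```python
# (current state, button) -> next state; illegal presses are simply absent.
TRANSITIONS = {
    ("display_time", "change_mode"): "display_date",
    ("display_date", "change_mode"): "display_time",
    ("display_time", "reset"): "adjust_time",
    ("display_date", "reset"): "adjust_date",
    ("adjust_time", "set"): "display_time",
    ("adjust_date", "set"): "display_date",
}

def press(state, button):
    return TRANSITIONS[(state, button)]  # KeyError = disallowed transition

# Exercise every legal transition once:
for (state, button), expected in TRANSITIONS.items():
    assert press(state, button) == expected

# Negative case: 'set' has no meaning in display mode, so it must be rejected.
try:
    press("display_time", "set")
    raise AssertionError("expected the transition to be rejected")
except KeyError:
    pass
```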


Installation Testing [17-18]
 Installation testing is a kind of quality assurance work in the software industry that
focuses on what customers will need to do to install and set up the new software
successfully. The testing process may involve full, partial or upgrade install/uninstall processes.

 This testing is typically done by the software testing engineer in conjunction with the
configuration manager.

 The process of installing software can differ across platforms: it could be a neat GUI for Windows or a plain command line for Unix boxes.



Installation Testing [17]
 If installation is dependent on some other components like database, server etc. test
cases should be written specifically to address this.

 Negative cases like insufficient memory, aborted installation should also be covered as
part of installation testing.

 Software Distribution cases


 If software is distributed in physical CD format, test activities should include the following:
 Test cases should be present to check the sequence of CDs used.
 Test cases should be present for the graceful handling of corrupted CD.

 If software is distributed over the Internet, test cases should be included for
 Bad network speed and broken connection.
 Firewall and security related.
 Size and approximate time taken.
 Concurrent installation/downloads



Security Testing
 Security testing is a process to determine that an information system protects data and maintains
functionality as intended.
 To check whether there is any information leakage.
 To test whether the application permits unauthorized access.
 To find out all the potential loopholes and weaknesses of the system.
 The primary purpose of security testing is to identify vulnerabilities and subsequently repair them.



Security Testing Techniques [12]
Vulnerability Scanning

 It involves scanning of the application for all known vulnerabilities.


 A computer program designed to assess computers, computer systems,
networks or applications for weaknesses.
 Generally done through various vulnerability scanning software, e.g., Nessus, Sara and ISS.

Security Scanning

 It is the combination of Scanning and manual verification of the system and applications.

Risk Assessment

 It is a method of analyzing and deciding upon risk, which depends on the type of loss and the possibility/probability of loss occurrence.
 Risk assessment is carried out in the form of various interviews, discussions and analysis of the
same.



Security Testing Techniques[12-13]
Penetration Testing/ Ethical Hacking [7]
 An ethical hacker is a computer and network expert
who attacks a security system on behalf of its owners,
seeking vulnerabilities that a malicious hacker could
exploit.
 To test a security system, ethical hackers use the same
methods as their less principled colleagues, but report
problems instead of taking advantage of them.
 Ethical hacking is also known as penetration testing,
intrusion testing and red teaming.
 An ethical hacker is sometimes called a white hat, a
term that comes from old Western movies, where the
"good guy" wore a white hat and the "bad guy"
wore a black hat.
 This is live test mimicking the actions of real life
attackers



Security Testing Techniques [12]
Security Auditing

 Security Auditing involves hands-on internal inspection of Operating Systems and Applications,
often via line-by-line inspection of the code.

Security Posture Assessment

 It combines Security Scanning, Ethical Hacking and Risk Assessments to show an overall Security
Posture of the organization.
 Security Posture Assessment (SPA) is meant to establish the current baseline security of the network
and systems by discovering known vulnerabilities and weaknesses, with the intention of providing
incremental improvements to tighten the security of the network and systems.

Password Cracking

 Password cracking programs can be used to identify weak passwords.


 Password cracking verifies that users are employing sufficiently strong passwords.
Pairwise Testing
 Pairwise (all-pairs) testing is an effective test case generation technique that is based
on the observation that most faults are caused by interactions of at most two factors.

 So for example a web form may work fine using Firefox. And the web form may
work fine if the user selects England as the location. But it may have an error if both
Firefox is used and England is selected. This pair causes an error where neither alone
causes an error.

A simple example with two factors:

Combination   Bed Linen   Tea
1             Checked     Checked
2             Checked     Unchecked
3             Unchecked   Checked
4             Unchecked   Unchecked

Combinations: 2 x 2 = 4
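A sketch of what "covering all pairs" means once a third factor is added (the browser, location and account factors are invented here, echoing the web-form example above): four tests suffice instead of the 2 x 2 x 2 = 8 exhaustive combinations, and the helper verifies that every two-way pair really appears.

```python
from itertools import combinations, product

factors = {
    "browser": ["Firefox", "Chrome"],
    "location": ["England", "France"],
    "account": ["guest", "member"],
}

# A hand-picked pairwise suite: 4 tests instead of 8 exhaustive ones.
suite = [
    ("Firefox", "England", "guest"),
    ("Firefox", "France", "member"),
    ("Chrome", "England", "member"),
    ("Chrome", "France", "guest"),
]

def covers_all_pairs(suite, factors):
    """Check that every value pair of every two factors occurs in the suite."""
    names = list(factors)
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        required = set(product(factors[a], factors[b]))
        seen = {(test[i], test[j]) for test in suite}
        if required - seen:
            return False
    return True

print(covers_all_pairs(suite, factors))  # True with only 4 of the 8 tests
```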
Usability Testing
 Usability testing is a technique used to evaluate a product by testing it on users. This
can be seen as an irreplaceable usability practice, since it gives direct input on how
real users use the system.
 Usability testing measures the usability, or ease of use, of a specific object or set of
objects.
 User interviews, surveys, video recording of user sessions, and other techniques can be
used



Usability Testing
 The aim is to observe people using the product to discover errors and areas of
improvement. Usability testing generally involves measuring how well test subjects
respond in four areas:
 Efficiency -- How much time, and how many steps, are required for people to complete basic tasks? (For
example, find something to buy, create a new account, and order the item.)
 Accuracy -- How many mistakes did people make? (And were they fatal or recoverable?)
 Recall -- How much does the person remember afterwards or after periods of non-use?
 Emotional response -- How does the person feel about the tasks completed? Is the person confident,
stressed? Would the user recommend this system to a friend?



Usability Testing
 How many users to test? [14]
 In the early 1990s, Jakob Nielsen, at that time a
researcher at Sun Microsystems, popularized the concept
of using numerous small usability tests typically with only
five test subjects

 Why only need to test with 5 users? [15]


 Some people think that usability is very costly and complex
and that user tests should be reserved for the rare web
design project with a huge budget and a lavish time
schedule. Not true.
 Elaborate usability tests are a waste of resources. The best
results come from testing no more than 5 users and running
as many small tests as you can afford.
 In earlier research, Tom Landauer showed that the number of usability problems found in a usability test with n users is N(1 - (1 - L)^n), where N is the total number of usability problems in the design and L is the proportion of usability problems discovered while testing a single user.
 The typical value of L is 31%, averaged across a large number of projects he studied.
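A quick worked evaluation of the formula with the typical L = 0.31 shows why five users are considered enough:

```python
L = 0.31                          # typical per-user discovery rate
for n in (1, 2, 3, 5, 10, 15):
    found = 1 - (1 - L) ** n      # share of the N total problems found
    print(f"{n:2d} users -> {found:.0%} of problems found")
# 5 users already uncover ~85% of the problems (84% with L = 0.31 exactly),
# so several small tests beat one large, expensive one.
```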



Validation Testing
 Acceptance Testing

 Alpha Testing

 Beta Testing



Acceptance Testing [16]
 It is virtually impossible for a software developer to foresee how the customer will really use
a program

 When custom software is built for one customer, a series of acceptance tests are conducted to
enable the customer to validate all requirements

 Conducted by the end user rather than software engineers

 An acceptance test can range from an informal test drive to a planned and systematically
executed series of tests

 Software developers often distinguish acceptance testing by the system provider from
acceptance testing by the customer (the user or client) prior to accepting transfer of ownership.
In the case of software, acceptance testing performed by the customer is known as user
acceptance testing (UAT), end-user testing, site (acceptance) testing, or field (acceptance)
testing



Alpha Testing

 In this type of testing, the users are invited to the development center, where they use the
application and the developers note every particular input or action carried out by the user.
Any type of abnormal behavior of the system is noted.

 Alpha tests are conducted in a controlled environment



Beta Testing

 The beta test is conducted at end-user sites. Unlike alpha testing, the developer is generally not present.

 Therefore the beta test is a live application of the


software in an environment that cannot be controlled by
the developer

 In this type of testing, the software is handed over to


the user in order to find out if the software meets the
user expectations and works as it is expected to.

 The end user records all problems that are encountered


during beta testing and reports these to the developer
at regular intervals

 As a result of problems reported during beta tests,


software engineers make modifications and then
prepare for release of the software product



References

[1] LandMark Resource Software House testing content


[2] http://www.softwaretestinghelp.com/black-box-testing/
[3] Roger Pressman, Software Engineering, 7th Edition
[4] http://www.techopedia.com/definition/7035/end-to-end-test
[5] http://www.asi-test.com/ASI/system-vs-endtoend-testing/
[6] http://www.guru99.com/end-to-end-testing.html
[7] http://testing-a-software.blogspot.com/2011/11/common-issue-found-from-globalization.html
[8] www.onestoptesting.com
[9] http://en.wikipedia.org/wiki/Sanity_testing
[10] http://www.softwaretestingstuff.com/2009/12/difference-between-smoke-sanity-testing.html
[11] http://en.wikipedia.org/wiki/Recovery_testing
[12] http://en.wikipedia.org/wiki/Security_testing
[13] http://searchsecurity.techtarget.com/definition/ethical-hacker
[14] http://en.wikipedia.org/wiki/Usability_testing
[15] http://www.useit.com/alertbox/20000319.html
[16] http://en.wikipedia.org/wiki/Acceptance_testing
[17] http://en.wikipedia.org/wiki/Installation_testing
[18] http://www.testinggeek.com/installation-testing



For any query, feel free to ask.

Dr. Ali Javed
