
SQA: monitoring and improving the process, making sure that agreed standards and
procedures are followed, and ensuring that problems are found and resolved. It is oriented toward
prevention and is carried out throughout the entire life cycle of every project. Methodology and
standards development are examples of QA activities. A QA review focuses on the process
elements of a project, e.g., are requirements being defined at the proper level of detail?

QC: the actual testing, in which we test the software, track bugs, and make sure that the
software meets the user requirements. QC is part of QA. Its aim is to try to break the system, e.g., are
the defined requirements the right requirements?

Validation: “Are we doing the right job?” It involves testing, bug tracking, etc. It is product oriented.
Validation falls under QC and takes place after verification. Validation is concerned with whether the
right functions of the program have been properly implemented, and whether those functions
produce the correct output for a given input value.

Verification: “Are we doing the job the right way?” It involves reviews and meetings to evaluate
documents, plans, code (walkthroughs), requirements, and specifications. It is process
oriented. Verification involves checking whether the program conforms to its specification, i.e.,
whether the right tools and methods have been employed. Thus, it focuses on process correctness.

Black box testing: not based on knowledge of the internal design. Tests are based on requirements and
functionality.

White box testing: based on the internal design and code. It involves code walkthroughs and covers
branches, paths, loops and conditions.

Unit Level Testing: this is the most ‘micro’ scale of testing. It requires knowledge of the design and code
(code oriented). Individual components are tested to ensure that they operate correctly. Each
component is tested independently, without the other system components.
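
A minimal sketch of a unit test, using Python's built-in unittest module (add_tax is a made-up component under test, not something from the text above):

import unittest

def add_tax(amount, rate=0.10):
    """Component under test: returns the amount plus tax."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return round(amount * (1 + rate), 2)

class AddTaxTest(unittest.TestCase):
    def test_typical_value(self):
        # The component is exercised in isolation, with no other system parts.
        self.assertEqual(add_tax(100), 110.0)

    def test_negative_amount_rejected(self):
        with self.assertRaises(ValueError):
            add_tax(-1)

if __name__ == "__main__":
    unittest.main()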

Incremental integration testing: testing continues as new functionality is added; verify that the
existing application keeps working correctly while the new changes are still being completed.

Module testing:

A module is a collection of dependent components such as an object class, an abstract data type
or some looser collection of procedures and functions. A module encapsulates related
components so it can be tested without other system modules

Integration Testing: tests whether the parts of the application work together correctly when they are
combined. It is design oriented. This phase involves testing collections of modules that have been
integrated into sub-systems. Sub-systems may be independently designed and implemented. The
most common problems that arise in large software systems are sub-system interface
mismatches. The sub-system test process should therefore concentrate on the detection of
interface errors by rigorously exercising these interfaces.
There are three basic integration test methods:
All-at-once
Bottom-up
Top-down
The all-at-once method provides a useful solution for simple integration problems involving a
small program, possibly using a few previously tested modules.

Bottom-up testing involves individual testing of each module using a driver routine that calls the
module and provides it with needed resources. When the upper module that would normally send
data to a lower module is not yet completed, we have to create drivers. A driver is a dummy
program that calls the lower module and supplies its inputs.
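
A minimal sketch of such a driver in Python (calculate_discount is a made-up lower-level module): the upper module is not finished, so a throwaway driver supplies inputs to the lower module and checks its outputs.

def calculate_discount(order_total):
    """Lower-level module under test."""
    return order_total * 0.05 if order_total >= 1000 else 0.0

def driver():
    # Stands in for the unfinished upper module: supplies inputs and checks outputs.
    cases = [(500, 0.0), (1000, 50.0), (2000, 100.0)]
    for order_total, expected in cases:
        actual = calculate_discount(order_total)
        assert actual == expected, f"{order_total}: expected {expected}, got {actual}"
    print("all driver checks passed")

if __name__ == "__main__":
    driver()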

Functional testing: tests the functional requirements of the application.


System Testing: it is based on the overall requirement specification and covers all combined parts of
the system. The sub-systems are integrated to make up the entire system. The testing process is
concerned with finding errors that result from unanticipated interactions between sub-systems
and system components. It is also concerned with validating that the system meets its functional
and non-functional requirements. Examples of system testing issues:
resource-loss bugs, throughput bugs, performance, security, recovery,
transaction synchronization bugs (often misnamed "timing bugs").
Monkey testing: feeding the application random, unplanned inputs to see whether it breaks.
Gorilla testing, however, has nothing to do with this. It is an intense round
of testing, quite often redirecting all available resources to the activity.
The idea here is to test as much of the application in as short a period of
time as possible.

Sanity Testing or Smoke Testing: done in the early phase of a build, to check whether the software is
stable enough for further testing. If the software is crashing the system every five minutes, corrupting
the database, etc., then it is not in a good enough state for further testing.
The basic logic behind smoke and sanity testing is to verify that the software is ready for further, deeper
testing. Smoke tests get their name from the electronics industry: the circuits are
laid out on a breadboard and power is applied; if anything starts smoking,
there is a problem. In the software industry, smoke testing is a shallow and
wide approach to the application: you test all areas of the application
without getting too deep. This is also known as a Build Verification Test, or
BVT. In comparison, sanity testing is usually narrow and deep: it looks
at only a few areas, but at all aspects of that part of the application. A smoke
test is scripted, either using a written set of tests or an automated
test, whereas a sanity test is usually unscripted.
Regression testing: re-testing after modifications of the software or its environment, to check that previously working functionality has not been broken.
Re-testing: running a test again after a reported problem in the software or its environment has been fixed, to confirm the fix.
Acceptance Testing: tests that the application meets the acceptance criteria (the specification given
by the client) and that the required functionality has been implemented. This is the final stage in the testing
process before the system is accepted for operational use. The system is tested with data
supplied by the system client rather than simulated test data. Acceptance testing may reveal
errors and omissions in the system requirements definition (user oriented), because real
data exercises the system in different ways from the test data. Acceptance testing may also
reveal requirements problems where the system facilities do not really meet the users' needs
(functional) or the system performance (non-functional) is unacceptable.
Performance Testing: application-level testing. When, say, 100 users hit a request on the server or make
an HTTP connection at the same time, check how your application behaves. If there is a problem,
you optimize (tune) your code, or the database if one is used, until you reach the
bottleneck of your code (the bottleneck is the point beyond which further code optimization does not help).
Load testing: done after performance testing. If there is no further way to optimize the code or
database, then you need to look at your hardware. It means "loading up the system and seeing what
breaks": testing an application under heavy loads, such as testing a web site under a range of
loads to determine at what point the system's response time degrades or fails. Load can be
increased by adding users and by using a heavier database. It shows how an application behaves under
different user loads and determines the maximum number of concurrent users accessing the site or
making HTTP connections at the same time. In load testing you also check how much CPU,
memory, etc. the system uses; it is system-level testing.
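
A minimal load-test sketch using only the Python standard library: it fires N concurrent requests at a URL and reports response times. The URL and the user counts are placeholders to adjust for your own system; a real load test would use a dedicated tool.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/login"   # placeholder target

def one_request(_):
    start = time.time()
    with urllib.request.urlopen(URL, timeout=30) as resp:
        resp.read()
    return time.time() - start

def run_load(concurrent_users):
    # Simulate N users hitting the server at the same time.
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        times = list(pool.map(one_request, range(concurrent_users)))
    print(f"{concurrent_users} users: avg {sum(times)/len(times):.2f}s, max {max(times):.2f}s")

for users in (1, 10, 50, 100):   # step the load up
    run_load(users)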
Stress Testing: testing a system under heavy stress, with large complex queries and repetition
of the same action, and determining at what point the system's response time degrades. It means stressing
the system by going beyond its specified limits and testing how well the system can cope
with overload situations. We can also say that it is a kind of negative testing: we increase
the application load beyond its limit and check that the system still behaves in a decent manner (e.g.
not corrupting or losing data).
Response times are compared against Service Level Agreements (SLAs).

Scalability Testing: scalability testing is done to determine the threshold of concurrent users at
which the system fails under any of the varied test configurations. It is also done to
measure and analyze the speed of application operation on different hardware/software platforms,
networks, and database configurations, and to identify common causes of server hanging or
crashing.

Benchmarking: end-to-end timing, i.e., how much time the server takes to return a
response to the client.

Top-down testing: testing starts with the most abstract component and works downwards.
Bottom-up testing: testing starts with the fundamental components and works upwards.
Thread testing: used for systems with multiple processes, where the processing of a
transaction threads its way through these processes, or when more than one thread is used in the
system, for example when more than one user debits the same account at the same time.

Back-to-back testing
Used when several versions of a system are available. The versions are run together with the same inputs and
their outputs are compared.

Performance Testing: This is used to test the run-time performance of software. The Performance
requirements can be obtained from Software requirements Specifications.

Security testing. This attempts to verify that the protection mechanisms built into the system will protect it
from improper access.

Recovery testing. This forces the software to fail in a variety of ways and verifies that recovery is
properly performed.

Pilot Testing
Testing that involves the users just before the actual release, to ensure that users become familiar
with the release contents and ultimately accept it. It is often used when an ERP system is released.
It typically involves many users, is conducted over a short period of time, and is tightly controlled.

Driver: dummy main program


Stub: dummy sub-program or data
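
A minimal sketch of a stub in Python (fetch_exchange_rate_stub and convert are made-up names): the module under test depends on a lower-level service that is not ready yet, so a dummy stand-in returns canned data.

def fetch_exchange_rate_stub(currency):
    # Stub: stands in for the real rate service and returns fixed, canned data.
    return {"EUR": 0.9, "GBP": 0.8}.get(currency, 1.0)

def convert(amount, currency, rate_lookup=fetch_exchange_rate_stub):
    """Upper-level module under test; the real rate service is not yet available."""
    return round(amount * rate_lookup(currency), 2)

# Exercise the upper module against the stub.
assert convert(100, "EUR") == 90.0
assert convert(100, "USD") == 100.0
print("upper module works against the stub")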

V-Model: First definition -> the 'V'-shaped model describes a process in which analysis, design,
coding and testing are carried out side by side while constructing the application: once a piece of coding
finishes, it goes to the tester to test for bugs; if we get an OK from the tester, we can immediately
continue coding; after more coding it is again sent to the tester, who checks for bugs and sends it back
to the programmer, until the programmer can finish up by implementing the project. It is the model
used by most companies.
Second definition -> the V-model is a model in which testing is done in parallel with development.
Third definition -> in the software development life cycle, both the development activity and the
testing activities start almost at the same time with the same information in their hands. The
development team applies "do" procedures to achieve the goals and the testing team applies
"check" procedures to verify them. It is a parallel process that finally arrives at a product with
almost no bugs or errors.
Comments: the traditional waterfall model does not allow testing and coding to proceed in
parallel. The V-model in the SDLC allows testing and coding to be parallel activities, which makes
it possible to handle changes more dynamically.
Component Testing / Unit Testing: each feature specified in the component design has been
implemented in the component.
Interface Testing: the linked components work together.
System Testing: the whole system works as per the requirements.
Release Testing: the new or changed system will work in the existing business environment.
Does it affect any other systems running on the hardware?
Is it compatible with other systems?
Does it have acceptable performance under load?
Mutation Testing: a method whereby errors are purposely inserted into a program under test to
verify that the tests can detect them. Also known as error seeding.

FU = FG × (FE / FEG)
where:
FU = number of undetected errors
FG = number of non-seeded errors detected
FE = number of seeded errors
FEG = number of seeded errors detected
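
A small worked calculation using the formula exactly as stated above, with purely illustrative counts:

FE = 20      # seeded errors
FEG = 16     # seeded errors detected
FG = 40      # non-seeded errors detected
FU = FG * (FE / FEG)   # formula as given above
print(FU)    # prints 50.0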

Alpha testing - testing of an application when development is nearing completion; minor design
changes may still be made as a result of such testing. Typically done by end-users or others, not
by programmers or testers.
Beta testing - testing when development and testing are essentially completed and final bugs and
problems need to be found before final release. Typically done by end-users or others, not by
programmers or testers.
Link testing: This type of testing determines if your site's links to internal and external Web pages
are working. A Web site with many links to outside sites will need regularly scheduled link testing,
because Web sites come and go and URLs change. Sites with many internal links (such as an
enterprisewide Intranet, which may have thousands of internal links) may also require frequent
link testing.
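
A minimal link-check sketch using only the Python standard library: it fetches one page, collects the href targets, and reports the HTTP status of each. The start URL is a placeholder, and a real site check would also need recursion, de-duplication and rate limiting.

import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

START_URL = "http://www.example.com/"   # placeholder

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(START_URL, value))

page = urllib.request.urlopen(START_URL).read().decode("utf-8", errors="ignore")
collector = LinkCollector()
collector.feed(page)

for link in collector.links:
    try:
        status = urllib.request.urlopen(link, timeout=10).status
    except Exception as exc:      # broken link, bad scheme, timeout, ...
        status = exc
    print(link, "->", status)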
HTML validation: the need for this type of testing is determined by the type of browser(s)
expected to be used.
Reliability and recovery testing: Do you require uninterrupted 24 * 7 availability? Redundant
database servers? Scalable Web servers? Depending on how critical your Web site is to your
business, you may want to simulate various "emergency" scenarios (such as failure of a hard
drive on the Web or database server, or communication link failures) in a test system to be sure
that your production system will handle them successfully.

Server log / report testing: Web sites that use advertising and track site usage for
marketing needs may need extensive testing to ensure the accuracy of their logging and reporting
capabilities.

GUI Testing:
Information is processed data provided by a software application, and GUIs
are the means through which data is fed into the software for processing. Each process is
mapped to a functionality. While testing the GUIs, the functionalities have to be mapped to the
GUIs that appear on the screen, and the presence of each GUI element needs to be justified in
terms of the functionality it supports.

Field Level Validation

The data that is fed into the application should be something the application is able to process.
Ensure that the data input into the system is valid for processing; the system should not accept any
data that is not valid. Tests have to be done to ensure this, both at the data level and at the GUI level.
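
A minimal sketch of such field-level checks in Python (validate_age and its 1-120 rule are made up for illustration):

def validate_age(value):
    """Accept only whole numbers from 1 to 120; reject everything else."""
    if not str(value).isdigit():
        return False
    return 1 <= int(value) <= 120

# Valid data must be accepted, invalid data must be rejected.
assert validate_age("35") is True
assert validate_age("0") is False        # below the allowed range
assert validate_age("121") is False      # above the allowed range
assert validate_age("abc") is False      # non-numeric input
assert validate_age("") is False         # empty field
print("field-level validation checks passed")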
Configuration Management: includes the procedures and techniques for assessing the impact of
proposed changes and then tracking and documenting the changes that are made.
The configuration includes policies, system hardware and software, documentation,
operational procedures, freeway geometries and associated infrastructure (e.g., signing and
lighting), incident management strategies, and work zone procedures.
Software Test Plan: is a document that describes the objectives, scope, approach, and focus of
software testing efforts.
A test plan is a document that collects and organizes testcases in a form that can be presented to
project personnel, project management and external clients. A solid, well-written test plan should
allow a new tester to step in and easily execute the testcases by simply following the test steps.

A test plan should include:

• An overview of the project


• Any assumptions made in the course of creating the testcases
• A description of how Build Verification Testcases (BVTs) are distinguished
• Testcases required for testing the build, core, user interface, error handling, system, load,
stress and performance functionality
• Expected results for each testcase
• Prioritization of testcases, if any

A test plan may include:

• Unique testcase IDs for each testcase (these may be generated by a testcase database)
• A listing of requirement IDs that correspond to the requirement being tested by the
testcase
The project requirements can be used as the starting point for the creation of a project’s test plan.
A test plan should NOT include descriptions of methodologies, practices, standards, defect
reporting methods, corrective action methods, tools or quality requirements.

Test Case: a document that describes an input, action, or event and an expected response, in order to
determine whether a feature of an application is working correctly.
When to stop testing?
Deadlines (release deadlines, testing deadlines, etc.)
Test cases completed
Bug rate falls below a certain level
Beta and alpha testing have been completed
All functionality has been covered
What can be done if requirements are changing continually:
Write well-commented code.
Start a prototyping process; when new changes are made, confirm them with the client.
The initial project schedule should allow extra time for new changes.
Adopt automated testing.
----------------------------------------------------------------------------------------------------

Gray box Testing: in this strategy, black box testing is combined with knowledge of the database:
validation such as SQL queries against the database, adding/loading data sets to confirm
functions, and querying the database to confirm expected results.
The Graybox methodology is defined as “black box testing + white box testing + regression testing +
mutation testing”. The modified Graybox method also addresses real-time performance; the original
Graybox methodology did not address real-time or system-level testing.
The Gray box methodology is a ten-step process for testing computer software:
1. Identify inputs
2. Identify outputs
3. Identify major paths
4. Identify sub-function (SF) X
5. Develop inputs for SF X
6. Develop outputs for SF X
7. Execute test case for SF X
8. Verify correct result for SF X
9. Repeat steps 4-8 for other sub-functions
10. Repeat steps 7 and 8 for regression

Scalability - a scalable application has a response time that increases linearly as load
increases. Such an application will be able to process more and more volume by adding more
hardware resources in a linear (not exponential) fashion.

Volume Testing: giving a defined volume of data to the database. Suppose our application is specified
to work with 10,000 records; then test the application with 10,000 records.

Load and Scalability Testing. Load and scalability testing has two forms:

• Test response time as you increase the size of your database

• Test response time as you increase the number of concurrent users

The purpose of load and scalability testing is to ensure that your application will have a good
response time during peak usage. You can also test how your application will behave over time
(as your website contains more and more data in your database). To begin testing, write some
testing scripts that will populate your database with an average amount of data. Run your
performance tests, measure your response time. Then populate your database with an extreme
amount of data (3 to 4 times more data than you can foresee having in 3 years). Run your
performance tests again. If response times are significantly larger for the second test, then
something is wrong.
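
A minimal sketch of this kind of database-growth experiment using Python's built-in sqlite3 module. The table, row counts and query are made up for illustration; a real test would populate your own schema on your own database server.

import sqlite3, time

def timed_query(row_count):
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, body TEXT)")
    con.executemany("INSERT INTO messages (body) VALUES (?)",
                    (("message %d" % i,) for i in range(row_count)))
    start = time.time()
    # The same query is timed against a moderately and a heavily populated table.
    con.execute("SELECT * FROM messages WHERE id = ?", (row_count // 2,)).fetchone()
    elapsed = time.time() - start
    con.close()
    return elapsed

print("moderately populated:", timed_query(10_000))
print("over-populated      :", timed_query(50_000))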

To run your performance tests, you will want to simulate server usage at different loads. As a rule
of thumb, I simulate low load (one to 5 concurrent users), medium load (10-50 concurrent users),
high load (100 concurrent users) and extreme load (1000+ concurrent users). Note that these
numbers are arbitrary and depend on your business needs. Also, simulating 10 concurrent users
with load testing software isn’t representative of 10 people, since each “robot” in the load test may
wait just milliseconds before hitting the server again. Thus, using a load tester to simulate 10
users is probably more representative of the web surfing patterns of 30-40 people.

Once you have tested at all of these load levels, you can compare average response times to
see if your system scales, that is, whether the response time increases linearly.

Interpreting the results

The fun part of this process is interpreting the results of your load testing. Let us examine some
of the different possibilities:

1. Response time increases too much when the database is over-populated


Response time should not increase too much if you move from a database with 100 rows
in its tables to 50,000. Database indexing technology makes finding a row in a table take
a matter of milliseconds, even if there are hundreds of thousands of rows. Thus, if your
response time increases too much after moving from a moderately populated database to
an over populated database, then you probably haven’t indexed your appropriate
columns yet.
2. Response time increases exponentially as load increases
If your system becomes unusable as you increase concurrent users, then your system
is not scalable. Interpreting these results is difficult, as the problem could be with
hardware, deployment configuration, architecture, etc. Make sure you watch the server
resources during the tests:
1. Watch memory requirements
2. Watch CPU usage
If the CPU is over-used, you need a faster processor or more processors. If the CPU is
under-used, then the problem is probably input/output (I/O) related. Check your
database connections, your running thread count, and the network configuration
of your test boxes.

If, after checking your configuration, you have verified that the slowdown is not a hardware bottleneck and
have looked over your architecture for code to optimize, it is time to run a code profiler.

Experiments in Performance Testing:

The first step in testing TheServerSide was populating the database with test data. After
populating it to a moderate amount and then to an extreme amount (adding 16,000 messages and
40,000 users to our database), we found a serious problem. The response time for our top-level
pages jumped from 2 seconds to 12 seconds, with a single user.

The problem we had indicated that something was wrong in the database. After checking how our
database handled our queries, we discovered that our primary key columns (and others) were not
being indexed properly. This means that the database had to do linear searches, even for
ejbFindByPrimaryKey(), which is the most common of calls.

Fail-over Testing
Server failures are highly undesirable, but they do happen. In a real scenario, more than
one server will be deployed to serve many users. The servers need not all be active all the time: they may
need maintenance and be shut down, or they may become non-functional under
unexpected circumstances. When a server goes down, the session established between
the user and the server should not be lost. If the model is fail-over enabled, the session information
is persisted to the External State Store (ESS). Tests should be done to ensure that
the load-balancing server takes the session information of server A and routes it to server B
when A goes down.

Database Script Testing

Database scripts will be created and run across various databases so that they are
database independent. This test ensures that the product can use any database as its
back end.

Cyclomatic Complexity

Cyclomatic complexity is a measure of the complexity of a software module based on the edges,
nodes, and components of its control-flow graph. It gives an indication of how many independent
paths through the module need to be tested.
The calculation of cyclomatic complexity is as follows:
CC = E - N + 2P
where CC is the cyclomatic complexity, E is the number of edges, N is the number of nodes, and
P is the number of connected components; for a single connected graph this reduces to CC = E - N + 2.
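
A small illustrative sketch (the function below is made up): for a single function, an equivalent way to arrive at the same number is to count the binary decision points and add one.

def shipping_cost(weight, express):
    # Three binary decisions (two if's and one elif) -> CC = 3 + 1 = 4,
    # so at least four linearly independent paths should be exercised by tests.
    if weight <= 0:
        raise ValueError("weight must be positive")
    if express:
        rate = 10.0
    elif weight > 20:
        rate = 5.0
    else:
        rate = 2.0
    return weight * rate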

Gray Box Testing:

What is gray box testing? Where and how is gray box testing used?
Answer: Let's begin with the most basic forms of testing: white box (or glass box) versus black box
testing. I will quote from Testing Computer Software, Second Edition (1993), Cem Kaner, Jack
Falk, Hung Quoc Nguyen.
Glass box testing is distinguished from black box testing, in which the program is treated as a
black box: you can't see into it. The tester (or programmer) feeds it input data, observes output
data, but does not know, or pretends not to know, how the program works. The test designer
looks for interesting input data and conditions that might lead to interesting outputs. Input data are
"interesting" representatives of a class of possible inputs if they are the ones most likely to
expose an error in the program. In contrast, in glass box testing, the programmer uses her
understanding of and access to the source code to develop test cases. At this point, on page 41 of
Testing Computer Software, they go into the benefits of white box testing, and they
spend the next few pages going over the concepts of white box testing. In recent years I have
also heard of gray box testing. In this form of testing, the tester has access to some of the inner
workings of the system, usually the database, but not the code. White box testers have access to
the code, but even a black box tester can know the branches of code: the rules within the code
that cause operations to fork.
A white box tester generally uses the code, and the ability to create drivers and stubs, to test the
code directly; they do not rely on the UI to do it.
The typical gray box tester is permitted to set up their testing environment, such as seeding a database,
and can view the state of the product after their actions, for example by performing a SQL query on the
database to be certain of the values of columns. It is used almost exclusively by client-server
testers or others who use a database as a repository of information, but can also apply to a tester
who has to manipulate XML files (a DTD or an actual XML file) or configuration files directly. The
true black box tester looks only at the GUI and cannot touch intermediate files, registry entries,
databases, etc., nor are they permitted to see the results their actions have wrought, other than
through the UI. They are, therefore, only permitted to use the UI to do their testing.
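
A minimal gray-box sketch in Python using the built-in sqlite3 module: the tester drives the application through its normal interface and then queries the database directly to confirm the stored values. register_user and the schema are made-up stand-ins for the real application.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (username TEXT, status TEXT)")

def register_user(conn, username):
    # Stand-in for application code exercised through its normal interface.
    conn.execute("INSERT INTO users (username, status) VALUES (?, 'ACTIVE')", (username,))
    conn.commit()

# Black-box step: drive the application.
register_user(db, "joe")

# Gray-box step: query the database directly to confirm the expected result.
row = db.execute("SELECT status FROM users WHERE username = ?", ("joe",)).fetchone()
assert row == ("ACTIVE",), f"unexpected state in database: {row}"
print("database state confirmed:", row)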

Severity tells us how bad the defect is. Priority tells us how soon it is desired to fix the problem.
Priority is business-driven.
Severity is technical.

Equivalence Partitioning
Equivalence means that all data in a class takes the same logic path, for example all valid values.
The input data of a program is divided into different categories so that test cases can be developed for
each category of input data. The different categories of input data are called equivalence
classes.
For example: prepare test data for an edit box that accepts numbers of up to 5 digits. The test data
could be -1, 0, 1, 100, 99999, 100000. We divide these values into three classes: the first, the negative/
over-length invalid cases -1 and 100000; the second, the valid values 1, 100 and 99999; and the third,
the boundary value 0. The values within each class are treated as equivalent.
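
A small sketch of partition-based test selection for the five-digit edit box described above (plain Python; field_accepts is a made-up stand-in for the field's validation rule, assumed to accept 0 to 99999):

def field_accepts(value):
    # Made-up stand-in for the edit box rule: a non-negative integer of at most 5 digits.
    return 0 <= value <= 99999

# One representative value per equivalence class is enough to cover the class.
partitions = {
    "valid (1..99999)":          100,
    "lower boundary (0)":        0,
    "invalid: negative":         -1,
    "invalid: too many digits":  100000,
}
for name, value in partitions.items():
    expected = name.startswith(("valid", "lower"))
    assert field_accepts(value) == expected, name
    print(f"{name:26s} -> {field_accepts(value)}")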

Boundary Value Analysis

Boundary value analysis is a method for analyzing the boundary values. In this case, both data input
and data output are tested. The rationale behind BVA is that errors typically occur at the
boundaries of the data; the boundaries are the upper and lower limits of a range of
values.
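
A small sketch of boundary-value test data for the same field, taking values at, just below, and just above each boundary (the 0 to 99999 range is the assumption carried over from the example above):

boundary_cases = [
    (-1, False),      # just below the lower boundary
    (0, True),        # lower boundary
    (1, True),        # just above the lower boundary
    (99998, True),    # just below the upper boundary
    (99999, True),    # upper boundary
    (100000, False),  # just above the upper boundary
]
for value, expected in boundary_cases:
    result = 0 <= value <= 99999   # the field's assumed validation rule
    assert result == expected, value
print("all boundary-value checks passed")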

ISO 9001

1. ISO 9001 requires an organization to establish procedures to control and verify
the design. (Control and verify the design.)
2. ISO 9001 requires an organization to define and plan its production process, and
it must continuously monitor and control the process. (Monitor and control the
process.)
3. ISO 9001 requires an organization to inspect or verify incoming material before
use and to perform in-process inspection and testing. It also requires keeping good
test status records. (Prepare raw data before starting testing and prepare all
documents.)
4. ISO 9001 emphasizes prevention and eliminating the causes of non-conformities,
focusing on defect reports and record documentation.
(How to prevent non-conformities.)
5. ISO 9001 emphasizes servicing. (Proper maintenance.)
CMM
1. The software development life cycle, including design, coding and testing, is
described at CMM Level 3; the design process is extended further at Level 4. (Design,
coding and testing are done properly.)
2. The production process is specified in the software development plan at CMM Level 2.
(The organization works with the process defined in the software
development plan.)
3. Production process control is defined at CMM Level 5. (Selecting the right
process.)
4. The CMM addresses testing at Level 2 with configuration management and at Level 3 with
testing practices.
5. Software quality assurance is addressed at CMM Level 2 and defect prevention at Level 5.
(How to prevent defects.)
6. Servicing is defined across all CMM processes; there is no single level for
servicing in the CMM. (Service is defined for all phases of the process.)
Q. Suppose we have a login page in a web-based application and we want to know how many
requests it is sending to the server (database)?
Ans. We open SQL Profiler and check how many requests the database server is
receiving.

What kind of testing should be performed on a web-based application:
• Time: download time of each page.
• Structural: How well do all of the parts of the Website hold together. Are all links inside
and outside the Website working? Do all of the images work? Are there parts of the
Website that are not connected?
• Content: Does the content of critical pages match what is supposed to be there? Do key
phrases exist continually in highly-changeable pages? Do critical pages maintain quality
content from version to version? What about dynamically generated HTML pages?
• Accuracy and Consistency: Are today's copies of the pages downloaded the same as
yesterday's? Close enough? Is the data presented accurate enough? How do you know?
• Response Time and Latency: Does the WebSite server respond to a browser request
within certain parameters? In an E-commerce context, how is the end to end response
time after a SUBMIT? Are there parts of a site that are so slow the user declines to
continue working on it?
• Performance: Is the Browser-Web-WebSite-Web-Browser connection quick enough?
How does the performance vary by time of day, by load and usage? Is performance
adequate for E-commerce applications? Taking 10 minutes to respond to an E-commerce
purchase is clearly not acceptable!
• Browser. The browser is the viewer of a WebSite and there are so many different
browsers and browser options that a well-done WebSite is probably designed to look
good on as many browsers as possible. This imposes a kind of de facto standard: the
WebSite must use only those constructs that work with the majority of browsers. But this
still leaves room for a lot of creativity, and a range of technical difficulties.
• HTML. There are various versions of HTML supported, and the WebSite ought to be built
in a version of HTML that is compatible. And this should be checkable.
• Java, JavaScript, ActiveX. Obviously JavaScript and Java applets will be part of any
serious WebSite, so the quality process must be able to support these. On the Windows
side, ActiveX controls have to be handled as well.
• Cgi-Bin Scripts. This is a link from a user action of some kind (typically from a FORM
passage or otherwise directly from the HTML, and possibly also from within a Java
applet). All of the different types of Cgi-Bin scripts (perl, awk, shell scripts, etc.) need to
be handled, and tests need to check "end to end" operation. This kind of a "loop" check is
crucial for E-commerce situations.
• Database Access. In E-commerce applications you are either building data up or
retrieving data from a database. How does that interaction perform in real-world use? If
you give it "correct" or "specified" input, does the result produce what you expect?

Navigation. Users move to and from pages, click on links, click on images (thumbnails), etc.
Navigation in a WebSite often is complex and has to be quick and error free.

Server Response. How fast the WebSite host responds influences whether a user (i.e. someone
on the browser) moves on or continues. Obviously, Internet loading affects this too, but this factor
is often outside the Webmaster's control at least in terms of how the WebSite is written. Instead, it
seems to be more an issue of server hardware capacity and throughput. Yet, if a WebSite
becomes very popular -- this can happen overnight! -- loading and tuning are real issues that
often are imposed -- perhaps not fairly -- on the WebMaster.

• Concurrent Users. Do multiple users interact on a WebSite? Can they get in each others'
way? While WebSites often resemble conventional client/server software structures, with
multiple users at multiple locations a WebSite can be much different from, and much more
complex than, conventional client/server applications.
• Fonts and Preferences. Most browsers support a wide range of fonts and presentation
preferences, and these should not affect how quality on a Website is assessed or
assured.
• Object Mode. Edit fields, push buttons, radio buttons, check boxes, etc. All should be
treatable in object mode, i.e. independent of the fonts and preferences.
• Frames. Windows with multiple frames ought to be processed simply, i.e. as if they were
multiple single-page frames.

Test Context. Tests need to operate from the browser level for two reasons: (1) this is where
users see a Website, so tests based in browser operation are the most realistic; and (2) tests
based in browsers can be run locally or across the Web equally well. Local execution is fine for
quality control, but not for performance measurement work, where response time including Web-
variable delays reflective of real-world usage is essential.

The Session method

HTTP is a connectionless protocol, so the server by itself cannot identify the sequence of pages a
given visitor has requested.

The first time a user accesses a page, some connections and disconnections take place. During
this process the server and the client exchange information to identify each other. Thanks to
this exchange of information, the server is able to identify a specific user, and this
information may be used to assign specific information to each specific client. This relationship
between computers is called a session. During the time a session is active, it is possible to assign
information to a specific client by using the Session method.
In this example, we ask for the username of the person in our index.asp page:
respondtoforms.asp
<% IF Request.form="" THEN %>
<html>
<title>Our private pages</title>
<body>
In order to access this pages fill the form below:<BR>
<form method="post" action="index.asp">
Username: <input type="text" name="username" size="20"><BR>
Password: <input type="password" name="password" size="15"><BR>
<input type="Submit" value="Submit">
</form>
</body>
</html>
<% ELSE %>

<%
IF Request.form("username")="Joe" AND Request.form("password")="please" THEN
%>
<%
Session("permission")="YES"
Session("username")="Joe"
%>

<html>
<title>Our private pages</title>
<body>

Hi <% =Session("username") %>, you are allowed to see these pages: <BR>
<A HREF="page1.asp">Page 1</A><BR>
<A HREF="page2.asp">Page 2</A>

</body>
</html>

<% ELSE %>

Error in username or password

<% END IF %>

<% END IF %>


Application Method:
With the Session method we have defined a value such as Session("whatever")="Joe", but this information
cannot be shared between visitors (Session("whatever") has a separate value for each visitor). To
allow sharing information between visitors, the Application method is used.

Sub Routines:
<%
TheName=request.form("name")

if TheName="John" then
ResponseToJohn()
else
ResponseToUnknown()
end if

Sub ResponseToJohn()
response.write ("Hi, John. How are you?")
response.write ("<br>Did you know I got married last month?")
End Sub

' Dummy response for unknown visitors, so the example runs end to end
Sub ResponseToUnknown()
response.write ("Hi. What is your name?")
End Sub
%>

Profiler. A profiler is a program that examines your application as it runs. It provides you with
useful run time information such as time spent in particular code blocks, memory / heap
utilization, number of instances of particular objects in memory, etc.

Test Case Sample (produced with Rational Robot; the original is an Excel file):

PROJECT: COES
Document References: CO
MODULE: Order Entry Ver 1.2
FORM REF: Authentication          Sec/Page REF NO.: 5.1.1
FUNCTIONAL SPECIFICATION: User Authentication          REF NO: 5.1.1.1
TEST DATE:                        Time Taken:
TEST OBJECTIVE: to check whether the entered user name and password are valid or invalid
PREPARED BY: Ashok
TEST CASE NO: oe_auth_1
TEST DATA: USER Name = COES and PASSWORD = COES

Step No | Steps                                              | Data                              | Expected Result
1       | Enter user name and press LOGIN button             | User Name = COES                  | Should display warning message "Please enter user name and password"
2       | Enter password and press LOGIN button              | Password = COES                   | Should display warning message "Please enter user name and password"
3       | Enter user name and password and press LOGIN button | USER = COES and Password = XYZ   | Should display warning message "Please enter user name and password"
4       | Enter user name and password and press LOGIN button | USER = XYZ and Password = COES   | Should display warning message "Please enter user name and password"
5       | Enter user name and password and press LOGIN button | USER = XYZ and Password = XYZ    | Should display warning message "Please enter user name and password"
6       | Enter user name and password and press LOGIN button | USER = " " and Password = " "    | Should display warning message "Please enter user name and password"
7       | Enter user name and password and press LOGIN button | USER = COES and Password = COES  | Should navigate to the CoesCategoryList.asp page
8       | Enter user name and password and press LOGIN button | USER = ADMIN and Password = ADMIN | Should navigate to the Maintenance page

6.2 Bug Review meetings


Regular weekly meetings will be held to discuss reported defects. The development
department will provide status updates on all defects reported, and the test department will
provide additional defect information if needed. All members of the project team will participate.

6.3 Change Request


Once testing begins, changes to the payroll system are discouraged. If functional changes
are required, these proposed changes will be discussed with the Change Control Board
(CCB). The CCB will determine the impact of the change and if/when it should be
implemented.

6.4 Defect Reporting


When defects are found, the testers will complete a defect report in the defect tracking
system. The defect tracking system is accessible by testers, developers and all members of
the project team. When a defect has been fixed or more information is needed, the developer
will change the status of the defect to indicate its current state. Once a defect is verified as
FIXED by the testers, the testers will close the defect report.
