
Performance Testing Fundamentals

Revision 1.0
March, 2017

Confidential and Proprietary Materials

This document is Confidential and Proprietary to SPM Software, Inc. No part of this
document may be reproduced or used in any form or by any means, graphic, electronic,
or mechanical, including photocopying, recording, taping, or information storage and
retrieval systems without written permission from SPM Software, Inc.
Contents
1. Overview
1.1 What is Performance? The End-User Perspective
1.2 Performance Measurement
1.3 Performance testing
1.3.1 Load testing
1.3.2 Stress testing
1.3.3 Endurance testing
1.3.4 Spike testing
1.4 Performance versus Scalability
1.5 QA Testing versus Performance Testing
2. Performance Testing Core activities
2.1 Summary Table of Core Performance-Testing Activities
2.2 Performance-Testing Activities Walkthrough
2.2.1 Identify the Test Environment
2.2.2 Identify Performance Acceptance Criteria
2.2.3 Plan and Design Tests
2.2.4 Configure the Test Environment
2.2.5 Implement the Test Design
2.2.6 Execute the Test
2.2.7 Analyze Results, Report, and Retest
3. Load generation
3.1 Creating a script for load generation of a web application
3.1.1 Load generation of a simple HTML page
3.1.2 Load generation of content dependent pages
3.1.2.1 Parameterize the script
3.1.2.2 Sessions and capture server responses
3.1.2.3 Checking for real results
3.1.3 Load generation
3.1.3.1 The concurrent/active users enigma
3.1.3.2 The ramp up
3.1.3.3 Duration
4. Monitoring
4.1 System resources
4.2 Software components and infrastructure
4.3 Correlation and logging
4.4 Prepare for analysis before the test run
5. What to test
5.1 Introduction
5.2 Work together with the functional test team
5.3 Risk based selection for performance tests
5.3.1 Risks for the organization
5.3.2 Risk or chance of a problem
5.3.3 Combined risk
5.4 Making and details of test cases
6. GLOSSARY

1. OVERVIEW
We have become used to having many services provided to us by computer systems, and many of these services are available today on the Internet. We have become used to buying items online, to our email always being accessible, to reading the news whenever we want, and to watching videos of just about anything on YouTube. For the service providers, the companies that build and operate these web sites, it is important that you are pleased with their sites. For you to be pleased, the site needs to do what it promises, properly and attractively. If a site promises to sell books, it should do just that: allow you to buy books through the site, in such a way that it is easy for you to use. But it should also do it exactly when you want to use it, and not frustrate you with slowly appearing pages or even time-out errors. In other words, the pages should load quickly. This matters even more when the service has a lot of competition: a study commissioned by Forrester Consulting for Akamai showed that 40% of users will leave a web site if it does not load in two seconds or less, or, even worse, they will try out the competition.
Performance is important not just on the Internet: corporate applications that perform fast benefit the corporation as well. The employees may not have the option of turning to a competitor or to alternative tools, but they can become more efficient, saving on labor costs.
Well-performing applications save money, not just in the sheer seconds saved, but also in the increased productivity that comes from employees being less frustrated with their tools.

1.1 What is Performance? The End-User Perspective


It is very difficult to give a definitive answer to the question: when is an application considered to perform well?
From my years of experience (and my colleagues'), I can say that I did not find a magic number or formula; the answer is ultimately one of perception. A well-performing application is one that lets the end-user carry out a given task without undue perceived delay or irritation.
Performance sounds simple enough, and you may have your own thoughts on what constitutes good performance, but as a performance tester, when I talk about application performance I'm referring to the sum of the parts. At a high level, we can define these parts as the client, the application software and the hardware infrastructure. We will talk about these parts later.

1.2 Performance Measurement


Unfortunately, performance cannot be measured by perception; to accurately measure performance, we must take into account some performance indicators (part of the non-functional requirements):
Response time: the amount of time it takes for the application to respond to a user request. In performance testing this is the time between the end user requesting a response from the application and a complete reply arriving at the user's workstation. This response can be synchronous (blocking) or asynchronous (the user doesn't have to wait for the reply to complete before they can resume interacting with the application).
Throughput: the rate at which application-oriented events occur, e.g. the number of hits on a web page within a given period.
Utilization: the percentage of the theoretical capacity of a resource that is being used, e.g. how much CPU is being consumed by the application on a web server when 1,000 visitors are active.
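As a small, concrete illustration of the first two indicators, the sketch below measures the response time of a single page request and derives a rough sequential throughput figure. It assumes Python with the third-party requests library and a hypothetical URL; a real load test tool does the same bookkeeping for many virtual users in parallel.

import time
import requests

URL = "http://www.example.com/"   # hypothetical system under test

def timed_request(url: str) -> float:
    """Return the response time in seconds of one synchronous page request."""
    start = time.perf_counter()
    response = requests.get(url, timeout=30)
    response.raise_for_status()          # treat HTTP errors as failures
    return time.perf_counter() - start

if __name__ == "__main__":
    samples = [timed_request(URL) for _ in range(10)]
    total = sum(samples)
    print(f"average response time: {total / len(samples):.3f} s")
    print(f"throughput (sequential): {len(samples) / total:.1f} requests/s")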

1.3 Performance testing


Now that we know the performance indicators, we can define performance testing: it is about testing whether a system accomplishes its designated functions within given constraints regarding response time and throughput rate. Performance testing is a non-functional type of testing, and a superset containing other tests such as load testing, stress testing, endurance testing, spike testing and capacity testing.

1.3.1 Load testing


The term load testing refers to the workload put on a system. Typically, this is the load produced by the multiple users that we expect when the system is in production. When we discuss load testing, we discuss testing the performance of the application, in terms of response times and throughput, when we apply a load that is the same as what we expect in production.

1.3.2 Stress testing
Stress testing has many similarities with load testing. The big difference is that with stress testing we go way beyond the expected load and keep applying load until the system can't handle it anymore. The reason we do this is that we want to know when the system breaks and what happens at that moment: do we lose transactions, do data synchronization issues occur, and does the system recover after it breaks?
Stress testing is about finding out when a system breaks and what happens when it breaks.

1.3.3 Endurance testing


This is about testing with a given load, usually not all that high but significant nonetheless, over a prolonged period of time. The reason for performing these tests is to find problems that only occur over time (e.g. a memory leak).
This is also called soak testing or reliability testing.

1.3.4 Spike testing


Spike testing is done by suddenly and unusually increasing or decreasing the load generated by a very large number of users. The objective of this type of performance test is to verify system stability during bursts of concurrent users or system activity, at varying degrees of load over varying time periods.

1.4 Performance versus Scalability


I cannot go further into how to perform performance testing without talking about scalability. A software system is not scalable when its performance becomes unacceptable at a certain load level in a given environment and cannot be improved even with upgraded (scale up) or additional (scale out) hardware.
There are many different explanations of performance versus scalability. From my point of view, I agree with Henry Liu: performance and scalability for a software system differ from and correlate to each other.
Performance measures how fast and efficiently a software system can complete certain computing tasks, while scalability measures the trend of performance with increasing load. Performance and scalability are inseparable from each other. It doesn't make sense to talk about scalability if a system does not perform. However, a system may perform but not scale.

1.5 QA Testing versus Performance Testing


QA testing is about making sure that a software product works as it has been designed, from the functionality point of view. It is more about the correctness of a software program with respect to its coded logic than about how fast it can complete a specific task. Performance testing works on the basic assumption that the software works, but that it might perform slowly, which makes it unusable for its users. They are different sides of the same coin and typically inseparable.
In dealing with QA testing versus performance testing, it is important to share some observations and experiences that help you avoid common traps:

QA testing should precede performance testing. Using a non-QAed version of the software for performance testing may only end up rediscovering the functionality bugs that the developers and QA engineers are already aware of. Typically, you should use the last known good version of the software that has passed QA testing for your performance testing.
Performance testing is different from QA testing. QA testing doesn't care about which tool to use as long as it can be used to help prove the correctness of the software under test. However, performance testing is a lot pickier about which tools to use. Sometimes QA people try to adapt QA testing tools for performance testing. In this case you may get some performance numbers, but QA testing tools typically carry heavy overhead on the client side, which does not necessarily represent the true performance of the software under test. Also, QA testing tools typically do not measure up to the volume requirements of performance testing.
Sometimes performance testing may help discover functionality bugs that QA testing did not uncover. This is an additional merit of performance testing. The same thing may happen with QA testing: it may help discover performance bugs before the performance testing team starts its tests on the QAed software version.

2. PERFORMANCE TESTING CORE ACTIVITIES
In this section, you will learn the main activities that, from our perspective, are part of performance testing projects.
Performance testing is a complex activity that cannot effectively be shaped into a one-size-fits-all or one-size-fits-most approach.
There are some activities that are part of nearly all project-level performance-testing efforts. These activities may occur at different times, be called different things, have different degrees of focus, and be conducted either implicitly or explicitly, but when all is said and done, it is quite rare for a performance-testing project not to involve at least making a decision around the core activities identified and referenced throughout this guide. These core activities do not in themselves constitute an approach to performance testing; rather, they represent the foundation upon which an approach can be built that is appropriate for your project.

2.1 Summary Table of Core Performance-Testing Activities


The following table summarizes the core performance-testing activities along with the most
common input and output for each activity.
Note: project context is not listed, although it is a critical input item for each activity.

Identify the Test Environment
  Input: Logical and physical production architecture; logical and physical test architecture; available tools
  Output: Comparison of test and production environments; environment-related concerns; determination of whether additional tools are required

Identify Performance Acceptance Criteria
  Input: Client expectations; risks to be mitigated; business requirements; contractual obligations
  Output: Performance-testing success criteria; performance goals and requirements; key areas of investigation; key performance indicators; key business indicators

Plan and Design Tests
  Input: Available application features and/or components; application usage scenarios; unit tests; performance acceptance criteria
  Output: Conceptual strategy; test execution prerequisites; tools and resources required; application usage models to be simulated; test data required to implement tests; tests ready to be implemented

Configure the Test Environment
  Input: Conceptual strategy; available tools; designed tests
  Output: Configured load-generation and resource-monitoring tools; environment ready for performance testing

Implement the Test Design
  Input: Conceptual strategy; available tools/environment; available application features and/or components; designed tests
  Output: Validated, executable tests; validated resource monitoring; validated data collection

Execute the Test
  Input: Task execution plan; available tools/environment; available application features and/or components; validated, executable tests
  Output: Test execution results

Analyze Results, Report, and Retest
  Input: Task execution results; performance acceptance criteria; risks, concerns, and issues
  Output: Results analysis; recommendations; reports

2.2 Performance-Testing Activities Walkthrough

2.2.1 Identify the Test Environment


The environment in which your performance tests will be executed, along with the tools
and associated hardware necessary to execute the performance tests, constitute the test
environment. Under ideal conditions, if the goal is to determine the performance
characteristics of the application in production, the test environment is an exact replica of
the production environment but with the addition of load-generation and resource-
monitoring tools. Exact replicas of production environments are uncommon.
The degree of similarity between the hardware, software, and network configuration of the
application under test conditions and under actual production conditions is often a
significant consideration when deciding what performance tests to conduct and what size
loads to test. It is important to remember that it is not only the physical and software
environments that impact performance testing, but also the objectives of the test itself.

Often, performance tests are applied against a proposed new hardware infrastructure to
validate the supposition that the new hardware will address existing performance
concerns.
The key factor in identifying your test environment is to completely understand the
similarities and differences between the test and production environments. Some critical
factors to consider are:

Hardware
o Configurations
o Machine hardware (processor, RAM, etc.)
Network
o Network architecture and end-user location
o Load-balancing implications
o Cluster and Domain Name System (DNS) configurations
Tools
o Load-generation tool limitations
o Environmental impact of monitoring tools
Software
o Other software installed or running in shared or virtual environments
o Software license constraints or differences
o Storage capacity and seed data volume
o Logging levels
External factors
o Volume and type of additional traffic on the network
o Scheduled or batch processes, updates, or backups
o Interactions with other systems

Consider the following key points when characterizing the test environment:

Although few performance testers install, configure, and administrate the application being tested, it is beneficial for the testers to have access to the servers and software, or to the administrators who do the system maintenance.
Identify the amount and type of data the application must be seeded with to emulate
real-world conditions.
Identify critical system components. Do any of the system components have known
performance concerns? Are there any integration points that are beyond your
control for testing?
Get to know the IT staff. You will likely need their support to perform tasks such as
monitoring overall network traffic and configuring your load-generation tool to
simulate a realistic number of Internet Protocol (IP) addresses.
Check the configuration of load balancers.
Validate name resolution with DNS. This may account for significant latency when
opening database connections.
Validate that firewalls, DNS, routing, and so on treat the generated load similarly to a load that would typically be encountered in a production environment.

It is often appropriate to have systems administrators set up resource-monitoring
software, diagnostic tools, and other utilities in the test environment.

2.2.2 Identify Performance Acceptance Criteria


It generally makes sense to start identifying, or at least estimating, the desired performance characteristics of the application early in the development life cycle. This can be accomplished most simply by noting the performance characteristics that your users and stakeholders equate with good performance. The notes can be quantified at a later time.
Classes of characteristics that frequently correlate to a user's or stakeholder's satisfaction typically include:

Response time. For example, the product catalog must be displayed in less than
three seconds.
Throughput. For example, the system must support 25 book orders per second.
Resource utilization. For example, processor utilization is not more than 75
percent. Other important resources that need to be considered for setting objectives
are memory, disk input/output (I/O), and network I/O.
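The example criteria above can also be captured as simple data so that measured results can later be checked against them automatically. The sketch below (Python; metric names and thresholds are illustrative assumptions, not taken from this guide) shows one possible way to do this.

ACCEPTANCE_CRITERIA = {
    "catalog_response_time_s": {"limit": 3.0, "comparison": "max"},  # displayed in < 3 seconds
    "book_orders_per_second":  {"limit": 25,  "comparison": "min"},  # at least 25 orders per second
    "cpu_utilization_pct":     {"limit": 75,  "comparison": "max"},  # not more than 75 percent
}

def check(metric: str, measured_value: float) -> bool:
    """Return True if the measured value satisfies the acceptance criterion."""
    rule = ACCEPTANCE_CRITERIA[metric]
    if rule["comparison"] == "max":
        return measured_value <= rule["limit"]
    return measured_value >= rule["limit"]

print(check("catalog_response_time_s", 2.4))  # True
print(check("book_orders_per_second", 19))    # False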

Consider the following key points when identifying performance criteria:

Business requirements
User expectations
Contractual obligations
Regulatory compliance criteria and industry standards
Service Level Agreements (SLAs)
Resource utilization targets
Various and diverse, realistic workload models
The entire range of anticipated load conditions
Conditions of system stress
Entire scenarios and component activities
Key performance indicators
Previous releases of the application
Competitors' applications
Optimization objectives
Safety factors, room for growth, and scalability
Schedule, staffing, budget, resources, and other priorities

2.2.3 Plan and Design Tests


Planning and designing performance tests involves identifying key usage scenarios,
determining appropriate variability across users, identifying and generating test data, and
specifying the metrics to be collected. Ultimately, these items will provide the foundation
for workloads and workload profiles.
When designing and planning tests with the intention of characterizing production
performance, your goal should be to create real-world simulations in order to provide
reliable data that will enable your organization to make informed business decisions. Real-
world test designs will significantly increase the relevancy and usefulness of results data.
Key usage scenarios for the application typically surface during the process of identifying
the desired performance characteristics of the application. If this is not the case for your test
project, you will need to explicitly determine the usage scenarios that are the most valuable
to script. Consider the following when identifying key usage scenarios:

Contractually obligated usage scenario(s)


Usage scenarios implied or mandated by performance-testing goals and objectives
Most common usage scenario(s)
Business-critical usage scenario(s)
Performance-intensive usage scenario(s)
Usage scenarios of technical concern
Usage scenarios of stakeholder concern
High-visibility usage scenarios

When identified, captured, and reported correctly, metrics provide information about how your application's performance compares to your desired performance characteristics. In addition, metrics can help you identify problem areas and bottlenecks within your application.
It is useful to identify the metrics related to the performance acceptance criteria during test
design so that the method of collecting those metrics can be integrated into the tests when
implementing the test design. When identifying metrics, use either specific desired
characteristics or indicators that are directly or indirectly related to those characteristics.
Consider the following key points when planning and designing tests:

Realistic test designs are sensitive to dependencies outside the control of the system,
such as humans, network activity, and other systems interacting with the
application.
Realistic test designs are based on what you expect to find in real-world use, not
theories or projections.
Realistic test designs produce more credible results and thus enhance the value of
performance testing.
Component-level performance tests are integral parts of realistic testing.
Realistic test designs can be more costly and time-consuming to implement, but they
provide far more accuracy for the business and stakeholders.
Extrapolating performance results from unrealistic tests can create damaging
inaccuracies as the system scope increases, and frequently lead to poor decisions.
Involve the developers and administrators in the process of determining which
metrics are likely to add value and which method best integrates the capturing of
those metrics into the test.
Beware of allowing your tools to influence your test design. Better tests almost
always result from designing tests on the assumption that they can be executed and
then adapting the test or the tool when that assumption is proven false, rather than
by not designing tests at all because you assume that you do not have access to a tool to execute them.

Realistic test designs include:

Realistic simulations of user delays and think times, which are crucial to the
accuracy of the test.
User abandonment, if users are likely to abandon a task for any reason.
Common user errors.
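To make the notion of a workload model and realistic think times concrete, the sketch below shows one possible way to describe key usage scenarios, their relative weights, and think-time ranges as data that a load-generation script could consume. The scenario names and numbers are purely illustrative assumptions, not taken from this guide.

import random

# An illustrative workload model: key usage scenarios, the share of virtual users
# running each, and realistic think-time ranges (all values are assumptions).
WORKLOAD_MODEL = [
    # (scenario name, weight, think time range in seconds)
    ("browse_catalog",  0.60, (5, 30)),
    ("search_and_view", 0.25, (3, 20)),
    ("buy_book",        0.10, (10, 60)),
    ("manage_account",  0.05, (5, 15)),
]

def pick_scenario() -> str:
    """Pick a scenario for the next virtual-user iteration according to its weight."""
    names, weights, _ranges = zip(*WORKLOAD_MODEL)
    return random.choices(names, weights=weights, k=1)[0]

def think_time(scenario: str) -> float:
    """Draw a user delay for the given scenario, crucial for realistic simulations."""
    for name, _weight, (low, high) in WORKLOAD_MODEL:
        if name == scenario:
            return random.uniform(low, high)
    raise KeyError(scenario)

if __name__ == "__main__":
    scenario = pick_scenario()
    print(scenario, round(think_time(scenario), 1), "seconds of think time")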

2.2.4 Configure the Test Environment


Preparing the test environment, tools, and resources for test design implementation and
test execution prior to features and components becoming available for test can
significantly increase the amount of testing that can be accomplished during the time those
features and components are available.
Load-generation and application-monitoring tools are almost never as easy to get up and
running as one expects. Whether issues arise from setting up isolated network
environments, procuring hardware, coordinating a dedicated bank of IP addresses for IP
spoofing, or version compatibility between monitoring software and server operating
systems, issues always seem to arise from somewhere. Start early, to ensure that issues are
resolved before you begin testing.
Additionally, plan to periodically reconfigure, update, add to, or otherwise enhance your
load-generation environment and associated tools throughout the project. Even if the
application under test stays the same and the load-generation tool is working properly, it
is likely that the metrics you want to collect will change. This frequently implies some
degree of change to, or addition of, monitoring tools.

Consider the following key points when configuring the test environment:

Determine how much load you can generate before the load generators reach a
bottleneck. Typically, load generators encounter bottlenecks first in memory and
then in the processor.
Although it may seem like a common sense practice, it is important to verify that
system clocks are synchronized on all the machines from which resource data will
be collected. Doing so can save you significant time and prevent you from having
to dispose of the data entirely and repeat the tests after synchronizing the system
clocks.
Validate the accuracy of load test execution against hardware components such as
switches and network cards. For example, ensure the correct full-duplex mode
operation and correct emulation of user latency and bandwidth.
Validate the accuracy of load test execution related to server clusters in load-
balanced configuration. Consider using load-testing techniques to avoid affinity of
clients to servers due to their using the same IP address. Most load-generation tools
offer the ability to simulate usage of different IP addresses across load-test
generators.

Monitor resource utilization (CPU, network, memory, disk and transactions per
time) across servers in the load-balanced configuration during a load test to validate
that the load is distributed.

2.2.5 Implement the Test Design


The details of creating an executable performance test are extremely tool-specific.
Regardless of the tool that you are using, creating a performance test typically involves
scripting a single usage scenario and then enhancing that scenario and combining it with
other scenarios to ultimately represent a complete workload model.
Load-generation tools inevitably lag behind evolving technologies and practices. Tool
creators can only build in support for the most prominent technologies and, even then,
these have to become prominent before the support can be built. This often means that the
biggest challenge involved in a performance-testing project is getting your first relatively
realistic test implemented with users generally being simulated in such a way that the
application under test cannot legitimately tell the difference between the simulated users
and real users. Plan for this and do not be surprised when it takes significantly longer than
expected to get it all working smoothly.

Consider the following key points when implementing the test design:

Ensure that test data feeds are implemented correctly. Test data feeds are data
repositories in the form of databases, text files, in-memory variables, or
spreadsheets that are used to simulate parameter replacement during a load test.
For example, even if the application database test repository contains the full
production set, your load test might only need to simulate a subset of products
being bought by users due to a scenario involving, for example, a new product or
marketing campaign. Test data feeds may be a subset of production data
repositories.
Ensure that application data feeds are implemented correctly in the database and
other application components. Application data feeds are data repositories, such as
product or order databases, that are consumed by the application being tested. The
key user scenarios run by the load test scripts may consume a subset of this data.
Ensure that validation of transactions is implemented correctly. Many transactions are reported as successful by the Web server, yet they fail to complete correctly. Examples of validation are: database entries inserted with the correct number of rows, product information being returned, correct content returned in the HTML data to the clients, etc.
Ensure hidden fields or other special data are handled correctly. This refers to data returned by the Web server that needs to be resubmitted in a subsequent request, such as session IDs, or a product ID that needs to be incremented before being passed to the next request.
Validate the monitoring of key performance indicators (KPIs).
Add pertinent indicators to facilitate articulating business performance.
If the request accepts parameters, ensure that the parameter data is populated
properly with variables and/or unique data to avoid any server-side caching.

If the tool does not do so automatically, consider adding a wrapper around the requests in the test script in order to measure the request response time (see the sketch after this list).
It is generally worth taking the time to make the script match your designed test,
rather than changing the designed test to save scripting time.
Significant value can be gained from evaluating the output data collected from
executed tests against expectations in order to test or validate script development.
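As an illustration of the timing-wrapper point above, the following minimal sketch measures and labels the response time of each request so that results can be attributed to individual steps. It assumes Python with the third-party requests library and hypothetical URLs; most load tools provide an equivalent mechanism of their own.

import time
import requests

measurements = []   # (step name, response time in seconds, HTTP status code)

def timed_get(step_name: str, url: str, **kwargs):
    """Issue a GET request and record how long the complete response took."""
    start = time.perf_counter()
    response = requests.get(url, **kwargs)
    elapsed = time.perf_counter() - start
    measurements.append((step_name, elapsed, response.status_code))
    return response

# Example usage inside a scripted scenario (URLs are hypothetical):
# timed_get("open landing page", "http://www.example.com/")
# timed_get("open product catalog", "http://www.example.com/catalog")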

2.2.6 Execute the Test


Executing tests is what most people envision when they think about performance testing.
It makes sense that the process, flow, and technical details of test execution are extremely
dependent on your tools, environment, and project context. Even so, there are some
universal tasks and considerations that need to be kept in mind when executing tests.
Much of the performance testing-related training available today treats test execution as little more than starting a test and monitoring it to ensure that the test appears to be running as expected. In reality, this activity is significantly more complex than just clicking a button and monitoring machines.

Test execution can be viewed as a combination of the following sub-tasks:

1. Coordinate test execution and monitoring with the team.


2. Validate tests, configurations, and the state of the environments and data.
3. Begin test execution.
4. While the test is running, monitor and validate scripts, systems, and data.
5. Upon test completion, quickly review the results for obvious indications that the
test was flawed.
6. Archive the tests, test data, results, and other information necessary to repeat the
test later if needed.
7. Log start and end times, the name of the result data, and so on. This will allow you
to identify your data sequentially after your test is done.

As you prepare to begin test execution, it is worth taking the time to double-check the
following items:

Validate that the test environment matches the configuration that you were
expecting and/or designed your test for.
Ensure that both the test and the test environment are correctly configured for
metrics collection.
Before running the real test, execute a quick smoke test to make sure that the test
script and remote performance counters are working correctly. In the context of
performance testing, a smoke test is designed to determine if your application can
successfully perform all of its operations under a normal load condition for a short
time.
Reset the system (unless your scenario calls for doing otherwise) and start a formal
test execution.
Make sure that the test script's execution represents the workload model you want to simulate.

Make sure that the test is configured to collect the key performance and business
indicators of interest at this time.

Consider the following key points when executing the test:

Validate test executions for data updates, such as orders in the database that have
been completed.
Validate if the load-test script is using the correct data values, such as product and
order identifiers, in order to realistically simulate the business scenario.
Whenever possible, limit test execution cycles to one to two days each. Review and
reprioritize after each cycle.
If at all possible, execute every test three times. Note that the results of first-time
tests can be affected by loading Dynamic-Link Libraries (DLLs), populating server-
side caches, or initializing scripts and other resources required by the code under
test. If the results of the second and third iterations are not highly similar, execute
the test again. Try to determine what factors account for the difference.
Observe your test during execution and pay close attention to any behavior you feel
is unusual. Your instincts are usually right, or at least valuable indicators.
No matter how far in advance a test is scheduled, give the team 30-minute and 5-minute warnings before launching the test (or starting the day's testing) if you are using a shared test environment. Additionally, inform the team whenever you are
not going to be executing for more than one hour in succession so that you do not
impede the completion of their tasks.
Do not process data, write reports, or draw diagrams on your load-generating
machine while generating a load, because this can skew the results of your test.
Turn off any active virus-scanning on load-generating machines during testing to
minimize the likelihood of unintentionally skewing the results of your test.
While load is being generated, access the system manually from a machine outside
of the load-generation environment during test execution so that you can compare
your observations with the results data at a later time.
Remember to simulate ramp-up and cool-down periods appropriately.
Do not throw away the first iteration because of application script compilation, Web
server cache building, or other similar reasons. Instead, measure this iteration
separately so that you will know what the first user after a system-wide reboot can
expect.
Test execution is never really finished, but eventually you will reach a point of
diminishing returns on a particular test. When you stop obtaining valuable
information, move on to other tests.
If you feel you are not making progress in understanding an observed issue, it may
be more efficient to eliminate one or more variables or potential causes and then run
the test again.

2.2.7 Analyze Results, Report, and Retest


Managers and stakeholders need more than just the results from various tests; they need conclusions, as well as consolidated data that supports those conclusions. Technical team members also need more than just results; they need analysis, comparisons, and details
behind how the results were obtained. Team members of all types get value from
performance results being shared more frequently.
Before results can be reported, the data must be analyzed. Consider the following important
points when analyzing the data returned by your performance test:

Analyze the data both individually and as part of a collaborative, cross-functional


technical team.
Analyze the captured data and compare the results against the metric's acceptable or expected level to determine whether the performance of the application being tested shows a trend toward or away from the performance objectives.
If the test fails, a diagnosis and tuning activity is generally warranted.
If you fix any bottlenecks, repeat the test to validate the fix.
Performance-testing results will often enable the team to analyze components at a
deep level and correlate the information back to the real world with proper test
design and usage analysis.
Performance test results should enable informed architecture and business
decisions.
Frequently, the analysis will reveal that, in order to completely understand the
results of a particular test, additional metrics will need to be captured during
subsequent test-execution cycles.
Immediately share test results and make raw data available to your entire team.
Talk to the consumers of the data to validate that the test achieved the desired
results and that the data means what you think it means.
Modify the test to get new, better, or different information if the results do not
represent what the test was defined to determine.
Use current results to set priorities for the next test.
Collecting metrics frequently produces very large volumes of data. Although it is
tempting to reduce the amount of data, always exercise caution when using data-
reduction techniques because valuable data can be lost.

Most reports fall into one of the following two categories:

Technical Reports
o Description of the test, including workload model and test environment.
o Easily digestible data with minimal pre-processing.
o Access to the complete data set and test conditions.
o Short statements of observations, concerns, questions, and requests for
collaboration.
Stakeholder Reports
o Criteria to which the results relate.
o Intuitive, visual representations of the most relevant data.
o Brief verbal summaries of the chart or graph in terms of criteria.
o Intuitive, visual representations of the workload model and test
environment.
o Access to associated technical reports, complete data sets, and test
conditions.

o Summaries of observations, concerns, and recommendations.

The key to effective reporting is to present information of interest to the intended audience
in a manner that is quick, simple, and intuitive. The following are some underlying
principles for achieving effective reports:

Report early, report often.


Report visually.
Report intuitively.
Use the right statistics.
Consolidate data correctly.
Summarize data effectively.
Customize for the intended audience.
Use concise verbal summaries using strong but factual language.
Make the data available to stakeholders.
Filter out any unnecessary data.
If reporting intermediate results, include the priorities, concerns, and blocks for
the next several test-execution cycles.

3. LOAD GENERATION
I think it is time to move on to a more practical guide. We need to learn to apply load in order to test a system. The next pages describe how this is done by capturing and replaying traffic between clients and servers. Load generation tools can accommodate different protocols, which means different kinds of traffic.
We will focus on a web application, as HTTP is a straightforward and well-known protocol that allows us to explain the principle.
This section describes how to set up a load-generating test: how to create a script for load testing, and how to define and set the actual load to be applied. There are more items to cover than just recording and playing back in larger numbers. The scripts need to be adapted in order to handle server responses, they need to use different data than during recording, and we need to check that we actually get the correct response. All of these items should be taken into account by a professional load tester.
Most of the information in this chapter is taken and adapted from Albert Witteveen's book, Performance Testing - A Practical Guide.

3.1 Creating a script for load generation of a web application


Most load generating tools will feature record and playback. The tool will record all the traffic between the client application and the server. For a web application that means it will record the HTTP traffic. The recording is processed by the load generator and turned into a script. That script can then be used during playback to request and receive the same traffic as we did during recording, thereby simulating one user's actions. To turn this into load, the load generator will later play back the script multiple times in parallel. With most load generators, the script is created after the recording has finished.

3.1.1 Load generation of a simple HTML page


Let us look at a very simple HTML page. We will ask for the page http://www.example.com, and the server will return the index.html page. Such an HTML file
is just a text file that describes, in markup, how the browser should render the page; the browser reads that markup and shows the rendered page to the user.
Now if the page had a graphic in it, such as a company logo, that graphic would not be inside the actual HTML file. The HTML file would tell the browser that it should also show the graphic and where to find it. So right after receiving the HTML file, the browser would put out another request, this time for the graphic file.
What happens in the communication between the web browser and the server is that the browser requests pages and other items, and the server responds with these pages and other items. The browser uses the responses to render the pages on the screen for the user. An HTML page, as returned to the browser by the server, is really a text file which tells the browser how to render the page. In it you will nearly always also see that it should depict a graphic such as a picture. The browser will also request these graphics from the server. A simple HTML page with a logo will thus mean a request for the page by the browser, a response containing the page from the server, another request by the browser for the logo graphic, and the response from the server containing the graphic.
When we record this in a load generation tool, the tool will listen in on this communication between the browser and the server and record it. When we stop the recording, we can use the tool to replay the requests. What really happens at replay is that the tool sends the same requests as during the recording. The big difference here is that the page will not be rendered. It doesn't have to be, as we are interested in the effect on the server.
Now this needs to be turned into load. If we want to simulate, for instance, 10 users, the load generator will simply request the same page, as well as the graphic that belongs to it, 10 times at the same time. So, it will simultaneously send 10 requests for the HTML file, and then it will also send 10 requests for the graphic file. The timing between these requests is important. To really emulate 10 virtual users, the requests for the graphic file need to emulate what would happen if 10 real users were to request the page. So, the graphic file not only needs to be requested after the HTML file was received, but also after the same delay the browser took to request the graphic once it had received the HTML file during recording.
This also explains why the load generator must be aware of more than just HTTP. It also needs to know how browsers work. It needs to consider that the browser would never request the graphic files before it had processed the HTML file, and that, had there been multiple graphic files, the browser could request those files simultaneously, resulting in yet another behavior. So, the load generator recreates HTTP traffic, but does this in such a way that it takes into account the HTML rendering.
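A load generator's replay of this recorded traffic can be pictured roughly as in the sketch below. It is plain Python with the third-party requests library, hypothetical URLs and an assumed recorded delay; real tools add browser-accurate ordering, connection handling, think times and much more. Each virtual user requests the HTML page, waits the recorded delay, then requests the logo graphic, and nothing is rendered.

import time
import threading
import requests

PAGE_URL = "http://www.example.com/"          # hypothetical page under test
LOGO_URL = "http://www.example.com/logo.png"  # graphic referenced by the page
RECORDED_DELAY = 0.2   # seconds the browser waited before requesting the logo

def virtual_user(user_id: int):
    page = requests.get(PAGE_URL)     # request the HTML file (nothing is rendered)
    time.sleep(RECORDED_DELAY)        # reproduce the browser's recorded timing
    logo = requests.get(LOGO_URL)     # then request the graphic the page refers to
    print(f"user {user_id}: page {page.status_code}, logo {logo.status_code}")

if __name__ == "__main__":
    # 10 virtual users request the page at the same time.
    threads = [threading.Thread(target=virtual_user, args=(i,)) for i in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()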

3.1.2 Load generation of content dependent pages.


The example used above was very simple: the same request could be made simultaneously by just requesting the 50 HTML pages at the same time, followed, for each received HTML file, by a request for the graphic file after the time it took to process the page. But most sites are much more complex than this.

3.1.2.1 Parameterize the script


Much of the returned content, and of the performance, depends on the fact that the site provides different content for different users. Take for example a website where users have to log in. When logging in, we can't just let the same user log in 50 times. This may be technically possible (if the web site allows 50 logins from the same user), but in most cases the server behaves quite differently when 50 unique users are logged in; for instance, the responses for the same user that logs in 50 times can be cached. For 50 unique users they can't be cached, as their responses are different, even if it is just the text Welcome <username> in the page.
When the user logs in, what happens is that the browser sends the name and password to
the server. If this combination is correct the server will construct a page for this user and
return that.
To simulate this in a load and stress test tool, we add a parameter for this in the script.
(Note that caching can be very beneficial for performance but can also cause major pitfalls for the load tester. Caching may not be ignored and needs to be taken into account.)
When the load test tool recorded the script, it captured the HTTP traffic and created a script that is used later when we play back the script. This script is usually written in a scripting or a programming language. That offers us the ability to replace parts of the script with a variable, for instance in the case of user login. If during the recording we logged in with the username John, we will usually find the text John somewhere in the script. To parametrize this we replace John with, for example, $username. We will then also tell the script that for this variable it can use a list of valid usernames. Often this list is just a text file in the same directory.
Parametrization requires a lot more than adding a few variables and a list of data. One thing to consider is that data is often connected, such as the name and password: if we parametrize both the name and the password, these need to be linked.
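A minimal sketch of what this parametrization boils down to is shown below. It assumes Python with the third-party requests library, a hypothetical login URL and form field names, and a users.txt file containing one linked username,password pair per line; a commercial or open-source load tool expresses the same idea in its own scripting language.

import itertools
import requests

LOGIN_URL = "http://www.example.com/login"   # hypothetical

def load_users(path: str = "users.txt"):
    """Read linked username,password pairs; one pair per line keeps them connected."""
    with open(path, encoding="utf-8") as f:
        pairs = [line.strip().split(",") for line in f if line.strip()]
    return itertools.cycle(pairs)    # reuse the list if there are more iterations than rows

users = load_users()

def login(session: requests.Session):
    """Log in with the next data-file entry instead of the recorded user 'John'."""
    username, password = next(users)
    return session.post(LOGIN_URL, data={"username": username, "password": password})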

3.1.2.2 Sessions and capture server responses.


As soon as a user is logged in, the user gets the content for that user on subsequent requests as well. HTTP is what is called a stateless protocol. That means that connections tend to be ended after the requests are processed. A result of this is that for each new page after the login, the user needs to be authenticated again. This is not done by resending the username/password over and over again. What usually happens is that part of the server response after login is a session-id generated by the server. The browser remembers the session-id, and every subsequent request from the browser will contain the session-id as well. The server uses this to authenticate these requests.

Therefore session-ids create a challenge, as they are unique and generated by the server at login. Most load generators provide functionality for this. What they do is capture the server response and make part of it available as a variable. So, the script of the load generator will have a function that, when the playback is running, analyzes the server response, distills the session-id from it, and assigns it to the variable.
Many load generators will handle standard items such as session-ids automatically. That means that when you have recorded a script and open it for the first time for editing, the function
for capturing the session-id as well as the variables later in the script will already be there.
This is one area where the more mature load generators will make things a lot easier for
you. Other tools require that you do this yourself. It is not very hard, but it requires a bit of
manual labor and appropriate use of the find and replace functionality of the script editor.
Aside from session-ids, there are other parts of the server's replies that become part of subsequent requests and therefore need to be captured and parametrized. This quickly turns into something that you will have to do by hand, although some load generators also provide the ability to add a new rule for it.
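What such a capture-and-reuse function amounts to can be sketched as follows. This is Python with the third-party requests library; the URL, the regular expression and the way the session-id is sent back are assumptions for illustration only, since every application and load tool does this slightly differently.

import re
import requests

BASE = "http://www.example.com"   # hypothetical

def login_and_capture(username: str, password: str) -> str:
    """Log in and distill the session-id from the server response."""
    response = requests.post(f"{BASE}/login",
                             data={"username": username, "password": password})
    # Suppose the server embeds the id in the page as: name="session_id" value="abc123"
    match = re.search(r'name="session_id"\s+value="([^"]+)"', response.text)
    if not match:
        raise RuntimeError("session-id not found in login response")
    return match.group(1)             # this becomes the variable used later in the script

def open_account_page(session_id: str):
    # Every subsequent request carries the captured session-id back to the server.
    return requests.get(f"{BASE}/account", params={"session_id": session_id})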

3.1.2.3 Checking for real results


When the script is running, all sorts of requests are made and we get responses from the server. In any good test, either manual or automated, you will have test steps and an expected result. For load testing this is a bigger challenge. When we simulate multiple virtual users, how can we check that we get the right response? We cannot simply state that we do not test functionality and therefore don't need to care. Also, we can't rely on the load test tools to do this for us. When you record a script, parametrize correctly and perform a test run, the load test tool may happily report that the test passed. But we cannot be sure, because of the way the tools decide that everything went fine.
First let us look at a pitfall for load testing a web application. Part of the HTTP protocol is
that the server will not just send files across, but it will also send HTTP-status codes. For
instance, for every request that succeeded the status code replied will be 200. If you request
a page that does not exist or at least cannot be found by the server, it will return the familiar
code 404 file not found.
Most load test tools will automatically check these status codes. If they receive for each
request the page and a status 200 they will conclude that everything went fine.
Unfortunately, this does not have to be the case. For instance, if you log in with a wrong password, the site will show a page with a text like: Sorry, your name and password did not match, please try again. That page, however, will be returned to you with the status code 200 OK. So, if you made a mistake in the parametrization of the scripts and the script does not send the right name and password combination for logging in, the load test tool cannot tell, since by default it only checks the HTTP status code.
Load test tools can provide special functions for checking the results functionally. The method for this is that the server responses get checked for known and unique items that indicate that the result is right. In the example of logging in to the web site, for instance, that could be the text Welcome <username> in the returned page.

Checks at every step will help you as well. If a test fails, you are not helped much by a generic failed message; you want to know where it failed. For that you need checks at all the steps. If there is a performance problem, you want to be able to pinpoint where exactly you encounter problems.
Test data collection (or generation) and setting up tests can be quite time consuming. If you add only one final check, and extra checks only when problems are found, you may lose a lot of time setting up a new test to pinpoint the problem. Therefore, it makes sense to resist the temptation to add checks only when the last one fails.
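A content check of this kind can be sketched as below, using Python with the third-party requests library; the URL, form fields and the Welcome marker text are illustrative assumptions. The point is that the HTTP status code alone is not trusted: the response body is searched for a unique string that proves the step really succeeded.

import requests

def check_login(username: str, password: str) -> bool:
    """Check the content of the login response instead of trusting HTTP 200 alone."""
    response = requests.post("http://www.example.com/login",
                             data={"username": username, "password": password})
    if response.status_code != 200:
        return False                          # transport-level failure
    # A wrong password can also come back as 200, so look for proof of a real login:
    return f"Welcome {username}" in response.text

# In a load test script, a check like this would follow every step,
# so a failing step can be pinpointed instead of only being reported at the very end.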

3.1.3 Load generation


When the script is fully prepared, it is ready to be used for load generation. For the actual load generation, a couple of steps need to be taken. If the script is ready and the monitoring is set up, we can almost start the load generation: we still have to create the load scenario. Setting the load scenario depends heavily on the goal of your test. Do you, for instance, want to test a scenario that looks like the expected common usage of the site, or are you looking at the expected peak loads? Are you trying to find a breakpoint? All this depends heavily on the test you want to perform, but no matter your goal, the steps for setting up the scenario are the same.

3.1.3.1 The concurrent/active users enigma


An often-used term in load and stress testing is concurrent users. This, however, is not as straightforward as it may seem. In any application a certain number of users are performing actions; these can be seen as concurrent users. But, no matter how hard they are working, they do not constantly use the same functionality at the same time. The users interact with the forms on the screen, deal with customers, and need to think. So, when there are for instance 50 users logged in, you may have only 5 that are actually performing actions that use resources on the server at more or less the same moment. So, what is a concurrent user? The number of users logged in? The number of users that clicked a button at a given point in time? Users that have a request running on the server? The definition you choose greatly determines the stress that is put on the server. Tweaking the definition also enables anyone to make sure that stated goals are either easy to achieve or easy to fail. If a simple business requirement is that you must be able to have 50 concurrent users, you can state your definition such that 50 users simultaneously request a certain function, which can be very hard to achieve if this is a function that requires a lot of the system. Yet if you state that the 50 users are logged in, but at various points in the business process, and you allow users quite some think time, you may end up with no more than 5 users actually using a function on the server at any point in time.
The response time is the time duration, measured from a user's perspective, from initiating an action to receiving the response from the system. For example, the user login response time is the time duration from a user clicking on the login button to actually seeing the next screen indicating that the user has successfully logged in. It's important not to confuse response time with think time. Think time measures the time duration spent by a user on preparing input and digesting the content of the response until initiating the next action to the system, whereas response time measures the time spent by the system actually processing the user request. A system would be idle when a user is thinking or entering input. Every load test tool provides the option of specifying think times. It must be clarified that the differentiation between response time and think time leads to the differentiation between active users and concurrent users of a system. The following formula summarizes the subtlety of active users (Nactive) versus concurrent users (Nconcurrent):
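With R the average response time and Z the average think time per request, the relationship is commonly given as:

Nactive = Nconcurrent × R / (R + Z)

In other words, of the users who are logged in concurrently, only the fraction of their cycle spent waiting for the system, rather than thinking or typing, counts as users actively loading the server at any given moment.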

3.1.3.2 The ramp up


First you need to define the ramp-up. Logging in to a web application has its own impact
on resource utilization, and you may or may not want to exclude its effects. Logging in may
also have to represent real life, and in real life you will not see all users log in at the exact
same moment. For an application used by corporate users it may be the case that 95% of
the users log in between 7:30 and 9:00 am, while for other sites the traffic may be spread
much more evenly over the day. There may even be applications where users do log in at
nearly the same moment; think, for instance, of sites that sell tickets for popular concerts.
Depending on your goal, you will have to set up the ramp-up of the login accordingly. That
means you may, for instance, choose to have all users log in simultaneously, or to have two
users log in every 5 minutes until all your users have logged in.
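Purely as an illustration (every load generator has its own ramp-up settings, and the
numbers below are arbitrary), the second option can be written out as a simple schedule to
see how long the ramp-up itself will take:

# Hypothetical ramp-up: 2 new virtual users every 5 minutes, 50 users in total.
TOTAL_USERS = 50
BATCH_SIZE = 2
BATCH_INTERVAL_MIN = 5

schedule = []
for batch_start in range(0, TOTAL_USERS, BATCH_SIZE):
    start_minute = (batch_start // BATCH_SIZE) * BATCH_INTERVAL_MIN
    for user_id in range(batch_start, batch_start + BATCH_SIZE):
        schedule.append((start_minute, user_id))

for start_minute, user_id in schedule:
    print(f"user {user_id:02d} logs in at t = {start_minute} min")

# The last pair logs in at (50/2 - 1) * 5 = 120 minutes, so this ramp-up alone
# takes two hours of test time - worth knowing before you plan the scenario.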

3.1.3.3 Duration
How long should your scenario run? Most often you will simply let the scenario run until
the virtual users you defined have finished their iterations. There are, however, situations,
such as looking for memory leaks, where you want a certain load to remain on the system
for a prolonged time. In such cases you need to set the duration accordingly. Of course, in
tests where test data can be used only once, there must be more than enough test data in
the files to support a run of that length.
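As a rough, purely illustrative calculation of "more than enough": if 50 virtual users each
complete one iteration every 2 minutes (script time plus think time) and unique records may
be used only once, a 4-hour run consumes about 50 × (240 / 2) = 6,000 records, so the data
files should hold comfortably more than that.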

4. MONITORING
It is very important that monitoring is set up properly. What to monitor depends heavily
on the nature of the application, the infrastructure and the aim of the test. Typically we will
monitor resource utilization such as CPU, memory and I/O on the servers, but there is a lot
more to take into account and a lot more to monitor.
Obviously we need to get results from our tests. At the very least we need to measure
response times. But for fixing performance issues it is essential that the system under test
is monitored, so that we can find out where exactly, both in the system and in the process,
the performance issue occurs.
The technical team usually has a professional interest in the response times, but they need
different information when there are problems. If the performance under load is not good
enough, the people who need to solve this must know what exactly the problem is. The
technical team can be the maintainers of the systems or the developers; often it makes sense
to involve both.

4.1 System resources


The first thing that comes to mind is to monitor system resources. In the simple case of one
server in a traditional client-server model, you would monitor at least:
- CPU
- memory
- disk I/O
- network performance
These will give the technical team some indication of where the issue lies. Sometimes one
component simply needs faster or better hardware, but in general the problem is software
related, or at least best solved by improving or tuning the software.
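As a minimal sketch only (this document does not prescribe a monitoring tool; the interval,
file name and use of the third-party psutil package are assumptions), these four resources
could be sampled from a small Python script and written to a CSV file with UTC timestamps,
which makes later correlation easier:

import csv
import time
from datetime import datetime, timezone

import psutil  # third-party package: pip install psutil

INTERVAL_S = 5                  # sampling interval; balance detail against data volume
OUTPUT_FILE = "resources.csv"   # hypothetical output file name

with open(OUTPUT_FILE, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp_utc", "cpu_pct", "mem_pct", "disk_read_bytes",
                     "disk_write_bytes", "net_sent_bytes", "net_recv_bytes"])
    while True:
        disk = psutil.disk_io_counters()
        net = psutil.net_io_counters()
        writer.writerow([
            datetime.now(timezone.utc).isoformat(),  # unambiguous, comparable timestamps
            psutil.cpu_percent(interval=None),       # CPU utilization since previous sample
            psutil.virtual_memory().percent,         # memory utilization in percent
            disk.read_bytes, disk.write_bytes,       # cumulative disk I/O counters
            net.bytes_sent, net.bytes_recv,          # cumulative network counters
        ])
        f.flush()
        time.sleep(INTERVAL_S)

The cumulative disk and network counters would normally be turned into rates per interval
during analysis.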

4.2 Software components and infrastructure


To properly monitor how the software behaves and where improvements can be made, the
processes and the infrastructure need to be well known. The easiest, and often best, way to
approach this is to ask the technical team what to monitor. They should know very well
what is useful to them and why. They are, however, not necessarily experts on performance,
and performance experts need to know more. Unfortunately there is no straightforward
recipe for this, as much depends on the system; client/server solutions, for instance, are
different from web applications.
There are a few items that nearly always require attention:
Databases: nearly any application will use a database or, to be a bit more precise, a
Relational Database Management System (RDBMS). If there are specialists (Database
Administrators, or DBAs) available, involve them in the monitoring.
Application servers: not only must these be monitored separately, but they often have some
built-in monitoring and/or logging which is very useful.
Java applications deserve special attention. Java has a particular way of handling memory:
the Java Virtual Machine can be limited and tuned with respect to memory usage, and some
applications give certain components their own JVM, which means that you need to take
special care when monitoring this.
The obvious pitfall is that it may seem that the application does not need much memory if
you only monitor OS memory usage; you would typically monitor the JVM processes
separately. Java's memory behavior depends heavily on the Java garbage collector, whose
settings can greatly affect the performance of the application. If you deal with a Java
application, learn about the garbage collector and use tools aimed at monitoring the JVM's
memory and the behavior of the garbage collector.
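As one hedged example of such a tool (the process id, interval and output file below are
placeholders): the jstat utility that ships with the JDK can print heap-region utilization and
garbage-collection counts for a running JVM, and a small wrapper can timestamp its output
so it can be correlated with the load test:

import subprocess
from datetime import datetime, timezone

JVM_PID = 12345        # hypothetical process id of the JVM under test
INTERVAL_MS = 5000     # jstat's own sampling interval, in milliseconds

# 'jstat -gcutil <pid> <interval>' keeps printing heap utilization (%) and
# GC counts/times for the given JVM; here every line gets a UTC timestamp.
proc = subprocess.Popen(
    ["jstat", "-gcutil", str(JVM_PID), str(INTERVAL_MS)],
    stdout=subprocess.PIPE, text=True)

with open("jvm_gc.log", "w") as out:   # hypothetical log file
    for line in proc.stdout:
        out.write(datetime.now(timezone.utc).isoformat() + " " + line)
        out.flush()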

4.3 Correlation and logging


When the test is done, the analysis needs to be done. Naturally the response times need to
be evaluated against the defined, and even the undefined, requirements. But especially for
the technical items that were monitored, the results need to be correlated with the actions
of the load generator. When we see, for instance, that at a certain point in time the CPU
usage was very heavy, we need to know which function in the software was actually causing
this. For that, the functions need to be correlated with the monitored items.
For this to work it is essential that time is logged properly. When you monitor items outside
of an integrated solution, you must therefore make sure that timestamps are logged and
that the clocks of the machines involved are synchronized.
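A minimal sketch of such a correlation, assuming (hypothetically) that the load generator
writes a transactions.csv with a UTC timestamp and a function_name per completed action,
and that the monitor writes the resources.csv from section 4.1:

import pandas as pd   # assumes the pandas library is available

# Hypothetical inputs: one row per completed transaction, one row per resource sample.
tx = pd.read_csv("transactions.csv", parse_dates=["timestamp_utc"])
res = pd.read_csv("resources.csv", parse_dates=["timestamp_utc"])

tx = tx.sort_values("timestamp_utc")
res = res.sort_values("timestamp_utc")

# Attach to every transaction the nearest resource sample taken at or before it.
merged = pd.merge_asof(tx, res, on="timestamp_utc", direction="backward")

# A heavy CPU sample can now be traced back to what was running at that moment,
# e.g. the average CPU percentage observed while each script function executed:
print(merged.groupby("function_name")["cpu_pct"].mean().sort_values(ascending=False))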
Another thing to keep in mind is how fine-grained the monitoring is. The monitoring tool
samples the information at certain intervals. If the interval is too large, the results may be
unclear or issues may not show up at all; if the interval is too small, you get piles of
information to go through. And while a spreadsheet can condense that into an
understandable overview and even graphs, too much information will hurt: there is a chance
that the monitoring itself will have an impact on performance, the files can get really huge,
and your spreadsheet software may have a hard time processing them.
You may also miss things once you can no longer interpret the raw data; something that
does not show up in your spreadsheet might have been noticed if you had looked at the
raw data. So make sure that the data gathered is enough, or more than enough, but keep it
within limits that you can still process.
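As a rough illustration of the trade-off: sampling, say, 10 counters once per second during
a two-hour test yields 7,200 rows (72,000 values) per monitored server, while a 5-second
interval reduces this to 1,440 rows and is usually still fine-grained enough to line up with
individual test steps.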

4.4 Prepare for analysis before the test run


Another thing to remember is that the monitoring should provide data for the time frame
that you are working in. You do not want to have to go through data that has nothing to do
with your test. It is, however, important to know what the values are when there is no load
on the system.
While preparing for a test, make sure that you evaluate all monitored items and register
their normal behavior when there is no load. If, for instance, you notice that a server already
has a memory consumption of 60% when your test is not running, this is very meaningful
when the memory consumption goes to 80% under load: instead of the tested function using
80%, it uses an additional 20%. "Correlation doesn't imply causation, but it does waggle its
eyebrows suggestively and gesture furtively while mouthing 'look over there'." (XKCD,
http://xkcd.com/552/, mouse-over text)

You also do not want to discover after your test that the monitoring did not quite work, did
not monitor everything, or logged everything in a format that you cannot use. For all these
items it is usually just a matter of starting the monitors with the right options or
configuration, but once the test has run it is too late for that, and you would have to repeat
the test, which is not always possible as you may have burned your test data. So start your
monitors, collect some data even without load on the systems, open the results in your
analysis tool, and verify that you can actually use them.

5. WHAT TO TEST
5.1 Introduction
Setting up a performance test requires a lot of work, so the number of tests cannot be too
high. A performance tester needs to derive the most relevant test cases to be tested under
load.
Because only a limited set of tests can be performed, the selection of what to test is an
important task. We use methods common in software testing, such as risk analysis. As this
analysis is time consuming, cooperating with the functional test teams is often a good and
smart idea: they have already done it or are going to. But keep in mind that your focus is
more technical, so you cannot rely solely on a list delivered by other teams. To really add
value for your stakeholders, an exploratory style of testing can be used. Up to the moment
you start testing, much of what you anticipate is assumption. After the first tests you start
to see what is actually happening and what you should focus on, and it helps if you can
adapt and add tests based on what you learn.

5.2 Work together with the functional test team


A performance test has a functional base. We need to do something during the recording,
and that something has to be the right thing. We cannot test every functional possibility,
and we certainly cannot test every possibility under load. When setting up a performance
test, we need to decide what functionality to performance test and create the test cases.
Software testing is a craft and many methods exist for deciding how to test and how to
create test cases. The performance tester needs something similar: you need to select in a
clever way what to test, so that the items you test are the important ones, and you need to
be sure that you really touch those important items.
One approach is to do the analysis within the performance test team. That means you have
to do a thorough analysis of the application and its usage, and you also need to learn how
to operate the software and how users use it in real life. In most cases, however, this has
already been done by others in the test teams. Software testers will have performed these
analyses and know how to use the software. In most cases they can already, based on their
knowledge and experience, indicate which items are interesting to subject to a performance
test.

5.3 Risk based selection for performance tests


A common approach to identifying what to test is to analyze the risks. There are the risks
for the organization when there is an issue, and there is the risk, or chance, that a piece of
your software will have issues.

5.3.1 Risks for the organization


Selecting what to test is partly based on something familiar to software testers: a risk-based
analysis. If a Test Risk Analysis (TRA) is available, this can partially be reused. If the TRA
was done properly, you can identify what is important for the organization, and items that
are considered high risk are often candidates for a performance test. In this case you focus
on the problems for the organization if the software shows performance issues once it is in
production; the focus is on the consequences of the risk.

5.3.2 Risk or chance of a problem


If the TRA also analyzed thoroughly which components were likely to show problems due
to their complexity, this can be used as an indication that these items may cause performance
issues. But the TRA will usually not show which items need a performance test from a
technical point of view. To make sure that you also identify test cases that should be run
because there is a performance risk, you have to assess the chance that a part of the system
will show performance issues, regardless of how dramatic this is for the organization. So in
this case your focus is not on the consequences, but on the causes of performance issues:
what is the perceived chance that a piece of functionality will show performance issues?
For instance, if your mortgage module calculates an offer based on all the input entered,
the risk that this calculation is costly in CPU power can be high. There are many ways to
assess this. One of the best is common sense plus the suggestions of project members such
as developers and testers. Developers can often identify sections that have complex code
and queries, perform many actions sequentially, and so on. Software testers usually know
very early during test execution which items show slower response times and could
therefore turn into a possible performance issue.

5.3.3 Combined risk


As we have to make a small selection of functionality to performance test, the selection is
usually based on both types of risk. You will look for things that are important for the
business and at the same time have a significant risk of actually having performance issues.

5.4 Making and details of test cases


An identified test still needs to be turned into a real test case. The context of your test is the
major factor in how you do that. Sometimes you are required to keep very detailed records
of your tests. If you are lucky, the level of detail is up to you or your team. If you have the
freedom to decide, make sure that the level of detail in the test cases is sufficient for you.
How much detail you need depends on how much preparation time there is before a system
that can actually be tested becomes available. If you have a lot of preparation time, you need
enough detail to enable you to quickly record and create the test script in the load generator;
in that case it would be a shame if you still had to find out how to operate the application
and how to create the case. When the system is available from day one, for instance when
you are called in late and there is already a stable system, the details often end up in the
test script of the load generator anyway, depending on the demands of the organization.
In both cases we also need to keep in mind that we may need some details if we do find an
issue and have to analyze it. If the way your load generator works does not allow you to
retrieve information on the actual test, the details of the test performed must be documented.
No developer can fix an issue that is described as "we did something with your application
with 50 users"; the something is required.

6. A FEW WORDS
As you can imagine, these pages are just the beginning; there are many topics that are not
covered in this document or by the Performance Testing Fundamentals presentation from Fii
Practic, 2017 edition.
My colleague says that he can talk about performance testing for hours; I say that it is a
never-ending story, with new and intriguing things always waiting to be discovered.
There is more to learn: the hardware platform, the software platform, how to investigate a
performance issue, how to take a deep look into application code or database queries and
determine what went wrong, how a defect can be fixed, and what the impact of a hardware
resource is on performance.
I have heard many times (sometimes even from myself) that performance testing is an art,
but this is a misleading notion. As you saw in our presentation, it is not about art; it is about
statistics, experience, understanding the system under test, getting inside the end user's
head, dedication, patience and so on.
In this presentation we did not get the chance to talk about queuing theory, Amdahl's law
(which can be adapted for evaluating system performance), or about NETFLIX performance
testing (one of the most advanced and efficient flows that I have ever read about).
I consider that at this point you have enough knowledge about performance testing in
general to design and conduct software performance tests. Still, there is a lot to learn, and
if you want to go beyond performance testing fundamentals you should not stop here.

7. GLOSSARY

Performance: measures how fast and efficiently a software system can complete certain
computing tasks.
Scalability: measures the trend of performance with increasing load. Performance and
scalability are inseparable from each other; it does not make sense to talk about scalability
if a system does not perform, but a system may perform and yet not scale.
Response time: the amount of time it takes for the application to respond to a user request.
In performance testing this is the time between the end user requesting a response from the
application and a complete reply arriving at the user's workstation. The response can be
synchronous (blocking) or asynchronous (the user does not have to wait for the reply to
complete before resuming interaction with the application).
Throughput: the rate at which application-oriented events occur, e.g. the number of hits on
a web page within a given period.
Utilization: the percentage of the theoretical capacity of a resource that is being used, e.g.
how much CPU is consumed by the application on a web server when 1,000 visitors are
active.
Think time: the time a user spends preparing input and digesting the content of the
response before initiating the next action towards the system (in contrast to response time,
which is the time the system spends processing the request).

8. BIBLIOGRAPHY
Witteveen, Albert. Performance Testing: A Practical Guide
Liu, Henry H. Software Performance and Scalability: A Quantitative Approach
Molyneaux, Ian. The Art of Application Performance Testing: From Strategy to Tools
Meier, J.D. Performance Testing Guidance for Web Applications.
https://msdn.microsoft.com/en-us/library/bb924359.aspx

https://www.websitemagazine.com/blog/5-reasons-visitors-leave-your-website
https://blog.kissmetrics.com/loading-time/
http://www.hobo-web.co.uk/your-website-design-should-load-in-4-seconds/
