
Essence of Performance and Load Testing

April 30th, 2009
Gihan B Madawala

Argus Technologies Ltd, CREO Products Inc, Chancery Software Ltd, Meridian Project Systems Ltd, Modular Mining Systems Canada Ltd.

Topics
Why do we need to test performance
Myths surrounding performance testing
Performance / scalability testing
Single-user performance testing
Worst-case scenario
Performance bottlenecks
Performance and scalability targets
Performance testing lab
Performance team

Topics (contd.)

Microsoft testing tools
Commercial performance testing tools
Process of analyzing performance and scalability issues
Tool selection strategy
Problems with manual performance testing
Tool evaluation process
Some performance-related issues/solutions
Books & reference materials

Bad Comments / Questions / Answers

"It's the first release of this feature, so of course it is going to be slow!"
"It only takes a couple of seconds, so what is the big deal?"
Q: Why is this page so slow?
A: It is because we have TWO users in the system.
Q: How many users can we support with this system?
A: 20 users per server.

Good Comments / Questions / Answers

"The product is built on a solid, current architecture and is easy to work with."
"The system is flexible to build upon and can be managed easily by our personnel."
"The product has the architecture required to function in large projects. It not only performs well, it scales."
Q: How many users can we support with our system?
A: 400 users per server.

Why do we need to test performance


Studies show performance is one of the most important qualities of an application
Experience shows performance issues are the ones most likely to surprise you once a system is released into live operation
There is a general lack of knowledge about performance and scalability testing

Myths surrounding Performance testing


We do not need to measure system performance
Our system is so impressive with all its features that users will not mind if it is a little slow
We do not really need precise measurement goals
Performance testing has to be done late in the project
Anyone can measure performance; it does not require any skills or expertise
Any test lab/tool is about the same as any other, and you can mimic real-life usage of the app

Performance testing should start early in the project and should be an ongoing process
Performance and testing practices

Practice followed | Operational performance was acceptable (48%) | Operational performance was NOT acceptable (52%)
Reviewed or simulated performance during requirements and design phases | 21% | 0%
Tested early in development | 35% | 6%
Tested late in development | 38% | 26%
Did not do performance or load testing | 6% | 60%

Performance testing / Scalability Testing

The purpose of performance testing is to measure a system's performance against agreed requirements
Scalability testing (load testing, stress testing) is the measurement of performance under heavy load: peak or worst-case conditions
Performance is usually measured as a weighted mix of response time, throughput (network bandwidth), and availability
Response time: a measure of how long the system takes to complete a task or group of tasks (sketched below)
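As a minimal illustration of these two measures, the sketch below times a placeholder task in Python; `do_task` is a hypothetical stand-in for the real operation under test:

```python
import time

def do_task():
    # Placeholder for the operation under test (hypothetical).
    sum(range(100_000))

def measure(task, runs=50):
    """Return (average response time in seconds, throughput in tasks/sec)."""
    start = time.perf_counter()
    for _ in range(runs):
        task()
    elapsed = time.perf_counter() - start
    return elapsed / runs, runs / elapsed

avg_response, throughput = measure(do_task)
print(f"avg response time: {avg_response * 1000:.2f} ms, "
      f"throughput: {throughput:.1f} tasks/sec")
```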

Single user performance testing

How important is it to measure single-user performance?
It gives an early indication of system design feasibility
It can be measured with the existing set of tools available
The cost of single-user testing is low, and it can be done prior to multiple-user testing

Worst-case scenario
If the tests pass and the system scales well in the worst case, we know for sure that the application will perform better under a lesser load.
Full data population is necessary.
Project example (worst case), sketched below:
- Larger data packet size
- More frequent data packets
- More client threads
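A minimal sketch of driving such a worst-case load in Python; the packet size, send interval, thread count, and the `send` function are all hypothetical stand-ins for the real client path:

```python
import threading
import time

# Hypothetical worst-case parameters, mirroring the project example above.
PACKET_SIZE = 64 * 1024   # larger data packet size (bytes)
SEND_INTERVAL = 0.01      # more frequent packets (seconds between sends)
CLIENT_THREADS = 200      # more client threads

def send(payload):
    # Placeholder for the real client send/receive path (hypothetical).
    len(payload)

def client(packets=100):
    payload = b"x" * PACKET_SIZE
    for _ in range(packets):
        send(payload)
        time.sleep(SEND_INTERVAL)

threads = [threading.Thread(target=client) for _ in range(CLIENT_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"drove {CLIENT_THREADS} clients x 100 packets of {PACKET_SIZE} bytes")
```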

Performance and Scalability Targets


Achieve the specified response time for one user
Maintain the response time within a specified range when the system is under load
Maximize the number of active users per available bandwidth
Minimize network bandwidth usage
CPU usage of machines should stay below 80% under heavy load (see the sketch below)
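The CPU target can be checked mechanically while a load test runs. A minimal sketch using the third-party psutil library (PERFMON counters serve the same purpose on Windows); the sample count and interval are arbitrary choices:

```python
import psutil  # third-party: pip install psutil

CPU_TARGET = 80.0  # percent, from the target list above

def check_cpu(samples=10, interval=1.0):
    """Sample CPU usage while the load test runs; flag target breaches."""
    readings = []
    for _ in range(samples):
        usage = psutil.cpu_percent(interval=interval)  # blocks for `interval`
        readings.append(usage)
        if usage >= CPU_TARGET:
            print(f"WARNING: CPU at {usage:.0f}% (target is < {CPU_TARGET:.0f}%)")
    return sum(readings) / len(readings)

print(f"average CPU under load: {check_cpu():.1f}%")
```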

Performance Bottlenecks

Client CPU
Database server CPU
Client and server memory
Network bandwidth
Disk I/O
Transaction processing speed
Serialization and de-serialization speed (see the micro-benchmark below)
Network latency
Limitations of 2nd/3rd-party servers/add-ins
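Some of these bottlenecks are easy to isolate with micro-benchmarks. A minimal sketch timing serialization/de-serialization round trips for two built-in Python serializers, with a hypothetical record standing in for real payloads:

```python
import json
import pickle
import time

# A representative record; real test data should mirror production payloads.
record = {"id": 1, "name": "client-01", "values": list(range(1000))}

def bench(name, dump, load, runs=1000):
    """Time serialize + deserialize round trips for one serializer."""
    start = time.perf_counter()
    for _ in range(runs):
        load(dump(record))
    elapsed = time.perf_counter() - start
    print(f"{name}: {runs / elapsed:.0f} round trips/sec")

bench("json", json.dumps, json.loads)
bench("pickle", pickle.dumps, pickle.loads)
```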

Performance Testing Lab


Simulation of realistic scenarios
Having adequate hardware
Having feature-rich testing tools
Test automation and unit tests
Performance testing expertise
Creating real test data

Performance Team
Performance Test Engineer
Senior Developer
Database Specialist
Hardware Engineer
Network Specialist

Microsoft Testing Tools


PERFMON
SQL Profiler
ACT (Application Center Test)
WAST (Web Application Stress Tool)
Team Test (Visual Studio Team System)
WinDbg / DDK

Some Industry Leading Performance and Scalability Tools


LoadRunner (Mercury, now HP) *
SilkPerformer (Segue, now Borland) *
QALoad
ANTS
OpenSTA (open source)
e-Load (Empirix)
Shunra
Spirent TestCenter

Process of Analyzing Performance and Scalability Issues

Non-invasive actions (low-hanging fruit)


1. Analyzing performance counters
2. Profiling (see the sketch below)
3. Load-testing tool analysis, in two steps:
   - Run concurrent users
   - Run a specific number of users continuously
4. Simulate different network conditions
5. Tracing and debugging .NET components
6. Use of WinDbg / DDK
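Step 2 (profiling) is tool-specific; for the .NET components above one would use a .NET profiler, but the idea carries across languages. A minimal sketch using Python's built-in profiler, with `hot_path` as a hypothetical suspect code path:

```python
import cProfile
import io
import pstats

def hot_path():
    # Placeholder for the code path suspected of being slow (hypothetical).
    return sorted(str(i) for i in range(50_000))

profiler = cProfile.Profile()
profiler.enable()
hot_path()
profiler.disable()

# Print the five functions with the highest cumulative time.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```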

Invasive actions
1. Refactoring
2. Architectural change

Tool Selection Strategy

Area to Test | GUI Automation (Feature Testing) | Single-User Performance | Multiple-User Performance | Stress/Stability Testing
Web Clients | TestPartner/TestComplete | Manual/Benchmark Tools/Visual Studio | LoadRunner/SilkPerformer/TestComplete | LoadRunner/SilkPerformer/TestComplete
Office Client | TestPartner/TestComplete | Manual/Benchmark Tools/Visual Studio | LoadRunner/SilkPerformer/TestComplete | LoadRunner/SilkPerformer/TestComplete
Central Server | - | Manual/Benchmark Tools/Visual Studio | LoadRunner/SilkPerformer | LoadRunner/SilkPerformer
DB Server | TestPartner/TestComplete | Manual/Benchmark Tools/Visual Studio | LoadRunner/SilkPerformer | LoadRunner/SilkPerformer
Network Infrastructure | - | Shunra/Spirent | Shunra/Spirent | Shunra/Spirent
Hardware Interfaces | - | Internal Simulators | Internal Simulators | Internal Simulators
Mobile Client (Windows CE) | TestComplete | Manual/Benchmark Tools/Visual Studio | Internal Simulators | Internal Simulators

Problems with Manual Performance Testing in the Lab

Difficult to mimic concurrent operations
Cannot be repeated easily
Difficult to set up and organize
The lab can be expensive
Very time-consuming
Cannot afford to have large-scale performance labs in every development location
Full NFRs cannot be tested

Commercial Tools vs. Internal Tools

Consider the scope of features needed
Estimate the ROI
How soon do you need the tool?
Does the test team have performance testing expertise?
Customer requirements for benchmarks and proper test results

Tool Evaluation Process

Phase 1: Fact-finding on requirements and suitable vendors, and preparing a proposal for project stakeholders (5 weeks)
Phase 2: Tool demonstrations in the lab (6 weeks; 2 weeks per vendor)
Phase 3: Complete evaluation process (1 week)
Phase 4: Negotiation and purchase of a tool (4 weeks)
Phase 5: Tool deployment (1 week)

Issue #1: Testing Non-Functional Requirements

Unless it is tested, we cannot guarantee that the application is going to work at the stated NFRs. That is, if we say that our application can support 500 mobile clients, we need to test those scenarios before claiming them as the limits of our system. We cannot test 50 clients and claim the system should work with 500 clients by extrapolating the performance parameters.

Solution #1: Always test/simulate full NFRs

Shouldn't extrapolate single-user data to multiple users
Try to test with the worst-case scenario
Different bottlenecks restrain achieving better performance/scalability
Use a proper testing lab
Use tools to simulate and test full NFRs (a minimal concurrency sketch follows below)
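A minimal sketch of the last point: drive the full stated user count concurrently and report percentiles, rather than extrapolating from a smaller run. The user count, the body of `user_session`, and the workload are hypothetical:

```python
import concurrent.futures
import statistics
import time

FULL_NFR_USERS = 500  # test the stated NFR itself, not a fraction of it

def user_session(user_id):
    """Stand-in for one scripted user session (hypothetical)."""
    start = time.perf_counter()
    sum(range(100_000))  # placeholder for the real request sequence
    return time.perf_counter() - start

with concurrent.futures.ThreadPoolExecutor(max_workers=FULL_NFR_USERS) as pool:
    times = sorted(pool.map(user_session, range(FULL_NFR_USERS)))

p95 = times[int(len(times) * 0.95) - 1]
print(f"median: {statistics.median(times) * 1000:.1f} ms, "
      f"95th pct: {p95 * 1000:.1f} ms")
```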

Issue #2: Cost of Fixing Performance Defects

We need to start testing performance and scalability early in the development life cycle. The earlier we do this, the easier it is to find solutions and the less costly they are to the project.

Solution #2: Reducing Project Costs

Issue #3: Unable to Gauge Performance

Performance optimizations shouldn't be integrated haphazardly. We need mechanisms and tools to monitor performance and scalability regularly. If not with every weekly build, we should monitor at least once a month, with a set of test scenarios covering a good part of the application's operations. It would be ideal if we could automate this.

Solution #3: Regular Performance Measurements, the Heartbeat of the Project
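A minimal sketch of such an automated heartbeat: run a fixed scenario on each build, store the result, and flag regressions against the previous build. The history file, the 20% threshold, and the scenario are all hypothetical:

```python
import json
import time
from pathlib import Path

HISTORY = Path("perf_history.json")  # hypothetical per-build results store
THRESHOLD = 1.20                     # flag runs >20% slower than last build

def run_scenario():
    """Stand-in for one scripted test scenario (hypothetical)."""
    start = time.perf_counter()
    sum(range(1_000_000))
    return time.perf_counter() - start

history = json.loads(HISTORY.read_text()) if HISTORY.exists() else []
elapsed = run_scenario()
if history and elapsed > history[-1]["seconds"] * THRESHOLD:
    print(f"REGRESSION: {elapsed:.3f}s vs "
          f"{history[-1]['seconds']:.3f}s in the last build")
history.append({"build": len(history) + 1, "seconds": elapsed})
HISTORY.write_text(json.dumps(history, indent=2))
```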

Issue #4: Unrealistic/Untested NFRs

We need to better scrutinize the NFRs coming from Marketing, to verify that we can actually support them with the technology and platform selected for the product.

Solution #4: Verify NFRs

Issue #5: Agile Performance Testing

If an Agile method has been adopted for the development process, try to do performance and scalability testing in each sprint, tied to user stories. This gives developers immediate feedback on the performance status of the particular area.

Solution #5: Traditional Performance Testing

Solution #5: Agile Performance Testing

Issue #6: Testing with Unrealistic Data

We need to test systems with data that is as realistic as possible. We should create databases of different sizes/profiles and benchmark results for each category. We can develop these data sets together with the development team, as they are important for developers as well.

Solution #6: Data Generation

Programmatically generate application data while preserving data integrity (see the sketch below)
Do this for different profiles (standard, heavy, and ceiling)
Update and maintain this data
This is a responsibility of both the test and dev teams

Issue #7: Lack of Customer Feedback Until the End of the Project

In many projects we find major issues because our process leaves no room for getting customer feedback early in the project.

Solution #7: More Input from the Support Department and Customers

Provide the implementation/support department with working software/systems before the application is transitioned, to collect feedback
Get customer feedback before the release, if possible
Expose selected dev and test team members to real customer environments where possible

Issue #8: Growing Customer Demands

Once a customer buys your product, over time their requirements outgrow the original application requirements. Mostly this has to do with handling an increasing amount of data; some customers may require keeping as much as 10 years' worth of data in the running system. This can be a major challenge if not planned for properly.

Solution #8: Capacity Planning

Collect data from customers and the field
Consolidate this data into 6-month/1-year targets
Focus most on the most frequent and most important features
Get the performance team started on these targets early
Do capacity planning for both software and hardware/network (a simple projection follows below)
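A minimal sketch of the arithmetic behind such a plan; every figure is a hypothetical stand-in for numbers actually collected from the field and measured in the lab:

```python
# A minimal capacity projection with hypothetical inputs.
current_rows = 5_000_000   # rows in the busiest table today
monthly_growth = 0.04      # 4% month-over-month growth observed in the field
months = 12                # planning horizon: one year
avg_row_bytes = 512        # average row size measured in the lab

projected_rows = current_rows * (1 + monthly_growth) ** months
projected_gib = projected_rows * avg_row_bytes / 2**30

print(f"projected rows in {months} months: {projected_rows:,.0f}")
print(f"estimated table size: {projected_gib:.1f} GiB")
```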

Microsoft Performance Testing Labs

Can be used for benchmarking
Can use testing tools with thousands of user licenses
Can draw on Microsoft domain experts in performance
A worthwhile investment toward improving system performance

Books & Reference materials


MSDN articles
Patterns & Practices: Performance Tuning MS .NET Applications (by the MS ACE Team)
Improving .NET Application Performance and Scalability (Rico Mariani, Scott Barber)

Discussion

Q&A
