Argus Technologies Ltd, CREO Products Inc, Chancery Software Ltd, Meridian Project Systems Ltd, Modular Mining Systems Canada Ltd.
Topics
Why do we need to test performance?
Myths surrounding performance testing
Performance / Scalability testing
Single User Performance testing
Worst-case scenario
Performance bottlenecks
Performance and Scalability Targets
Performance Testing Lab
Performance Team
Topics (contd.)
Microsoft Testing Tools
Commercial Performance Testing Tools
Process of analyzing performance and scalability issues
Tool selection strategy
Problems with manual performance testing
Tool Evaluation Process
Some performance-related issues/solutions
Books & Reference materials
Bad comments/Questions/Answers
"It's the first release of this feature, so of course it is going to be slow!"
"It only takes a couple of seconds, so what's the big deal?"
Q: Why is this page so slow?
A: It is because we have TWO users in the system.
Q: How many users can we support with this system?
A: 20 users per server.
Good comments/questions/Answers
"The technology of the product is based on a solid, current architecture and is easy to work with."
"The system is flexible to build upon and can be managed easily by our personnel."
"The product has the architecture required to function in large projects. It not only performs well, but it scales."
Q: How many users can we support with our system?
A: 400 users per server.
Performance testing should start early in the project and should be an ongoing process.
Performance and testing practices
Projects that reviewed or simulated performance during the requirements and design phases, versus projects that did not:
- Tested early in development: 35% vs. 6%
- Tested late in development: 38% vs. 26%
- Did not do performance or load testing: 6% vs. 60%
The purpose of performance testing is to measure a system's performance against agreed requirements. Scalability testing (load testing, stress testing) measures performance under heavy load: peak or worst-case conditions. Performance is usually measured as a weighted mix of response time, throughput (network bandwidth), and availability. Response time is a measure of how long the system takes to complete a task or group of tasks.
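To make these metrics concrete, here is a minimal Python sketch (not from the original deck) that times a batch of sequential requests and derives average response time and throughput. The URL and request count are hypothetical placeholders.

```python
import time
from urllib.request import urlopen

URL = "http://localhost:8080/report"  # hypothetical endpoint; substitute a real one
N_REQUESTS = 50

def measure(url: str, n: int) -> None:
    """Time n sequential requests; report average response time and throughput."""
    durations = []
    start = time.perf_counter()
    for _ in range(n):
        t0 = time.perf_counter()
        with urlopen(url) as resp:
            resp.read()  # include transfer time in the measurement
        durations.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    print(f"average response time: {1000 * sum(durations) / n:.1f} ms")
    print(f"throughput: {n / elapsed:.1f} requests/s")

if __name__ == "__main__":
    measure(URL, N_REQUESTS)
```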
How important is it to measure single-user performance?
- Gives an early indication of system design feasibility
- Can be measured with the existing set of tools available
- The cost of single-user testing is low, and it can be done prior to multiple-user testing
Worst-case scenario
If the tests pass and scale well under the worst case, we know for sure that the application will perform better under a lighter load. Full data population is necessary. Project example (worst case), captured in the sketch below:
- Larger data packet size
- More frequent data packets
- More client threads
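To illustrate how a worst-case profile differs from a typical one, the following sketch captures the three example parameters as a load profile. All names and numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class LoadProfile:
    packet_size_bytes: int   # size of each data packet
    packets_per_second: int  # send frequency per client
    client_threads: int      # number of concurrent client threads

# Hypothetical numbers, for illustration only.
typical = LoadProfile(packet_size_bytes=512, packets_per_second=2, client_threads=50)

# Worst case: larger packets, sent more frequently, from many more clients.
worst_case = LoadProfile(packet_size_bytes=4096, packets_per_second=20, client_threads=500)
```

If the system meets its targets under worst_case with a fully populated database, it can be expected to do at least as well under typical load.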
Performance Bottlenecks
- Client CPU
- Database server CPU
- Client and server memory
- Network bandwidth
- Disk I/O
- Transaction processing speed
- Serialization and deserialization speed
- Network latency
- Limitations of 2nd/3rd-party servers/add-ins
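Several of these bottlenecks can be watched at the host level while a test runs. Here is a rough Python sketch using the third-party psutil library to sample CPU, memory, disk I/O, and network traffic; the duration and interval are arbitrary.

```python
import time
import psutil  # third-party: pip install psutil

def sample_metrics(duration_s: int = 10, interval_s: int = 2) -> None:
    """Print CPU, memory, disk, and network usage while a test scenario runs."""
    disk0, net0 = psutil.disk_io_counters(), psutil.net_io_counters()
    end = time.time() + duration_s
    while time.time() < end:
        cpu = psutil.cpu_percent(interval=interval_s)  # blocks for interval_s
        mem = psutil.virtual_memory().percent
        disk1, net1 = psutil.disk_io_counters(), psutil.net_io_counters()
        disk_mb = (disk1.read_bytes + disk1.write_bytes
                   - disk0.read_bytes - disk0.write_bytes) / 1e6
        net_mb = (net1.bytes_sent + net1.bytes_recv
                  - net0.bytes_sent - net0.bytes_recv) / 1e6
        print(f"cpu={cpu:5.1f}%  mem={mem:5.1f}%  disk={disk_mb:6.2f} MB  net={net_mb:6.2f} MB")
        disk0, net0 = disk1, net1

if __name__ == "__main__":
    sample_metrics()
```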
Performance Team
Performance Test Engineer
Senior Developer
Database Specialist
Hardware Engineer
Network Specialist
Invasive Action
1. Refactoring
2. Architectural change
[Performance testing lab diagram: system components (office client, web clients, central server, DB server, network infrastructure, hardware interfaces, mobile client on Windows CE) mapped to testing tools: TestPartner/TestComplete for the clients, LoadRunner/SilkPerformer/TestComplete for stress/stability testing of the servers, SUNRA/SPIRENT for the network infrastructure, and internal simulators for hardware interfaces and mobile clients.]
Problems with manual performance testing:
- Hard to mimic concurrent operations (see the concurrency sketch below)
- Cannot be repeated easily
- Difficult to set up and organize
- A lab could be expensive
- Very time consuming
- Cannot afford large-scale performance labs in every development location
- Full NFRs cannot be tested
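By contrast, even a small script can mimic concurrent users repeatably, which manual testing cannot. This sketch uses Python's ThreadPoolExecutor with a placeholder standing in for a real scripted client operation.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_user(user_id: int) -> float:
    """One scripted 'user'; replace the sleep with a real client call."""
    t0 = time.perf_counter()
    time.sleep(0.05)  # placeholder for the actual operation under test
    return time.perf_counter() - t0

def run_concurrent_users(n_users: int) -> None:
    """Launch n_users scripted users at once: repeatable, unlike manual clicking."""
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        durations = list(pool.map(simulated_user, range(n_users)))
    print(f"{n_users:3d} users: avg {1000 * sum(durations) / n_users:.1f} ms per operation")

if __name__ == "__main__":
    for n in (1, 10, 50):
        run_concurrent_users(n)
```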
Tool Evaluation Process
Phase 1: Fact-finding on requirements and suitable vendors, and preparing a proposal for project stakeholders (5 weeks)
Phase 2: Tool demonstrations in the lab (6 weeks; 2 weeks per vendor)
Phase 3: Complete evaluation process (1 week)
Phase 4: Negotiation and purchase of a tool (4 weeks)
Phase 5: Tool deployment (1 week)
Issue #1
Unless it is tested, we cannot guarantee that the application is going to work for the stated NFRs. That is, if we say that our application can support 500 mobile clients, we need to test that scenario before claiming it as a limit of our system. We cannot test 50 clients and then claim the system will work with 500 clients by interpolating the performance parameters.
Solution #1:
- Don't interpolate single-user data to multiple users (the queueing sketch below shows why)
- Try to test with the worst-case scenario
- Different bottlenecks restrain achieving better performance/scalability
- Build a proper testing lab
- Use tools to simulate and test the full NFRs
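Queueing effects are why interpolation fails: response time grows nonlinearly as a server approaches saturation. The sketch below uses the classic M/M/1 response-time formula R = S / (1 - rho), with a hypothetical 10 ms service time, purely as an illustrative model.

```python
SERVICE_TIME_S = 0.010  # 10 ms per request with no contention (hypothetical)

def mm1_response_time(arrival_rps: float) -> float:
    """M/M/1 queue: R = S / (1 - rho), where rho = arrival rate x service time."""
    rho = arrival_rps * SERVICE_TIME_S
    if rho >= 1:
        raise ValueError("offered load exceeds capacity; response time is unbounded")
    return SERVICE_TIME_S / (1 - rho)

light, heavy = 9.0, 90.0  # requests/s; heavy is 10x light
measured = mm1_response_time(light)        # ~11 ms: looks nearly flat
model_at_heavy = mm1_response_time(heavy)  # ~100 ms: roughly 9x worse
print(f"at {light:.0f} rps: {1000 * measured:.0f} ms")
print(f"naive 10x extrapolation: still ~{1000 * measured:.0f} ms")
print(f"queueing model at {heavy:.0f} rps: {1000 * model_at_heavy:.0f} ms")
```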
Issue #2
We need to start testing performance and scalability early in the development life cycle. The earlier you do this, the easier it is to find solutions and the less costly they are to the project.
Optimizations shouldn't be integrated haphazardly. We need a mechanism and tools to monitor performance and scalability regularly. If not with weekly builds, we should monitor at least once a month with a set of test scenarios covering a good part of the application's operations. Ideally, we would automate this.
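A minimal sketch of such an automated monitoring step, assuming a JSON baseline file and a placeholder scenario function; a real setup would plug in scripted scenarios and run from the build system.

```python
import json
import time
from pathlib import Path

BASELINE_FILE = Path("perf_baseline.json")  # hypothetical location
TOLERANCE = 1.20  # flag runs more than 20% slower than the recorded baseline

def run_scenario() -> float:
    """Placeholder for one scripted test scenario; returns elapsed seconds."""
    t0 = time.perf_counter()
    time.sleep(0.1)  # replace with the real operation under test
    return time.perf_counter() - t0

def check_against_baseline(name: str) -> None:
    elapsed = run_scenario()
    baseline = json.loads(BASELINE_FILE.read_text()) if BASELINE_FILE.exists() else {}
    previous = baseline.get(name)
    if previous is not None and elapsed > previous * TOLERANCE:
        print(f"REGRESSION: {name} took {elapsed:.3f}s vs baseline {previous:.3f}s")
    else:
        baseline[name] = min(elapsed, previous) if previous else elapsed  # keep best time
        BASELINE_FILE.write_text(json.dumps(baseline, indent=2))
        print(f"OK: {name} {elapsed:.3f}s")

if __name__ == "__main__":
    check_against_baseline("open_project_report")
```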
We need to better scrutinize the NFR requirements coming from Marketing, to make sure we can actually support them with the technology and platform selected for the product.
If an Agile method has been adopted for the development process, try to do performance and scalability testing in each sprint, tied to user stories. This gives developers immediate feedback on the performance status of that particular area; a minimal per-sprint check is sketched below.
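One lightweight way to tie such a check to a user story is an ordinary test that asserts a response-time target, runnable with pytest in every sprint build. The story ID, operation, and threshold below are hypothetical.

```python
# test_perf_story_1234.py -- hypothetical story ID and target
import time

MAX_SECONDS = 2.0  # agreed response-time target for this story (hypothetical)

def open_work_order() -> None:
    """Placeholder for the operation delivered by this sprint's user story."""
    time.sleep(0.3)  # replace with the real client call

def test_open_work_order_meets_target():
    t0 = time.perf_counter()
    open_work_order()
    assert time.perf_counter() - t0 < MAX_SECONDS
```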
We need to test systems with data that is as realistic as possible. We should create databases of different sizes/profiles and benchmark results for each category. We can develop these data sets together with the development team, as they are important for developers as well.
We need to programmatically generate application data while keeping data integrity, do this for different profiles (standard, heavy, and ceiling), and update and maintain this data. This is a responsibility of both the test and dev teams. (A generation sketch follows.)
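A sketch of profile-driven data generation, assuming a toy two-table SQLite schema. It preserves referential integrity by inserting parent rows first and pointing every child row at a real parent; the row counts per profile are hypothetical.

```python
import random
import sqlite3

# Hypothetical row counts; tune these to match real customer databases.
PROFILES = {"standard": 1_000, "heavy": 100_000, "ceiling": 1_000_000}

def generate(profile: str, db_path: str) -> None:
    """Populate a parent/child schema for the given data profile."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS projects (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS tasks (id INTEGER PRIMARY KEY, "
        "project_id INTEGER REFERENCES projects(id), hours REAL)"
    )
    n_tasks = PROFILES[profile]
    project_ids = []
    for i in range(max(1, n_tasks // 100)):  # roughly 100 tasks per project
        cur = conn.execute("INSERT INTO projects (name) VALUES (?)", (f"project-{i}",))
        project_ids.append(cur.lastrowid)
    rows = ((random.choice(project_ids), random.uniform(1, 80)) for _ in range(n_tasks))
    conn.executemany("INSERT INTO tasks (project_id, hours) VALUES (?, ?)", rows)
    conn.commit()
    conn.close()

if __name__ == "__main__":
    generate("standard", "perf_standard.db")
```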
In projects we find major issues because our process does not have room to get customer feedback early in the project.
Provide the Implementation/Support department with the working software/system before the application is transitioned, in order to collect feedback. Get customer feedback before the release if possible. Expose identified dev and test team members to real customer environments where possible.
Once a customer buys your product, over time their requirements outgrow the original application requirements. Mostly this has to do with handling increasing amounts of data; some customers may even require keeping 10 years' worth of data in the running system. This can be a major challenge if not planned properly.
Discussion
Q&A