
1. What is the difference between Functional Requirement and Non-Functional Requirement?

Functional Requirements specify what the system or application SHOULD DO, whereas Non-Functional Requirements specify how the system or application SHOULD BE.

Some Functional Requirements are:
- Authentication
- Business rules
- Historical data
- Legal and regulatory requirements
- External interfaces

Some Non-Functional Requirements are:
- Performance
- Reliability
- Security
- Recovery
- Data integrity
- Usability

2. How are Severity and Priority related to each other?

Severity tells the seriousness/depth of the bug, whereas Priority tells which bug should be rectified first.
Severity - Application point of view
Priority - User point of view

3. Explain the different types of Severity?

- User Interface defects - Low
- Boundary related defects - Medium
- Error handling defects - Medium
- Calculation defects - High
- Interpreting data defects - High
- Hardware failures and problems - High
- Compatibility and intersystem defects - High
- Control flow defects - High
- Load conditions (memory leakage under load testing) - High

4. What is the difference between Priority and Severity?

The terms Priority and Severity are used in bug tracking to communicate the importance of a bug to the team and to decide when to fix it.

Severity is judged from the application point of view; Priority from the user point of view.

Severity (tells the seriousness/depth of the bug): the severity status explains how badly the deviation affects the build. The severity is defined by the tester based on the written test cases and the functionality.
Example: if an application or a web page crashes when a remote link is clicked, clicking that link is a rare user action, but the impact of the crash is severe, so the severity is high and the priority is low.

Priority (tells which bug should be rectified first): the priority status is set by the tester for the developer, mentioning the time frame in which to fix the defect. If high priority is mentioned, the developer has to fix it at the earliest.
The priority status is set based on the customer requirements.
Example: if the company name is misspelled on the home page of a website, then the priority is high and the severity is low.

Severity describes the bug in terms of functionality; Priority describes the bug in terms of the customer. A few examples:

- High Severity and Low Priority: the application doesn't allow a customer-expected configuration.
- High Severity and High Priority: the application doesn't allow multiple users.
- Low Severity and High Priority: no error message to prevent a wrong operation.
- Low Severity and Low Priority: an error message has a complex meaning.

Or, a few examples:

High Severity - Low Priority: Suppose you try the wildest or weirdest of operations in a piece of software (say, one to be released the next day) which a normal user would not perform, and this renders a run-time error in the application; the severity would be high. The priority would be low, because the steps which rendered this error will in all likelihood never be performed by a real user.

Low Severity - High Priority: An example would be a spelling mistake in the name of the website you are testing. Say the name is supposed to be Google and it is spelled there as 'Gaogle'. Though it doesn't affect the basic functionality of the software, it needs to be corrected before the release. Hence the priority is high.

High Severity - High Priority: A bug which is a show stopper, i.e. a bug due to which we are unable to proceed with our testing. An example would be a run-time error during the normal operation of the software which causes the application to quit abruptly.

Low Severity - Low Priority: Cosmetic bugs.

What is Defect Severity? A defect is a product anomaly or flaw, i.e. a variance from the desired product specification. The classification of a defect based on its impact on the operation of the product is called Defect Severity.

5. What is Bucket Testing?

Bucket testing (also known as A/B testing) is mostly used to study the impact of different product designs on website metrics: two versions are run simultaneously on a single page or a set of web pages to measure differences in click rates, interface behaviour and traffic.

6. What is Entry and Exit Criteria in Software Testing?
Entry Criteria are the items that must be in place before testing of a system begins, such as:
- SRS (Software Requirement Specification)
- FRS (Functional Requirement Specification)
- Use cases
- Test cases
- Test plan

Exit Criteria ensure that testing is complete and the application is ready for release, for example:
- Test summary report
- Metrics
- Defect analysis report

7. What is Concurrency Testing?

Concurrency testing (also commonly known as multi-user testing) is used to discover the effects of accessing the application, a code module or the database by different users at the same time. It helps in identifying and measuring problems in response time and in the levels of locking and deadlocking in the application.
Example: LoadRunner is widely used for this type of testing; VuGen (Virtual User Generator) is used to add the number of concurrent users and to define how the users are added, e.g. gradual ramp-up or spike stepped.

8. Explain Statement Coverage/Code Coverage/Line Coverage?

Statement coverage (also called code coverage or line coverage) is a metric used in white box testing with which we can identify the statements that were executed and the code that was not executed because of a blockage. In this process each and every line of the code needs to be checked and executed.

Some advantages of statement coverage are:
- It verifies what the written code is expected to do and not to do.
- It measures the quality of the code written.
- It checks the flow of different paths in the program and ensures whether those paths are tested or not.

To calculate statement coverage:
Statement Coverage = Statements Tested / Total No. of Statements

9. Explain Branch Coverage/Decision Coverage?

The branch coverage (or decision coverage) metric is used to check the volume of testing done in all components. This process ensures that all the code is exercised by verifying that every branch or decision outcome (e.g. of if and while statements) is executed at least once, so that no branch leads to a failure of the application.

To calculate branch coverage:
Branch Coverage = Tested Decision Outcomes / Total Decision Outcomes

10. What is the difference between High Level and Low Level test cases?

High level test cases are those which cover the major functionality of the application (i.e. retrieve, update, display, cancel (functionality-related test cases), database test cases). Low level test cases are those related to the User Interface (UI) of the application.

11. Explain Localization testing with an example?

Localization is the process of changing or modifying an application for a particular culture or locale. This includes changes to the user interface, the graphical design, or even the initial settings according to the target culture and its requirements. Localization testing verifies how correctly the application has been changed or modified for that target culture and language. If translation of the application into the local language is required, testing should be done on each field to check that the translation is correct.
Other aspects, such as date formats and hardware and software usage (e.g. the operating system), should also be considered in localization testing.

Examples for localization testing:
In Islamic banking, all transactions and product features are based on Shariah law. Some important points to be noted in Islamic banking are:
- The bank shares the profit and loss with the customer.
- The bank cannot charge interest on the customer; instead it charges a nominal fee which is termed "Profit".
- The bank will not deal or invest in businesses like gambling, alcohol, pork, etc.
In this case, we need to test whether these Islamic banking conditions have been applied in the application or product.

In Islamic lending, both the Gregorian calendar and the Hijri calendar are followed for calculating the loan repayment schedule. The Hijri calendar, commonly called the Islamic calendar, is followed in Muslim countries and is based on the lunar cycle. It has 12 months and 354 days, which is 11 days shorter than the Gregorian calendar. In this case, we need to test the repayment schedule against both the Gregorian and the Hijri calendar.

12. Explain Risk Analysis in Software Testing?

In software testing, risk analysis is the process of identifying risks in applications and prioritizing them for testing.

In software testing some unavoidable risks may arise, such as:
- Changed or incomplete requirements
- Insufficient time allocated for testing
- Developers delaying delivery of the build for testing
- Urgency from the client for delivery
- Defect leakage due to the size or complexity of the application

To mitigate these risks, the following activities can be done:
- Conduct a risk assessment review meeting with the development team.
- Create a risk coverage profile stating the importance of each area.
- Put maximum resources on high-risk areas (e.g. allocate more testers to high-risk areas) and minimum resources on medium- and low-risk areas.
- Create a risk assessment database for future maintenance and management review.

13. What is the difference between Two Tier Architecture and Three Tier Architecture?

In a Two Tier (or Client/Server) Architecture, two layers are involved: Client and Server. The client sends a request to the server, and the server responds to the request by fetching the data. A problem with the two tier architecture is that the server cannot respond to multiple requests at the same time, which can cause data integrity issues. Client/server testing covers the two tier architecture, with the user interface at the front end and the database as the back end, with dependencies on the client, the hardware and the servers.

In a Three Tier (or Multi Tier) Architecture, three layers are involved: Client, Server and Database. The client sends a request to the server, the server sends the request to the database, the database sends the data back to the server, and the server forwards the data to the client. Web application testing covers this three tier architecture, including user interface, functionality, performance, compatibility, security and database testing.

14. What is the difference between Static Testing and Dynamic Testing?
Static Testing (done in the Verification stage): Static testing is a white box testing technique in which developers verify or test their code with the help of a checklist to find errors, without running the actually developed application or program. Code reviews, inspections and walkthroughs are mostly done in this stage of testing.

Dynamic Testing (done in the Validation stage): Dynamic testing is done by executing the actual application with valid inputs and checking for the expected output. Examples of dynamic testing methodologies are unit testing, integration testing, system testing and acceptance testing.

Some differences between static testing and dynamic testing:
- Static testing is more cost-effective than dynamic testing because it is done at the initial stage.
- In terms of statement coverage, static testing covers more area than dynamic testing in a shorter time.
- Static testing is done before code deployment, whereas dynamic testing is done after code deployment.
- Static testing is done in the verification stage, whereas dynamic testing is done in the validation stage.

15. Explain the Use Case diagram. What are the attributes of use cases?

A Use Case Diagram is an overview graphical representation of the functionality of a system. It is used in the analysis phase of a project to specify the system to be developed. In use case diagrams the whole system is described in terms of ACTORS, USE CASES and ASSOCIATIONS: the actors are the external parts of the system, such as users, computer software and hardware; the use cases are the behaviour or functionality of the system when the actors perform an action; and the associations are the lines drawn to show the connections between actors and use cases. One actor can be linked to many use cases, and one use case can be linked to many actors.
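The dynamic testing described in question 14 can be sketched in a few lines of Python. This is a minimal illustration, not from the original text: `apply_discount` is a hypothetical function invented for the example, and the checks simply execute it and compare actual against expected output, which is the essence of dynamic testing.

```python
# Hypothetical function under test (invented for illustration only).
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Dynamic testing: the code is actually executed with chosen inputs and the
# observed output is compared against the expected output.
assert apply_discount(200.0, 10) == 180.0   # valid input, expected output
assert apply_discount(50.0, 100) == 0.0     # boundary value

# Invalid input should be rejected at run time.
try:
    apply_discount(200.0, 150)
    raise AssertionError("expected ValueError for percent > 100")
except ValueError:
    pass

print("all dynamic checks passed")
```

Note how this contrasts with static testing: a code review could also spot a wrong formula, but only execution reveals actual run-time behaviour.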

16. What is Web Application Testing? Explain the different phases in Web Application Testing?

Web application testing is done on a website to check its load, performance, security, functionality, interface, compatibility and other usability-related issues. In web application testing, three phases of testing are done:

Web Tier Testing: the browser compatibility of the application is tested, e.g. for IE, Firefox and other web browsers.
Middle Tier Testing: the functionality and security issues are tested.
Database Tier Testing: the database integrity and the contents of the database are tested and verified.

17. Explain Unit Testing, Interface Testing and Integration Testing. Also explain the types of integration testing in brief?

Unit Testing: Unit testing is done to check whether the individual modules of the source code are working properly, i.e. testing each and every unit of the application separately, by the developer, in the developer's environment.

Interface Testing: Interface testing is done to check whether the individual modules communicate properly with one another as per the specifications. Interface testing is mostly used in testing the user interface of a GUI application.

Integration Testing: Integration testing is done to check connectivity by combining all the individual modules together and testing the functionality. The types of integration testing are:

1. Big Bang Integration Testing: the individual modules are not integrated until all the modules are ready; then they are run together to check whether the whole performs well. This approach has some disadvantages: defects are found at a late stage, and it is difficult to find out whether a defect arose in an interface or in a module.

2. Top Down Integration Testing: the high level modules are integrated and tested first, i.e. testing from the main module to the sub modules. In this type of testing, stubs are used as temporary stand-ins if a module is not ready for integration testing.

3. Bottom Up Integration Testing: the low level modules are integrated and tested first, i.e. testing from the sub modules to the main module. Analogous to stubs, drivers are used here as temporary modules for integration testing.
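The stub and driver ideas above can be sketched in a few lines. All names here are hypothetical, invented purely for illustration: a high-level report module is tested top-down before the real data layer exists, so a stub stands in for it, while a driver plays the role of the temporary caller used in bottom-up testing.

```python
# Hypothetical modules invented for illustration; not from the original text.

def fetch_totals_stub():
    # Stub: a temporary stand-in for an unfinished low-level module,
    # returning canned data so top-down integration testing can proceed.
    return {"sales": 1200, "refunds": 200}

def build_report(fetch_totals):
    # High-level module under test: depends on a lower-level data source.
    totals = fetch_totals()
    return f"net: {totals['sales'] - totals['refunds']}"

def driver():
    # Driver (bottom-up counterpart): a temporary caller that exercises
    # the modules before the real top-level entry point exists.
    return build_report(fetch_totals_stub)

print(driver())  # prints "net: 1000" -- the integration path works
```

The point of both stand-ins is the same: they let the integration path be exercised and verified before every real module is finished.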

18. Explain Alpha, Beta and Gamma Testing?

Alpha Testing: Alpha testing is mostly like performing usability testing, done by the in-house developers who developed the software, or by testers. Sometimes alpha testing is done by the client or an outsider in the presence of the developer and tester. The version released after alpha testing is called the Alpha Release.

Beta Testing: Beta testing is done by a limited number of end users before delivery; change requests are raised and fixed based on user feedback and reported defects. The version released after beta testing is called the Beta Release.

Gamma Testing: Gamma testing is done when the software is ready for release and meets the specified requirements; this testing is done directly, skipping all the in-house testing activities.

19. Explain the methods and techniques used for Security Testing?

Security testing can be performed at several levels: black box, white box and database.

1. Black Box level

a. Session Hijacking: Session hijacking, commonly called "IP spoofing", is where a user session is attacked on a protected network.

b. Session Prediction: Session prediction is a method of obtaining the data or session ID of an authorized user to get access to the application. In a web application the session ID can be retrieved from cookies or the URL. A session prediction attack can sometimes be recognized when a website stops responding normally or stops responding for an unknown reason.

c. Email Spoofing: Email spoofing is duplicating the email header (the "From" address) so the message looks as if it originated from the actual source; if the email is replied to, the reply lands in the spammer's inbox. By inserting commands in the header, the message information can be altered, making it possible to send a spoofed email containing information you didn't write.

d. Content Spoofing: Content spoofing is a technique of building a fake website and making the user believe that the website and its information are genuine. When the user enters a credit card number, password, SSN or other important details, the hacker captures the data and can use it for fraud.

e. Phishing: Phishing is similar to email spoofing: the hacker sends a genuine-looking mail attempting to get the personal and financial information of the user. The emails appear to have come from well known websites.

f. Password Cracking: Password cracking is used to identify an unknown or forgotten password. It can be done in two ways:
- Brute force: the hacker tries combinations of characters within a given length until one is accepted.
- Password dictionary: the hacker uses a password dictionary, which is available on various topics.

2. White Box level

a. Malicious Code Injection: SQL injection is the most popular code injection attack; the hacker attaches malicious code to the good code by inserting it into a field of the application. The motive behind the injection is to steal secured information which was intended to be used only by a set of users. Apart from SQL injection, other types of malicious code injection are XPath injection, LDAP injection and command execution injection. Similar to SQL injection, XPath injection deals with XML documents.

b. Penetration Testing:

Penetration testing is used to check the security of a computer or a network. The test process explores all the security aspects of the system and tries to penetrate it.

c. Input Validation: Input validation is used to defend applications from hackers. If input is not validated, mostly in web applications, it can lead to system crashes and database manipulation or corruption.

d. Variable Manipulation: Variable manipulation is a method of specifying or editing the variables in a program. It is mostly used to alter the data sent to a web server.

3. Database level

a. SQL Injection: SQL injection is used to hack websites by changing back-end SQL statements; using this technique the hacker can steal data from the database and also delete or modify it.

20. Explain IEEE 829 and other Software Testing standards?

IEEE 829 is a standard for software test documentation which specifies the format of a set of documents to be used in the different stages of software testing. The documents are:

Test Plan: a planning document containing information about the scope, resources, duration, test coverage and other details.
Test Design: contains information about the test pass criteria, with test conditions and expected results.
Test Case: contains information about the test data to be used.
Test Procedure: contains information about the test steps to be followed and how to execute them.
Test Log: contains details about the executed test cases, the test plans, pass/fail status, execution order, and who tested them.
Test Incident Report: contains information about a failed test, comparing the actual result with the expected result.
Test Summary Report: contains information about the testing done and the quality of the software; it also analyses whether the software has met the requirements given by the customer.

The other standards related to software testing are:
- IEEE 1008 for unit testing
- IEEE 1012 for software verification and validation
- IEEE 1028 for software inspections
- IEEE 1061 for software metrics and methodology
- IEEE 1233 for guiding SRS development
- IEEE 12207 for the software life cycle process

21. What is a Test Harness?

A test harness is a configured set of tools and test data used to test an application under various conditions, monitoring the actual output against the expected output for correctness. The benefits of a test harness are increased productivity due to process automation and improved quality of the application.

22. What is the difference between a Bug Log and Defect Tracking?

Bug Log: a document showing the number of defects (open, closed, reopened or deferred) for a particular module.
Defect Tracking: the process of tracking a defect: its symptom, whether it is reproducible or not, its priority, severity and status.

23. What are Integration Testing and Regression Testing?

Integration Testing:
- Combining the modules together to construct the software architecture.
- Tests the communication and data flow between modules.
- Both white and black box testing techniques are used.
- It is done by the developer and the tester.

Regression Testing:
- Re-execution of tests after a bug is fixed, to ensure that the build is free from bugs.
- Done after the bug fix.
- It is done by the tester.

24. Explain Peer Review in Software Testing?

Peer review is an alternative form of testing in which colleagues are invited to examine your work products for defects and improvement opportunities. Some peer review approaches are:

Inspection: a more systematic and rigorous type of peer review. Inspections are more effective at finding defects than informal reviews. For example, in Motorola's Iridium project nearly 80% of the defects were detected through inspections, where only 60% of the defects were detected through formal reviews.

Team Review: a planned and structured approach, but less formal and less rigorous than an inspection.

Walkthrough: an informal review in which the work product's author describes it to some colleagues and asks for suggestions. Walkthroughs are informal because they typically do not follow a defined procedure, do not specify exit criteria, require no management reporting, and generate no metrics. In other words, a walkthrough is an informal meeting for evaluation or informational purposes, requiring little or no preparation.

Pair Programming: two developers work together on the same program at a single workstation, continuously reviewing their work.

Peer Desk Check: only one person besides the author examines the work product. It is an informal review, in which the reviewer can use defect checklists and analysis methods to increase the effectiveness.

Passaround: a multiple, concurrent peer desk check in which several people are invited to provide comments on the product.

25. Explain Compatibility Testing with an example?

Compatibility testing evaluates the application's compatibility with the computing environment: the operating system, the database, browser compatibility, backwards compatibility, the computing capacity of the hardware platform, and compatibility of peripherals.
Example: if compatibility testing is done on a game application, then before the game is installed on a computer, its compatibility is checked against the computer's specification, i.e. whether the game is compatible with a computer having that specification.

26. What is a Traceability Matrix?

A traceability matrix is a document used for tracking requirements, test cases and defects. It is prepared to satisfy the client that the coverage is complete end to end. The document consists of the Requirement/Baseline document reference number, the Test Case/Condition, and the Defect/Bug ID. Using this document a person can trace a requirement from its defect ID.

27. Explain Boundary Value testing and Equivalence testing with some examples?

Boundary value testing is a technique to check whether the application accepts the expected range of values and rejects values which fall outside that range.
Example: a user ID text box has to accept alphabetic characters (a-z) with a length of 4 to 10 characters. BVA is done like this: max = 10, pass; max - 1 = 9, pass; max + 1 = 11, fail; min = 4, pass; min + 1 = 5, pass; min - 1 = 3, fail. Likewise we check the corner values and conclude whether the application is accepting the correct range of values.

Equivalence testing is normally used to check the type of the input.
Example: a user ID text box has to accept alphabetic characters (a-z) with a length of 4 to 10 characters. In the positive condition we test the field by giving alphabetic characters only (a-z) and check that the field accepts the value: it should pass. In the negative condition we test by giving anything other than a-z, e.g. A-Z, 0-9, blank, etc.: it should fail.

28. What is Security Testing?

Security testing is the process that determines that confidential data stays confidential; in other words, it tests how well the system protects against unauthorized internal or external access, willful damage, etc. This process involves functional testing, penetration testing and verification.

29. What is Installation Testing?

Installation testing is done to verify whether the hardware and software are installed and configured properly. It ensures that all the system components are exercised during the testing process. Installation testing also covers testing with high volumes of data and error messages, as well as security testing.

30. What is AUT?

AUT is nothing but "Application Under Test". After the designing and coding phases of the software development life cycle, when the application comes in for testing, it is referred to as the Application Under Test.

31. What is Defect Leakage?

Defect leakage occurs at the customer or end user side after the application has been delivered. If, after the release of the application to the client, the end user finds any defects while using the application, it is called defect leakage. Defect leakage is also called bug leakage.

32. What are the contents of an effective Bug Report?

- Project
- Subject
- Description
- Summary
- Detected By (name of the tester)
- Assigned To (name of the developer who is supposed to fix the bug)
- Test Lead (name)
- Detected in Version
- Closed in Version
- Date Detected
- Expected Date of Closure
- Actual Date of Closure
- Priority (Medium, Low, High, Urgent)
- Severity (ranges from 1 to 5)
- Status

- Bug ID
- Attachment
- Test Case Failed (the test case that failed for the bug)

33. What are Error Guessing and Error Seeding?

Error Guessing is a test case design technique in which the tester guesses what faults might occur and designs tests to expose them.

Error Seeding is the process of intentionally adding known faults to a program in order to monitor the rate of their detection and removal, and to estimate the number of faults remaining in the program.

34. What is Ad-hoc Testing?

Ad-hoc testing is testing the application without following any rules or test cases. For ad-hoc testing one should have strong knowledge of the application.

35. What are the basic solutions for software development problems?

- Basic requirements: develop clear, detailed, complete, achievable, testable requirements. Use prototypes to help pin down requirements. In agile environments, continuous and close coordination with customers/end users is needed.
- Realistic schedules: allow enough time to plan, design, test, fix bugs, re-test, change and document within the given schedule.
- Adequate testing: start testing early, re-test after bugs are fixed or changes are made, and allow enough time for testing and bug fixing.
- Proper study of the initial requirements: be ready to accommodate further changes after development has begun, and be ready to explain the changes made to others. Work closely with customers and end users to manage expectations; this avoids excessive changes in the later stages.
- Communication: conduct frequent inspections and walkthroughs at appropriate intervals; ensure that information and documentation are available and up to date, preferably in electronic form. Emphasize teamwork and cooperation inside the team; use prototypes and proper communication with the end users to clarify their doubts and expectations.

36. What are the common problems in the software development process?

- Inadequate requirements from the client: if the requirements given by the client are unclear, unfinished and not testable, problems will arise.
- Unrealistic schedules: sometimes too much work is given to the developer with too short a duration to complete it; then problems are unavoidable.
- Insufficient testing: problems arise when the developed software has not been tested properly.
- Additional work under the existing process: requests from higher management to work on another project or task bring problems when the project is being tested as a team.
- Miscommunication: in some cases the developer is not informed about the client's requirements and expectations, leading to deviations.

37. What is the difference between Software Testing and Quality Assurance (QA)?

Software Testing involves operating a system or application under controlled conditions and evaluating the results. It is oriented towards 'detection'.

Quality Assurance (QA) involves the entire software development PROCESS: monitoring and improving the process, making sure that agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented towards 'prevention'.

38. How to test a water bottle?

Note: before generating test ideas on how to test a water bottle, I would like to ask a few questions:
- Is the bottle made of glass, plastic, rubber, some metal, some kind of disposable material, or anything else?
- Is it meant only for water, or can it be used with other fluids like tea, coffee, soft drinks, hot chocolate, soup, wine, cooking oil, vinegar, gasoline, acids, molten lava (!) etc.?

Who is going to use this bottle? A school going kid, a housewife, some beverage manufacturing company, an officegoer, a sports man, a mob protesting in a rally (going to use as missiles), an Eskimo living in an igloo or an astronaut in a space ship? These kinds of questions may allow a tester to know a product (that he is going to test) in a better way. In our case, I am assuming that the water bottle is in form of a pet bottle and actually made up off either plastic or glass (there are 2 versions of the product) and is intended to be used mainly with water. About the targeted user, even the manufacturing company is not sure about them! (Sounds familiar! When a software company develops a product without clear idea about the users who are going to use the software!) Test Ideas Check the dimension of the bottle. See if it actually looks like a water bottle or a cylinder, a bowl, a cup, a flower vase, a pen stand or a dustbin! [Build Verification Testing!] See if the cap fits well with the bottle.[Installability Testing!] Test if the mouth of the bottle is not too small to pour water. [Usability Testing!] Fill the bottle with water and keep it on a smooth dry surface. See if it leaks. [Usability Testing!] Fill the bottle with water, seal it with the cap and see if water leaks when the bottle is tilted, inverted, squeezed (in case of plastic made bottle)! [Usability Testing!] Take water in the bottle and keep it in the refrigerator for cooling. See what happens. [Usability Testing!] Keep a water-filled bottle in the refrigerator for a very long time (say a week). See what happens to the water and/or bottle. [Stress Testing!] Keep a water-filled bottle under freezing condition. See if the bottle expands (if plastic made) or breaks (if glass made). [Stress Testing!] Try to heat (boil!) water by keeping the bottle in a microwave oven! [Stress Testing!] Pour some hot (boiling!) water into the bottle and see the effect. [Stress Testing!] 
Keep a dry bottle for a very long time. See what happens; check whether any physical or chemical deformation occurs to the bottle.
Test the water after keeping it in the bottle and see if there is any chemical change. See if it is still safe to consume as drinking water.
Keep water in the bottle for some time and see if the smell of the water changes.
Try using the bottle with different types of water (hard and soft water). [Compatibility Testing!]
Try to drink water directly from the bottle and see if it is comfortable to use, or whether water spills while doing so. [Usability Testing!]
Test whether the bottle is ergonomically designed and comfortable to hold. Also see whether the center of gravity of the bottle stays low (both when empty and when filled with water) so that it does not topple over easily.
Drop the bottle from a reasonable height (say the height of a dining table) and see if it breaks (both the plastic and glass models). A glass bottle will break in most cases; see whether it breaks into tiny pieces (which are often difficult to clean up) or into large pieces (which can be cleaned up without much difficulty). [Stress Testing!] [Usability Testing!]
Run the above test idea both with empty bottles and with bottles filled with water. [Stress Testing!]
Test whether the bottle is made of recyclable material. In the case of the plastic bottle, test whether it is easily crushable.
Test whether the bottle can also be used to hold other common household liquids like honey, fruit juice, fuel, paint, turpentine, liquid wax, etc. [Capability Testing!]

39. What is Portlet Testing?
The following features should be concentrated on while testing a portlet:
i. Test alignment/size display with multiple style sheets and portal configurations. When you configure a portlet object in the portal, you must choose from the following alignments:
a. Narrow portlets are displayed in a narrow side column on the portal page.
Narrow portlets must fit in a column less than 255 pixels wide.
b. Wide portlets are displayed in the middle or widest side column on the portal page. Wide portlets must fit in a column less than 500 pixels wide.
ii. Test all links and buttons within the portlet display. (If there are errors, check that all forms and functions are uniquely named, and that the preference and gateway settings are configured correctly in the portlet web service editor.)
iii. Test setting and changing preferences. (If there are errors, check that the preferences are uniquely named and that the preference and gateway settings are configured correctly in the portlet web service editor.)
iv. Test communication with the backend application. Confirm that actions executed through the portlet are completed correctly. (If there are errors, check the gateway configuration in the portlet web service editor.)
v. Test localized portlets in all supported languages. (If there are errors, make sure that the language files are installed correctly and are accessible to the portlet.)
vi. If the portlet displays secure information or uses a password, use a tunnel tool to confirm that secure information is not sent or stored in clear text.
vii. If backwards compatibility is supported, test portlets in multiple versions of the portal.

40. What is Equivalence Partitioning?
Equivalence partitioning is a method for deriving test cases. Classes of input conditions, called equivalence classes, are identified such that each member of a class causes the same kind of processing and output to occur. The tester identifies the various equivalence classes for partitioning. A class is a set of input conditions that are likely to be handled the same way by the system: if the system handles one case in the class erroneously, it will handle all cases in the class erroneously.

41. Why Learn Equivalence Partitioning?
Equivalence partitioning drastically cuts down the number of test cases required to test a system reasonably. It is an attempt to get a good 'hit rate': to find the most errors with the smallest number of test cases.
DESIGNING TEST CASES USING EQUIVALENCE PARTITIONING
To use equivalence partitioning, you will need to perform two steps.
1. Identify the equivalence classes.
2. Design test cases.
STEP 1: IDENTIFY EQUIVALENCE CLASSES
Take each input condition described in the specification and derive at least two equivalence classes for it: one class representing the set of cases which satisfy the condition (the valid class) and one representing cases which do not (the invalid class).
Following are some general guidelines for identifying equivalence classes:
a) If the requirements state that a numeric value input to the system must be within a range of values, identify one valid class (inputs within the valid range) and two invalid equivalence classes (inputs which are too low and inputs which are too high). For example, if an item in inventory can have a quantity of -9999 to +9999, identify the following classes:
One valid class: QTY is greater than or equal to -9999 and less than or equal to 9999, written as (-9999 <= QTY <= 9999).
One invalid class: QTY is less than -9999, written as (QTY < -9999).
One invalid class: QTY is greater than 9999, written as (QTY > 9999).
b) If the requirements state that the number of items input to the system at some point must lie within a certain range, specify one valid class where the number of inputs is within the valid range, one invalid class where there are too few inputs, and one invalid class where there are too many inputs.

42. What are two types of Metrics?
Process metrics: Primary metrics are also called process metrics. These are the metrics that Six Sigma practitioners care about and can influence. Primary metrics are almost always a direct output characteristic of a process: a measure of the process, not a measure of a high-level business objective. Primary process metrics are usually process defects, process cycle time, and process consumption.

Product metrics: Product metrics quantitatively characterize some aspect of the structure of a software product, such as a requirements specification, a design, or source code.

43. What is the outcome of testing?
A stable application, performing its task as expected.

44. Why do you go for White box testing, when Black box testing is available?
Black box testing aims at a benchmark that certifies the commercial (business) aspects as well as the functional (technical) aspects of an application. Loops, structures, arrays, conditions, files, etc. are very micro-level things, but they are the foundation of any application, so white box testing takes these things up at that level and tests them.

45. What is a Baseline document? Can you name any two?
A baseline document is one from which the tester builds an understanding of the application before starting actual testing. Two examples are the Functional Specification and the Business Requirement Document.

46. Name some testing types which you have learnt or experienced.
Mentioning any 5 or 6 types relevant to the company's profile is good in an interview:
Ad-hoc testing
Cookie testing
CET (Customer Experience Test)
Depth test
Event-driven
Performance testing
Recovery testing
Sanity test
Security testing
Smoke testing
Web testing

47. What exactly is the Heuristic checklist approach for unit testing?
It is a method in which the most appropriate of several solutions found by alternative methods is selected at successive stages of testing. The checklist prepared to proceed this way is called a heuristic checklist.

48. What is a Data Guideline?
Data guidelines are used to specify the data required to populate the test bed and to prepare test scripts. They include all the data parameters required to test the conditions derived from the requirement/specification. The document which supports preparing test data is called a data guideline.

49. Why do you go for a Test Bed?
When a test condition is executed, its result should be compared to the expected test result, and test data is needed for this. This is where the test bed comes in: it is where the test data is made ready.

50. Why do we prepare test conditions, test cases, and test scripts before starting testing?
These are the test design documents used to execute the actual testing; without them, execution of testing is impossible. This execution is ultimately what finds the bugs to be fixed, so we have to prepare these documents.

51. Is it not a waste of time to prepare test conditions, test cases, and test scripts?
No document prepared in any process is a waste of time, least of all the test design documents, which play a vital role in test execution. They can never be called a waste of time, because proper testing cannot be done without them.
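The test bed and test data ideas in questions 48-51 can be sketched in code. In this minimal illustration (the Inventory class and its data are invented; the valid range reuses the QTY example from question 41), unittest's setUp() plays the role of the test bed by making known test data ready before every test:

```python
import unittest

class Inventory:
    """Hypothetical system under test (invented for illustration)."""
    def __init__(self):
        self._items = {}

    def set_qty(self, name, qty):
        # Valid range borrowed from the QTY example in question 41.
        if not -9999 <= qty <= 9999:
            raise ValueError("QTY out of range")
        self._items[name] = qty

    def get_qty(self, name):
        return self._items[name]

class InventoryTest(unittest.TestCase):
    def setUp(self):
        # The test bed: known test data is made ready before every test,
        # so each actual result can be compared to an expected result.
        self.inv = Inventory()
        self.inv.set_qty("bolts", 100)

    def test_valid_class(self):
        # A representative of the valid class (-9999 <= QTY <= 9999).
        self.assertEqual(self.inv.get_qty("bolts"), 100)

    def test_invalid_class_too_high(self):
        # A representative of the invalid class (QTY > 9999).
        with self.assertRaises(ValueError):
            self.inv.set_qty("bolts", 10000)
```

Running `python -m unittest` discovers the test case; setUp() rebuilds the test bed for each test method, so no test depends on another test's leftover data.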

52. How do you go about testing a Web Application?
When approaching web application testing, the first attack on the application should be on its performance behavior, as that is very important for a web application, and then on the transfer of data between the web server and the front-end server, the security server, and the back-end server.

53. What kind of document do you need for Functional testing?
The Functional Specification is the ultimate document: it expresses all the functionalities of the application. Other documents, like the user manual and the BRS, are also needed for functional testing. A gap analysis document adds value in understanding the expected versus the existing system.

54. Can System testing be done at any stage?
No. The system as a whole can be tested only if all modules are integrated and all modules work correctly. System testing should be done after unit and integration testing and before UAT (User Acceptance Testing).

55. What is Mutation testing, and when can it be done?
Mutation testing is a powerful fault-based testing technique for unit-level testing. Since it is a fault-based technique, it is aimed at uncovering specific kinds of faults, namely simple syntactic changes to a program. Mutation testing is based on two assumptions: the competent programmer hypothesis and the coupling effect. The competent programmer hypothesis assumes that competent programmers tend to write nearly "correct" programs. The coupling effect states that a set of test data that can uncover all simple faults in a program is also capable of detecting more complex faults. Mutation testing injects faults into code to determine optimal test inputs.

56. Why is it impossible to test a program completely?
With any software other than the smallest and simplest program, there are too many inputs, too many outputs, and too many path combinations to fully test. Also, software specifications can be subjective and be interpreted in different ways.

57. How will you review a test case, and how many types of review are there?
There are 2 broad types of review:
Informal review: done by the technical lead.
Peer review: done by a peer in the same organization (e.g. a walkthrough or technical inspection).
Reviews can also be classified as:
Management review
Technical review
Code review
Formal review (inspections and audits)
Informal review (peer review and code review), which also covers walkthroughs.
Objectives of reviews:
To find defects in requirements.
To find defects in design.
To identify deviations in any process, and to provide valued suggestions to improve the process.

58. What do you mean by Pilot Testing?
Pilot testing involves having a group of end users try the system prior to its full deployment in order to give feedback on its features and functions.

Or: Pilot testing is a testing activity that resembles the production environment. It is done exactly between UAT and the production drop: a few users simulate the production environment and carry out business activity with the system, checking the major functionality of the system before it goes into production. This is basically done to avoid high-level disasters. The priority of pilot testing is high, and issues raised in pilot testing have to be fixed as soon as possible.

59. What are SRS and BRS in manual testing?
The BRS is the Business Requirement Specification: the client who wants the application built gives this specification to the software development organization, which then converts it into an SRS (Software Requirement Specification) according to the needs of the software.

60. What are Smoke Testing and Sanity Testing, and when do we use them?
Smoke testing: done to make sure the build we got is testable, i.e. to check the testability of the build; also called the "day 0" check. Done at the build level.
Sanity testing: done during the release phase to check the main functionalities without going deeper; sometimes called a subset of regression testing. When no rigorous regression testing is done on the build, sanity testing covers that part by checking the major functionalities. Done at the release level.

61. What is debugging?
Debugging is finding and removing "bugs" which cause the program to respond in a way that is not intended.

62. What is determination?
Determination has different meanings in different situations. Determination means a strong or fixed intention to achieve a specific purpose. As a core value, determination means having strong willpower in order to achieve a task in life. It means a strong sense of self-devotion and self-commitment in order to achieve or perform a given task.
People who are determined to achieve their objectives are known to succeed highly in various walks of life. In another sense, it can also mean calculating, ascertaining, or realizing a specific amount, limit, character, etc.; the result of such ascertaining; defining a certain concept; or reaching a particular decision and firmly achieving its purpose.

63. What is the exact difference between Debugging and Testing?
Testing is finding an error/problem, and it is done by testers; debugging is finding the root cause of the error/problem, and it is taken care of by developers.
Or: Debugging is removing the bug, and is done by the developer. Testing is identifying the bug, and is done by the tester.

64. What is the fish model? Can you explain it?
The fish model explains the mapping between the different stages of development and testing.
Phase 1: Information gathering takes place, and the BRS document is prepared.
Phase 2: Analysis takes place.

During this phase, the development people prepare the SRS document, which is a combination of the functional requirement specification and the system requirement specification. During this phase, the testing people carry out reviews.
Phase 3: Design phase. Here the HLD and LLD (high-level design and low-level design documents) are prepared by the development team, and the testing people go for prototype reviews.
Phase 4: Coding phase. Coding starts, and white box testing is conducted by the testing team.
Phase 5: Testing phase. Black box testing is carried out by the black box test engineers.
Phase 6: Release and maintenance.

65. What is Conformance Testing?
The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

66. What is Context-Driven Testing?
The context-driven school of software testing is a flavor of agile testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

67. What is End-to-End testing?
Similar to system testing, it is the 'macro' end of the test scale: testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

68. When should testing end?
Testing is a never-ending process, but certain factors cause it to terminate: most of the tests have been executed, the project deadline or test budget has been reached, or the bug rate falls below the agreed criteria.

69. What is Parallel/Audit Testing?
Testing where the user reconciles the output of the new system with the output of the current system to verify that the new system performs the operations correctly.

70. What are the roles of glass-box and black-box testing tools?
Black-box testing: It is not based on knowledge of internal design or code; tests are based on requirements and functionality. Black box testing is used to find errors in the following areas:
Interface errors
Performance errors
Initialization errors
Incorrect or missing functionality
Errors while accessing an external database
Glass-box testing: It is based on the internal design of the application code; tests are based on path coverage, branch coverage, and statement coverage. It is also known as white box testing. White box test cases can check that:
All independent paths within a module are executed at least once
All loops are executed
All logical decisions are exercised
Internal data structures are exercised to ensure their validity

71. What is your experience with change control? Our development team has only 10 members. Do you think managing change is such a big deal for us?
Whenever modifications happen to the actual project, all the corresponding documents are adapted to the new information, so as to keep the documents in sync with the product at any point of time.

72. What is GAP ANALYSIS?
Gap analysis can be done with a traceability matrix, which means tracking down each individual requirement in the SRS to the various work products.

73. How do you know when your code has met specifications?
With the help of a traceability matrix: all the requirements are tracked to test cases, and when all the test cases have been executed and passed, that is an indication that the code has met the requirements.

74. At what stage of the life cycle does testing begin, in your opinion?
Testing is a continuous process, and it starts as soon as the requirements for the project/product begin to be framed. In the requirements phase, testing checks whether the project/product details reflect the client's ideas, i.e. whether they give an idea of the complete project from the client's perspective (as he wished it to be).

75. What are the properties of a good requirement?
Requirement specifications are important and one of the most reliable methods of guarding against problems in a complex software project. Requirements are the details describing an application's externally perceived functionality and properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable, and testable.

76.
How do you scope, organize, and execute a test project?
The scope can be defined from the BRS, SRS, FRS, or from functional points; it may be anything provided by the client. Regarding organizing, we need to analyze the functionality to be covered, decide who will test which modules, and weigh the pros and cons of the application. Identifying the number of test cases, resource allocation, and the risks we need to mitigate all come into the picture. Once this is done, it is very easy to execute based on the plan we have chalked out.

77. How would you ensure 100% coverage of testing?
We cannot perform 100% testing on any application, but the criteria to ensure test completion on a project are:
All the test cases have been executed, with a certain percentage passing
The bug rate falls below a certain level
The test budget is depleted
Deadlines (project or test) are reached
All the functionalities are covered by test cases
All critical and high bugs have a status of CLOSED

78. How do you go about testing a web application?
Ideally, to test a web application, the components and functionality on both the client and server side should be tested, but that is practically impossible. The best approach is to examine the project's requirements, set priorities based on risk analysis, and then determine where to focus testing efforts within budget and schedule constraints. To test a web application we need to perform testing for both the GUI and the client-server architecture. Based on factors like project requirements, risk analysis, budget, and schedule, we can determine what kind of testing is appropriate for the project: unit and integration testing, functionality testing, GUI testing, usability testing, compatibility testing, security testing, performance testing, recovery testing, and regression testing.

79. What are your strengths?
I am well motivated and well organized, a good team player, dedicated to my work, and I have a strong desire to succeed; I am always ready and willing to learn new information and skills.

80. When should you begin testing?
For any project, testing activity is there from the start. After requirements gathering, the design documents (high- and low-level) are prepared and tested for conformance to the requirements; during coding, white box testing is done; after the build or system is ready, integration testing followed by functional testing is done, until the product or project is stable. Once the product or project is stable, testing stops.

81. When should you begin test planning?
Test planning is done by the test lead. It begins when the TRM is finalized by the project manager and handed over to the test lead. The test lead then has these responsibilities:
Testing team formation
Identifying tactical risks
Preparing the test plan
Reviews of the test plans

82. Would you like to work in a team or alone? Why?
I would like to work in a team.
The process of software development is like a relay race where many runners have to contribute in their respective laps. Teamwork is important because the complexity of the work and the degree of effort required are beyond the level of any individual.

83. When should testing start in a project? Why?
Testing is a continuous activity carried out at every stage of the project. You first test everything that you get from the client. As a (technical) tester, my work starts as soon as the project starts.

84. Have you ever created a test plan?
This is just a sample answer: "I have never created a test plan myself. I developed and executed test cases, but I was actively involved with my team leader while creating test plans."

85. Define quality for me as you understand it.
Quality software is software that is reasonably bug-free, delivered on time and within budget, meets the requirements and expectations, and is maintainable.

86. What is the role of QA in a development project?
The Quality Assurance group assures quality by monitoring the whole development process. Its main concentration is on the prevention of bugs. It sets standards, introduces review procedures, and educates people in better ways to design and develop products.

87. How involved were you with your Team Lead in writing the Test Plan?
To my knowledge, team members are usually out of scope while the test plan is being prepared; the test plan is a higher-level document for the testing team. It includes purpose, scope, customer/client scope, schedule, hardware, deliverables, test cases, etc., and is derived from the PMP (Project Management Plan). The team members' part is to go through the test plan so that they know their responsibilities and the deliverables for their modules. The test plan is an input document for the whole testing team as well as the test lead.

88. What processes/methodologies are you familiar with?
The Spiral methodology and the Waterfall methodology, which are the two older methods; the Rational Unified Process (from IBM); and Rapid Application Development (from Microsoft).

89. What is globalization testing?
The goal of globalization testing is to detect potential problems in application design that could inhibit globalization. It makes sure that the code can handle all international support without breaking functionality that would cause either data loss or display problems.

90. What is baselining?
Baselining is the process by which the quality and cost-effectiveness of a service is assessed, usually in advance of a change to the service. Baselining usually includes comparison of the service before and after the change, or analysis of trend information. (The term benchmarking is normally used if the comparison is made against other enterprises.) For example, if a company has different projects, there will be a separate test plan for each project.
These test plans should be accepted by peers in the organization after modifications. The modified test plan becomes the baseline for the testers to use in their projects; whenever further modifications are made, the newly modified plan becomes the new baseline, because the test plan is the basis for running the testing project.

91. Define each of the following and explain how each relates to the others: Unit, System, and Integration testing.
Unit testing: testing each unit (program) by itself.
Integration testing: the integration of some units is called a module; the test on these modules is called integration testing (module testing).
System testing: this is a bottleneck stage of the project. It is done after the integration of all modules, to check whether the build meets all the requirements of the customer.
Unit and integration testing are white box testing, which can be done by programmers. System testing is black box testing, which can be done by people who do not know programming. The hierarchy of this testing is: unit testing, then integration testing, then system testing.
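The unit and integration levels in question 91 can be shown concretely. In this sketch (both functions are invented for illustration), each unit is first tested on its own, and then the pair is tested through the interface between them:

```python
# Two invented units and their integration (all names are illustrative).
def parse_amount(text):
    """Unit 1: convert a price string like '12.50' into cents."""
    return round(float(text) * 100)

def apply_discount(cents, percent):
    """Unit 2: subtract a whole-percent discount."""
    return cents - cents * percent // 100

# Unit testing: each unit (program) is checked by itself.
assert parse_amount("12.50") == 1250
assert apply_discount(1000, 10) == 900

# Integration testing: the two units form a module, and the test
# exercises the interface between them.
assert apply_discount(parse_amount("12.50"), 20) == 1000
```

System testing would then sit one level above this: running the whole assembled build against the customer's requirements rather than against individual functions.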

92. Who should you hire in a testing group and why?
Testing is an interesting part of the software cycle, and it is responsible for providing a quality product to the customer. It involves finding bugs, which is difficult and challenging, so hire people who want to be part of a testing group for exactly that reason.

93. What do you think the role of test-group manager should be? Relative to senior management? Relative to other technical groups in the company? Relative to your staff?
The roles of a test-group manager include tracking:
Defect find and close rates by week, normalized against level of effort (are we finding defects, and can developers keep up with the number found and the ones necessary to fix?)
Number of tests planned, run, and passed by week (do we know what we have to test, and are we able to do so?)
Defects found per activity vs. total defects found (which activities find the most defects?)
Schedule estimates vs. actuals (will we make the dates, and how well do we estimate?)
People on the project, planned vs. actual, by week or month (do we have the people we need when we need them?)
Major and minor requirements changes (do we know what we have to do, and does it change?)

94. What criteria do you use when determining whether to automate a test or leave it manual?
Time and budget are both key factors in determining whether a test stays manual or is automated. Apart from that, automation is required for areas such as functional, regression, load, and user interface testing, for accurate results.

95. How do you analyze your test results? What metrics do you try to provide?
Test results are analyzed to identify the major causes of defects and the phase that introduced most of the defects. This can be achieved through cause/effect analysis or Pareto analysis. Analysis of test results can provide several test metrics, where metrics are measures that quantify the software, the software development resources, and the software development process.
A few metrics which we can provide are:
Defect density = total number of defects reported during testing / size of the project
Test effectiveness = t / (t + uat), where t is the total number of defects recorded during testing and uat is the total number of defects recorded during user acceptance testing
Defect removal efficiency (DRE) = (total number of defects removed / total number of defects injected) * 100

96. How do you perform regression testing?
Regression testing is carried out both manually and with automation. Automated tools are mainly used for regression testing, since it consists of repeatedly testing the same application after it has changed: new functionality, fixes for previous bugs, changes in the design, etc. Regression testing involves executing the test cases which we ran to find the original defects. Whenever any change takes place in the application, we should make sure the previous functionality is still available without any break; for this reason, we do regression testing on the application by running/executing the previously written test cases.

97. Describe to me when you would consider employing a failure mode and effects analysis.
FMEA (Failure Mode and Effects Analysis) is a proactive tool, technique, and quality method that enables the identification and prevention of process or product errors before they occur. It is a disciplined approach used to identify possible failures of a product or service and then determine the frequency and impact of the failure.

98. What is UML and how do you use it for testing?

The Unified Modeling Language is a third-generation method for specifying, visualizing, and documenting the artifacts of an object-oriented system under development. From the inside, the Unified Modeling Language consists of three things:
A formal metamodel
A graphical notation
A set of idioms of usage

99. What will you do during your first day on the job?
In my present company, HR introduced me to my colleagues, and I learned the following things: the organization structure; the current project under development, its domain, etc.; whom I have to report to; and my other responsibilities.

100. What is the IEEE? Why is it important?
It is an organization of engineers, scientists, and students involved in electrical, electronics, and related fields. It is important because it functions as a publishing house and standards-making body.

101. Define Verification and Validation. Explain the differences between the two.
Verification: evaluation done at the end of a phase to determine that the requirements established during the previous phase have been met. Generally, verification refers to the overall software evaluation activity, including reviewing, inspecting, checking, and auditing.
Validation: the process of evaluating software at the end of the development process to ensure compliance with the requirements. Validation typically involves actual testing and takes place after verification is complete.
Or: Verification asks "are we building the product right?"; Validation asks "are we building the right product/system?"

102. Describe a past experience with implementing a test harness in the development of software.
Harness: an arrangement of straps for attaching a horse to a cart. A test harness is a class of tool that supports the processing of tests by making it almost painless to:
Install a candidate program in a test environment
Feed it input data
Simulate, via stubs, the behavior of subsidiary modules

103.
What criteria do you use when determining whether to automate a test or leave it manual?
Time and budget are both key factors in determining whether a test stays manual or is automated. Apart from that, automation is required for areas such as functional, regression, load, and user interface testing, for accurate results.

104. What would you like to be doing five years from now?
I would like to be in a managerial role, ideally working closely with external clients. I have worked in client-facing roles for more than two years and I enjoy the challenge of keeping the customer satisfied; I think it's something I'm good at. I would also like to take on additional responsibility within this area, and possibly in other areas. Finally, I'd like to be on the right career path towards eventually becoming a senior manager within the company. I'm very aware that these are ambitious goals, but I feel that through hard work and dedication they are quite attainable.

105. Define each of the following and explain how each relates to the others: Unit, System, and Integration testing.

Unit testing comes first and is performed by a developer. Integration testing comes next, performed by a tester. System testing comes last, also performed by a tester.

106. What is IEEE? Why is it important?
The Institute of Electrical and Electronics Engineers: an organization of engineers, scientists, and students involved in electrical, electronics, and related fields. It also functions as a publishing house and standards-making body.

107. What is the role of QA in a company that produces software?
The role of QA is to help produce quality software and to ensure that it meets all of its customers' requirements before the product is delivered.

108. How would you build a test team?
Building a test team involves weighing a number of factors. First, consider the complexity of the application or project to be tested; next, the time allotted and the levels of testing to be performed. With these parameters in mind, decide on the skills and experience level of your testers and how many testers you need.

109. In an application currently in production, one module of code is being modified. Is it necessary to re-test the whole application, or is it enough to just test functionality associated with that module?
It depends on the functionality related to that module. We need to check whether the module is interrelated with other modules; if it is, we need to test those related modules too. If it is an independent module, there is no need to test the other modules.

110. What are ISO standards? Why are they important?
ISO 9000 specifies requirements for a Quality Management System overseeing the production of a product or service. It is not a standard for ensuring a product or service is of quality; rather, it attests to the process of production and how it will be managed and reviewed. A few examples:
ISO 9000:2000 Quality management systems. Fundamentals and vocabulary
ISO 9000-1:1994 Quality management and quality assurance standards.
Guidelines for selection and use
ISO 9000-2:1997 Quality management and quality assurance standards. Generic guidelines for the application of ISO 9001, ISO 9002 and ISO 9003
ISO 9000-3:1997 Quality management and quality assurance standards. Guidelines for the application of ISO 9001:1994 to the development, supply, installation and maintenance of computer software
ISO 9001:1994 Quality systems. Model for quality assurance in design, development, production, installation and servicing
ISO 9001:2000 Quality management systems. Requirements

111. What is the Waterfall Development Method, and do you agree with all the steps?
The waterfall approach is a traditional approach to software development. It works if the project is small and not complex; real-time projects need a spiral methodology as the SDLC. Some product-based development can follow waterfall if it is not complex, and production cost is lower when the waterfall method is followed.

112. What is migration testing?
Testing conducted after an application, or its version, has been changed is migration testing: testing of the programs or procedures used to convert data from existing systems for use in replacement systems.

113. Explain basic testing terminology: why testing is necessary, the fundamental test process, and the psychology of testing.
Testing terminology:
Error: a human action that produces an incorrect result.

Fault: a manifestation of an error in software.
Failure: a deviation of the software from its expected delivery or service.
Reliability: the probability that the software will not cause the failure of the system for a specified time under specified conditions.
Why testing is necessary:
Testing is necessary because software is likely to have faults in it, and it is better (cheaper, quicker, and more expedient) to find and remove these faults before the software is put into live operation. Failures that occur during live operation are much more expensive to deal with than failures that occur during testing prior to release. Other consequences of a system failing in live operation include the possibility of the software supplier being sued by its customers. Testing is also necessary so we can learn about the reliability of the software, that is, how likely it is to fail within a specified time under specified conditions.

114. What is UAT testing? When is it done?
UAT stands for User Acceptance Testing. It is carried out from the user's perspective, usually before a release, by the end users along with testers, to validate the functionality of the application. It is also called pre-production testing.

115. How do you find out whether tools work well with your existing system?
We need to do market research on the various tools, depending on the type of application we are testing. Say we are testing an application built in VB with an Oracle database; then WinRunner is going to give good results. But in some cases it may not, for instance if your application uses a lot of third-party grids and modules that have been integrated into it. So it depends on the type of application you are testing. We also need to know what sort of testing will be performed.
If you need to test performance, you cannot use a record-and-playback tool; you need a performance testing tool such as LoadRunner.

116. What is the difference between a test strategy and a test plan?
Test plan: a document that plans the testing; it defines the scope, approach, and environment.
Test strategy: not a document but a framework for making decisions about what is worth testing and how.

117. What are scenarios in terms of testing?
A test scenario is a set of test cases that ensures the business process flows are tested from end to end. It may consist of independent tests or a series of tests that follow one another, each dependent on the output of the previous one. The terms test scenario and test case are often used synonymously.

118. Explain the differences between White-box, Gray-box, and Black-box testing.
Black-box testing: tests are based on requirements and functionality, not on any knowledge of the internal design or code.
White-box testing: tests are based on coverage of code statements, branches, paths, and conditions, using knowledge of the internal logic of the application's code.
Gray-box testing: a combination of the black-box and white-box methodologies, testing a piece of software against its specification but using some knowledge of its internal workings.

119. What is structural and behavioral testing?
Structural testing is essentially the testing of the code itself, which is called white-box testing.
Behavioral testing, also called functional testing, tests the functionality of the software; this kind of testing is called black-box testing.
Structural testing:

It is a white-box testing technique: since the testing is based on the internal structure of the program or code, it is called structural testing.
Behavioral testing: a black-box testing technique. Since the testing is based on the external behavior and functionality of the system or application, it is called behavioral testing.

120. How does unit testing play a role in the development / software lifecycle?
We can catch simple bugs, such as GUI issues and small functional bugs, during unit testing. This reduces testing time and saves project time overall. If the developer doesn't catch these bugs, they reach integration testing, and a bug caught there by a tester has to go through the whole bug life cycle, which consumes a lot of time.

121. What made you pick testing over another career?
Testing is one aspect of the Software Development Life Cycle (SDLC) that is very important. I like being part of the team responsible for the quality of the application being delivered. QA also offers broad opportunities and large scope for learning various technologies, and arguably more opportunities than development.
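The point in question 120, that unit testing catches small functional bugs before they reach integration, can be sketched with a plain-assert unit test. The `apply_discount` function and its rules are invented for illustration.

```python
# Unit-level checks that catch a small functional bug before integration.
# apply_discount is a made-up example function, not from any real project.

def apply_discount(price, percent):
    """Return price reduced by the given percentage (0-100)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

def test_apply_discount():
    assert apply_discount(200.0, 10) == 180.0   # typical value
    assert apply_discount(200.0, 0) == 200.0    # boundary: no discount
    assert apply_discount(200.0, 100) == 0.0    # boundary: full discount
    try:
        apply_discount(200.0, 101)              # invalid input rejected
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for percent > 100")

test_apply_discount()
print("unit tests passed")
```

An off-by-one in the range check or a sign error in the formula fails here in seconds, instead of surfacing later as a defect report that must travel the full bug life cycle.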

Q - What test case formats are widely used in web-based testing?
A - Web-based applications deal with live web portals, so the test cases can be broadly classified as front-end, back-end, security, navigation-based, field-validation, and database-related cases. The test cases are written based on the functional specifications and wire-frames.
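The field-validation category mentioned above is often written as a data-driven table of inputs and expected verdicts. A minimal sketch, assuming an invented username rule (3-12 characters, letters, digits, or underscore); a real project would take the rule from the functional specification:

```python
import re

# Data-driven field-validation cases for one web-form field.
# The username rule below is an assumption made for illustration.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,12}$")

def is_valid_username(value):
    """Apply the (assumed) front-end validation rule."""
    return bool(USERNAME_RE.match(value))

# Each row: (input value, expected verdict) - typical, boundary, negative.
field_cases = [
    ("bob", True),         # minimum length boundary
    ("ab", False),         # below minimum length
    ("a" * 12, True),      # maximum length boundary
    ("a" * 13, False),     # above maximum length
    ("user name", False),  # illegal character (space)
]

for value, expected in field_cases:
    assert is_valid_username(value) == expected, value
print("all field-validation cases passed")
```

Keeping the cases in a table like this makes it cheap to add rows when the wire-frame or spec changes, without touching the checking code.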

Q - How do you prepare a test case and test description for a job-application site?
A - The question is somewhat vague; Naukri, for example, is one of the biggest job sites globally and has its own complex functionality. Normally a test case is derived from an SRS or FRS, and the test description is always derived from the test case. The test description is simply the steps that have to be followed for the test case you wrote, and the test case is what compares the expected result against the actual outcome.
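The relationship described above, steps in the description plus a comparison of expected against actual, can be sketched as a simple record. The field names and the login example are illustrative, not taken from any particular tool or specification:

```python
# Sketch of a test case record: the description holds the steps to follow,
# and execution compares the expected result against the actual outcome.
# Field names and the login scenario are invented for illustration.

test_case = {
    "id": "TC-LOGIN-001",
    "title": "Login with valid credentials",
    "steps": [
        "Open the login page",
        "Enter a registered username and password",
        "Click the Login button",
    ],
    "expected": "User lands on the home page",
}

def execute(case, actual):
    """Compare the actual outcome against the expected result."""
    verdict = "PASS" if actual == case["expected"] else "FAIL"
    return {"id": case["id"], "verdict": verdict, "actual": actual}

result = execute(test_case, "User lands on the home page")
print(result["id"], result["verdict"])
```

The same `execute` comparison works for any case derived from the SRS or FRS; only the steps and the expected result change per case.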

Q - What is the difference between functional and technical bugs? Give an example of each.
A - Functional bugs are found when testing the functionality of the AUT. Technical bugs relate to the communication the AUT makes with its environment, such as hardware or a database that cannot be connected to properly.
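One way to see the distinction: a functional bug surfaces as a wrong result from the AUT's own logic, while a technical bug surfaces as a failure in something the AUT talks to. A sketch using the standard library's `sqlite3` to simulate a broken database link (the `total_price` function and the failure labels are invented for illustration):

```python
import sqlite3

# Functional bug: wrong behaviour of the AUT's own logic (AssertionError).
# Technical bug: failure in the environment the AUT communicates with
# (here, a database that cannot be opened).

def total_price(items):
    """Functional logic under test (invented example)."""
    return sum(price for _, price in items)

def classify_failure(check):
    """Run a check and label the kind of failure it produced."""
    try:
        check()
        return "no bug"
    except AssertionError:
        return "functional bug"      # wrong result from the AUT
    except sqlite3.OperationalError:
        return "technical bug"       # DB/connectivity problem

def functional_check():
    # Expected 30; a wrong implementation would raise AssertionError.
    assert total_price([("a", 10), ("b", 20)]) == 30

def technical_check():
    # Connecting to an impossible path simulates a broken DB link.
    sqlite3.connect("/nonexistent_dir/app.db")

print(classify_failure(functional_check))  # no bug
print(classify_failure(technical_check))   # technical bug
```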

Q - Give the proper sequence for the following testing types: Regression, Retesting, Functional, Sanity, and Performance testing.
A - The proper sequence in which these types of testing are performed is: Sanity, Functional, Regression, Retesting, Performance.

Q - How would you test MS Vista without any requirements document?
A - Find out what has changed from the older version of Windows to the newer version with the help of the user release notes shipped with Windows Vista. Based on that, formulate the test cases and execute them.

Q - What is verification? What is validation?
Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.

Q - How can new Software QA processes be introduced in an existing organization?
A lot depends on the size of the organization and the risks involved. For large organizations with high-risk (in terms of lives or property) projects, serious management buy-in is required and a formalized QA process is necessary. Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand. For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications among customers, managers, developers, and testers. The most value for effort will often be in (a) requirements management processes, with a goal of clear, complete, testable requirement specifications embodied in requirements or design documentation, or, in 'agile'-type environments, extensive continuous coordination with end-users, (b) design inspections and code inspections, and (c) post-mortems/retrospectives.

Q - Why is it often hard for management to get serious about quality assurance? Solving problems is a high-visibility process; preventing problems is low-visibility. This is illustrated by an old parable: In ancient China there was a family of healers, one of whom was known throughout the land and employed as a physician to a great lord.

Q - What's an 'inspection'?
An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report.

Q - What is a 'walkthrough'?
A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.

Q - What makes a good test engineer?
A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful. Previous software development experience can be helpful, as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming. Judgment skills are needed to assess the high-risk areas of an application on which to focus testing efforts when time is limited.

Q - What makes a good Software QA engineer?
The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's missing' is important for inspections and reviews.

Q - What is agile testing?
Agile testing is used whenever customer requirements are changing dynamically.

Q - If you have no SRS or BRS but you do have test cases, do you execute the test cases blindly, or do you follow some other process?
A test case contains detailed steps describing what the application is supposed to do, so you can rely on: 1) the functionality of the application as described in the test cases; 2) in addition, you can refer to the back end, i.e. look into the database, to gain more knowledge of the application.
What makes a good QA or Test manager?
A good QA, test, or combined QA/Test manager should:
be familiar with the software development process;
be able to maintain the enthusiasm of their team and promote a positive atmosphere, despite what is a somewhat 'negative' process (e.g., looking for or preventing problems);
be able to promote teamwork to increase productivity;
be able to promote cooperation between software, test, and QA engineers;
have the diplomatic skills needed to promote improvements in QA processes;
have the ability to withstand pressures and say 'no' to other managers when quality is insufficient or QA processes are not being adhered to;
have people-judgment skills for hiring and keeping skilled personnel;
be able to communicate with technical and non-technical people, engineers, managers, and customers;
be able to run meetings and keep them focused.

What's the role of documentation in QA?
Critical. (Note that documentation can be electronic, not necessarily paper.) QA practices should be documented such that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should all be documented. There should ideally be a system for easily finding and obtaining documents and determining what documentation will have a particular piece of information. Change management for documentation should be used if possible.

What's the big deal about 'requirements'?
One of the most reliable ways to cause problems, or failure, in a complex software project is to have poorly documented requirements specifications. Requirements are the details describing an application's externally perceived functionality and properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). A testable requirement would be something like 'the user must enter their previously-assigned password to access the application'. Determining and organizing requirements details in a useful and efficient way can be a difficult effort; different methods are available depending on the particular project, and many books describe various approaches to this task. Care should be taken to involve ALL of a project's significant 'customers' in the requirements process. 'Customers' could be in-house or external personnel, and could include end-users, customer acceptance testers, customer contract officers, customer management, future software maintenance engineers, salespeople, etc. Anyone who could later derail the project if their expectations aren't met should be included if possible. Organizations vary considerably in their handling of requirements specifications. Ideally, the requirements are spelled out in a document with statements such as 'The product shall.....'. 'Design' specifications should not be confused with 'requirements'; design specifications should be traceable back to the requirements. In some organizations requirements may end up in high-level project plans, functional specification documents, design documents, or other documents at various levels of detail.
No matter what they are called, some type of documentation with detailed requirements will be needed by testers in order to properly plan and execute tests. Without such documentation, there will be no clear-cut way to determine whether a software application is performing correctly. 'Agile' methods such as XP use methods requiring close interaction and cooperation between programmers and customers/end-users to iteratively develop requirements. The programmer uses 'test first' development to first create automated unit testing code, which essentially embodies the requirements.

What steps are needed to develop and run software tests?
The following are some of the steps to consider:
Obtain requirements, functional design, and internal design specifications and other necessary documents
Obtain budget and schedule requirements
Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
Identify the application's higher-risk aspects, set priorities, and determine the scope and limitations of tests
Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.
Determine test environment requirements (hardware, software, communications, etc.)
Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
Determine test input data requirements
Identify tasks, those responsible for tasks, and labor requirements
Set schedule estimates, timelines, milestones
Determine input equivalence classes, boundary value analyses, error classes
Prepare test plan document and have needed reviews/approvals
Write test cases
Have needed reviews/inspections/approvals of test cases
Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
Obtain and install software releases
Perform tests
Evaluate and report results
Track problems/bugs and fixes
Retest as needed
Maintain and update test plans, test cases, test environment, and testware through the life cycle

What's a 'test plan'?

A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:
Title
Identification of software including version/release numbers
Revision history of document including authors, dates, approvals
Table of Contents
Purpose of document, intended audience
Objective of testing effort
Software product overview
Relevant related document list, such as requirements, design documents, other test plans, etc.
Relevant standards or legal requirements
Traceability requirements
Relevant naming conventions and identifier conventions
Overall software project organization and personnel/contact-info/responsibilities
Test organization and personnel/contact-info/responsibilities
Assumptions and dependencies
Project risk analysis
Testing priorities and focus
Scope and limitations of testing
Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
Outline of data input equivalence classes, boundary value analysis, error classes
Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
Test environment validity analysis - differences between the test and production systems and their impact on test validity.
Test environment setup and configuration issues
Software migration processes
Software CM processes
Test data setup requirements
Database setup requirements
Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
Test automation - justification and overview
Test tools to be used, including versions, patches, etc.
Test script/test code maintenance processes and version control
Problem tracking and resolution - tools and processes
Project test metrics to be used
Reporting requirements and testing deliverables
Software entrance and exit criteria
Initial sanity testing period and criteria
Test suspension and restart criteria
Personnel allocation
Personnel pre-training needs
Test site/location
Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
Relevant proprietary, classified, security, and licensing issues
Open issues
Appendix - glossary, acronyms, etc.
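Two of the plan items above, equivalence classes and boundary value analysis, can be generated mechanically once an input range is known. A sketch for an integer field, using the classic pick of the values around each boundary plus one nominal value; the 18..65 'age' range is chosen purely for illustration:

```python
# Boundary-value analysis and equivalence classes for a numeric input
# field, as listed in the test plan outline. The age range 18..65 is an
# assumed example, not a requirement from any real specification.

def boundary_values(lo, hi):
    """Return BVA test inputs for an inclusive integer range [lo, hi]:
    each boundary, its neighbours on both sides, and one nominal value."""
    nominal = (lo + hi) // 2
    return sorted({lo - 1, lo, lo + 1, nominal, hi - 1, hi, hi + 1})

def equivalence_classes(lo, hi):
    """One representative per class: below range, in range, above range."""
    return {
        "invalid_low": lo - 10,
        "valid": (lo + hi) // 2,
        "invalid_high": hi + 10,
    }

print(boundary_values(18, 65))
print(equivalence_classes(18, 65))
```

Generating these values from the specified range, rather than hand-picking them per test case, keeps the plan's data-input outline consistent when a boundary in the requirements changes.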

What's a 'test case'?
A test case is a document that describes an input, action, or event and an expected response, to determine whether a feature of an application is working correctly. A test case should contain particulars such as a test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results. Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.

What should be done after a bug is found?
The bug needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing, to check that the fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available. The following are items to consider in the tracking process:
Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary
Bug identifier (number, ID, etc.)
Current bug status (e.g., 'Released for Retest', 'New', etc.)
The application name or identifier and version
The function, module, feature, object, screen, etc.
where the bug occurred
Environment specifics: system, platform, relevant hardware specifics
Test case name/number/identifier
One-line bug description
Full bug description
Description of steps needed to reproduce the bug, if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool
Names and/or descriptions of files/data/messages/etc. used in the test
File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem
Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
Was the bug reproducible?
Tester name
Test date
Bug reporting date
Name of developer/group/organization the problem is assigned to
Description of problem cause
Description of fix
Code section/file/module/class/method that was fixed
Date of fix
Application version that contains the fix
Tester responsible for retest
Retest date
Retest results
Regression testing requirements
Tester responsible for regression tests
Regression testing results
A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.

What is 'configuration management'?
Configuration management covers the processes used to control, coordinate, and track: code, requirements, documentation, problems, change requests, designs, tools/compilers/libraries/patches, changes made to them, and

who makes the changes.

What if the software is so buggy it can't really be tested at all?
The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus on critical bugs. Since this type of problem can severely affect schedules and indicates deeper problems in the software development process (such as insufficient unit testing or integration testing, poor design, improper build or release procedures, etc.), managers should be notified and provided with some documentation as evidence of the problem.

How can it be known when to stop testing?
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
Deadlines (release deadlines, testing deadlines, etc.)
Test cases completed with a certain percentage passed
Test budget depleted
Coverage of code/functionality/requirements reaches a specified point
Bug rate falls below a certain level
Beta or alpha testing period ends

What if there isn't enough time for thorough testing?
Use risk analysis to determine where testing should be focused. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgment skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:
Which functionality is most important to the project's intended purpose?
Which functionality is most visible to the user?
Which functionality has the largest safety impact?
Which functionality has the largest financial impact on users?
Which aspects of the application are most important to the customer?
Which aspects of the application can be tested early in the development cycle?
Which parts of the code are most complex, and thus most subject to errors?
Which parts of the application were developed in rush or panic mode?
Which aspects of similar/related previous projects caused problems?
Which aspects of similar/related previous projects had large maintenance expenses?
Which parts of the requirements and design are unclear or poorly thought out?
What do the developers think are the highest-risk aspects of the application?
What kinds of problems would cause the worst publicity?
What kinds of problems would cause the most customer service complaints?
What kinds of tests could easily cover multiple functionalities?
Which tests will have the best high-risk-coverage to time-required ratio?

What if the project isn't big enough to justify extensive testing?
Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed, and the same considerations described in 'What if there isn't enough time for thorough testing?' apply. The tester might then do ad hoc testing, or write up a limited test plan based on the risk analysis.

What can be done if requirements are changing continuously?
A common problem and a major headache. Possible approaches:
Work with the project's stakeholders early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance, if possible.
It's helpful if the application's initial design allows for some adaptability, so that later changes do not require redoing the application from scratch.
If the code is well-commented and well-documented, this makes changes easier for the developers.

Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes.
The project's initial schedule should allow for some extra time commensurate with the possibility of changes.
Try to move new requirements to a 'Phase 2' version of the application, while using the original requirements for the 'Phase 1' version.
Negotiate to allow only easily-implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.
Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Then let management or the customers (not the developers or testers) decide if the changes are warranted - after all, that's their job.
Balance the effort put into setting up automated testing with the expected effort required to re-do the tests to deal with changes.
Try to design some flexibility into automated test scripts.
Focus initial automated testing on application aspects that are most likely to remain unchanged.
Devote appropriate effort to risk analysis of changes, to minimize regression testing needs.
Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans).
Focus less on detailed test plans and test cases and more on ad hoc testing (with an understanding of the added risk that this entails).

What if the application has functionality that wasn't in the requirements?
It may take serious effort to determine whether an application has significant unexpected or hidden functionality, and this would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer.
If not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects areas such as minor improvements in the user interface, for example, it may not be a significant risk.

How can Software QA processes be implemented without stifling productivity?
By implementing QA processes slowly over time, using consensus to reach agreement on processes, and adjusting and experimenting as an organization grows and matures, productivity will be improved instead of stifled. Problem prevention will lessen the need for problem detection, panics and burn-out will decrease, and there will be improved focus and less wasted effort. At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize time required in meetings, and promote training as part of the QA process. However, no one - especially talented technical types - likes rules or bureaucracy, and in the short run things may slow down a bit. A typical scenario would be that more days of planning and development will be needed, but less time will be required for late-night bug-fixing and calming of irate customers.

What if an organization is growing so fast that fixed QA processes are impossible?
This is a common problem in the software industry, especially in new technology areas. There is no easy solution in this situation, other than:
Hire good people
Management should 'ruthlessly prioritize' quality issues and maintain focus on the customer
Everyone in the organization should be clear on what 'quality' means to the customer

How does a client/server environment affect testing?
Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers.
Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities. There are commercial tools to assist with such testing. (See the 'Tools' section for web resources with listings that include these kinds of test tools.)

How can World Wide Web sites be tested?
Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between html pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, javascript, plug-in applications), and applications that run on the server side (such as cgi scripts, database interfaces, logging applications, dynamic page generators, asp, etc.). Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include:

What are the expected loads on the server (e.g., number of hits per unit time?), and what kind of performance is required under such loads (such as web server response time, database query response times)?
What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)?
Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?
Will down time for server and content maintenance/upgrades be allowed? How much?
What kinds of security (firewalls, encryption, passwords, etc.) will be required and what is it expected to do? How can it be tested?
How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?
Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
How will internal and external links be validated and updated? How often?
Can testing be done on the production system, or will a separate test system be required?
How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world internet 'traffic congestion' problems to be accounted for in testing?
How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?
How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested?

Some sources of site security information include the Usenet newsgroup 'comp.security.announce' and links concerning web site security in the 'Other Resources' section. Some usability guidelines to consider - these are subjective and may or may not apply to a given situation (Note: more information on usability testing issues can be found in articles about web site usability in the 'Other Resources' section):

Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger, provide internal links within the page.
The page layouts and design elements should be consistent throughout a site, so that it's clear to the user that they're still within a site.
Pages should be as browser-independent as possible, or pages should be provided or generated based on the browser type.
All pages should have links external to the page; there should be no dead-end pages.
The page owner, revision date, and a link to a contact person or organization should be included on each page.
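As one concrete example of the link-validation question above, the links on a page can be harvested with Python's standard html.parser and then checked against whatever validation policy the site requires. This is only a sketch; the sample page is invented:

```python
# Link extraction sketch for web-page testing: collect every href on a page
# so internal and external links can be validated. SAMPLE_PAGE is made up.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Record the href attribute of every anchor tag encountered.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

SAMPLE_PAGE = """
<html><body>
  <a href="/index.html">Home</a>
  <a href="https://example.com/external">External</a>
  <a href="/contact.html">Contact</a>
</body></html>
"""

def collect_links(html):
    parser = LinkCollector()
    parser.feed(html)
    return parser.links

if __name__ == "__main__":
    links = collect_links(SAMPLE_PAGE)
    internal = [l for l in links if l.startswith("/")]
    external = [l for l in links if l.startswith("http")]
    print(internal, external)
```

In a real test the collected URLs would then be requested (or resolved against a site map) to detect dead links; that step is omitted here because it depends on the environment.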
Many new web site test tools have appeared in recent years and more than 280 of them are listed in the 'Web Test Tools' section.

Telephonic Manual Testing Interview Questions

1. What are the components of an SRS?
An SRS contains the following basic components:
Introduction
Overall Description
External Interface Requirements
System Requirements

System Features

2. What is the difference between a test plan and a QA plan?
A test plan lays out what is to be done to test the product and includes how quality control will work to identify errors and defects. A QA plan, on the other hand, is more concerned with prevention of errors and defects rather than testing and fixing them.

3. How do you test an application if the requirements are not available?
If requirements documentation is not available for an application, a test plan can be written based on assumptions made about the application. Assumptions that are made should be well documented in the test plan.

4. What is a peer review?
Peer reviews are reviews conducted among people that work on the same team. For example, a test case that was written by one QA engineer may be reviewed by a developer and/or another QA engineer.

5. How can you tell when enough test cases have been created to adequately test a system or module?
You can tell that enough test cases have been created when there is at least one test case to cover every requirement. This ensures that all designed features of the application are being tested.

6. Who approves test cases?
The approver of test cases varies from one organization to the next. In some organizations, the QA lead may approve the test cases, while in others they are approved as part of peer reviews.

7. Give an example of what can be done when a bug is found.
When a bug is found, it is a good idea to run more tests to be sure that the problem witnessed can be clearly detailed. For example, let's say a test case fails when Animal=Cat. A tester should run more tests to be sure that the same problem doesn't exist with Animal=Dog. Once the tester is sure of the full scope of the bug, it can be documented and adequately reported.

8. Who writes test plans and test cases?
Test plans are typically written by the quality assurance lead while testers usually write test cases.

9. Is quality assurance and testing the same?
Quality assurance and testing are not the same. Testing is considered to be a subset of QA. QA should be incorporated throughout the software development life cycle while testing is the phase that occurs after the coding phase.

Typical Manual Testing Interview Questions:

10. What is a negative test case?
Negative test cases are created based on the idea of testing in a destructive manner. For example, testing what will happen if inappropriate inputs are entered into the application.

11. If an application is in production, and one module of code is modified, is it necessary to retest just that module or should all of the other modules be tested as well?
It is a good idea to perform regression testing and to check all of the other modules as well. At the least, system testing should be performed.

12. What should be included in a test strategy?
The test strategy includes a plan for how to test the application and exactly what will be tested (user interface, modules, processes, etc.). It establishes limits for testing and indicates whether manual or automated testing will be used.

13. What can be done to develop a test for a system if there are no functional specifications or any system and development documents?
When there are no functional specifications or system development documents, the tester should familiarize themselves with the product and the code. It may also be helpful to perform research to find similar products on the market.

14. What are the functional testing types?
The following are the types of functional testing:
Compatibility
Configuration
Error handling
Functionality
Input domain
Installation
Inter-systems

Recovery

15. What is the difference between sanity testing and smoke testing?
Smoke testing is a quick, shallow round of testing that checks the basic functionality of a new build, such as whether screens load and buttons work, in order to decide whether the build is stable enough for further testing. Sanity testing is a narrow, focused check, usually performed after a minor change or bug fix, to confirm that the affected functionality behaves as intended.

16. Explain random testing.
Random testing involves checking how the application handles input data that is generated at random. Data types are typically ignored and a random sequence of letters, numbers, and other characters is entered into the data field.

17. Define smoke testing.
Smoke testing is a form of software testing that is not exhaustive and checks only the most crucial components of the software but does not check in more detail.

Advanced Manual Testing Interview Questions

18. What steps are involved in sanity testing?
Sanity testing is very similar to smoke testing. It is the initial testing of a component or application that is done to make sure that it is functioning at the most basic level and is stable enough to continue more detailed testing.

19. What is the difference between WinRunner and Rational Robot?
WinRunner is a functional test tool but Rational Robot is capable of both functional and performance testing. Also, WinRunner has 4 verification points while Rational Robot has 13 verification points.

20. What is the purpose of the testing process?
The purpose of the testing process is to verify that input data produces the anticipated output.

21. What is the difference between QA and testing?
The goals of QA are very different from the goals of testing. The purpose of QA is to prevent errors in the application while the purpose of testing is to find errors.

22. What is the difference between Quality Control and Quality Assurance?
Quality control (QC) and quality assurance (QA) are closely linked but are very different concepts.
While QC evaluates a developed product, the purpose of QA is to ensure that the development process is at a level that makes certain that the system or application will meet the requirements.

23. What is the difference between regression testing and retesting?
Regression testing is performing tests to ensure that modifications to a module or system do not have a negative effect on previous releases. Retesting is merely running the same tests again.

24. Explain the difference between bug severity and bug priority.
Bug severity refers to the level of impact that the bug has on the application or system while bug priority refers to the level of urgency in the need for a fix.

25. What is the difference between system testing and integration testing?
For system testing, the entire system as a whole is checked, whereas for integration testing, the interactions between the individual modules are tested.

26. Explain the term bug.
A bug is an error found while running a program. Bugs fall into two categories: logical and syntax.

Senior Tester Interview Questions

27. Explain the difference between functional and structural testing.
Functional testing is considered to be behavioral or black box testing in which the tester verifies that the system or application functions according to specification. Structural testing, on the other hand, is based on the code or algorithms and is considered to be white box testing.

28. Define defect density.
Defect density is the total number of defects per lines of code.

29. When is a test considered to be successful?
The purpose of testing is to ensure that the application operates according to the requirements and to discover as many errors and bugs as possible. This means that tests that cover more functionality and expose more errors are considered to be the most successful.

30. What good bug tracking systems have you used?

This is a simple interview question about your experience with bug tracking. Provide the system or systems that you are most familiar with, if any at all. It would also be good to provide a comparison of the pros and cons of several if you have experience.

31. In which phase should testing begin - requirements, planning, design, or coding?
Testing should begin as early as the requirements phase.

32. Can you test a program and find 100% of the errors?
It is impossible to find all errors in an application, mostly because there is no way to calculate how many errors exist. There are many factors involved in such a calculation, such as the complexity of the program, the experience of the programmer, and so on.

33. What is the difference between debugging and testing?
The main difference between debugging and testing is that debugging is typically conducted by a developer who also fixes errors during the debugging phase. Testing, on the other hand, finds errors rather than fixes them. When a tester finds a bug, they usually report it so that a developer can fix it.

34. How should testing be conducted?
Testing should be conducted based on the technical requirements of the application.

35. What is considered to be a good test?
Testing that covers most of the functionality of an object or system is considered to be a good test.

36. What is the difference between top-down and bottom-up testing?
Top-down testing begins with the system and works its way down to the unit level. Bottom-up testing checks in the opposite direction, from the unit level through interfaces up to the overall system. Both have value, but bottom-up testing usually aids in discovering defects earlier in the development cycle, when the cost to fix errors is lower.

37. Explain how to develop a test plan and a test case.
A test plan consists of a set of test cases. Test cases are developed based on requirement and design documents for the application or system.
Once these documents are thoroughly reviewed, the test cases that will make up the test plan can be created.

38. What is the role of quality assurance in a product development lifecycle?
Quality assurance should be involved very early on in the development life cycle so that they can have a better understanding of the system and create sufficient test cases. However, QA should be separated from the development team so that the team cannot exert undue influence on the QA engineers.

39. What is the average size of executables that you have created?
This is a simple interview question about your experience with executables. If you know the size of any that you've created, simply provide this info.

40. What version of Oracle are you familiar with?
This is an interview question about experience. Simply provide the versions of the software that you have experience with.

41. How is an SQL query executed in Oracle 8?
This is an interview question to check your experience with Oracle, and you can simply provide the answer from the command prompt. If you do not have Oracle experience, do not pretend; simply state that you have not worked on an Oracle database.

42. Have you performed tests on the front-end and the back-end?
This is an interview question in which you should explain whether you performed testing on the GUI or the server portion of previous applications.

43. What is the most difficult problem you've found during testing?
This is a simple interview question in which you should provide an example.

44. What were your testing responsibilities at your previous employer?
This interview question is very likely being asked to verify your knowledge of your resume. Make sure that you know what is on your resume and that it is the truth.

45. What do you like the most about testing?
There are several answers that you can give for this question.
Here are a few examples:
You enjoy the process of hunting down bugs.
Your experience and background have been focused on enhancing testing techniques.
You like being in the last phase of work before the product reaches the customer.
You consider your contribution to the whole development process to be very important.

CHAPTER 5

Software Testing Techniques

Testing techniques can be used to effectively design efficient test cases. These techniques can be grouped into black-box and white-box techniques. Some of these techniques are described below.

Black-Box Testing techniques

When creating black-box test cases, the input data used is critical. Three successful techniques for managing the amount of input data required include:

Equivalence Partitioning
An equivalence class is a subset of data that is representative of a larger class. Equivalence partitioning is a technique for testing equivalence classes rather than undertaking exhaustive testing of each value of the larger class. For example, a program which edits credit limits within a given range (1,000 - 1,500) would have three equivalence classes:
< 1,000 (invalid)
Between 1,000 and 1,500 (valid)
> 1,500 (invalid)

Boundary Analysis
A technique that consists of developing test cases and data that focus on the input and output boundaries of a given function. In the same credit limit example, boundary analysis would test:
Low boundary +/- one (999 and 1,001)
On the boundary (1,000 and 1,500)
Upper boundary +/- one (1,499 and 1,501)

Error Guessing
Test cases can be developed based upon the intuition and experience of the tester. For example, in an example where one of the inputs is the date, a tester may try February 29, 2000.
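The credit-limit example above translates directly into test data. A minimal sketch in Python, where credit_limit_valid is a hypothetical implementation under test (the document does not specify one):

```python
# Equivalence-partitioning and boundary-analysis sketch for the credit-limit
# example in the text (valid range 1,000-1,500 inclusive, as stated above).

def credit_limit_valid(amount):
    """Hypothetical function under test: accepts limits in the valid range."""
    return 1000 <= amount <= 1500

# One representative value per equivalence class.
EQUIVALENCE_CASES = [
    (500, False),    # class: < 1,000 (invalid)
    (1200, True),    # class: between 1,000 and 1,500 (valid)
    (2000, False),   # class: > 1,500 (invalid)
]

# Boundary values: on each boundary and one either side.
BOUNDARY_CASES = [
    (999, False), (1000, True), (1001, True),
    (1499, True), (1500, True), (1501, False),
]

def run_cases(cases):
    """Return the inputs whose actual result differs from the expected one."""
    return [amt for amt, expected in cases if credit_limit_valid(amt) != expected]

if __name__ == "__main__":
    print(run_cases(EQUIVALENCE_CASES) + run_cases(BOUNDARY_CASES))
```

Nine values exercise both techniques, instead of exhaustively testing every possible credit limit.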

White-Box Testing techniques

White-box testing assumes that the path of logic in a unit or program is known. White-box testing consists of testing paths, branch by branch, to produce predictable results. The following are white-box testing techniques:

Statement Coverage
Execute all statements at least once.

Decision Coverage
Execute each decision direction at least once.

Condition Coverage
Execute each condition with all possible outcomes at least once.

Decision/Condition Coverage
Execute all possible combinations of condition outcomes in each decision. Treat all iterations as two-way conditions, exercising the loop zero times and one time.
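A small illustration of statement versus decision coverage: by placing trace markers on each branch we can see which decision outcomes a given test set actually exercises. The shipping-fee function and its rules are invented for illustration:

```python
# Coverage sketch: trace markers record which decision outcomes a test set
# exercises. The shipping_fee function is a made-up unit under test.

taken = set()  # records which decision outcomes were exercised

def shipping_fee(weight, express):
    if weight > 10:            # decision 1
        taken.add("d1-true")
        fee = 20
    else:
        taken.add("d1-false")
        fee = 10
    if express:                # decision 2
        taken.add("d2-true")
        fee += 5
    else:
        taken.add("d2-false")
    return fee

if __name__ == "__main__":
    # One test executes every statement on one path only...
    shipping_fee(15, True)
    one_test = sorted(taken)
    # ...a second test exercises the remaining decision directions,
    # achieving decision (branch) coverage with just two cases.
    shipping_fee(5, False)
    print(one_test, sorted(taken))
```

Real projects would use a coverage tool rather than hand-placed markers; the point here is only the difference between exercising statements and exercising every decision direction.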

Testing Metrics

While testing a product, a test manager has to make a lot of decisions: when to stop testing, when is the application ready for production, how to track testing progress, and how to measure the quality of the product at a certain point in the testing cycle. Testing metrics can help in making better and more accurate decisions.

Let's start by defining the term 'metric'. A metric is a mathematical number that shows a relationship between two variables. Software metrics are measures used to quantify status or results.

How to track testing progress?
The best way is to have a fixed number of test cases ready before the test execution cycle begins. Then the testing progress is measured by the total number of test cases executed.

% Completion = (Number of test cases executed) / (Total number of test cases)

Not only the testing progress but also the following metrics are helpful to measure the quality of the product:

% Test cases Passed = (Number of test cases Passed) / (Number of test cases executed)
% Test cases Failed = (Number of test cases Failed) / (Number of test cases executed)

Note: A test case is Failed when at least one bug is found while executing it; otherwise it is Passed.

How many rounds or cycles of testing should be done? Or, when to stop testing? Let's discuss a few approaches.

Approach 1: This approach requires that you have a fixed number of test cases ready before the test execution cycle. In each testing cycle you execute all test cases. You stop testing when all the test cases pass or the % failure is very low in the latest testing cycle.

Approach 2: Make use of the following metrics:
Mean Time Between Failure: The average operational time it takes before a software system fails.
Coverage metrics: The percentage of instructions or paths executed during tests.
Defect density: Defects related to the size of the software, such as defects/1000 lines of code.
Open bugs and their severity levels.

If the coverage of code is good, mean time between failure is quite large, defect density is very low, and not many high-severity bugs are still open, then maybe you should stop testing. 'Good', 'large', 'low' and 'high' are subjective terms and depend on the product being tested. Finally, the risk associated with moving the application into production, as well as the risk of not moving forward, must be taken into consideration.
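The metrics defined above are simple ratios, so they are easy to compute automatically from the test-run counts. A sketch with made-up sample numbers:

```python
# Sketch of the testing metrics defined above, using invented sample counts.

def pct(part, whole):
    """Percentage, rounded to one decimal; 0.0 if the denominator is zero."""
    return round(100.0 * part / whole, 1) if whole else 0.0

total_test_cases = 200
executed = 150
passed = 120
failed = executed - passed      # a case fails if at least one bug is found
defects = 18
kloc = 12.0                     # software size in thousands of lines of code

completion = pct(executed, total_test_cases)   # % Completion
pass_rate = pct(passed, executed)              # % Test cases Passed
fail_rate = pct(failed, executed)              # % Test cases Failed
defect_density = defects / kloc                # defects per 1,000 lines

if __name__ == "__main__":
    print(completion, pass_rate, fail_rate, defect_density)
```

With these sample numbers, testing is 75% complete with an 80% pass rate and a defect density of 1.5 defects per 1,000 lines; whether those values justify stopping is the subjective judgment discussed above.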

Test Plan Template

Test Planning is the selection of techniques and methods to be used to validate the product against its approved requirements and design. In this activity we assess the software application risks, and then develop a plan to determine if the software minimizes those risks. We document this planning in a Test Plan document.

Explanation of different sections in the template

Document Signoff: Usually a test plan document is a contract between the testing team and all the other teams involved in developing the product, including the higher management folks. Before signoff, all interested parties thoroughly review the test plan and give feedback, raising issues or concerns, if any. Once everybody is satisfied with the test plan, they sign off the document, which is a green signal for the testing team to start executing the test plan.

Change History: Under this section, you specify who changed what in the document and when, along with the version of the document which contains the changes.

Review and Approval History: This captures who reviewed the document and whether they approved the test plan or not. The reviewer may suggest some changes or comments (if any) to be incorporated in the test plan.

Document References: Any additional documents that will help better understand the test plan, like design documents and/or the Requirements document, etc.

Document Scope: In this section specify what the test plan covers and who its intended audience is.

Product Summary: In this section describe briefly the product that is to be tested.

Product Quality Goals: In this section describe important quality goals of the product. Following are some of the typical quality goals:
-Reliability, proper functioning as specified and expected.
-Robustness, acceptable response to unusual inputs, loads and conditions.
-Efficiency of use by frequent users.
-Ease of use even for less frequent users.

Testing Objectives: In this section specify the testing goals that need to be accomplished by the testing team. The goals must be measurable and should be prioritized. The following are some example test objectives:
Verify functional correctness

Test product robustness and stability.
Measure performance hot spots (locations or features that are problem areas).

Assumptions: In this section specify the expectations which, if not met, could have a negative impact on the execution of this test plan. Some of the assumptions can be about the test budget that must be allocated, resources needed, etc.

Testing Scope: In this section specify what will be covered in testing and what will not be covered.

Testing Strategy: In this section specify the different testing types used to test the product. Tools needed to execute the strategy are also specified.

Testing Schedule: In this section specify first the entire project schedule and then the detailed testing schedule.

Resources: In this section specify all the resources needed to execute the plan successfully.

Communication Approach: In this section specify how the testing team will report bugs to development, how it will report testing progress to management, and how it will report issues and concerns to higher-ups.

Test Case Template

Test Outline: This document is written before writing test cases. This is a planning document in which the flows or scenarios are written at a high level. These flows or scenarios are later expanded into test cases, in which they are written in detail. The biggest advantage of writing this document before going to test cases is the 'traceability matrix', where you ensure that the project/feature is sufficiently or thoroughly covered by the individual test cases.

Explanation of different sections in the template

Change History: Under this section, you specify who changed what in the document and when, along with the version of the document which contains the changes.

Review and Approval History: This captures who reviewed the document and whether they approved the test outline or not. If approved, the reviewer will specify the review comments (if any) to be incorporated in the test outline. There is a review template at the end of the testcase_template.doc, which can be used to specify the comments for the test outline also. If the test outline document is 'Not Approved', then either the scenarios mentioned are not sufficient or the scenarios are in a very bad shape (not in a state to be reviewed), etc.

Document References: Any additional documents that will help better understand the test outline document, like design documents or the Requirements document, etc.

Projects Covered in Test Outline: Projects can be features of the product or modules which are covered in the test outline document.

Traceability Matrix: This matrix is filled in after finishing writing all scenarios in the outline. This is to ensure that all requirements or features are sufficiently covered by the test cases and none are missing. So you map the requirement or feature and subfeature to the test case that will be covering it.
The following IDs uniquely identify the requirements or feature and subfeature. You can add your own IDs based on the need:
REQ_ID = Requirement ID from the SRS document
DD_ID = Detailed Design ID from the Detailed Design document

Setup Requirements: Any setup that has to be done in the application being tested, prior to executing this test case, should be mentioned here. For example, if the test case needs certain login IDs with certain settings to begin, which are not created as part of the test case, then such things need to be mentioned in this section.
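A traceability matrix can be kept as a simple mapping from test cases to the requirements they cover, and then checked mechanically for requirements with no coverage. A minimal sketch with invented REQ and TC identifiers:

```python
# Traceability-matrix sketch: map each test case to the requirement IDs it
# covers, then flag any requirement with no covering test case.
# All IDs below are invented for illustration.

REQUIREMENTS = ["REQ_001", "REQ_002", "REQ_003"]

# Filled in while writing the test outline: test case -> requirements covered.
COVERAGE = {
    "TC_01": ["REQ_001"],
    "TC_02": ["REQ_001", "REQ_002"],
}

def uncovered(requirements, coverage):
    """Return the requirements that no test case covers."""
    covered = {req for reqs in coverage.values() for req in reqs}
    return [req for req in requirements if req not in covered]

if __name__ == "__main__":
    print(uncovered(REQUIREMENTS, COVERAGE))
```

Here REQ_003 would be reported as uncovered, signalling that a scenario is missing from the outline before test-case writing begins.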

Test Objectives: Specify at a very high level what the test case is intended to achieve or verify.

Test Case Limitations: Does the test case achieve the above-mentioned test objective completely, or are there any exceptions? These exceptions need to be specified in this section. For example, if a test case has to verify 'something' on type A, type B and type X, but because of some reason it could NOT verify that 'something' on type X, then it's a limitation.

Test Case Dependencies / Assumptions: Do any other test cases need to be run prior to executing this test case? All those dependencies need to be mentioned here.

Process Flow: In this section, we specify at a high level what the flow of the test case is. Suppose there are multiple users in the test case; then a process flow can look like:
user1: does something
user2: does something else
user1: does again something
user2: says good bye

Test Outline Table column - 'User': Who has to perform the action. Suppose in an application there are two roles, 'Buyer' and 'Supplier'; then the user can be those role names.

Test Outline Table column - 'Action': Under Action you specify the following:
Flow Name - A high-level name given to the action performed by the user. Suppose the Buyer has to create certain purchase orders in the application; then the flow name can be 'Create Purchase Orders'.
Description - The following things should be mentioned here at a high level:
Description of what actions should be performed
What is the type or characteristics of data to be used
What should be verified or checked after performing the action

Effort Estimates: In this section you specify the effort needed to write each test case and the effort needed to execute them.

Test Case Template

Explanation of different sections in the template

Change History: Under this section, you specify who changed what in the document and when, along with the version of the document which contains the changes.

Review and Approval History: This captures who reviewed the document and whether they approved the test case or not. If approved, the reviewer will specify the review comments to be incorporated in the test case. There is a review template at the end of the template document, which can be used to specify the comments. If the test case document is 'Not Approved', then either the test case is not necessary (redundant) or it is in a very bad shape (not in a state to be reviewed).

Document References: Any additional documents that will help better understand the test case, like test outlines or design documents or the Requirements document, etc.

Introduction/Overall Test Objectives: Specify at a very high level what the test case is intended to achieve or verify.

Test Case Limitations: Does the test case achieve the above-mentioned test objective completely, or are there any exceptions? These exceptions need to be specified in this section. For example, if a test case has to verify something on type A, type B and type X, but because of some reason it could NOT verify that something on type X, then it's a limitation.

Test Case Dependencies / Assumptions: Do any other test cases need to be run prior to executing this test case? All those dependencies need to be mentioned here.

Setup Requirements: Any setup that has to be done in the application being tested, prior to executing this test script, should be mentioned here. For example, if the test case needs certain Login IDs with certain settings to begin, which are not created as part of the test case, then such things need to be mentioned in this section.

Process Flow: In this section, we mention who does what in the test case. Suppose there are multiple users in the test case; then a process flow can look like:
user1: does something
user2: does something else
user1: does again something
user2: says good bye

Test Case: The actual test case begins in section 5, which can be further divided into subsections upon convenience and need. For example, if the test case is for an integrated application, then every time we login to a new application, we can have a new subsection. Following is an example of what a test case step looks like:
Step Num: 1
Step Description: check login
Path and Action: Enter user name, enter pwd, click Login
Test Data: abcd, abcd
Expected Results: Verify an error message is thrown that the username and password entered are wrong

Appendix: This section contains any additional data that the test case refers to. For example, if your test case has large amounts of 'Test Data' which are difficult to put under the column 'Test Data' for each step, then you can use the appendix section to hold the data and, in the test case, give a reference to the appendix.

Test Case Review Template: This template can be used by the reviewers to provide their review comments. They can classify the comments based on their severity. The Test Engineer who incorporates the comments in the test case should specify the action taken in the template and then 'Close' the comment.

Black box testing is not a type of testing; it is instead a testing strategy which does not require any knowledge of internal design or code. As the name "black box" suggests, no knowledge of internal logic or code structure is required.
The types of testing under this strategy are totally based/focused on testing the requirements and functionality of the work product/software application. Black box testing is sometimes also called "Opaque Testing", "Functional/Behavioral Testing" and "Closed Box Testing". The base of the black box testing strategy lies in the selection of appropriate data as per functionality and testing it against the functional specifications in order to check for normal and abnormal behavior of the system. Nowadays, it is becoming common to route the testing work to a third party, as the developer of the system knows too much about the internal logic and coding of the system, which makes the developer unfit to test the application. In order to implement the black box testing strategy, the tester needs to be thorough with the requirement specifications of the system and, as a user, should know how the system should behave in response to a particular action. Various testing types that fall under this testing strategy are: functional testing, stress testing, recovery testing, volume testing, user acceptance testing (also known as UAT), system testing, sanity or smoke testing, load testing, usability testing, exploratory testing, ad-hoc testing, alpha testing, beta testing, etc. These testing types are again divided into two groups: a) testing in which the user plays the role of tester and b) testing in which the user is not required. Testing Methods Where a User is Not Required Functional Testing In this type of testing, the software is tested for the functional requirements. The tests are written in order to check if the application behaves as expected. Stress Testing The application is tested against heavy load, such as complex numerical values, large numbers of inputs, large numbers of queries, etc., which checks the stress/load that the application can withstand. Load Testing

The application is tested against heavy loads or inputs, such as the testing of websites, in order to find out at what point the website/application fails or at what point its performance degrades. Ad-hoc Testing This type of testing is done without any formal test plan or test case creation. Ad-hoc testing helps in deciding the scope and duration of the other testing methods, and it also helps testers in learning the application prior to starting any other testing. Exploratory Testing This testing is similar to ad-hoc testing and is done in order to learn/explore the application. Usability Testing This testing is also called 'Testing for User-Friendliness'. This testing is done when the user interface of the application is an important consideration and needs to be specific to a particular type of user. Smoke Testing This type of testing is also called sanity testing and is done in order to check if the application is ready for further major testing and is working properly without failing, at least up to the minimum expected level. Recovery Testing Recovery testing is basically done in order to check how fast and how well the application can recover from any type of crash or hardware failure, etc. The type or extent of recovery is specified in the requirement specifications. Volume Testing Volume testing is done to test the efficiency of the application. A huge amount of data is processed through the application (which is being tested) in order to check the extreme limitations of the system. Testing Where a User is Required User Acceptance Testing In this type of testing, the software is handed over to the user in order to find out if the software meets the user's expectations and works as expected. Alpha Testing In this type of testing, the users are invited to the development center, where they use the application and the developers note every particular input or action carried out by the user.
Any type of abnormal behavior of the system is noted and rectified by the developers. Beta Testing In this type of testing, the software is distributed as a beta version to the users, and the users test the application at their sites. As the users explore the software, if any exception/defect occurs, it is reported to the developers.
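The difference between a smoke check and a functional check can be sketched with Python's unittest module. The application function here (`cart_total`) is an invented stand-in for the system under test; the structure, not the function, is the point:

```python
import unittest

# Hypothetical application code standing in for the system under test.
def cart_total(prices):
    return sum(prices)

class SmokeTest(unittest.TestCase):
    """Smoke/sanity check: is the application minimally working at all?"""
    def test_total_runs_at_all(self):
        # Only verifies the function executes and returns a number.
        self.assertIsInstance(cart_total([1.0, 2.0]), float)

class FunctionalTest(unittest.TestCase):
    """Functional check: does behaviour match the requirement (total = sum of prices)?"""
    def test_total_is_sum_of_prices(self):
        self.assertEqual(cart_total([10.0, 15.5, 4.5]), 30.0)

# Run the smoke test first; deeper functional testing only makes sense if it passes.
suite = unittest.TestSuite()
suite.addTest(SmokeTest("test_total_runs_at_all"))
suite.addTest(FunctionalTest("test_total_is_sum_of_prices"))
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In a real project, the smoke suite would be a small, fast subset of the functional suite, gating whether the major test effort begins.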

White box testing strategy deals with the internal logic and structure of the code. It is also called glass, structural, open or clear box testing. The tests that are written based on the white box testing strategy incorporate coverage of the written code, its branches, paths, statements and internal logic, etc. In order to implement white box testing, the tester has to deal with the code, and hence is required to possess knowledge of coding and logic, i.e., the internal working of the code. White box testing also needs the tester to look into the code and find out which unit/statement/chunk of the code is malfunctioning. In other words, it is imperative that the tester has 'structural' knowledge of how the system has been implemented. Not only the code, but even the data flow and control flow have to be assessed. The areas of the code that are tested using white box testing are: Code Coverage Segment Coverage

Branch Coverage Condition Coverage Loop Coverage Path Testing Data Flow Coverage There are three aspects of the code which are validated in white box testing, namely: whether the software has been designed as per the original design of the software; whether security measures have been implemented into the software and it is robust; and finding out vulnerabilities in the said software. Advantages of White Box Testing As the knowledge of the internal coding structure is a prerequisite, it becomes very easy to find out which type of input/data can help in testing the application effectively. Yet another advantage of white box testing is that it helps in optimizing the code. It helps in removing the extra lines of code, which can introduce defects. Disadvantages of White Box Testing As knowledge of the code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of testing, which in turn increases the cost of the software. It is nearly impossible to look into every bit of code to find out hidden errors, which may create problems, resulting in failure of the application. Types of Testing under the White/Glass Box Testing Strategy Unit Testing The developer carries out unit testing in order to check if the particular module or unit of code is working fine. Unit testing comes at the very basic level, as it is carried out as and when the unit of the code is developed or a particular functionality is built. Static and Dynamic Analysis While static analysis involves going through the code in order to find out any possible defects, dynamic analysis involves executing the code and analyzing the output. Statement Coverage In this type of testing, the code is executed in such a manner that every statement of the application is executed at least once. It helps in assuring that all the statements are executed without any side effect.
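The difference between statement coverage and branch coverage can be shown on a tiny example. The function below is invented for illustration: one input that takes the `if` branch executes every statement (100% statement coverage), but full branch coverage also needs an input where the condition is false:

```python
# Hypothetical function used to illustrate statement vs. branch coverage.
def biggest_dimension(width, length):
    biggest = length          # statement 1
    if width > length:        # statement 2 (the branch)
        biggest = width       # statement 3
    return biggest            # statement 4

# A single test with width > length executes all four statements:
assert biggest_dimension(5, 3) == 5   # 100% statement coverage with one test

# ...but branch coverage also requires the false outcome of the condition:
assert biggest_dimension(2, 7) == 7   # second test completes 100% branch coverage
```

This is why, for simple `if` constructs without an `else`, branch coverage typically needs more tests than statement coverage.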
Different coverage management tools are used to assess the percentage of the executable elements which are currently being tested. (These tools are used for both statement as well as branch coverage.) Branch Coverage No software application can be written in a continuous mode of coding; at some point we need to branch out the code in order to perform a particular functionality. Branch coverage testing helps in validating all the branches in the code and helps make sure that no branching leads to abnormal behavior of the application. Memory Leak Testing When code is written, there is a possibility of a memory leak in the code, which makes the code faulty. Therefore, during the white box testing phase the code is tested to check if there is a memory leak. In the case of a memory leak, more memory is required for the software, and this affects the speed of the software, making it slow. Security Testing Security testing is carried out in order to find out how well the system can protect itself from unauthorized access, hacking (cracking, any code damage, etc.), which deals with the code of the application. This type of

testing needs sophisticated testing techniques. Mutation Testing It is a kind of testing in which the application is tested for the code that was modified after fixing a particular bug/defect. It also helps in finding out which code and which strategy of coding can help in developing the functionality effectively. Besides all the testing types given above, there are some more types which fall under both black box and white box testing strategies, such as: functional testing (which deals with the code in order to check its functional performance), incremental integration testing (which deals with the testing of newly added code in the application), performance and load testing (which helps in finding out how the particular code manages resources and gives performance), etc. Since they fall under white box as well as black box testing, it is difficult to categorize them into either of the two broad types of software testing. 1. What is the MAIN benefit of designing tests early in the life cycle? It helps prevent defects from being introduced into the code. 2. What is risk-based testing? Risk-based testing is the term used for an approach to creating a test strategy that is based on prioritizing tests by risk. The basis of the approach is a detailed risk analysis and prioritizing of risks by risk level. Tests to address each risk are then specified, starting with the highest risk first. 3. A wholesaler sells printer cartridges. The minimum order quantity is 5. There is a 20% discount for orders of 100 or more printer cartridges. You have been asked to prepare test cases using various values for the number of printer cartridges ordered. Which of the following groups contains three test inputs that would be generated using Boundary Value Analysis? 4, 5, 99 4. What is the KEY difference between preventative and reactive approaches to testing? Preventative tests are designed early; reactive tests are designed after the software has been produced. 5. 
What is the purpose of exit criteria? To define when a test level is complete. 6. What determines the level of risk? The likelihood of an adverse event and the impact of the event. 7. When is decision table testing used? Decision table testing is used for testing systems for which the specification takes the form of rules or cause-effect combinations. In a decision table the inputs are listed in a column, with the outputs in the same column but below the inputs. The remainder of the table explores combinations of inputs to define the outputs produced. 8. What is the MAIN objective when reviewing a software deliverable? To identify defects in any software work product. 9. Which of the following defines the expected results of a test? Test case specification. 10. Which is a benefit of test independence? It avoids author bias in defining effective tests. 11. As part of which test process do you determine the exit criteria? Test planning. 12. What is beta testing? Testing performed by potential customers at their own locations. 13. Given the following fragment of code, how many tests are required for 100% decision coverage?

if width > length then
    biggest_dimension = width
    if height > width then
        biggest_dimension = height
    end_if
else
    biggest_dimension = length
    if height > length then
        biggest_dimension = height
    end_if
end_if
4
14. You have designed test cases to provide 100% statement and 100% decision coverage for the following fragment of code.
if width > length then
    biggest_dimension = width
else
    biggest_dimension = length
end_if
The following has been added to the bottom of the code fragment above.
print "Biggest dimension is " & biggest_dimension
print "Width: " & width
print "Length: " & length
How many more test cases are required? None; the existing test cases can be used. 15. What is Rapid Application Development (RAD)? Rapid Application Development (RAD) is formally a parallel development of functions and subsequent integration. Components/functions are developed in parallel as if they were mini projects; the developments are time-boxed, delivered, and then assembled into a working prototype. This can very quickly give the customer something to see and use and to provide feedback regarding the delivery and their requirements. Rapid change and development of the product is possible using this methodology. However, the product specification will need to be developed for the product at some point, and the project will need to be placed under more formal controls prior to going into production. 16. What is the difference between testing techniques and testing tools? Testing technique: a process for ensuring that some aspect of the application system or unit functions properly; there may be few techniques but many tools. Testing tool: a vehicle for performing a test process. The tool is a resource to the tester, but is itself insufficient to conduct testing. 17. We use the output of the requirement analysis, the requirement specification, as the input for writing User Acceptance Test Cases. 18. 
Repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the changes in the software being tested or in another related or unrelated software component: Regression Testing 19. What is component testing? Component testing, also known as unit, module or program testing, searches for defects in, and verifies the functioning of, software (e.g. modules, programs, objects, classes, etc.) that is separately testable. Component testing may be done in isolation from the rest of the system depending on the context of the development life cycle and the system. Most often stubs and drivers are used to replace the missing software and simulate the interface between the software components in a simple manner. A stub is called from the software component to be tested; a driver calls a component to be tested. 20. What is functional system testing? Testing the end-to-end functionality of the system as a whole. 21. What are the benefits of independent testing? Independent testers see other and different defects and are unbiased.
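The stub/driver idea can be sketched in Python. The component names here (`process_order`, the payment gateway) are invented for illustration; the stub is called *from* the component under test, while the driver *calls* the component and feeds it test data:

```python
# Component under test: depends on a payment gateway that is not yet built.
def process_order(amount, gateway):
    if gateway.charge(amount):
        return "confirmed"
    return "declined"

# Stub: a simple stand-in called from the component under test,
# replacing the missing payment gateway with canned behaviour.
class GatewayStub:
    def charge(self, amount):
        return amount <= 100  # no real payment happens

# Driver: test code that calls the component under test and passes it test data.
def driver():
    return [process_order(amount, GatewayStub()) for amount in (50, 200)]

print(driver())  # ['confirmed', 'declined']
```

Once the real gateway component exists, the stub is discarded and the same driver can exercise the integrated pair.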

22. In a REACTIVE approach to testing, when would you expect the bulk of the test design work to begin? After the software or system has been produced. 23. What are the different methodologies in the Agile Development Model? There are currently seven different Agile methodologies that I am aware of: Extreme Programming (XP) Scrum Lean Software Development Feature-Driven Development Agile Unified Process Crystal Dynamic Systems Development Model (DSDM) 24. Which activity in the fundamental test process includes evaluation of the testability of the requirements and system? Test analysis and design. 25. What is typically the MOST important reason to use risk to drive testing efforts? Because testing everything is not feasible. 26. Which is the MOST important advantage of independence in testing? An independent tester may be more effective at finding defects missed by the person who wrote the software. 27. Which of the following are valid objectives for incident reports? i. Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary. ii. Provide ideas for test process improvement. iii. Provide a vehicle for assessing tester competence. iv. Provide testers with a means of tracking the quality of the system under test. i, ii and iv. 28. Consider the following techniques. Which are static and which are dynamic techniques? i. Equivalence Partitioning. ii. Use Case Testing. iii. Data Flow Analysis. iv. Exploratory Testing. v. Decision Testing. vi. Inspections. Data Flow Analysis and Inspections are static; Equivalence Partitioning, Use Case Testing, Exploratory Testing and Decision Testing are dynamic. 29. 
Why are static testing and dynamic testing described as complementary? Because they share the aim of identifying defects but differ in the types of defect they find. 30. What are the phases of a formal review? In contrast to informal reviews, formal reviews follow a formal process. A typical formal review process consists of six main steps: Planning, Kick-off, Preparation, Review meeting, Rework, Follow-up.

31. What is the role of the moderator in the review process? The moderator (or review leader) leads the review process. He or she determines, in co-operation with the author, the type of review, the approach and the composition of the review team. The moderator performs the entry check and the follow-up on the rework, in order to control the quality of the input and output of the review process. The moderator also schedules the meeting, disseminates documents before the meeting, coaches other team members, paces the meeting, leads possible discussions and stores the data that is collected. 32. What is an equivalence partition (also known as an equivalence class)? An input or output range of values such that only one value in the range becomes a test case. 33. When should configuration management procedures be implemented? During test planning. 34. A type of functional testing which investigates the functions relating to the detection of threats, such as viruses, from malicious outsiders: Security Testing 35. Testing wherein we subject the target of the test to varying workloads to measure and evaluate the performance behaviors and the ability of the target of the test to continue to function properly under these different workloads: Load Testing 36. Testing activity which is performed to expose defects in the interfaces and in the interaction between integrated components: Integration Level Testing 37. What are the structure-based (white-box) testing techniques? Structure-based testing techniques (which are also dynamic rather than static) use the internal structure of the software to derive test cases. They are commonly called 'white-box' or 'glass-box' techniques (implying you can see into the system) since they require knowledge of how the software is implemented, that is, how it works. For example, a structural technique may be concerned with exercising loops in the software. 
Different test cases may be derived to exercise the loop once, twice, and many times. This may be done regardless of the functionality of the software. 38. When should regression testing be performed? After the software has changed or when the environment has changed. 39. When should testing be stopped? It depends on the risks for the system being tested. 40. What is the purpose of a test completion criterion? To determine when to stop testing. 41. What can static analysis NOT find? For example, memory leaks. 42. What is the difference between re-testing and regression testing? Re-testing ensures the original fault has been removed; regression testing looks for unexpected side effects. 43. What are the experience-based testing techniques? In experience-based techniques, people's knowledge, skills and background are a prime contributor to the test conditions and test cases. The experience of both technical and business people is important, as they bring different perspectives to the test analysis and design process. Due to previous experience with similar systems, they may have insights into what could go wrong, which is very useful for testing. 44. What type of review requires formal entry and exit criteria, including metrics? Inspection. 45. Could reviews or inspections be considered part of testing? Yes, because both help detect faults and improve quality. 46. An input field takes the year of birth between 1900 and 2004. What are the boundary values for testing this field? 1899, 1900, 2004, 2005
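The boundary values for the year-of-birth field can be derived mechanically. A minimal sketch in Python; the validator function is a hypothetical implementation of the 1900-2004 rule, and the helper generates the edges plus the values just outside them:

```python
# Hypothetical validator for the 1900-2004 year-of-birth field.
def is_valid_year(year, low=1900, high=2004):
    return low <= year <= high

def boundary_values(low, high):
    """Boundary Value Analysis: each edge of the valid range,
    plus the invalid value immediately outside each edge."""
    return [low - 1, low, high, high + 1]

values = boundary_values(1900, 2004)
print(values)                               # [1899, 1900, 2004, 2005]
print([is_valid_year(v) for v in values])   # [False, True, True, False]
```

Errors cluster at the edges of ranges, which is exactly why these four inputs make good test cases.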

47. Which of the following tools would be involved in the automation of regression tests? a. Data tester b. Boundary tester c. Capture/Playback d. Output comparator. d. Output comparator. 48. To test a function, what does a programmer have to write that calls the function to be tested and passes it test data? A driver. 49. What is the one key reason why developers have difficulty testing their own work? Lack of objectivity. 50. How much testing is enough? The answer depends on the risks for your industry, contract and special requirements. 51. When should testing be stopped? It depends on the risks for the system being tested. 52. Which of the following is the main purpose of the integration strategy for integration testing in the small? To specify which modules to combine when, and how many at once. 53. What is the purpose of a test completion criterion? To determine when to stop testing. 54. Given the following code, which statement is true about the minimum number of test cases required for full statement and branch coverage?
Read p
Read q
IF p+q > 100 THEN
    Print "Large"
ENDIF
IF p > 50 THEN
    Print "p Large"
ENDIF
1 test for statement coverage, 2 for branch coverage. 55. What is the difference between re-testing and regression testing? Re-testing ensures the original fault has been removed; regression testing looks for unexpected side effects. 56. Which review is normally used to evaluate a product to determine its suitability for intended use and to identify discrepancies? Technical Review. 57. Why do we use decision tables? The techniques of equivalence partitioning and boundary value analysis are often applied to specific situations or inputs. However, if different combinations of inputs result in different actions being taken, this can be more difficult to show using equivalence partitioning and boundary value analysis, which tend to be more focused on the user interface. 
The other two specification-based techniques, decision tables and state transition testing, are more focused on business logic or business rules. A decision table is a good way to deal with combinations of things (e.g. inputs). This technique is sometimes also referred to as a 'cause-effect' table. The reason for this is that there is an associated logic diagramming technique called 'cause-effect graphing' which was sometimes used to help derive the decision table. 58. Faults found should be originally documented by whom? By testers. 59. Which is the current formal world-wide recognized documentation standard? There isn't one. 60. Which of the following is the review participant who has created the item to be reviewed? Author. 61. A number of critical bugs are fixed in the software. All the bugs are in one module, related to reports. The test manager decides to do regression testing only on the reports module.

Regression testing should be done on other modules as well, because fixing one module may affect other modules. 62. Why does boundary value analysis provide good test cases? Because errors are frequently made during programming of the different cases near the edges of the range of values. 63. What makes an inspection different from other review types? It is led by a trained leader, and uses formal entry and exit criteria and checklists. 64. Why can the tester be dependent on configuration management? Because configuration management assures that we know the exact version of the testware and the test object. 65. What is a V-Model? A software development model that illustrates how testing activities integrate with software development phases. 66. What is maintenance testing? Testing triggered by modifications, migration or retirement of existing software. 67. What is test coverage? Test coverage measures in some specific way the amount of testing performed by a set of tests (derived in some other way, e.g. using specification-based techniques). Wherever we can count things and can tell whether or not each of those things has been tested by some test, then we can measure coverage. 68. Why is incremental integration preferred over big bang integration? Because incremental integration has better early defect screening and isolation ability. 69. When do we prepare the RTM (Requirement Traceability Matrix): before test case design or after? That would be before. Requirements should already be traceable from review activities, since you should have traceability in the Test Plan already. This also depends on the organisation. If the organisation tests after development has started, then requirements must already be traceable to their source. To make life simpler, use a tool to manage requirements. 70. What is the process starting with the terminal modules called? Bottom-up integration. 71. 
During which test activity could faults be found most cost-effectively? During test planning. 72. The purpose of the requirement phase is: to freeze requirements, to understand user needs, and to define the scope of testing. 73. How much testing is enough? The answer depends on the risks for your industry, contract and special requirements. 74. Why do we split testing into distinct stages? Each test stage has a different purpose. 75. Which of the following is likely to benefit most from the use of test tools providing test capture and replay facilities? a) Regression testing b) Integration testing c) System testing d) User acceptance testing. Regression testing. 76. How would you estimate the amount of re-testing likely to be required? Metrics from previous similar projects and discussions with the development team. 77. What does data flow analysis study? The use of data on paths through the code. 78. What is alpha testing? Pre-release testing by end-user representatives at the developer's site. 79. What is a failure? Failure is a departure from specified behaviour. 80. What are test comparators? Is it really a test if you put some inputs into some software, but never look to see whether the software produces the correct result? The essence of testing is to check whether the software produces the correct

result, and to do that, we must compare what the software produces to what it should produce. A test comparator helps to automate aspects of that comparison. 81. Who is responsible for documenting all the issues, problems and open points that were identified during the review meeting? The scribe. 82. What is the main purpose of an informal review? It is an inexpensive way to get some benefit. 83. What is the purpose of a test design technique? Identifying test conditions and identifying test cases. 84. When testing a grade calculation system, a tester determines that all scores from 90 to 100 will yield a grade of A, but scores below 90 will not. This analysis is known as: Equivalence partitioning. 85. A test manager wants to use the resources available for the automated testing of a web application. The best choice is: tester, test automator, web specialist, DBA. 86. During the testing of a module, tester X finds a bug and assigns it to the developer. But the developer rejects it, saying that it's not a bug. What should X do? Send the detailed information about the bug encountered and check its reproducibility. 87. A type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages: Big-Bang Testing. 88. In practice, which life cycle model may have more, fewer or different levels of development and testing, depending on the project and the software product? For example, there may be component integration testing after component testing, and system integration testing after system testing. V-Model. 89. Which technique can be used to achieve input and output coverage? It can be applied to human input, input via interfaces to a system, or interface parameters in integration testing. Equivalence partitioning. 90. This life cycle model is basically driven by schedule and budget risks. This statement is best suited to the V-Model. 91. In which order should tests be run? The most important tests first. 92. 
The later in the development life cycle a fault is discovered, the more expensive it is to fix. Why? The fault has been built into more documentation, code, tests, etc. 93. What is coverage measurement? It is a partial measure of test thoroughness. 94. What is boundary value testing? Testing boundary conditions on, below and above the edges of input and output equivalence classes. 95. What is fault masking? An error condition hiding another error condition. 96. What does COTS represent? Commercial Off The Shelf. 97. The purpose of which is to allow specific tests to be carried out on a system or network that resembles, as closely as possible, the environment where the item under test will be used upon release? Test Environment. 98. What can be thought of as being based on the project plan, but with greater amounts of detail? Phase Test Plan. 99. What is exploratory testing? Exploratory testing is a hands-on approach in which testers are involved in minimum planning and maximum test execution. The planning involves the creation of a test charter, a short declaration of the

scope of a short (1 to 2 hour) time-boxed test effort, the objectives and possible approaches to be used. The test design and test execution activities are performed in parallel, typically without formally documenting the test conditions, test cases or test scripts. This does not mean that other, more formal testing techniques will not be used. For example, the tester may decide to use boundary value analysis but will think through and test the most important boundary values without necessarily writing them down. Some notes will be written during the exploratory-testing session, so that a report can be produced afterwards. 100. What is failure? Deviation of the actual result from the expected result. 36. What are the characteristics of a process? Any process has the following characteristics: The process prescribes all of the major process activities. The process uses resources, subject to a set of constraints (such as a schedule), and produces intermediate and final products. The process may be composed of sub-processes that are linked in some way. The process may be defined as a hierarchy of processes, organized so that each sub-process has its own process model. Each process activity has entry and exit criteria, so that we know when the activity begins and ends. The activities are organized in a sequence, so that it is clear when one activity is performed relative to the other activities. Every process has a set of guiding principles that explain the goals of each activity. 37. What are the advantages of the waterfall model? The various advantages of the waterfall model include: It is a linear model. It is a segmental model. It is systematic and sequential. It is a simple one. It has proper documentation. 38. What is RAD? The RAD (Rapid Application Development) model is proposed when requirements and solutions can be modularized as independent system or software components, each of which can be developed by different teams. 
After these smaller system components are developed, they are integrated to produce the large software system solution. 39. What is system integration testing? Testing of software components that have been distributed across multiple platforms (e.g., client, web server, application server, and database server) to produce failures caused by system integration defects (i.e. defects involving distribution and back-office integration). 40. What are the types of attributes? Simple Attribute Composite Attribute Single-Valued Attribute Multivalued Attribute Derived Attribute

46. What is a test case? A test case is a set of instructions designed to discover a particular type of error or defect in the software system by inducing a failure. 47. What is a software review? A software review can be defined as a filter for the software engineering process. The purpose of any review is to discover errors in the analysis, design, coding, testing and implementation phases of the software development cycle. The other purpose of a review is to see whether procedures are applied uniformly and in a manageable manner. 48. What are the types of reviews? Reviews are of two types: informal technical reviews and formal technical reviews. Informal Technical Review: an informal meeting and informal desk checking. Formal Technical Review: a formal software quality assurance activity through various approaches, such as structured walkthroughs, inspections, etc. 49. What are data flow diagrams (DFD)? Data Flow Diagrams (DFD) are also known as data flow graphs or bubble charts. A DFD serves the purpose of clarifying system requirements and identifying major transformations. DFDs show the flow of data through a system. It is an important modeling tool that allows us to picture a system as a network of functional processes. 50. What is reverse engineering? Reverse engineering is the process followed in order to find difficult, unknown, and hidden information about a software system. It is becoming important, since several software products lack proper documentation, and are highly unstructured, or their structure has degraded through a series of maintenance efforts. Maintenance activities cannot be performed without a complete understanding of the software system.
51. What is a test scenario?
A test scenario is framed on the basis of the requirement that needs to be checked. For it, we frame a set of test cases; in other words, all the conditions that determine the testing coverage against the business requirement. The example below illustrates this: almost all applications have a login screen, which contains a login name field and a password field. Here is the test scenario for the login screen.
Scenario: USER'S LOGIN
Conditions to be checked to test the above scenario:
1. Test the login field and password field individually.
2. Try to login with a valid login and valid password.
3. Try to login with an invalid login and valid password.
etc.

52. What is build duration?
It is the time gap between the old version build and the new version build; in the new version build some new extra features are added.

53. What are test deliverables?
Test deliverables are the documents prepared during and after testing, such as the test plan document, test case template, and bug report template. Test deliverables are delivered to the client not only for the completed activities, but also for the activities we are implementing for better productivity (as per the company's standards). Here are some of the test deliverables in my project:
1. QA Test Plan
2. Test case docs
3. QA Test Plan, if we are using automation
4. Automation scripts
5. QA Coverage Matrix and Defect Matrix

6. Traceability Matrix
7. Test Results doc
8. QA Schedule doc (describes the deadlines)
9. Test Report or Project Closure Report (prepared once we have rolled out the project to the client)
10. Weekly status report (sent by the PM to the client)
11. Release Notes
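The conditions of the USER'S LOGIN scenario can be sketched as data-driven checks. In this minimal Python sketch, validate_login(), the user table, and the credentials are all hypothetical stand-ins for a real authentication back end:

```python
# Hypothetical authentication stub standing in for the real login back end.
VALID_USERS = {"alice": "s3cret"}

def validate_login(username: str, password: str) -> bool:
    """Return True only for a known username with its matching password."""
    return VALID_USERS.get(username) == password

# Conditions for the USER'S LOGIN scenario: (login, password, expected outcome).
conditions = [
    ("alice", "s3cret", True),     # valid login, valid password
    ("alice", "wrong", False),     # valid login, invalid password
    ("mallory", "s3cret", False),  # invalid login, valid password
    ("", "", False),               # both fields left empty
]

for username, password, expected in conditions:
    actual = validate_login(username, password)
    assert actual == expected, f"condition failed for login {username!r}"
print("all login-scenario conditions passed")
```

Each tuple corresponds to one condition of the scenario, so the coverage against the business requirement stays visible in the test data itself.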

44. Explain metrics management.

Metrics: nothing but measurement analysis. Measurement and Analysis is one of the process areas at CMMI Maturity Level 2.

45. What is Performance Testing and Regression Testing?

Performance Testing: testing the present working condition of the product.
Regression Testing: checking whether newly added functionality causes any errors in terms of functionality, and that the common functionality remains stable in both the latest and the previous versions.
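The regression idea can be made concrete with a tiny sketch: after a change adds member discounts (the newly added functionality), the old checks are re-run to confirm the existing behaviour is still stable. The discount_cents() function is a hypothetical example, not from the text:

```python
def discount_cents(price_cents: int, is_member: bool) -> int:
    """v2 of a hypothetical pricing function: members now get 10% off
    (the newly added functionality); non-members are unchanged."""
    return price_cents * 90 // 100 if is_member else price_cents

# New test covering the newly added functionality.
assert discount_cents(10_000, is_member=True) == 9_000

# Regression checks: behaviour that passed in v1 must still pass in v2.
assert discount_cents(10_000, is_member=False) == 10_000
assert discount_cents(0, is_member=False) == 0
print("regression suite passed")
```

The last two assertions are the regression suite proper: they existed before the change and are re-executed unchanged to catch side effects of the fix.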

46. How do you review test cases? What types of review are there?

The types of test case review depend upon company standards, viz., peer review, team lead review, and project manager review. Sometimes the client may also review the test cases regarding the approach being followed for the project.

41. What is positive and negative testing? Explain with an example.

Positive Testing: testing the system by giving valid data.
Negative Testing: testing the system by giving invalid data.
For example, an application contains a textbox and, as per the user's requirements, the textbox should accept only strings. Providing only a string as input data to the textbox and checking whether it works properly is positive testing. Giving input other than a string is negative testing.

35. How do you perform regression testing, i.e., what test cases do you select for regression?
Regression testing is conducted after any bug is fixed or any functionality is changed. During the defect-fixing procedure some part of the code may be changed or the functionality may be manipulated. In this case the old test cases are updated or completely rewritten according to the new features of the application in the area where the bug was fixed. The possible outcomes are: the old test cases are executed as usual, some new test cases are added to the existing ones, or some test cases are deleted.

29. How many test cases can you write per day, an average figure?
Complex test cases: 4-7 per day
Medium test cases: 10-15 per day
Normal test cases: 20-30 per day
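The string-only textbox example above can be sketched in code. This is a minimal sketch, assuming "only Strings" means alphabetic characters only; the accepts_textbox_input() validator is a hypothetical stand-in for the real field validation:

```python
import re

def accepts_textbox_input(value: str) -> bool:
    """Hypothetical validator for a textbox that should accept only
    alphabetic strings (the assumed reading of the requirement above)."""
    return bool(re.fullmatch(r"[A-Za-z]+", value))

# Positive testing: valid data should be accepted.
assert accepts_textbox_input("hello") is True

# Negative testing: invalid data should be rejected.
assert accepts_textbox_input("12345") is False      # digits
assert accepts_textbox_input("hi there!") is False  # space and punctuation
assert accepts_textbox_input("") is False           # empty input
```

The positive assertion exercises the documented behaviour; the negative assertions probe how the system handles data it was never supposed to receive.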

30. Who will prepare the FRS (functional requirements specification)? What is the importance of the FRS?
The Business Analyst prepares the FRS. Based on it we prepare test cases. It contains:
1. Overview of the project
2. Page elements of the application (field names)
3. Prototype of the application
4. Business rules and error states
5. Data flow diagrams
6. Use cases, containing actors, actions, and system responses

18. What is the basis for test case review?

The main bases for test case review are:
1. testing-techniques-oriented review
2. requirements-oriented review
3. defects-oriented review

41. What is acceptance testing? Testing the system with the intent of confirming readiness of the product and customer acceptance. Also known as User Acceptance Testing.

42. What are the types of system testing?
There are essentially three main kinds of system testing:
Alpha testing
Beta testing
Acceptance testing

43. What is the difference between alpha, beta and acceptance testing?
Alpha Testing: Alpha testing refers to the system testing carried out by the test team within the development organization.
Beta Testing: Beta testing is the system testing performed by a selected group of friendly customers.
Acceptance Testing: Acceptance testing is the system testing performed by the customer to determine whether to accept or reject the delivery of the system.

44. What are the advantages of black box testing?
The advantages of this type of testing include:
The test is unbiased because the designer and the tester are independent of each other.
The tester does not need knowledge of any specific programming languages.
The test is done from the point of view of the user, not the designer.
Test cases can be designed as soon as the specifications are complete.

45. What are the advantages of white box testing?
The various advantages of white box testing include:
Forces the test developer to reason carefully about the implementation
Approximates the partitioning done by execution equivalence
Reveals errors in hidden code

What is the difference between static and dynamic testing?
Static testing: is performed using the software documentation. The code is not executed during static testing.
Dynamic testing: requires the code to be in an executable state to perform the tests.

1. Define software?
Software is a set of instructions used to acquire inputs and to manipulate them to produce the desired output in terms of functions and performance as determined by the user of the software.

2. Define testing?
Testing is a process of executing a program with the intent of finding an error.

3. What are the types of software?
There are two types of software:
System Software
Application Software

4. What is the difference between system and application software?

Computer software is often divided into two categories:
System software: This software includes the operating system and all utilities that enable the computer to function.
Application software: These consist of programs that do real work for users.

What are the categories of metrics?
There are three categories of metrics:
Product Metrics
Process Metrics
Project Metrics
Define metrics?
Metrics are the continuous application of measurement-based techniques to the software development process and its products to supply meaningful and timely management information, together with the use of those techniques to improve that process and its products.

The Traceability Matrix is used in all software development life cycle phases:
Risk Analysis phase
Requirements Analysis and Specification phase
Design Analysis and Specification phase
Source Code Analysis, Unit Testing & Integration Testing phase
Validation, System Testing, Functional Testing phase

In this topic we will discuss:
What is a Traceability Matrix from a software testing perspective?
Types of Traceability Matrix
Disadvantages of not using a Traceability Matrix
Benefits of using a Traceability Matrix in testing
Step-by-step process of creating an effective Traceability Matrix from requirements
Sample formats of a Traceability Matrix, from a basic version to an advanced version

In simple words, a requirements traceability matrix is a document that traces and maps user requirements [requirement IDs from the requirement specification document] to test case IDs. The purpose is to make sure that all the requirements are covered in test cases, so that no functionality can be missed while testing. This document is prepared to satisfy the clients that the coverage is complete end to end. It consists of the Requirement/Baseline doc reference number, the Test case/Condition, and the Defect/Bug ID. Using this document a person can track a requirement based on the defect ID.
Note: We can turn it into a test case coverage checklist document by adding a few more columns; we will discuss this in later posts.

Types of Traceability Matrix:
Forward Traceability: mapping of requirements to test cases
Backward Traceability: mapping of test cases to requirements
Bi-Directional Traceability: a good traceability matrix has references from test cases to the basis documentation and vice versa.

Why is Bi-Directional Traceability required?
Bi-Directional Traceability contains both Forward and Backward Traceability. Through the Backward Traceability Matrix, we can see which requirements each test case is mapped to. This helps us identify test cases that do not trace to any coverage item, in which case the test case is not required and should be removed (or maybe a specification, such as a requirement or two, should be added!). Backward traceability is also very helpful if you want to identify how many requirements a particular test case covers.
Through Forward Traceability we can check in which test cases each requirement is covered, and whether the requirements are covered in the test cases at all.
The Forward Traceability Matrix ensures we are Building the Right Product. The Backward Traceability Matrix ensures we are Building the Product Right.
The traceability matrix answers the following questions for any software project:
How is it feasible to ensure, for each phase of the SDLC, that I have correctly accounted for all the customer's needs?
How can I certify that the final software product meets the customer's needs?
Only with a traceability matrix can we make sure the requirements are captured in the test cases.

Disadvantages of not using a Traceability Matrix [some possible (seen) impacts]:
No traceability, or incomplete traceability, results in:
1. Poor or unknown test coverage; more defects found in production.
2. Missing some bugs in earlier test cycles that then arise in later test cycles, followed by a lot of discussions and arguments with other teams and managers before release.
3. Difficult project planning and tracking; misunderstandings between different teams over project dependencies, delays, etc.

Benefits of using a Traceability Matrix:
Make it obvious to the client that the software is being developed as per the requirements.
To make sure that all requirements are included in the test cases.
To make sure that developers are not creating features that no one has requested.
Easy to identify the missing functionalities.
If there is a change request for a requirement, we can easily find out which test cases need to be updated.
Without it, the completed system may have extra functionality that was never specified in the design specification, resulting in wasted manpower, time, and effort.

Steps to create a Traceability Matrix:
1. Use Excel to create the Traceability Matrix.
2. Define the following columns:
Base Specification/Requirement ID (if any)
Requirement ID

Requirement description
TC 001, TC 002, TC 003... and so on.
3. Identify all the testable requirements at a granular level from the requirement document. Typical requirements you need to capture are as follows:
Use cases (all the flows are captured)
Error messages
Business rules
Functional rules
SRS
FRS
and so on.
4. Identify all the test scenarios and test flows.
5. Map Requirement IDs to the test cases. Assume (as per the table below) that test case TC 001 is one flow/scenario. In this scenario, requirements SR-1.1 and SR-1.2 are covered, so mark "x" for these requirements. From the table below you can conclude:
Requirement SR-1.1 is covered in TC 001
Requirement SR-1.2 is covered in TC 001
Requirement SR-1.5 is covered in TC 001 and TC 003
[Now it is easy to identify which test cases need to be updated if there is any change request.]
TC 001 covers SR-1.1 and SR-1.2 [we can easily identify which requirements a test case covers]; TC 002 covers SR-1.3; and so on.

Requirement ID   Requirement description                                 TC 001   TC 002   TC 003
SR-1.1           User should be able to do this                            x
SR-1.2           User should be able to do that                            x
SR-1.3           On clicking this, the following message should appear              x
SR-1.4                                                                              x
SR-1.5                                                                     x                 x
SR-1.6                                                                                       x
SR-1.7                                                                              x

This is a very basic traceability matrix format. You can add more columns and make it more effective: ID, Assoc ID, Technical Assumption(s) and/or Customer Need(s), Functional Requirement, Status, Architectural/Design Document, Technical Specification, System Component(s), Software Module(s), Test Case Number, Tested In, Implemented In, Verification, Additional Comments.
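The forward and backward mappings described above can be sketched as a tiny script. This is a minimal sketch, assuming the sample SR-x.x requirement IDs and TC xxx test-case IDs used in the text; the data itself is illustrative:

```python
# Forward traceability: requirement -> test cases that cover it.
coverage = {
    "SR-1.1": ["TC 001"],
    "SR-1.2": ["TC 001"],
    "SR-1.3": ["TC 002"],
    "SR-1.5": ["TC 001", "TC 003"],
}
all_requirements = ["SR-1.1", "SR-1.2", "SR-1.3", "SR-1.4", "SR-1.5"]

# Backward traceability: test case -> requirements it covers.
backward = {}
for req, test_cases in coverage.items():
    for tc in test_cases:
        backward.setdefault(tc, []).append(req)

# Requirements with no mapped test case are a coverage gap.
uncovered = [r for r in all_requirements if r not in coverage]

print(backward["TC 001"])  # which requirements TC 001 covers
print(uncovered)           # SR-1.4 has no test case yet
```

This directly answers both traceability questions: `coverage` tells you which test cases to update on a change request for a requirement, and `backward` tells you what a given test case actually verifies, while `uncovered` flags requirements that would otherwise be missed during testing.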