“A mistake in coding is called an error; an error found by a tester is called a defect; a defect accepted by the development team is called a bug; and when the build does not meet the requirements, it is a failure.”
WEB TESTING
Functionality:
In testing the functionality of the web sites the following should be tested:
• Links
i. Internal Links
ii. External Links
iii. Mail Links
iv. Broken Links
• Forms
i. Field validation
ii. Error message for wrong input
iii. Optional and Mandatory fields
• Database
* Testing will be done on database integrity.
• Cookies
* Testing will be done on the client system side, on the temporary Internet files.
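The form checks listed above (field validation, error messages for wrong input, mandatory vs. optional fields) can be sketched as a small validator. The field names and rules here are hypothetical, chosen only to illustrate the three checks:

```python
# Minimal sketch of form-field validation, assuming a hypothetical
# registration form with "name" (mandatory) and "phone" (optional, digits only).
def validate_form(fields):
    errors = []
    # Mandatory field check
    if not fields.get("name", "").strip():
        errors.append("name is mandatory")
    # Field validation, with an error message for wrong input
    phone = fields.get("phone", "")
    if phone and not phone.isdigit():
        errors.append("phone must contain digits only")
    return errors

# A valid submission produces no errors; invalid input produces messages.
print(validate_form({"name": "Asha", "phone": "12345"}))   # []
print(validate_form({"name": "", "phone": "12x45"}))
```

A test case then simply asserts on the returned error list, one assertion per form rule.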
Performance:
Performance testing can be applied to understand the web site’s scalability, or to benchmark the
performance in the environment of third party products such as servers and middleware for
potential purchase.
• Connection Speed:
Tested over various networks such as Dial-Up, ISDN, etc.
• Load:
i. What is the number of users per unit time?
ii. Check for peak loads and how the system behaves.
iii. Large amounts of data accessed by the user.
• Stress:
i. Continuous Load
ii. Performance of memory, CPU, file handling etc.
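The load idea above, many users per unit time, can be sketched without any network at all: simulated users call a request handler concurrently and we record response times. The handler here is a local stand-in for a real page request, not a real server:

```python
# Minimal load-test sketch: N simulated users hit a handler concurrently
# and we record each response time. handle_request is a local stand-in.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    time.sleep(0.01)          # simulate server-side work
    return f"OK for user {user_id}"

def run_load(num_users):
    timings = []
    def one_user(uid):
        start = time.perf_counter()
        result = handle_request(uid)
        timings.append(time.perf_counter() - start)
        return result
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        results = list(pool.map(one_user, range(num_users)))
    return results, max(timings)

results, worst = run_load(20)
print(len(results), "requests completed; worst response %.3fs" % worst)
```

A real load test would point the worker function at the site under test and ramp `num_users` up to the expected peak.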
Usability:
Usability testing is the process by which the human-computer interaction characteristics of a
system are measured, and weaknesses are identified for correction.
• Ease of learning
• Navigation
• Subjective user satisfaction
• General appearance
Server Side Interface:
In web testing the server-side interface should be tested. This is done by verifying that communication happens properly. Compatibility of the server with software, hardware, network and database should be tested.
Client Side Compatibility:
Client-side compatibility is also tested on various platforms, using various browsers, etc.
What is the difference between client-server testing and web-based testing, and what are the things that we need to test in such applications?
Ans:
Projects are broadly divided into two types:
2-tier applications.
3-tier applications.
Desktop application:
1. Application runs in a single memory space (front end and back end in one place).
2. Single user only.
Client/Server application:
1. Application runs on two or more machines.
2. Application is menu-driven.
3. Connected mode (connection exists until logout).
4. Limited number of users.
5. Fewer network issues compared to a web application.
Web application:
1. Application runs on two or more machines.
2. URL-driven.
3. Disconnected mode (stateless).
4. Unlimited number of users.
5. Many issues such as hardware compatibility, browser compatibility, version compatibility, security issues, performance issues, etc.
A desktop application runs on personal computers and workstations, so when you test a desktop application you are focusing on a specific environment. You will test the complete application broadly in categories like GUI, functionality, load, and backend (i.e. DB).
In a client/server application you have two different components to test. The application is loaded on the server machine, while an application exe is installed on every client machine.
You will test broadly in categories like GUI on both sides, functionality, load, client-server interaction, and backend.
This environment is mostly used in intranet networks.
You are aware of the number of clients and servers and their locations in the test scenario.
A web application is a bit different and more complex to test, as the tester doesn't have as much control over the application.
The application is loaded on a server whose location may or may not be known, and no exe is installed on the client machine; you have to test it on different web browsers.
Web applications are supposed to be tested on different browsers and OS platforms, so broadly a web application is tested mainly for browser compatibility and operating system compatibility, error handling, static pages, backend testing and load testing.
Cookie testing: Cookies are of two types:
1) Session cookies: This cookie is active as long as the browser that invoked it is open. When we close the browser, the session cookie gets deleted. Sometimes a session of, say, 20 minutes can be set to expire the cookie.
2) Persistent cookies: Cookies that are written permanently on the user's machine and last for months or years.
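The difference between the two types can be seen in the Set-Cookie attributes: a session cookie carries no expiry, while a persistent cookie carries Max-Age (or Expires) so the browser writes it to disk. A quick sketch with Python's stdlib cookie container:

```python
# Session vs. persistent cookies via the Set-Cookie attributes.
# A session cookie has no expiry; a persistent one carries Max-Age.
from http.cookies import SimpleCookie

jar = SimpleCookie()
jar["session_id"] = "abc123"                  # session cookie: no expiry set
jar["prefs"] = "dark-mode"
jar["prefs"]["max-age"] = 60 * 60 * 24 * 365  # persistent: lives one year

print(jar["session_id"].OutputString())
print(jar["prefs"].OutputString())
```

The cookie names and values here are made up; only the presence or absence of the Max-Age attribute matters for the distinction.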
Test cases:
1) As a cookie privacy policy, make sure from your design documents that no personal or sensitive data is stored in the cookie.
2) If you have no option other than saving sensitive data in a cookie, make sure the data stored in the cookie is stored in encrypted format.
3) Make sure that there is no overuse of cookies on the site under test. Overuse of cookies will annoy users if the browser prompts for cookies often, and this could result in loss of site traffic and eventually loss of business.
4) Disable cookies from your browser settings: If your site uses cookies, its major functionality will not work when cookies are disabled.
Then try to access the web site under test. Navigate through the site. See if appropriate messages are displayed to the user, like "For smooth functioning of this site make sure that cookies are enabled on your browser". There should not be any page crash due to disabling the cookies. (Make sure that you close all browsers and delete all previously written cookies before performing this test.)
5) Delete cookies: Allow the site to write the cookies, then close all browsers and manually delete all cookies for the web site under test. Access the web pages and check the behavior of the pages.
6) Cookie testing on multiple browsers: This is an important case: check whether your web application writes cookies properly on different browsers as intended and whether the site works properly using these cookies. Test your web application on the major browsers, like Internet Explorer (various versions), Mozilla Firefox, Netscape, Opera, etc.
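Test cases 1 and 2 above can be partly automated as a scan over the cookies a page sets. The cookie contents and the list of "sensitive" markers below are made up for the demo:

```python
# Sketch: scan cookie values for plaintext sensitive data (test cases 1 and 2).
# The cookies and the sensitive-marker list are hypothetical.
import base64

SENSITIVE_MARKERS = ("password", "ssn", "card")

def find_plaintext_leaks(cookies):
    """Return names of cookies whose value contains sensitive data in clear text."""
    return [name for name, value in cookies.items()
            if any(marker in value.lower() for marker in SENSITIVE_MARKERS)]

# base64 is only an encoding, not encryption; a real site should encrypt,
# but either way the marker no longer appears in clear text.
good = {"token": base64.b64encode(b"password=hunter2").decode()}
bad = {"auth": "password=hunter2", "theme": "dark"}

print(find_plaintext_leaks(good))  # []
print(find_plaintext_leaks(bad))   # ['auth']
```

In a real run, the `cookies` dict would come from the browser or HTTP client after navigating the site under test.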
- Maintaining a standard repository of reusable test cases for your application will ensure the most common bugs are caught more quickly.
- A checklist helps to quickly complete writing test cases for new versions of the application.
- Reusing test cases helps to save money on resources to write repetitive tests.
- Important test cases will always be covered, making it almost impossible to forget them.
- The testing checklist can be referred to by developers to ensure the most common issues are fixed in the development phase itself.
Master QA Doc-Mahesh Wagh
Risk-based testing is the term used for an approach to creating a test strategy that is based on
prioritizing tests by risk. The basis of the approach is a detailed risk analysis and prioritizing of
risks by risk level. Tests to address each risk are then specified, starting with the highest risk first.
Decision table testing is used for testing systems for which the specification takes the form of rules
or cause-effect combinations. In a decision table the inputs are listed in a column, with the outputs
in the same column but below the inputs. The remainder of the table explores combinations of
inputs to define the outputs produced.
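The idea of turning each rule (column) of a decision table into a test case can be sketched directly. The loan rule and its inputs here are hypothetical, chosen only to show the technique:

```python
# Decision-table testing for a hypothetical loan rule with two inputs
# (employed?, good credit?) and one output (approve?). Each column of
# the table becomes one test case.
decision_table = {
    # (employed, good_credit): approved
    (True,  True):  True,
    (True,  False): False,
    (False, True):  False,
    (False, False): False,
}

def approve_loan(employed, good_credit):
    # Implementation under test: approve only when both conditions hold.
    return employed and good_credit

# Execute every rule (column) in the table as a test case.
for (employed, credit), expected in decision_table.items():
    assert approve_loan(employed, credit) == expected
print("all", len(decision_table), "decision-table rules pass")
```

With more inputs the table grows as combinations of conditions, which is exactly what makes the technique systematic: no combination is silently skipped.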
16. What is the difference between Testing Techniques and Testing Tools?
Testing technique: a process for ensuring that some aspect of the application system or unit functions properly. There may be few techniques but many tools.
Testing tool: a vehicle for performing the test process; the tool is a resource to the tester, but by itself it is insufficient to conduct testing.
Component testing, also known as unit, module and program testing, searches for defects in, and
verifies the functioning of software (e.g. modules, programs, objects, classes, etc.) that are
separately testable. Component testing may be done in isolation from the rest of the system
depending on the context of the development life cycle and the system. Most often stubs and
drivers are used to replace the missing software and simulate the interface between the software
components in a simple manner. A stub is called from the software component to be tested; a
driver calls a component to be tested.
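The stub/driver distinction above can be sketched in a few lines. All names here are made up: the component under test calls `fetch_price` (replaced by a stub), and a driver calls the component and checks the result:

```python
# Stub and driver around a component under test (names are illustrative).
def fetch_price_stub(item_id):
    # Stub: called FROM the component under test; returns canned data
    # instead of hitting a real pricing service.
    return {"A1": 100.0}.get(item_id, 0.0)

def total_with_tax(item_id, fetch_price=fetch_price_stub):
    # Component under test: depends on a lower-level component
    # that we have replaced with the stub.
    return round(fetch_price(item_id) * 1.2, 2)

def driver():
    # Driver: calls the component under test and checks the outcome,
    # standing in for the missing higher-level caller.
    result = total_with_tax("A1")
    assert result == 120.0
    return result

print("driver result:", driver())
```

So the direction of the call is the whole difference: the stub is called *by* the component, the driver *calls* the component.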
Testing the end-to-end functionality of the system as a whole is defined as functional system testing.
Independent testers are unbiased and identify different defects at the same time.
There are currently seven different agile methodologies that I am aware of:
1. Extreme Programming (XP)
2. Scrum
3. Lean Software Development
4. Feature-Driven Development
5. Agile Unified Process
6. Crystal
7. Dynamic Systems Development Model (DSDM)
Random testing is often known as monkey testing. In this type of testing, data is generated randomly, often using a tool or automated mechanism. The system is tested with this randomly generated input and the results are analysed accordingly. This testing is less reliable; hence it is normally used by beginners and to see whether the system will hold up under adverse effects.
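A minimal sketch of the random-input idea: feed a function many randomly generated inputs and check only broad properties ("the system holds up"), not exact outputs. The function under test here is a toy example:

```python
# Random (monkey) testing sketch: hammer a function with random inputs
# and assert only a broad safety property, not specific outputs.
import random

def clamp(value, low=0, high=100):
    return max(low, min(high, value))

random.seed(42)  # fixed seed so the run is reproducible
for _ in range(1000):
    x = random.randint(-10_000, 10_000)
    result = clamp(x)
    # The system should "hold up": the result always stays inside the range.
    assert 0 <= result <= 100
print("1000 random inputs survived")
```

Seeding the generator is a practical touch: when a random run does find a crash, the same seed reproduces it.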
A formal review process consists of the following phases:
- Planning
- Kick-off
- Preparation
- Review meeting
- Rework
- Follow-up
The moderator (or review leader) leads the review process. He or she determines, in co-operation with the author, the type of review, the approach and the composition of the review team. The moderator performs the entry check and the follow-up on the rework, in order to control the quality of the input and output of the review process. The moderator also schedules the meeting, disseminates documents before the meeting, coaches other team members, paces the meeting, leads any discussions and stores the data that is collected.
A negative test is when you put in an invalid input and expect to receive errors, while a positive test is when you put in a valid input and expect some action to be completed in accordance with the specification.
The purpose of a test completion criterion is to determine when to stop testing.
Re-testing ensures the original fault has been removed; regression testing looks for unexpected side effects.
In experience-based techniques, people's knowledge, skills and background are a prime contributor to the test conditions and test cases. The experience of both technical and business people is important, as they bring different perspectives to the test analysis and design process. Due to previous experience with similar systems, they may have insights into what could go wrong, which is very useful for testing.
It depends on the risks for the system being tested. There are some criteria based on which you can stop testing:
- Deadlines (testing, release)
- Test budget has been depleted
- Bug rate falls below a certain level
- Test cases completed with a certain percentage passed
- Alpha or beta periods for testing end
- Coverage of code, functionality or requirements is met to a specified point
What is black box testing? What are the different black box testing techniques?
Black box testing is a software testing method used to test software without knowing the internal structure of the code or program. This testing is usually done to check the functionality of an application. The different black box testing techniques are:
- Equivalence Partitioning
- Boundary Value Analysis
- Cause-Effect Graphing
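The first two techniques can be shown on a toy rule. The eligibility rule and its range (ages 18..60 accepted) are hypothetical, chosen only to illustrate partitions and boundaries:

```python
# Equivalence partitioning and boundary value analysis for a hypothetical
# rule: ages 18..60 (inclusive) are accepted.
def is_eligible(age):
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per partition.
partitions = {"below": 10, "valid": 35, "above": 70}
assert not is_eligible(partitions["below"])
assert is_eligible(partitions["valid"])
assert not is_eligible(partitions["above"])

# Boundary value analysis: values at and just around each boundary.
boundary_cases = {17: False, 18: True, 19: True, 59: True, 60: True, 61: False}
for age, expected in boundary_cases.items():
    assert is_eligible(age) == expected
print("all partition and boundary cases pass")
```

Note how the two techniques complement each other: partitioning keeps the test count small, while boundary analysis targets the off-by-one mistakes that partition representatives would miss.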
Test coverage measures, in some specific way, the amount of testing performed by a set of tests (derived in some other way, e.g. using specification-based techniques). Wherever we can count things and can tell whether or not each of those things has been tested by some test, we can measure coverage.
What is a failure?
A failure is a deviation of the software from its expected delivery or service; it is what the user observes when a defect is executed.
In order to identify and execute the functional requirements of an application from start to finish, a "use case" is used, and the technique used to do this is known as "Use Case Testing".
What is the difference between STLC (Software Testing Life Cycle) and SDLC (Software Development Life Cycle)?
SDLC covers the complete life cycle of software development (requirements, design, coding, testing, deployment and maintenance), whereas STLC deals only with the testing phases (test planning, test case design, test execution and test closure). STLC runs as a part of the SDLC.
What is white box testing and list the types of white box testing?
White box testing involves selection of test cases based on an analysis of the internal structure (statement coverage, branch coverage, path coverage, condition coverage, etc.) of a component or system. It is also known as code-based testing or structural testing. Different types of white box testing are:
- Statement Coverage
- Decision Coverage
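The difference between the two coverage types shows up already on a one-branch toy function: a single test can execute every statement yet exercise only one outcome of the decision:

```python
# Why decision coverage is stronger than statement coverage,
# shown on a toy function with a single branch.
def apply_discount(price, is_member):
    if is_member:
        price = price * 0.9   # the only conditional statement
    return price

# One test with is_member=True executes EVERY statement
# (100% statement coverage) but only the True outcome of the decision.
assert apply_discount(100, True) == 90.0

# Decision coverage additionally requires the False outcome of the branch.
assert apply_discount(100, False) == 100
print("both decision outcomes exercised")
```

So 100% statement coverage can be reached with one test here, while 100% decision coverage needs both, which is why decision coverage subsumes statement coverage.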
Static testing: In static testing, the code is not executed; testing is performed using the software documentation (and the code itself, by review).
Dynamic testing: To perform this testing, the code is required to be in an executable form.
A test plan document consists of details such as test design, scope, test strategies and approach.
System Testing: System testing is finding defects when the system undergoes testing as a whole; it is also known as end-to-end testing. In this type of testing, the application is exercised from beginning to end.
UAT: User Acceptance Testing (UAT) involves running a product through a series of specific tests to determine whether it meets the end users' needs before release.
Q. What is Exhaustive Testing?
Ans. Testing functionality with all valid and invalid inputs and preconditions is called exhaustive testing.
Q. What is Defect Clustering?
Ans. A small module or functionality may contain a disproportionate number of defects; concentrate more testing on such functionality.
Q. What is Positive Testing?
Ans. Testing conducted on the application to determine whether the system works as expected. Basically known as the "test to pass" approach.
Q. What is Negative Testing?
Ans. Testing the software with a negative approach, to check that the system does not show an error when it is not supposed to, and does show an error when it is supposed to.
Q. What is Process?
Ans. A process is a set of practices performed to achieve a given purpose; it may include tools, methods, materials and/or people.
Q. What is a Defect?
Ans. Any flaw or imperfection in a software work product.
(or)
Expected result is not matching with the application actual result.
Q. What is Severity?
Ans. It defines the importance of a defect from a functional point of view, i.e. how critical the defect is with respect to the application.
Q. What is Priority?
Ans. It indicates the importance or urgency of fixing a defect.
The various advantages of the waterfall model include:
- It is a linear model.
- It is a segmental model.
- It is systematic and sequential.
- It is a simple one.
- It has proper documentation.
The RAD (Rapid Application Development Model) model is proposed when requirements and
solutions can be modularized as independent system or software components, each of which can
be developed by different teams. After these smaller system components are developed, they are
integrated to produce the large software system solution.
13. What is the difference between Two Tier Architecture and Three Tier Architecture?
In Two Tier Architecture or Client/Server Architecture, two layers, Client and Server, are involved. The Client sends a request to the Server, and the Server responds by fetching the data from itself. The problem with Two Tier Architecture is that the server cannot respond to multiple requests at the same time, which causes data integrity issues.
The Client/Server Testing involves testing the Two Tier Architecture of user interface in the front
end and database as backend with dependencies on Client, Hardware and Servers.
In Three Tier Architecture or Multi Tier Architecture, three layers, Client, Server and Database, are involved. The Client sends a request to the Server, the Server sends the request to the Database for data, the Database sends the data back to the Server based on that request, and from the Server the data is forwarded to the Client.
The Web Application Testing involves testing the Three Tier Architecture including the User
interface, Functionality, Performance, Compatibility, Security and Database testing.
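The three-tier request flow described above can be sketched with each tier as a plain function; the names and the tiny in-memory "database" are illustrative only:

```python
# Minimal sketch of the three-tier flow: client -> server -> database
# and back. Each tier is a plain function; names are illustrative.
DATABASE = {"user:1": "Asha"}           # database tier: holds the data

def database_tier(query):
    return DATABASE.get(query)

def server_tier(request):
    # The server forwards the client's request to the database,
    # then returns the fetched data to the client.
    data = database_tier(request)
    return {"status": 200 if data else 404, "body": data}

def client_tier(resource):
    return server_tier(resource)        # the client only ever talks to the server

print(client_tier("user:1"))   # {'status': 200, 'body': 'Asha'}
print(client_tier("user:2"))   # {'status': 404, 'body': None}
```

Testing each function in isolation mirrors tier-by-tier testing (UI, functionality, database), while calling `client_tier` end to end mirrors full web application testing.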
Defect leakage occurs at the customer or end-user side after the application delivery. If, after the release of the application to the client, the end user finds any defects while using the application, it is called defect leakage. Defect leakage is also called bug leakage.
1. Process metrics: Primary metrics are also called process metrics. These are the metrics that Six Sigma practitioners care about and can influence. Primary metrics are almost always direct output characteristics of a process. They are a measure of a process, not a measure of a high-level business objective. Primary process metrics are usually process defects, process cycle time and process consumption.
2. Product metrics: Product metrics quantitatively characterize some aspect of the structure of a
software product, such as a requirements specification, a design, or source code.
We cannot perform 100% testing on any application, but the criteria to ensure test completion on a project are:
- All the test cases are executed with a certain percentage of passes.
- The bug rate falls below a certain level.
- The test budget is depleted.
- Deadlines are reached (project or test).
- All the functionalities are covered by test cases.
- All critical and high bugs have a status of CLOSED.
The Unified Modelling Language is a third-generation method for specifying, visualizing, and documenting the artefacts of an object-oriented system under development. From the inside, the Unified Modelling Language consists of three things:
- A formal metamodel
- A graphical notation
- A set of idioms of usage
Bug:
An error found in the development environment before the product is shipped to the customer. Simply, a bug is an error found BEFORE the application goes into production.
Defect:
A defect is the difference between the expected and actual result in the context of testing; it is a deviation from the customer requirement. Simply, a defect can be defined as a variance between expected and actual. A defect is an error found AFTER the application goes into production.
Error:
An error is a human mistake, typically a mistake in coding, that produces an incorrect result.
Failure:
Failures are caused by the environment or sometimes by mishandling of the product. For example, if we use a compass right beside a current-carrying wire, it will not show the correct direction; the product fails to give us the right information.
Fault:
An incorrect step, process, or data definition in a computer program which causes the program to
perform in an unintended or unanticipated manner. See: bug, defect, error, exception.