1) Check the limit of the username field, i.e., the data type of this field in the DB and the field size. Try adding more characters to this field than the field size allows, and observe how the application responds. (A small boundary-test sketch follows this list.)
2) Repeat the above case for number fields: insert a number beyond the field's storage capacity. This is a typical boundary test.
3) For the username field, try entering numbers and special characters in various combinations (characters like !@#$%^&*()_+}{":?><,./;'[]). If these are not allowed, a specific message should be displayed to the user.
4) Try the above special-character combinations in all the input fields on your sign-up page that have validations, such as the email address field and URL field.
5) Many applications crash on input containing ' (single quote) and " (double quote), for example a field value like: "Vijay's web". Try it in all the input fields one by one.
6) Try entering only numbers in input fields validated to accept only characters, and vice versa.
7) If URL validation is present, review the validation rules and enter URLs that do not fit the rules to observe the system's behavior.
Example URLs like: vijay.com/?q=vijay's!@#$%^&*()_+}{":?><,./;'[]web_page
Also try URLs containing http:// and https:// when entering them into the URL input box.
8) If your sign-up page consists of several steps (step 1, step 2, etc.), try changing parameter values directly in the browser address bar. URLs are often formatted with parameters to keep track of the user's progress through the steps. Try altering all those parameters directly without actually doing anything on the sign-up page.
9) Do some monkey testing, manually or with automation (i.e., enter whatever comes to mind, or type randomly on the keyboard); you will come up with some useful observations.
10) Check whether any page shows a JavaScript error, either in the browser's bottom-left corner, or by enabling the browser setting that displays a popup message for any JavaScript error.
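As a rough sketch of cases 1, 2, 3, and 5, a parametrized boundary test might look like the following. The 50-character limit and the validate_username function are assumptions made purely for illustration; substitute your application's actual validation entry point.

import pytest

MAX_LEN = 50  # assumed DB column size for the username field

def validate_username(value):
    # Hypothetical stand-in for the application's real validation logic:
    # non-empty, within the column size, letters and digits only.
    return 1 <= len(value) <= MAX_LEN and value.isalnum()

@pytest.mark.parametrize("value, expected", [
    ("a" * (MAX_LEN - 1), True),    # just under the limit
    ("a" * MAX_LEN,       True),    # exactly at the limit
    ("a" * (MAX_LEN + 1), False),   # one past the limit (boundary, case 1)
    ("12345",             True),    # digits allowed by this stand-in (case 3)
    ("vijay!@#$%^&*()",   False),   # special characters rejected (case 3)
    ("Vijay's web",       False),   # single quote that often crashes apps (case 5)
])
def test_username_boundaries(value, expected):
    assert validate_username(value) == expected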
• Obtain requirements, functional design, and internal design specifications, user stories,
and other available/necessary information
• Obtain budget and schedule requirements
• Determine project-related personnel and their responsibilities, reporting requirements,
required standards and processes (such as release processes, change processes, etc.)
• Determine project context, relative to the existing quality culture of the
product/organization/business, and how it might impact testing scope, approaches, and
methods.
• Identify the application's higher-risk and more important aspects, set priorities, and
determine the scope and limitations of tests.
• Determine test approaches and methods - unit, integration, functional, system, security,
load, usability tests, etc.
• Determine test environment requirements (hardware, software, configuration, versions,
communications, etc.)
• Determine testware requirements (automation tools, coverage analyzers, test tracking,
problem/bug tracking, etc.)
• Determine test input data requirements
• Identify tasks, those responsible for tasks, and labor requirements
• Set schedule estimates, timelines, milestones
• Determine, where appropriate, input equivalence classes, boundary value analyses, and
error classes (a short sketch follows this list)
• Prepare test plan document(s) and have needed reviews/approvals
• Write test cases
• Have needed reviews/inspections/approvals of test cases
• Prepare test environment and testware, obtain needed user manuals/reference
documents/configuration guides/installation guides, set up test tracking processes, set up
logging and archiving processes, set up or obtain test input data
• Obtain and install software releases
• Perform tests
• Evaluate and report results
• Track problems/bugs and fixes
• Retest as needed
• Maintain and update test plans, test cases, test environment, and testware through life
cycle
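To make the equivalence-class and boundary-value item above concrete, here is a minimal sketch that partitions a hypothetical age field whose valid range is assumed, purely for illustration, to be 18 through 120 inclusive:

# Equivalence classes for a hypothetical age field (valid range 18..120).
EQUIVALENCE_CLASSES = {
    "below_range":  [-1, 0, 17],    # invalid: too small
    "within_range": [18, 65, 120],  # valid
    "above_range":  [121, 999],     # invalid: too large
}

# Boundary values cluster just around each edge of the valid range.
BOUNDARY_VALUES = [17, 18, 19, 119, 120, 121]

def is_valid_age(age):
    # Stand-in for the application's real validation.
    return 18 <= age <= 120

for age in BOUNDARY_VALUES:
    print(age, "->", "accept" if is_valid_age(age) else "reject")

Testing one representative from each equivalence class, plus the boundary values, usually gives good coverage with far fewer test cases than exhaustive input testing.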
• Title
• Identification of software including version/release numbers
• Revision history of document including authors, dates, approvals
• Table of Contents
• Purpose of document, intended audience
• Objective of testing effort
• Software product overview
• Relevant related document list, such as requirements, design documents, other test plans,
etc.
• Relevant standards or legal requirements
• Traceability requirements
• Relevant naming conventions and identifier conventions
• Overall software project organization and personnel/contact-info/responsibilities
• Test organization and personnel/contact-info/responsibilities
• Assumptions and dependencies
• Project risk analysis
• Testing priorities and focus
• Scope and limitations of testing
• Test outline - a decomposition of the test approach by test type, feature, functionality,
process, system, module, etc. as applicable
• Outline of data input equivalence classes, boundary value analysis, error classes
• Test environment - hardware, operating systems, other required software, data
configurations, interfaces to other systems
• Test environment validity analysis - differences between the test and production systems
and their impact on test validity.
• Test environment setup and configuration issues
• Software migration processes
• Software CM processes
• Test data setup requirements
• Database setup requirements
• Outline of system-logging/error-logging/other capabilities, and tools such as screen
capture software, that will be used to help describe and report bugs
• Discussion of any specialized software or hardware tools that will be used by testers to
help track the cause or source of bugs
• Test automation - justification and overview
• Test tools to be used, including versions, patches, etc.
• Test script/test code maintenance processes and version control
• Problem tracking and resolution - tools and processes
• Project test metrics to be used
• Reporting requirements and testing deliverables
• Software entrance and exit criteria
• Initial sanity testing period and criteria
• Test suspension and restart criteria
• Personnel allocation
• Personnel pre-training needs
• Test site/location
• Outside test organizations to be utilized and their purpose, responsibilities, deliverables,
contact persons, and coordination issues
• Relevant proprietary, classified, security, and licensing issues.
• Open issues
• Appendix - glossary, acronyms, etc.
What's a 'test case'?
A test case describes an input, action, or event and an expected response, to determine if a
feature of a software application is working correctly. A test case may contain particulars such as
test case identifier, test case name, objective, test conditions/setup, input data requirements,
steps, and expected results. The level of detail may vary significantly depending on the
organization and project context.
Note that the process of developing test cases can help find problems in the requirements or
design of an application, since it requires completely thinking through the operation of the
application. For this reason, it's useful to prepare test cases early in the development cycle if
possible.
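For illustration, a lightweight way to record such particulars is as a structured record; every field name and value below is invented for the example:

# An illustrative test-case record; values are invented for the example.
test_case = {
    "id":              "TC-SIGNUP-001",
    "name":            "Username exceeds maximum field length",
    "objective":       "Verify the app rejects usernames longer than the DB limit",
    "setup":           "Sign-up page open; username column assumed VARCHAR(50)",
    "input_data":      "a" * 51,
    "steps":           ["Enter the input data in the username field",
                        "Click the Sign Up button"],
    "expected_result": "A specific validation message is shown; no crash",
}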
• Complete information such that developers can understand the bug, get an idea of its
severity, and reproduce it if necessary (a sketch of such a record follows this list).
• Bug identifier (number, ID, etc.)
• Current bug status (e.g., 'Released for Retest', 'New', etc.)
• The application name or identifier and version
• The function, module, feature, object, screen, etc. where the bug occurred
• Environment specifics, system, platform, relevant hardware specifics
• Test case name/number/identifier
• One-line bug description
• Full bug description
• Description of steps needed to reproduce the bug if not covered by a test case or if the
developer doesn't have easy access to the test case/test script/test tool
• Names and/or descriptions of file/data/messages/etc. used in test
• File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be
helpful in finding the cause of the problem
• Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
• Was the bug reproducible?
• Tester name
• Test date
• Bug reporting date
• Name of developer/group/organization the problem is assigned to
• Description of problem cause
• Description of fix
• Code section/file/module/class/method that was fixed
• Date of fix
• Application version that contains the fix
• Tester responsible for retest
• Retest date
• Retest results
• Regression testing requirements
• Tester responsible for regression tests
• Regression testing results
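As a rough sketch, a subset of these fields could be captured in a record like the one below; the field names are illustrative, not the schema of any real bug tracker:

from dataclasses import dataclass, field

@dataclass
class BugReport:
    # Illustrative subset of the fields listed above.
    bug_id: str
    status: str                  # e.g. 'New', 'Released for Retest'
    app_version: str
    module: str
    environment: str
    summary: str                 # one-line bug description
    description: str
    steps_to_reproduce: list = field(default_factory=list)
    severity: int = 3            # 1 (critical) .. 5 (low)
    reproducible: bool = True
    tester: str = ""
    assigned_to: str = ""

report = BugReport(
    bug_id="BUG-1024",
    status="New",
    app_version="2.3.1",
    module="Sign-up page",
    environment="Windows 11, Chrome 126",
    summary="App crashes when the username contains a single quote",
    description="Entering Vijay's web in the username field crashes the page.",
    steps_to_reproduce=["Open the sign-up page",
                        "Type Vijay's web in the username field",
                        "Click Sign Up"],
    severity=1,
)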
• What are the expected loads on the server, and what kind of performance is required
under such loads (such as web server response time and database query response times)?
What kinds of tools will be needed for performance testing (such as web load testing
tools, other tools already in house that can be adapted, load generation appliances, etc.)?
(A minimal load-timing sketch follows this list.)
• Who is the target audience? What kind and version of browsers will they be using, and
how extensive should testing be for these variations? What kind of connection speeds
will they be using? Are they intra-organization (thus with likely high connection speeds
and similar browsers) or Internet-wide (thus with a wider variety of connection speeds
and browser types)?
• What kind of performance is expected on the client side (e.g., how fast should pages
appear, and how fast should Flash content, applets, etc. load and run)?
• Will downtime for server and content maintenance/upgrades be allowed? How much?
• What kinds of security (firewalls, encryption, passwords, functionality, etc.) will be
required and what is it expected to do? How can it be tested?
• What internationalization/localization/language requirements are there, and how are they
to be verified?
• How reliable are the site's Internet connections required to be? And how does that affect
backup system or redundant connection requirements and testing?
• What processes will be required to manage updates to the web site's content, and what are
the requirements for maintaining, tracking, and controlling page content, graphics, links,
etc.?
• Which HTML and related specifications will be adhered to? How strictly? What variations
will be allowed for targeted browsers?
• Will there be any standards or requirements for page appearance and/or graphics, 508
compliance, etc. throughout a site or parts of a site?
• Will any development practices/standards be utilized for web page components and
identifiers? These can significantly impact test automation.
• How will internal and external links be validated and updated? How often?
• Can testing be done on the production system, or will a separate test system be required?
How are browser caching, variations in browser option settings, connection variabilities,
and real-world internet 'traffic congestion' problems to be accounted for in testing?
• How extensive or customized are the server logging and reporting requirements; are they
considered an integral part of the system and do they require testing?
• How are Flash content, applets, JavaScript, ActiveX components, etc. to be maintained,
tracked, controlled, and tested?
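For the load/performance question above, a very rough starting point is to fire a batch of concurrent requests and look at the response-time distribution. The URL, request count, and thread count below are placeholders; real projects would move on to dedicated load testing tools such as those the list mentions.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import median

URL = "http://localhost:8080/signup"   # placeholder target
REQUESTS = 50                          # illustrative load level

def timed_get(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=10) as pool:
    times = sorted(pool.map(timed_get, range(REQUESTS)))

print(f"median: {median(times) * 1000:.1f} ms")
print(f"95th percentile: {times[int(len(times) * 0.95)] * 1000:.1f} ms")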
The primary goal of unit testing is to take the smallest piece of testable software in the
application, isolate it from the remainder of the code, and determine whether it behaves exactly
as you expect. Each unit is tested separately before integrating them into modules to test the
interfaces between modules. Unit testing has proven its value in that a large percentage of defects
are identified during its use.
The most common approach to unit testing requires drivers and stubs to be written. The driver
simulates a calling unit and the stub simulates a called unit.
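A minimal sketch of this idea: below, compute_invoice_total is the unit under test, TaxServiceStub simulates the called unit it depends on, and the test function acts as the driver that simulates a calling unit. All names are invented for illustration.

# Unit under test: depends on a tax-rate lookup (the "called" unit).
def compute_invoice_total(amount, tax_service):
    return round(amount * (1 + tax_service.rate_for("default")), 2)

# Stub: simulates the called unit with a fixed, predictable answer,
# isolating the unit under test from the real tax service.
class TaxServiceStub:
    def rate_for(self, region):
        return 0.10  # canned 10% rate, no real lookup

# Driver: simulates a calling unit by invoking the unit under test
# and checking the result against the expectation.
def test_compute_invoice_total():
    assert compute_invoice_total(100.0, TaxServiceStub()) == 110.0

test_compute_invoice_total()
print("unit test passed")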
This is also known as the Classic Life Cycle Model, the Linear Sequential Model, or the Waterfall
Method. This model has the following activities.
1. System/Information Engineering and Modeling

As software is always part of a larger system (or business), work begins by establishing the
requirements for all system elements and then allocating some subset of these requirements to
software. This system view is essential when the software must interface with other elements
such as hardware, people and other resources. The system is the basic and very critical requirement
for the existence of software in any entity. So if the system is not in place, it should be
engineered and put in place. In some cases, to extract the maximum output, the system should
be re-engineered and spruced up. Once the ideal system is engineered or tuned, the development
team studies the software requirements for the system.
2. Software Requirement Analysis

This process is also known as a feasibility study. In this phase, the development team visits the
customer and studies their system. They investigate the need for possible software automation in
the given system. By the end of the feasibility study, the team furnishes a document that holds
the different specific recommendations for the candidate system. It also includes the
personnel assignments, costs, project schedule, target dates, etc. The requirement gathering
process is then intensified and focused specifically on software. To understand the nature of the
program(s) to be built, the system engineer or "Analyst" must understand the information domain
for the software, as well as the required function, behavior, performance and interfacing. The
essential purpose of this phase is to find the need and to define the problem that needs to be
solved.
3. System Analysis and Design

In this phase, the software development process, the software's overall structure and its nuances
are defined. In terms of client/server technology, the number of tiers needed for the package
architecture, the database design, the data structure design, etc. are all defined in this phase. A
software development model is thus created. Analysis and design are very crucial in the whole
development cycle. Any glitch in the design phase could be very expensive to solve in a later
stage of the software development. Much care is taken during this phase. The logical system of
the product is developed in this phase.
4. Code Generation
The design must be translated into a machine-readable form. The code generation step performs
this task. If the design is performed in a detailed manner, code generation can be accomplished
without much complication. Programming tools like compilers, interpreters, debuggers, etc.
are used to generate the code. Different high-level programming languages like C, C++, Pascal
and Java are used for coding. The right programming language is chosen according to the type
of application.
5. Testing
Once the code is generated, software program testing begins. Different testing methodologies
are available to uncover the bugs that were introduced during the previous phases. Different
testing tools and methodologies are already available. Some companies build their own testing
tools, tailor-made for their own development operations.
6. Maintenance
The software will definitely undergo change once it is delivered to the customer. There can be
many reasons for this change to occur. Change could happen because of some unexpected input
values into the system. In addition, the changes in the system could directly affect the software
operations. The software should be developed to accommodate changes that could happen during
the post implementation period.
"Bug" is a term used in software testing. Many people have the misconception that a bug is simply
an error or something like that. A bug is not an error as such; rather, it is something that affects
the quality parameters of the software.