
Home work-2

Subject Name – Software Testing and Quality Assurance

Subject Code – CAP526

Submitted to –Miss Gargi Sharma

Submitted By-
Jivtesh Singh Ahuja

Roll No- RD3803B52

Class – MCA 4th Sem

Section – d3803

Reg. No. - 10812519


DECLARATION: I declare that this homework is my individual work. I have not
copied from other students' work or from any other source except where due
acknowledgement is made explicitly in the text, nor has any part been written
for me by another person.

Jivtesh Singh Ahuja

Evaluator
comments………………………………………………………………………………………
………………………

Marks Obtained _________________ Out Of


____________________
Part – A

Q1. Why does knowing how the software works influence how and what you
should test?

ANS:

The whole testing process depends on how the software works according to the requirement
specification given by the user. The developed software has to pass through both the validation
process and the verification process of testing. Testing is done on the basis of the requirements
of the software as well as its overall working.

If you test only by running the software without looking at the code, i.e. by black-box testing,
you won't know whether your test cases adequately cover all parts of the software: what types of
data it is able to interact with, what the different boundary conditions are and how to test them,
and how control moves from one module of the software to another.

So the various testing techniques should be applied according to the software, for example:

Black Box Testing

• Functional Testing
• Stress Testing
• Load Testing
• Exploratory Testing
• Usability Testing
• Security Testing
• Smoke Testing
• Recovery Testing
• Volume Testing
• Domain Testing
• Scenario Testing
• Regression Testing
• User Acceptance Testing (Alpha Testing, Beta Testing)

White Box Testing

• Unit Testing
• Static & Dynamic Analysis
• Statement Coverage
• Branch Coverage
• Mutation Testing
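The difference between the black-box and white-box approaches above can be seen in a small sketch (the discount function and its special case are invented for illustration): test cases derived only from the specification can all pass while never exercising a branch that is visible only in the code.

```python
# Hypothetical spec: "10% off orders of 100 or more". Black-box tests
# derived from that spec alone never reach the hidden branch below.
def discounted_total(amount):
    if amount >= 100:
        if amount == 500:             # hidden special case, not in the spec
            return amount // 2        # bug: 50% off instead of 10%
        return amount - amount // 10  # 10% off, as the spec says
    return amount

# Spec-based (black-box) cases: both pass, the hidden branch never runs.
assert discounted_total(50) == 50
assert discounted_total(200) == 180

# Only by reading the code (white-box) do we learn to test amount == 500:
assert discounted_total(500) == 250   # exposes the unintended behaviour
```

This is why knowing how the software works changes what you test: the third case cannot be derived from the requirements at all.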
Q2. What is the biggest problem of White-Box testing either Static or
Dynamic?
White box testing (Clear Box Testing, Open Box Testing, Glass Box Testing, Transparent Box
Testing or Structural Testing) traditionally refers to the use of program source code as a test
basis, that is, as the basis for designing tests and test cases. White-box testing usually involves
tracing possible execution paths through the code and working out what input values would force
the execution of those paths. White box testing, on its own, cannot identify problems caused by
mismatches between the actual requirements or specification and the code as implemented, but it
can help identify some types of design weaknesses in the code.

Problems regarding white box testing are:

1. Test cases are tough and challenging to design, without having clear functional specifications.

2. It is difficult to identify tricky inputs, if the test cases are not developed based on specifications.

3. It is difficult to identify all possible inputs in limited testing time. So writing test cases is slow and
difficult.

4. As knowledge of code and internal structure is a prerequisite, a skilled tester is needed to carry out
this type of testing, which increases the cost.

5. And it is nearly impossible to look into every bit of code to find out hidden errors, which may
create problems, resulting in failure of the application.

6. There are chances of having unidentified paths during this testing.

7. Very few white-box tests can be done without modifying the program, changing values to force
different execution paths, or to generate a full range of inputs to test a particular function.

8. It misses cases omitted from the code: tests derived from the code cannot reveal functionality
that was never implemented.

9. There are chances of repeating tests that have already been done by the programmer.
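What tracing execution paths involves can be sketched with a small invented function: reading the code shows it has three paths, and one input is chosen to force each, with a set recording the branch coverage achieved.

```python
# A sketch of white-box test design: trace the execution paths through the
# code, then pick one input per path and record which branches actually ran.
visited = set()  # branches exercised by the test inputs

def classify_triangle(a, b, c):
    if a == b == c:
        visited.add("equilateral")
        return "equilateral"
    if a == b or b == c or a == c:
        visited.add("isosceles")
        return "isosceles"
    visited.add("scalene")
    return "scalene"

# Reading the code shows three paths; one input is chosen to force each.
assert classify_triangle(2, 2, 2) == "equilateral"
assert classify_triangle(2, 2, 3) == "isosceles"
assert classify_triangle(2, 3, 4) == "scalene"
assert visited == {"equilateral", "isosceles", "scalene"}  # every branch ran
```

Doing this by hand is slow even for three branches, which is exactly problem 3 above at scale.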


Q3. How could you guarantee that your software would never have a
configuration problem?

Ans:
No, we can’t guarantee that a software product will never have a configuration
problem. New software may be developed and tested on a particular hardware
configuration and run correctly on it, but we can’t guarantee that every user
will have that configuration; some users' hardware may be even newer than the
configuration on which the software was successfully tested. It is not possible
for users to buy every new piece of hardware that comes onto the market, and the
software may not be compatible with it. Therefore it is never possible to
guarantee that software will never have a configuration problem.

So in order to tackle this problem we can do two things:

1. Ship the hardware and software together as one package, so that the software
works only on that hardware. Since the hardware sent along with the software
package has been successfully tested with it, there will be no configuration
problem.

Example: Pinnacle Studio comes in a single package with the TV-tuner card, data
cable, and software CD to avoid configuration problems.

2. Document the recommended and minimum hardware requirements that the user
should have for the proper working of the software.
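The second approach can be sketched as a small installer-side check; the requirement names and thresholds below are entirely hypothetical.

```python
# Hedged sketch: ship the documented minimum requirements with the software
# and check them at install time. All field names and thresholds are invented.
MINIMUM = {"ram_mb": 2048, "disk_mb": 500, "cpu_cores": 2}

def failed_requirements(detected):
    """Return the list of documented requirements the detected hardware fails."""
    return [key for key, needed in MINIMUM.items()
            if detected.get(key, 0) < needed]

# A machine with too little RAM: the installer can refuse or warn the user.
problems = failed_requirements({"ram_mb": 1024, "disk_mb": 800, "cpu_cores": 2})
assert problems == ["ram_mb"]
```

This does not guarantee the absence of configuration problems, but it turns the documented requirements into something the software can enforce.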
Q4. Create the equivalence partitioning and write test cases to test the login
screen containing username and password?

Ans:
Consider a ymail account which has:

User Id= Jivtesh_ahuja@ymail.com

Password=jivteshahuja

Test cases:
1) User Id=NULL , Password=NULL
E.g.
User Id= , Password=
Result-- prompt message “please enter user id and password”

2) User ID=NULL , Password= not null


E.g.
User Id= , Password=12345678
Result-- prompt message “please enter valid email id”

3) User ID=Not NULL , Password= null


E.g.
User Id=Ahuja@yahoo.com , Password=

Result-- prompt message “please enter valid password”

4) User ID=Not Valid (containing neither ‘.’ nor ‘@’) , Password= valid
E.g.
User Id= Jivtesh_ahujaymailcom , Password= jivteshahuja
Result-- prompt message “please enter valid user id or password”

5) User ID=Not Valid (Not containing ‘@’ but containing ‘.’) , Password= valid
E.g.
User Id= Jivtesh_ahujaymail.com , Password= jivteshahuja
Result-- prompt message “please enter valid user id or password”

6) User ID=Not Valid (containing ‘@’ but not ‘.’) , Password= valid
E.g.
User Id= Jivtesh_ahuja@ymailcom , Password= jivteshahuja
Result-- prompt message “please enter valid user id or password”
7) User ID=Valid (containing both ‘.’ and ‘@’) , Password= Less than 6 characters
E.g.
User Id= Jivtesh_ahuja@ymail.com , Password= jiv
Result-- prompt message “Too small to be a password”

8) User ID=Valid (containing both ‘.’ and ‘@’) , Password= Greater than 12
characters
E.g.
User Id= Jivtesh_ahuja@ymail.com , Password= jivteshahuja12345678
Result-- prompt message “Too long to be a password”

9) User ID=Valid (containing both ‘.’ and ‘@’) , Password= valid (12 characters)
E.g.
User Id= Jivtesh_ahuja@ymail.com , Password= jivteshahuja
Result-- prompt message “valid user id and password” Logging in…
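The partitions above can be turned into a runnable sketch: a hypothetical validator implementing the assumed rules (the user id must contain both ‘@’ and ‘.’, the password must be 6 to 12 characters), with one representative input per partition. The exact prompt messages are assumptions from the test cases, not a real ymail API.

```python
# Hypothetical login validator matching the equivalence partitions above.
def check_login(user_id, password):
    if not user_id and not password:
        return "please enter user id and password"
    if "@" not in user_id or "." not in user_id:
        return "please enter valid user id"
    if len(password) < 6:
        return "too small to be a password"
    if len(password) > 12:
        return "too long to be a password"
    return "valid user id and password"

# One representative input per partition is enough to cover each class:
assert check_login("", "") == "please enter user id and password"
assert check_login("Jivtesh_ahujaymailcom", "jivteshahuja") == "please enter valid user id"
assert check_login("Jivtesh_ahuja@ymail.com", "jiv") == "too small to be a password"
assert check_login("Jivtesh_ahuja@ymail.com", "jivteshahuja12345678") == "too long to be a password"
assert check_login("Jivtesh_ahuja@ymail.com", "jivteshahuja") == "valid user id and password"
```

The point of equivalence partitioning is visible here: five inputs stand in for the infinitely many strings in each class.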
Q5. Explain the key elements involved in formal reviews?

Ans:
There are four essential elements to a formal review:

• Identify problems.

The goal of the review is to find problems with the software, not just items that are
wrong, but missing items as well. All criticism should be directed at the design or
code, not the person who created it. Participants shouldn't take any criticism
personally. Leave your egos, emotions, and sensitive feelings at the door.

• Follow rules.

A fixed set of rules should be followed. They may set the amount of code to be
reviewed (usually a couple hundred lines), how much time will be spent (a couple
hours), what can be commented on, and so on. This is important so that the
participants know what their roles are and what they should expect. It helps the review
run more smoothly.

• Prepare.

Each participant is expected to prepare for and contribute to the review. Depending on
the type of review, participants may have different roles. They need to know what
their duties and responsibilities are and be ready to actively fulfill them at the review.
Most of the problems found through the review process are found during preparation,
not at the actual review.

• Write a report.

The review group must produce a written report summarizing the results of the review
and make that report available to the rest of the product development team. It's
imperative that others are told the results of the meeting: how many problems were
found, where they were found, and so on.
Part – B
Q6. Is it acceptable to release a software product that has configuration bugs?

Ans:
No, it is not a proper approach to release software that has configuration bugs,
but in practice you will probably never be able to fix all of them. As in all
testing, the process is risk based. You and your team will need to decide what
you can fix and what you can't. Leaving in an obscure bug that only appears with
a rare piece of hardware is an easy decision. Others won't be as easy.

For example:

1. In 1998, at a public demonstration of Windows 98, attaching a peripheral
device caused the system to crash with the blue screen error. The operating
system was not yet able to link with all types of peripheral devices, i.e. it
had configuration bugs in it. Microsoft fixed these issues in later updates,
and Windows 98 went on to become one of the more stable operating systems of
its time.

2. In 1994, Disney released its first multimedia CD-ROM game for children, The
Lion King Animated Storybook. Its sales were huge, in the millions. But there
was a bug: the game did not work on many customers' computer configurations.

It turned out that Disney had failed to test the software on a broad
representation of the different PC models available. Disney had to reconfigure
its game software and re-send it to customers for free, resulting in a great
loss and waste of money.
Q: 7. In addition to age and popularity what other criteria might you use to
equivalence partition hardware for configuration testing?

Ans:

Region or country is a possibility, as some hardware devices such as DVD players only
work with DVDs in their geographic region; similarly, software developed by an
organization in one country that is not globally spread may not know what types of
hardware are being used in the rest of the world. Another criterion might be consumer
versus business: some hardware is specific to one but not the other. Think of others
that might apply to your software.

The cost of a piece of software is directly related to the time spent testing it:
the greater the testing time, the greater the cost or budget. If a proper amount
of testing is not done, the number of bugs will keep increasing; if excessive
testing is done, the cost will increase exponentially. If the software does not
work on the current hardware, i.e. it fails configuration testing, the project
manager must find the types of hardware on which the software will work. To do
this, he/she has to draw up alternatives on the basis of brand, cost, models,
etc., and choose the best among them.
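Equivalence partitioning hardware by such criteria can be sketched in code: group the device list by the chosen criteria and test one representative per group. The device data below is invented for illustration.

```python
# Partition a hardware list by (brand, region) and keep one representative
# per partition, so each equivalence class is configuration-tested once.
devices = [
    {"model": "DVD-A1", "brand": "Sony",    "region": "EU"},
    {"model": "DVD-A2", "brand": "Sony",    "region": "EU"},
    {"model": "DVD-B1", "brand": "Philips", "region": "US"},
    {"model": "DVD-B2", "brand": "Philips", "region": "US"},
    {"model": "DVD-C1", "brand": "Sony",    "region": "US"},
]

representatives = {}
for device in devices:
    key = (device["brand"], device["region"])  # the partitioning criteria
    representatives.setdefault(key, device)    # keep the first seen per class

# Five devices collapse to three configurations worth testing.
assert len(representatives) == 3
```

Adding or removing criteria (cost band, consumer vs business) simply changes the key and therefore the number of representative configurations.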
Q8. What are the different levels of testing and the goals of different levels?
For each level which testing approach is more suitable?

Ans: The different levels of testing are:

• ACCEPTANCE TESTING
Testing to verify a product meets customer specified requirements. A customer usually does
this type of testing on a product that is developed externally.

• BLACK BOX TESTING


Testing without knowledge of the internal workings of the item being tested. Tests are usually
functional.

• COMPATIBILITY TESTING
Testing to ensure compatibility of an application or Web site with different browsers, OSs,
and hardware platforms. Compatibility testing can be performed manually or can be driven by
an automated functional or regression test suite.

• CONFORMANCE TESTING
Verifying implementation conformance to industry standards. Producing tests for the behavior
of an implementation to be sure it provides the portability, interoperability, and/or
compatibility a standard defines.

• FUNCTIONAL TESTING
Validating an application or Web site conforms to its specifications and correctly performs all
its required functions. This entails a series of tests which perform a feature by feature
validation of behavior, using a wide range of normal and erroneous input data. This can
involve testing of the product's user interface, APIs, database management, security,
installation, networking, etc. Functional testing can be performed on an automated or
manual basis using black box or white box methodologies.

• INTEGRATION TESTING
Testing in which modules are combined and tested as a group. Modules are typically code
modules, individual applications, client and server applications on a network, etc. Integration
Testing follows unit testing and precedes system testing.

• LOAD TESTING
Load testing is a generic term covering Performance Testing and Stress Testing.

• PERFORMANCE TESTING
Performance testing can be applied to understand your application or WWW site's scalability,
or to benchmark the performance in an environment of third party products such as servers
and middleware for potential purchase. This sort of testing is particularly useful to identify
performance bottlenecks in high use applications. Performance testing generally involves an
automated test suite as this allows easy simulation of a variety of normal, peak, and
exceptional load conditions.
• REGRESSION TESTING
Similar in scope to a functional test, a regression test allows a consistent, repeatable validation
of each new release of a product or Web site. Such testing ensures reported product defects
have been corrected for each new release and that no new quality problems were introduced in
the maintenance process. Though regression testing can be performed manually an automated
test suite is often used to reduce the time and resources needed to perform the required testing.

• SMOKE TESTING
A quick-and-dirty test that the major functions of a piece of software work without bothering
with finer details. Originated in the hardware testing practice of turning on a new piece of
hardware for the first time and considering it a success if it does not catch on fire.
• STRESS TESTING
Testing conducted to evaluate a system or component at or beyond the limits of its specified
requirements to determine the load under which it fails and how. A graceful degradation under
load leading to non-catastrophic failure is the desired result. Often Stress Testing is performed
using the same process as Performance Testing but employing a very high level of simulated
load.
• SYSTEM TESTING
Testing conducted on a complete, integrated system to evaluate the system's compliance with
its specified requirements. System testing falls within the scope of black box testing, and as
such, should require no knowledge of the inner design of the code or logic.

• UNIT TESTING
Functional and reliability testing in an Engineering environment. Producing tests for the
behavior of components of a product to ensure their correct behavior prior to system
integration.

• WHITE BOX TESTING


Testing based on an analysis of internal workings and structure of a piece of software.
Includes techniques such as Branch Testing and Path Testing. Also known as Structural
Testing and Glass Box Testing.
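As a sketch of the lowest level above, here is a unit test for a single invented component, written with Python's built-in unittest framework; the component and its rules are assumptions for illustration.

```python
import unittest

def apply_tax(amount, rate):
    """Component under test: add tax to an amount, rejecting bad input."""
    if amount < 0 or rate < 0:
        raise ValueError("amount and rate must be non-negative")
    return round(amount * (1 + rate), 2)

class ApplyTaxTest(unittest.TestCase):
    def test_normal_case(self):
        self.assertEqual(apply_tax(100, 0.18), 118.0)

    def test_zero_rate(self):
        self.assertEqual(apply_tax(100, 0), 100)

    def test_rejects_negative_amount(self):
        with self.assertRaises(ValueError):
            apply_tax(-1, 0.18)

# Run the suite programmatically so the result object can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyTaxTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Tests like these run before integration; at the integration and system levels, the same component would only be exercised indirectly through the modules that call it.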
Q9. Relate verification and validation to the quality control and quality
assurance with an example?

ANS:
Verification is a Quality control process that is used to evaluate whether or not a product,
service, or system complies with regulations, specifications, or conditions imposed at the
start of a development phase. Verification can be in development, scale-up, or production.
This is often an internal process.

Validation is Quality assurance process of establishing evidence that provides a high


degree of assurance that a product, service, or system accomplishes its intended
requirements. This often involves acceptance of fitness for purpose with end users and
other product stakeholders.

It is sometimes said that verification can be expressed by the query "Are you building the
thing right?" and validation by "Are you building the right thing?" "Building the right
thing" refers back to the user's needs, while "building it right" checks that the
specifications are correctly implemented by the system. In some contexts, it is required to
have written requirements for both, as well as formal procedures or protocols for
determining compliance. For example, inspecting a login module's code against its design
specification in a review is verification (a quality control activity), while having end
users try the finished login screen to confirm it meets their needs is validation (a
quality assurance activity).
Q10. In a code review checklist there are some items, as given below; categorize
them. Does the code follow the coding conventions of the organization?

1. Is the entire conditional path reachable?

ANS: It’s a control flow error

2. If the pointers are used, are they initialized properly?

ANS: It’s a data reference error, or it can be a memory reference error

3. Is there any part of code unreachable?

ANS: It’s a control flow error

4. Has the use of similar looking operators (e.g. &,&& or =,== in C)checked ?

ANS: It’s a comparison error
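Python has an analogue of the similar-looking-operators item (&,&& or =,== in C): the bitwise `&` is easily written where the logical `and` was meant, and the two can give different results. A minimal invented example a reviewer should flag:

```python
# Bitwise & combines the bits of the values; logical `and` tests truth.
def both_positive_buggy(a, b):
    return (a & b) > 0      # bitwise AND of the values, not a logical test

def both_positive(a, b):
    return a > 0 and b > 0  # what the author actually meant

# 2 = 0b010 and 4 = 0b100 share no bits, so 2 & 4 == 0:
assert both_positive(2, 4) is True
assert both_positive_buggy(2, 4) is False   # the review should catch this
```

Because the buggy version still passes for many inputs (e.g. 3 and 5), it is exactly the kind of comparison error that is cheaper to catch in review than in testing.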
