
Principles of Software Testing

What is not Software Testing?

Testing is not debugging. Debugging has the goal of removing errors: the existence and the approximate location of the error are already known. Debugging is not documented; there is no specification and there will be no record (log) or report. Debugging is a consequence of testing, but never a substitute for it. Testing can never find 100% of the errors contained in the software; there will always be some remaining errors that cannot be found. Each kind of test finds a different kind of error. Testing has the goal of finding errors, not their causes. Therefore the activity of testing does not include bug fixing or implementation of functionality; the result of testing is a test report. A tester must not modify the code he is testing, under any circumstances. That is done by the developer, based on the test report he receives from the tester.

What is Software Testing?

Testing is a formal activity. It involves a strategy and a systematic approach. The different stages of testing supplement each other. Tests are always specified and recorded. Testing is a planned activity: the workflow and the expected results are specified, so the duration of the activities can be estimated, and the point in time at which tests are executed is defined. Testing is the formal proof of software quality.

Overview of Test Methods


Static tests: The software is not executed but analyzed offline. This category includes code inspections (e.g. Fagan inspections), Lint checks, cross-reference checks, etc.

Dynamic tests: These require the execution of the software or parts of the software (using stubs). They can be executed in the target system, an emulator or a simulator. Within the dynamic tests, the state of the art distinguishes between structural and functional tests.

Structural tests: These are so-called "white-box tests" because they are performed with knowledge of the source-code details. Input interfaces are stimulated with the aim of running through certain predefined branches or paths in the software. The software is stressed with critical values at the boundaries of the input values or even with illegal input values. The behavior of the output interface is recorded and compared with the expected (predefined) values.

Functional tests: These are the so-called "black-box tests". The software is regarded as a unit with unknown content. Inputs are stimulated and the values at the outputs are recorded and compared with the expected and specified values.
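To make the distinction concrete, here is a minimal sketch in Python. The clamp function and both test classes are hypothetical examples invented for this illustration, not part of any real project: the first test picks its values by looking at the branches in the implementation (white-box), while the second only checks the specified input/output behavior (black-box).

```python
import unittest

def clamp(value, low, high):
    """Limit value to the inclusive range [low, high]."""
    if value < low:
        return low
    if value > high:
        return high
    return value

class WhiteBoxBoundaryTests(unittest.TestCase):
    """Structural test: values chosen from the branches visible in the code."""
    def test_boundaries(self):
        self.assertEqual(clamp(-1, 0, 10), 0)   # below lower bound -> first branch
        self.assertEqual(clamp(0, 0, 10), 0)    # exactly on the lower bound
        self.assertEqual(clamp(10, 0, 10), 10)  # exactly on the upper bound
        self.assertEqual(clamp(11, 0, 10), 10)  # above upper bound -> second branch

class BlackBoxFunctionalTests(unittest.TestCase):
    """Functional test: only the specified behavior is checked."""
    def test_specified_behaviour(self):
        self.assertEqual(clamp(5, 0, 10), 5)    # a value inside the range passes through
        self.assertEqual(clamp(99, 0, 10), 10)  # a value outside the range is limited

if __name__ == "__main__":
    unittest.main()
```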

Test by progressive Stages


The various tests find different kinds of errors. Therefore it is not enough to rely on one kind of test and completely neglect the others. For example, white-box tests can find coding errors, while detecting the same coding error in the system test is very difficult: the system malfunction that results from the coding error will not necessarily allow conclusions about its location. Tests therefore should be progressive and supplement each other in stages, in order to find each kind of error with the appropriate method.

Module test: A module is the smallest compilable unit of source code. Often it is too small to allow functional tests (black-box tests), but it is the ideal candidate for white-box tests. These have to be first of all static tests (e.g. Lint and inspections), followed by dynamic tests to check boundaries, branches and paths. This will usually require the use of stubs and special test tools (a sketch of such a stub follows after this overview of stages).

Component test: This is the black-box test of modules or groups of modules which represent certain functionality. There are no fixed rules about what can be called a component; it is whatever the tester defines to be a component, but it should make sense and be a testable unit. Components can be integrated step by step into bigger components and tested as such.

Integration test: The software is completed step by step and tested with tests covering the collaboration of modules or classes. The integration depends on the kind of system. For example, the steps could be to run the operating system first, gradually add one component after the other, and check whether the black-box tests still pass (the number of test cases will of course increase with every added component). The integration is still done in the laboratory. It may be done using simulators or emulators, and input signals may be stimulated.

System test: This is a black-box test of the complete software in the target system. The environmental conditions have to be realistic (complete original hardware in the destination environment).
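As an illustration of the stubs mentioned above, here is a minimal hypothetical Python sketch: the module under test depends on a hardware sensor that is not available in the laboratory, so a stub stands in for it. The names SensorStub, read_temperature and overheat_alarm are invented for this example.

```python
class SensorStub:
    """Stub replacing the real temperature-sensor driver during module tests."""
    def __init__(self, readings):
        self._readings = iter(readings)

    def read_temperature(self):
        # Returns pre-programmed values instead of touching real hardware.
        return next(self._readings)

def overheat_alarm(sensor, limit=85.0):
    """Module under test: raise the alarm when the sensor exceeds the limit."""
    return sensor.read_temperature() > limit

# Dynamic module test using the stub to force boundary conditions.
stub = SensorStub([84.9, 85.0, 85.1])
assert overheat_alarm(stub) is False   # just below the limit
assert overheat_alarm(stub) is False   # exactly at the limit
assert overheat_alarm(stub) is True    # just above the limit
print("module test with stub passed")
```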

Which Test finds which Error?


Possible error: Syntax errors
Can be found best by: Compiler, Lint
Example: Missing semicolons, values defined but not initialized or used, order of evaluation disregarded.

Possible error: Data errors
Can be found best by: Software inspection, module tests
Example: Overflow of variables at calculation, usage of inappropriate data types, values not initialized, values loaded with wrong data or loaded at a wrong point in time, lifetime of pointers.

Possible error: Algorithm and logical errors
Can be found best by: Software inspection, module tests
Example: Wrong program flow, use of wrong formulas and calculations.

Possible error: Interface errors
Can be found best by: Software inspection, module tests, component tests
Example: Overlapping ranges, range violation (min. and max. values not observed or limited), unexpected inputs, wrong sequence of input parameters.

Possible error: Operating system errors, architecture and design errors
Can be found best by: Design inspection, integration tests
Example: Disturbances by OS interruptions or hardware interrupts, timing problems, lifetime and duration problems.

Possible error: Integration errors
Can be found best by: Integration tests, system tests
Example: Resource problems (runtime, stack, registers, memory, etc.).

Possible error: System errors
Can be found best by: System tests
Example: Wrong system behaviour, specification errors.

/======================================/

Metrics Used In Testing


In this tutorial you will learn about the metrics used in testing: the product quality measures (1. customer satisfaction index, 2. delivered defect quantities, 3. responsiveness (turnaround time) to users, 4. product volatility, 5. defect ratios, 6. defect removal efficiency, 7. complexity of the delivered product, 8. test coverage, 9. cost of defects, 10. costs of quality activities, 11. re-work, 12. reliability) and metrics for evaluating application system testing.


The Product Quality Measures:


1. Customer satisfaction index

This index is surveyed before and after product delivery (and on an ongoing, periodic basis, using standard questionnaires). The following are analyzed:

Number of system enhancement requests per year
Number of maintenance fix requests per year
User friendliness: call volume to customer service hotline
User friendliness: training time per new user
Number of product recalls or fix releases (software vendors)
Number of production re-runs (in-house information systems groups)

2. Delivered defect quantities

They are normalized per function point (or per LOC) at product delivery (first 3 months or first year of operation) or Ongoing (per year of operation) by level of severity, by category or cause, e.g.: requirements defect, design defect, code defect, documentation/on-line help defect, defect introduced by fixes, etc.
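As a quick numeric sketch of this kind of normalization (all figures below are invented for illustration):

```python
# Hypothetical post-delivery defect counts, normalized per KLOC and per function point.
defects_first_year = 42        # defects reported in the first year of operation
size_loc = 60_000              # delivered size in lines of code
size_function_points = 400     # delivered size in function points

defects_per_kloc = defects_first_year / (size_loc / 1000)
defects_per_fp = defects_first_year / size_function_points

print(f"Delivered defects per KLOC: {defects_per_kloc:.2f}")           # 0.70
print(f"Delivered defects per function point: {defects_per_fp:.3f}")   # 0.105
```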

3. Responsiveness (turnaround time) to users

Turnaround time for defect fixes, by level of severity
Time for minor vs. major enhancements; actual vs. planned elapsed time

4. Product volatility

Ratio of maintenance fixes (to repair the system & bring it into compliance with specifications), vs. enhancement requests (requests by users to enhance or change functionality)

5. Defect ratios

Defects found after product delivery per function point
Defects found after product delivery per LOC
Pre-delivery defects : annual post-delivery defects
Defects per function point of the system modifications

6. Defect removal efficiency

Number of post-release defects (found by clients in field operation), categorized by level of severity

Ratio of defects found internally prior to release (via inspections and testing), as a percentage of all defects

All defects include defects found internally plus externally (by customers) in the first year after product delivery
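A minimal sketch of the calculation (the counts below are invented for illustration): defect removal efficiency is usually expressed as the share of all defects that were found internally before release.

```python
# Hypothetical defect counts for one release.
found_internally = 180    # found via inspections and testing before release
found_by_customers = 20   # found in the field during the first year

all_defects = found_internally + found_by_customers
defect_removal_efficiency = found_internally / all_defects * 100

print(f"Defect removal efficiency: {defect_removal_efficiency:.1f}%")  # 90.0%
```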

7. Complexity of delivered product

McCabe's cyclomatic complexity counts across the system
Halstead's measures
Card's design complexity measures
Predicted defects and maintenance costs, based on complexity measures
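For reference, McCabe's cyclomatic complexity can be computed from a control-flow graph as E - N + 2P (edges, nodes, connected components). The graph sizes in the sketch below are made up purely to show the arithmetic.

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's metric: M = E - N + 2P for a control-flow graph."""
    return edges - nodes + 2 * components

# A hypothetical graph: 7 edges, 6 nodes, 1 connected component
# -> M = 3, i.e. three linearly independent paths through the code.
print(cyclomatic_complexity(edges=7, nodes=6))  # 3
```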

8. Test coverage

Breadth of functional coverage
Percentage of paths, branches or conditions that were actually tested
Percentage by criticality level: perceived level of risk of paths
The ratio of the number of detected faults to the number of predicted faults

9. Cost of defects

Business losses per defect that occurs during operation
Business interruption costs; costs of work-arounds
Lost sales and lost goodwill
Litigation costs resulting from defects
Annual maintenance cost (per function point)
Annual operating cost (per function point)
Measurable damage to your boss's career

10. Costs of quality activities

Costs of reviews, inspections and preventive measures
Costs of test planning and preparation
Costs of test execution, defect tracking, version and change control
Costs of diagnostics, debugging and fixing
Costs of tools and tool support
Costs of test case library maintenance
Costs of testing & QA education associated with the product
Costs of monitoring and oversight by the QA organization (if separate from the development and test organizations)

11. Re-work

Re-work effort (hours, as a percentage of the original coding hours)
Re-worked LOC (source lines of code, as a percentage of the total delivered LOC)
Re-worked software components (as a percentage of the total delivered components)

12. Reliability

Availability (percentage of time a system is available, versus the time the system is needed to be available)
Mean time between failures (MTBF)
Mean time to repair (MTTR)
Reliability ratio (MTBF / MTTR)
Number of product recalls or fix releases
Number of production re-runs as a ratio of production runs
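A small sketch of how these figures relate (the numbers are invented for illustration); availability is conventionally computed as MTBF / (MTBF + MTTR), alongside the MTBF/MTTR ratio listed above.

```python
# Hypothetical field data for one year of operation.
mtbf_hours = 800.0   # mean time between failures
mttr_hours = 4.0     # mean time to repair

availability = mtbf_hours / (mtbf_hours + mttr_hours)  # fraction of time available
reliability_ratio = mtbf_hours / mttr_hours             # MTBF / MTTR as listed above

print(f"Availability: {availability:.4f}")              # 0.9950
print(f"Reliability ratio: {reliability_ratio:.0f}")    # 200
```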

Metrics for Evaluating Application System Testing:


Metric = Formula

Test Coverage = Number of units (KLOC/FP) tested / total size of the system (KLOC is thousands of lines of code; FP is function points)

Number of tests per unit size = Number of test cases per KLOC/FP

Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria

Defects per size = Defects detected / system size

Test cost (in %) = Cost of testing / total cost *100

Cost to locate defect = Cost of testing / the number of defects located

Achieving Budget = Actual cost of testing / Budgeted cost of testing

Defects detected in testing = Defects detected in testing / total system defects

Defects detected in production = Defects detected in production/system size

Quality of Testing = No of defects found during Testing/(No of defects found during testing + No of acceptance defects found after delivery) *100

Effectiveness of testing to business = Loss due to problems / total resources processed by the system.

System complaints = Number of third party complaints / number of transactions processed

Scale of Ten = Assessment of testing by giving a rating on a scale of 1 to 10

Source Code Analysis = Number of source code statements changed / total number of tests.

Effort Productivity:

Test Planning Productivity = No. of test cases designed / Actual effort for design and documentation

Test Execution Productivity = No. of test cycles executed / Actual effort for testing
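The sketch below strings a few of these formulas together on made-up project numbers, purely to show the arithmetic; none of the figures are real.

```python
# Hypothetical project figures.
kloc_tested = 45.0         # code actually covered by tests, in KLOC
kloc_total = 50.0          # total system size, in KLOC
test_cases = 900
testing_cost = 120_000.0   # currency units spent on testing
total_cost = 600_000.0     # total project cost
defects_in_testing = 270
acceptance_defects = 30    # defects found by the customer after delivery

test_coverage = kloc_tested / kloc_total                    # 0.90
tests_per_kloc = test_cases / kloc_total                    # 18.0
test_cost_percent = testing_cost / total_cost * 100         # 20.0 %
cost_to_locate_defect = testing_cost / defects_in_testing   # ~444 per defect
quality_of_testing = defects_in_testing / (defects_in_testing + acceptance_defects) * 100  # 90.0 %

print(test_coverage, tests_per_kloc, test_cost_percent,
      round(cost_to_locate_defect, 2), quality_of_testing)
```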

/==============S/W Design Validation/Verifications=============/

These are the FDA's definitions of design verification and design validation (the FDA being an agency notorious for its strict definitions): Design verification means confirmation by examination and provision of objective evidence that specified requirements have been fulfilled. Design validation means establishing by objective evidence that device (product) specifications conform with user needs and intended use(s). In other words:

You verify a design by checking drawings and specs, running simulations, checking that all design requirements have been addressed, that calculations are correct, etc. It is mainly a "paper" exercise, and when you are through with it you should be quite confident that the design is complete and that a product eventually built according to those drawings and specs stands a good chance of conforming to the requirements in the real world.

You validate a design by trying out actual products (an initial run or batch), built as above by real workers, installed and operated in the real environment of use by real operators, etc. It is the proverbial proof of the pudding, where you can catch errors and other problems that escaped all earlier verification efforts, as well as other bugs that might have crept in later on.

/=========================/
After reading many of the responses to the verification/validation problem, I keep wondering why, after all these years, this is still debated. I am also surprised at how unhelpful most of the examples posted on this list appear to be, because they often do not accurately differentiate between the two concepts. I hope the description below will help.

Verification means examining all of the things you have anticipated the customer will want the product to do, and checking to see if the product does those things the way you have anticipated. Verification is about checking the things you have intentionally designed into the product and making sure the product meets those design requirements. It is about whether you were smart enough to _meet_ your specified design goals.

Validation means checking to see if the product does what the customer (or user) actually wants the product to do under real-world conditions, or as-close-to-real-world conditions as you can possibly simulate. Validation encompasses all of the things that can be verified, as well as all of the things that cannot -- i.e., all of the things that the product designers might never have anticipated the customer might want or expect the product to do. It is about whether you were smart enough to specify the _right_ design goals.

Of course, in the ideal world, product designers would correctly anticipate all of the things the customer would ever want the product to do, they would specify all the right design goals, and they would meet them. As a result, the validation would reveal nothing beyond what the verification covered. In the real world, product designers sometimes don't correctly anticipate all of the things the customer will want the product to do, and they discover that the product exhibits certain unanticipated or undesirable characteristics, from the user's perspective. Thus, the difference between verification and validation is that verification checks the adequacy of the design with respect to _identified_ design goals, whereas validation checks the adequacy of the design with respect to both identified and _unidentified_ customer expectations.

If you want an example of the difference between verification and validation, consider the difference between the foot-operated controls and the hand-operated controls on most cars. If you rent a car, even a model you have never driven before, you always know which foot pedal makes the car go and which foot pedal makes the car stop. In fact, virtually any driver can easily get into any rental car -- at night or in the rain -- and operate the foot pedals to make the car go and stop. This is an example of a design that passes both verification and validation, because the pedals work as the designers intended (verification) and they also work as the user expects, even under foreseeable adverse use conditions (validation) such as darkness or rain.

Now consider the hand-operated controls. I don't know about you, but there have been times when I have rented a car and simply not known how to operate some of the critical controls. For example, the stalk to the left of the steering wheel might operate the headlights, it might operate the windshield wipers, it might operate both, or it might operate neither. If the controls differ from the car I own and drive daily, I might reach for the stalk in an effort to turn on the windshield wipers, and instead turn on the headlights. This is an example of a design that passes verification, because the controls perform their stated function; but it fails validation, because the controls do not work as the user expects under foreseeable conditions of use. Indeed, in this example, a user's inability to correctly operate the vehicle controls could, at a critical moment, cause the user to be unable to see oncoming hazards on the road, possibly with fatal consequences.

This illustrates one more thing about verification and validation: instead of "intended use," which many people think is relevant to a design validation, I find it far more helpful to think in terms of "reasonably foreseeable use," "reasonably foreseeable user" and "reasonably foreseeable use environment." The important distinction is that what is "intended" is never, ever enough: one must also consider things that are "unintended." The phrase "reasonably foreseeable" includes both, and is, in fact, the standard that U.S. tort law applies in cases of product liability. It should therefore be the minimum standard used by product designers.

Another very simple example is the common screwdriver. Intended use: turn screws. Intended use environment: home and shop. Intended user: Joe Sixpack. Simple, right? Now consider what is "reasonably foreseeable." Reasonably foreseeable uses might include: turn screws, pry open paint cans, chisel holes in wood, remove dried spackling compound, chisel off rusted bolts.... Reasonably foreseeable use environments: home and shop, chemistry lab, salt-water pier, high-tension electrical tower, off-shore oil rig.... Reasonably foreseeable users: Joe Sixpack, his neighbor Martha with arthritis in her hands, her 10-year-old grandson Johnny with small hands, his sister Sue who is left-handed, their mother Jan who works in that chemical lab with all sorts of corrosive materials, her husband Jim who works on those high-tension electrical towers wearing heavily insulated gloves, Jim's brother Dave who works on that off-shore oil rig and whose hands are always covered with oil.... Get the idea? If you only think in terms of what is "intended," you might not think hard enough about all the ways your product might be used.
If so, your product might actually be hazardous to use or, at the very least, you might miss an important market segment that your competitor will be very happy to fill at your expense -- and, by doing so, reduce your market share, eat your lunch, and take away the income you were planning to save for your kid's college education or your early retirement. And that, as I see it, is the important difference between verification and validation.

/============================/

Software Testing Life Cycle Models


The various activities which are undertaken when developing software are commonly modeled as a software development lifecycle. The software development lifecycle begins with the identification of a requirement for software and ends with the formal verification of the developed software against that requirement. The software development lifecycle does not exist by itself; it is in fact part of an overall product lifecycle. Within the product lifecycle, software will undergo maintenance to correct errors and to comply with changes to requirements. The simplest overall form is where the product is just software, but it can become much more complicated, with multiple software developments each forming part of an overall system to comprise a product. There are a number of different models for software development lifecycles. One thing which all models have in common is that at some point in the lifecycle, software has to be tested. This paper outlines some of the more commonly used software development lifecycles, with particular emphasis on the testing activities in each model.

A software life cycle model depicts the significant phases or activities of a software project from conception until the product is retired. It specifies the relationship between project phases, including transition criteria, feedback mechanisms, milestones, baselines, reviews, and deliverables. Typically, a life cycle model addresses the following phases of a software project: requirements phase, design phase, implementation, integration, testing, operations and maintenance. Much of the motivation behind utilizing a life cycle model is to provide structure. Life cycle models describe the interrelationship between software development phases. The common life cycle models are:

V-model of SDLC

V & V process model: The V&V model is the Verification & Validation model. In this model, development and testing are carried out simultaneously. One 'V' stands for verification and the other for validation: along the first 'V' we follow the SDLC (Software Development Life Cycle), and along the second 'V' we follow the STLC (Software Testing Life Cycle).

Testing of a large system is normally done in two parts: functional verification and validation against the requirement specification, and performance evaluation against the indicated requirements. Testing activity is involved right from the beginning of the project. Use of the V&V process model increases the rate of success in project development, helps the company deliver the application on time, and increases cost effectiveness.

Testing Related Activities During Requirement Phase


Creation and finalization of the testing template
Creation of the test plan and test strategy
Capturing acceptance criteria and preparation of the acceptance test plan
Capturing performance criteria of the software requirements

Testing activities in Design Phase


Develop test cases to ensure that the product is on par with the Requirement Specification document
Verify test cases and test scripts by peer reviews
Preparation of a traceability matrix from the system requirements (see the sketch after this list)
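A traceability matrix simply maps each requirement to the test cases that cover it, so gaps are visible at a glance. The requirement and test-case identifiers in this small Python sketch are invented for illustration.

```python
# Hypothetical traceability matrix: requirement ID -> covering test case IDs.
traceability = {
    "REQ-001": ["TC-010", "TC-011"],
    "REQ-002": ["TC-020"],
    "REQ-003": [],            # no test case yet -> a coverage gap
}

for requirement, test_cases in traceability.items():
    status = ", ".join(test_cases) if test_cases else "NOT COVERED"
    print(f"{requirement}: {status}")
```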

Testing activities in Unit Testing Phase


Unit testing is done to validate the product with respect to the client requirements; testing can be done in multiple rounds
Defects found during testing should be logged into the defect tracking system for resolution and tracking
Test logs and defects are captured and maintained
Review of all test documents

Testing activities in Integration Testing Phase


This testing is done in parallel with the integration of the various applications or components
Testing the product with its external and internal interfaces, without using drivers and stubs
Incremental approach while integrating the interfaces

Performance testing

This is done to validate the performance criteria of the product/application; it is a non-functional test.

Business Cycle testing

This refers to end-to-end testing of real-life business scenarios.

Testing activities during Release phase


Acceptance testing is conducted at the customer location
Resolve all defects reported by the customer during acceptance testing
Conduct Root Cause Analysis (RCA) for the defects reported by the customer during acceptance testing

Waterfall Model
The waterfall model is an engineering model designed to be applied to the development of software. The idea is the following: there are different stages to the development, the outputs of the first stage "flow" into the second stage, these outputs "flow" into the third stage, and so on. There are usually five stages in this model of software development.

Stages of the Waterfall Model


Requirement analysis and planning: In this stage the requirements of the software to be developed are established. These are usually the services it will provide, its constraints and the goals of the software. Once these are established, they have to be defined in such a way that they are usable in the next stage. This stage is often preceded by a feasibility study, or a feasibility study is included in the stage itself. The feasibility study asks questions like: should we develop the software at all, and what are the alternatives? It could be called the conception of a software project and might be seen as the very beginning of the life cycle.

Prototype Model

Spiral Model

Iteration Model

/================================/

Coupling (computer programming)


From Wikipedia, the free encyclopedia

In computer science, coupling or dependency is the degree to which each program module relies on each one of the other modules. Coupling is usually contrasted with cohesion. Low coupling often correlates with high cohesion, and vice versa. The software quality metrics of coupling and cohesion were invented by Larry Constantine, an original developer of Structured Design[1] who was also an early proponent of these concepts (see also SSADM). Low coupling is often a sign of a well-structured computer system and a good design, and when combined with high cohesion, supports the general goals of high readability and maintainability.


Types of coupling

(Figure: conceptual model of coupling)

Coupling can be "low" (also "loose" and "weak") or "high" (also "tight" and "strong"). Some types of coupling, in order of highest to lowest coupling, are as follows:
Content coupling (high): Content coupling is when one module modifies or relies on the internal workings of another module (e.g., accessing local data of another module). Therefore changing the way the second module produces data (location, type, timing) will lead to changing the dependent module.

Common coupling: Common coupling is when two modules share the same global data (e.g., a global variable). Changing the shared resource implies changing all the modules using it.

External coupling: External coupling occurs when two modules share an externally imposed data format, communication protocol, or device interface.

Control coupling: Control coupling is one module controlling the flow of another, by passing it information on what to do (e.g., passing a what-to-do flag).

Stamp coupling (data-structured coupling): Stamp coupling is when modules share a composite data structure and use only a part of it, possibly a different part (e.g., passing a whole record to a function that only needs one field of it). This may lead to changing the way a module reads a record because a field that the module doesn't need has been modified.

Data coupling: Data coupling is when modules share data through, for example, parameters. Each datum is an elementary piece, and these are the only data shared (e.g., passing an integer to a function that computes a square root).

Message coupling (low): This is the loosest type of coupling. It can be achieved by state decentralization (as in objects), and component communication is done via parameters or message passing (see Message passing).

No coupling: Modules do not communicate at all with one another.

Object-oriented programming:

Subclass coupling: Describes the relationship between a child and its parent. The child is connected to its parent, but the parent isn't connected to the child.

Temporal coupling: When two actions are bundled together into one module just because they happen to occur at the same time.
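To ground a few of these categories, here is a small hypothetical Python sketch contrasting common (global) coupling, control coupling, and data coupling; the function and variable names are invented for this example.

```python
# Common coupling: both functions depend on the same global variable.
discount_rate = 0.1

def price_with_discount(price):
    return price * (1 - discount_rate)               # any change to the global affects this

def report_discount():
    return f"current discount: {discount_rate:.0%}"  # ...and this

# Control coupling: the caller passes a flag telling the callee what to do.
def format_amount(amount, as_html):
    if as_html:
        return f"<b>{amount:.2f}</b>"
    return f"{amount:.2f}"

# Data coupling (preferred): only elementary data is passed through parameters.
def square_root(x):
    return x ** 0.5

print(price_with_discount(100.0))          # 90.0
print(format_amount(90.0, as_html=False))  # "90.00"
print(square_root(2.0))                    # 1.4142...
```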

Disadvantages
Tightly coupled systems tend to exhibit the following developmental characteristics, which are often seen as disadvantages:
1. A change in one module usually forces a ripple effect of changes in other modules.
2. Assembly of modules might require more effort and/or time due to the increased inter-module dependency.
3. A particular module might be harder to reuse and/or test because dependent modules must be included.

Performance issues

Whether loosely or tightly coupled, a system's performance is often reduced by message and parameter creation, transmission, translation and interpretation overhead. See event-driven programming.
Message Creation Overhead and Performance: Since all messages and parameters must possess particular meanings to be consumed (i.e., result in intended logical flow within the receiver), they must be created with a particular meaning. Creating any sort of message requires overhead in either CPU or memory usage. Creating a single integer value message (which might be a reference to a string, array or data structure) requires less overhead than creating a complicated message such as a SOAP message. Longer messages require more CPU and memory to produce. To optimize runtime performance, message length must be minimized and message meaning must be maximized.

Message Transmission Overhead and Performance: Since a message must be transmitted in full to retain its complete meaning, message transmission must be optimized. Longer messages require more CPU and memory to transmit and receive. Also, when necessary, receivers must reassemble a message into its original state to completely receive it. Hence, to optimize runtime performance, message length must be minimized and message meaning must be maximized.

Message Translation Overhead and Performance: Message protocols and messages themselves often contain extra information (i.e., packet, structure, definition and language information). Hence, the receiver often needs to translate a message into a more refined form by removing extra characters and structure information and/or by converting values from one type to another. Any sort of translation increases CPU and/or memory overhead. To optimize runtime performance, message form and content must be reduced and refined to maximize its meaning and reduce translation.

Message Interpretation Overhead and Performance: All messages must be interpreted by the receiver. Simple messages such as integers might not require additional processing to be interpreted. However, complex messages such as SOAP messages require a parser and a string transformer for them to exhibit intended meanings. To optimize runtime performance, messages must be refined and reduced to minimize interpretation overhead.

Solutions
One approach to decreasing coupling is functional design, which seeks to limit the responsibilities of modules along functionality. Coupling increases between two classes A and B if (each case is illustrated in the sketch after this list):

A has an attribute that refers to (is of type) B
A calls on services of an object B
A has a method that references B (via return type or parameter)
A is a subclass of (or implements) class B
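A minimal hypothetical Python sketch of those four situations (the class and method names are invented for illustration):

```python
class B:
    def service(self):
        return "result from B"

class A:
    def __init__(self):
        self.helper = B()              # attribute of type B

    def use_service(self):
        return self.helper.service()   # calls on services of a B object

    def make_b(self) -> B:             # method references B via its return type
        return B()

class SpecialA(B):                     # subclass of B
    pass

print(A().use_service())  # "result from B"
```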

Low coupling refers to a relationship in which one module interacts with another module through a simple and stable interface and does not need to be concerned with the other module's internal implementation (see Information Hiding). Systems such as CORBA or COM allow objects to communicate with each other without having to know anything about the other object's implementation. Both of these systems even allow for objects to communicate with objects written in other languages.

Coupling versus Cohesion


Coupling and cohesion are two terms which very frequently occur together. Together they describe the qualities a module should have. Coupling refers to the interdependencies between modules, while cohesion describes how related the functions within a single module are. Low cohesion implies that a module performs tasks which are not very related to each other, and hence can create problems as the module becomes large.

Module coupling
Coupling in Software Engineering[2] describes a version of metrics associated with this concept. For data and control flow coupling:

di: number of input data parameters
ci: number of input control parameters
do: number of output data parameters
co: number of output control parameters

For global coupling:


gd: number of global variables used as data
gc: number of global variables used as control

For environmental coupling:


w: number of modules called (fan-out)
r: number of modules calling the module under consideration (fan-in)

The module coupling metric is commonly given as:

Coupling(C) = 1 - 1 / (di + 2*ci + do + 2*co + gd + 2*gc + w + r)

Coupling(C) becomes larger the more coupled the module is. This number ranges from approximately 0.67 (low coupling) to 1.0 (highly coupled). For example, a module with only a single input and a single output data parameter sits at the low end of the range, whereas a module with 5 input and output data parameters, an equal number of control parameters, access to 10 items of global data, a fan-in of 3 and a fan-out of 4 has a coupling value close to 1.0 (the sketch below works through this second case).
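A small Python sketch of the computation, assuming the formula as stated above; the inputs are those of the second example.

```python
def module_coupling(di, ci, do, co, gd, gc, w, r):
    """Coupling(C) = 1 - 1/(di + 2*ci + do + 2*co + gd + 2*gc + w + r)."""
    return 1 - 1 / (di + 2 * ci + do + 2 * co + gd + 2 * gc + w + r)

# Second example from the text: 5 input and 5 output data parameters,
# 5 input and 5 output control parameters, 10 global data items,
# fan-out of 4 and fan-in of 3.
c = module_coupling(di=5, ci=5, do=5, co=5, gd=10, gc=0, w=4, r=3)
print(f"Coupling(C) = {c:.3f}")   # about 0.979 -> highly coupled
```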
