

1. Give the meaning of the design quality attribute FURPS.

FURPS stands for Functionality, Usability, Reliability, Performance, and Supportability.

2. List the different types of Cohesion.

Functional, layer, communicational, sequential, procedural, temporal, and utility cohesion (ranked from highest to lowest).

3. Define modularity. [CO3, L1]
Modularity is the most common manifestation of separation of concerns. Software
is divided into separately named and addressable components, sometimes called
modules, that are integrated to satisfy problem requirements.

4. What are the various models produced by the software design process? [CO3, L1]
Data Design Elements
Architectural Design Elements
Interface Design Elements
Component-Level Design Elements
Deployment-Level Design Elements
5. What is system design? [CO3, L1]
System design is the process of translating customer requirements into a software
product layout. In this process, a system design model is created.
6. Distinguish verification and validation. [CO4, L2]
Verification: refers to the set of activities that ensure software correctly implements a
specific function. It asks: "Are we building the product right?"
Validation: refers to the set of activities that ensure that the software that has been built
is traceable to customer requirements. It asks: "Are we building the right product?"

7. What is smoke testing? [CO4, L1]

Smoke testing is an integration testing approach that is commonly used when product
software is developed. It is designed as a pacing mechanism for time-critical projects, allowing the
software team to assess the project on a frequent basis.

8. How do you measure cyclomatic complexity?

Can be computed three ways:
The number of regions of the flow graph
V(G) = E − N + 2, where E is the number of edges and N is the number of nodes in
graph G
V(G) = P + 1, where P is the number of predicate nodes in the flow graph G

9. Distinguish between defects and errors.

Errors: An error is a state that can lead to system behavior that is unexpected by the
system user. The software team performs formal technical reviews to test the software
developed; in these reviews errors are identified and corrected. Error removal is a
software development activity.
Defects: Any errors that remain uncovered and are found in later tasks are called
defects. Defect removal is a software quality assurance activity.

10. What is a Big-Bang approach?

Big-Bang approach is used in non-incremental integration testing. In this approach of integration
testing, all the components are combined in advance and then the entire program is tested as a whole.

11. Define Cyclomatic complexity

Cyclomatic complexity is a software metric that gives a quantitative measure of the
logical complexity of a program. In the basis path testing method, the cyclomatic complexity
defines the number of independent paths in the program structure.
12. Write a short note on equivalence partitioning
In equivalence partitioning, equivalence classes are evaluated for a given input condition.
An equivalence class represents a set of valid or invalid states for input conditions.
It is a black box technique that divides the input domain into classes of data. From this data test cases
can be derived.


1. Place the user in control.

Define interaction modes in a way that does not force a user into unnecessary or
undesired actions.
Provide for flexible interaction.
Allow user interaction to be interruptible and undoable.
Streamline interaction as skill levels advance and allow the interaction to be customized.
Hide technical internals from the casual user.
Design for direct interaction with objects that appear on the screen.
2. Reduce the user's memory load.
Reduce demand on short-term memory.
Establish meaningful defaults.
Define shortcuts that are intuitive.
The visual layout of the interface should be based on a real world metaphor.
Disclose information in a progressive fashion.
3. Make the interface consistent.
Allow the user to put the current task into a meaningful context.
Maintain consistency across a family of applications.
If past interactive models have created user expectations, do not make changes unless
there is a compelling reason to do so.

Abstraction
Procedural abstraction: a sequence of instructions that has a specific and limited function
Data abstraction: a named collection of data that describes a data object
Architecture
The overall structure of the software and the ways in which the structure provides
conceptual integrity for a system
Consists of components, connectors, and the relationships between them
Design pattern
A design structure that solves a particular design problem within a specific context
It provides a description that enables a designer to determine whether the pattern is
applicable, whether the pattern can be reused, and whether the pattern can serve as a
guide for developing similar patterns
Modularity
Separately named and addressable components (i.e., modules) that are integrated to
satisfy requirements (divide-and-conquer principle)
Makes software intellectually manageable so as to grasp the control paths, span of
reference, number of variables, and overall complexity
Information hiding
The designing of modules so that the algorithms and local data contained within them
are inaccessible to other modules
This enforces access constraints to both procedural (i.e., implementation) detail and
local data structures
Functional independence
Modules that have a "single-minded" function and an aversion to excessive interaction
with other modules
High cohesion: a module performs only a single task
Low coupling: a module has the lowest amount of connection needed with other
modules
Stepwise refinement
Development of a program by successively refining levels of procedure detail
Complements abstraction, which enables a designer to specify procedure and data and
yet suppress low-level details
Refactoring
A reorganization technique that simplifies the design (or internal code structure) of a
component without changing its function or external behavior
Removes redundancy, unused design elements, inefficient or unnecessary algorithms,
poorly constructed or inappropriate data structures, or any other design failures
Design classes
Refines the analysis classes by providing design detail that will enable the classes to
be implemented
Creates a new set of design classes that implement a software infrastructure to support
the business solution.


Cohesion is the single-mindedness of a component
It implies that a component or class encapsulates only attributes and operations that are
closely related to one another and to the class or component itself
The objective is to keep cohesion as high as possible
The kinds of cohesion can be ranked in order from highest (best) to lowest (worst):
Functional: a module performs one and only one computation and then returns a result
Layer: a higher layer component accesses the services of a lower layer component
Communicational: all operations that access the same data are defined within one class
Sequential: components or operations are grouped in a manner that allows the first to
provide input to the next and so on in order to implement a sequence of operations
Procedural: components or operations are grouped in a manner that allows one to be
invoked immediately after the preceding one was invoked, even when no data
passes between them
Temporal: operations are grouped to perform a specific behavior or establish a certain
state, such as program start-up or when an error is detected
Utility: components, classes, or operations are grouped within the same category
because of similar general functions but are otherwise unrelated to each other
As the amount of communication and collaboration increases between operations and classes,
the complexity of the computer-based system also increases
As complexity rises, the difficulty of implementing, testing, and maintaining software also increases
Coupling is a qualitative measure of the degree to which operations and classes are connected
to one another
The objective is to keep coupling as low as possible
The kinds of coupling can be ranked in order from lowest (best) to highest (worst)
Data coupling
Operation A() passes one or more atomic data operands to operation B(); the
smaller the number of operands, the lower the level of coupling
Stamp coupling
A whole data structure or class instantiation is passed as a parameter to an operation
Control coupling
Operation A() invokes operation B() and passes a control flag to B that directs
logical flow within B()
Consequently, a change in B() can require a change to be made to the meaning
of the control flag passed by A(), otherwise an error may result
Common coupling
A number of components all make use of a global variable, which can lead to
uncontrolled error propagation and unforeseen side effects
Content coupling
One component secretly modifies data that is stored internally in another component
Other kinds of coupling (unranked)
Subroutine call coupling
When one operation is invoked, it invokes another operation from within it
Type use coupling
Component A uses a data type defined in component B, such as for an instance
variable or a local variable declaration
If/when the type definition changes, every component that declares a variable
of that data type must also change
Inclusion or import coupling
Component A imports or includes the contents of component B
External coupling
A component communicates or collaborates with infrastructure components
that are entities external to the software (e.g., operating system functions,
database functions, networking functions)
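The ranked categories above can be made concrete with a small sketch (illustrative code, not from the source; the Rect type and function names are invented):

```c
#include <assert.h>

/* Data coupling: only atomic operands cross the interface. */
int area(int w, int h) { return w * h; }

/* Stamp coupling: a whole structure is passed as a parameter,
 * even though only part of it is used. */
struct Rect { int w; int h; char label[16]; };
int area_of(struct Rect r) { return r.w * r.h; }

/* Control coupling: a flag from the caller directs the logical
 * flow inside the callee. */
int extent(struct Rect r, int want_perimeter)
{
    return want_perimeter ? 2 * (r.w + r.h) : r.w * r.h;
}
```

Changing the meaning of the want_perimeter flag inside extent() would force a change in every caller, which is why control coupling ranks worse than data coupling.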


Data-Centered Architecture

Data Flow Architecture or Pipe and Filter Architecture

Call-and-Return Style
Layered Architecture


Basis Path Testing

White-box testing technique proposed by Tom McCabe
Enables the test case designer to derive a logical complexity measure of a procedural design
Uses this measure as a guide for defining a basis set of execution paths
Test cases derived to exercise the basis set are guaranteed to execute every statement in the
program at least one time during testing.

Cyclomatic complexity

Provides a quantitative measure of the logical complexity of a program

Defines the number of independent paths in the basis set
Provides an upper bound for the number of tests that must be conducted to ensure all
statements have been executed at least once
Can be computed three ways
The number of regions
V(G) = E − N + 2, where E is the number of edges and N is the number of nodes in
graph G
V(G) = P + 1, where P is the number of predicate nodes in the flow graph G
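As a small illustration (a hypothetical function, not from the source), consider a routine with two predicate nodes, giving V(G) = P + 1 = 3 independent paths:

```c
#include <assert.h>

/* Sum of the positive elements of a[0..n-1].
 * Predicate nodes: the `for` condition and the `if`, so
 * V(G) = 2 + 1 = 3. The three basis paths are: skip the loop
 * entirely (n == 0), take the loop with a[i] <= 0, and take
 * the loop with a[i] > 0. */
int sum_positive(const int *a, int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++)
        if (a[i] > 0)
            sum += a[i];
    return sum;
}
```

Test cases that exercise those three basis paths execute every statement of the function at least once.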

A white-box testing technique that focuses exclusively on the validity of loop constructs
Four different classes of loops exist
Simple loops
Nested loops
Concatenated loops
Unstructured loops
Testing occurs by varying the loop boundary values

while (currentTemp >= MINIMUM_TEMPERATURE)

Testing of Simple Loops

1) Skip the loop entirely

2) Only one pass through the loop
3) Two passes through the loop
4) m passes through the loop, where m < n
5) n − 1, n, n + 1 passes through the loop

n is the maximum number of allowable passes through the loop
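A sketch of these five test classes against a hypothetical capped loop (the function name and the cap are invented for illustration):

```c
#include <assert.h>

#define MAX_PASSES 10  /* n: maximum allowable passes through the loop */

/* Loop under test: iterates at most `requested` times, capped at
 * MAX_PASSES, and returns the number of passes actually made. */
int run_loop(int requested)
{
    int count = 0;
    while (count < requested && count < MAX_PASSES)
        count++;
    return count;
}
```

The simple-loop test set would then be 0 passes (skip the loop), 1, 2, some m < n (say 5), and n − 1, n, n + 1 (9, 10, 11).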

Testing of Nested Loops

1) Start at the innermost loop; set all other loops to minimum values
2) Conduct simple loop tests for the innermost loop while holding the outer loops at their
minimum iteration parameter values; add other tests for out-of-range or excluded values
3) Work outward, conducting tests for the next loop, but keeping all other outer loops at
minimum values and other nested loops to typical values
4) Continue until all loops have been tested
Testing of Concatenated Loops
For independent loops, use the same approach as for simple loops
Otherwise, use the approach applied for nested loops
Testing of Unstructured Loops
Redesign the code to reflect the use of structured programming practices
Depending on the resultant design, apply testing for simple loops, nested loops, or
concatenated loops

6. Unit testing process

Unit testing focuses verification effort on the smallest unit of software design: the software
component or module. Using the component-level design description as a guide, important control
paths are tested to uncover errors within the boundary of the module. The relative complexity of tests
and the errors those tests uncover is limited by the constrained scope established for unit testing. The
unit test focuses on the internal processing logic and data structures within the boundaries of a
component. This type of testing can be conducted in parallel for multiple components.
Unit-test considerations. Unit tests are illustrated schematically in Figure 17.3.
The module interface is tested to ensure that information properly flows into and out
of the program unit under test. Local data structures are examined to ensure that
data stored temporarily maintains its integrity during all steps in an algorithm's
execution. All independent paths through the control structure are exercised to ensure
that all statements in a module have been executed at least once. Boundary conditions
are tested to ensure that the module operates properly at boundaries established to
limit or restrict processing. And finally, all error-handling paths are tested.
Data flow across a component interface is tested before any other testing is initiated. If data do not
enter and exit properly, all other tests are moot. In addition, local
data structures should be exercised and the local impact on global data should be ascertained (if
possible) during unit testing.
Selective testing of execution paths is an essential task during the unit test. Test
cases should be designed to uncover errors due to erroneous computations, incorrect comparisons, or
improper control flow.
Boundary testing is one of the most important unit testing tasks. Software often
fails at its boundaries. That is, errors often occur when the nth element of an
n-dimensional array is processed, when the ith repetition of a loop with i passes is
invoked, when the maximum or minimum allowable value is encountered. Test
cases that exercise data structure, control flow, and data values just below, at, and
just above maxima and minima are very likely to uncover errors. A good design anticipates error
conditions and establishes error-handling paths to reroute or cleanly terminate processing when an
error does occur.

Unit-test procedures. Unit testing is normally considered as an adjunct to the

coding step. The design of unit tests can occur before coding begins or after source
code has been generated. A review of design information provides guidance for
establishing test cases that are likely to uncover errors in each of the categories discussed earlier.
Each test case should be coupled with a set of expected results. Because a component is not a stand-
alone program, driver and/or stub software must often be developed for each unit test. The unit test
environment is illustrated in Figure. In most applications a driver is nothing more than a main
program that accepts test case data, passes such data to the component (to be tested), and prints
relevant results. Stubs serve to replace modules that are subordinate to (invoked by) the
component to be tested. A stub or "dummy subprogram" uses the subordinate module's interface,
may do minimal data manipulation, prints verification of entry, and
returns control to the module undergoing testing.
Drivers and stubs represent testing overhead. That is, both are software that
must be written (formal design is not commonly applied) but that is not delivered with
the final software product. If drivers and stubs are kept simple, actual overhead is relatively low.
Unfortunately, many components cannot be adequately unit tested with
simple overhead software. In such cases, complete testing can be postponed until
the integration test step (where drivers or stubs are also used).
Unit testing is simplified when a component with high cohesion is designed.
When only one function is addressed by a component, the number of test cases is
reduced and errors can be more easily predicted and uncovered.
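A minimal driver-and-stub sketch (the component and all names are hypothetical; in integration testing the real subordinate module would be linked in instead):

```c
#include <assert.h>
#include <stdio.h>

/* Subordinate module's interface, normally provided elsewhere. */
int calibrate(int raw);

/* Component under test: converts a raw sensor reading. */
int read_temperature(int raw)
{
    return calibrate(raw) / 10;
}

/* Stub replacing the real calibration module: it honors the
 * interface, does minimal data manipulation, and reports entry. */
int calibrate(int raw)
{
    printf("stub: calibrate(%d) entered\n", raw);
    return raw * 10;  /* trivial, predictable behavior for testing */
}
```

A driver would simply be a main() that accepts test-case data, passes it to read_temperature(), and prints the results.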

7. Black-box testing techniques

Black-box testing, also called behavioral testing, focuses on the functional requirements of the
software. That is, black-box testing techniques enable you to derive sets of input conditions that will
fully exercise all functional requirements for a program.
Types of Black Box testing:
Graph-Based Testing Methods
Every application is built up of some objects. All such objects are identified and a
graph is prepared. From this object graph, each object relationship is identified and test cases
are written accordingly to discover errors.
Equivalence Partitioning
Equivalence partitioning is a black-box testing method that divides the input domain
of a program into classes of data from which test cases can be derived. An ideal test
case single-handedly uncovers a class of errors (e.g., incorrect processing of all
character data) that might otherwise require many test cases to be executed before
the general error is observed.
Test-case design for equivalence partitioning is based on an evaluation of
equivalence classes for an input condition. Using concepts introduced in the preceding
section, if a set of objects can be linked by relationships that are symmetric, transitive, and
reflexive, an equivalence class is present. An equivalence
class represents a set of valid or invalid states for input conditions. Typically, an input
condition is either a specific numeric value, a range of values, a set of related values,
or a Boolean condition. Equivalence classes may be defined according to the
following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence
classes are defined.
2. If an input condition requires a specific value, one valid and two invalid
equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid
equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.
By applying the guidelines for the derivation of equivalence classes, test cases for
each input domain data item can be developed and executed. Test cases are selected
so that the largest number of attributes of an equivalence class are exercised at once.
Boundary Value Analysis
Many systems have a tendency to fail at the boundaries, so testing the boundary values of an
application is important. Boundary Value Analysis (BVA) is a functional (black-box) testing
technique in which the extreme boundary values are chosen. Boundary values include maximum,
minimum, just inside/outside boundaries, typical values, and error values.
Extends equivalence partitioning: test both sides of each boundary
Look at output boundaries for test cases too; test min, min-1, max, max+1, and typical values
BVA techniques:
1. Number of variables: for n variables, BVA yields 4n + 1 test cases.
2. Kinds of ranges: generalizing ranges depends on the nature or type of the variables
Advantages of Boundary Value Analysis
1. Robustness testing: Boundary Value Analysis plus values that go beyond the limits
2. Min-1, Min, Min+1, Nom, Max-1, Max, Max+1
3. Forces attention to exception handling
Orthogonal Array Testing
Orthogonal array testing can be applied to problems in which the input domain is
relatively small but too large to accommodate exhaustive testing. The orthogonal
array testing method is particularly useful in finding region faults, an error category
associated with faulty logic within a software component.
8. Integration testing process and its outcomes.
Integration testing is a systematic technique for constructing the software architecture while
at the same time conducting tests to uncover errors associated with
interfacing. The objective is to take unit-tested components and build a program
structure that has been dictated by design.
There is often a tendency to attempt non-incremental integration; that is, to construct the
program using a big bang approach. All components are combined in
advance. The entire program is tested as a whole. And chaos usually results! A set
of errors is encountered. Correction is difficult because isolation of causes is complicated by
the vast expanse of the entire program. Once these errors are corrected,
new ones appear and the process continues in a seemingly endless loop.
Top-down Integration
Top-down integration testing is an incremental approach
to construction of the software architecture. Modules are integrated by moving
downward through the control hierarchy, beginning with the main control module (main
program). Modules subordinate (and ultimately subordinate) to the main control module are
incorporated into the structure in either a depth-first or breadth-first manner.
Referring to Figure, depth-first integration integrates all components on a

major control path of the program structure. Selection of a major path is somewhat
arbitrary and depends on application-specific characteristics. For example, selecting
the left-hand path, components M1, M2 , M5 would be integrated first. Next, M8 or (if
necessary for proper functioning of M2) M6 would be integrated. Then, the central
and right-hand control paths are built. Breadth-first integration incorporates all components
directly subordinate at each level, moving across the structure horizontally.
From the figure, components M2, M3, and M4 would be integrated first. The next control
level, M5, M6, and so on, follows. The integration process is performed in a series
of five steps:
1. The main control module is used as a test driver and stubs are substituted for all
components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth first), subordinate
stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing (discussed later in this section) may be conducted to ensure that new
errors have not been introduced.
The process continues from step 2 until the entire program structure is built.
Bottom-up integration. Bottom-up integration testing, as its name implies, begins
construction and testing with atomic modules (i.e., components at the lowest levels
in the program structure). Because components are integrated from the bottom up,
the functionality provided by components subordinate to a given level is always
available and the need for stubs is eliminated. A bottom-up integration strategy may be
implemented with the following steps:
1. Low-level components are combined into clusters (sometimes called builds)
that perform a specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test case input
and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.
Integration follows the pattern illustrated in below figure. Components are combined to form
clusters 1, 2, and 3. Each of the clusters is tested using a driver (shown
as a dashed block). Components in clusters 1 and 2 are subordinate to Ma. Drivers D1
and D2 are removed and the clusters are interfaced directly to Ma. Similarly, driver D3
for cluster 3 is removed prior to integration with module Mb. Both Ma and Mb will
ultimately be integrated with component Mc, and so forth.

Regression testing. Each time a new module is added as part of integration testing, the
software changes. New data flow paths are established, new I/O may occur,
and new control logic is invoked. These changes may cause problems with functions
that previously worked flawlessly. In the context of an integration test strategy,
regression testing is the re-execution of some subset of tests that have already been
conducted to ensure that changes have not propagated unintended side effects.
In a broader context, successful tests (of any kind) result in the discovery of errors,
and errors must be corrected. Whenever software is corrected, some aspect of the
software configuration (the program, its documentation, or the data that support it)
is changed. Regression testing helps to ensure that changes (due to testing or for
other reasons) do not introduce unintended behavior or additional errors.
Regression testing may be conducted manually, by reexecuting a subset of all test
cases or using automated capture/playback tools. Capture/playback tools enable the
software engineer to capture test cases and results for subsequent playback and
comparison. The regression test suite (the subset of tests to be executed) contains
three different classes of test cases:
A representative sample of tests that will exercise all software functions.
Additional tests that focus on software functions that are likely to be affected
by the change.
Tests that focus on the software components that have been changed.
As integration testing proceeds, the number of regression tests can grow quite
large. Therefore, the regression test suite should be designed to include only those
tests that address one or more classes of errors in each of the major program functions. It is
impractical and inefficient to re-execute every test for every program function once a change
has occurred.
Smoke testing. Smoke testing is an integration testing approach that is commonly used when
product software is developed. It is designed as a pacing mechanism for time-critical projects,
allowing the software team to assess the project on
a frequent basis. In essence, the smoke-testing approach encompasses the following activities:
1. Software components that have been translated into code are integrated into
a build. A build includes all data files, libraries, reusable modules, and engineered
components that are required to implement one or more product functions.
2. A series of tests is designed to expose errors that will keep the build from
properly performing its function. The intent should be to uncover "show-stopper" errors
that have the highest likelihood of throwing the software project behind schedule.
3. The build is integrated with other builds, and the entire product (in its current
form) is smoke tested daily. The integration approach may be top down or
bottom up.
The daily frequency of testing the entire product may surprise some readers. However,
frequent tests give both managers and practitioners a realistic assessment of
integration testing progress.
Smoke testing provides a number of benefits when it is applied on complex, time critical
software projects:
Integration risk is minimized. Because smoke tests are conducted daily,
incompatibilities and other show-stopper errors are uncovered early, thereby
reducing the likelihood of serious schedule impact when errors are uncovered.
The quality of the end product is improved. Because the approach is construction
(integration) oriented, smoke testing is likely to uncover functional
errors as well as architectural and component-level design errors. If these
errors are corrected early, better product quality will result.
Error diagnosis and correction are simplified. Like all integration testing
approaches, errors uncovered during smoke testing are likely to be associated with new
software increments; that is, the software that has just been
added to the build(s) is a probable cause of a newly discovered error.
Progress is easier to assess. With each passing day, more of the software has
been integrated and more has been demonstrated to work. This improves team
morale and gives managers a good indication that progress is being made.

1. Write a program for sorting n numbers. Draw the flow chart and flow graph, and point out
the cyclomatic complexity.

Bubble Sort Algorithm

int n;
1 void main()
2 {
3 int i, A[10];
4 int j, temp;
5 printf("\n \t \t Bubble sort \n");
6 printf("\n How many elements are there?");
7 scanf("%d", &n);
8 printf("\n Enter the elements \n");
9 for(i=0; i<n; i++)
10 scanf("%d", &A[i]);
11 for(i=0; i<=n-2; i++)
12 {
13 for(j=0; j<=n-2-i; j++)
14 {
15 if(A[j]>A[j+1])
16 {
17 temp=A[j];
18 A[j]=A[j+1];
19 A[j+1]=temp;
20 }
21 }
22 }
23 printf("\n The sorted List is.\n");
24 for(i=0; i<n; i++)
25 printf("%d ", A[i]);
26 }

Provides a quantitative measure of the logical complexity of a program

Defines the number of independent paths in the basis set
Provides an upper bound for the number of tests that must be conducted to ensure all
statements have been executed at least once
Can be computed three ways
The number of regions (7)
V(G) = E − N + 2, where E is the number of edges and N is the number of nodes in
graph G
V(G) = P + 1, where P is the number of predicate nodes in the flow graph G
Results in the following values for the example flow graph:
Number of regions = 7
V(G) = 31 edges − 26 nodes + 2 = 7
V(G) = 6 predicate nodes + 1 = 7

2. Basic design principles for designing class-based components.

The Open-Closed Principle (OCP): A module [component] should be open for extension
but closed for modification.
The Liskov Substitution Principle (LSP): Subclasses should be substitutable for their base
classes.
The Dependency Inversion Principle (DIP): Depend on abstractions. Do not depend on
concretions.
The Interface Segregation Principle (ISP): Many client-specific interfaces are better than one
general-purpose interface.
The Release Reuse Equivalency Principle (REP): The granule of reuse is the granule of
release.
The Common Closure Principle (CCP): Classes that change together belong together.
The Common Reuse Principle (CRP): Classes that aren't reused together should not be
grouped together.
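As an illustration, the OCP can be sketched even in C by using a function pointer as the extension point (all names here are invented; this is a sketch, not a definitive pattern implementation):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Extension point: a formatter writes a value into buf. */
typedef int (*formatter)(char *buf, size_t n, int value);

int as_plain(char *buf, size_t n, int v)   { return snprintf(buf, n, "%d", v); }
int as_percent(char *buf, size_t n, int v) { return snprintf(buf, n, "%d%%", v); }

/* report() is closed for modification but open for extension:
 * new output formats are added as new formatter functions,
 * without touching this code. */
void report(char *buf, size_t n, int value, formatter fmt)
{
    fmt(buf, n, value);
}
```

Adding a third format (say, hexadecimal) requires only a new function matching the formatter signature; report() itself never changes.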
3. Explain transform and transactional mapping by applying design steps to an example system.
Step 1. Review the fundamental system model.

Level 1 DFD for the SafeHome security function

Step 2. Review and refine data flow diagrams for the software.
Level 2 DFD that refines the monitor sensor transform

Level 3 DFD for monitor sensors with flow boundaries

Step 3. Determine whether the DFD has transform or transaction flow characteristics.
Evaluating the DFD in the figure, we see data entering the software along one incoming path and
exiting along three outgoing paths. Therefore, an overall transform characteristic will be assumed for
information flow.
Step 4. Isolate the transform center by specifying incoming and outgoing
flow boundaries.
Step 5. Perform first-level factoring.

Step 6. Perform second-level factoring.

Step 7. Refine the first-iteration architecture using design heuristics for
improved software quality.
4. A program spec states the following for an input field: the program shall accept an input value
of a 4-digit integer equal to or greater than 2000 and less than or equal to 8000. Determine the test
cases using
(i) Equivalence class partitioning
(ii) A black-box testing method that divides the input domain of a program into classes of
data from which test cases are derived
(iii) An ideal test case single-handedly uncovers a complete class of errors, thereby reducing
the total number of test cases that must be developed
(iv) Test case design is based on an evaluation of equivalence classes for an input condition
(v) An equivalence class represents a set of valid or invalid states for input conditions
(vi) From each equivalence class, test cases are selected so that the largest number of
attributes of an equivalence class are exercised at once.

If an input condition specifies a range, one valid and two invalid equivalence classes are defined
Input range: 1–10 Eq classes: {1..10}, {x < 1}, {x > 10}
If an input condition requires a specific value, one valid and two invalid equivalence classes
are defined
Input value: 250 Eq classes: {250}, {x < 250}, {x > 250}
If an input condition specifies a member of a set, one valid and one invalid equivalence class
are defined
Input set: {-2.5, 7.3, 8.4} Eq classes: {-2.5, 7.3, 8.4}, {any other x}
If an input condition is a Boolean value, one valid and one invalid class are define
Input: {true condition} Eq classes: {true condition}, {false condition}
Partition the test cases into "equivalence classes"
Each equivalence class contains a set of "equivalent" test cases
Two test cases are considered to be equivalent if we expect the program to process them both
in the same way (i.e., follow the same path through the code)
If you expect the program to process two test cases in the same way, only test one of them,
thus reducing the number of test cases you have to run
Input range: 2000–8000 Eq classes: {2000..8000}, {x < 2000}, {x > 8000}
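One representative test case per class can be checked against a sketch of the validity rule (the function name is invented; only the range [2000, 8000] comes from the spec):

```c
#include <assert.h>
#include <stdbool.h>

/* Validity rule implied by the spec: a 4-digit integer
 * in the range [2000, 8000]. */
bool accept(int x)
{
    return x >= 2000 && x <= 8000;
}
```

Because all inputs in one equivalence class are expected to follow the same path, one representative per class (e.g., 5000, 1500, 9000) suffices.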

(vii) Boundary value analysis

A greater number of errors occur at the boundaries of the input domain than in the "center"
Boundary value analysis is a test-case design method that complements equivalence partitioning
a. It selects test cases at the edges of a class
b. It derives test cases from both the input domain and the output domain.

Guidelines for Boundary Value Analysis

1. If an input condition specifies a range bounded by values a and b, test cases should be
designed with values a and b as well as values just above and just below a and b
2. If an input condition specifies a number of values, test cases should be developed that
exercise the minimum and maximum numbers. Values just above and just below the
minimum and maximum are also tested
3. Apply guidelines 1 and 2 to output conditions; produce output that reflects the
minimum and the maximum values expected; also test the values just below and just
above these
4. If internal program data structures have prescribed boundaries (e.g., an array), design a
test case to exercise the data structure at its minimum and maximum boundaries

If input conditions have a range from a to b (such as a = 2000 to b = 8000), create test cases:
immediately below a (1999)
at a (2000)
immediately above a (2001)
immediately below b (7999)
at b (8000)
immediately above b (8001)
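The listed values can be generated mechanically; a sketch (the function name is invented, and the nominal value is taken as the range midpoint):

```c
#include <assert.h>

/* Robustness-style boundary values for a range [lo, hi]:
 * lo-1, lo, lo+1, nominal, hi-1, hi, hi+1. */
void bva_values(int lo, int hi, int out[7])
{
    out[0] = lo - 1;
    out[1] = lo;
    out[2] = lo + 1;
    out[3] = (lo + hi) / 2;  /* nominal (typical) value */
    out[4] = hi - 1;
    out[5] = hi;
    out[6] = hi + 1;
}
```

For the spec's range this yields exactly the six boundary cases above plus the nominal value 5000, matching the 4n + 1 = 5 on-and-near-boundary pattern per variable with robustness values added.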