
SMU 2011

Master in Business Administration (MBA) SEM – III


MI0033 – Software Engineering

1. Discuss the Objective & Principles Behind Software Testing.

TESTING OBJECTIVES:

1. Testing is a process of executing a program with the intent of finding an error.


2. A good test case is one that has a high probability of finding an as yet undiscovered error.
3. A successful test is one that uncovers an as yet undiscovered error.

Testing should systematically uncover different classes of errors in a minimum amount of time and with a minimum amount of
effort. A secondary benefit of testing is that it demonstrates that the software appears to be working as stated in the specifications.
The data collected through testing can also provide an indication of the software's reliability and quality. However, testing cannot
show the absence of defects; it can only show that defects are present.

TESTING PRINCIPLES:

Before applying methods to design effective test cases, a software engineer must understand the basic principles that guide
software testing.

1. All tests should be traceable to customer requirements. As we have seen, the objective of software testing is to uncover
errors. It follows that the most severe defects (from the customer’s point of view) are those that cause the program to
fail to meet its requirements.
2. Tests should be planned long before testing begins. Test planning can begin as soon as the requirements model is
complete. Detailed definition of test cases can begin as soon as the design model has been solidified. Therefore, all tests
can be planned and designed before any code has been generated.
3. The Pareto principle applies to software testing. Stated simply, the Pareto principle implies that 80 percent of all errors
uncovered during testing will likely be traceable to 20 percent of all program components. The problem, of course, is to
isolate these suspect components and to thoroughly test them.
4. Testing should begin “in the small” and progress toward testing “in the large.” The first tests planned and executed
generally focus on individual components, as illustrated by the unit-test sketch after this list. As testing progresses, focus
shifts to finding errors in integrated clusters of components and, ultimately, in the entire system.
5. Exhaustive testing is not possible. The number of path permutations for even a moderately sized program is
exceptionally large. For this reason, it is impossible to execute every combination of paths during testing. It is possible,
however, to adequately cover program logic and to ensure that all conditions in the component-level design have been
exercised.
6. To be most effective, testing should be conducted by an independent third party. By most effective, we mean testing
that has the highest probability of finding errors (the primary objective of testing).
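
To make principle 4 concrete, here is a minimal, hypothetical Python sketch of testing “in the small”: a single component is exercised by unit tests before any integration testing takes place. The `classify_triangle` function and its test cases are invented for illustration.

```python
import unittest

def classify_triangle(a, b, c):
    """Classify a triangle by its side lengths (hypothetical component under test)."""
    if a <= 0 or b <= 0 or c <= 0 or a + b <= c or b + c <= a or a + c <= b:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class TestClassifyTriangle(unittest.TestCase):
    # Good test cases target inputs likely to expose undiscovered errors:
    # boundary and degenerate values, not just the "happy path".
    def test_equilateral(self):
        self.assertEqual(classify_triangle(3, 3, 3), "equilateral")

    def test_degenerate_is_invalid(self):
        # a + b == c: a zero-area "triangle" must be rejected
        self.assertEqual(classify_triangle(1, 2, 3), "invalid")

    def test_negative_side_is_invalid(self):
        self.assertEqual(classify_triangle(-1, 2, 2), "invalid")

if __name__ == "__main__":
    unittest.main()
```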

2. Discuss the CMM 5 Levels for Software Process.

Levels of the CMM:

Level 1 - Initial

Processes are usually ad hoc and the organization usually does not provide a stable environment. Success in these
organizations depends on the competence and heroics of the people in the organization and not on the use of
proven processes. In spite of this ad hoc, chaotic environment, maturity level 1 organizations often produce
products and services that work; however, they frequently exceed the budget and schedule of their projects.

Organizations are characterized by a tendency to overcommit, to abandon processes in times of crisis, and to be unable
to repeat their past successes.


Software project success depends on having quality people.

Level 2 - Repeatable

Software development successes are repeatable, although the processes may not be repeated across all projects in the
organization. The organization may use some basic project management to track cost and schedule.

Process discipline helps ensure that existing practices are retained during times of stress. When these practices are
in place, projects are performed and managed according to their documented plans.

Project status and the delivery of services are visible to management at defined points (for example, at major
milestones and at the completion of major tasks).

Basic project management processes are established to track cost, schedule, and functionality. The minimum
process discipline is in place to repeat earlier successes on projects with similar applications and scope. There is
still a significant risk of exceeding cost and time estimates.

Level 3 - Defined

The organization’s set of standard processes, which is the basis for level 3, is established and improved over time.
These standard processes are used to establish consistency across the organization. Projects establish their defined
processes by tailoring the organization’s set of standard processes according to tailoring guidelines.

The organization’s management establishes process objectives based on the organization’s set of standard
processes and ensures that these objectives are appropriately addressed.

A critical distinction between level 2 and level 3 is the scope of standards, process descriptions, and procedures.
At level 2, the standards, process descriptions, and procedures may be quite different in each specific instance of
the process (for example, on a particular project). At level 3, the standards, process descriptions, and procedures
for a project are tailored from the organization’s set of standard processes to suit a particular project or
organizational unit.

Level 4 - Managed

Using precise measurements, management can effectively control the software development effort. In particular,
management can identify ways to adjust and adapt the process to particular projects without measurable losses of
quality or deviations from specifications. At this level, the organization sets quantitative quality goals for both the
software process and software maintenance.

Subprocesses are selected that significantly contribute to overall process performance. These selected
subprocesses are controlled using statistical and other quantitative techniques.

A critical distinction between maturity level 3 and maturity level 4 is the predictability of process performance. At
maturity level 4, the performance of processes is controlled using statistical and other quantitative techniques, and
is quantitatively predictable. At maturity level 3, processes are only qualitatively predictable.
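
To make the level 4 idea of statistical control concrete, the following Python sketch (not part of the CMM itself; all data invented) computes classic 3-sigma control limits from a baseline of a subprocess metric, such as defect density per inspection, and flags new observations that suggest a special cause of variation:

```python
import statistics

# Hypothetical baseline: defect densities (defects/KLOC) from past inspections.
baseline = [4.1, 3.8, 4.5, 4.0, 3.6, 4.3, 4.2, 3.9, 4.4, 3.7]
mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
upper, lower = mean + 3 * sigma, max(0.0, mean - 3 * sigma)

# New inspections are judged against the baseline limits; a point outside
# them signals a special cause of variation to investigate (level 4 practice).
for x in [4.2, 3.5, 6.9]:
    status = "investigate (outside limits)" if not (lower <= x <= upper) else "in control"
    print(f"{x:.1f} defects/KLOC -> {status}")
```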

Level 5 - Optimizing

Maturity level 5 focuses on continually improving process performance through both incremental and innovative
technological improvements. Quantitative process-improvement objectives for the organization are established,
continually revised to reflect changing business objectives, and used as criteria in managing process improvement.
The effects of deployed process improvements are measured and evaluated against the quantitative
process-improvement objectives. Both the defined processes and the organization’s set of standard processes are
targets of measurable improvement activities.

Process improvements to address common causes of process variation and measurably improve the organization’s
processes are identified, evaluated, and deployed.

Optimizing processes that are nimble, adaptable and innovative depends on the participation of an empowered
workforce aligned with the business values and objectives of the organization. The organization’s ability to
rapidly respond to changes and opportunities is enhanced by finding ways to accelerate and share learning.

A critical distinction between maturity level 4 and maturity level 5 is the type of process variation addressed. At
maturity level 4, processes are concerned with addressing special causes of process variation and providing
statistical predictability of the results. Though processes may produce predictable results, the results may be
insufficient to achieve the established objectives. At maturity level 5, processes are concerned with addressing
common causes of process variation and changing the process (that is, shifting the mean of the process
performance) to improve process performance, while maintaining statistical predictability, in order to achieve the
established quantitative process-improvement objectives.

3. Discuss the Waterfall Model for Software Development.

Waterfall model:

The simplest software development life cycle model is the waterfall model, which states that the phases are organized in a linear
order. A project begins with feasibility analysis. On the successful demonstration of the feasibility analysis, the requirements
analysis and project planning begins.

The design starts after the requirements analysis is done, and coding begins after the design is done. Once the programming is
completed, the code is integrated and testing is done. On successful completion of testing, the system is installed, after which
the regular operation and maintenance of the system takes place. The figure below illustrates the steps involved in the waterfall
life cycle model.

[Figure: The Waterfall Software Life Cycle Model]

With the waterfall model, the activities performed in a software development project are requirements analysis, project planning,
system design, detailed design, coding and unit testing, and system integration and testing. The linear ordering of activities has
some important consequences. First, to clearly identify the end of one phase and the beginning of the next, some certification
mechanism has to be employed at the end of each phase. This is usually done through verification and validation, which confirm
that the output of a phase is consistent with its input (the output of the previous phase) and with the overall requirements of the
system.

The consequence of the need for certification is that each phase must have some defined output that can be evaluated and certified.
Therefore, when the activities of a phase are completed, there should be an output product of that phase, and the goal of a phase is
to produce this product. The outputs of the earlier phases are often called intermediate products or design documents. For the
coding phase, the output is the code. From this point of view, the output of a software project is not just the final program along
with the user documentation, but also the requirements document, design document, project plan, test plan, and test results.

Another implication of the linear ordering of phases is that, after each phase is completed and its outputs are certified, these
outputs become the inputs to the next phase and should not be changed or modified. However, changing requirements cannot be
avoided and must be faced. Since changes made to the output of one phase affect later phases that might already have been
performed, these changes have to be made in a controlled manner, after evaluating the effect of each change on the project. This
brings us to the need for configuration control or configuration management.

The certified output of a phase that is released to the next phase is called a baseline. Configuration management ensures that
any changes to a baseline are made only after careful review, keeping in mind the interests of all parties affected by the change.
There are two basic assumptions justifying the linear ordering of phases in the manner proposed by the waterfall model:

1. For a successful project resulting in a successful product, all phases listed in the waterfall model must be performed anyway.

2. Any different ordering of the phases will result in a less successful software product.
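
As a purely hypothetical Python sketch of the linear ordering and per-phase certification described above, consider modeling each phase as producing a defined output that must pass a verification/validation gate before becoming a baseline; all names are illustrative:

```python
# Each waterfall phase produces a defined output that is certified before it
# becomes the baseline input to the next phase. Phase names follow the text.

PHASES = [
    ("requirements analysis",   "requirements document"),
    ("project planning",        "project plan"),
    ("system design",           "design document"),
    ("detailed design",         "detailed design document"),
    ("coding and unit testing", "code"),
    ("integration and testing", "test results"),
]

def certify(phase, output):
    """Stand-in for the verification/validation gate at the end of a phase."""
    print(f"certifying output of '{phase}': {output}")
    return True  # in reality, a review may reject the output and force rework

baselines = []
for phase, output in PHASES:
    if not certify(phase, output):
        raise RuntimeError(f"phase '{phase}' failed certification; cannot proceed")
    baselines.append(output)  # certified outputs become baselines under CM control

print("baselines:", baselines)
```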

4. Explain the Different types of Software Measurement Techniques.

Types of Software Measurement Techniques:

Most estimating methodologies are predicated on analogous software programs. Expert opinion is based on experience from
similar programs; parametric models stratify internal databases to simulate environments from many analogous programs;
engineering builds reference similar experience at the unit level; and cost estimating relationships (like parametric models)
regress algorithms from several analogous programs. Deciding which of these methodologies (or combination of methodologies)
is the most appropriate for your program usually depends on the availability of data, which in turn depends on where you are in
the life cycle or your scope definition.

Analogies

Cost and schedule are determined based on data from completed similar efforts. When applying this method, it is often difficult
to find analogous efforts at the total system level. It may be possible, however, to find analogous efforts at the subsystem or
lower level computer software configuration item/computer software component/computer software unit (CSCI/CSC/CSU).
Furthermore, you may be able to find completed efforts that are more or less similar in complexity. If this is the case, a scaling
factor may be applied based on expert opinion (e.g., CSCI-x is 80% as complex). After an analogous effort has been found,
associated data need to be assessed. It is preferable to use effort rather than cost data; however, if only cost data are available,
these costs must be normalized to the same base year as your effort using current and appropriate inflation indices. As with all
methods, the quality of the estimate is directly proportional to the credibility of the data.
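
As a hedged numeric sketch of the analogy method, the following Python fragment applies an expert-judged complexity scaling factor and normalizes historical cost with inflation indices. All figures, including the index values, are invented for illustration:

```python
# Hypothetical analogy-based estimate: scale a completed effort by an
# expert-judged complexity factor, then normalize historical cost to the
# current base year using (invented) inflation indices.

analog_effort_pm = 120.0      # person-months spent on the completed analogous CSCI
complexity_factor = 0.80      # expert opinion: new CSCI is ~80% as complex

estimated_effort_pm = analog_effort_pm * complexity_factor
print(f"estimated effort: {estimated_effort_pm:.1f} person-months")

# If only cost data are available, normalize to the estimate's base year.
analog_cost = 2_400_000                 # dollars, in the analog's base year (invented)
index_then, index_now = 100.0, 112.5    # invented inflation indices
normalized_cost = analog_cost * (index_now / index_then) * complexity_factor
print(f"normalized, scaled cost: ${normalized_cost:,.0f}")
```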

Expert (engineering) opinion

Cost and schedule are estimated by determining the required effort based on input from personnel with extensive experience on
similar programs. Due to the inherent subjectivity of this method, it is especially important to use input from several independent
sources. It is also important to request only effort data rather than cost data, as cost estimation is usually outside the realm
of engineering expertise (and probably dependent on dissimilar contracting situations). With the exception of rough
order-of-magnitude estimates, this method is rarely used alone as a primary methodology. Expert opinion is used to estimate
lower-level, low-cost pieces of a larger cost element when a labor-intensive cost estimate is not feasible.

Parametric models

The most commonly-used technology for software estimation is parametric models, a variety of which are available from both
commercial and government sources. The estimates produced by the models are repeatable, facilitating sensitivity and domain
analysis. The models generate estimates through statistical formulas that relate a dependent variable (e.g., cost, schedule,
resources) to one or more independent variables. Independent variables are called “cost drivers” because any change in their
value results in a change in the cost, schedule, or resource estimate. The models also address both the development environment
(e.g., development team skills/experience, process maturity, tools, complexity, size, domain, etc.) and the operational environment
(how the software will be used), as well as software characteristics. The environmental factors, used to calculate cost
(manpower/effort), schedule, and resources (people, hardware, tools, etc.), are often the basis of comparison among historical
programs, and can be used to assess ongoing program progress.
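
Many parametric models share the general shape effort = A × size^B × (product of effort multipliers). The following Python sketch shows that form with invented coefficients and cost-driver values; COCOMO, discussed in the next question, is a well-known member of this family:

```python
# Sketch of the general form many parametric models share:
#   effort = A * (size ** B) * product(effort multipliers)
# The calibration constants and multiplier values are invented for illustration.

from math import prod

A, B = 3.0, 1.12              # invented calibration constants
size_ksloc = 40               # estimated size (thousands of source lines)

cost_drivers = {
    "team experience": 0.90,  # an experienced team reduces effort
    "tool support":    0.95,
    "complexity":      1.15,  # high complexity increases effort
}

effort_pm = A * (size_ksloc ** B) * prod(cost_drivers.values())
print(f"estimated effort: {effort_pm:.1f} person-months")
```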

5. Explain the COCOMO Model & Software Estimation Technique.

COCOMO Model:

COCOMO stands for Constructive Cost Model. It is used for software cost estimation and uses regression formulas with
parameters derived from historical project data. COCOMO has a hierarchy of three increasingly detailed and accurate forms:
Basic, Intermediate, and Detailed. The Basic level is good for a quick, early, overall cost estimate for the project, but is not very
accurate. The Intermediate level considers some of the other project factors that influence project cost, and the Detailed level
additionally accounts for the various project phases that affect the cost of the project.
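
The Basic form uses the published equations E = a(KLOC)^b person-months and D = c(E)^d months, with coefficients depending on the project mode (organic, semi-detached, embedded). A small Python sketch using Boehm's Basic COCOMO coefficients:

```python
# Basic COCOMO (Boehm, 1981): effort E = a * KLOC^b (person-months) and
# development time D = c * E^d (months), with coefficients per project mode.

COEFFS = {
    #  mode:          (a,   b,    c,   d)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(ksloc, mode="organic"):
    a, b, c, d = COEFFS[mode]
    effort = a * ksloc ** b          # person-months
    duration = c * effort ** d       # months
    staff = effort / duration        # average headcount
    return effort, duration, staff

effort, duration, staff = basic_cocomo(32, "organic")
print(f"effort: {effort:.1f} PM, schedule: {duration:.1f} months, avg staff: {staff:.1f}")
```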

Advantages of the COCOMO estimating model:

1. COCOMO is factual and easy to interpret; one can clearly understand how it works.
2. It accounts for various factors that affect the cost of the project.
3. It works on historical data and hence is more predictable and accurate.

Disadvantages of the COCOMO estimating model:

1. The COCOMO model ignores requirements and all documentation.
2. It ignores customer skills, cooperation, knowledge, and other parameters.
3. It oversimplifies the impact of safety/security aspects.
4. It ignores hardware issues.
5. It ignores personnel turnover levels.
6. It is dependent on the amount of time spent in each phase.

Software Estimation Technique:

Accurately estimating software size, cost, effort, and schedule is probably the biggest challenge facing software developers
today.

Estimating Software Size

An accurate estimate of software size is an essential element in the calculation of estimated project costs and schedules. The fact
that these estimates are required very early in the project (often while a contract bid is being prepared) makes size estimation a
formidable task. Initial size estimates are typically based on the known system requirements: you must hunt for every known
detail of the proposed system and use these details to develop and validate the software size estimates. In general, you present
size estimates as lines of code (SLOC or KSLOC) or as function points. There are constants that you can apply to convert
function points to lines of code for specific languages, but not vice versa. If possible, choose and adhere to one unit of
measurement, since conversion simply introduces a new margin of error into the final estimate. Regardless of the unit chosen,
you should store the estimates in the metrics database; you will use them to determine progress and to estimate future
projects. As the project progresses, revise the estimates so that cost and schedule estimates remain accurate. One common
size-related calculation, converting function points to SLOC, is sketched below.
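
The following Python sketch illustrates the unit conversion just mentioned. The factors are commonly cited approximate "backfiring" values (SLOC per function point); they vary between sources and should be treated as rough guides, not definitive constants, and the function-point count is invented:

```python
# Converting a function-point count to approximate SLOC using language
# conversion constants. Values are approximate and vary by source.

SLOC_PER_FP = {
    "assembly": 320,
    "c":        128,
    "cobol":    105,
    "c++":       55,
    "java":      53,
}

def fp_to_sloc(function_points, language):
    return function_points * SLOC_PER_FP[language.lower()]

fp = 250  # estimated function points for the proposed system (invented)
for lang in ("c", "java"):
    print(f"{fp} FP in {lang}: ~{fp_to_sloc(fp, lang):,} SLOC")
```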

Estimating Software Cost

The cost of medium and large software projects is determined by the cost of developing the software, plus the cost of equipment
and supplies. The latter is generally a constant for most projects. The cost of developing the software is simply the estimated
effort, multiplied by presumably fixed labor costs. For this reason, we will concentrate on estimating the development effort, and
leave the task of converting the effort to dollars to each company.

Estimating Effort

There are two basic models for estimating software development effort (or cost): holistic and activity-based. The single biggest
cost driver in either model is the estimated project size. Holistic models are useful for organizations that are new to software
development, or that do not have baseline data available from previous projects to determine labor rates for the various
development activities. Estimates produced with activity-based models are more likely to be accurate, as they are based on the
software development rates common to each organization. Unfortunately, you need data from previous projects to apply
these techniques.
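
A hedged Python sketch contrasting the two styles: a holistic model applies one overall productivity rate, while an activity-based model sums per-activity rates taken from an organization's baseline. All rates and the project size are invented for illustration:

```python
# Hypothetical contrast of the two effort-model styles. All rates invented.

size_ksloc = 40

# Holistic: one overall productivity rate for the entire project.
holistic_rate = 2.2                      # person-months per KSLOC, overall
holistic_effort = size_ksloc * holistic_rate

# Activity-based: per-activity rates taken from an organization's baseline.
activity_rates = {                       # person-months per KSLOC, per activity
    "design":               0.7,
    "code and unit test":   0.9,
    "integration and test": 0.6,
    "documentation":        0.3,
}
activity_effort = sum(size_ksloc * r for r in activity_rates.values())

print(f"holistic estimate:       {holistic_effort:.1f} person-months")
print(f"activity-based estimate: {activity_effort:.1f} person-months")
```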

Estimating Software Schedule

There are many tools on the market (such as Timeline, MacProject, On Target, etc.) which help develop Gantt and PERT charts
to schedule and track projects. These programs are most effective when you break the project down into a Work Breakdown
Structure (WBS), and assign estimates of effort and staff to each task in the WBS.
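
As a minimal sketch of that idea, the following Python fragment assigns invented effort and staff estimates to WBS tasks and derives each task's duration; for simplicity the tasks are assumed to run sequentially, whereas a Gantt/PERT tool would also model overlaps and dependencies:

```python
# Hypothetical WBS with effort and staff assigned to each task; duration is
# effort / staff, and tasks are assumed sequential for simplicity.

wbs = [
    # (task,                 effort in person-months, staff)
    ("requirements",          4.0, 2),
    ("design",                6.0, 3),
    ("code and unit test",   12.0, 4),
    ("integration and test",  6.0, 3),
]

total_months = 0.0
for task, effort, staff in wbs:
    duration = effort / staff
    total_months += duration
    print(f"{task:22s} {effort:4.1f} PM / {staff} staff = {duration:.1f} months")

print(f"sequential schedule: {total_months:.1f} months")
```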

6. Write a note on the myths of software.


SOFTWARE MYTHS:

There are many myths associated with the software development world. Unlike ancient myths, which often carry valuable human
lessons, software myths spread confusion. Various software myths are given below:

 Software can work right the first time

We cannot expect software to work correctly the first time. For example, if you ask an aeronautics engineer to build a jet
fighter, he will quote you a price; but if you demand that it be put into production without first building a prototype, he will
refuse the job. Software engineers, however, are routinely asked to do exactly that sort of work.

 Software is easy to change

Source code is indeed easy to edit, but that is quite different from the software being easy to change. Making changes
without introducing new problems is very difficult, and the complete system needs to be verified after every change.
Therefore software is not easy to change, and proper care is needed with every change.

 Software with more features is better software

This is a false statement: software with more features is not necessarily better software. The best software is often that
which does one task well.

 Reusing software increases safety

Reusing software does not in itself increase safety. Reusing code can help improve development efficiency, but it requires
a lot of analysis to check its suitability, and testing to see whether it works.

 Addition of more software engineers will cover up the delay

This is not true in all cases. Adding more software engineers to a project that is already late may delay the project even
further (a phenomenon known as Brooks’s law), even though adding more people can certainly recover delays in, say,
civil engineering work.

The list of software myths is endless. Other examples include the beliefs that computers provide greater reliability than the
devices they replaced, and that testing can remove all errors.
