
SOFTWARE ENGINEERING

MB 0033

Set 1

Name: Rajesh Kumar

Roll number: 520932980

Learning centre: 03036

Subject: MB 0033- SOFTWARE ENGINEERING

Assignment No.: Set 1

Date of submission at learning centre:


MB0033 - Software Engineering

ASSIGNMENTS
Subject code: MB0033
(4 credits)
Set 1
Marks 60
SUBJECT NAME: SOFTWARE ENGINEERING

Note: Each Question carries 10 marks

Q1. Discuss the Objective & Principles Behind Software Testing.

Ans: During software development, the development team works through the stages of
development according to the principles of the chosen development model and the stated
requirements. The engineer attempts to build software from an abstract concept into a tangible
product. Testing is done to check on this development. Testing is one step in the software
process that can be viewed, psychologically, as a destructive rather than a constructive
process, even though no actual damage is done to the system in the general sense.

i) Testing Objectives

Glen Myers states a number of rules that can serve well as testing objectives:

1. Testing is a process of executing a program with the intent of finding an error.

2. A good test case is one that has a high probability of finding an as-yet-undiscovered error.

3. A successful test is one that uncovers an as-yet-undiscovered error.
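
To illustrate these objectives, the following minimal Python sketch (the function, values and
defect are hypothetical, invented purely for illustration) shows a test written with the intent
of finding errors rather than showing that the program works; in Myers' sense, the second test
is "successful" precisely because it uncovers a defect (the function silently accepts an
invalid discount):

import unittest

def apply_discount(price, percent):
    # Hypothetical function under test: returns price reduced by the given percentage.
    return price - price * percent / 100

class ApplyDiscountTests(unittest.TestCase):
    def test_boundary_values(self):
        # A good test case probes boundaries where defects tend to hide,
        # not just the "happy path".
        self.assertEqual(apply_discount(100, 0), 100)    # no discount
        self.assertEqual(apply_discount(100, 100), 0)    # full discount

    def test_invalid_percent_is_rejected(self):
        # This test is "successful" in Myers' sense if it uncovers that the
        # function silently accepts a nonsensical discount of 150 percent.
        with self.assertRaises(ValueError):
            apply_discount(100, 150)

if __name__ == "__main__":
    unittest.main()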

ii) Testing Principles

Davis suggests a set of testing principles that can be adapted here:

· All tests should be traceable to customer requirements: As we have seen, the objective
of software testing is to uncover errors. It follows that the most severe defects (from the
customer’s point of view) are those that cause the program to fail to meet its requirements.

· Tests should be planned long before testing begins: Test planning can begin as soon as
the requirements model is complete. Detailed definition of test cases can begin as soon as the
design model has been solidified. Therefore, all tests can be planned and designed before any
code has been generated.

· The Pareto principle applies to software testing: Stated simply, the Pareto principle
implies that 80 percent of all errors uncovered during testing will most likely be traceable to
20 percent of all program components. The problem, of course, is to isolate these suspect
components and to thoroughly test them.


· Testing should begin “in the small” and progress toward testing “in the large”: The
first tests planned and executed generally focus on individual components. As testing
progresses, focus shifts in an attempt to find errors in integrated clusters of components and
ultimately in the entire system.

· Exhaustive testing is not possible: The number of path permutations for even a moderately
sized program is exceptionally large. For this reason, it is impossible to execute every
combination of paths during testing. It is possible, however, to adequately cover program
logic and to ensure that all conditions in the component-level design have been exercised.

· To be most effective, testing should be conducted by an independent third party: By
most effective, we mean testing that has the highest probability of finding errors (the primary
objective of testing). For reasons that have been introduced earlier in this unit, the software
engineer who created the system is not the best person to conduct all tests for the software.

iii) Testability

James Bach describes testability in the following manner. Software testability is simply how
easily [a computer program] can be tested. Since testing is so profoundly difficult, it pays to
know what can be done to streamline it. Sometimes programmers are willing to do things that
will help the testing process and a checklist of possible design points, features, etc., can be
useful in negotiating with them. There are certainly metrics that could be used to measure
testability in most of its aspects. Sometimes, testability is used to mean how adequately a
particular set of tests will cover the product. It’s also used by the military to mean how easily
a tool can be checked and repaired in the field. Those two meanings are not the same as
software testability.
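
As a small, hypothetical illustration of a design point that helps the testing process, the
report-building function below accepts the current time as a parameter instead of reading the
system clock internally, so a test can inject a fixed time and check the output deterministically:

from datetime import datetime

def build_report_header(title, now=None):
    # Taking the current time as a parameter (instead of calling datetime.now()
    # inside the function) is a small design choice that makes the code testable.
    now = now or datetime.now()
    return f"{title} - generated {now:%Y-%m-%d %H:%M}"

def test_build_report_header():
    # The test injects a fixed time, so the expected output is deterministic.
    fixed = datetime(2010, 1, 15, 9, 30)
    assert build_report_header("Sales", now=fixed) == "Sales - generated 2010-01-15 09:30"

test_build_report_header()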

Q2. Discuss the CMM 5 Levels for Software Process.

Ans: There are five levels defined along the continuum of the CMM and, according to the
SEI: "Predictability, effectiveness, and control of an organization's software processes are
believed to improve as the organization moves up these five levels. While not rigorous, the
empirical evidence to date supports this belief."

1. Initial (chaotic, ad hoc, individual heroics) - the starting point for use of a new
process.
2. Repeatable - the process is at least documented sufficiently such that repeating the
same steps may be attempted.
3. Defined - the process is defined/confirmed as a standard business process, and
decomposed to levels 0, 1 and 2 (the latter being Work Instructions).
4. Managed - the process is managed in accordance with agreed metrics.
5. Optimizing - process management includes deliberate process
optimization/improvement.

Within each of these maturity levels are Key Process Areas (KPAs) which characterise that
level, and for each KPA there are five definitions identified:

1. Goals
2. Commitment


3. Ability
4. Measurement
5. Verification

The KPAs are not necessarily unique to CMM, representing — as they do — the stages that
organizations must go through on the way to becoming mature.

The CMM provides a theoretical continuum along which process maturity can be developed
incrementally from one level to the next. Skipping levels is not allowed/feasible.

N.B.: The CMM was originally intended as a tool to evaluate the ability of government
contractors to perform a contracted software project. It has been used for and may be suited
to that purpose, but critics pointed out that process maturity according to the CMM was not
necessarily mandatory for successful software development. There were/are real-life
examples where the CMM was arguably irrelevant to successful software development, and
these examples include many Shrinkwrap companies (also called commercial-off-the-shelf or
"COTS" firms or software package firms). Such firms would have included, for example,
Claris, Apple, Symantec, Microsoft, and Lotus. Though these companies may have
successfully developed their software, they would not necessarily have considered or defined
or managed their processes as the CMM described as level 3 or above, and so would have
fitted level 1 or 2 of the model. This did not - on the face of it - frustrate the successful
development of their software.

Level 1 - Initial (Chaotic)
It is characteristic of processes at this level that they are (typically) undocumented and
in a state of dynamic change, tending to be driven in an ad hoc, uncontrolled and
reactive manner by users or events. This provides a chaotic or unstable environment
for the processes.
Level 2 - Repeatable
It is characteristic of processes at this level that some processes are repeatable,
possibly with consistent results. Process discipline is unlikely to be rigorous, but
where it exists it may help to ensure that existing processes are maintained during
times of stress.
Level 3 - Defined
It is characteristic of processes at this level that there are sets of defined and
documented standard processes established and subject to some degree of
improvement over time. These standard processes are in place (i.e., they are the AS-IS
processes) and used to establish consistency of process performance across the
organization.
Level 4 - Managed
It is characteristic of processes at this level that, using process metrics, management
can effectively control the AS-IS process (e.g., for software development ). In
particular, management can identify ways to adjust and adapt the process to particular
projects without measurable losses of quality or deviations from specifications.
Process Capability is established from this level.
Level 5 - Optimizing
It is a characteristic of processes at this level that the focus is on continually
improving process performance through both incremental and innovative
technological changes/improvements.


At maturity level 5, processes are concerned with addressing statistical common causes of
process variation and changing the process (for example, to shift the mean of the process
performance) to improve process performance. This would be done at the same time as
maintaining the likelihood of achieving the established quantitative process-improvement
objectives.
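
As an illustration of the quantitative management implied at levels 4 and 5, the following
minimal Python sketch (the figures and the three-sigma rule are illustrative assumptions, not
part of the CMM text) derives control limits from historical defect densities and flags a new
process measurement that falls outside them:

import statistics

def control_limits(samples, sigmas=3):
    # Mean and upper/lower control limits computed from historical samples.
    mean = statistics.mean(samples)
    spread = sigmas * statistics.stdev(samples)
    return mean, mean - spread, mean + spread

# Hypothetical defect densities (defects per KLoC) from earlier projects.
history = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.3]
mean, lower, upper = control_limits(history)

new_observation = 5.6
if lower <= new_observation <= upper:
    print(f"Within control limits around the mean of {mean:.2f}")
else:
    print(f"Out of control: {new_observation} is outside [{lower:.2f}, {upper:.2f}]")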

Q 3. Discuss the Water Fall model for Software Development.

Ans: The simplest software development life cycle model is the waterfall model, which states
that the phases are organized in a linear order. A project begins with feasibility analysis. On
the successful demonstration of the feasibility analysis, the requirements analysis and project
planning begins.

The design starts after the requirements analysis is done. And coding begins after the design
is done. Once the programming is completed, the code is integrated and testing is done. On
successful completion of testing, the system is installed. After this the regular operation and
maintenance of the system takes place. The following figure demonstrates the steps involved
in waterfall life cycle model.

The Waterfall Software Life Cycle Model

With the waterfall model, the activities performed in a software development project are
requirements analysis, project planning, system design, detailed design, coding and unit
testing, and system integration and testing. The linear ordering of activities has some important
consequences. First, to clearly identify the end of one phase and the beginning of the next, some
certification mechanism has to be employed at the end of each phase. This is usually done through
verification and validation, which confirm that the output of a phase is consistent with its input
(which is the output of the previous phase) and that the output of the phase is consistent with
the overall requirements of the system.

The consequence of the need for certification is that each phase must have some defined
output that can be evaluated and certified. Therefore, when the activities of a phase are
completed, there should be an output product of that phase, and the goal of a phase is to
produce this product. The outputs of the earlier phases are often called intermediate products
or design documents. For the coding phase, the output is the code. From this point of view, the
output of a software project is not just the final program, but the final program along with the
user documentation and the requirements document, design document, project plan, test plan
and test results.

Another implication of the linear ordering of phases is that after each phase is completed and
its outputs are certified, these outputs become the inputs to the next phase and should not be
changed or modified. However, changing requirements cannot be avoided and must be faced.
Since changes performed in the output of one phase affect the later phases that might already
have been performed, these changes have to be made in a controlled manner after evaluating the
effect of each change on the project. This brings us to the need for configuration control or
configuration management.

The certified output of a phase that is released for the next phase is called a baseline.
Configuration management ensures that any changes to a baseline are made after careful
review, keeping in mind the interests of all parties that are affected by it. There are two basic
assumptions for justifying the linear ordering of phases in the manner proposed by the
waterfall model:

• For a successful project resulting in a successful product, all phases listed in the waterfall
model must be performed anyway.

• Any different ordering of the phases will result in a less successful software product.

Project Output in a Waterfall Model

As we have seen, the output of a project employing the waterfall model is not just the final
program along with documentation to use it. There are a number of intermediate outputs,
which must be produced in order to produce a successful product.

The set of documents that forms the minimum that should be produced in each project are:

• Requirement document
• Project plan
• System design document
• Detailed design document
• Test plan and test report
• Final code
• Software manuals (user manual, installation manual etc.)
• Review reports

Except for the last one, these are all the outputs of the phases. In order to certify an output
product of a phase before the next phase begins, reviews are often held. Reviews are
necessary especially for the requirements and design phases, since other certification means
are frequently not available. Reviews are formal meetings held to uncover deficiencies in a product.
The review reports are the outcome of these reviews.

Q 4. Explain the Different types of Software Measurement Techniques.

Ans: A software quality factor is a non-functional requirement for a software program which
is not called up by the customer's contract, but nevertheless is a desirable requirement which
enhances the quality of the software program. Note that none of these factors are binary; that
is, they are not “either you have it or you don’t” traits. Rather, they are characteristics that
one seeks to maximize in one’s software to optimize its quality. So rather than asking
whether a software product “has” factor x, ask instead the degree to which it does (or does
not).

Some software quality factors are listed here:

Understandability - Clarity of purpose. This goes further than just a statement of purpose; all of the
design and user documentation must be clearly written so that it is easily understandable. This is
obviously subjective in that the user context must be taken into account: for instance, if the software
product is to be used by software engineers it is not required to be understandable to the layman.

Completeness - Presence of all constituent parts, with each part fully developed. This means that if
the code calls a subroutine from an external library, the software package must provide reference to
that library and all required parameters must be passed. All required input data must also be
available.

Conciseness - Minimization of excessive or redundant information or processing. This is important
where memory capacity is limited, and it is generally considered good practice to keep lines of code to
a minimum. It can be improved by replacing repeated functionality by one subroutine or function which
achieves that functionality. It also applies to documents.

Portability - Ability to be run well and easily on multiple computer configurations. Portability can mean
both between different hardware—such as running on a PC as well as a smart phone—and between
different operating systems—such as running on both Mac OS X and GNU/Linux.

Consistency - Uniformity in notation, symbology, appearance, and terminology within itself.

Maintainability - Propensity to facilitate updates to satisfy new requirements. Thus the software
product that is maintainable should be well-documented, should not be complex, and should have
spare capacity for memory, storage and processor utilization and other resources.

Testability - Disposition to support acceptance criteria and evaluation of performance. Such a
characteristic must be built-in during the design phase if the product is to be easily testable; a
complex design leads to poor testability.

Usability - Convenience and practicality of use. This is affected by such things as the human-
computer interface. The component of the software that has most impact on this is the user interface
(UI), which for best usability is usually graphical (i.e. a GUI).

Reliability - Ability to be expected to perform its intended functions satisfactorily. This implies a time
factor in that a reliable product is expected to perform correctly over a period of time. It also
encompasses environmental considerations in that the product is required to perform correctly in
whatever conditions it finds itself (sometimes termed robustness).

Efficiency - Fulfillment of purpose without waste of resources, such as memory, space and processor
utilization, network bandwidth, time, etc.

Security - Ability to protect data against unauthorized access and to withstand malicious or
inadvertent interference with its operations. Besides the presence of appropriate security mechanisms
such as authentication, access control and encryption, security also implies resilience in the face of
malicious, intelligent and adaptive attackers.

Measurement of software quality factors

There are varied perspectives within the field on measurement. There are a great many
measures that are valued by some professionals—or in some contexts, that are decried as
harmful by others. Some believe that quantitative measures of software quality are essential.
Others believe that contexts where quantitative measures are useful are quite rare, and so
prefer qualitative measures. Several leaders in the field of software testing have written about
the difficulty of measuring well what we truly want to measure.

Understandability

Are variable names descriptive of the physical or functional property represented? Do
uniquely recognizable functions contain adequate comments so that their purpose is clear?
Are deviations from forward logical flow adequately commented? Are all elements of an
array functionally related?

Completeness

Are all necessary components available? Does any process fail for lack of resources or
programming? Are all potential pathways through the code accounted for, including proper
error handling?

Conciseness

Is all code reachable? Is any code redundant? How many statements within loops could be
placed outside the loop, thus reducing computation time? Are branch decisions too complex?
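
As a hypothetical illustration of the question about statements within loops, the conversion
factor below is loop-invariant, so the concise version computes it once outside the loop:

import math

def to_radians_wasteful(degrees_list):
    out = []
    for d in degrees_list:
        factor = math.pi / 180.0   # recomputed on every iteration
        out.append(d * factor)
    return out

def to_radians_concise(degrees_list):
    factor = math.pi / 180.0       # loop-invariant work hoisted out of the loop
    return [d * factor for d in degrees_list]

print(to_radians_concise([0, 90, 180]))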

Portability

Does the program depend upon system or library routines unique to a particular installation?
Have machine-dependent statements been flagged and commented? Has dependency on
internal bit representation of alphanumeric or special characters been avoided? How much
effort would be required to transfer the program from one hardware/software system or
environment to another?

Consistency

Is one variable name used to represent different logical or physical entities in the program?
Does the program contain only one representation for any given physical or mathematical
constant? Are functionally similar arithmetic expressions similarly constructed? Is a
consistent scheme used for indentation, nomenclature, the color palette, fonts and other visual
elements?

Maintainability

Has some memory capacity been reserved for future expansion? Is the design cohesive—i.e.,
does each module have distinct, recognizable functionality? Does the software allow for a
change in data structures (object-oriented designs are more likely to allow for this)? If the
code is procedure-based (rather than object-oriented), is a change likely to require
restructuring the main program, or just a module?

Testability

Are complex structures employed in the code? Does the detailed design contain clear pseudo-
code? Is the pseudo-code at a higher level of abstraction than the code? If tasking is used in
concurrent designs, are schemes available for providing adequate test cases?

Usability

Is a GUI used? Is there adequate on-line help? Is a user manual provided? Are meaningful
error messages provided?

Reliability

Are loop indexes range-tested? Is input data checked for range errors? Is divide-by-zero
avoided? Is exception handling provided? Reliability is the probability that the software
performs its intended functions correctly over a specified period of time under stated
operating conditions. Note that a failure need not originate in the code; there could also be a
problem with the requirements document.

Efficiency

Have functions been optimized for speed? Have repeatedly used blocks of code been formed
into subroutines? Has the program been checked for memory leaks or overflow errors?

Security

Does the software protect itself and its data against unauthorized access and use? Does it
allow its operator to enforce security policies? Are security mechanisms appropriate,
adequate and correctly implemented? Can the software withstand attacks that can be
anticipated in its intended environment?

Q 5. Explain the COCOMO Model & Software Estimation Technique.

Ans: COCOMO Model - Intermediate COCOMO computes software development effort as a
function of program size and a set of "cost drivers" that include subjective assessment of
product, hardware, personnel and project attributes. This extension considers four categories
of "cost drivers", each with a number of subsidiary attributes:

• Product attributes
o Required software reliability
o Size of application database
o Complexity of the product
• Hardware attributes
o Run-time performance constraints
o Memory constraints
o Volatility of the virtual machine environment
o Required turnaround time
• Personnel attributes
o Analyst capability
o Software engineer capability
o Applications experience
o Virtual machine experience
o Programming language experience
• Project attributes
o Use of software tools
o Application of software engineering methods
o Required development schedule

Each of the 15 attributes receives a rating on a six-point scale that ranges from "very
low" to "extra high" (in importance or value). An effort multiplier from the table below
applies to the rating. The product of all effort multipliers results in an effort adjustment
factor (EAF). Typical values for EAF range from 0.9 to 1.4.

Ratings (effort multipliers)

Cost Drivers                                    Very Low  Low   Nominal  High  Very High  Extra High
Product attributes
 Required software reliability                  0.75      0.88  1.00     1.15  1.40
 Size of application database                             0.94  1.00     1.08  1.16
 Complexity of the product                      0.70      0.85  1.00     1.15  1.30       1.65
Hardware attributes
 Run-time performance constraints                               1.00     1.11  1.30       1.66
 Memory constraints                                             1.00     1.06  1.21       1.56
 Volatility of the virtual machine environment            0.87  1.00     1.15  1.30
 Required turnaround time                                 0.87  1.00     1.07  1.15
Personnel attributes
 Analyst capability                             1.46      1.19  1.00     0.86  0.71
 Applications experience                        1.29      1.13  1.00     0.91  0.82
 Software engineer capability                   1.42      1.17  1.00     0.86  0.70
 Virtual machine experience                     1.21      1.10  1.00     0.90
 Programming language experience                1.14      1.07  1.00     0.95
Project attributes
 Application of software engineering methods    1.24      1.10  1.00     0.91  0.82
 Use of software tools                          1.24      1.10  1.00     0.91  0.83
 Required development schedule                  1.23      1.08  1.00     1.04  1.10

The Intermediate Cocomo formula now takes the form:

E = ai * (KLoC)^bi * EAF

Where E is the effort applied in person-months, KLoC is the estimated number of thousands
of delivered lines of code for the project, and EAF is the factor calculated above. The
coefficient ai and the exponent bi are given in the next table.

Software project    ai    bi
Organic             3.2   1.05
Semi-detached       3.0   1.12
Embedded            2.8   1.20

The Development time D calculation uses E in the same way as in the Basic COCOMO.
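
The following minimal Python sketch applies the Intermediate COCOMO formula above; the
schedule coefficients used for D are the usual Basic COCOMO values and are an assumption
here, since the text only notes that D is computed as in Basic COCOMO:

# Intermediate COCOMO: E = ai * (KLoC ** bi) * EAF   (effort in person-months)
# Schedule as in Basic COCOMO: D = c * (E ** d)      (development time in months)

MODES = {
    # mode: (ai, bi, c, d); c and d are the standard Basic COCOMO values (assumed here)
    "organic":       (3.2, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (2.8, 1.20, 2.5, 0.32),
}

def intermediate_cocomo(kloc, mode, multipliers):
    # Returns the effort adjustment factor, the effort E and the development time D.
    ai, bi, c, d = MODES[mode]
    eaf = 1.0
    for m in multipliers:          # product of the selected effort multipliers
        eaf *= m
    effort = ai * (kloc ** bi) * eaf
    duration = c * (effort ** d)
    return eaf, effort, duration

# Example: a 32 KLoC organic-mode project with high required reliability (1.15)
# and high analyst capability (0.86); all other cost drivers nominal (1.00).
eaf, e, d = intermediate_cocomo(32, "organic", [1.15, 0.86])
print(f"EAF = {eaf:.3f}, E = {e:.1f} person-months, D = {d:.1f} months")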

Software Estimation Technique

Before we begin, we need to understand what types of estimates we can provide. Estimates can
be roughly divided into three types:

1. Ballpark or order of magnitude: Here the estimate may be off by as much as an order of
magnitude from the final figure. Ideally, it would fall within two or three times the actual value.
2. Rough estimates: Here the estimate is closer to the actual value. Ideally it will be about 50%
to 100% off the actual value.
3. Fair estimates: This is a very good estimate. Ideally it will be about 25% to 50% off the actual
value.

Deciding which of these three different estimates you can provide is crucial. Fair estimates
are possible when you are very familiar with what needs to be done and you have done it
many times before. This sort of estimate is possible when doing maintenance type work
where the fixes are known, or one is adding well-understood functionality that has been done
before. Rough estimates are possible when working with well-understood needs and one is
familiar with domain and technology issues. In all other cases, the best we can hope for
before we begin is order of magnitude estimates. Some may quibble that order of magnitude
estimates are close to no estimate at all! However, they are very valuable because they give
the organization and project team some idea of what the project is going to need in terms of
time, resources, and money. It is better to know that something is going to take between two
and six months to do rather than have no idea how much time it will take. In many cases, we
may be able to give more detailed estimates for some items rather than others. For example,
we may be able to provide a rough estimate of the infrastructure we need but only an order of
magnitude estimate of the people and time needed.

Doing an order of magnitude estimate

This is what most of us face when starting off a new project. New technology, teams
unfamiliar with the technology or domain, or unclear requirements ensure that this will
probably be the best estimate we can provide.

1. Break the project down into the different tasks needed. Try to get as many tasks as possible.
A useful way to break down tasks is to consider typical software activities such as analysis,
design, build, demo, test, fix, document, deploy, and support and see if they are required for
each task and whether they need to be broken out into new tasks.
2. Evaluate each task on two scales: complexity (high, medium, low) and size of work (large,
medium, small). A less complex task may still involve a large amount of work; for example,
loading a database with information from paper forms may take several weeks. A very
complex task may not involve much actual work but can still take a lot of time, as in tuning a
database for optimum performance. Complex tasks are usually hard to split between many
people/teams while large-size, less complex tasks can usually be split up between many
people/teams.
3. Tasks effectively fall into one of nine combinations of complexity and size. For each
combination, define an expected amount of time and resources required. For example, we
could say that low complexity and small-size tasks will take one week at most, medium
complexity and small-size tasks will take three weeks, and so on. These weighting factors will
differ based on the team and project and should be reviewed after the project to help get
better values the next time. Add together all these values for each task to get an estimate of
time and resources required (a small calculation sketch follows the sample table below).

                   Complexity
Size               Low                                  Medium   High
Small                                                            1) Tune database
Medium
Large              1) Load data                                  1) Integrate with security system
                   2) Create data validation routines

Figure 1. A sample table used for doing an order of magnitude estimate.
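
Step 3 above can be mechanized as in the following sketch (the week values in the weighting
table are illustrative assumptions, not figures from the text): each task is rated on the two
scales, looked up in the table, and the resulting values are added together.

# Hypothetical weighting table: (complexity, size) -> estimated effort in weeks.
WEEKS = {
    ("low", "small"): 1,    ("low", "medium"): 2,    ("low", "large"): 6,
    ("medium", "small"): 2, ("medium", "medium"): 4, ("medium", "large"): 8,
    ("high", "small"): 3,   ("high", "medium"): 6,   ("high", "large"): 12,
}

# The tasks from the sample table, each rated on the two scales.
tasks = [
    ("Tune database",                   "high", "small"),
    ("Load data",                       "low",  "large"),
    ("Create data validation routines", "low",  "large"),
    ("Integrate with security system",  "high", "large"),
]

total = 0
for name, complexity, size in tasks:
    weeks = WEEKS[(complexity, size)]
    total += weeks
    print(f"{name:35s} {complexity}/{size} -> {weeks} weeks")
print(f"Order of magnitude estimate: about {total} task-weeks")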

Doing rough and fair estimates

These estimates can be done when you have a good idea of the tasks to be done and how to
do them.

1. Those who will do the actual work are the best people to do these estimates. One then can
add up all the estimates from different people to get the final estimates.
2. Ensure you collect estimates on the three variables of time, people, and
infrastructure/material needs.
3. Break down tasks to as detailed a level as possible. As mentioned previously, it can help to
consider typical software activities such as analysis, design, build, demo, test, fix, document,
deploy, and support and see if they are required for each task. Break tasks down to a
granularity of eighty hours or less.

Q6. Write a note on myths of Software

Ans: Software Myths- beliefs about software and the process used to build it - can be traced
to the earliest days of computing. Myths have a number of attributes that have made them
insidious. For instance, myths appear to be reasonable statements of fact, they have an
intuitive feel, and they are often promulgated by experienced practitioners who "know the
score".

Management Myths
Managers with software responsibility, like managers in most disciplines, are often under
pressure to maintain budgets, keep schedules from slipping, and improve quality. Like a
drowning person who grasps at a straw, a software manager often grasps at belief in a
software myth, if that belief will lessen the pressure.

Myth : We already have a book that's full of standards and procedures for building software.
Won't that provide my people with everything they need to know?
Reality : The book of standards may very well exist, but is it used?
- Are software practitioners aware of its existence?
- Does it reflect modern software engineering practice?
- Is it complete? Is it adaptable?
- Is it streamlined to improve time to delivery while still maintaining a focus on Quality?


In many cases, the answer to all of these questions is no.

Myth : If we get behind schedule, we can add more programmers and catch up (sometimes
called the Mongolian horde concept).
Reality : Software development is not a mechanistic process like manufacturing. In the words
of Brooks [BRO75]: "Adding people to a late software project makes it later." At first, this
statement may seem counterintuitive. However, as new people are added, people who were
working must spend time educating the newcomers, thereby reducing the amount of time
spent on productive development effort.

Myth : If we decide to outsource the software project to a third party, I can just relax and let
that firm build it.
Reality : If an organization does not understand how to manage and control software projects
internally, it will invariably struggle when it outsources a software project.

Customer Myths
A customer who requests computer software may be a person at the next desk, a technical
group down the hall, the marketing /sales department, or an outside company that has
requested software under contract. In many cases, the customer believes myths about
software because software managers and practitioners do little to correct misinformation.
Myths lead to false expectations and, ultimately, dissatisfaction with the developers.

Myth : A general statement of objectives is sufficient to begin writing programs; we can fill in
the details later.
Reality : Although a comprehensive and stable statement of requirements is not always
possible, an ambiguous statement of objectives is a recipe for disaster. Unambiguous
requirements are developed only through effective and continuous communication between
customer and developer.

Myth : Project requirements continually change, but change can be easily accommodated
because software is flexible.
Reality : It's true that software requirements change, but the impact of a change varies with the
time at which it is introduced. When requirement changes are requested early, cost impact is
relatively small. However, as time passes, cost impact grows rapidly - resources have been
committed, a design framework has been established, and change can cause upheaval that
requires additional resources and major design modification.

