February 2011
Master of Computer Application (MCA) – Semester 3
MC0071 – Software Engineering – 4 Credits
(Book ID: B0808 & B0809) Assignment Set – 1
1. Describe the concurrent development model in your own words.
Ans: The concurrent process model can be represented schematically as a series of major technical
activities, tasks, and their associated states. For example, the engineering activity defined for the spiral
model is accomplished by invoking the following tasks: prototyping and/or analysis modeling,
requirements specification, and design.
The figure below provides a schematic representation of one activity within the concurrent process
model. The activity (analysis) may be in any one of the states noted at any given time. Similarly, other
activities (e.g. design or customer communication) can be represented in an analogous manner. All
activities exist concurrently but reside in different states. For example, early in a project the customer
communication activity has completed its first iteration and exists in the awaiting changes state. The
analysis activity (which existed in the none state while initial customer communication was completed)
now makes a transition into the under development state. If the customer indicates that changes in
requirements must be made, the analysis activity moves from the under development state into the
awaiting changes state.
The concurrent process model defines a series of events that will trigger transitions from state to state
for each of the software engineering activities. For example, during the early stages of design, an
inconsistency in the analysis model is uncovered. This generates the event analysis model correction,
which will trigger the analysis activity from the done state into the awaiting changes state.
The concurrent process model is often used as the paradigm for the development of client/server
applications. A client/server system is composed of a set of functional components. When applied to
client/server systems, the concurrent process model defines activities in two dimensions: a system
dimension and a component dimension. System-level issues are addressed using three activities:
design, assembly, and use. The component dimension is addressed with two activities: design and
realization. Concurrency is achieved in two ways: (1) system and component activities occur
simultaneously and can be modeled using the state-oriented approach; (2) a typical client/server
application is implemented with many components, each of which can be designed and realized
concurrently.
The concurrent process model is applicable to all types of software development and provides an
accurate picture of the current state of a project. Rather than confining software engineering
activities to a sequence of events, it defines a network of activities. Each activity on the network
exists simultaneously with other activities. Events generated within a given activity, or at some other
place in the activity network, trigger transitions among the states of an activity.
[Figure: One element of the concurrent process model – the states of a software engineering activity:
under development, awaiting changes, baselined, done.]
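To make the state/event idea concrete, the sketch below (not part of the original text) models one
activity's states and the transition triggered by the analysis model correction event described above.
The state and event names follow the figure and the surrounding discussion; everything else is
illustrative.

    #include <iostream>

    // States of one software engineering activity (from the figure above).
    enum class ActivityState { None, UnderDevelopment, AwaitingChanges, Baselined, Done };

    // Events that trigger transitions between states (illustrative subset).
    enum class Event { StartWork, AnalysisModelCorrection, ChangesResolved, Baseline, Complete };

    ActivityState transition(ActivityState current, Event e) {
        switch (e) {
            case Event::StartWork:               return ActivityState::UnderDevelopment;
            case Event::AnalysisModelCorrection: return ActivityState::AwaitingChanges;
            case Event::ChangesResolved:         return ActivityState::UnderDevelopment;
            case Event::Baseline:                return ActivityState::Baselined;
            case Event::Complete:                return ActivityState::Done;
        }
        return current;
    }

    int main() {
        ActivityState analysis = ActivityState::Done;
        // An inconsistency found during design raises the correction event:
        analysis = transition(analysis, Event::AnalysisModelCorrection);
        std::cout << (analysis == ActivityState::AwaitingChanges) << "\n";   // prints 1
    }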
Component based development model:
This model incorporates the characteristics of the spiral model. It is evolutionary in nature,
demanding an iterative approach to the creation of software. However, the component‐based
development model composes applications from prepackaged software components called classes.
Classes created in past software engineering projects are stored in a class library or repository. Once
candidate classes are identified, the class library is searched to determine if these classes already
exist. If they do, they are extracted from the library and reused. If a candidate class does not reside in
the library, it is engineered using object‐oriented methods. The first iteration of the application to be
built is then composed using classes extracted from the library and any new classes built to meet the
unique needs of the application. Process flow then returns to the spiral and will ultimately re‐enter
the component assembly iteration during subsequent passes through the engineering activity.
The component-based development model leads to software reuse, and reusability provides software
engineers with a number of measurable benefits, although it is very much dependent on the
robustness of the component library.
2. Explain the following concepts with respect to Software Reliability:
A) Software Reliability Metrics
Ans: Metrics which have been used for software reliability specification are shown in the figure below. The
choice of which metric should be used depends on the type of system to which it applies and the
requirements of the application domain. For some systems, it may be appropriate to use different
reliability metrics for different sub-systems.
POFOD (probability of failure on demand): a measure of the likelihood that the system will fail when a
service request is made. For example, a POFOD of 0.001 means that 1 out of 1000 service requests may
result in failure. Relevant to safety-critical and non-stop systems, such as hardware control systems.
AVAIL (availability): a measure of how likely the system is to be available for use. For example, an
availability of 0.998 means that in every 1000 time units, the system is likely to be available for 998 of
these. Relevant to continuously running systems, such as telephone switching systems.
Reliability metrics
In some cases, system users are most concerned about how often the system will fail, perhaps
because there is a significant cost in restarting the system. In those cases, a metric based on a rate of
failure occurrence (ROCOF) or the mean time to failure should be used.
In other cases, it is essential that a system should always meet a request for service because there is
some cost in failing to deliver the service. The number of failures in some time period is less
important. In those cases, a metric based on the probability of failure on demand (POFOD) should be
used. Finally, users or system operators may be mostly concerned that the system is available when a
request for service is made. They will incur some loss if the system is unavailable. Availability (AVAIL),
which takes into account repair or restart time, is then the most appropriate metric.
There are three kinds of measurement which can be made when assessing the reliability of a system:
1. The number of system failures given a number of system inputs. This is used to measure the
POFOD.
2. The time (or number of transactions) between system failures. This is used to measure ROCOF and
MTTF.
3. The elapsed repair or restart time when a system failure occurs. Given that the system must be
continuously available, this is used to measure AVAIL.
Time is a factor in all of these reliability metrics. It is essential that appropriate time units be
chosen if measurements are to be meaningful. Time units which may be used are calendar time,
processor time, or some discrete unit such as the number of transactions.
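As a rough, hypothetical illustration of how these three measurements map onto the metrics, the
following sketch computes POFOD, MTTF, ROCOF and AVAIL from invented counts and failure intervals:

    #include <iostream>
    #include <vector>

    int main() {
        // POFOD: failures observed over a number of service requests.
        double requests = 1000.0, failures_on_demand = 1.0;
        double pofod = failures_on_demand / requests;             // 0.001

        // MTTF / ROCOF: from the times (arbitrary units) between failures.
        std::vector<double> times_between_failures = {120.0, 80.0, 100.0};
        double total = 0.0;
        for (double t : times_between_failures) total += t;
        double mttf  = total / times_between_failures.size();     // mean time to failure
        double rocof = 1.0 / mttf;                                // failures per time unit

        // AVAIL: fraction of the time the system is available.
        double uptime = 998.0, total_time = 1000.0;
        double avail = uptime / total_time;                       // 0.998

        std::cout << pofod << " " << mttf << " " << rocof << " " << avail << "\n";
    }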
Software reliability specification
In system requirements documents, reliability requirements are often expressed in an informal, qualitative,
untestable way. Ideally, the required level of reliability should be expressed quantitatively in the
software requirement specification. Depending on the type of system, one or more of the metrics
discussed in the previous section may be used for reliability specifications. Statistical testing
techniques (discussed later) should be used to measure the system reliability. The software test plan
should include an operational profile of the software to assess its reliability.
The steps involved in establishing a reliability specification are as follows:
1. For each identified sub-system, identify the different types of system failure which may occur and
analyze the consequences of these failures.
2. From the system failure analysis, partition failures into appropriate classes. A reasonable starting
point is to use the failure types shown in the figure below. For each failure class identified,
define the reliability requirement using the appropriate reliability metric. It is not necessary to use
the same metric for different classes of failure. For example, where a failure requires some
intervention to recover from it, the probability of that failure occurring on demand might be the
most appropriate metric. When automatic recovery is possible and the effect of the failure is some
user inconvenience, ROCOF might be more appropriate.
Failure class     Description
Recoverable       System can recover without operator intervention
Unrecoverable     Operator intervention needed to recover from failure
Failure classification
The cost of developing and validating a reliability specification for software system is very high.
Statistical testing
Statistical testing is a software testing process in which the objective is to measure the reliability of
the software rather than to discover software faults. It uses different test data from defect testing,
which is intended to find faults in the software.
The steps involved in statistical testing are:
1. Determine the operational profile of the software. The operational profile is the probable pattern
of usage of the software. This can be determined by analyzing historical data to discover the
different classes of input to the program and the probability of their occurrence.
2. Select or generate a set of test data corresponding to the operational profile.
3. Apply these test cases to the program, recording the amount of execution time between each
observed system failure. It may not be appropriate to use raw execution time. As discussed in the
previous section, the time units chosen should be appropriate for the reliability metric used.
4. After a statistically significant number of failures have been observed, the software reliability can
then be computed. This involves using the number of failures detected and the time between these
failures to compute the required reliability metric.
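A minimal sketch of steps 1 to 3, assuming a made-up operational profile with two input classes and a
stubbed system under test; the probabilities, the apply_input stub and its failure rate are all invented
for illustration.

    #include <random>
    #include <vector>
    #include <iostream>

    int main() {
        std::mt19937 gen(42);

        // Step 1: operational profile - probability of each input class (invented).
        std::discrete_distribution<int> input_class({0.9, 0.1});

        // Stub for the system under test: class-1 inputs fail roughly 1% of the time.
        std::bernoulli_distribution class1_fails(0.01);
        auto apply_input = [&](int cls) { return cls == 1 && class1_fails(gen); };  // true = failure

        // Steps 2-3: generate inputs from the profile and record the number of
        // transactions between observed failures.
        std::vector<int> intervals;
        int since_last_failure = 0;
        for (int i = 0; i < 100000; ++i) {
            ++since_last_failure;
            if (apply_input(input_class(gen))) {
                intervals.push_back(since_last_failure);
                since_last_failure = 0;
            }
        }
        std::cout << "observed failures: " << intervals.size() << "\n";
        // Step 4 would compute the chosen reliability metric (e.g. ROCOF) from these intervals.
    }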
This approach to reliability estimation is not easy to apply in practice. The difficulties that arise are
due to:
• Operational profile uncertainty;
• High cost of operational profile generation;
• Statistical uncertainty when high reliability is specified.
B) Programming for Reliability
Ans: There is a general requirement for more reliable systems in all application domains. Customers expect
their software to operate without failures and to be available when it is required. Improved
programming techniques, better programming languages and better quality management have led to
very significant improvements in reliability for most software. However, for some systems, such as
those which control unattended machinery, these ‘normal’ techniques may not be enough to achieve
the level of reliability required. In these cases, special programming techniques may be necessary to
achieve the required reliability. Some of these techniques are discussed below.
Reliability in a software system can be achieved using three strategies:
• Fault avoidance: This is the most important strategy, which is applicable to all types of system. The
design and implementation process should be organized with the objective of producing fault‐free
systems.
• Fault tolerance: This strategy assumes that residual faults remain in the system. Facilities are
provided in the software to allow operation to continue when these faults cause system failures.
• Fault detection: Faults are detected before the software is put into operation. The software
validation process uses static and dynamic methods to discover any faults, which remain in a
system after implementation.
Fault avoidance
A good software process should be oriented towards fault avoidance rather than fault detection and
removal. It should have the objective of developing fault-free software. Fault-free software means
software which conforms to its specification. Of course, there may be errors in the specification, or it
may not reflect the real needs of the user, so fault-free software does not necessarily mean that the
software will always behave as the user wants.
Fault avoidance and the development of fault-free software rely on:
1. The availability of a precise system specification, which is an unambiguous description of what
must be implemented.
2. The adoption of an organizational quality philosophy in which quality is the driver of the software
process. Programmers should expect to write bug-free programs.
3. The adoption of an approach to software design and implementation which is based on information
hiding and encapsulation and which encourages the production of readable programs.
4. The use of a strongly typed programming language so that possible errors are detected by the
language compiler.
5. Restrictions on the use of programming constructs, such as pointers, which are inherently error-
prone.
Achieving fault-free software is virtually impossible if low-level programming languages with limited
type checking are used for program development.
We must be realistic and accept that human errors will always occur. Faults may remain in the
software after development. Therefore, the development process must include a validation phase,
which checks the developed software for the presence of faults. This validation phase is usually very
expensive. As faults are removed from a program, the cost of finding and removing remaining faults
tends to rise exponentially. As the software becomes more reliable, more and more testing is required
to find fewer and fewer faults.
Structured programming and error avoidance
Structured programming is a term used to mean programming without go to statements, using only
while loops and if statements as control constructs, and designing with a top-down approach. The
adoption of structured programming was an important milestone in the
development of software engineering because it was the first step away from an undisciplined
approach to software development.
The go to statement was an inherently error-prone programming construct. The disciplined use of
control structures forces programmers to think carefully about their programs. Hence they are less likely to
make mistakes during development. Structured programming means programs can be read
sequentially and are therefore easier to understand and inspect. However, avoiding unsafe control
statements is only the first step in programming for reliability.
Faults are less likely to be introduced into programs if the use of certain inherently error-prone
constructs is minimized. These constructs include:
1. Floating-point numbers: Floating-point numbers are inherently imprecise. They present a particular
problem when they are compared, because representation imprecision may lead to invalid
comparisons (see the sketch after this list). Fixed-point numbers, where a number is represented to a
given number of decimal places, are safer, as exact comparisons are possible.
2. Pointer: Pointers are low‐level constructs, which refer directly to areas of the machine memory.
They are dangerous because errors in their use can be devastating and because they allow
‘aliasing’: the same entity may be referenced using different names. Aliasing makes programs harder
to understand, so that errors are more difficult to find. However, efficiency requirements mean that it
is often impractical to avoid the use of pointers.
3. Dynamic memory allocation: Program memory is allocated at run‐time rather than compile‐time.
The danger with this is that the memory may not be de‐allocated so that the system eventually
runs out of available memory. This can be a very subtle type of error to detect, as the system may
run successfully for a long time before the problem occurs.
4. Parallelism: Parallelism is dangerous because of the difficulties of predicting the subtle effects of
timing interactions between parallel processes. Timing problems cannot usually be detected by
program inspection, and the peculiar combination of circumstances which causes a timing problem
may not occur during system testing. Parallelism may be unavoidable but its use should be carefully
controlled to minimize inter‐process dependencies. Programming language facilities, such as Ada
tasks, help avoid some of the problems of parallelism as the compiler can detect some kinds of
programming errors.
5. Recursion: Recursion is the situation in which a subroutine calls itself or calls another subroutine,
which then calls the calling subroutine. Its use can result in very concise programs but it can be
difficult to follow the logic of recursive programs. Errors in using recursion may result in the
allocation of all the system's memory as temporary stack variables are created.
6. Interrupts: Interrupts are a means of forcing control to transfer to a section of code irrespective of
the code currently executing. The dangers of this are obvious as the interrupt may cause a critical
operation to be terminated.
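As an aside on item 1 above, the following small sketch (not from the original text) shows why exact
comparison of floating-point numbers is invalid and how a tolerance-based comparison is safer:

    #include <cmath>
    #include <iostream>

    int main() {
        double a = 0.1 + 0.2;                 // not exactly 0.3 in binary floating point
        double b = 0.3;

        std::cout << (a == b) << "\n";        // prints 0: exact comparison fails

        // Safer: compare within a tolerance appropriate to the application.
        const double eps = 1e-9;
        std::cout << (std::fabs(a - b) < eps) << "\n";   // prints 1
    }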
Fault tolerance
A fault‐tolerant system can continue in operation after some system failures have occurred. Fault
tolerance is needed in situations where system failure would cause some accident or where a loss of
system operation would cause large economic losses. For example, the computers in an aircraft must
continue in operation until the aircraft has landed; the computers in an air traffic control system must be
continuously available.
Fault-tolerance facilities are required if the system is to continue operating in the presence of faults.
There are four aspects to fault tolerance.
1. Failure detection: The system must detect that a particular state combination has resulted in, or will
result in, a system failure.
2. Damage assessment: The parts of the system state, which have been affected by the failure, must
be detected.
3. Fault recovery: The system must restore its state to a known ‘safe’ state. This may be achieved by
correcting the damaged state (forward error recovery) or by restoring the system to a previous known
‘safe’ state (backward error recovery). Forward error recovery is more complex, as it involves diagnosing
system faults and knowing what the system state should have been had the faults not caused a system
failure.
4. Fault repair: This involves modifying the system so that the fault does not recur. In many cases,
software failures are transient and due to a peculiar combination of system inputs. No repair is
necessary as normal processing can resume immediately after fault recovery. This is an important
distinction between hardware and software faults.
There has been a need for many years to build fault‐tolerant hardware. The most commonly used
hardware fault‐tolerant technique is based around the notion of triple‐modular redundancy (TMR)
shown in the figure below. The hardware unit is replicated three (or sometimes more) times. The
output from each unit is compared. If one of the units fails and does not produce the same output as
the other units, its output is ignored. The system functions with two working units.
[Figure: Triple modular redundancy to cope with hardware failure – three replicated units A1, A2 and A3
feed a comparator, which produces the output.]
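A minimal sketch of the comparator's 2-out-of-3 voting logic (an illustrative reconstruction, not taken
from the text):

    #include <iostream>
    #include <stdexcept>

    // 2-out-of-3 voter: returns the majority value; a single failed unit is outvoted.
    int tmr_vote(int a1, int a2, int a3) {
        if (a1 == a2 || a1 == a3) return a1;   // unit 2 or 3 outvoted if it differs
        if (a2 == a3)             return a2;   // unit 1 outvoted
        throw std::runtime_error("no majority: the comparator cannot mask the failure");
    }

    int main() {
        // Unit A2 has failed and produces a wrong output; it is simply ignored.
        std::cout << tmr_vote(42, 7, 42) << "\n";   // prints 42
    }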
The weakness of both of the software fault-tolerance approaches described below is that they are based
on the assumption that the specification is correct. They do not tolerate specification errors.
There have been two comparable approaches to the provision of software fault tolerance. Both have
been derived from the hardware model where a component is replicated.
(1) N‐version programming: Using a common specification, the software system is implemented in a
number of different versions by different teams. These versions are executed in parallel. Their
outputs are compared using a voting system and inconsistent outputs are rejected. At least three
versions of the system should be available.
[Figure: N-version programming.]
(2) Recovery Blocks: This is a finer-grain approach to fault tolerance. Each program component
includes a test to check whether the component has executed successfully. It also includes alternative
code, which allows the system to back up and repeat the computation if the test detects a failure.
Unlike N-version programming, the alternatives are deliberately different implementations rather than
independent implementations of the same specification, and they are executed in sequence rather than
in parallel.
[Figure: Recovery blocks – try algorithm 1 and apply the acceptance test; if the test succeeds, execution
continues; if it fails, algorithm 2 is retried against the acceptance test; an exception is signalled if all
algorithms fail.]
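The following sketch illustrates the recovery-block structure for a single computation. The square-root
example, the acceptance test and the alternative bisection algorithm are invented for illustration.

    #include <cmath>
    #include <stdexcept>
    #include <iostream>

    // Acceptance test: checks that a computed square root is plausible.
    bool acceptable(double x, double root) { return std::fabs(root * root - x) < 1e-6; }

    // Primary algorithm and a deliberately different alternative (bisection).
    double algorithm1(double x) { return std::sqrt(x); }
    double algorithm2(double x) {
        double lo = 0.0, hi = x > 1.0 ? x : 1.0;
        for (int i = 0; i < 200; ++i) {
            double mid = (lo + hi) / 2.0;
            if (mid * mid < x) lo = mid; else hi = mid;
        }
        return (lo + hi) / 2.0;
    }

    double recovery_block(double x) {
        double r = algorithm1(x);
        if (acceptable(x, r)) return r;       // acceptance test passed: continue
        r = algorithm2(x);                    // otherwise retry with the alternative
        if (acceptable(x, r)) return r;
        throw std::runtime_error("all algorithms failed the acceptance test");
    }

    int main() { std::cout << recovery_block(2.0) << "\n"; }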
Exception Handling
When an error of some kind or an unexpected event occurs during the execution of a program, this is
called an exception. Exceptions may be caused by hardware or software errors. When an exception
has not been anticipated, control is transferred to a system exception handling mechanism. If an
exception has been anticipated, code must be included in the program to detect and handle that
exception.
Most programming languages do not include facilities to detect and handle exceptions. The normal
decision constructs (if statements) of the language must be used to detect the exception, and control
constructs used to transfer control to the exception-handling code. If an exception occurs in a sequence
of nested procedure calls, there is no easy way to transmit it from one procedure to another.
Consider the example shown in the figure below: a number of nested procedure calls where procedure A
calls procedure B, which calls procedure C. If an exception occurs during the execution of C, this may be
so serious that execution of B cannot continue. Procedure B has to return immediately to procedure
A, which must also be informed that B has terminated abnormally and that an exception has
occurred.
[Figure: Exception return in embedded procedure calls – A calls B, B calls C; an exception occurring in C
is returned through B back to A.]
An exception handler is something like a case statement. It states exception names and appropriate
actions for each exception.
[Code listing (not reproduced in this copy): Exceptions in a freezer temperature controller (C++)]
The listing referenced above illustrates the use of exceptions and exception handling. The program
fragments show the design of a temperature controller for a food freezer. The required temperature
may be set between –18 and –40 degrees Celsius. Food may start to defrost and bacteria become active
at temperatures over –18 degrees. The control system maintains this temperature by switching a
refrigerant pump on and off depending on the value of a temperature sensor. If the required
temperature cannot be maintained, the controller sets off an alarm. The temperature of the freezer is
discovered by interrogating an object called Sensor, and the required temperature by inspecting a
settings object. The exceptions Freezer_too_hot and Control_problem and the type FREEZER_TEMP are
declared explicitly, since there are no built-in exceptions in C++; other information is declared in a
separate header file.
The temperature controller tests the temperature and switches the pump as required. If the
temperature is too hot, it transfers control to the exception handler, which activates an alarm.
In C++, once an exception has been handled, it is not re-thrown.
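The original listing is not reproduced here; the fragment below is a minimal sketch of how such a
controller could look. The names Sensor, Freezer_too_hot, Control_problem and FREEZER_TEMP come
from the text above, while Pump, Alarm and the stubbed sensor reading are purely illustrative.

    // Minimal illustrative sketch, not the original listing.
    #include <iostream>

    typedef int FREEZER_TEMP;                     // temperature in degrees Celsius

    class Freezer_too_hot {};                     // exception: food may start to defrost
    class Control_problem {};                     // exception: sensor or pump failure

    struct Sensor { FREEZER_TEMP read() const { return -17; } };  // stubbed reading
    struct Pump   { void switch_on() {} void switch_off() {} };
    struct Alarm  { void activate() { std::cout << "ALARM\n"; } };

    void control_freezer(Sensor& sensor, Pump& pump, Alarm& alarm,
                         FREEZER_TEMP required_temp)
    {
        try {
            FREEZER_TEMP current = sensor.read();
            if (current > -18)                    // defrost threshold from the text
                throw Freezer_too_hot();
            if (current > required_temp)          // still warmer than the setting
                pump.switch_on();
            else
                pump.switch_off();
        }
        catch (Freezer_too_hot&) {                // required temperature cannot be
            alarm.activate();                     // maintained: set off the alarm
        }
        catch (Control_problem&) {
            alarm.activate();
        }
    }

    int main() {
        Sensor s; Pump p; Alarm a;
        control_freezer(s, p, a, -20);            // required temperature: -20 degrees
    }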
Defensive programming
Defensive programming is an approach to program development whereby programmers assume that
there may be undetected faults or inconsistencies in their programs. Redundant code is incorporated
to check the system state after modifications and to ensure that the state change is consistent. If
inconsistencies are detected, the state change is retracted or the state is restored to a known correct
state.
Defensive programming is an approach to fault tolerance, which can be carried out without a fault‐
tolerant controller. The techniques used, however, are fundamental to the activities in the fault
tolerance process, namely detecting a failure, damage assessment, and recovering from that failure.
Failure prevention
Programming languages such as Ada and C++ allow many errors which cause state corruption and
system failure to be detected at compile-time. The compiler can detect those problems by using
the strict type rules of the language. Compiler checking is obviously limited to static values, but the
compiler can also automatically add code to a program to perform run‐time checks.
Damage assessment
Damage assessment involves analyzing the system state to gauge the extent of the state corruption.
In many cases, corruption can be avoided by checking for fault occurrence before finally committing a
change of state. If a fault is detected, the state change is not accepted so that no damage is caused.
However, damage assessment may be needed when a fault arises because a sequence of state
changes (all of which are individually correct) causes the system to enter an incorrect state.
The role of the damage assessment procedures is not to recover from the fault but to assess what
parts of the state space have been affected by the fault. Damage can only be assessed if it is possible
to apply some ‘validity function’, which checks if the state is consistent. If inconsistencies are found,
these are highlighted or signaled in some way.
Other techniques which can be used for fault detection and damage assessment are dependent on
the system state representation and on the application. Possible methods are:
• The use of checksums in data exchange and check digits in numeric data;
• The use of redundant links in data structures which contain pointers;
• The use of watchdog timers in concurrent systems.
A checksum is a value that is computed by applying some mathematical function to the data. The
function used should give a value that is, in practice, unique to the packet of data being exchanged.
The sender computes the checksum by applying the function to the data and appends the computed
value to the data. The receiver applies the same function to the data and compares the checksum
values. If these differ, some data corruption has occurred.
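A minimal sketch of this idea, using a simple additive checksum (real systems would normally use a CRC
or a cryptographic hash):

    #include <cstdint>
    #include <vector>
    #include <iostream>

    // Simple additive checksum over a byte buffer (illustrative only).
    uint32_t checksum(const std::vector<uint8_t>& data) {
        uint32_t sum = 0;
        for (uint8_t byte : data) sum += byte;    // accumulate byte values
        return sum;
    }

    int main() {
        std::vector<uint8_t> packet = {'h', 'e', 'l', 'l', 'o'};
        uint32_t sent = checksum(packet);         // sender appends this value

        packet[1] = 'a';                          // simulate corruption in transit
        uint32_t received = checksum(packet);     // receiver recomputes

        if (sent != received)
            std::cout << "Data corruption detected\n";
    }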
When linked data structures are used, the representation can be made redundant by including
backward pointers. That is, for every reference from A to B, there exists a comparable reference from
B to A. It is also possible to keep count of the number of elements in the structure. Checking can
determine whether or not all pointers have an inverse value and whether or not the stored size and
the computed structure size are the same.
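A small sketch of such a redundancy check for a doubly linked list; the structure, the stored size and the
checking rule are invented for illustration:

    #include <cstddef>
    #include <iostream>

    // Doubly linked node: the backward pointer is the redundant information
    // that lets us check the structure for corruption.
    struct Node {
        int   value = 0;
        Node* next = nullptr;
        Node* prev = nullptr;
    };

    // True if every forward link has a matching backward link and the counted
    // length matches the stored size.
    bool structure_is_consistent(Node* head, std::size_t stored_size) {
        std::size_t count = 0;
        for (Node* n = head; n != nullptr; n = n->next) {
            ++count;
            if (n->next != nullptr && n->next->prev != n)
                return false;                     // missing or incorrect inverse link
        }
        return count == stored_size;              // size check
    }

    int main() {
        Node a{}, b{};
        a.next = &b;
        b.prev = &a;
        std::cout << structure_is_consistent(&a, 2) << "\n";   // prints 1
    }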
When processes must react within a specific time period, a watch‐dog timer may be installed. A
watch‐dog timer is a timer which must be reset by the executing process after its action is complete.
It is started at the same time as a process and times the process execution. If, for some reason the
process fails to terminate, the watch‐dog timer is not reset. The controller can therefore detect that a
problem has arisen and take action to force process termination.
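A minimal sketch of a software watchdog timer; the class name and interface are illustrative. A real
controller would poll expired() and force process termination (or another recovery action) when it
returns true.

    #include <chrono>
    #include <iostream>

    // The monitored process must call reset() before the timeout elapses;
    // the controller calls expired() periodically.
    class WatchdogTimer {
        std::chrono::steady_clock::time_point last_reset_;
        std::chrono::milliseconds timeout_;
    public:
        explicit WatchdogTimer(std::chrono::milliseconds timeout)
            : last_reset_(std::chrono::steady_clock::now()), timeout_(timeout) {}

        void reset() { last_reset_ = std::chrono::steady_clock::now(); }

        bool expired() const {
            return std::chrono::steady_clock::now() - last_reset_ > timeout_;
        }
    };

    int main() {
        WatchdogTimer wd(std::chrono::milliseconds(100));
        // ... the monitored process would call wd.reset() periodically ...
        std::cout << wd.expired() << "\n";        // 0 immediately after start
    }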
Fault recovery
Fault recovery is the process of modifying the state space of the system so that the effects of the fault
are minimized. The system can continue in operation, perhaps in some degraded form. Forward
recovery involves trying to correct the damaged system state. Backward recovery restores the system
state to a known ‘correct’ state.
There are two general situations where forward error recovery can be applied:
1. When coded data is corrupted: the use of coding techniques which add redundancy to the data allows
errors to be corrected as well as detected.
2. When linked structures are corrupted: if forward and backward pointers are included in the data
structure, the structure can be re-created if enough pointers remain uncorrupted. This technique is
frequently used for file system and database repair.
Backward error recovery is a simpler technique, which restores the state to a known safe state after
an error has been detected. Most database systems include backward error recovery. When a user
initiates a database computation a transaction is initiated. Changes made during that transaction are
not immediately incorporated in the database. The database is only updated after the transaction is
finished and no problems are detected. If the transaction fails, the database is not updated.
Design by Contract
Meyer suggests an approach to design, called design by contract, to help ensure that a design meets
its specifications. He begins by viewing a software system as a set of communicating components
whose interaction is based on a precisely defined specification of what each component is supposed
to do. These specifications, called contracts, govern how the component is to interact with other
components and systems. Such specifications cannot guarantee correctness, but they form a good basis
for testing and validation.
A contract is written between two parties when one commissions the other for a particular service or
product. Each party expects some benefit for some obligation; the supplier produces a service or
product in a given period of time in exchange for money, and the client accepts the service or product
for the money. The contract makes the obligations and benefits explicit.
Meyer applies the notion of a contract to software. A software component, called a client, adopts a
strategy to perform a set of tasks, t1, t2, ..., tn. In turn, each nontrivial subtask ti is executed when the
client calls another component, the supplier, to perform it; there is a contract between the two
components to perform the subtask. Each contract covers mutual obligations (called preconditions),
benefits (called postconditions), and consistency constraints (called invariants). Together, these
contract properties are called assertions.
For example, suppose the client component has a table where each element is identified by a
character string used as a key. Our supplier component's task is to insert an element from the table
into a dictionary of limited size. We can describe the contract between the two components in the
following way:
1. The client component ensures that the dictionary is not full and that the key is nonempty.
2. The supplier component records the element in the table.
3. The client component accesses the updated table, where the element appears.
4. If the table is full or the key is empty, no action is taken.
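A minimal sketch of how such a contract could be expressed in code, using assertions for the
precondition (dictionary not full, key nonempty) and the postcondition (element recorded); the
Dictionary class and its interface are illustrative and not taken from Meyer's example.

    #include <cassert>
    #include <cstddef>
    #include <map>
    #include <string>

    // Illustrative supplier component: a bounded dictionary whose contract is
    // expressed with assertions.
    class Dictionary {
        std::map<std::string, int> entries_;
        std::size_t capacity_;
    public:
        explicit Dictionary(std::size_t capacity) : capacity_(capacity) {}

        bool full() const { return entries_.size() >= capacity_; }
        bool contains(const std::string& key) const { return entries_.count(key) > 0; }

        // Precondition (client's obligation): dictionary not full, key nonempty.
        // Postcondition (supplier's obligation): the element is recorded.
        void insert(const std::string& key, int value) {
            assert(!full() && !key.empty());      // precondition
            entries_[key] = value;
            assert(contains(key));                // postcondition
        }
    };

    int main() {
        Dictionary d(100);
        d.insert("item-1", 42);   // the client has ensured the precondition holds
    }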
3. Suggest six reasons why software reliability is important. Using an example, explain the difficulties
of describing what software reliability means.
Ans: The need for a means to objectively determine software reliability comes from the desire to apply the
techniques of contemporary engineering fields to the development of software. That desire is a result
of the common observation, by both lay‐persons and specialists, that computer software does not
work the way it ought to. In other words, software is seen to exhibit undesirable behaviour, up to and
including outright failure, with consequences for the data which is processed, the machinery on which
the software runs, and by extension the people and materials which those machines might negatively
affect. The more critical the application of the software to economic and production processes, or to
life‐sustaining systems, the more important is the need to assess the software's reliability.
Regardless of the criticality of any single software application, it is also more and more frequently
observed that software has penetrated deeply into almost every aspect of modern life through the
technology we use. It is only expected that this infiltration will continue, along with an accompanying
dependency on the software by the systems which maintain our society. As software becomes more
and more crucial to the operation of the systems on which we depend, the argument goes, it only
follows that the software should offer a concomitant level of dependability. In other words, the
software should behave in the way it is intended, or even better, in the way it should.
A software quality factor is a non‐functional requirement for a software program which is not called
up by the customer's contract, but nevertheless is a desirable requirement which enhances the
quality of the software program. Note that none of these factors are binary; that is, they are not
“either you have it or you don’t” traits. Rather, they are characteristics that one seeks to maximize in
one’s software to optimize its quality. So rather than asking whether a software product “has” factor
x, ask instead the degree to which it does (or does not).
Some software quality factors are listed here:
Understandability
Clarity of purpose. This goes further than just a statement of purpose; all of the design and user
documentation must be clearly written so that it is easily understandable. This is obviously subjective
in that the user context must be taken into account: for instance, if the software product is to be used
by software engineers it is not required to be understandable to the layman.
Completeness
Presence of all constituent parts, with each part fully developed. This means that if the code calls a
subroutine from an external library, the software package must provide reference to that library and
all required parameters must be passed. All required input data must also be available.
Conciseness
Minimization of excessive or redundant information or processing. This is important where memory
capacity is limited, and it is generally considered good practice to keep lines of code to a minimum. It
can be improved by replacing repeated functionality by one subroutine or function which achieves
that functionality. It also applies to documents.
Portability
Ability to be run well and easily on multiple computer configurations. Portability can mean both
between different hardware—such as running on a PC as well as a smartphone—and between
different operating systems—such as running on both Mac OS X and GNU/Linux.
Consistency
Uniformity in notation, symbology, appearance, and terminology within itself.
Maintainability
Propensity to facilitate updates to satisfy new requirements. Thus the software product that is
maintainable should be well‐documented, should not be complex, and should have spare capacity for
memory, storage and processor utilization and other resources.
Testability
Disposition to support acceptance criteria and evaluation of performance. Such a characteristic must
be built‐in during the design phase if the product is to be easily testable; a complex design leads to
poor testability.
Usability
Convenience and practicality of use. This is affected by such things as the human‐computer interface.
The component of the software that has most impact on this is the user interface (UI), which for best
usability is usually graphical (i.e. a GUI).
Reliability
Ability to be expected to perform its intended functions satisfactorily. This implies a time factor in
that a reliable product is expected to perform correctly over a period of time. It also encompasses
environmental considerations in that the product is required to perform correctly in whatever
conditions it finds itself (sometimes termed robustness).
Efficiency
Fulfillment of purpose without waste of resources, such as memory, space and processor utilization,
network bandwidth, time, etc.
Security
Ability to protect data against unauthorized access and to withstand malicious or inadvertent
interference with its operations. Besides the presence of appropriate security mechanisms such as
authentication, access control and encryption, security also implies resilience in the face of malicious,
intelligent and adaptive attackers.
Time Example
There are two major differences between the hardware and software failure-rate curves. One difference
is that in the last phase, software does not have an increasing failure rate as hardware does. In this
phase, software is approaching obsolescence; there is no motivation for any upgrades or changes to the
software.
Therefore, the failure rate will not change. The second difference is that in the useful‐life phase,
software will experience a drastic increase in failure rate each time an upgrade is made. The failure
rate levels off gradually, partly because of the defects found and fixed after the upgrades.
[Figure: Software failure rate (λ) plotted against time through the test/debug, useful life and
obsolescence phases, with a spike at each upgrade.]
4. What are the essential skills and traits necessary for effective project managers in successfully
handling projects?
Ans: The Successful Project Manager: A successful project manager knows how to bring together the
definition and control elements and operate them efficiently. That means you will need to apply the
leadership skills you already apply in running a department and practice the organizational abilities
you need to constantly look to the future. In other words, if you’re a qualified department manager,
you already possess the skills and attributes for succeeding as a project manager. The criteria by
which you will be selected will be similar. Chances are, the project you’re assigned will have a direct
relationship to the skills you need just to do your job. For example:
• Organizational and leadership experience. An executive seeking a qualified project manager usually
seeks someone who has already demonstrated the ability to organize work and to lead others. He
or she assumes that you will succeed in a complicated long‐term project primarily because you
have already demonstrated the required skills and experience.
• Contact with needed resources. For projects that involve a lot of coordination between
departments, divisions, or subsidiaries, top management will look for a project manager who
already communicates outside of a single department. If you have the contacts required for a
project, it will naturally be assumed that you are suited to run a project across departmental lines.
• Ability to coordinate a diverse resource pool. By itself, contact outside of your department may not
be enough. You must also be able to work with a variety of people and departments, even when
their backgrounds and disciplines are dissimilar. For example, as a capable project manager, you
must be able to delegate and monitor work not only in areas familiar to your own department but
in areas that are alien to your background.
• Communication and procedural skills. An effective project manager will be able to convey and
receive information to and from a number of team members, even when particular points of view
are different from his own. For example, a strictly administrative manager should understand the
priorities of a sales department, or a customer service manager may need to understand what
motivates a production crew.
• Ability to delegate and monitor work. Project managers need to delegate the work that will be
performed by each team member, and to monitor that work to stay on schedule and within budget.
A contractor who builds a house has to understand the processes involved for work done by each
subcontractor, even if the work is highly specialized. The same is true for every project manager.
It’s not enough merely to assign someone else a task, complete with a schedule and a budget.
Delegation and monitoring are effective only if you’re also able to supervise and assess progress.
• Dependability. Your dependability can be tested only in one way: by being given responsibility and
the chance to come through. Once you gain the reputation as a manager who can and does
respond as expected, you’re ready to take on a project.
These project management qualifications read like a list of evaluation points for every department
manager. If you think of the process of running your department as a project of its own, then you
already understand what it’s like to organize a project—the difference, of course, being that the
project takes place in a finite time period, whereas your departmental tasks are ongoing. Thus, every
successful manager should be ready to tackle a project, provided it is related to his or her skills,
resources, and experience.
5. Which are the four phases of development according to Rational Unified Process?
Ans: The Rational Unified Process® is a software engineering process. It provides a disciplined approach to
assigning tasks and responsibilities within a development organization. Its goal is to ensure the
production of high-quality software that meets the needs of its end-users, within a predictable
schedule and budget. The lifecycle is organized into four sequential phases: inception, elaboration,
construction and transition, each concluded by a major milestone.
The Rational Unified Process is a process product, developed and maintained by Rational® Software.
The development team for the Rational Unified Process works closely with customers, partners,
Rational's product groups and Rational's consultant organization to ensure that the process is
continuously updated and improved upon to reflect recent experiences and evolving and proven best
practices. The Rational Unified Process enhances team productivity, by providing every team member
with easy access to a knowledge base with guidelines, templates and tool mentors for all critical
development activities. By having all team members accessing the same knowledge base, no matter if
you work with requirements, design, test, project management, or configuration management, we
ensure that all team members share a common language, process and view of how to develop
software.
The Rational Unified Process activities create and maintain models. Rather than focusing on the
production of large amount of paper documents, the Unified Process emphasizes the development
and maintenance of models—semantically rich representations of the software system under
development.
The Rational Unified Process is a guide for how to effectively use the Unified Modeling Language
(UML). The UML is an industry‐standard language that allows us to clearly communicate
requirements, architectures and designs. The UML was originally created by Rational Software, and is
now maintained by the standards organization Object Management Group (OMG).
Effective Deployment of 6 Best Practices
The Rational Unified Process describes how to effectively deploy commercially proven approaches to
software development for software development teams. These are called “best practices” not so
much because you can precisely quantify their value, but rather, because they are observed to be
commonly used in industry by successful organizations. The Rational Unified Process provides each
team member with the guidelines, templates and tool mentors necessary for the entire team to take
full advantage of, among others, the following best practices:
1. Develop software iteratively
2. Manage requirements
3. Use component-based architectures
4. Visually model software
5. Verify software quality
6. Control changes to software
Develop Software Iteratively
Given today’s sophisticated software systems, it is not possible to sequentially first define the entire
problem, design the entire solution, build the software and then test the product at the end. An
iterative approach is required that allows an increasing understanding of the problem through
successive refinements, and to incrementally grow an effective solution over multiple iterations. The
Rational Unified Process supports an iterative approach to development that addresses the highest
risk items at every stage in the lifecycle, significantly reducing a project’s risk profile. This iterative
approach helps you attack risk through demonstrable progress: frequent, executable releases that
enable continuous end user involvement and feedback. Because each iteration ends with an
executable release, the development team stays focused on producing results, and frequent status
checks help ensure that the project stays on schedule. An iterative approach also makes it easier to
accommodate tactical changes in requirements, features or schedule.
Manage Requirements
The Rational Unified Process describes how to elicit, organize, and document required functionality
and constraints; track and document tradeoffs and decisions; and easily capture and communicate
business requirements. The notions of use cases and scenarios prescribed in the process have proven to
be an excellent way to capture functional requirements and to ensure that these drive the design,
implementation and testing of software, making it more likely that the final system fulfills the end
user needs. They provide coherent and traceable threads through both the development and the
delivered system.
Use Component‐based Architectures
The process focuses on early development and baselining of a robust executable architecture, prior to
committing resources for full‐scale development. It describes how to design a resilient architecture
that is flexible, accommodates change, is intuitively understandable, and promotes more
effective software reuse. The Rational Unified Process supports component‐based software
development.
Components are non-trivial modules or subsystems that fulfill a clear function. The Rational Unified
Process provides a systematic approach to defining an architecture using new and existing
components. These are assembled in a well‐defined architecture, either ad hoc, or in a component
infrastructure such as the Internet, CORBA, and COM, for which an industry of reusable components
is emerging.
Visually Model Software
The process shows you how to visually model software to capture the structure and behavior of
architectures and components. This allows you to hide the details and write code using “graphical
building blocks.” Visual abstractions help you communicate different aspects of your software; see
how the elements of the system fit together; make sure that the building blocks are consistent with
your code; maintain consistency between a design and its implementation; and promote
unambiguous communication. The industry-standard Unified Modeling Language (UML), created by
Rational Software, is the foundation for successful visual modeling.
Verify Software Quality
Poor application performance and poor reliability are common factors which dramatically inhibit the
acceptability of today’s software applications. Hence, quality should be reviewed with respect to the
requirements based on reliability, functionality, application performance and system performance.
The Rational Unified Process assists you in the planning, design, implementation, execution, and
evaluation of these test types. Quality assessment is built into the process, in all activities, involving all
participants, using objective measurements and criteria, and not treated as an afterthought or a
separate activity performed by a separate group.
Control Changes to Software
The ability to manage change (making certain that each change is acceptable, and being able to track
changes) is essential in an environment in which change is inevitable. The process describes how to
control, track and monitor changes to enable successful iterative development. It also guides you in
how to establish secure workspaces for each developer by providing isolation from changes made in
other workspaces and by controlling changes of all software artifacts (e.g., models, code, documents,
etc.). And it brings a team together to work as a single unit by describing how to automate integration
and build management.
The Rational Unified Process product consists of:
• A web‐enabled searchable knowledge base providing all team members with guidelines, templates,
and tool mentors for all critical development activities. The knowledge base can further be broken
down to:
• Extensive guidelines for all team members, and all portions of the software lifecycle. Guidance is
provided for both the high‐level thought process, as well as for the more tedious day‐to‐day
activities. The guidance is published in HTML form for easy platform‐independent access on your
desktop.
• Tool mentors providing hands‐on guidance for tools covering the full lifecycle. The tool mentors are
published in HTML form for easy platform‐independent access on your desktop. See section
"Integration with Tools" for more details.
• Rational Rose ® examples and templates providing guidance for how to structure the information in
Rational Rose when following the Rational Unified Process (Rational Rose is Rational's tool for visual
modeling)
• SoDA® templates: more than 10 SoDA templates that help automate software documentation
(SoDA is Rational’s document automation tool)
• Microsoft® Word templates: more than 30 Word templates assisting documentation in all workflows
and all portions of the lifecycle
• Microsoft Project plans: many managers find it difficult to create project plans that reflect an iterative
development approach. These templates jump-start the creation of project plans for iterative
development, according to the Rational Unified Process.
• Development Kit: Describes how to customize and extend the Rational Unified Process to the
specific needs of the adopting organization or project, as well as provides tools and templates to
assist the effort. This development kit is described in more detail later in this section.
• Access to Resource Center containing the latest white papers, updates, hints, and techniques, as
well as references to add‐on products and services.
• A book "Rational Unified Process — An Introduction", by Philippe Kruchten, published by Addison
Wesley. The book is 277 pages long and provides a good introduction and overview of the process
and the knowledge base.
6. Describe the Capability Maturity Model with suitable real time examples.
Ans: The Capability Maturity Model (CMM) is a multistaged, process definition model intended to
characterize and guide the engineering excellence or maturity of an organization’s software
development processes. The Capability Maturity Model: Guidelines for Improving the Software
Process (1995) contains an authoritative description. See also Paulk et al. (1993) and Curtis, Hefley,
and Miller (1995) and, for general remarks on continuous process improvement, Somerville, Sawyer,
and Viller (1999) (see Table 3.2). The model prescribes practices for “planning, engineering, and
managing software development and maintenance” and addresses the usual goals of organizational
system engineering processes: namely, “quality improvement, risk reduction, cost reduction,
predictable process, and statistical quality control” (Oshana & Linger 1999).
However, the model is not merely a program for how to develop software in a professional,
engineering‐based manner; it prescribes an “evolutionary improvement path from an ad hoc,
immature process to a mature, disciplined process” (Oshana & Linger 1999). Walnau, Hissam, and
Seacord (2002) observe that the ISO and CMM process standards “established the context for
improving the practice of software development” by identifying roles and behaviors that define a
software factory.
The CMM identifies five levels of software development maturity in an organization:
• At level 1, the organization’s software development follows no formal development process.
• The process maturity is said to be at level 2 if software management controls have been introduced
and some software process is followed. A decisive feature of this level is that the organization’s
process is supposed to be such that it can repeat the level of performance that it achieved on
similar successful past projects. This is related to a central purpose of the CMM: namely, to improve
the predictability of the development process significantly. The major technical requirement at
level 2 is incorporation of configuration management into the process.
Configuration management (or change management, as it is sometimes called) refers to the processes
used to keep track of the changes made to the development product (including all the intermediate
deliverables) and the multifarious impacts of these changes. These impacts range from the
recognition of development problems; identification of the need for changes; alteration of previous
work; verification that agreed upon modifications have corrected the problem and that corrections
have not had a negative impact on other parts of the system; etc.
• An organization is said to be at level 3 if the development process is standard and consistent. The
project management practices of the organization are supposed to have been formally agreed
on, defined, and codified at this stage of process maturity.
• Organizations at level 4 are presumed to have put into place qualitative and quantitative measures
of organizational process. These process metrics are intended to monitor development and to
signal trouble and indicate where and how a development is going wrong when problems occur.
• Organizations at maturity level 5 are assumed to have established mechanisms designed to ensure
continuous process improvement and optimization. The metric feedbacks at this stage are not just
applied to recognize and control problems with the current project as they were in level‐4
organizations. They are intended to identify possible root causes in the process that have allowed
the problems to occur and to guide the evolution of the process so as to prevent the recurrence of
such problems in future projects, such as through the introduction of appropriate new technologies
and tools.
The higher the CMM maturity level is, the more disciplined, stable, and well‐defined the development
process is expected to be and the environment is assumed to make more use of “automated tools and
the experience gained from many past successes” (Zhiying 2003). The staged character of the model
lets organizations progress up the maturity ladder by setting process targets for the organization.
Each advance reflects a further degree of stabilization of an organization’s development process, with
each level “institutionaliz[ing] a different aspect” of the process (Oshana & Linger 1999).
Each CMM level has associated key process areas (KPA) that correspond to activities that must be
formalized to attain that level. For example, the KPAs at level 2 include configuration management,
quality assurance, project planning and tracking, and effective management of subcontracted
software. The KPAs at level 3 include intergroup communication, training, process definition, product
engineering, and integrated software management. Quantitative process management and
development quality define the required KPAs at level 4. Level 5 institutionalizes process and
technology change management and optimizes defect prevention.
Bamberger (1997), one of the authors of the Capability Maturity Model, addresses what she believes
are some misconceptions about the model. For example, she observes that the motivation for the
second level, in which the organization must have a “repeatable software process,” arises as a direct
response to the historical experience of developers when their software development is “out of
control” (Bamberger 1997). Often this is for reasons having to do with configuration management – or
mismanagement! Among the many symptoms of configuration mismanagement are: confusion over
which version of a file is the current official one; inadvertent side effects when repairs by one
developer obliterate the changes of another developer; inconsistencies among the efforts of different
developers; etc.
A key appropriate response to such actual or potential disorder is to get control of the product and
the “product pieces under development” (configuration management) by (Bamberger 1997):
• Controlling the feature set of the product so that the “impact/s of changes are more fully
understood” (requirements management)
• Using the feature set to estimate the budget and schedule while “leveraging as much past
knowledge as possible” (project planning)
• Ensuring schedules and plans are visible to all the stakeholders (project tracking)
• Ensuring that the team follows its own plan and standards and “corrects discrepancies when they
occur” (quality assurance)
Bamberger contends that this kind of process establishes the “basic stability and visibility” that are
the essence of the CMM repeatable level.