AJAY PASHANKAR
INDEX
SR.NO  TOPIC
1      Project Management
3      Project Scheduling
5      Management of OO Software
Syllabus
Size & Effort Estimation – Concepts of LOC & Estimation, Function Point, COCOMO Model,
Concept of Effort Estimation & Uncertainty.
Project Scheduling, Building WBS, Use of Gantt & PERT/CPM charts, Staffing.
Process Management, CMM & its levels, Risk Management & activities.
Changing Trends in Software Development – Unified Process, its phases & disciplines,
Agile Development – Principles & Practices, Extreme Programming – Core values & Practices.
The project management process specifies all the activities that project management
must perform to ensure that cost and quality objectives are met. These activities include:
planning the project,
o the basic task is to plan the detailed implementation of the process for
the particular project and then ensure that the plan is followed
estimating resources and schedule,
and monitoring and controlling the project.
The activities in the management process for a project are grouped into three phases:
planning,
o Project management begins with planning.
o The goal of this phase is to develop a plan for software development
following which the objectives of the project can be met successfully and
efficiently.
o A software plan is produced before the development activity begins and is
updated as development proceeds and data about progress of the project
becomes available.
o The major activities in planning are:
cost estimation,
schedule and milestone determination,
project staffing,
quality control plans,
controlling and monitoring plans.
o The plan forms the basis for monitoring and control.
monitoring and control,
o This phase is the longest in terms of duration; it encompasses most of
the development process.
o It includes all activities that project management has to perform while
development is going on to ensure that project objectives are met and
development proceeds according to the developed plan (updating the
plan, if needed).
o Factors to monitor include:
cost and schedule,
quality,
potential risks for the project, which might prevent the project
from meeting its objectives.
If the information obtained by monitoring suggests that objectives may
not be met, necessary actions are taken in this phase by exerting suitable
control on the development activities.
Monitoring and control requires proper information about the project. Such
information is typically obtained by the management process from the
development process. As shown earlier in Figure 2.2, the implementation of a
development process model should be such that each step in the development
process produces information that the management process needs for that step.
That is, the development process provides the information the management
process needs. However, interpretation of the information is part of
monitoring and control.
Termination analysis.
o This phase is performed when the development process is over.
o The basic reason is to provide information about the development
process and learn from the project in order to improve the process.
o This phase is also often called postmortem analysis.
o In iterative development, this analysis can be done after each iteration to
provide feedback to improve the execution of further iterations. The
temporal relationship between the management process and the
development process is shown in Figure 2.12.
o This is an idealized relationship showing that planning is done before
development begins, and termination analysis is done after development
is over.
o During the development, from the various phases of the development
process, quantitative information flows to the monitoring and control
phase of the management process, which uses the information to exert
control on the development process.
-------------------------------------------------------------------------------------------------------
A plan is developed that specifies the activities that must take place, the deliverables that
must be produced, and the resources that are needed.
project management can be defined as the processes used to plan the project and then
to monitor and control it.
The project manager defines and executes project management tasks. The success or
failure of a given project is directly related to the skills and abilities of the project
manager.
In some companies, the project manager functions as a coordinator and does not have
direct "line" (reporting) authority.
At the other end of the spectrum, for big development projects, the project manager may
be a very experienced developer with both management skills and a solid understanding
of a full range of technical issues. In those situations, the role of the project manager is
very much a "line" position with responsibility and authority for other staff members.
Many career paths lead to project management. In some companies, the project
coordination role is done by recent college graduates. Other companies recognize the
value of a person with very strong organizational and people skills, who understands the
technology but does not want a highly technical career. Those companies provide
opportunities for employees to gain experience in management and business skills and to
advance to project management through experience as a coordinator of smaller projects.
Other companies take a "lead engineer" approach to project management, in which a
person must thoroughly understand the technology to manage a project. These
companies believe that project management requires someone with strong development
skills to understand the technical issues and to manage other developers.
-------------------------------------------------------------------------------------------------------------
Q. What are the responsibilities of the project manager? (OCT 12) 5M.
o Work directly with the client (the project’s sponsor) and other
stakeholders.
o Identify resource needs and obtain resources.
The project manager does not always perform all the tasks involved with these
responsibilities; other team members assist the manager.
The role of the project manager and careers in project management vary
tremendously. Figure 3-1 lists some of the different positions project managers
hold.
TITLE: Project manager / Project officer / Team leader
POWER/AUTHORITY: Moderate
ORGANIZATIONAL STRUCTURE: Projects are run within the IT department, but other
business functions are independent
DESCRIPTION OF DUTIES:
• May have both project management duties and some technical duties
• Manages projects that are generally medium sized
• May share project responsibility with clients
-------------------------------------------------------------------------------------------------------
Project Scope Management. Defining and controlling the functions that are to
be included in the system, as well as the scope of the work to be done by the
project team
Project Time Management. Building a detailed schedule of all project tasks
and then monitoring the progress of the project against defined milestones
Project Cost Management. Calculating the initial cost/benefit analysis and its
later updates and monitoring expenditures as the project progresses
Project Quality Management. Establishing a comprehensive plan for ensuring
quality, which includes quality-control activities for every phase of the
project.
Project Human Resource Management. Recruiting and hiring project team
members; training, motivating, and team building; and implementing related
activities to ensure a happy, productive team
Project Communications Management. Identifying all stakeholders and the
key communications to each; also establishing all communications
mechanisms and schedules
Project Risk Management. Identifying and reviewing throughout the project all
potential risks for failure and developing plans to reduce these risks
Project Procurement Management. Developing requests for proposals,
evaluating bids, writing contracts, and then monitoring vendor performance
Project Integration Management. Integrating all the other knowledge areas
into one seamless whole
-------------------------------------------------------------------------------------------------------
Q. Write a short note on software metrics?
Hence, we can say that metrics-based management is also a key component in the software
engineering strategy to achieve its objectives.
-------------------------------------------------------------------------------------------------------
Q. What are the attributes of effective s/w metrics?
Hundreds of metrics have been proposed for computer software, but not all provide
practical support to the software engineer. Some demand measurement that is too complex,
others are so obscure that few real-world professionals have any hope of understanding
them, and others violate the basic intuitive notions of what high- quality software really is.
Simple and computable: It should be relatively easy to learn how to derive the metric, and
its computation should not demand inordinate effort or time.
Empirically and intuitively persuasive: The metric should satisfy the engineer's intuitive
notions about the product attribute under consideration (e.g., a metric that measures
module cohesion should increase in value as the level of cohesion increases).
Consistent and objective: The metric should always yield results that are unambiguous. An
independent third party should be able to derive the same metric value using the same
information about the software.
Consistent in its use of units and dimensions: The mathematical computation of the metric
should use measures that do not lead to bizarre combinations of units. For example,
multiplying people on the project teams by programming language variables in the
program results in a suspicious mix of units that is not intuitively persuasive.
An effective mechanism for high-quality feedback: The metric should provide you with
information that can lead to a higher-quality end product.
Although most software metrics satisfy these attributes, some commonly used
metrics may fail to satisfy one or two of them.
An example is the function point, a measure of the "functionality" delivered by the
software.
It can be argued that the end version of the product will be somewhat less
functional, but we can announce all functionality and then deliver it over the 14-
month period.
Third, we can dispense with reality and wish the project complete in nine months.
We’ll wind up with nothing that can be delivered to a customer.
The third option, I hope you’ll agree, is unacceptable. Past history and our best
estimates say that it is unrealistic and a recipe for disaster.
There will be some grumbling, but if a solid estimate based on good historical
data is presented, it’s likely that negotiated versions of option 1 or 2 will be
chosen.
The unrealistic deadline evaporates.
Quick Revision
1. Q. What are the objectives of the project manager?
2. Q. What are the roles of the project manager?
3. Q. What are the responsibilities of the project manager?
4. Q. Write a short note on project management knowledge areas?
5. Q. What are the steps involved in change management?
Topic Covered:
Concepts of LOC & Estimation, Function Point, COCOMO Model, Concept of Effort
Estimation & Uncertainty
-------------------------------------------------------------------------------------------------------
Q. Write a short note on LOC estimation with a suitable example? (March, OCT 13,
March 14) 5M.
Problem-Based Estimation
LOC and FP data are used in two ways during software project estimation:
LOC and FP estimation are distinct estimation techniques. Both have a number of
characteristics in common.
1. You begin with a bounded statement of software scope and from this statement
attempt to decompose the statement of scope into problem functions that can
each be estimated individually.
2. LOC or FP (the estimation variable) is then estimated for each function.
Alternatively, you may choose another component for sizing, such as classes or
objects, changes, or business processes affected.
3. Baseline productivity metrics (e.g., LOC/pm or FP/pm) are then applied to the
appropriate estimation variable, and cost or effort for the function is derived.
Function estimates are combined to produce an overall estimate for the entire
project.
NOTE: There is often substantial scatter in productivity metrics for an organization,
making the use of a single-baseline productivity metric suspect.
When a new project is estimated, it should first be allocated to a domain, and then
the appropriate domain average for past productivity should be used in generating the
estimate.
The LOC and FP estimation techniques differ in the level of detail required for
decomposition and the target of the partitioning.
The resultant estimates can then be used to derive an FP value that can be tied
to past data and used to generate an estimate.
Regardless of the estimation variable that is used, you should begin by
estimating a range of values for each function or information domain value.
Using historical data or (when all else fails) intuition, estimate an optimistic,
most likely, and pessimistic size value for each function or count for each
information domain value. An implicit indication of the degree of uncertainty is
provided when a range of values is specified.
A three-point or expected value for the estimation variable (size) S can be
computed as a weighted average of the optimistic (Sopt), most likely (Sm), and
pessimistic (Spess) estimates. For example,

S = (Sopt + 4*Sm + Spess) / 6

gives heaviest credence to the "most likely" estimate and follows a beta
probability distribution. We assume that there is a very small probability the actual
size result will fall outside the optimistic or pessimistic values.
Once the expected value for the estimation variable has been determined,
historical LOC or FP productivity data are applied. Are the estimates
correct? The only reasonable answer to this question is: ―You can’t be
sure.‖ Any estimation technique, no matter how sophisticated, must be
cross-checked with another approach.
An Example of LOC-Based Estimation
This statement of scope is preliminary—it is not bounded. Every sentence would have
to be expanded to provide concrete detail and quantitative bounding. For example, before
estimation can begin, the planner must determine what "characteristics of good
human/machine interface design" means or what the size and sophistication of the "CAD
database" are to be.
For our purposes, assume that further refinement has occurred and that the
major software functions listed in following figure are identified.
Following the decomposition technique for LOC, an estimation table (Figure 26.2) is
developed. A range of LOC estimates is developed for each function.
For example, the range of LOC estimates for the 3D geometric analysis function is:
optimistic, 4600 LOC; most likely, 6900 LOC; and pessimistic, 8600 LOC. Applying the
three-point equation, the expected value for the 3D geometric analysis function is 6800 LOC.
The organizational average productivity for systems of this type is 620 LOC/pm.
Based on a burdened labor rate of $8000 per month, the cost per line of code is
approximately $13.
Based on the LOC estimate and the historical productivity data, the total
estimated project cost is $431,000 and the estimated effort is 54 person-months.
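To make the arithmetic concrete, here is a minimal Python sketch of the LOC-based
calculation. The 3D geometric analysis figures, the 620 LOC/pm productivity, and the
$8,000/month burdened labor rate come from the example above; the full estimation table
(Figure 26.2) lists more functions, so the totals printed here cover only this one entry.

def expected_size(s_opt, s_ml, s_pess):
    """Three-point estimate: S = (Sopt + 4*Sm + Spess) / 6."""
    return (s_opt + 4 * s_ml + s_pess) / 6

# (optimistic, most likely, pessimistic) LOC; only one of the example's
# functions is reproduced here.
functions = {
    "3D geometric analysis": (4600, 6900, 8600),   # expected: 6800 LOC
}

PRODUCTIVITY = 620    # LOC per person-month (organizational average)
LABOR_RATE = 8000     # burdened dollars per person-month

total_loc = sum(expected_size(*est) for est in functions.values())
effort_pm = total_loc / PRODUCTIVITY
print(f"Cost per LOC: ${LABOR_RATE / PRODUCTIVITY:.2f}")   # ~$12.90, i.e. about $13
print(f"Effort: {effort_pm:.1f} PM, cost: ${effort_pm * LABOR_RATE:,.0f}")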
-------------------------------------------------------------------------------------------------------
Function Point:
Requirement of Function point:
A major problem after requirements are done is to estimate the effort and
schedule for the project. For this, some metrics are needed that can be extracted from
the requirements and used to estimate cost and schedule (through the use of some
model). As the primary factor that determines the cost (and schedule) of a software
project is its size, a metric that can help get an idea of the size of the project will be
useful for estimating cost. This implies that during the requirements phase, measuring
the size of the requirement specification itself is pointless, unless the size of the SRS
reflects the effort required for the project. This also requires that relationships of any
proposed size measure with the ultimate effort of the project be established before
making general use of the metric. A commonly used size metric for requirements is
the size of the text of the SRS. The size could be in number of pages, number of
paragraphs, number of functional requirements, etc. As can be imagined, these
measures are highly dependent on the authors of the document. A verbose analyst
who likes to make heavy use of illustrations may produce an SRS that is many times
the size of an SRS for the same problem written by a terse analyst. Similarly, how much an analyst refines the
requirements has an impact on the size of the document. Generally, such metrics
cannot be accurate indicators of the size of the project. They are used mostly to
convey a general sense about the size of the project.
Function points are one of the most widely used measures of software size. The basis
of function points is that the "functionality" of a system, that is, what the system performs,
is the measure of the system size. And as functionality is independent of how the
requirements of the system are specified, or even how they are eventually implemented,
function points are computed from parameters that can be obtained after requirements
analysis and that are independent of the specification (and implementation) language.
The original formulation for computing the function points uses the count of five
different parameters. To account for complexity, each parameter in a type is classified as
simple, average, or complex.
1. External input types: Each unique input (data or control) type that is given as
input to the application from outside is considered of external input type and is
counted. An external input type is considered unique if the format is different from
others or if the specifications require a different processing for this type from other
inputs of the same format. The source of the external input can be the user, some
other application, or files. An external input type is considered simple if it has a few
data elements and affects only a few internal files of the application. It is considered
complex if it has many data items and many internal logical files are needed for
processing them. The complexity is average if it is in between. Note that files needed
by the operating system or the hardware (e.g., configuration files) are not counted
as external input files because they do not belong to the application but are needed
due to the underlying technology. (Reports or messages to the users or other
applications are counted as external output types, described next, not as inputs.)
2. External output types: Each unique output that leaves the system boundary is
counted as an external output type. Again, an external output type is considered
unique if its format or processing is different. For a report, if it contains a few
columns it is considered simple, if it has multiple columns it is considered average,
and if it contains complex structure of data and references many files for production,
it is considered complex.
3. Logical internal file types: Each application maintains information internally for
performing its functions. Each logical group of data or control information that is
generated, used, and maintained by the application is counted as a logical internal
file type. A logical internal file is simple if it contains a few record types, complex if it
has many record types, and average if it is in between.
4. External interface file types: Files that are passed or shared between applications
are counted as external interface file type. Note that each such file is counted for all
the applications sharing it. The complexity levels are defined as for logical internal
file type.
5. External inquiry types: A system may have queries also, where a query is defined
as an input/output combination where the input causes the output to be generated
almost immediately. Each unique input-output pair is counted as an external inquiry
type. A query is unique if it differs from others in format of input or output or if it
requires different processing. For classifying the query type, the input and output are
classified as for external input type and external output type, respectively. The query
complexity is the larger of the two.
These five parameters capture the entire functionality of a system. However, two
elements of the same type may differ in their complexity and hence should not contribute
the same amount to the "functionality" of the system.
Each element of the same type and complexity contributes a fixed and equal amount
to the overall function point count of the system (which is a measure of the functionality of
the system), but the contribution differs across the types, and within a type, it differs across
complexity levels. Once the counts for all five types are known for all three complexity
classes, the raw or unadjusted function point count (UFP) can be computed as a weighted
sum:

UFP = sum over i = 1..5, j = 1..3 of wij * Cij,
where i reflects the row and j the column in Table 3.3; wij is the entry in the
ith row and jth column of the table (i.e., it represents the contribution of an element of
type i and complexity j); and Cij is the count of the number of elements of type i that
have been classified as having the complexity corresponding to column j.
Each of the 14 degrees of influence for the system is rated on a scale of 0 to 5, and the
ratings are summed, giving a total N (N therefore ranges from 0 to 14*5 = 70). This N is
used to obtain a complexity adjustment factor (CAF) as follows:

CAF = 0.65 + 0.01 * N.

With this equation, the value of CAF ranges between 0.65 and 1.35. The delivered
function points (DFP) are simply computed by multiplying the UFP by the CAF, i.e.,

DFP = UFP * CAF.
As we can see, by adjustment for environment complexity, the DFP can differ from
the UFP by at most 35%. The final function point count for an application is the computed
DFP.
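The full calculation can be sketched in Python as below. The weights used are the
commonly published function point weights; since Table 3.3 itself is not reproduced in this
text, treat them as an assumption, and the counts and the degree-of-influence total are
purely hypothetical.

WEIGHTS = {                          # (simple, average, complex)
    "external input":          (3, 4, 6),
    "external output":         (4, 5, 7),
    "external inquiry":        (3, 4, 6),
    "logical internal file":   (7, 10, 15),
    "external interface file": (5, 7, 10),
}

def unadjusted_fp(counts, weights):
    """UFP = sum over types i and complexity classes j of w_ij * C_ij."""
    return sum(w * c
               for kind, per_class in counts.items()
               for w, c in zip(weights[kind], per_class))

# Hypothetical counts per type: (simple, average, complex).
counts = {
    "external input":          (5, 3, 1),
    "external output":         (4, 2, 1),
    "external inquiry":        (2, 2, 0),
    "logical internal file":   (1, 2, 0),
    "external interface file": (0, 1, 0),
}

ufp = unadjusted_fp(counts, WEIGHTS)
N = 30                               # hypothetical sum of the 14 degrees of influence
caf = 0.65 + 0.01 * N                # complexity adjustment factor
dfp = ufp * caf                      # delivered function points
print(f"UFP = {ufp}, CAF = {caf:.2f}, DFP = {dfp:.1f}")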
Function points have been used as a size measure extensively and have been used
for cost estimation. Studies have also been done to establish the correlation between DFP
and the final size of the software (measured in lines of code).
By building models between function points and delivered lines of code (and existing
results have shown that a reasonably strong correlation exists between DFP and KLOC so
that such models can be built), one can estimate the size of the software in KLOC, if
desired.
As can be seen from the manner in which the functionality of the system is defined,
the function point approach has been designed for the data processing type of
applications. For data processing applications, function points generally perform very well
and have now gained a widespread acceptance. For such applications, function points are
used as an effective means of estimating cost and evaluating productivity.
However, its utility as a size measure for non-data-processing types of applications
(e.g., real-time software, operating systems, and scientific applications) has not been well
established, and it is generally believed that for such applications function points are not
very well suited.
A major drawback of the function point approach is that the process of computing
the function points involves subjective evaluation at various points, so the final computed
function point count for a given SRS may not be unique and can depend on the analyst.
Some reasons for this subjectivity are:
(1) different interpretations of the SRS (e.g., whether something should count as an
external input type or an external interface type; whether or not something constitutes a
logical internal file; if two reports differ in a very minor way should they be counted as two
or one);
(2) complexity estimation of a user function is totally subjective and depends entirely
on the analyst (an analyst may classify something as complex while someone else may
classify it as average), and complexity can have a substantial impact on the final count,
as the weights for simple and complex frequently differ by a factor of 2.
These factors make the process of function point counting somewhat subjective.
Organizations that use function points try to specify a more precise set of counting rules in
an effort to reduce this subjectivity. It has also been found that with experience this
subjectivity is reduced. Overall, despite this subjectivity, use of function points for data
processing applications continues to grow.
The main advantage of function points over the size metric of KLOC, the other
commonly used approach, is that the definition of DFP depends only on information
available from the specifications, whereas the size in KLOC cannot be directly determined
from specifications. Furthermore, the DFP count is independent of the language in which
the project is implemented.
Though these are major advantages, another drawback of the function point
approach is that even when the project is finished, the DFP is not uniquely known and has
subjectivity. This makes building of models for cost estimation hard, as these models are
based on information about completed projects. In addition, determining the DFP—from
either the requirements or a completed project—cannot be automated. That is, considerable
effort is required to obtain the size, even for a completed project. This is a drawback
compared to KLOC measure, as KLOC can be determined uniquely by automated tools once
the project is completed.
-------------------------------------------------------------------------------------------------------
COCOMO Model
A top-down model can depend on many different factors, instead of depending only
on one variable, giving rise to multivariable models. One approach for building multivariable
models is to start with an initial estimate determined by using the static single-variable
model equations, which depend on size, and then adjusting the estimates based on other
variables. This approach implies that size is the primary factor for cost; other factors have a
lesser effect. Here we will discuss one such model called the Constructive Cost Model
(COCOMO) developed by Boehm [20, 21]. This model also estimates the total effort in
terms of person-months. The basic steps in this model are:
1. Obtain an initial estimate of the development effort from the estimate of the
project size in KLOC.
2. Determine a set of 15 multiplying factors from different attributes of the project.
3. Adjust the effort estimate by multiplying the initial estimate with all the
multiplying factors.
The initial estimate is determined by an equation of the form
Ei = a * (KLOC)^b.
The values of the constants a and b depend on the project type. In COCOMO, projects
are categorized into three types: organic, semidetached, and embedded. These categories
roughly characterize the complexity of the project, with organic projects being those that
are relatively straightforward and developed by a small team, and embedded projects
being those that are ambitious and novel, with stringent constraints from the environment
and high requirements for such aspects as interfacing and reliability. The constants a and
b for the different systems are:
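The constants themselves are given in a table that is not reproduced in this text. As a
sketch, the values below are the commonly cited intermediate COCOMO constants; treat
them as an assumption rather than as this text's own table.

# Commonly cited intermediate COCOMO constants (an assumption; the text's
# own table is not reproduced here).
COCOMO_CONSTANTS = {
    # project type:   (a,   b)
    "organic":      (3.2, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (2.8, 1.20),
}

def initial_effort(kloc, project_type):
    """Nominal estimate in person-months: Ei = a * (KLOC)^b."""
    a, b = COCOMO_CONSTANTS[project_type]
    return a * kloc ** b

print(f"Ei for a 20 KLOC organic project: {initial_effort(20, 'organic'):.1f} PM")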
The values of the constants for a cost model depend on the process and have to be
determined from past data. COCOMO has instead provided "global" constant values.
These values should be considered as values to start with until data for some projects is
available. With project data, the value of the constants can be determined through
regression analysis. There are 15 different attributes, called cost driver attributes, that
determine the multiplying factors. These factors depend on product, computer, personnel,
and technology attributes (called project attributes). Examples of the attributes are required
software reliability (RELY), product complexity (CPLX), analyst capability (ACAP), application
experience (AEXP), use of modern tools (TOOL), and required development schedule
(SCHD). Each cost driver has a rating scale, and for each rating, a multiplying factor is
provided. For example, for the product attribute RELY, the rating scale is very low, low,
nominal, high, and very high (and in some cases extra high). The multiplying factors for
these ratings are .75, .88, 1.00, 1.15, and 1.40, respectively. So, if the reliability
requirement for the project is judged to be low, then the multiplying factor is .75,
while if it is judged to be very high, the factor is 1.40. The attributes and their multiplying
factors for different ratings are shown in Table 5.1 [20, 21]. The COCOMO approach also
provides guidelines for assessing the rating for the different attributes [20]. The multiplying
factors for all 15 cost drivers are multiplied to get the effort adjustment factor (EAF). The
final effort estimate, E, is obtained by multiplying the initial estimate by the EAF. That is,

E = EAF * Ei.
By this method, the overall cost of the project can be estimated. For planning and
monitoring purposes, estimates of the effort required for the different phases are also
desirable. In COCOMO, effort for a phase is a defined percentage of the overall effort. The
percentage of total effort spent in a phase varies with the type and size of the project. The
percentages for an organic software project are given in Table 5.2. Using this table, the
estimate of the effort required for each phase can be determined from the total effort
estimate. For example, if the total effort estimate for an organic software system is 20 PM
and the size estimate is 20 KLOC, then the percentage effort for the coding and unit testing
phase will be 40 + ((38 - 40)/(32 - 8)) * (20 - 8) = 39%. The estimate for the effort needed
for this phase is thus 7.8 PM. This table does not list the cost of requirements as a
percentage of the total cost estimate because the project plan (and cost estimation) is being
done after the requirements are complete. In COCOMO the detailed design and code and
unit testing are sometimes combined into one phase called the programming phase. As an
example, suppose a system for office automation has to be designed. From the
requirements, it is clear that there will be four major modules in the system: data entry,
data update, query, and report generator. It is also clear from the requirements that this
project will fall in the organic category. The sizes for the different modules and the overall
system are estimated to be:
From the requirements, the ratings of the different cost driver attributes are assessed.
These ratings, along with their multiplying factors, are:
All other factors had a nominal rating. From these, the effort adjustment factor (EAF) is
The initial effort estimate for the project is obtained from the relevant equations.
Using the preceding table, we obtain the percentage of the total effort consumed in different
phases. The office automation system's size estimate is 3 KLOC, so we will have to use
interpolation to get the appropriate percentage (the two end values for interpolation will be
the percentages for 2 KLOC and 8 KLOC). The percentages for the different phases are:
design—16%,
detailed design—25.83%
With these, the effort estimates for the different phases are:
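Since the example's remaining tables and values are not reproduced in this text, here is a
minimal Python sketch that ties the three COCOMO steps together. It repeats the organic
constants assumed in the earlier snippet; the cost driver ratings are hypothetical, and the
phase interpolation uses the text's own 20 PM / 20 KLOC coding-and-unit-testing example.

def initial_effort(kloc, a=3.2, b=1.05):       # organic constants, as assumed above
    """Step 1: nominal estimate Ei = a * (KLOC)^b, in person-months."""
    return a * kloc ** b

# Step 2: multiply the factors of the non-nominal cost drivers to get the EAF.
# The ratings here are hypothetical (e.g., RELY rated low -> .88).
eaf = 1.0
for factor in (0.88, 1.15):
    eaf *= factor

# Step 3: adjust the initial estimate. E = EAF * Ei.
ei = initial_effort(20)                        # hypothetical 20 KLOC organic project
effort = eaf * ei

# Phase-wise distribution by linear interpolation between the size columns of
# the phase table, exactly as in the text's coding-and-unit-testing example:
pct = 40 + (38 - 40) / (32 - 8) * (20 - 8)     # = 39% at 20 KLOC
coding_effort = 20 * pct / 100                 # = 7.8 PM of a 20 PM total, per the text
print(f"E = {effort:.1f} PM; coding & unit testing = {coding_effort:.1f} PM")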
-------------------------------------------------------------------------------------------------------
For a given set of requirements it is desirable to know how much it will cost to
develop the software and how much time the development will take. These estimates are
needed before development is initiated. The primary reasons for cost and schedule
estimation are cost-benefit analysis and project monitoring and control. A more practical
use of these estimates is in bidding for software projects, where cost estimates must be
given to a potential client for the development contract. The bulk of the cost of software
development is due to the human resources needed, and therefore most cost estimation
procedures focus on estimating effort in terms of person-months (PM). By properly
including the "overheads" (i.e., the cost of hardware, software, office space, etc.) in the cost
of a person-month, effort estimates can be converted into cost. For a software development
project, effort and schedule estimates are essential prerequisites for managing the project.
Otherwise, even simple questions like "is the project late?" "are there cost overruns?" and
"when is the project likely to complete?" cannot be answered. Effort and schedule estimates
are also required to determine the staffing level for a project during different phases.
One can perform effort estimation at any point in the software life cycle. As the effort
of the project depends on the nature and characteristics of the project, at any point, the
accuracy of the estimate will depend on the amount of reliable information we have about
the final product. Clearly, when the product is delivered, the effort can be accurately
determined, as all the data
about the project and the resources spent can be fully known by then. This is effort
estimation with complete knowledge about the project. On the other extreme is the point
when the project is being initiated or during the feasibility study. At this time, we have
only some idea of the classes of data the system will get and produce and the major
functionality of the system.
There is a great deal of uncertainty about the actual specifications of the system.
Specifications with uncertainty represent a range of possible final products, not one
precisely defined product. Hence, the effort estimation based on this type of information
cannot be accurate. Estimates at this phase of the project can be off by as much as a factor
of four.
As we specify the system more fully and accurately, the uncertainties are reduced
and more accurate effort estimates can be made. For example, once the requirements are
completely specified, more accurate effort estimates can be made compared to the
estimates after the feasibility study. Once the design is complete, the estimates can be
made still more accurately. The obtainable accuracy of the estimates as it varies with the
different phases is shown in Figure
Note that this figure is simply specifying the limitations of effort estimating
strategies, that is, the best accuracy an effort estimation strategy can hope to achieve. It does not
say anything about the existence of strategies that can provide the estimate with that
accuracy. For actual effort estimation, estimation models or procedures have to be
developed. The accuracy of the estimates will depend on the effectiveness and accuracy of
the estimation procedures or models employed and the process (i.e., how predictable it is).
Despite the limitations, estimation models have matured considerably and generally
give fairly accurate estimates. For example, when the COCOMO model (discussed later) was
checked with data from some projects, it was found that the estimates were within 20% of
the actual effort 68% of the time.
It should also be mentioned that achieving an estimate after the requirements have
been specified within 20% is actually quite good. With such an estimate, there need not
even be any cost and schedule overruns, as there is generally enough slack or free time
available (recall the study mentioned earlier that found a programmer spends more than
30% of his time in personal or miscellaneous tasks) that can be used to meet the targets
set for the project based on the estimates. In other words, if the estimate is within 20% of
the actual, the effect of this inaccuracy will not even be reflected in the final cost and
schedule. Highly precise estimates are generally not needed.
Quick Revision
_________________________________________________________________
LOC and FP data are used in two ways during software project estimation:
3. Adjust the effort estimate by multiplying the initial estimate with all the
multiplying factors.
The bulk of the cost of software development is due to the human resources needed,
and therefore most cost estimation procedures focus on estimating effort in terms of
person-months (PM).
Topic Covered:
Fred Brooks was once asked how software projects fall behind schedule. His response was
as simple as it was profound: “One day at a time.”
As a project manager, your objective is to define all project tasks, build a network that
depicts their interdependencies, identify the tasks that are critical within the network, and
then track their progress to ensure that delay is recognized "one day at a time."
To accomplish this, you must have a schedule that has been defined at a degree of
resolution that allows progress to be monitored and the project to be controlled.
-------------------------------------------------------------------------------------------------------
1. Software project scheduling is an action that distributes estimated effort across the
planned project duration by allocating the effort to specific software engineering
tasks.
NOTE: The schedule evolves over time. Scheduling can be viewed from two rather
different perspectives:
1. In the first, an end date for release of a computer-based system has already (and
irrevocably) been established. The software organization is constrained to distribute
effort within the prescribed time frame.
2. The second view of software scheduling assumes that rough chronological bounds
have been discussed but that the end date is set by the software engineering
organization. Effort is distributed to make best use of resources, and an end date is
defined after careful analysis of the software.
Unfortunately, the first situation is encountered far more frequently than the second.
Like all other areas of software engineering, a number of basic principles guide
software project scheduling.
-------------------------------------------------------------------------------------------------------
Q. What is the requirement of detailed project scheduling?
Detailed Scheduling
Once the milestones and the resources are fixed, it is time to set the detailed scheduling.
For detailed schedules, the major tasks fixed while planning the milestones are
broken into small schedulable activities in a hierarchical manner.
For example, the detailed design phase can be broken into tasks for
At each level of refinement, the project manager determines the effort for the overall
task from the detailed schedule and checks it against the effort estimates.
If this detailed schedule is not consistent with the overall schedule and effort
estimates, the detailed schedule must be changed.
If it is found that the best detailed schedule cannot match the milestone effort and
schedule, then the earlier estimates must be revised. Thus, scheduling is an iterative
process.
The project manager refines the tasks to a level so that the lowest-level activity can
be scheduled to occupy no more than a few days from a single resource. Activities related
to tasks such as project management, coordination, database management, and
configuration management may also be listed in the schedule, even though these activities
have less direct effect on determining the schedule because they are ongoing tasks rather
than schedulable activities. Nevertheless, they consume resources and hence are often
included in the project schedule.
NOTE: Rarely will a project manager complete the detailed schedule of the entire
project all at once.
Once the overall schedule is fixed, detailing for a phase may only be done at the
start of that phase.
For detailed scheduling, tools like Microsoft Project or a spreadsheet can be very
useful. For each lowest-level activity, the project manager specifies:
the effort,
duration,
start date,
end date, and
resources.
Dependencies between activities, due either to an inherent dependency (for
example, you can conduct a unit test plan for a program only after it has been coded) or to
a resource-related dependency (the same resource is assigned two tasks) may also be
specified.
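As a minimal sketch (with hypothetical names, dates, and field choices), a detailed-schedule
entry of the kind a tool or spreadsheet would hold might look like this; note how the
dependency records that unit testing can start only after coding, and that the shared
resource is itself a second reason for the ordering.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class Activity:
    name: str
    effort_days: float                 # estimated effort
    start: date
    end: date
    resources: list = field(default_factory=list)
    depends_on: list = field(default_factory=list)   # inherent or resource-related

coding = Activity("code payment module", 4, date(2024, 3, 4), date(2024, 3, 7),
                  resources=["dev1"])
testing = Activity("unit test payment module", 2, date(2024, 3, 8), date(2024, 3, 11),
                   resources=["dev1"],                  # same resource as coding
                   depends_on=["code payment module"])  # inherent dependency
print(testing)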
-------------------------------------------------------------------------------------------------------
The final schedule, frequently maintained using some suitable tool, is often the most
"live" project plan document.
During the project, if plans must be changed and additional activities must be done,
after the decision is made, the changes must be reflected in the detailed schedule, as this
reflects the tasks actually planned to be performed. Hence, the detailed schedule
becomes the main document that tracks the activities and schedule.
1. Barbara and Steve, the CSS project team, developed the lists for the system scope
document after talking to William McDougal, vice president of marketing and sales, and
his assistants.
a. It is essential to obtain information from and to involve people who will be using the
system and who will obtain the most benefit from it. They provide valuable insights
to ensure that the system satisfies the business needs.
b. The project team conducts a preliminary investigation of alternative solutions to
reassess the assumptions the team made when the project was initiated. Because
the schedule and budget for the remainder of the project inherently assume a
particular approach to developing the system, it is critical to make those implicit
assumptions explicit so that all participants understand the constraints on the project
schedule and the team can perform an accurate feasibility analysis.
c. For example, if an "off-the-shelf" program is identified as a possible solution, part of
the schedule during the analysis phase must include tasks to evaluate the program
against the needs being researched. If the most viable solution appears to be a new
system developed completely in-house, detailed analysis tasks are planned and
scheduled.
NOTE: the most critical element in the success of a system development project is
user involvement.
2. While Barbara was finishing the problem definition statement, Steve did some
preliminary investigation of possible solutions. He researched the trade magazines, the
Internet, and other resources to determine whether sales and customer support
systems could be bought and installed rapidly. Although he found several, none seemed
to have the exact match of capabilities that RMO needed. He and Barbara, along with
William McDougal, had several discussions about how best to proceed. They decided
that the best approach was to proceed with the analysis phase of the project before
making any final decision about solutions. They would revisit this decision, in much
more detail, after the analysis phase activities were completed. For now, Barbara and
Steve began developing a schedule, budget, and feasibility statement for the new
system.
Before discussing the details of a project schedule, let’s clarify three terms: task, activity,
and phase.
For example, suppose that you are scheduling the design phase. Within the design phase,
you identify activities such as Design the user interface, Design and integrate the database,
and Complete the application design. Within the Design the user interface activity, you
might identify individual tasks such as Design the customer entry form and Design the
order-entry form.
Thus, the phase, activity, and task breakdown provides a three-level hierarchy.
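As a small illustration, the hierarchy can be represented as a nested structure; this Python
sketch uses the design-phase names from the paragraph above.

# Phase -> activities -> tasks, using the names from the example above.
wbs = {
    "Design phase": {
        "Design the user interface": [
            "Design the customer entry form",
            "Design the order-entry form",
        ],
        "Design and integrate the database": [],
        "Complete the application design": [],
    },
}

for phase, activities in wbs.items():
    print(phase)
    for activity, tasks in activities.items():
        print("  " + activity)
        for task in tasks:
            print("    " + task)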
During the project planning phase, it may not be possible to schedule every task in the
entire project because it is too early to know all of the tasks that will be necessary.
One of the requirements of the project planning phase is to provide estimates of the time to
complete the project and the total cost of the project.
Because one of the major factors in project cost is payment of salaries to the project team,
the estimate of the time and labor to complete the project becomes critical.
-------------------------------------------------------------------------------------------------------
Q. Write a short note on WBS, explain with a suitable example? (March 12, OCT 13) 5M.
DEVELOPING A WORK BREAKDOWN STRUCTURE
A work breakdown structure (WBS)
It is a list of all the individual tasks that are required to complete the project. It is
essential in planning and executing the project because it is the foundation for
developing the project schedule, for identifying milestones in the schedule, and for
managing cost.
FIGURE 3-10 work breakdown structure for the planning phase of the RMO project
This figure is based on the list of activities in the project planning phase of the SDLC.
In other words, a resource may spend only half-time on a two-day task, which results in
only one day of effort.
For each task, three duration estimates may be developed:
1. An expected duration
2. A pessimistic duration
3. An optimistic duration
Having these three values permits the project manager to develop an optimistic,
expected, and pessimistic schedule.
The project team could meet to brainstorm and try to think of everything that it
needs to complete the project. This meeting is a type of bottom-up approach: just
brainstorm every single task and hope you cover everything.
One of two techniques is used as a starting point. These techniques are called
It provides the most accurate estimate for the duration and effort
required for the project.
Time estimates are more accurate if they are developed at a detailed
level rather than as a "guesstimate" of the major processes.
Schedules built from a good WBS are also much easier to monitor and
control.
The WBS is the key to a successful project.
-------------------------------------------------------------------------------------------------------
Q. What are the steps involved in developing Gantt, PERT, and CPM charts? (OCT
12) 5M.
A PERT/CPM chart is a diagram of all the tasks identified in the WBS, showing the
sequence of dependencies of the tasks. An example of a PERT/CPM chart for RMO is shown
in Figure 3-11.
This example, which is only a partial example of the RMO schedule, illustrates the
basic ideas of a PERT/CPM chart.
Note: Some tasks are done sequentially, one after the other, and other tasks are
done in parallel, or at the same time.
The arrows represent task dependencies and indicate the normal sequence of
carrying out a project.
By showing which tasks can be done concurrently, a PERT chart assists in assigning staff.
It is always a juggling act to balance the availability and workload of the team
members with the dependent and independent tasks.
Building the PERT/CPM chart begins with the list of activities and tasks
developed in the WBS.
The WBS is analyzed, including the duration and expected resources for each
task, to determine the dependencies.
For each task, the chart identifies all the immediate predecessor tasks and
the successor tasks.
Decisions must be made concerning the most probable progression of the
project; for example, should the team design the customer entry form first or
the product definition form? Either will work. Automation does not help in
this step.
The critical path is the longest path through the PERT/CPM diagram and
contains all the tasks that must be done in the defined sequential order. It is
called the critical path because if any task on the critical path is delayed, the
entire project will be completed late. Project managers usually monitor the
tasks on the critical path very carefully.
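To make the idea concrete, here is a minimal Python sketch that finds the critical path,
i.e., the longest-duration path through a task network. The tasks and durations are
hypothetical.

# Each task maps to (duration in days, list of immediate predecessor tasks).
network = {
    "A": (3, []),
    "B": (5, ["A"]),
    "C": (2, ["A"]),
    "D": (4, ["B", "C"]),
}

def critical_path(network):
    memo = {}
    def longest_to(task):                     # longest path that ends at this task
        if task not in memo:
            duration, preds = network[task]
            best_len, best_path = max(
                (longest_to(p) for p in preds), default=(0, []))
            memo[task] = (best_len + duration, best_path + [task])
        return memo[task]
    return max((longest_to(t) for t in network), key=lambda lp: lp[0])

length, path = critical_path(network)
print(" -> ".join(path), f"({length} days)")  # A -> B -> D (12 days)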
Viewing the activities spread out over a calendar requires a different chart called a Gantt
chart.
It is essentially a bar chart, one bar for each task, with the horizontal axis being units of
time.
A Gantt chart is good for monitoring the progress of the project as it moves along.
Figure 3-12 shows a version of a Gantt chart, called a tracking Gantt chart, for the same
tasks shown in Figures 3-10 and 3-11.
In this example of Microsoft Project, the task bars can be color coded to indicate critical
path, not started, partially complete, or completed.
The red tasks are on the critical path, whereas the blue ones are not on the critical path.
Complete tasks, either complete or partially complete, are shown in solid colors. The solid
vertical line in February represents today’s date. So, the project manager can easily check
the status of the tasks on the diagram.
Any tasks to the left of the vertical line that are not completed are behind schedule.
The tasks to the right that have been completed are ahead of schedule.
Tasks that intersect with the vertical line may be either on track, ahead, or behind.
Depending on how status is reported, the Gantt chart might or might not indicate the
status of those tasks.
Most project managers find PERT/CPM charts beneficial while they are developing the
schedule, but Gantt charts are most useful after the project begins.
The examples shown in Figures 3-10, 3-11, and 3-12 only detail the WBS for the project
planning phase. The other SDLC phases would also need to be scheduled.
The SDLC provides the activities for each phase. Each activity is made up of a list of tasks.
A standards-based or analogy-based WBS can be used to provide the detailed list of tasks
for each analysis, design, and implementation phase activity.
If we assume that the project includes overlapping SDLC phases, a Gantt chart showing the
entire project at the phase and activity level of detail might look like Figure 3-13.
Figure 3-13: Tracking Gantt chart for the customer support system project planning
phase
Note: the length of each activity does not imply that the team is working full-time on that
activity from start to finish. The activity starts and continues with varying degrees of effort
for the duration. All team members get used to multitasking, that is, working on more than
one activity or task at the same time. Therefore, an overlapping view of the project is not
useful for calculating total labor cost, but it can show the completion of each phase and the
entire project. The elapsed time for the CSS development project is about nine months.
After that, the support phase begins.
The overlapping phases are usually the result of an iterative approach of the SDLC. For
planning and scheduling purposes, many project managers use project management
software and Gantt charts to plan and track the iterations of the project.
Each iteration includes analysis, design, and implementation activities that focus on a
portion of the system's functionality. Some analysis activities will be included in every
iteration; other activities might be included in only a few.
Analysis activities:
gather information
Design activities:
Implementation activities
Verify
Test.
Other activities from each of these phases might be included in some but not all iterations,
depending on the project plan.
A Gantt chart in Figure 3-14 shows how the RMO project might be scheduled with three
iterations.
The project team does not concern itself with what phase of the SDLC the project is in at
any point in time.
The SDLC phases and activities provide the framework for defining project planning and
multiple iterations that are scheduled throughout the project.
Team Structure:
We have seen that the number of resources is fixed when the schedule is being planned.
Detailed scheduling is done only after actual assignment of people has been done, as task
assignment needs information about the capabilities of the team members.
We have implicitly assumed that the project's team is led by a project manager, who does
the planning and task assignment. This form of hierarchical team organization is fairly
common, and was earlier called the Chief Programmer Team.
In this hierarchical organization, the project manager is responsible for all major technical
decisions of the project. He does most of the design and assigns coding of the different
parts of the design to the programmers.
The team typically consists of programmers, testers, a configuration controller, and possibly
a librarian for documentation. There may be other roles like database manager, network
manager, backup project manager, or a backup configuration controller.
It should be noted that these are all logical roles and one person may do multiple such
roles.
For larger projects, this organization can be extended easily by partitioning the project into
modules, and having module leaders who are responsible for all tasks related to their
module and have a team with them for performing these tasks.
A different team organization is the egoless team: Egoless teams consist of ten or
fewer programmers. The goals of the group are set by consensus, and input from every
member is taken for major decisions. Group leadership rotates among the group members.
Due to their nature, egoless teams are sometimes called democratic teams. This structure
allows input from all members, which can lead to better decisions for difficult problems. This
structure is well suited for long-term research-type projects that do not have time
constraints. It is not suitable for regular tasks that have time constraints; for such tasks,
the communication in democratic structure is unnecessary and results in inefficiency.
In recent times, for very large product developments, another structure has emerged.
This structure recognizes that there are three main task categories in software
development—
Management related
Development related
Testing related.
It also recognizes that it is often desirable to have the test and development teams be
relatively independent, and also not to have the developers or testers report to a
nontechnical manager.
In this structure, consequently, there is an overall unit manager, under whom there are
three small hierarchic organizations—
For program management: The program managers provide the specifications for what is
being built, and ensure that development and testing are properly coordinated.
For development: The primary job of developers is to write code and they work under a
development manager.
For testing: The responsibility of the testers is to test the code and they work under a test
manager.
In a large product this structure may be replicated, one for each major unit. This type of
team organization is used in corporations like Microsoft.
Quick Revision
Software project scheduling is an action that distributes estimated effort across the
planned project duration by allocating the effort to specific software engineering tasks.
Compartmentalization. The project must be compartmentalized into a number of
manageable activities and tasks. To accomplish compartmentalization, both the product
and the process are refined.
Interdependency. The interdependency of each compartmentalized activity or task
must be determined. Some tasks must occur in sequence, while others can occur in
parallel. Some activities cannot commence until the work product produced by another
is available. Other activities can occur independently.
Time allocation. Each task to be scheduled must be allocated some number of work
units (e.g., person-days of effort). In addition, each task must be assigned a start date
and a completion date that are a function of the interdependencies and whether work
will be conducted on a full-time or part-time basis.
Effort validation. Every project has a defined number of people on the software team.
As time allocation occurs, you must ensure that no more than the allocated number of
people has been scheduled at any given time.
Defined responsibilities. Every task that is scheduled should be assigned to a specific
team member.
Defined outcomes. Every task that is scheduled should have a defined outcome.
Defined milestones. Every task or group of tasks should be associated with a project
milestone.
A work breakdown structure (WBS)
It is a list of all the individual tasks that are required to complete the project. It is
essential in planning and executing the project because it is the foundation for
developing the project schedule, for identifying milestones in the schedule, and for
managing cost.
PERT stands for Program Evaluation and Review Technique,
CPM stands for Critical Path Method.
A PERT/CPM chart is a diagram of all the tasks identified in the WBS, showing the
sequence of dependencies of the tasks.
The SDLC provides the activities for each phase. Each activity is made up of a list of
tasks. A standards-based or analogy- based WBS can be used to provide the detailed list
of tasks for each analysis, design, and implementation phase activity.
1. Q. Explain the steps involved in Project Scheduling?
2. Q. What is the requirement of detailed project scheduling?
3. Q. Write a short note on WBS, explain with a suitable example?
4. Q. What are the steps involved in developing Gantt, PERT, and CPM charts?
5. Q. Why is Project Scheduling dynamic in nature (i.e., not static)?
Topic Covered:
All these are reflected as changes in the files containing source, data, or
documentation.
The IEEE defines SCM as "the process of identifying and defining the items in the
system, controlling the change of these items throughout their life cycle,
recording and reporting the status of items and change requests, and verifying
the completeness and correctness of items".
Though all three are types of changes, changes due to product evolution and
changes due to bug fixes can, in some sense, be treated as a natural part of the
project itself, which has to be dealt with even if the requirements do not
change.
Requirements changes, on the other hand, have a different dynamic.
In this situation, how does a program manager ensure that the appropriate
versions of sources are combined without missing any source, and the correct
versions of the documents, which are consistent with the final source, are sent?
This is ensured through proper CM.
-------------------------------------------------------------------------------------------------------
Q. Explain CM Functionality?
CM Functionality
To better understand CM, let us consider some of the functionality that a project
requires from the CM process.
CM Mechanisms
The main purpose of CM is to provide various mechanisms that can support the
functionality needed by a project to handle the types of scenarios discussed above that
arise due to changes.
The mechanisms commonly used to provide the necessary functionality include the
following
Most CM systems also provide means for access control. To understand the need
for access control, let us understand the life cycle of an SCI.
Typically, while an SCI is under development and is not visible to others, it
is considered to be in the working state. An SCI in the working state is not under
SCM and can be changed freely. Once the developer is satisfied that the SCI is
stable enough for it to be used by others, the SCI is given for review, and the
item enters the state "under review."
Once an item is in this state, it is considered as "frozen," and any changes made
to a private copy that the developer may have made are not recognized.
After a successful review the SCI is entered into a library, after which the item is
formally under SCM.
The basic purpose of this review is to make sure that the item is of satisfactory
quality and is needed by others, though the exact nature of review will depend on
the nature of the SCI and the actual practice of SCM.
For example, the review might entail checking if the item meets its specifications
or if it has been properly unit tested. If the item is not approved, the developer
may be given the item back and the SCI enters the working state again. This is the
"life cycle" of an item from the SCM perspective.
Once an SCI is in the library, any modification should be controlled, as others
may be using that item. Hence, access to items in the library is controlled. For
making an approved change, the SCI is checked out of the library, the change is
made, the modification is reviewed and then the SCI is checked back into the
library.
When a new version is checked in, the old version is not replaced and both old
and new versions may exist in the library—often logically with one file being
maintained along with information about changes to recreate the older version.
This aspect of SCM is sometimes called library management and is done with the
aid of tools.
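As a rough sketch of this SCI life cycle and library management (a hypothetical illustration, not the interface of any real CM tool):

# Hypothetical sketch of an SCI's life cycle: working -> under review -> in library.
# Once in the library, changes go through a controlled check-out/check-in cycle,
# and old versions are retained so they can be recreated.

class SCI:
    def __init__(self, name, content):
        self.name = name
        self.content = content
        self.state = "working"      # freely changeable, not under SCM
        self.versions = []          # history of checked-in versions

    def submit_for_review(self):
        assert self.state == "working"
        self.state = "under review"  # item is now "frozen"

    def review(self, approved):
        assert self.state == "under review"
        if approved:
            self.state = "in library"   # formally under SCM
            self.versions.append(self.content)
        else:
            self.state = "working"      # back to the developer

    def check_out(self):
        assert self.state == "in library"
        self.state = "checked out"
        return self.content

    def check_in(self, new_content):
        # The old version is not replaced; both versions exist in the library.
        assert self.state == "checked out"
        self.content = new_content
        self.versions.append(new_content)
        self.state = "in library"

item = SCI("login_module.c", "v1 source")
item.submit_for_review()
item.review(approved=True)
draft = item.check_out()
item.check_in(draft + " + approved change")
print(item.state, "| versions kept:", len(item.versions))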
CM Process
The CM process defines the set of activities that need to be performed to control change.
STAGES OF CM PROCESS
Any customer supplied products or purchased items that will be part of the delivery
(called "included software product") are also configuration items.
As there are typically a lot of items in a project, how they are to be organized is also
decided in the planning phase.
Typically, the directory structure that will be employed to store the different
elements is decided in the plan.
A software process is not a static entity—it has to change to improve so that the products
produced using the process are of higher quality and are less costly. Improving quality and
productivity are fundamental goals of engineering. To achieve these goals the software
process must continually be improved, as quality and productivity are determined to a great
extent by the process. Improving the quality and productivity of the process is the main
objective of the process management process.
-------------------------------------------------------------------------------------------------------
Q. What is the difference between project & process management?
To improve its software process, an organization needs to first understand the current
status of its process and then develop a plan to improve it. It is generally agreed
that changes to a process are best introduced in small increments and that it is not feasible
to totally revolutionize a process. The reason is that it takes time to internalize and truly
follow any new methods that may be introduced. And only when the new methods are
properly implemented will their effects be visible. Introducing too many new methods for
the software process will make the task of implementing the change very hard.
Which improvement activities to undertake depends on the current state of the process.
For example, if the process is very primitive, there is no point in suggesting sophisticated
metrics-based project control as an improvement strategy; incorporating it in a primitive
process is not easy.
On the other hand, if the process is already using many basic models, such a step
might be the right step to further improve the process. Hence, deciding what activities to
undertake for process improvement is a function of the current state of the process. Once
some process improvement takes place, the process state may change, and a new set of
possibilities may emerge. This concept of introducing changes in small increments based on
the current state of the process has been captured in the Capability Maturity Model (CMM)
framework.
Software process capability describes the range of expected results that can be
achieved by following the process. The process capability of an organization determines
what can be expected from the organization in terms of quality and productivity.
The CMM framework says that as process improvement is best incorporated in small
increments, processes go from their current levels to the next higher level when they are
improved. Hence, during the course of process improvement, a process moves from level
to level until it reaches level 5. This is shown in Figure 2.17.
The CMM provides characteristics of each level, which can be used to assess the
current level of the process of an organization. Because improvement moves a process
from one level to the next, the characteristics of the levels also suggest the areas in
which the process should be improved so that it can move to the next higher level.
For each level it specifies the areas in which improvement can be absorbed and will
bring the maximum benefits. Overall, this provides a roadmap for continually improving the
process.
The initial process (level 1) is essentially an ad hoc process that has no formalized
method for any activity. Basic project controls for ensuring that activities are being done
properly, and that the project plan is being adhered to, are missing. In a crisis, project
plans and development processes are abandoned in favor of a code-and-test type of
approach. Success in such organizations depends solely on the quality and capability of
individuals. The process capability is unpredictable as the process constantly changes.
Organizations at this level can benefit most by improving project management, quality
assurance, and change control.
In a repeatable process (level 2), policies for managing a software project and
procedures to implement those policies exist. That is, project management is well developed
in a process at this level. Some of the characteristics of a process at this level are:
Project commitments are realistic and based on past experience with similar
projects,
Cost and schedule are tracked and problems resolved when they arise,
Essentially, results obtained by this process can be repeated as the project planning
and tracking is formal.
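At the defined level (level 3), the processes for both management and engineering
activities are documented, standardized, and integrated into a standard software process
for the organization. All projects use an approved, tailored version of this standard
process, which makes the process capability consistent across projects.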
At the managed level (level 4) quantitative goals exist for process and products.
Data is collected from software processes, which is used to build models to characterize the
process. Hence, measurement plays an important role in a process at this level. Due to the
models built, the organization has good insight into the process capability and its
deficiencies. The results of using such a process can be predicted in quantitative terms.
At the optimizing level (level 5), the focus of the organization is on continuous
process improvement. Data is collected and routinely analyzed to identify areas that can be
strengthened to improve quality or productivity. New technologies and tools are introduced
and their effects measured in an effort to improve the performance of the process. Best
software engineering and management practices are used throughout the organization. This
CMM framework can be used to improve the process. Improvement requires first assessing
the level of the current process. Based on the current level, the areas in which maximum
benefits can be derived are known from the framework.
For example, for improving a process at level 1 (or for going from level 1 to level 2),
project management and the change control activities must be made more formal. The
complete CMM framework provides more details about which particular areas need to be
strengthened to move up the maturity framework. This is generally done by specifying the
key process areas of each maturity level, which in turn, can be used to determine which
areas to strengthen to move up. Some of the key process areas of the different levels are
shown in Figure 2.18
Though the CMM framework specifies the process areas that should be improved to
increase the maturity of the process, it does not specify how to bring about the
improvement. That is, it is essentially a framework that does not suggest detailed
prescriptions for improvement, but guides the process improvement activity along the
maturity levels such that process improvement is introduced in increments and the
improvement activity at any time is clearly focused. Many organizations have successfully
used this framework to improve their processes. It is a major driving force for process
improvement. A detailed example of how an organization that follows the CMM executes its
project can be found in [96].
-------------------------------------------------------------------------------------------------------
Risk implies that there is a possibility that something negative may happen.
In the context of software projects, negative implies that there is an adverse effect
on cost, quality, or schedule.
Risk management is the area that tries to ensure that the impact of risks on cost,
quality, and schedule is minimal. Risk management can be considered as dealing with the
possibility and actual occurrence of those events that are not "regular" or commonly
expected, that is, they are probabilistic.
So, in a sense, risk management begins where normal project management ends.
It deals with events that are infrequent, somewhat out of the control of the project
management, and which can have a major impact on the project. Most projects have risk.
The idea of risk management is to minimize the possibility of risks materializing, if possible,
or to minimize the effects if risks actually materialize.
For example, when constructing a building, there is a risk that the building may later
collapse due to an earthquake. That is, the possibility of an earthquake is a risk. If the
building is a large residential complex, then the potential cost in case the earthquake risk
materializes can be enormous. This risk can be reduced by shifting to a zone that is not
earthquake prone. Alternatively, if this is not acceptable, then the effects of this risk
materializing are minimized by suitably constructing the building (the approach taken in
Japan and California). At the same time, if a small dumping ground is to be constructed,
no such approach might be followed, as the financial and other impact of an actual
earthquake on such a building is so low that it does not warrant special measures.
It should be clear that risk management has to deal with identifying the undesirable
events that can occur, the probability of their occurring, and the loss if an undesirable event
does occur. Once this is known, strategies can be formulated for either reducing the
probability of the risk materializing or reducing the effect of the risk materializing. So risk
management revolves around risk assessment and risk control. For each of these major
activities, some subactivities must be performed. A breakdown of these activities is given in
Figure 5.4.
-------------------------------------------------------------------------------------------------------
Risk Assessment
Risk assessment is an activity that must be undertaken during project planning. This
involves identifying the risks, analyzing them, and prioritizing them on the basis of the
analysis. Due to the nature of a software project, uncertainties are highest near the
beginning of the project (just as for cost estimation). Due to this, although risk assessment
should be done throughout the project, it is most needed in the starting phases of the
project. The goal of risk assessment is to prioritize the risks so that attention and resources
can be focused on the more risky items.
Risk identification is the first step in risk assessment, which identifies all the different
risks for a particular project. These risks are project-dependent and identifying them is an
exercise in envisioning what can go wrong.
The top-ranked risk item is personnel shortfalls. This involves just having fewer
people than necessary or not having people with specific skills that a project might
require. Some of the ways to manage this risk are to get the top talent possible and
to match the needs of the project with the skills of the available personnel.
Adequate training, along with having some key personnel for critical areas of the
project, will also reduce this risk.
i. Unrealistic schedules and budgets happen very frequently due to business and
other reasons. It is very common that high-level management imposes a
schedule for a software project that is not based on the characteristics of the
project and is unrealistic. Underestimation may also happen due to inexperience
or optimism.
ii. The next few items are related to requirements. Projects run the risk of
developing the wrong software if the requirements analysis is not done properly
and if development begins too early.
iii. Often an improper user interface may be developed. This requires extensive rework
of the user interface later, or the software benefits are not obtained because users
are reluctant to use it.
iv. Gold plating refers to adding features in the software that are only marginally
useful. This adds unnecessary risk to the project because gold plating consumes
resources and time with little return.
v. Some requirement changes are to be expected in any project, but sometimes
frequent changes are requested, which is often a reflection of the fact that the
client has not yet understood or settled on its own requirements. The effect of
requirement changes is substantial in terms of cost, especially if the changes
occur when the project has progressed to later phases.
vi. Performance shortfalls are critical in real-time systems and poor performance can
mean the failure of the project. If a project depends on externally available
components—either to be provided by the client or to be procured as an off-the-
shelf component—the project runs some risks.
vii. The project might be delayed if the external component is not available on time.
The project would also suffer if the quality of the external component is poor or if
the component turns out to be incompatible with the other project components or
with the environment in which the software is developed or is to operate.
viii. If a project relies on technology that is not well developed, it may fail. This is a
risk due to straining the computer science capabilities.
Common methods for identifying risks include meetings and brainstorming, and
reviews of plans, processes, and work products. Another is a checklist of the top risk
items prepared from a survey of previous projects; such a list can form the starting
point for identifying risks for the current project.
Using the checklist of the top 10 risk items is one way to identify risks. This approach is
likely to suffice in many projects. The other methods are decision driver analysis,
assumption analysis, and decomposition. Decision driver analysis involves questioning and
analyzing all the major decisions taken for the project. If a decision has been driven by
factors other than technical and management reasons, it is likely to be a source of risk in
the project. Such decisions may be driven by politics, marketing, or the desire for short-
term gain. Optimistic assumptions made about the project also are a source of risk. Some
such optimistic assumptions are that nothing will go wrong in the project, no personnel will
quit during the project, people will put in extra hours if required, and all external
components (hardware or software) will be delivered on time. Identifying such assumptions
will point out the source of risks.
An effective method for identifying these hidden assumptions is comparing them with past
experience. Decomposition implies breaking a large project into clearly defined parts and
then analyzing them. Many software systems have the phenomenon that 20% of the
modules cause 80% of the project problems. Decomposition will help identify these
modules. Risk identification merely identifies the undesirable events that might take place
during the project, i.e., enumerates the "unforeseen" events that might occur. It does not
specify the probabilities of these risks materializing nor the impact on the project if the risks
indeed materialize.
In risk analysis, the probability of occurrence of a risk has to be estimated, along with the
loss that will occur if the risk does materialize. This is often done through discussion, using
experience and understanding of the situation. However, if cost models are used for cost
and schedule estimation, then the same models can be used to assess the cost and
schedule risk.
For example, in the COCOMO cost model, the cost estimate depends on the ratings of the
different cost drivers. One possible source of cost risk is underestimating these cost drivers.
The other is underestimating the size. Risk analysis can be done by estimating the worst-
case value of the size and of all the cost drivers and then estimating the project cost from
these values. This will give us the worst-case analysis. Using the worst-case effort
estimate, the worst-case schedule can easily be obtained. A more detailed analysis can be
done by considering different cases or a distribution of these drivers.

The other approaches for risk analysis include studying the probability and the outcome of
possible decisions (decision analysis), understanding the task dependencies to decide
critical activities and the probability and cost of their not being completed on time (network
analysis), assessing risks on the various quality factors like reliability and usability (quality
factor analysis), and evaluating the performance early through simulation if there are
strong performance constraints on the system (performance analysis). The reader is
referred to the literature for further discussion of these topics.

Once the probabilities of risks materializing and the losses due to the materialization of
different risks have been analyzed, the risks can be prioritized. One approach for
prioritization is through the concept of risk exposure (RE), which is sometimes called risk
impact. RE is defined by the relationship

RE = Prob(UO) * Loss(UO)

where Prob(UO) is the probability of the risk materializing (i.e., of the undesirable outcome)
and Loss(UO) is the total loss incurred due to the unsatisfactory outcome. The loss is not
only the direct financial loss that might be incurred but also any loss in terms of credibility,
future business, and loss of property or life. The RE is the expected value of the loss due to
a particular risk. For risk prioritization using RE, the higher the RE, the higher the priority of
the risk item.
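As a small illustration of RE-based prioritization (the risks, probabilities, and loss figures below are hypothetical):

# Hypothetical risk register: risk -> (probability of occurring, loss in $ if it occurs)
risks = {
    "personnel shortfall":     (0.30, 200_000),
    "unrealistic schedule":    (0.25, 300_000),
    "requirement changes":     (0.50, 100_000),
    "external component late": (0.10, 400_000),
}

# Risk exposure: RE = Prob(UO) * Loss(UO)
exposures = {name: prob * loss for name, (prob, loss) in risks.items()}

# Higher RE means higher priority.
for name, re in sorted(exposures.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:28s} RE = ${re:,.0f}")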
It is not always possible to use models and prototypes to assess the probabilities of
occurrence and of loss associated with particular events. Due to the non-availability of
models, assessing risk probabilities is frequently subjective. A subjective assessment can be
done by the estimate of one person or by using a group consensus technique like the Delphi
approach. In the Delphi method, a group of people discusses the problem of estimation and
finally converges on a consensus estimate.
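To make the worst-case COCOMO analysis described earlier concrete, here is a minimal sketch. The a and b constants are the usual intermediate-COCOMO organic-mode values, but the size estimates and cost-driver multipliers are purely illustrative:

# Intermediate COCOMO: Effort (person-months) = a * (KLOC ** b) * EAF,
# where EAF is the product of the cost-driver multipliers.
A, B = 3.2, 1.05  # organic-mode constants

def cocomo_effort(kloc, driver_multipliers):
    eaf = 1.0
    for m in driver_multipliers:
        eaf *= m
    return A * (kloc ** B) * eaf

# Nominal estimate: expected size and nominal (1.0) driver ratings.
nominal = cocomo_effort(30, [1.0, 1.0, 1.0])

# Worst case: underestimated size plus pessimistic ratings for a few
# drivers (e.g., complexity, required reliability) -- hypothetical values.
worst = cocomo_effort(45, [1.15, 1.30, 1.07])

print(f"Nominal effort:    {nominal:.0f} person-months")
print(f"Worst-case effort: {worst:.0f} person-months")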
-------------------------------------------------------------------------------------------------------
Risk Control
The main objective of risk management is to identify the top few risk items and then
focus on them. Once a project manager has identified and prioritized the risks, the top risks
can be easily identified. The question then becomes what to do about them. Knowing the
risks is of value only if you can prepare a plan so that their consequences are minimal—that
is the basic goal of risk management. One obvious strategy is risk avoidance, which entails
taking actions that will avoid the risk altogether, like the earlier example of shifting the
building site to a zone that is not earthquake-prone. For some risks, avoidance might be
possible. For most risks, the strategy is to perform the actions that will either reduce the
probability of the risk materializing or reduce the loss due to the risk materializing. These
are called risk mitigation steps.

To decide what mitigation steps to take, a list of commonly used risk mitigation steps for
various risks is very useful. Generally, the compiled table of commonly occurring risks also
contains a compilation of the methods used for mitigation in the projects in which the risks
appeared. Note that unlike risk assessment, which is largely an analytical exercise, risk
mitigation comprises active measures that have to be performed to minimize the impact of
risks. In other words, selecting a risk mitigation step is not just an intellectual exercise. The
risk mitigation step must be executed (and monitored). To ensure that the needed actions
are executed properly, they must be incorporated into the detailed project schedule.

Risk prioritization and consequent planning are based on the risk perception at the time the
risk analysis is performed. Because risks are probabilistic events that frequently depend on
external factors, the threat due to risks may change with time as factors change. Clearly,
then, the risk perception may also change with time. Furthermore, the risk mitigation steps
undertaken may affect the risk perception. This dynamism implies that risks in a project
should not be treated as static and must be monitored and reevaluated periodically. Hence,
in addition to monitoring the progress of the planned risk mitigation steps, a project must
periodically revisit the risk perception and modify the risk mitigation plans, if needed. Risk
monitoring is the activity of monitoring the status of various risks and their control
activities. One simple approach for risk monitoring is to analyze the risks afresh at each
major milestone, and change the plans as needed.
Quick Revision
_________________________________________________________________________
Topic Covered:
Use cases are used widely as a method for describing customer-level or business
domain requirements that imply software features and functions. It would seem reasonable
to use the use case as a normalization measure similar to LOC or PP. Like FP, the use case
is defined early in the software process, allowing it to be used for estimation before
significant modeling and construction activities are initiated. Use cases describe (indirectly,
at least) user-visible functions and features that are basic requirements for a system. The
use case is independent of programming language. In addition, the number of use cases is
directly proportional to the size of the application in LOC and to the number of test cases
that will have to be designed to fully exercise the application. Because use cases can be
created at vastly different levels of abstraction, there is no standard "size" for a use case.
Without a standard measure of what a use case is, its application as a normalization
measure (e.g., effort expended per use case) is suspect.
-------------------------------------------------------------------------------------------------------
Q. Write a short note on Use Cases Estimation. (OCT 13) 5M.
Q. How to estimate LOC with the help of Use Cases? Explain with an example?
As I have noted throughout Part 2 of this book, use cases provide a software team
with insight into software scope and requirements. However, developing an estimation
approach with use cases is problematic for the following reasons
Use cases are described using many different formats and styles—there is no
standard form.
Use cases represent an external view (the user’s view) of the software and
can therefore be written at many different levels of abstraction.
Use cases do not address the complexity of the functions and features that
are described.
Use cases can describe complex behavior (e.g., interactions) that involves
many functions and features.
Unlike an LOC or a function point, one person’s "use case" may require months of
effort while another person’s use case may be implemented in a day or two.
Smith argues that any level of this structural hierarchy can be described by no more
than 10 use cases. Each of these use cases would encompass no more than 30 distinct
scenarios. Obviously, use cases that describe a large system are written at a much higher
level of abstraction (and represent considerably more development effort) than use cases
that are written to describe a single subsystem. Therefore, before use cases can be used for
estimation, the level within the structural hierarchy is established, the average length (in
pages) of each use case is determined, the type of software (e.g., real-time, business,
engineering/scientific, WebApp, embedded) is defined, and a rough architecture for the
system is considered. Once these characteristics are established, empirical data may be
used to establish the estimated number of LOC or FP per use case (for each level of the
hierarchy). Historical data are then used to compute the effort required to develop the
system.
Expression (26.2) takes the form:

LOC estimate = N × LOCavg + [(Sa/Sh − 1) + (Pa/Ph − 1)] × LOCadjust

where
N = actual number of use cases
LOCavg = historical average LOC per use case for this type of subsystem
LOCadjust = adjustment, representing up to n percent of LOCavg
Sa = actual scenarios per use case
Sh = average scenarios per use case for this type of subsystem
Pa = actual pages per use case
Ph = average pages per use case for this type of subsystem
Expression (26.2) could be used to develop a rough estimate of the number of LOC based
on the actual number of use cases adjusted by the number of scenarios and the page length
of the use cases. The adjustment represents up to n percent of the historical average LOC
per use case.
Using the relationship noted in Expression (26.2) with n = 30 percent, the table
shown in Figure 26.5 is developed. Considering the first row of the table, historical data
indicate that UI software requires an average of 800 LOC per use case when the use case
has no more than 12 scenarios and is described in less than five pages. These data conform
reasonably well for the CAD system. Hence the LOC estimate for the user interface
subsystem is computed using expression (26.2). Using the same approach, estimates are
made for both the engineering and infrastructure subsystem groups.Figure26.5 summarizes
the estimates and indicates that the overall size of the CAD is estimated at 42,500 LOC.
Using 620 LOC/pm as the average productivity for systems of this type and a
burdened labor rate of $8000 per month, the cost per line of code is approximately $13.
Based on the use-case estimate and the historical productivity data, the total estimated
project cost is $552,000 and the estimated effort is 68 person-months.
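The arithmetic above can be reproduced with a short sketch (the figures are the ones quoted in the text; Figure 26.5 itself is not reproduced here):

# Rough use-case-based size, effort, and cost estimate for the CAD example.
total_loc = 42_500      # overall estimated size (sum of the subsystem estimates)
productivity = 620      # historical average, LOC per person-month
labor_rate = 8_000      # burdened labor rate, $ per person-month

cost_per_loc = round(labor_rate / productivity)   # ~= $13 per LOC
total_cost = total_loc * cost_per_loc             # ~= $552,000
effort_pm = int(total_loc / productivity)         # ~= 68 person-months

print(f"Cost per LOC: ${cost_per_loc}")
print(f"Project cost: ${total_cost:,}")
print(f"Effort:       {effort_pm} person-months")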
-------------------------------------------------------------------------------------------------------
Although an iterative model is the best framework for an OO project, task parallelism
makes project tracking difficult. You may have difficulty establishing meaningful milestones
for an OO project because a number of different things are happening at once. In general,
the following major milestones can be considered "completed" when the criteria noted have
been met.
All classes and the class hierarchy have been defined and reviewed.
Class attributes and operations associated with a class have been defined and
reviewed.
Class relationships (Chapter 6) have been established and reviewed.
A behavioral model (Chapter 7) has been created and reviewed.
Reusable classes have been noted.
Recalling that the OO process model is iterative, each of these milestones may be revisited
as different increments are delivered to the customer.
-------------------------------------------------------------------------------------------------------
Q. Explain CASE tools?
CASE Tools: CASE tools are a class of software that automate many of the activities
involved in various life cycle phases. For example, when establishing the functional
requirements of a proposed application, prototyping tools can be used to develop
graphic models of application screens to assist end users to visualize how an application
will look after development. Subsequently, system designers can use automated design
tools to transform the prototyped functional requirements into detailed design
documents. Programmers can then use automated code generators to convert the
design documents into code. Automated tools can be used collectively, as mentioned, or
individually. For example, prototyping tools could be used to define application
requirements that get passed to design technicians who convert the requirements
into detailed designs in a traditional manner using flowcharts and narrative documents,
without the assistance of automated design software.
1. Life-cycle support
2. Integration dimension
3. Construction dimension
4. Knowledge-based CASE dimension
Let us take the meaning of these dimensions along with their examples one by one:
Life-cycle support
This dimension classifies CASE tools on the basis of the activities they support in the
information systems life cycle. They can be classified as Upper or Lower CASE tools.
Integration dimension
1. CASE Framework
2. ICASE Tools
3. Integrated Project Support Environment (IPSE)
Workbenches
Workbenches integrate several CASE tools into one application to support specific software-
process activities. Hence they achieve:
Use cases are used widely as a method for describing customer-level or business domain
requirements that imply software features and functions.
Use cases are described using many different formats and styles—there is no standard
form.
Use cases represent an external view (the user’s view) of the software and can therefore
be written at many different levels of abstraction.
Use cases do not address the complexity of the functions and features that are
described.
Use cases can describe complex behavior (e.g., interactions) that involves many
functions and features.
CASE tools are a class of software that automate many of the activities involved in
various life cycle phases. For example, when establishing the functional requirements of
a proposed application, prototyping tools can be used to develop graphic models of
application screens to assist end users to visualize how an application will look after
development.
Topic Covered:
Unified Process, Its phases & disciplines, Agile Development – Principles & Practices,
Extreme programming- Core values & Practices , Frameworks, Components,
Services, Introduction to Design Patterns.
UP Phases:
Each phase of the UP life cycle describes the emphasis or objectives of the project team
members and their activities at that point in time. The four phases provide a general
framework for planning and tracking the project over time. Within each phase, several
iterations are planned to allow the team flexibility to adjust to problems or changing
conditions. The emphases or objectives of the project team in each of the four phases are
described briefly in Figure 16-2.
-------------------------------------------------------------------------------------------------------
Figure 16-3 shows how the UP disciplines are involved in each iteration, which is
typically planned to span four weeks.
The size of the shaded area under the curve for each discipline indicates the relative
amount of work included in each discipline during the iteration. The amount and nature of
the work differs from iteration to iteration. For example, in iteration 2 much of the effort
focuses on business modeling and defining requirements, with much less effort focused on
implementation and deployment. In iteration 5, very little effort is focused on modeling and
requirements, with much more effort focused on implementation, testing, and deployment.
But most iterations involve some work in all disciplines.
Figure 16-4 shows the entire UP life cycle—phases, iterations, and disciplines. This
figure includes all of the key UP life cycle features and is useful for understanding how a
typical information system development project is managed.
The previous figures illustrate how the phases include activities from each discipline.
But what about the detailed activities that occur within each discipline?
1. Business modeling
2. Requirements
3. Design
4. Implementation
5. Testing
6. Deployment
Each iteration is similar to a mini project, which completes a portion of the system.
For each iteration, the project team must understand the business environment
(business modeling),
Define the requirements that the portion of the system must satisfy
(requirements),
Design a solution for that portion of the system that satisfies the requirements
(design),
Write and integrate the computer code that makes the portion of the system
actually work (implementation),
Thoroughly test the portion of the system (testing),
Then in some cases put the portion of the system that is completed and tested into
operation for users (deployment).
Three additional support disciplines are necessary for planning and controlling the project:
project management, configuration and change management, and environment.
All nine UP disciplines are employed throughout the lifetime of a project but to
different degrees.
For example, in the inception phase there is one iteration. During the inception
phase iteration, the project manager might complete a model showing some aspect of the
system environment (the business modeling discipline). The scope of the system is
delineated by defining many of the key requirements of the system and listing use cases
(the requirements discipline).
The elaboration phase includes several iterations. In the first iteration, the team
works on the details of the domain classes and use cases addressed in the iteration (the
business modeling and requirements disciplines). At the same time, they might complete
the description of all use cases to finalize the scope (the requirements discipline). The use
cases addressed in the iteration are designed by creating design class diagrams and
interaction diagrams (the design discipline), programmed using Java or Visual Basic .NET
(the implementation discipline), and fully tested (the testing discipline). The project
manager works on the plan for the next iteration and continues to refine the schedule and
feasibility assessments (the project management discipline), and all team members
continue to receive training on the UP activities they are completing and the system
development tools they are using (the environment discipline).
By the time the project progresses to the construction phase, most of the use cases
have been designed and implemented in their initial form. The focus of the project turns to
satisfying other technical, performance, and reliability requirements for each use case,
finalizing the design, and implementation. These requirements are usually routine and lower
risk, but they are key to the success of the system. The effort focuses on designing system
controls and security and on implementing and testing these aspects.
-------------------------------------------------------------------------------------------------------
Q. Write a short note on agile development?(March 12,13,14)5M.
Agile Development:
The highly volatile marketplace has forced businesses to respond rapidly to new
opportunities. Sometimes new opportunities appear in the middle of implementing another
business initiative. To survive, businesses must be agile. Agility—being able to change
directions rapidly, even in the middle of a project—is the keystone of Agile Development.
Agile Development is a philosophy and set of guidelines for developing software in an
unknown, rapidly changing environment. It provides an overarching philosophy for specific
development approaches such as the Unified Process. The amount of agility in each
approach can vary. For example, we identified the UP as being somewhat adaptive. Some
UP projects may adopt many agile philosophies, and others may use fewer.
The "Manifesto for Agile Software Development" identifies four basic values, which
represent the core philosophy of the agile movement. The four values emphasize:
responding to change over following a plan,
individuals and interactions over processes and tools,
working software over comprehensive documentation, and
customer collaboration over contract negotiation.
Note that each of the phrases in the list prioritizes the value on the left over the
value on the right. The people involved in system development, whether as team
members, users, or other stakeholders, all need to accept these priorities for a project to
be truly agile. Adopting an agile approach is not always easy.
Some industry leaders in the agile movement coined the term chaordic to describe
an agile project. Chaordic comes from two words, chaos and order. The first two values in
our list—responding to change over following a plan and individuals and interactions over
processes and tools—do seem to be a recipe for chaos. But they recognize that software
projects inherently have many unknowns and unpredictable elements, and hence a certain
amount of chaos. Developers need to accept the chaos but also need to use the specific
methodologies discussed later to impose order on this chaos to move the project ahead.
Managers and executive stakeholders frequently struggle to accept this less rigid
point of view, often wanting to impose more controls on development teams and to enforce
detailed plans and schedules. However, the agile philosophy takes the opposite approach,
providing more flexibility in project schedules and letting the project teams plan and
execute their work as the project progresses.
Contracts also take on an entirely different flavor. Fixed prices and fixed deliverables
do not make sense. Contracts take more of a collaborative tack but include options for the
customer to cancel if the project is not progressing, as measured by the incremental
deliverables. Incremental deliverables in agile projects are working pieces of the new
system, not documents or specifications.
Models and modeling are critical to Agile Development, so we look next at Agile
Modeling. Many of the core values are illustrated in the principles and practices of building
models.
-------------------------------------------------------------------------------------------------------
Your first impression might be that an agile approach means less modeling, or
maybe even no modeling. Agile Modeling (AM) is not about doing less modeling but about
doing the right kind of modeling at the right level of detail for the right purposes. Early in
this chapter, we identified two primary reasons to build models: (1) to understand what you
are building and (2) to communicate important aspects of the solution system. AM consists
of a set of principles and practices that reinforce these two reasons for modeling. AM does
not dictate which models to build or how formal to make those models but instead helps
developers to stay on track with their models—by using them as a means to an end instead
of building models as end deliverables. AM’s basic principles express the attitude that
developers should have as they develop software. Figure 16-6 summarizes the Agile
Modeling principles.
Develop Software as Your Primary Goal The primary goal of a software development
project is always to produce high-quality software. The primary measurement of progress is
working software, not intermediate models of system requirements or specifications.
Modeling is always a means to an end, not the end itself:
Any activity that does not directly contribute to the end goal of producing software
should be questioned and avoided if it cannot be justified.
Enable the Next Effort as Your Secondary Goal Focusing only on working software
can also be self- defeating, so developers must consider two important objectives. First,
requirements models might be necessary to develop design models. So, do not think that if
the model cannot be used to write code it is unnecessary. Sometimes several intermediate
steps are needed before the final code can be written. Second, although high-quality
software is the primary goal, long-term use of that code is also important. So, some models
might be necessary to support maintenance and enhancement of the system. Yes, the code
is the best documentation, but some architectural design decisions might not be easily
identified from the code. Look carefully at what other artifacts might be necessary to
produce high-quality systems in the long term.
Minimize Your Modeling Activity—Few and Simple Create only the models that are
necessary. Do just enough to get by. This principle is not a justification for sloppy work or
inadequate analysis. The models you create should be clear, correct, and complete. But do
not create unnecessary models. Also, keep each model as simple as possible. Normally, the
simplest solution is the best solution. Elaborate solutions tend to be difficult to understand
and maintain. However, we emphasize again that simplicity is not justification for being
incomplete.
Work in small steps and address problems in small bites. Change your model
incrementally, and then validate it to make sure it is correct. Do not try to accomplish
everything in one big release.
Model with a Purpose We indicated earlier that the two reasons to build models are
to understand what you are building and to communicate important aspects of the solution
system. Make sure that your modeling efforts support those reasons. Sometimes
developers try to justify building certain models by claiming that (1) the development
methodology mandates the development of the model, (2) someone wants a model, even
though the person does not know why it is important, or (3) a model can replace a face-to-
face discussion of issues. Always identify a reason and an audience for each model you
develop. Then develop the model in sufficient detail to satisfy the reason and the audience.
Incidentally, the audience might be you.
Build Multiple Models UML, along with other modeling methodologies, has several
models to represent different aspects of the problem at hand. To be successful—in
understanding or communication—you will need to model various aspects of the required
solution. Don’t develop all of them; be sure to minimize your modeling, but develop enough
models to make sure you have addressed all the issues.
Build High-Quality Models and Get Feedback Rapidly Nobody likes sloppy work. It is
based on faulty thinking and introduces errors. One way to avoid error in models is to get
feedback rapidly, while the work is still fresh. Feedback comes from users, as well as
technical team members. Others will have helpful insights and different ways to view a
problem and identify a solution.
Focus on Content Rather than Representation Sometimes a project team has access
to a sophisticated CASE tool. CASE tools can be helpful, but at times they are distracting
because developers spend time making the diagrams pretty. Be judicious in the use of tools.
Some models need to be well drawn for communication or contracts or even to handle
expected changes and updates. In other cases, a hand- drawn diagram might suffice.
Learn from Each Other with Open Communication All of the adaptive approaches
emphasize working in teams. Do not be defensive about your models. Other team members
have good suggestions. You can never truly master every aspect of a problem or its models.
Know Your Models and How to Use Them Being an agile modeler does not mean that
you are not skilled. If anything, you must be more skilled to know the strengths and
weaknesses of the models, including how and when to use them. An expert modeler applies
the previous principles of simplicity, quality, and development of multiple models.
The following practices support the AM principles just expressed. The heart of AM is
in its practices, which give the practitioner specific modeling techniques. Figure 16-7
summarizes the Agile Modeling practices.
Validation Between modeling sessions, the team can begin to write code for the
solutions already conceived so that they can validate the models. Simplicity supports
frequent validation. Do not create too many or complex models until the simple ones have
been validated with code.
Documentation Many models are temporary working documents that are developed
to solve a particular problem. These models quickly become obsolete as the code evolves
and improves. Do not try to keep them up to date. Discard them. If they were posted to a
repository, date them so that everyone knows they show a history of decisions and progress
but are not now in sync with the code. Updating only when it hurts is a guideline that tells
us not to waste time trying to keep temporary models synchronized. During the first
iteration, when many models are developed concurrently, they should be consistent.
However, as development progresses, some models will become working documents that no
longer relate well to other models. Remember that the objective of the project is to develop
software, not to have a set of pretty models. Only update when it hurts—that is, when the
project team can’t work effectively without the information.
Motivation Remember the basic objectives of modeling. Only build a model if it helps
you understand a process or solve a problem or if you need to record and communicate
something. For example, the team members in a design session might make some design
decisions. To communicate these decisions, the team posts a simple model to make it
public. The model can be a very effective tool to document the decisions and ensure that all
have a common understanding and reference point. Again, a model is simply used as a tool
for communication, not as an end in itself.
Now that we have explored the basic philosophy, principles, and practices underlying
Agile Development, we turn to two methodologies that employ agile concepts: Extreme
Programming and Scrum.
Extreme Programming
XP Core Values
The four core values of XP—communication, simplicity, feedback, and courage—drive its
practices and project activities. You will recognize the first three as best practices for any
development project. With a little thought, you should also see that the fourth is a desired
value for any project, even though it might not be stated explicitly.
Communication One of the major causes of project failure has been a lack of open
communication with the right players at the right time and at the right level. Effective
communication involves not only documentation but also open verbal discussion. The
practices and methods of XP are designed to ensure that open, frequent communication
occurs
Simplicity Even though developers have always advocated keeping solutions simple, they do
not always follow their own advice. XP includes techniques to reinforce this principle and
make it a standard way of developing systems.
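Feedback Getting frequent, meaningful feedback is a recognized best practice of software
development. Feedback on functionality and requirements comes from the users, feedback
on designs and code comes from other developers, and feedback on satisfying business
needs comes from the client. XP practices such as test-first development, pair
programming, and small releases are designed to provide this feedback early and often.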
Courage Developers always need courage to face the harsh choice of doing things right or
throwing away bad code and starting over. But all too frequently they have not had the
courage to stand up to a too-tight schedule, resulting in bad mistakes. XP practices are
designed to make it easier to give developers the courage to "do it right."
XP Practices
XP's 12 practices embody the basic values just presented. These practices are consistent
with the agile principles explained earlier in the chapter.
Planning Some people describe XP as glorified hacking or as the old "code and fix"
methodology that was used in the 1960s. That is not true; XP does include planning.
However, XP is an adaptive technique that recognizes that you cannot know everything at
the start. As indicated earlier, XP embraces change. XP planning focuses on making a rough
plan quickly and then refining it as things become clearer. This reflects the Agile
Development philosophy that change is more important than detailed plans. It is also
consistent with the idea that individuals—and their abilities—are more important than an
elaborate process.
The basis of an XP plan is a set of stories that users develop. A story simply describes what
the system needs to do. XP does not use the term use case, but a user story and a use case
express a similar idea. Planning involves two aspects: business issues and technical issues.
In XP the business issues are decided by the users and clients, while technical issues are
decided by the development team. The plan, especially in the early stages of the project,
consists of the list of stories—from the users—and the estimates of effort, risk, and work
dependencies for each story—from the development team. As in Agile Development, the
idea is to involve the users heavily in the project, rather than requiring them simply to sign
off on specifications.
Testing Every new piece of software requires testing, and every methodology includes
testing. XP intensifies testing by requiring that the tests for each story be written first—
before the solution is programmed. There are two major types of tests: unit tests, which
test the correctness of a small piece of code, and acceptance tests, which test the business
function. The developers write the unit tests, and the users write the acceptance tests.
Before any code can be integrated into the library of the growing system, it must pass the
tests. By having the tests written first, XP automates their use and executes them
frequently. Over time, a library of required tests is created, so when requirements change
and the code needs to be updated, the tests can be rerun quickly and automatically.
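As a minimal sketch of this test-first idea (the story, function name, and figures are hypothetical):

import unittest

# Written FIRST, from the user story: "an order total includes 8% sales tax".
class TestOrderTotal(unittest.TestCase):
    def test_total_includes_tax(self):
        self.assertAlmostEqual(order_total(100.0), 108.0)

# Written SECOND, just enough code to make the test pass.
def order_total(subtotal, tax_rate=0.08):
    return subtotal * (1 + tax_rate)

if __name__ == "__main__":
    unittest.main()  # rerun automatically whenever the code changes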
Pair Programming This practice, more than any other, is one for which XP is famous.
Instead of simply requiring one programmer to watch another’s work, pair programming
divides up the coding work. First, one programmer might focus more on design and double-
checking the algorithms while the other writes the code. Then they switch roles so that both
think about design, coding, and testing. XP relies on comprehensive and continual code
reviews. Interestingly, research has shown that pair programming is actually more efficient
than programming alone. It takes longer to write the initial code, but the long-term quality
is higher. Errors are caught quickly and early, two people become familiar with every part of
the system, all design decisions are developed by two brains, and fewer "quick-and-dirty"
shortcuts are taken. The quality of the code is always higher in a pair-programming
environment.
Simple Designs Opponents say that XP neglects design, but that is not true. XP conforms to
the principles of Agile Modeling expressed earlier by avoiding the "Big Design Up Front"
approach. Instead, it views design as so important that it should be done continually, but in
small chunks. As with everything else, the design must be verified immediately by reviewing
it along with coding and testing.
So what is a simple design? It is one that accomplishes the desired result with as few
classes and methods as possible and that does not duplicate code. Accomplishing all that is
often a major challenge.
Refactoring the Code Refactoring is the technique of improving the code without changing
what it does. XP programmers continually refactor their code. Before and after adding any
new functions, XP programmers review their code to see whether there is a simpler design
or a simpler method of achieving the same result. Refactoring produces high-quality, robust
code.
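A tiny illustration of refactoring (hypothetical code): the behavior is unchanged, while duplicated logic is factored out:

# Before refactoring: duplicated logic in two functions.
def regular_price(quantity, unit_price):
    total = quantity * unit_price
    return total + total * 0.05   # 5% service charge

def member_price(quantity, unit_price):
    total = quantity * unit_price * 0.9   # 10% member discount
    return total + total * 0.05           # duplicated service charge

# After refactoring: same results, shared logic factored out.
def _with_service_charge(amount, rate=0.05):
    return amount * (1 + rate)

def regular_price_v2(quantity, unit_price):
    return _with_service_charge(quantity * unit_price)

def member_price_v2(quantity, unit_price):
    return _with_service_charge(quantity * unit_price * 0.9)

# Unit tests confirm the refactoring did not change behavior.
assert abs(regular_price(2, 10.0) - regular_price_v2(2, 10.0)) < 1e-9
assert abs(member_price(2, 10.0) - member_price_v2(2, 10.0)) < 1e-9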
Owning the Code Collectively This practice requires all team members to have a new
mindset. In XP, everyone is responsible for the code. No one person can say, "This is my
code." Someone can say, "I wrote it," but everyone owns it. Collective ownership allows
anyone to modify any piece of code. However, because unit tests are run before and after
every change, if programmers see something that needs fixing, they can run the unit tests
to make sure that the change did not break something. This practice embodies the team
concept that developers are building a system together.
Continuous Integration This practice embodies XP’s idea of "growing" the software. Small
pieces of code—which have passed the unit tests—are integrated into the system daily or
even more often. Continuous integration highlights errors rapidly and keeps the project
moving ahead. The traditional approach of integrating large chunks of code late in the
project often resulted in tremendous amounts of rework and time lost while developers tried
to determine just what went wrong. XP’s practice of continuous integration prevents that.
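A minimal sketch of that integration gate appears below; it reuses the hypothetical GrammarCheckerTest from the testing example and runs it through JUnit's programmatic runner. In practice the same gate is usually automated by a build server rather than run by hand.

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

// Run the accumulated unit-test library before integrating a change.
public class PreIntegrationCheck {
    public static void main(String[] args) {
        Result result = JUnitCore.runClasses(GrammarCheckerTest.class);
        if (result.wasSuccessful()) {
            System.out.println("All tests pass; integrate the change.");
        } else {
            System.out.println(result.getFailureCount()
                    + " failure(s); fix them before integrating.");
        }
    }
}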
System Metaphor This practice is XP’s unique and interesting approach to defining an
architectural vision. It answers the questions, ―How does the system work? What are its
major components?‖ To answer those questions, developers identify a metaphor for the
system. For example, Big Three automaker Chrysler’s payroll system was built as a
production-line metaphor, with its system components using production-line terms.
Everyone at Chrysler understands a production line, so a payroll transaction was treated the
same way—developers started with a basic transaction and then applied various processes
to complete it. Of course, the metaphor should be easily understood or well known to the
members of the development team. A system metaphor can guide members toward a vision
and help them understand the system.
Small Releases A release is a point at which the new system can be turned over to users for
acceptance testing, and sometimes even for productive use. Consistent with the entire
philosophy of growing the software, small and frequent releases provide upgraded solutions
to the users and keep them involved in the project. They also facilitate other practices, such
as immediate feedback and continual integration.
Forty-Hour Week and Coding Standards These final two practices set the tone for how the
developers should work. The exact number of hours a developer works is not the issue. The
issue is that the project should not be a death march that burns out every member of the
team. Neither is the project a haphazard coding exercise. Developers should follow
standards for coding and documentation. XP uses just the engineering principles that are
appropriate for an adaptive process based on empirical controls.
-------------------------------------------------------------------------------------------------------
XP Project Activities:
XP organizes its project activities at three levels: the system (the outer ring), release (the
middle ring), and iteration (the inner ring). System-level activities occur once during each
development project. A system is delivered to users in multiple stages called releases. Each
release is a fully functional system that performs a subset of the full system requirements.
A release is developed and tested within a period of no more than a few weeks or months.
The activities in the middle ring cycle multiple times—once for each release. Releases are
divided into multiple iterations. During each iteration, developers code and test a specific
functional subset of a release. Iterations are coded and tested in a few days or weeks.
There are multiple iterations within each release, so the iteration ring (inner) cycles
multiple times.
-------------------------------------------------------------------------------------------------------
Developers need to consider several issues when determining whether to use object
frameworks. Object frameworks affect the process of systems design and development in
several different ways:
Of the three object layers typically used in object-oriented system development
(view, business logic, and data access), the view and data layers most commonly derive
from foundation classes. User interfaces and database access tend to be the areas of
greatest strength in object frameworks, and they are typically the most tedious classes to
develop from scratch. It is not unusual for 80 percent of a system’s code to be devoted to
view and data classes. Thus, constructing view and data classes from foundation classes
provides significant and easily obtainable code reuse benefits. Adapting view classes from
foundation classes has the additional benefit of ensuring a similar look and feel of the user
interface across systems and across application programs within systems.
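Java's Swing classes illustrate the point: a view class derived from the foundation class JFrame inherits window management, painting, and event plumbing, so only application-specific details remain to be written. The PayrollWindow name below is invented for illustration.

import javax.swing.JFrame;
import javax.swing.JLabel;

// A view class derived from the Swing foundation class JFrame.
public class PayrollWindow extends JFrame {

    public PayrollWindow() {
        super("Payroll");                    // inherited title handling
        add(new JLabel("Employee payroll")); // inherited layout and painting
        setSize(400, 300);
        setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    }

    public static void main(String[] args) {
        new PayrollWindow().setVisible(true);
    }
}

Deriving every window from the same foundation class is also what gives applications the consistent look and feel mentioned above.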
-------------------------------------------------------------------------------------------------------
Components are standardized and interchangeable software parts. They differ from
objects or classes because they are binary (executable) programs, not symbolic (source
code) programs. This distinction is important because it makes components much easier to
reuse and reimplement than source code programs.
Consider, for example, a grammar-checking function whose developers supply it to the
developers of two competing word processors. To integrate the existing function into a new
word processor, the word processor developers must be provided with the source code of
the grammar-checking function. They then code appropriate calls to the grammar checker
into their word processor source code.
The combined program is then compiled, linked, and distributed to users. When the
developers of the grammar checker revise their source code to implement the faster and
more accurate function, they deliver the source code to the developers of both word
processors. Both development teams integrate the new grammar-checking source code into
their word processors, recompile and relink the programs, and deliver a revised word
processor to their users.
So what’s wrong with this scenario? Nothing in theory, but a great deal in practice.
The grammar-checker developers can provide their function to other developers only as
source code, which opens up a host of potential problems concerning intellectual property
rights and software piracy. Of greater importance, the word processor developers must
recompile and relink their entire word processing programs to update the embedded
grammar checker. The revised binary program must then be delivered to users and installed
on their computers. This is an expensive and time-consuming process. Delivering the
grammar-checking program in binary form would eliminate or minimize most of these
problems.
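One way to sketch binary delivery in Java is to hide the checker behind a published interface and discover a compiled implementation at run time with the standard java.util.ServiceLoader mechanism. The GrammarService and WordProcessor names are invented, and the checker's jar would have to register its implementation class under META-INF/services for discovery to work.

import java.util.ServiceLoader;

// The published interface is all the word processor compiles against;
// the checker itself ships separately as a compiled jar.
interface GrammarService {
    int countErrors(String text);
}

public class WordProcessor {
    public static void main(String[] args) {
        // Discover whatever compiled implementation is on the class path.
        // Upgrading the checker then means replacing one jar, not
        // recompiling or relinking the word processor.
        for (GrammarService checker : ServiceLoader.load(GrammarService.class)) {
            System.out.println("Errors found: "
                    + checker.countErrors("The cat the mat."));
        }
    }
}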
Components, like hardware parts, depend on interface standards. A video display unit, for
example, attaches to a personal computer through a standardized video
connector. Adherence to this standard guarantees that any video display unit will work with
any compatible personal computer and vice versa.
Components might also require standard support infrastructure. For example, video display
units are not internally powered. Thus, they require not only a standard power plug but also
an infrastructure to supply power to the plug. A component might also require specific
services from an infrastructure. For example, a cellular telephone requires the service
provider to assign a transmission frequency with the nearest cellular radio tower, to transfer
the connection from one tower to another as the user moves among telephone cells, to
establish a connection to another person’s telephone, and to relay all voice data to and from
the other person’s telephone via the public telephone grid. All cellular telephones require
these services. Software components have a similar need for standards. Components could
be hard-wired together, but this reduces their flexibility. Flexibility is enhanced when
components can rely on standard infrastructure services to find other components and
establish connections with them.
In the simplest systems, all components execute on a single computer under the control of
a single operating system. Connection is more complex when components are located on
different machines running different operating systems and when components can be
moved from one location to another. In this case, a network protocol independent of the
hardware platform and operating system is required. In fact, a network protocol is desirable
even when all components execute on the same machine because such a protocol
guarantees that systems can be used in different environments— from a single machine to a
network of computers.
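Java RMI is one concrete example of such infrastructure: a registry is the standard service a client uses to find a component, and the RMI wire protocol hides machine and operating-system details. The client-side sketch below assumes a server has already bound a hypothetical GrammarChecker service in its registry.

import java.rmi.Naming;
import java.rmi.Remote;
import java.rmi.RemoteException;

// The remote interface is the contract both machines share.
interface RemoteGrammarService extends Remote {
    int countErrors(String text) throws RemoteException;
}

public class GrammarClient {
    public static void main(String[] args) throws Exception {
        // Find the component through the standard registry service,
        // then call it as if it were local.
        RemoteGrammarService checker = (RemoteGrammarService)
                Naming.lookup("rmi://localhost/GrammarChecker");
        System.out.println("Errors found: "
                + checker.countErrors("The cat the mat."));
    }
}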
SERVICES
The era of the Internet and high-speed networks has enabled a new method of software
reuse described by various names, including Web services and service-oriented architecture
(SOA). Unlike object frameworks that are inserted into an application when it is compiled or
components that are dynamically or statically linked to an application before execution, an
application interacts with a service via the Internet or a private network during execution.
Like object frameworks and components, services rely on a suite of standards that has
significant implications for software design, development, and performance.
-------------------------------------------------------------------------------------------------------
Service Standards
Service standards have evolved from distributed object standards such as CORBA
and EJBs to include standards such as SOAP, .NET, and J2WS. The primary difference
between service standards and earlier distributed object standards is a decrease in the
amount of information that must be compiled or linked into an executable application and
an increased reliance on Web-based data interchange standards such as XML. Because SOAP
components communicate using XML, they can be easily incorporated into applications that
use a Web-browser interface. Complex applications can be constructed using multiple SOAP
components that communicate via the Internet.
Figure 16-14 shows an application and service communicating with SOAP messages.
The SOAP encoder/decoder and HTTP connection manager are standard components of a
SOAP programmer's toolkit. Applications and services can also be implemented as embedded
scripts that rely on a Web server to provide SOAP message-passing services.
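The sketch below builds and prints one such request using the javax.xml.soap (SAAJ) classes bundled with older Java platforms; the checkGrammar operation and its namespace are invented for illustration.

import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPBodyElement;
import javax.xml.soap.SOAPMessage;

// Build a SOAP request and serialize it as XML; the message factory
// plays the encoder role shown in the figure.
public class SoapRequestDemo {
    public static void main(String[] args) throws Exception {
        SOAPMessage message = MessageFactory.newInstance().createMessage();
        QName operation =
                new QName("http://example.com/grammar", "checkGrammar", "ex");
        SOAPBodyElement call = message.getSOAPPart().getEnvelope()
                .getBody().addBodyElement(operation);
        call.addTextNode("The cat the mat.");
        message.saveChanges();
        message.writeTo(System.out); // prints the XML SOAP envelope
    }
}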
Microsoft .NET is a service standard based on SOAP. The .NET applications and
services communicate using SOAP protocols and XML messages, and these applications and
services are installed on Microsoft’s Web/application server and rely on Microsoft’s Active
Directory for various naming, location, and security capabilities. The .NET applications and
services can be developed in several programming languages, including Visual Basic and
C#.
Java 2 Web Services (J2WS) is a service standard for implementing applications and
services in Java. J2WS extends SOAP and several other standards to define a Java-specific
method of implementing and accessing services.
Quick Revision
________________________________________________________________________