
T.Y.B.Sc. (CS) SEM VI PAPER IV S.E-II, UNIT- I, II BY: PROF. AJAY PASHANKAR

INDEX

SR.NO  TOPIC                                        PAGE NO. (From - To)
1      Project management                           3 - 10
2      Size & Effort Estimation                     11 - 23
3      Project Scheduling                           24 - 37
4      Software Configuration Management Process    38 - 51
5      Management of OO Software                    52 - 57
6      Changing Trends In Software Development      58 - 81

SHOP NO 1, Opp.R.B.SINGHS BUNGALOW, SHIVMANDIR ROAD, BHOIRWADI, KALYAN (W)


Contact: 8286806797/8108149607/ 9004367024 Page 1 of 81

Syllabus

Project management: Revision of Project Management Process, Role of Project Manager, Project Management Knowledge Areas, Managing Changes in Requirements, Role of Software Metrics.

Size & Effort Estimation: Concepts of LOC & Estimation, Function Point, COCOMO Model, Concept of Effort Estimation & Uncertainty.

Project Scheduling: Building WBS, Use of Gantt & PERT/CPM Chart, Staffing.

Configuration Management: Process, Functionality & Mechanism.

Process Management: CMM & its levels, Risk Management & activities.

Management of OO Software Projects: Object-oriented metrics, Use-Case Estimation, Selecting development tools, Introduction to CASE.

Changing Trends In Software Development: Unified Process, its phases & disciplines; Agile Development (Principles & Practices); Extreme Programming (core values & practices); Frameworks, Components, Services; Introduction to Design Patterns.


CHAPTER1: PROJECT MANAGEMENT


Topic Covered:

 Revision of Project Management Process, Role of Project Manager,

 Project Management Knowledge Areas, Managing Changes in requirements, Role of


software Metrics
Q. Which activities are involved in the project management process? How are these activities grouped? (OCT 12, March 14) 5M.

Q. What are the objectives of the project manager?

 Revision of Project Management Process:


Proper management is an integral part of software development. A large
software development project involves many people working for a long period of time. A
development process typically partitions the problem of developing software into a set of
phases. To meet the cost, quality, and schedule objectives, resources have to be properly
allocated to each activity for the project, and progress of different activities has to be
monitored and corrective actions taken, if needed.

The project management process specifies all activities that need to be done by the
project management to ensure that cost and quality objectives are met.

Its basic task is to ensure that, once a development process is chosen, it is


implemented optimally.

The focus is on issues like

 planning a project,
o the basic task is to plan the detailed implementation of the process for
the particular project and then ensure that the plan is followed
 estimating resources and schedule,
 and monitoring and controlling the project.

The activities in the management process for a project are grouped into three phases:

 planning,
o Project management begins with planning.
o The goal of this phase is to develop a plan for software development
following which the objectives of the project can be met successfully and
efficiently.
o A software plan is produced before the development activity begins and is
updated as development proceeds and data about progress of the project
becomes available.
o the major activities are
 cost estimation,
 schedule and milestone determination,
 project staffing,
 quality control plans,
 controlling and monitoring plans.
o it forms the basis for monitoring and control.
 monitoring and control
o This phase is the longest in terms of duration and encompasses most of the development process.

SHOP NO 1, Opp.R.B.SINGHS BUNGALOW, SHIVMANDIR ROAD, BHOIRWADI, KALYAN (W)


Contact: 8286806797/8108149607/ 9004367024 Page 3 of 81
T.Y.B.Sc. (CS) SEM VI PAPER IV S.E-II, UNIT- I, II BY: PROF. AJAY PASHANKAR

o It includes all activities the project management has to perform while the
development is going on to ensure that project objectives are met and
the development proceeds according to the developed plan (and update
the plan, if needed).
o Factors to monitor are
 cost, schedule
 quality
 Monitoring potential risks for the project, which might
prevent the project from meeting its objectives
o If the information obtained by monitoring suggests that objectives may not be met, necessary actions are taken in this phase by exerting suitable control on the development activities.
o Monitoring and control requires proper information about the project. Such information is typically obtained by the management process from the development process. As shown earlier in Figure 2.2, the implementation of a development process model should be such that each step in the development process produces the information that the management process needs for that step.
o That is, the development process provides the information the management process needs; however, interpretation of that information is part of monitoring and control.

 Termination analysis.
o This phase is performed when the development process is over.
o The basic reason is to provide information about the development
process and learn from the project in order to improve the process.
o This phase is also often called postmortem analysis.
o In iterative development, this analysis can be done after each iteration to
provide feedback to improve the execution of further iterations. The
temporal relationship between the management process and the
development process is shown in Figure 2.12.
o This is an idealized relationship showing that planning is done before
development begins, and termination analysis is done after development
is over.
o During the development, from the various phases of the development
process, quantitative information flows to the monitoring and control
phase of the management process, which uses the information to exert
control on the development process.
-------------------------------------------------------------------------------------------------------

Q. What are the roles of the project manager? (March, OCT 13) 5M.

 Role of Project Manager:


 Project management is organizing and directing other people to achieve a planned result
within a predetermined schedule and budget. At the beginning of a project, a detailed


plan is developed that specifies the activities that must take place, the deliverables that
must be produced, and the resources that are needed.
 project management can be defined as the processes used to plan the project and then
to monitor and control it.
 The project manager defines and executes project management tasks. The success or
failure of a given project is directly related to the skills and abilities of the project
manager.
 In some companies, the project manager functions as a coordinator and does not have direct "line" (reporting) authority.
 At the other end of the spectrum, for big development projects, the project manager may be a very experienced developer with both management skills and a solid understanding of a full range of technical issues. In those situations, the role of the project manager is very much a "line" position with responsibility and authority for other staff members.
 Many career paths lead to project management. In some companies, the project
coordination role is done by recent college graduates. Other companies recognize the
value of a person with very strong organizational and people skills, who understands the
technology but does not want a highly technical career. Those companies provide
opportunities for employees to gain experience in management and business skills and to
advance to project management through experience as a coordinator of smaller projects.
Other companies take a "lead engineer" approach to project management, in which a
person must thoroughly understand the technology to manage a project. These
companies believe that project management requires someone with strong development
skills to understand the technical issues and to manage other developers.
-------------------------------------------------------------------------------------------------------------
Q. What are the responsibilities of the project manager? (OCT 12) 5M.

 A project manager has both internal and external responsibilities.


 The project success factors are usually the responsibility of the project manager, and he
or she must ensure that sufficient attention is given to the following details.
o clear requirement definitions
o substantial user involvement
o upper management support
o thorough planning
o realistic schedules and milestones
 From the internal team perspective, the project manager serves as the director
and locus of control for the project team and all of their activities. The project
manager establishes the team’s structure so that work can be accomplished.
 The following list identifies a few of these internal responsibilities:
o Identify project tasks and build a work breakdown structure.
o Develop the project schedule.
o Recruit and train team members.
o Assign team members to tasks.
o Coordinate activities of team members and sub teams.
o Assess project risks.
o Monitor and control project deliverables and milestones.
o Verify the quality of project deliverables.
 From an external organizational perspective, the project manager is the focal
point or main contact for the project. He or she must represent the team to the
outside world and communicate their needs.
 External responsibilities include the following:
o Report the project’s status and progress.
o Establish good working relationships with those who identify the needed system requirements (that is, the people who will use the system).


o Work directly with the client (the project’s sponsor) and other
stakeholders.
o Identify resource needs and obtain resources.
 The project manager does not always perform all the tasks involved with these responsibilities; other team members assist the manager.
 The role of the project manager and careers in project management vary
tremendously. Figure 3-1 lists some of the different positions project managers
hold.
TITLE: Project coordinator or project leader
POWER/AUTHORITY: Limited
ORGANIZATIONAL STRUCTURE: Projects may be run within the department, or projects may have a strong lead developer who controls the development of the end product.
DESCRIPTION OF DUTIES:
• Develops the plans
• Coordinates activities
• Keeps people informed of status and progress
• Does not have "line" authority on project deliverables

TITLE: Project manager, project officer, or team leader
POWER/AUTHORITY: Moderate
ORGANIZATIONAL STRUCTURE: Projects are run within the IT department, but other business functions are independent.
DESCRIPTION OF DUTIES:
• May have both project management duties and some technical duties
• Manages projects that are generally medium sized
• May share project responsibility with clients

TITLE: Project manager or program manager
POWER/AUTHORITY: High to almost total
ORGANIZATIONAL STRUCTURE: Project organization is a prime, high-profile part of the company; the company is organized around projects, or there is a large and powerful IT department.
DESCRIPTION OF DUTIES:
• Usually has extensive experience in technical issues as well as project management
• Involved in both management decisions and technical issues
• Frequently has support staff to do paperwork
• Manages projects that can be big

-------------------------------------------------------------------------------------------------------

Q. Write a short note on project management knowledge areas?


 PROJECT MANAGEMENT KNOWLEDGE AREAS

The Project Management Institute (PMI) is a professional organization that promotes


project management, primarily within the United States but also throughout the world. In
addition, professional organizations in other countries promote project management. If you
are interested in strengthening your project management skills, you should consider joining
one of these organizations, obtaining their materials, and participating in training. The PMI
has a well-respected and rigorous certification program. In fact, many corporations
encourage project managers to become certified, and industry articles frequently indicate
that project management is one of the most important skills today. As part of its mission,
the PMI has defined a body of knowledge (BOK) for project management. This body of
knowledge, referred to as the PMBOK, is a widely accepted foundation of information that
every project manager should know. The PMBOK has been organized into nine different
knowledge areas. Although these nine areas do not represent all there is to know about
project management, they provide an excellent foundation.

 Project Scope Management. Defining and controlling the functions that are to
be included in the system, as well as the scope of the work to be done by the
project team
 Project Time Management. Building a detailed schedule of all project tasks
and then monitoring the progress of the project against defined milestones
 Project Cost Management. Calculating the initial cost/benefit analysis and its
later updates and monitoring expenditures as the project progresses
 Project Quality Management. Establishing a comprehensive plan for ensuring
quality, which includes quality-control activities for every phase of the
project.
 Project Human Resource Management. Recruiting and hiring project team
members; training, motivating, and team building; and implementing related
activities to ensure a happy, productive team
 Project Communications Management. Identifying all stakeholders and the
key communications to each; also establishing all communications
mechanisms and schedules
 Project Risk Management. Identifying and reviewing throughout the project all
potential risks for failure and developing plans to reduce these risks
 Project Procurement Management. Developing requests for proposals,
evaluating bids, writing contracts, and then monitoring vendor performance
 Project Integration Management. Integrating all the other knowledge areas
into one seamless whole


Q. Describe in brief Requirement Change Management. (OCT 13) 5M.

 Managing Changes in requirements

 Requirements change and changes in requirements can come at any time


during the life of a project (or even after that).
 The farther down in the life cycle the requirements change, the more severe
the impact on the project.
 Instead of wishing that changes will not come, or hoping that somehow the
initial requirements will be "so good" that no changes will be required, it is
better that a project manager prepare to handle change requests as they
come.
 Uncontrolled changes to requirements can have a very adverse effect on the
cost, schedule, and quality of the project.
 Requirement changes can account for as much as 40% of the total cost.
 Due to the potentially large impact of requirement changes on the project,
often a separate process is employed to deal with them.
 The change management process defines the set of activities that are
performed when there are some new requirements or changes to existing
requirements.
 This process is used to manage requirement changes, and any major changes
like design changes or major bug fixes to a system in deployment.
 The change management process has the following steps.
 Log the changes
 Perform impact analysis on the work products
 Estimate impact on effort and schedule
 Review impact with concerned stakeholders
 Rework work products
-------------------------------------------------------------------------------------------------------

Q. Explain the process of change request. (March 14) 5M.

Q. What are the steps involved in change management?

 Steps involved in change process are as follows:

 A change is initiated by a change request.


 A change request log is maintained to keep track of the change requests. Each
entry in the log contains a change request number, a brief description of the
change, the effect of the change, the status of the change request, and key dates.
 The effect of a change request is assessed by performing impact analysis. Impact analysis involves identifying work products and configuration items that need to be changed and evaluating the quantum of change to each; reassessing the project's risks by revisiting the risk management plan; and evaluating the overall implications of the changes for the effort and schedule estimates.
 Once a change is reviewed and approved, then it is implemented, i.e., changes to
all the items are made. The actual tracking of implementation of a change request
may be handled by the configuration management process.
 One danger of requirement changes is that, even though each change is not large
in itself, over the life of the project the cumulative impact of the changes is large.
Hence, besides studying the impact of individual changes and tracking them, the
cumulative impact of changes must also be monitored.
 For cumulative changes, the change log is used. To facilitate this analysis, the log
is frequently maintained as a spreadsheet.
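The change request log and the cumulative-impact tracking described above can be sketched in code. This is a minimal, illustrative sketch; the field names and helper function are assumptions, not structures prescribed by the text:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChangeRequest:
    """One entry in the change request log (illustrative fields)."""
    number: int                    # change request number
    description: str               # brief description of the change
    effect: str                    # effect of the change, from impact analysis
    status: str = "logged"         # e.g., logged / approved / implemented
    date_logged: date = field(default_factory=date.today)
    effort_impact_pm: float = 0.0  # estimated effort impact, person-months

def cumulative_effort_impact(log):
    """Track the cumulative impact of all changes, not just each one alone."""
    return sum(cr.effort_impact_pm for cr in log)

log = [
    ChangeRequest(1, "Add CSV export", "report module, user manual",
                  "approved", effort_impact_pm=0.5),
    ChangeRequest(2, "Rework login flow", "auth module, test plan",
                  effort_impact_pm=1.25),
]
print(cumulative_effort_impact(log))  # 1.75
```

Keeping the log as structured records rather than free text is what makes the spreadsheet-style cumulative analysis mentioned above straightforward.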


-------------------------------------------------------------------------------------------------------
Q. Write a short note on software metrics?

 Role of software Metrics


 A phased development process is central to the software engineering approach.
However, a development process does not specify how to allocate resources to
the different activities in the process. Nor does it specify things like schedule for
the activities, how to divide work within a phase, how to ensure that each phase
is being done properly, or what the risks for the project are and how to mitigate
them.
 Without properly managing these issues relating to the process, it is unlikely that
the cost and quality objectives can be met. These issues relating to managing the
development process of a project are handled through project management. The
management activities typically revolve around a plan.
 A software plan forms the baseline that is heavily used for monitoring and
controlling the development process of the project. This makes planning the most
important project management activity in a project. It can be safely said that
without proper project planning a software project is very unlikely to meet its
objectives.
 Managing a process requires information upon which the management decisions
are based. Without it, even essential questions (Is the schedule being met?
What is the extent of cost overrun? Are quality objectives being met?) cannot
be answered. And information that is subjective is only marginally better than
no information (e.g., Q: How close are you to finishing? A: We are almost there.)
 Hence, for effectively managing a process, objective data is needed. For this,
software metrics are used. Software metrics are quantifiable measures that can
be used to measure different characteristics of a software system or of the
software development process. There are two types of metrics used for software
development: product metrics and process metrics.

 Product metrics are used to quantify characteristics of the product being


developed, i.e., the software.
 Process metrics are used to quantify characteristics of the process being used to
develop the software. Process metrics aim to measure such considerations as
productivity, cost and resource requirements, effectiveness of quality assurance
measures, and the effect of development techniques and tools.
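As a rough illustration of the two kinds of metrics, a product metric (such as defect density) and a process metric (such as productivity) can be computed from project data. The numbers below are invented for the example:

```python
# Invented project data, for illustration only.
size_kloc = 33.2       # product size in KLOC
defects_found = 83     # defects found during testing
effort_pm = 54         # total development effort, person-months

# Product metric: quantifies the software itself (defects per KLOC)
defect_density = defects_found / size_kloc

# Process metric: quantifies the development process (LOC per person-month)
productivity = (size_kloc * 1000) / effort_pm

print(round(defect_density, 1), round(productivity))  # 2.5 615
```

Both values are objective: an independent third party with the same raw counts would derive the same numbers, which is exactly the property demanded of effective metrics later in this chapter.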
Metrics and measurement are necessary aspects of managing a software
development project. For effective monitoring, the management needs to get information
about the project:

 How far has it progressed?
 How much development has taken place?
 How far behind schedule is it?
 What is the quality of the development so far?
Based on this information, decisions can be made about the project. Without
proper metrics to quantify the required information, subjective opinion would have to be
used, which is often unreliable and goes against the fundamental goals of engineering.


Hence, we can say that metrics-based management is also a key component in the software
engineering strategy to achieve its objectives.

-------------------------------------------------------------------------------------------------------
Q. What are the attributes of effective s/w metrics?

 The Attributes of Effective Software Metrics

Hundreds of metrics have been proposed for computer software, but not all provide
practical support to the software engineer. Some demand measurement that is too complex,
others are so obscure that few real-world professionals have any hope of understanding
them, and others violate the basic intuitive notions of what high- quality software really is.

A set of attributes that should be encompassed by effective software metrics are as


follows. The derived metric and the measures that lead to it should be:

Simple and computable: It should be relatively easy to learn how to derive the metric, and its computation should not demand inordinate effort or time.

Empirically and intuitively persuasive: The metric should satisfy the engineer's intuitive notions about the product attribute under consideration (e.g., a metric that measures module cohesion should increase in value as the level of cohesion increases).

Consistent and objective: The metric should always yield results that are unambiguous. An independent third party should be able to derive the same metric value using the same information about the software.

Consistent in its use of units and dimensions: The mathematical computation of the metric should use measures that do not lead to bizarre combinations of units. For example, multiplying people on the project teams by programming language variables in the program results in a suspicious mix of units that are not intuitively persuasive.

Programming language independent: Metrics should be based on the requirements model, the design model, or the structure of the program itself. They should not be dependent on the vagaries of programming language syntax or semantics.

An effective mechanism for high-quality feedback: The metric should provide you with information that can lead to a higher-quality end product.

 Although most software metrics satisfy these attributes, some commonly used
metrics may fail to satisfy one or two of them.
 An example is the function point, a measure of the "functionality" delivered
by the software.



Quick Revision

 Proper management is an integral part of software development. A large software


development project involves many people working for a long period of time. A
development process typically partitions the problem of developing software into a set of
phases.
 The project management process specifies all activities that need to be done by the
project management to ensure that cost and quality objectives are met.
 Project management is organizing and directing other people to achieve a planned result
within a predetermined schedule and budget. At the beginning of a project, a detailed
plan is developed that specifies the activities that must take place, the deliverables that
must be produced, and the resources that are needed.
 Project Scope Management. Defining and controlling the functions that are to be
included in the system, as well as the scope of the work to be done by the project team.
 The project manager defines and executes project management tasks. The success or
failure of a given project is directly related to the skills and abilities of the project
manager.

IMP Question Set

SR No.  Question                                                       Reference page No.
1       Q. What are the objectives of the project manager?             3
2       Q. What are the roles of the project manager?                  4
3       Q. What are the responsibilities of the project manager?       5
4       Q. Write a short note on project management knowledge areas?   6
5       Q. What are the steps involved in change management?           8
6       Q. Write a short note on software metrics?                     8
7       Q. What are the attributes of effective s/w metrics?           9


CHAPTER2: SIZE & EFFORT ESTIMATION

Topic Covered:

 Concepts of LOC & Estimation, Function Point, COCOMO Model, Concept of Effort
Estimation & Uncertainty

-------------------------------------------------------------------------------------------------------
Q. Write a short note on LOC estimation with a suitable example. (March, OCT 13, March 14) 5M.

 Concepts of LOC & Estimation:

Problem-Based Estimation

LOC and FP data are used in two ways during software project estimation:

1. Estimation variables to ―size‖ each element of the software and


2. Baseline metrics collected from past projects and used in conjunction with
estimation variables to develop cost and effort projections.

LOC and FP estimation are distinct estimation techniques. Both have a number of
characteristics in common.

1. You begin with a bounded statement of software scope and from this statement
attempt to decompose the statement of scope into problem functions that can
each be estimated individually.
2. LOC or FP (the estimation variable) is then estimated for each function.
Alternatively, you may choose another component for sizing, such as classes or
objects, changes, or business processes affected.
3. Baseline productivity metrics (e.g., LOC/pm or FP/pm) are then applied to the
appropriate estimation variable, and cost or effort for the function is derived.
Function estimates are combined to produce an overall estimate for the entire
project.
NOTE: There is often substantial scatter in productivity metrics for an organization,
making the use of a single-baseline productivity metric suspect.

In general, LOC/pm or FP/pm averages should be computed by project domain; i.e.,
projects should be grouped by team size, application area, complexity, and other
relevant parameters. Local domain averages should then be computed.

When a new project is estimated, it should first be allocated to a domain, and then
the appropriate domain average for past productivity should be used in generating the
estimate.
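The domain-grouping idea can be sketched as follows. The project history data here is invented purely for illustration:

```python
from collections import defaultdict

# Past projects: (domain, delivered LOC, effort in person-months) -- invented data
history = [
    ("embedded", 12000, 30),
    ("embedded", 18000, 40),
    ("web",      25000, 25),
    ("web",      40000, 35),
]

def domain_productivity(history):
    """Local LOC/pm average per project domain, not one org-wide number."""
    totals = defaultdict(lambda: [0, 0])   # domain -> [total LOC, total pm]
    for domain, loc, pm in history:
        totals[domain][0] += loc
        totals[domain][1] += pm
    return {d: loc / pm for d, (loc, pm) in totals.items()}

averages = domain_productivity(history)
# A new embedded project sized at 15,000 LOC uses the embedded average only:
estimated_effort_pm = 15000 / averages["embedded"]
print(round(averages["embedded"], 1), round(estimated_effort_pm, 1))  # 428.6 35.0
```

Using the local domain average rather than a single org-wide figure avoids the scatter problem noted above: web and embedded productivity differ widely in this sample, so a pooled average would mis-estimate both.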

The LOC and FP estimation techniques differ in the level of detail required for
decomposition and the target of the partitioning.

 When LOC is used as the estimation variable, decomposition is absolutely essential


and is often taken to considerable levels of detail. The greater the degree of


partitioning, the more likely reasonably accurate estimates of LOC can be


developed.

 For FP estimates, decomposition works differently. Rather than focusing on
function, each of the information domain characteristics is estimated:
 Inputs
 Outputs
 Data files
 Inquiries
 External interfaces

 The resultant estimates can then be used to derive an FP value that can be tied
to past data and used to generate an estimate.
 Regardless of the estimation variable that is used, you should begin by
estimating a range of values for each function or information domain value.
Using historical data or (when all else fails) intuition, estimate an optimistic,
most likely, and pessimistic size value for each function or count for each
information domain value. An implicit indication of the degree of uncertainty is
provided when a range of values is specified.
 A three-point or expected value for the estimation variable (size) S can be
computed as a weighted average of the optimistic (Sopt), most likely (Sm), and
pessimistic (Spess) estimates. For example,

S = (Sopt + 4Sm + Spess) / 6

gives heaviest credence to the "most likely" estimate and follows a beta
probability distribution. We assume that there is a very small probability that
the actual size result will fall outside the optimistic or pessimistic values.
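The expected-value computation can be checked against the 3D geometric analysis numbers used in the LOC example later in this chapter (optimistic 4600, most likely 6900, pessimistic 8600 LOC):

```python
def expected_size(s_opt, s_m, s_pess):
    """Weighted (beta-distribution) average; heaviest weight on 'most likely'."""
    return (s_opt + 4 * s_m + s_pess) / 6

# 3D geometric analysis function from the LOC-based estimation example:
print(expected_size(4600, 6900, 8600))  # 6800.0
```

The result, 6800 LOC, matches the expected value quoted for that function in the worked example below.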

 Once the expected value for the estimation variable has been determined,
historical LOC or FP productivity data are applied. Are the estimates
correct? The only reasonable answer to this question is: "You can't be
sure." Any estimation technique, no matter how sophisticated, must be
cross-checked with another approach.
 An Example of LOC-Based Estimation

As an example of LOC and FP problem-based estimation techniques, consider a
software package to be developed for a computer-aided design (CAD) application
for mechanical components. The software is to execute on an engineering
workstation and must interface with various computer graphics peripherals
including a mouse, digitizer, high-resolution color display, and laser printer.
A preliminary statement of software scope can be developed:

1. The mechanical CAD software will accept two- and three-dimensional geometric
data from an engineer.
2. The engineer will interact with and control the CAD system through a user
interface that will exhibit characteristics of good human/machine interface design.
3. All geometric data and other supporting information will be maintained in a
CAD database.


4. Design analysis modules will be developed to produce the required output,
which will be displayed on a variety of graphics devices.
5. The software will be designed to control and interact with peripheral devices
that include a mouse, digitizer, laser printer, and plotter.

This statement of scope is preliminary; it is not bounded. Every sentence would have
to be expanded to provide concrete detail and quantitative bounding. For example, before
estimation can begin, the planner must determine what "characteristics of good
human/machine interface design" means or what the size and sophistication of the "CAD
database" are to be.

For our purposes, assume that further refinement has occurred and that the
major software functions listed in following figure are identified.

Following the decomposition technique for LOC, an estimation table (Figure 26.2) is
developed, and a range of LOC estimates is produced for each function.

For example, the range of LOC estimates for the 3D geometric analysis function is:
optimistic, 4600 LOC; most likely, 6900 LOC; and pessimistic, 8600 LOC. Applying the
expected-value equation, the expected size for this function is
(4600 + 4 * 6900 + 8600) / 6 = 6800 LOC.

Other estimates are derived in a similar fashion. By summing vertically in the
estimated LOC column, an estimate of 33,200 lines of code is established for the CAD
system.

A review of historical data indicates that:

 The organizational average productivity for systems of this type is 620 LOC/pm.
 Based on a burdened labor rate of $8000 per month, the cost per line of code is
approximately $13.
 Based on the LOC estimate and the historical productivity data, the total
estimated project cost is $431,000 and the estimated effort is 54 person-months.
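The arithmetic behind these figures can be replayed in a short script (a sketch only; the numbers come straight from the worked example above):

```python
total_loc = 33_200       # summed expected LOC across all CAD functions
productivity = 620       # organizational average, LOC per person-month
labor_rate = 8_000       # burdened cost per person-month, in dollars

cost_per_loc = labor_rate / productivity      # ~12.9, i.e. roughly $13 per LOC
effort_pm = total_loc / productivity          # ~53.5, i.e. roughly 54 PM
total_cost = total_loc * round(cost_per_loc)  # ~$431,600, quoted as about $431,000

print(round(cost_per_loc), round(effort_pm), total_cost)  # -> 13 54 431600
```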
-------------------------------------------------------------------------------------------------------

Q. Write a short note on function point?

 Function Point:
 Requirement of Function point:


A major problem after requirements are done is to estimate the effort and
schedule for the project. For this, some metrics are needed that can be extracted from
the requirements and used to estimate cost and schedule (through the use of some
model). As the primary factor that determines the cost (and schedule) of a software
project is its size, a metric that can help get an idea of the size of the project will be
useful for estimating cost. This implies that during the requirement phase measuring
the size of the requirement specification itself is pointless, unless the size of the SRS
reflects the effort required for the project. This also requires that relationships of any
proposed size measure with the ultimate effort of the project be established before
making general use of the metric. A commonly used size metric for requirements is
the size of the text of the SRS. The size could be in number of pages, number of
paragraphs, number of functional requirements, etc. As can be imagined, these
measures are highly dependent on the authors of the document. A verbose analyst
who makes heavy use of illustrations may produce an SRS that is many times
the size of the SRS of a terse analyst. Similarly, how much an analyst refines the
requirements has an impact on the size of the document. Generally, such metrics
cannot be accurate indicators of the size of the project. They are used mostly to
convey a general sense about the size of the project.

Function points are one of the most widely used measures of software size. The basis
of function points is that the "functionality" of a system, that is, what the system performs,
is the measure of the system size. As functionality is independent of how the requirements
of the system are specified, or even how they are eventually implemented, a functionality-based
measure depends only on what the system does.

The system functionality is calculated in terms of:

 the number of functions it implements
 the number of inputs
 the number of outputs, etc.

These are parameters that can be obtained after requirements analysis and that are
independent of the specification (and implementation) language.

The original formulation for computing the function points uses the count of five
different parameters. To account for complexity, each parameter of each type is
classified as simple, average, or complex.

1. External input types: Each unique input (data or control) type that is given as
input to the application from outside is considered of external input type and is
counted. An external input type is considered unique if the format is different from
others or if the specifications require a different processing for this type from other
inputs of the same format. The source of the external input can be the user, or some
other application, or a file. An external input type is considered simple if it has a few
data elements and affects only a few internal files of the application. It is considered
complex if it has many data items and many internal logical files are needed for
processing it, and average if it is in between. Note that files needed
by the operating system or the hardware (e.g., configuration files) are not counted
as external input files because they do not belong to the application but are needed
due to the underlying technology. Reports or messages sent to the users or other
applications are counted as external output types, described next.
2. External output types: Each unique output that leaves the system boundary is
counted as an external output type. Again, an external output type is considered
unique if its format or processing is different. For a report, if it contains a few
columns it is considered simple, if it has multiple columns it is considered average,
and if it contains a complex structure of data and references many files for its
production, it is considered complex.
3. Logical internal file types: Each application maintains information internally for
performing its functions. Each logical group of data or control information that is
generated, used, and maintained by the application is counted as a logical internal
file type. A logical internal file is simple if it contains a few record types, complex if it
has many record types, and average if it is in between.
4. External interface file types: Files that are passed or shared between applications
are counted as external interface file type. Note that each such file is counted for all
the applications sharing it. The complexity levels are defined as for logical internal
file type.
5. External inquiry types: A system may have queries also, where a query is defined
as an input/output combination where the input causes the output to be generated
almost immediately. Each unique input-output pair is counted as an external inquiry
type. A query is unique if it differs from others in format of input or output or if it
requires different processing. For classifying the query type, the input and output are
classified as for external input type and external output type, respectively. The query
complexity is the larger of the two.

These five parameters capture the entire functionality of a system. However, two
elements of the same type may differ in their complexity and hence should not contribute
the same amount to the "functionality" of the system.

Each element of the same type and complexity contributes a fixed and same amount
to the overall function point count of the system (which is a measure of the functionality of
the system), but the contribution is different for the different types, and for a type, it is
different for different complexity levels. The counts for all five different types are known for
all three different complexity classes, the raw or unadjusted function point (UFP) can be
computed as a weighted sum as follows:

UFP = Σ (i = 1 to 5) Σ (j = 1 to 3) wij * Cij,

where i reflects the row and j reflects the column in Table 3.3; wij is the entry in the
ith row and jth column of the table (i.e., it represents the contribution of an element of
type i and complexity j); and Cij is the count of the number of elements of type i that
have been classified as having the complexity corresponding to column j.

Once the UFP is obtained, it is adjusted for the environment complexity.

For this, 14 different characteristics of the system are given:

1. data communications
2. distributed processing
3. performance objectives
4. operation configuration load
5. transaction rate
6. on-line data entry
7. end user efficiency
8. on-line update
9. complex processing logic
10. re-usability
11. installation ease
12. operational ease
13. multiple sites
14. desire to facilitate change

The degree of influence of each of these factors is taken to be from 0 to 5,
representing six different levels:

Not present (0),
Insignificant influence (1),
Moderate influence (2),
Average influence (3),
Significant influence (4),
Strong influence (5).

The 14 degrees of influence for the system are then summed, giving a total N
(N ranges from 0 to 14 * 5 = 70). This N is used to obtain a complexity adjustment
factor (CAF) as follows:

CAF = 0.65 + 0.01 N.

With this equation, the value of CAF ranges between 0.65 and 1.35. The delivered
function points (DFP) are then computed by multiplying the UFP by the CAF, i.e.,

Delivered Function Points = CAF * Unadjusted Function Points.


As we can see, by adjustment for environment complexity, the DFP can differ from
the UFP by at most 35%. The final function point count for an application is the computed
DFP.
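The counting scheme can be made concrete with a small sketch. The weights below are the commonly published function-point weights; the text's Table 3.3 is not reproduced here, so treat them (and the sample counts) as illustrative assumptions:

```python
# Commonly published weights per (type, complexity); an assumption,
# since the text's Table 3.3 is not reproduced here.
WEIGHTS = {
    "external_input":     {"simple": 3, "average": 4,  "complex": 6},
    "external_output":    {"simple": 4, "average": 5,  "complex": 7},
    "logical_internal":   {"simple": 7, "average": 10, "complex": 15},
    "external_interface": {"simple": 5, "average": 7,  "complex": 10},
    "external_inquiry":   {"simple": 3, "average": 4,  "complex": 6},
}

def unadjusted_fp(counts):
    """counts: {type: {complexity: number of elements}} -> UFP."""
    return sum(WEIGHTS[t][c] * n
               for t, by_cx in counts.items()
               for c, n in by_cx.items())

def delivered_fp(ufp, influences):
    """influences: the 14 ratings, each 0..5; CAF = 0.65 + 0.01 * N."""
    caf = 0.65 + 0.01 * sum(influences)
    return caf * ufp

# Hypothetical counts for a small application.
counts = {"external_input":     {"simple": 5, "average": 2, "complex": 0},
          "external_output":    {"simple": 4, "average": 0, "complex": 1},
          "logical_internal":   {"simple": 2, "average": 1, "complex": 0},
          "external_interface": {"simple": 0, "average": 1, "complex": 0},
          "external_inquiry":   {"simple": 2, "average": 0, "complex": 0}}
ufp = unadjusted_fp(counts)            # 83
dfp = delivered_fp(ufp, [3] * 14)      # N = 42, CAF = 1.07, DFP = 88.81
```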

Function points have been used as a size measure extensively and have been used
for cost estimation. Studies have also been done to establish correlation between DFP and
the final size of the software (measured in lines of code.)

For example, according to one such conversion given in
www.theadvisors.com/langcomparison.htm, one function point is approximately
equal to about 125 lines of C code, and about 50 lines of C++ or Java code.

By building models between function points and delivered lines of code (and existing
results have shown that a reasonably strong correlation exists between DFP and KLOC so
that such models can be built), one can estimate the size of the software in KLOC, if
desired.
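Such a conversion is straightforward to apply (a sketch using the approximate ratios just quoted; the ratio table and function are my own illustration):

```python
# Approximate language-dependent conversion ratios (LOC per function point).
LOC_PER_FP = {"C": 125, "C++": 50, "Java": 50}

def estimated_kloc(function_points, language):
    """Convert a function point count into an estimated size in KLOC."""
    return function_points * LOC_PER_FP[language] / 1000

print(estimated_kloc(100, "C"))     # -> 12.5 KLOC
print(estimated_kloc(100, "Java"))  # -> 5.0 KLOC
```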

As can be seen from the manner in which the functionality of the system is defined,
the function point approach has been designed for the data processing type of
applications. For data processing applications, function points generally perform very well
and have now gained a widespread acceptance. For such applications, function points are
used as an effective means of estimating cost and evaluating productivity.

However, its utility as a size measure for non-data-processing types of applications
(e.g., real-time software, operating systems, and scientific applications) has not been well
established, and it is generally believed that for such applications function points are not
very well suited.

A major drawback of the function point approach is that the process of computing
the function points involves subjective evaluation at various points and the final computed
function point for a given SRS may not be unique and can depend on the analyst.

Some of the places where subjectivity enters are:

(1) different interpretations of the SRS (e.g., whether something should count as an
external input type or an external interface type; whether or not something constitutes a
logical internal file; if two reports differ in a very minor way should they be counted as two
or one);

(2) complexity estimation of a user function is totally subjective and depends entirely
on the analyst (an analyst may classify something as complex while someone else may
classify it as average), and complexity can have a substantial impact on the final count, as
the weights for simple and complex frequently differ by a factor of 2; and

(3) Value judgments for the environment complexity.

These factors make the process of function point counting somewhat subjective.
Organizations that use function points try to specify a more precise set of counting rules in
an effort to reduce this subjectivity. It has also been found that with experience this
subjectivity is reduced. Overall, despite this subjectivity, use of function points for data
processing applications continues to grow.


The main advantage of function points over the size metric of KLOC, the other
commonly used approach, is that the definition of DFP depends only on information
available from the specifications, whereas the size in KLOC cannot be directly determined
from specifications. Furthermore, the DFP count is independent of the language in which
the project is implemented.

Though these are major advantages, another drawback of the function point
approach is that even when the project is finished, the DFP is not uniquely known and has
subjectivity. This makes building of models for cost estimation hard, as these models are
based on information about completed projects. In addition, determining the DFP—from
either the requirements or a completed project—cannot be automated. That is, considerable
effort is required to obtain the size, even for a completed project. This is a drawback
compared to KLOC measure, as KLOC can be determined uniquely by automated tools once
the project is completed.

-------------------------------------------------------------------------------------------------------

Q. Write a short note on COCOMO model? (Oct 12, March 13) 5M.

 COCOMO Model

A top-down model can depend on many different factors, instead of depending only
on one variable, giving rise to multivariable models. One approach for building multivariable
models is to start with an initial estimate determined by using the static single-variable
model equations, which depend on size, and then to adjust the estimate based on other
variables. This approach implies that size is the primary factor for cost; other factors have a
lesser effect. Here we will discuss one such model, called the Constructive Cost Model
(COCOMO), developed by Boehm [20, 21]. This model estimates the total effort in
terms of person-months. The basic steps in this model are:

1. Obtain an initial estimate of the development effort from the estimate of
thousands of delivered lines of source code (KLOC).

2. Determine a set of 15 multiplying factors from different attributes of the project.

3. Adjust the effort estimate by multiplying the initial estimate with all the
multiplying factors.

The initial estimate (also called the nominal estimate) is determined by an equation of
the form used in the static single-variable models, using KLOC as the measure of size. To
determine the initial effort Ei in person-months, the equation used is of the type

Ei = a * (KLOC)^b.

The values of the constants a and b depend on the project type. In COCOMO, projects
are categorized into three types: organic, semidetached, and embedded. These categories
roughly characterize the complexity of the project, with organic projects being those that
are relatively straightforward and developed by a small team, and embedded projects being
those that are ambitious and novel, with stringent constraints from the environment and
high requirements for such aspects as interfacing and reliability. The constants a and b for
the different system types are:

Organic: a = 3.2, b = 1.05
Semidetached: a = 3.0, b = 1.12
Embedded: a = 2.8, b = 1.20


The values of the constants for a cost model depend on the process and have to be
determined from past data. COCOMO has instead provided "global" constant values.
These values should be considered as values to start with until data for some projects is
available. With project data, the value of the constants can be determined through
regression analysis. There are 15 different attributes, called cost driver attributes, that
determine the multiplying factors. These factors depend on product, computer, personnel,
and technology attributes (called project attributes). Examples of the attributes are required
software reliability (RELY), product complexity (CPLX), analyst capability (ACAP), application
experience (AEXP), use of modern tools (TOOL), and required development schedule
(SCED). Each cost driver has a rating scale, and for each rating, a multiplying factor is
provided. For example, for the product attribute RELY, the rating scale is very low, low,
nominal, high, and very high (and in some cases extra high). The multiplying factors for
these ratings are .75, .88, 1.00, 1.15, and 1.40, respectively. So, if the reliability
requirement for the project is judged to be low, then the multiplying factor is .75,
while if it is judged to be very high, the factor is 1.40. The attributes and their multiplying
factors for different ratings are shown in Table 5.1 [20, 21]. The COCOMO approach also
provides guidelines for assessing the rating for the different attributes [20]. The multiplying
factors for all 15 cost drivers are multiplied to get the effort adjustment factor (EAF). The
final effort estimate, E, is obtained by multiplying the initial estimate by the EAF. That is,

E = EAF * Ei.

By this method, the overall cost of the project can be estimated. For planning and
monitoring purposes, estimates of the effort required for the different phases are also
desirable. In COCOMO, effort for a phase is a defined percentage of the overall effort. The
percentage of total effort spent in a phase varies with the type and size of the project. The
percentages for an organic software project are given in Table 5.2. Using this table, the
estimate of the effort required for each phase can be determined from the total effort
estimate. For example, if the total effort estimate for an organic software system is 20 PM
and the size estimate is 20 KLOC, then the percentage effort for the coding and unit testing
phase will be 40 + (38 - 40)/(32 - 8) * (20 - 8) = 39%. The estimate for the effort needed
for this phase is 7.8 PM. This table does not list the cost of requirements as a
percentage of the total cost estimate because the project plan (and cost estimation) is
being done after the requirements are complete. In COCOMO, the detailed design and the
code and unit testing are sometimes combined into one phase called the programming phase.

As an example, suppose a system for office automation has to be designed. From the
requirements, it is clear that there will be four major modules in the system: data entry,
data update, query, and report generator. It is also clear from the requirements that this
project will fall in the organic category. The sizes for the different modules and the overall
system are estimated to be:

Data Entry 0.6 KLOC

Data Update 0.6 KLOC

Query 0.8 KLOC

Reports 1.0 KLOC

TOTAL 3.0 KLOC


From the requirements, the ratings of the different cost driver attributes are assessed.
These ratings, along with their multiplying factors, are:

Complexity High 1.15

Storage High 1.06

Experience Low 1.13

Programmer Capability Low 1.17

All other factors had a nominal rating. From these, the effort adjustment factor (EAF) is

EAF = 1.15*1.06*1.13*1.17 = 1.61.

The initial effort estimate for the project is obtained from the relevant equation:

Ei = 3.2 * (3)^1.05 = 10.14 PM.

Using the EAF, the adjusted effort estimate is

E = 1.61 * 10.14 = 16.3 PM.

Using the preceding table, we obtain the percentage of the total effort consumed in different
phases. The office automation system's size estimate is 3 KLOC, so we will have to use
interpolation to get the appropriate percentages (the two end values for interpolation will be
the percentages for 2 KLOC and 8 KLOC). The percentages for the different phases are:

system design—16%
detailed design—25.83%
code and unit test—41.66%
integration and testing—16.5%

With these, the effort estimates for the different phases are:

System Design: .16 * 16.3 = 2.6 PM
Detailed Design: .258 * 16.3 = 4.2 PM
Code and Unit Test: .4166 * 16.3 = 6.8 PM
Integration: .165 * 16.3 = 2.7 PM
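The entire office-automation calculation can be replayed in a few lines of Python (a sketch; the organic-mode constants a = 3.2, b = 1.05 and the phase percentages are those used in the example above):

```python
a, b = 3.2, 1.05                 # organic-mode COCOMO constants
kloc = 3.0                       # total size estimate for the system

ei = a * kloc ** b               # initial (nominal) effort, ~10.14 PM
eaf = 1.15 * 1.06 * 1.13 * 1.17  # effort adjustment factor, ~1.61
e = eaf * ei                     # adjusted effort estimate, ~16.3 PM

# Phase percentages interpolated for a 3 KLOC organic project.
phases = {"system design": 0.16, "detailed design": 0.2583,
          "code and unit test": 0.4166, "integration and testing": 0.165}
breakdown = {name: round(pct * e, 1) for name, pct in phases.items()}
print(round(e, 1), breakdown)
```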

-------------------------------------------------------------------------------------------------------

Q. Write a short note on effort estimation & uncertainty?

 Concept of Effort Estimation & Uncertainty

For a given set of requirements it is desirable to know how much it will cost to
develop the software and how much time the development will take. These estimates are
needed before development is initiated. The primary reason for cost and schedule
estimation is cost-benefit analysis, and project monitoring and control. A more practical
use of these estimates is in bidding for software projects, where cost estimates must be
given to a potential client for the development contract. The bulk of the cost of software
development is due to the human resources needed, and therefore most cost estimation
procedures focus on estimating effort in terms of person-months (PM). By properly
including the "overheads" (i.e., the cost of hardware, software, office space, etc.) in the cost
of a person-month, effort estimates can be converted into cost. For a software development
project, effort and schedule estimates are essential prerequisites for managing the project.
Otherwise, even simple questions like "is the project late?" "are there cost overruns?" and
"when is the project likely to complete?" cannot be answered. Effort and schedule estimates
are also required to determine the staffing level for a project during different phases.

Estimates can be based on the subjective opinion of some person or determined through
the use of models. Though there are approaches to structure the opinions of persons for
achieving a consensus on the effort estimate (e.g., the Delphi approach [20]), it is generally
accepted that it is important to have a more scientific approach to estimation through the
use of models. In this section we discuss only the model-based approach for effort
estimation. Before we discuss the models, let us first understand the limitations of any
effort estimation procedure.

 Uncertainties in Effort Estimation:

One can perform effort estimation at any point in the software life cycle. As the effort
of the project depends on the nature and characteristics of the project, at any point, the
accuracy of the estimate will depend on the amount of reliable information we have about
the final product. Clearly, when the product is delivered, the effort can be accurately
determined, as all the data about the project and the resources spent can be fully known
by then. This is effort estimation with complete knowledge about the project. On the other
extreme is the point when the project is being initiated or during the feasibility study. At
this time, we have only some idea of the classes of data the system will get and produce
and the major functionality of the system.

There is a great deal of uncertainty about the actual specifications of the system.
Specifications with uncertainty represent a range of possible final products, not one
precisely defined product. Hence, the effort estimation based on this type of information
cannot be accurate. Estimates at this phase of the project can be off by as much as a factor
of four from the actual final effort.

As we specify the system more fully and accurately, the uncertainties are reduced
and more accurate effort estimates can be made. For example, once the requirements are
completely specified, more accurate effort estimates can be made compared to the
estimates after the feasibility study. Once the design is complete, the estimates can be
made still more accurately. The obtainable accuracy of the estimates as it varies with the
different phases is shown in the figure.

Note that this figure simply specifies the limitations of effort estimation
strategies—the best accuracy an effort estimation strategy can hope to achieve. It does not
say anything about the existence of strategies that can provide an estimate with that
accuracy. For actual effort estimation, estimation models or procedures have to be
developed. The accuracy of the estimates will depend on the effectiveness and accuracy of
the estimation procedures or models employed and the process (i.e., how predictable it is).

Despite the limitations, estimation models have matured considerably and generally
give fairly accurate estimates. For example, when the COCOMO model (discussed later) was
checked with data from some projects, it was found that the estimates were within 20% of
the actual effort 68% of the time.

It should also be mentioned that achieving an estimate within 20% after the
requirements have been specified is actually quite good. With such an estimate, there need
not even be any cost and schedule overruns, as there is generally enough slack or free time
available (recall the study mentioned earlier that found a programmer spends more than
30% of his time in personal or miscellaneous tasks) that can be used to meet the targets
set for the project based on the estimates. In other words, if the estimate is within 20% of
the actual, the effect of this inaccuracy will not even be reflected in the final cost and
schedule. Highly precise estimates are generally not needed.

Reasonable estimates in a software project tend to become a self-fulfilling
prophecy—people work to meet the schedules (which are derived from effort estimates).


Quick Revision
_________________________________________________________________

LOC and FP data are used in two ways during software project estimation:

1. Estimation variables to "size" each element of the software, and

2. Baseline metrics collected from past projects and used in conjunction with
estimation variables to develop cost and effort projections.

 When LOC is used as the estimation variable, decomposition is absolutely essential and
is often taken to considerable levels of detail. The greater the degree of partitioning, the
more likely reasonably accurate estimates of LOC can be developed.
 Function points are one of the most widely used measures of software size. The basis of
function points is that the "functionality" of a system, that is, what the system performs,
is the measure of the system size. And as functionality is independent of how the
requirements of the system are specified, or even how they are eventually implemented.
 The system functionality is calculated in terms of
 the number of functions it implements
 the number of inputs
 the number of outputs, etc
 Function points have been used as a size measure extensively and have been used for
cost estimation.
 The Constructive Cost Model (COCOMO), developed by Boehm, also estimates the
total effort in terms of person-months. The basic steps in this model are:

1. Obtain an initial estimate of the development effort from the estimate of
thousands of delivered lines of source code (KLOC).

2. Determine a set of 15 multiplying factors from different attributes of the project.

3. Adjust the effort estimate by multiplying the initial estimate with all the
multiplying factors.

 The bulk of the cost of software development is due to the human resources needed,
and therefore most cost estimation procedures focus on estimating effort in terms of
person-months (PM).

IMP Question Set

Sr. No.   Question                                                          Page No.
1         Q. Write a short note on LOC estimation with a suitable example?      11
2         Q. Write a short note on function point?                              13
3         Q. Write a short note on COCOMO model?                                17
4         Q. Write a short note on effort estimation & uncertainty?             21


CHAPTER3: PROJECT SCHEDULING

Topic Covered:

 Building WBS, Use of Gantt & PERT/CPM chart Staffing

Fred Brooks was once asked how software projects fall behind schedule. His response was
as simple as it was profound: “One day at a time.”

The reality of a technical project (whether it involves building a hydroelectric plant or
developing an operating system) is that hundreds of small tasks must occur to accomplish
a larger goal. Some of these tasks lie outside the mainstream and may be completed
without worry about their impact on the project completion date. Other tasks lie on the
"critical path." If these critical tasks fall behind schedule, the completion date of the entire
project is put into jeopardy.

As a project manager, your objective is to define all project tasks, build a network that
depicts their interdependencies, identify the tasks that are critical within the network, and
then track their progress to ensure that delay is recognized "one day at a time."

To accomplish this, you must have a schedule that has been defined at a degree of
resolution that allows progress to be monitored and the project to be controlled.

-------------------------------------------------------------------------------------------------------

Q. Explain the steps involved in Project Scheduling?

 STEPS INVOLVED IN PROJECT SCHEDULING:

1. Software project scheduling is an action that distributes estimated effort across the
planned project duration by allocating the effort to specific software engineering
tasks.
NOTE: The schedule evolves over time.

2. During the early stages of project planning, a macroscopic schedule is developed.
This type of schedule identifies all major process framework activities and the
product functions to which they are applied. As the project gets under way, each
entry on the macroscopic schedule is refined into a detailed schedule. Here,
specific software actions and tasks (required to accomplish an activity) are
identified and scheduled.
Scheduling for software engineering projects can be viewed from two rather different
perspectives.

1. An end date for release of a computer-based system has already (and irrevocably)
been established. The software organization is constrained to distribute effort within
the prescribed time frame.
2. The second view of software scheduling assumes that rough chronological bounds
have been discussed but that the end date is set by the software engineering
organization. Effort is distributed to make best use of resources, and an end date is
defined after careful analysis of the software.
Unfortunately, the first situation is encountered far more frequently than the second.

 Basic Principles of project scheduling:


(Like all other areas of software engineering, a number of basic principles guide
software project scheduling)

 Compartmentalization. The project must be compartmentalized into a number of


manageable activities and tasks. To accomplish compartmentalization, both the product
and the process are refined.
 Interdependency. The interdependency of each compartmentalized activity or task
must be determined. Some tasks must occur in sequence, while others can occur in
parallel. Some activities cannot commence until the work product produced by another
is available. Other activities can occur independently.
 Time allocation. Each task to be scheduled must be allocated some number of work
units (e.g., person-days of effort). In addition, each task must be assigned a start date
and a completion date that are a function of the interdependencies and whether work
will be conducted on a full-time or part-time basis.
 Effort validation. Every project has a defined number of people on the software team.
As time allocation occurs, you must ensure that no more than the allocated number of
people has been scheduled at any given time. For example, consider a project that has
three assigned software engineers (e.g., three person-days are available per day of
assigned effort). On a given day, seven concurrent tasks must be accomplished. Each
task requires 0.50 person-days of effort. More effort has been allocated than there are
people to do the work.
 Defined responsibilities. Every task that is scheduled should be assigned to a specific
team member.
 Defined outcomes. Every task that is scheduled should have a defined outcome. For
software projects, the outcome is normally a work product (e.g., the design of a
component) or a part of a work product. Work products are often combined in
deliverables.
 Defined milestones. Every task or group of tasks should be associated with a project
milestone. A milestone is accomplished when one or more work products have been
reviewed for quality and approved.

Each of these principles is applied as the project schedule evolves.
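The effort-validation arithmetic above can be checked mechanically. The following is a minimal sketch (the task data and team size are hypothetical, mirroring the three-engineer example):

```python
# Minimal effort-validation check: person-days scheduled on any day must not
# exceed the number of people on the team. Task data is hypothetical.
def validate_effort(tasks, team_size):
    """tasks: list of (day, person_days) allocations; returns over-allocated days."""
    per_day = {}
    for day, person_days in tasks:
        per_day[day] = per_day.get(day, 0) + person_days
    # Keep only the days where more effort is allocated than people exist.
    return {d: e for d, e in per_day.items() if e > team_size}

# Seven concurrent half-day tasks on day 1, but only three engineers:
tasks = [(1, 0.5)] * 7
print(validate_effort(tasks, 3))  # {1: 3.5} -> over-allocated
```

A scheduler would resolve the conflict by moving some of the seven tasks to a later day or stretching their durations.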

-------------------------------------------------------------------------------------------------------
Q. What is the requirement of detailed project scheduling?

 Detailed Scheduling

Once the milestones and the resources are fixed, it is time to set the detailed scheduling.

For detailed schedules, the major tasks fixed while planning the milestones are
broken into small schedulable activities in a hierarchical manner.

For example, the detailed design phase can be broken into tasks for

 Developing the detailed design for each module


 Review of each detailed design
 Fixing of defects found, and so on.
For each detailed task, the project manager estimates the time required to complete
it and assigns a suitable resource so that the overall schedule is met.

At each level of refinement, the project manager determines the effort for the overall
task from the detailed schedule and checks it against the effort estimates.


If this detailed schedule is not consistent with the overall schedule and effort
estimates, the detailed schedule must be changed.

If it is found that the best detailed schedule cannot match the milestone effort and
schedule, then the earlier estimates must be revised. Thus, scheduling is an iterative
process.

The project manager refines the tasks to a level so that the lowest-level activity can
be scheduled to occupy no more than a few days from a single resource. Activities related
to tasks such as project management, coordination, database management, and
configuration management may also be listed in the schedule, even though these activities
have less direct effect on determining the schedule because they are ongoing tasks rather
than schedulable activities. Nevertheless, they consume resources and hence are often
included in the project schedule.

NOTE: Rarely will a project manager complete the detailed schedule of the entire
project all at once.

Once the overall schedule is fixed, detailing for a phase may only be done at the
start of that phase.

For detailed scheduling, tools like Microsoft Project or a spreadsheet can be very
useful.

For each lowest-level activity, the project manager specifies

 The effort
 Duration
 Start date
 End date
 Resources.
Dependencies between activities, due either to an inherent dependency (for
example, you can conduct a unit test plan for a program only after it has been coded) or to
a resource-related dependency (the same resource is assigned two tasks) may also be
specified.
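The per-activity attributes and dependencies listed above can be captured in a simple record. A hypothetical sketch (names, dates, and resources are illustrative, not taken from any particular tool):

```python
# One record per lowest-level activity, holding the attributes a project
# manager specifies during detailed scheduling. All field values are hypothetical.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Activity:
    name: str
    effort_person_days: float
    duration_days: int
    start: date
    end: date
    resources: list = field(default_factory=list)
    depends_on: list = field(default_factory=list)  # inherent or resource-related

code = Activity("Code module A", 4.0, 5, date(2024, 3, 1), date(2024, 3, 7), ["dev1"])
unit_test = Activity("Unit-test module A", 2.0, 2, date(2024, 3, 8), date(2024, 3, 9),
                     ["dev1"], depends_on=[code])  # can only test after coding
print(unit_test.depends_on[0].name)  # Code module A
```

Tools like Microsoft Project maintain essentially this information per task, plus the dependency links between them.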

-------------------------------------------------------------------------------------------------------

Q. Why is Project Scheduling Dynamic in nature (i.e. not static)?


From these tools the overall effort and schedule of higher level tasks can be
determined. A detailed project schedule is never static. Changes may be needed because
the actual progress in the project may be different from what was planned, because newer
tasks are added in response to change requests, or because of other unforeseen situations.

Changes are done as and when the need arises.

The final schedule, frequently maintained using some suitable tool, is often the most
"live" project plan document.

During the project, if plans must be changed and additional activities must be done,
after the decision is made, the changes must be reflected in the detailed schedule, as this


reflects the tasks actually planned to be performed. Hence, the detailed schedule
becomes the main document that tracks the activities and schedule.

Building WBS, Use of Gantt & PERT/CPM charts

{EXPLANATION WITH THE HELP OF A CASE STUDY}

DEFINING THE PROBLEM AT RMO

1. Barbara and Steve, the CSS project team, developed the lists for the system scope
document after talking to William McDougal, vice president of marketing and sales, and
his assistants.
a. It is essential to obtain information from and to involve people who will be using the
system and who will obtain the most benefit from it. They provide valuable insights
to ensure that the system satisfies the business needs.
b. The project team conducts a preliminary investigation of alternative solutions to
reassess the assumptions the team made when the project was initiated. Because
the schedule and budget for the remainder of the project inherently assume a
particular approach to developing the system, it is critical to make those implicit
assumptions explicit so that all participants understand the constraints on the project
schedule and the team can perform an accurate feasibility analysis.
c. For example, if an "off-the-shelf" program is identified as a possible solution, part of
the schedule during the analysis phase must include tasks to evaluate the program
against the needs being researched. If the most viable solution appears to be a new
system developed completely in- house, detailed analysis tasks are planned and
scheduled.
NOTE: The most critical element in the success of a system development project is
user involvement.

2. While Barbara was finishing the problem definition statement, Steve did some
preliminary investigation of possible solutions. He researched the trade magazines, the
Internet, and other resources to determine whether sales and customer support
systems could be bought and installed rapidly. Although he found several, none seemed


to have the exact match of capabilities that RMO needed. He and Barbara, along with
William McDougal, had several discussions about how best to proceed. They decided
that the best approach was to proceed with the analysis phase of the project before
making any final decision about solutions. They would revisit this decision, in much
more detail, after the analysis phase activities were completed. For now, Barbara and
Steve began developing a schedule, budget, and feasibility statement for the new
system.

 Producing the Project Schedule:

Before discussing the details of a project schedule, let’s clarify three terms: task, activity,
and phase.

Fundamentally, a phase is made up of a group of related activities, and an activity is made


up of a group of related tasks.

A task is the smallest piece of work that is identified and scheduled.

Activities are also identified, named, and scheduled.

For example, suppose that you are scheduling the design phase. Within the design phase,
you identify activities such as Design the user interface, Design and integrate the database,
and complete the application design. Within the Design the user interface activity, you
might identify individual tasks such as Design the customer entry form and Design the
order-entry form.

Thus, the phase, activity, and task breakdown provides a three-level hierarchy.
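The phase/activity/task hierarchy can be modeled as a three-level nested structure. A small sketch using the design-phase names from the example above:

```python
# A WBS as a three-level nested dictionary: phase -> activity -> list of tasks.
# The entries mirror the design-phase example in the text; names are illustrative.
wbs = {
    "Design phase": {
        "Design the user interface": [
            "Design the customer entry form",
            "Design the order-entry form",
        ],
        "Design and integrate the database": [],
        "Complete the application design": [],
    }
}

def count_tasks(wbs):
    """Total number of leaf tasks across all phases and activities."""
    return sum(len(tasks) for activities in wbs.values()
               for tasks in activities.values())

print(count_tasks(wbs))  # 2
```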

During the project planning phase, it may not be possible to schedule every task in the
entire project because it is too early to know all of the tasks that will be necessary.

One of the requirements of the project planning phase is to provide estimates of the time to
complete the project and the total cost of the project.

Because one of the major factors in project cost is payment of salaries to the project team,
the estimate of the time and labor to complete the project becomes critical.

The development of a project schedule is divided into two main steps:

• Develop a work breakdown structure (WBS)

• Build a PERT/CPM chart.

-------------------------------------------------------------------------------------------------------


Q. Write a short note on WBS, explain with a suitable example? (March 12, OCT 13) 5M.
 DEVELOPING A WORK BREAKDOWN STRUCTURE
 A work breakdown structure (WBS)

 It is a list of all the individual tasks that are required to complete the project. It is
essential in planning and executing the project because it is the foundation for
developing the project schedule, for identifying milestones in the schedule, and for
managing cost.

FIGURE 3-10: Work breakdown structure for the planning phase of the RMO project

This figure is based on the list of activities in the project planning phase of the SDLC.

Each activity is further divided into individual tasks to be completed.

1. The WBS identifies a hierarchy, much like an outline for a paper.


2. The project requires a WBS for each phase of the SDLC.
3. The analysis, design, and implementation phase WBSs would be even more
important to define because the project planning phase attempts to schedule
the entire project.
4. Each task has a duration and a list of the resources required for its
completion.
5. The effort for a task is the product of
a. the duration,
b. the number of resources, and
c. the percentage of time that each resource is active on that task.


In other words, a resource may spend only half-time on a two-day task, which results in
only one day of effort.
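That effort computation can be written down directly; the values below mirror the half-time example (a minimal illustration, not a tool API):

```python
# Effort = duration x number of resources x fraction of time each is active.
# Mirrors the text: half-time on a two-day task yields one day of effort.
def effort(duration_days, num_resources, fraction_active):
    return duration_days * num_resources * fraction_active

print(effort(2, 1, 0.5))  # 1.0 person-day
```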

Three values of the duration are provided:

1. An expected duration
2. A pessimistic duration
3. An optimistic duration.
Having various values permits the project manager to develop an optimistic,
expected, and pessimistic schedule.
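One standard way to combine the three duration values into a single estimate is the classic PERT weighted mean, (O + 4M + P) / 6. The text above does not prescribe this formula, so treat it as common practice rather than this book's method; the task durations below are hypothetical:

```python
# Classic PERT three-point estimate: weighted mean of optimistic (O),
# most-likely/expected (M) and pessimistic (P) durations.
def pert_estimate(optimistic, expected, pessimistic):
    return (optimistic + 4 * expected + pessimistic) / 6

# Hypothetical task: 2 days best case, 4 days expected, 12 days worst case.
print(pert_estimate(2, 4, 12))  # 5.0 days
```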

It is quite difficult to develop a work breakdown structure from scratch.

The project team could meet to brainstorm and try to think of everything that it
needs to complete the project. This meeting is a type of bottom-up approach: just
brainstorm every single task and hope you cover everything.

One of two techniques is used as a starting point. These techniques are called

 Standards-based WBS: A standards-based WBS is built from a standard plan.


The example shown in Figure 3-10 can be called a standards-based WBS. Based
on the standard set of activities defined for the planning phase, the project team
expanded each activity into more detailed tasks. The specific methodology and the
SDLC in use by the organization play an important role. The key is to have a list of
activities and tasks that are predefined for this type of project. Companies that do
work for the U.S. government frequently have standard WBS lists for various
types of projects. Government contracts frequently require a standard project
definition. The project team begins with the standard and then builds on it with
the additional tasks that are unique to the particular project.
 Analogy-based WBS: An analogy-based WBS is similar in form to a standards-
based WBS, except that it uses different input as a starting point. In an analogy-
based WBS, the project team will try to identify another project, an analogous
project that is as similar as possible to the project at hand and is, of course,
complete. Then, based on the experience from that previous project, the team will
build the new WBS. Companies that do the same types of projects over and over
again use this technique. For example, a car producer such as General Motors
knows all of the steps required for a project to engineer a new car. Such
companies use the same set of procedures that they have successfully used
previously.
 ADVANTAGES OF WBS:

 It provides the most accurate estimate for the duration and effort
required for the project.
 Time estimates are more accurate if they are developed at a detailed
level rather than a "guesstimate" of the major processes.
 Schedules built from a good WBS are also much easier to monitor and
control.
 The WBS is the key to a successful project.

-------------------------------------------------------------------------------------------------------


Q. What are the steps involved in developing Gantt, PERT, CPM charts? (OCT 12) 5M.

 BUILDING A PERT/CPM CHART

PERT stands for Program Evaluation and Review Technique,

CPM stands for Critical Path Method.

A PERT/CPM is a diagram of all the tasks identified in the WBS, showing the
sequence of dependencies of the tasks. An example of a PERT/CPM chart for RMO is shown
in Figure 3-11.

This example, which is only a partial example of the RMO schedule, illustrates the
basic ideas of a PERT/CPM chart.

 Each rectangle represents a single task or a summary activity.


 Within each rectangle is the name of the task, a unique identifier, the
duration, and the beginning and end dates.
 The task begins at the start of the date listed on the left side of the rectangle
and finishes at the end of the date listed on the right side.
 The connecting lines with arrows indicate the sequence of the tasks.

Note: Some tasks are done sequentially, one after the other, and other tasks are
done in parallel, or at the same time.

 The arrows represent task dependencies and indicate the normal sequence of
carrying out a project.

Partial PERT/CPM chart for the customer support system project

By showing which tasks can be done concurrently, a PERT chart assists in assigning staff.


It is always a juggling act to balance the availability and workload of the team
members with the dependent and independent tasks.

 Building the PERT/CPM chart begins with the list of activities and tasks
developed in the WBS.
 The WBS is analyzed, including the duration and expected resources for each
task, to determine the dependencies.
 For each task, the chart identifies all the immediate predecessor tasks and
the successor tasks.
 Decisions must be made concerning the most probable progression of the
project. For example, should the team design the customer entry form first or
the product definition form? Either will work. Automation does not help in
this step.
 The critical path is the longest path through the PERT/CPM diagram and
contains all the tasks that must be done in the defined sequential order. It is
called the critical path because if any task on the critical path is delayed, the
entire project will be completed late. Project managers usually monitor the
tasks on the critical path very carefully.
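The critical path described above can be computed as the longest-duration path through the task network. A minimal sketch over a hypothetical dependency graph (not RMO's actual tasks):

```python
# Critical path = the longest-duration path through the task network.
# Task durations and dependencies below are hypothetical.
def critical_path(durations, predecessors):
    """durations: task -> days; predecessors: task -> list of prerequisite tasks."""
    finish = {}  # earliest finish time per task

    def earliest_finish(task):
        if task not in finish:
            start = max((earliest_finish(p) for p in predecessors.get(task, [])),
                        default=0)
            finish[task] = start + durations[task]
        return finish[task]

    end = max(durations, key=earliest_finish)  # task that finishes last
    # Walk back along the predecessor that determines each task's start date.
    path = [end]
    while predecessors.get(path[-1]):
        path.append(max(predecessors[path[-1]], key=lambda p: finish[p]))
    return list(reversed(path)), finish[end]

durations = {"A": 3, "B": 2, "C": 4, "D": 1}
predecessors = {"C": ["A", "B"], "D": ["C"]}
path, length = critical_path(durations, predecessors)
print(path, length)  # ['A', 'C', 'D'] 8
```

Here B has two days of slack: delaying it by up to one day does not move the end date, but any delay on A, C, or D delays the whole project.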

 ADVANTAGES OF PERT/CPM: It produces a diagram that makes it easy to see


dependencies and the critical path. However, it is not easy to view the project’s
progress on a PERT chart.
 GANTT CHART

Viewing the activities spread out over a calendar requires a different chart called a Gantt
chart.

It is essentially a bar chart, one bar for each task, with the horizontal axis being units of
time.

A Gantt chart is good for monitoring the progress of the project as it moves along.

Figure 3-12 shows a version of a Gantt chart, called a tracking Gantt chart, for the same
tasks shown in Figures 3-10 and 3-11.

The left column shows the names of the tasks.

The length of the bars indicates the task durations.

In this example of Microsoft Project, the task bars can be color coded to indicate critical
path, not started, partially complete, or completed.

The red tasks are on the critical path, whereas the blue ones are not on the critical path.

Tasks that are complete or partially complete are shown in solid colors. The solid
vertical line in February represents today’s date. So, the project manager can easily check
the status of the tasks on the diagram.

Any tasks to the left of the vertical line that are not completed are behind schedule.

The tasks to the right that have been completed are ahead of schedule.

Tasks that intersect with the vertical line may be either on track, ahead, or behind.


Depending on how status is reported, the Gantt chart might or might not indicate the
status of those tasks.

Most project managers find PERT/CPM charts beneficial while they are developing the
schedule, but Gantt charts are most useful after the project begins.
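The bar-per-task layout of a Gantt chart can even be sketched in plain text; a toy renderer with hypothetical task names, start days, and durations:

```python
# Toy text-mode Gantt chart: one bar per task on a horizontal time axis.
# Task names, start days (offsets) and durations are hypothetical.
def gantt(tasks, width=20):
    lines = []
    for name, start, duration in tasks:
        bar = " " * start + "#" * duration  # position the bar on the axis
        lines.append(f"{name:<12}|{bar:<{width}}|")
    return "\n".join(lines)

print(gantt([("Gather info", 0, 4), ("Define reqs", 2, 6), ("Design UI", 8, 5)]))
```

Real tools add the today-line, color coding, and percent-complete shading on top of this same bar-per-task idea.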

SCHEDULING THE ENTIRE SDLC

The examples shown in Figures 3-10, 3-11, and 3-12 only detail the WBS for the project
planning phase. The other SDLC phases would also need to be scheduled.

The SDLC provides the activities for each phase. Each activity is made up of a list of tasks.
A standards-based or analogy- based WBS can be used to provide the detailed list of tasks
for each analysis, design, and implementation phase activity.

If we assume that the project includes overlapping SDLC phases, a Gantt chart showing the
entire project at the phase and activity level of detail might look like Figure 3-13.


Figure 3-13: Tracking Gantt chart for the entire customer support system project

Note: the length of each activity does not imply that the team is working full-time on that
activity from start to finish. The activity starts and continues with varying degrees of effort
for the duration. All team members get used to multitasking, that is, working on more than
one activity or task at the same time. Therefore, an overlapping view of the project is not
useful for calculating total labor cost, but it can show the completion of each phase and the
entire project. The elapsed time for the CSS development project is about nine months.
After that, the support phase begins.

The overlapping phases are usually the result of an iterative approach of the SDLC. For
planning and scheduling purposes, many project managers use project management
software and Gantt charts to plan and track the iterations of the project.

Each iteration includes analysis, design, and implementation activities that focus on a
portion of the system’s functionality. Some analysis activities will be included in every
iteration; other activities might only be included in a few.

For example, each iteration might include

Analysis activities:


Gather information

Define system requirements

Design activities:

Design the application architecture,

Design the user interfaces,

Design and integrate the database.

Similarly, each iteration might include

Implementation activities

Construct software components

Verify

Test.

Other activities from each of these phases might be included in some but not all iterations,
depending on the project plan.

A Gantt chart in Figure 3-14 shows how the RMO project might be scheduled with three
iterations.

The project team does not concern itself with what phase of the SDLC the project is in at
any point in time.

The SDLC phases and activities provide the framework for defining project planning and
multiple iterations that are scheduled throughout the project.

Team Structure:

We have seen that the number of resources is fixed when the schedule is being planned.
Detailed scheduling is done only after actual assignment of people has been done, as task
assignment needs information about the capabilities of the team members.

We have implicitly assumed that the project's team is led by a project manager, who does
the planning and task assignment. This form of hierarchical team organization is fairly
common, and was earlier called the Chief Programmer Team.

In this hierarchical organization, the project manager is responsible for all major technical
decisions of the project. He does most of the design and assigns coding of the different
parts of the design to the programmers.

The team typically consists of programmers, testers, a configuration controller, and possibly
a librarian for documentation. There may be other roles like database manager, network
manager, backup project manager, or a backup configuration controller.


It should be noted that these are all logical roles and one person may do multiple such
roles.

For a small project, a one-level hierarchy suffices.

For larger projects, this organization can be extended easily by partitioning the project into
modules, and having module leaders who are responsible for all tasks related to their
module and have a team with them for performing these tasks.

A different team organization is the egoless team: Egoless teams consist of ten or
fewer programmers. The goals of the group are set by consensus, and input from every
member is taken for major decisions. Group leadership rotates among the group members.
Due to their nature, egoless teams are sometimes called democratic teams. This structure
allows input from all members, which can lead to better decisions for difficult problems. This
structure is well suited for long-term research-type projects that do not have time
constraints. It is not suitable for regular tasks that have time constraints; for such tasks,
the communication in democratic structure is unnecessary and results in inefficiency.

In recent times, for very large product developments, another structure has emerged.

This structure recognizes that there are three main task categories in software
development:

Management related

Development related

Testing related.

It also recognizes that it is often desirable to have the test and development teams be
relatively independent, and also not to have the developers or testers report to a
nontechnical manager.

In this structure, consequently, there is an overall unit manager, under whom there are
three small hierarchic organizations:

For program management: The program managers provide the specifications for what is
being built, and ensure that development and testing are properly coordinated.

For development: The primary job of developers is to write code and they work under a
development manager.

For testing: The responsibility of the testers is to test the code and they work under a test
manager.

In a large product this structure may be replicated, one for each major unit. This type of
team organization is used in corporations like Microsoft.


Quick Revision

 Software project scheduling is an action that distributes estimated effort across the
planned project duration by allocating the effort to specific software engineering tasks.
 Compartmentalization. The project must be compartmentalized into a number of
manageable activities and tasks. To accomplish compartmentalization, both the product
and the process are refined.
 Interdependency. The interdependency of each compartmentalized activity or task
must be determined. Some tasks must occur in sequence, while others can occur in
parallel. Some activities cannot commence until the work product produced by another
is available. Other activities can occur independently.
 Time allocation. Each task to be scheduled must be allocated some number of work
units (e.g., person-days of effort). In addition, each task must be assigned a start date
and a completion date that are a function of the interdependencies and whether work
will be conducted on a full-time or part-time basis.
 Effort validation. Every project has a defined number of people on the software team.
As time allocation occurs, you must ensure that no more than the allocated number of
people has been scheduled at any given time.
 Defined responsibilities. Every task that is scheduled should be assigned to a specific
team member.
 Defined outcomes. Every task that is scheduled should have a defined outcome.
 Defined milestones. Every task or group of tasks should be associated with a project
milestone.
 A work breakdown structure (WBS)
 It is a list of all the individual tasks that are required to complete the project. It is
essential in planning and executing the project because it is the foundation for
developing the project schedule, for identifying milestones in the schedule, and for
managing cost.
 PERT stands for Program Evaluation and Review Technique,
 CPM stands for Critical Path Method.
 A PERT/CPM is a diagram of all the tasks identified in the WBS, showing the sequence of
dependencies of the tasks.
 The SDLC provides the activities for each phase. Each activity is made up of a list of
tasks. A standards-based or analogy- based WBS can be used to provide the detailed list
of tasks for each analysis, design, and implementation phase activity.

IMP Question Set

SR No.  Question                                                               Reference page No.
1       Q. Explain the steps involved in Project Scheduling?                   24
2       Q. What is the requirement of detailed project scheduling?             25
3       Q. Write a short note on WBS, explain with a suitable example?         28
4       Q. What are the steps involved in developing Gantt, PERT, CPM charts?  30
5       Q. Why is Project Scheduling Dynamic in nature (i.e. not static)?      26


CHAPTER 4: SOFTWARE CONFIGURATION MANAGEMENT PROCESS

Topic Covered:

 Process & Functionality & Mechanism


 Process Management, CMM & its levels, Risk Management & activities

Changes continuously take place in a software project—changes due to the evolution


of work products as the project proceeds, changes due to defects (bugs) being found and
then fixed, and changes due to requirement changes.

All these are reflected as changes in the files containing source, data, or
documentation.

Configuration management (CM) or software configuration management (SCM) is the


discipline for systematically controlling the changes that take place during development.

 The IEEE defines SCM as "the process of identifying and defining the items in the
system, controlling the change of these items throughout their life cycle,
recording and reporting the status of items and change requests, and verifying
the completeness and correctness of items".
 Though all three are types of changes, changes due to product evolution and
changes due to bug fixes can be, in some sense, treated as a natural part of the
project itself which have to be dealt with even if the requirements do not
change.
 Requirements changes, on the other hand, have a different dynamic.

 Software configuration management is a process independent of the


development process largely because most development models look at the
overall picture and not on changes to individual files.
 In a way, the development process is brought under the configuration control
process, so that changes are allowed in a controlled manner, as shown in Figure
2.15 for a waterfall-type development process model.
Note: SCM directly controls only the products of a process and only indirectly influences the
activities producing the product.

 CM is essential to satisfy one of the basic objectives of a project: delivery of a
high-quality software product to the client.
 The software that is delivered contains the various source or object files that
make up the source or object code, scripts to build the working system from
these files, and associated documentation.
 During the course of the project, the files change, leading to different versions.


 In this situation, how does a program manager ensure that the appropriate
versions of sources are combined without missing any source, and the correct
versions of the documents, which are consistent with the final source, are sent?
This is ensured through proper CM.
-------------------------------------------------------------------------------------------------------

Q. Which functionalities are needed from the configuration management process? (OCT 12, March 14) 5M.

Q. Explain CM Functionality?

 CM Functionality

To better understand CM, let us consider some of the functionality that a project
requires from the CM process.

 Give latest version of a program. Suppose that a program has to be modified. Clearly, the modification has to be carried out in the latest copy of that program; otherwise, changes made earlier may be lost. A proper CM process will ensure that the latest version of a file can be obtained easily.
 Undo a change or revert to a specified version. A change is made to a program, but later it becomes necessary to undo this change. Similarly, a change might be made to many programs to implement some change request, and later it may be decided that the entire change should be undone. The CM process must allow this to happen smoothly.
 Prevent unauthorized changes or deletions. A programmer may decide
to change some programs, only to discover that the change has adverse side
effects. The CM process ensures that unapproved changes are not permitted.
 Gather all sources, documents, and other information for the current system. All sources and related files are needed for releasing the product. The CM process must provide this functionality. All sources and related files of a working system are also sometimes needed for reinstallation. These are some of the basic needs that a CM process must satisfy. There are other, advanced requirements, like handling concurrent updates or variants.
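These basic functions can be pictured with a small sketch. The `Repository` class, its access policy, and the file names below are invented purely for illustration; real CM tools such as CVS provide this functionality (and much more).

```python
# Illustrative sketch of basic CM functionality: every change creates a new
# version (old ones are preserved), the latest version is always retrievable,
# a change can be undone by reverting to an earlier version, and
# unauthorized users cannot modify controlled items.

class RepositoryError(Exception):
    pass

class Repository:
    def __init__(self, authorized_users):
        self.authorized = set(authorized_users)
        self.items = {}  # item name -> list of versions (index = version number)

    def check_in(self, user, name, content):
        if user not in self.authorized:          # prevent unauthorized changes
            raise RepositoryError(f"user {user!r} is not authorized")
        self.items.setdefault(name, []).append(content)

    def latest(self, name):                      # give latest version
        return self.items[name][-1]

    def revert(self, user, name, version):       # undo: re-check-in an old version
        self.check_in(user, name, self.items[name][version])

# Example: two check-ins, then an undo of the second (buggy) change.
repo = Repository(authorized_users={"asha"})
repo.check_in("asha", "main.c", "v0: original")
repo.check_in("asha", "main.c", "v1: buggy change")
repo.revert("asha", "main.c", 0)   # latest content is now the v0 content again
```

Note that `revert` does not delete the bad version; it records the older content as a new version, so the full history remains available for audits.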
-------------------------------------------------------------------------------------------------------
Q. Write a short note on CM mechanism?

 CM Mechanisms

The main purpose of CM is to provide various mechanisms that can support the
functionality needed by a project to handle the types of scenarios discussed above that
arise due to changes.

The mechanisms commonly used to provide the necessary functionality include the following:

a. Configuration identification and baselining


i. As discussed above, the software being developed is not a monolith.
ii. A software configuration item (SCI), or item, is a document or an artifact that is explicitly placed under configuration control and that can be regarded as a basic unit for modification.


iii. As the project proceeds, hundreds of changes are made to these configuration items. Without periodically combining the proper versions of these items into a defined state of the system, it becomes very hard to reconstruct the system from the different versions of the many SCIs.
iv. For this reason, baselines are established. A baseline, once established,
captures a logical state of the system, and forms the basis of change
thereafter. A baseline also forms a reference point in the development of a
system. A baseline essentially is an arrangement of a set of SCIs. That is, a
baseline is a set of SCIs and the relationship between them.
v. For example, a requirements baseline may consist of many requirement SCIs (e.g., each requirement is an SCI) and how these SCIs are related in the requirements baseline (e.g., in which order they appear). The SCIs being managed by SCM are not independent of one another; there are dependencies between various SCIs. An SCI X is said to depend on another SCI Y if a change to Y might require a change to X for X to remain correct or for the baselines to remain consistent. A change request might directly require changes to some SCIs; the dependency of other SCIs on the ones being changed might then require that those other SCIs be changed as well.
b. Version control or version management
i. Version control is a key issue for CM, and many tools are available to help
manage the various versions of programs. Without such a mechanism,
many of the required CM functions cannot be supported. Version control
helps preserve older versions of the programs whenever programs are
changed.
ii. Commonly used CM systems like SCCS, CVS (www.cvshome.org), and VSS (msdn.microsoft.com/vstudio/previous/ssafe) focus heavily on version control.
c. Access control
-------------------------------------------------------------------------------------------------------

Q. Explain the life cycle of an SCI? (March 12) 5M.

 Most CM systems also provide means for access control. To understand the need
for access control, let us understand the life cycle of an SCI.

 Typically, while an SCI is under development and is not yet visible to others, it is considered to be in the working state. An SCI in the working state is not under SCM and can be changed freely. Once the developer is satisfied that the SCI is stable enough to be used by others, the SCI is given for review, and the item enters the "under review" state.
 Once an item is in this state, it is considered "frozen," and any changes made to a private copy that the developer may have are not recognized.
 After a successful review the SCI is entered into a library, after which the item is formally under SCM.


The basic purpose of this review is to make sure that the item is of satisfactory quality and is needed by others, though the exact nature of the review will depend on the nature of the SCI and the actual practice of SCM.
 For example, the review might entail checking whether the item meets its specifications or whether it has been properly unit tested. If the item is not approved, the developer may be given the item back and the SCI enters the working state again. This is the "life cycle" of an item from the SCM perspective.
 Once an SCI is in the library, any modification should be controlled, as others
may be using that item. Hence, access to items in the library is controlled. For
making an approved change, the SCI is checked out of the library, the change is
made, the modification is reviewed and then the SCI is checked back into the
library.
 When a new version is checked in, the old version is not replaced; both the old and new versions may exist in the library, often logically, with one file being maintained along with information about the changes needed to recreate the older version. This aspect of SCM is sometimes called library management and is done with the aid of tools.
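The life cycle just described can be sketched as a small state machine. The class and method names below are invented for illustration; the states and transitions follow the text above.

```python
# Sketch of the SCI life cycle: working -> under_review -> in_library.
# A failed review sends the item back to "working"; once in the library,
# a change requires an explicit check-out, and check-in preserves the old
# version rather than replacing it (simple library management).

class SCI:
    def __init__(self, content):
        self.state = "working"
        self.versions = [content]

    def submit_for_review(self):
        assert self.state == "working"
        self.state = "under_review"            # item is now frozen

    def review(self, approved):
        assert self.state == "under_review"
        self.state = "in_library" if approved else "working"

    def check_out(self):
        assert self.state == "in_library"      # access is controlled here
        self.state = "working"
        return self.versions[-1]

    def check_in(self, new_content):
        assert self.state == "working"
        self.versions.append(new_content)      # old version is preserved
        self.state = "under_review"            # each change is reviewed again

item = SCI("draft design doc")
item.submit_for_review()
item.review(approved=False)        # review fails: back to the working state
item.submit_for_review()
item.review(approved=True)         # now formally under SCM, in the library
copy = item.check_out()            # approved change: check out, modify,
item.check_in(copy + " + approved change")   # check back in,
item.review(approved=True)         # and review before it re-enters the library
```

The `assert` guards stand in for the access-control checks that a real CM tool would enforce on library items.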
CM Process

The CM process defines the set of activities that need to be performed to control change.

STAGES OF CM PROCESS

1. Planning: Planning for configuration management involves identifying the configuration items and specifying the procedures to be used for controlling and implementing changes to these configuration items.
a. Identifying configuration items is a fundamental activity in any type of CM.
b. Typical examples of configuration items include
i. requirements specifications,
ii. design documents,
iii. source code,
iv. test plans,
v. test scripts,
vi. test procedures,
vii. test data,
viii. standards used in the project (such as coding standards and design
standards),
ix. the acceptance plan,
x. documents such as the CM plan and the project plan,
xi. documentation such as
1. the user manual,
2. the training material,
3. contract documents (including support tools such as a compiler or in-
house tools),
4. quality records (review records, test records),
5. CM records (release records, status tracking records).
Then the process has to be executed, generally by using some tools.

 Any customer-supplied products or purchased items that will be part of the delivery (called "included software product") are also configuration items.
 As there are typically a lot of items in a project, how they are to be organized is also
decided in the planning phase.
 Typically, the directory structure that will be employed to store the different
elements is decided in the plan.


 To facilitate proper naming of configuration items, the naming conventions for CM items are decided during the CM planning stage. In addition to naming standards,
version numbering must be planned. When a configuration item is changed, the old
item is not replaced with the new copy; instead, the old copy is maintained and a
new one is created. This approach results in multiple versions of an item, so policies
for version number assignment are needed. If a CM tool is being used, then
sometimes the tool handles the version numbering. Otherwise, it has to be explicitly
done in the project.
c. The configuration controller or the project manager does the CM planning. It is
begun only when the project has been initiated and the operating environment and
requirements specifications are clearly documented.
d. The output of this phase is the CM plan.
e. The configuration controller (CC) is responsible for the implementation of the CM
plan. Depending on the size of the system under development, his or her role may
be a part-time or full-time job.
2. Then the process has to be executed.
a. In certain cases, where there are large teams or where two or more
teams/groups are involved in the development of the same or different
portions of the software or interfacing systems, it may be necessary to have a
configuration control board (CCB). This board includes representatives from
each of the teams. A CCB (or a CC) is considered essential for CM [89], and
the CM plan must clearly define the roles and responsibilities of the CC/CCB.
These duties will also depend on the type of file system and the nature of CM
tools being used.
3. Finally, since any CM plan requires some discipline from the project personnel in terms of storing items in proper locations and making changes properly, monitoring the status of the configuration items and performing CM audits are the other activities in the CM process.
a. For a CM process to work well, the people in the project have to use it as per
the CM plan and follow its policies and procedures.
b. However, people make mistakes. And if by mistake an SCI is misplaced, or
access control policies are violated, then the integrity of the product may be
lost.
c. To minimize mistakes and catch errors early, regular status checking of SCIs
may be done. A configuration audit may also be performed periodically to
ensure that the CM system integrity is not being violated.
d. The audit may also check that the changes to SCIs due to change requests
(discussed next) have been done properly and that the change requests have
been implemented. In addition to checking the status of the items, the status
of change requests (discussed below) must be checked.
e. To accomplish this goal, change requests that have been received since the
last CM status monitoring operation are examined. For each change request,
the state of the item as mentioned in the change request records is compared
with the actual state.
f. Checks may also be done to ensure that all modified items go through their
full life cycle (that is, the state diagram) before they are incorporated in the
baseline.
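The status-checking step above can be sketched as a comparison of each change request's recorded item state with the actual state of that item. The record layout below is an assumed, simplified one, not any real CM tool's format.

```python
# Sketch of a CM status audit: for each change request, compare the item
# state recorded in the change-request log with the actual state of the
# SCI, and report the requests that disagree. The dictionaries stand in
# for real CM records.

def audit(change_requests, actual_states):
    """Return the ids of change requests whose recorded item state
    disagrees with the actual state of the configuration item."""
    mismatches = []
    for cr in change_requests:
        if actual_states.get(cr["item"]) != cr["recorded_state"]:
            mismatches.append(cr["id"])
    return mismatches

requests = [
    {"id": "CR-1", "item": "login.c",  "recorded_state": "in_library"},
    {"id": "CR-2", "item": "report.c", "recorded_state": "in_library"},
]
# Suppose report.c was checked out but never checked back in:
actual = {"login.c": "in_library", "report.c": "working"}
problems = audit(requests, actual)   # flags CR-2 for follow-up
```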
Process Management, CMM & its levels

A software process is not a static entity—it has to change to improve so that the products
produced using the process are of higher quality and are less costly. Improving quality and
productivity are fundamental goals of engineering. To achieve these goals the software
process must continually be improved, as quality and productivity are determined to a great


extent by the process. Improving the quality and productivity of the process is the main
objective of the process management process.

-------------------------------------------------------------------------------------------------------
Q. What is the difference between project & process management?

DIFFERENCE BETWEEN PROJECT AND PROCESS MANAGEMENT:

 In process management the focus is on improving the process, which in turn improves the general quality and productivity of the products produced using the process.
 In project management the focus is on executing the current project and ensuring
that the objectives of the project are met.
 The time duration of interest for project management is typically the duration of
the project, while process management works on a much larger time scale as each
project is viewed as providing a data point for the process.
-------------------------------------------------------------------------------------------------------

Q. Explain the CMM framework? (March 12, 13, OCT 13) 5M.

Q. What is the significance of CMM? Discuss its levels. (March 14) 5M.

 Capability Maturity Model (CMM) framework

To improve its software process, an organization needs to first understand the status of its current process and then develop a plan to improve it. It is generally agreed
that changes to a process are best introduced in small increments and that it is not feasible
to totally revolutionize a process. The reason is that it takes time to internalize and truly
follow any new methods that may be introduced. And only when the new methods are
properly implemented will their effects be visible. Introducing too many new methods for
the software process will make the task of implementing the change very hard.

 If we agree that changes to a process must be introduced in small increments, the next question is: out of a large set of possible enhancements to a process,
a. In what order should the improvement activities be undertaken?
b. Or, what small change should be introduced first?

This depends on the current state of the process. For example, if the process is very
primitive there is no point in suggesting sophisticated metrics-based project control as an
improvement strategy; incorporating it in a primitive process is not easy.

On the other hand, if the process is already using many basic models, such a step
might be the right step to further improve the process. Hence, deciding what activities to
undertake for process improvement is a function of the current state of the process. Once
some process improvement takes place, the process state may change, and a new set of
possibilities may emerge. This concept of introducing changes in small increments based on
the current state of the process has been captured in the Capability Maturity Model (CMM)
framework.

The CMM framework provides a general roadmap for process improvement.


Software process capability describes the range of expected results that can be
achieved by following the process. The process capability of an organization determines
what can be expected from the organization in terms of quality and productivity.

The goal of process improvement is to improve the process capability.

A maturity level is a well-defined evolutionary plateau towards achieving a mature software process.

Based on the empirical evidence found by examining the processes of many organizations, the CMM suggests that there are five well-defined maturity levels for a software process:

1. Initial (level 1)
2. Repeatable (level 2)
3. Defined (level 3)
4. Managed (level 4)
5. Optimizing (level 5)

The CMM framework says that as process improvement is best incorporated in small
increments, processes go from their current levels to the next higher level when they are
improved. Hence, during the course of process improvement, a process moves from level
to level until it reaches level 5. This is shown in Figure 2.17.

The CMM provides characteristics of each level, which can be used to assess the current level of an organization's process. As movement is always from one level to the next higher level, the characteristics of the levels also suggest the areas in which the process should be improved so that it can move up.

For each level it specifies the areas in which improvement can be absorbed and will
bring the maximum benefits. Overall, this provides a roadmap for continually improving the
process.

The initial process (level 1) is essentially an ad hoc process that has no formalized
method for any activity. Basic project controls for ensuring that activities are being done


properly, and that the project plan is being adhered to, are missing. In a crisis, the project plans and development processes are abandoned in favor of a code-and-test type of
approach. Success in such organizations depends solely on the quality and capability of
individuals. The process capability is unpredictable as the process constantly changes.
Organizations at this level can benefit most by improving project management, quality
assurance, and change control.

In a repeatable process (level 2), policies for managing a software project and
procedures to implement those policies exist. That is, project management is well developed
in a process at this level. Some of the characteristics of a process at this level are:

 Project commitments are realistic and based on past experience with similar projects,
 Cost and schedule are tracked and problems resolved when they arise,
 Formal configuration control mechanisms are in place, and software project standards are defined and followed.

Essentially, results obtained by this process can be repeated as the project planning
and tracking is formal.

At the defined level (level 3) the organization has standardized a software process, which is properly documented. A software process group exists in the organization
that owns and manages the process. In the process each step is carefully defined with
verifiable entry and exit criteria, methodologies for performing the step, and verification
mechanisms for the output of the step. In this process both the development and
management processes are formal.

At the managed level (level 4) quantitative goals exist for process and products.
Data is collected from software processes, which is used to build models to characterize the
process. Hence, measurement plays an important role in a process at this level. Due to the
models built, the organization has a good insight of the process capability and its
deficiencies. The results of using such a process can be predicted in quantitative terms.

At the optimizing level (level 5), the focus of the organization is on continuous
process improvement. Data is collected and routinely analyzed to identify areas that can be
strengthened to improve quality or productivity. New technologies and tools are introduced
and their effects measured in an effort to improve the performance of the process. Best
software engineering and management practices are used throughout the organization. This
CMM framework can be used to improve the process. Improvement requires first assessing
the level of the current process. Based on the current level, the areas in which maximum
benefits can be derived are known from the framework.

For example, for improving a process at level 1 (or for going from level 1 to level 2),
project management and the change control activities must be made more formal. The
complete CMM framework provides more details about which particular areas need to be
strengthened to move up the maturity framework. This is generally done by specifying the
key process areas of each maturity level, which in turn, can be used to determine which
areas to strengthen to move up. Some of the key process areas of the different levels are
shown in Figure 2.18
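The roadmap idea can be sketched as a lookup from the current maturity level to the areas to strengthen next. The entries below only summarize the level descriptions given above; the full CMM defines many more key process areas per level.

```python
# Sketch of the CMM roadmap: given the current maturity level, report the
# areas whose improvement moves the process towards the next level. The
# entries summarize the text above; the real CMM key process area lists
# are considerably more detailed.

IMPROVEMENT_FOCUS = {
    1: ["project management", "quality assurance", "change control"],
    2: ["standardized, documented software process", "software process group"],
    3: ["quantitative goals", "process measurement and models"],
    4: ["continuous process improvement", "technology and tool evaluation"],
}

def next_focus(level):
    """Areas to strengthen to move from `level` to the next higher level."""
    if level >= 5:
        return []   # a level-5 process keeps optimizing continuously
    return IMPROVEMENT_FOCUS[level]
```

For instance, `next_focus(1)` reproduces the observation above that level-1 organizations benefit most from improving project management, quality assurance, and change control.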


Though the CMM framework specifies the process areas that should be improved to
increase the maturity of the process, it does not specify how to bring about the
improvement. That is, it is essentially a framework that does not suggest detailed
prescriptions for improvement, but guides the process improvement activity along the
maturity levels such that process improvement is introduced in increments and the
improvement activity at any time is clearly focused. Many organizations have successfully
used this framework to improve their processes. It is a major driving force for process
improvement. A detailed example of how an organization that follows the CMM executes its
project can be found in [96].

-------------------------------------------------------------------------------------------------------

Q. Define risk, and explain risk management?

Risk Management & activities

 Risk Management Concepts

Risk is defined as an exposure to the chance of injury or loss.

That is, risk implies that there is a possibility that something negative may happen.

In the context of software projects, negative implies that there is an adverse effect
on cost, quality, or schedule.

Risk management is the area that tries to ensure that the impact of risks on cost,
quality, and schedule is minimal. Risk management can be considered as dealing with the
possibility and actual occurrence of those events that are not "regular" or commonly
expected, that is, they are probabilistic.

The commonly expected events, such as people going on leave or some requirements changing, are handled by normal project management.


So, in a sense, risk management begins where normal project management ends.

It deals with events that are infrequent, somewhat out of the control of the project
management, and which can have a major impact on the project. Most projects have risk.
The idea of risk management is to minimize the possibility of risks materializing, if possible,
or to minimize the effects if risks actually materialize.

For example, when constructing a building, there is a risk that the building may later
collapse due to an earthquake. That is, the possibility of an earthquake is a risk. If the
building is a large residential complex, then the potential cost in case the earthquake risk
materializes can be enormous. This risk can be reduced by shifting to a zone that is not
earthquake prone. Alternatively, if this is not acceptable, then the effects of this risk
materializing are minimized by suitably constructing the building (the approach taken in
Japan and California). At the same time, if a small dumping ground is to be constructed,
no such approach might be followed, as the financial and other impact of an actual
earthquake on such a building is so low that it does not warrant special measures.

It should be clear that risk management has to deal with identifying the undesirable
events that can occur, the probability of their occurring, and the loss if an undesirable event
does occur. Once this is known, strategies can be formulated for either reducing the
probability of the risk materializing or reducing the effect if the risk does materialize. So risk management revolves around risk assessment and risk control. For each of these major activities, some subactivities must be performed. A breakdown of these activities is given in Figure 5.4.

-------------------------------------------------------------------------------------------------------

Q. Write a short note on Risk Assessment? (March 13, 14) 5M.

 Risk Assessment

Risk assessment is an activity that must be undertaken during project planning. This
involves identifying the risks, analyzing them, and prioritizing them on the basis of the
analysis. Due to the nature of a software project, uncertainties are highest near the
beginning of the project (just as for cost estimation). Due to this, although risk assessment
should be done throughout the project, it is most needed in the starting phases of the
project. The goal of risk assessment is to prioritize the risks so that attention and resources
can be focused on the more risky items.


Risk identification is the first step in risk assessment, which identifies all the different
risks for a particular project. These risks are project-dependent and identifying them is an
exercise in envisioning what can go wrong.

Methods that can aid risk identification include

 Checklists of possible risks:
a. Checklists of frequently occurring risks are probably the most common tool for risk identification.
b. Most organizations prepare a list of commonly occurring risks for projects, prepared from a survey of previous projects. Such a list can form the starting point for identifying risks for the current project.
 Surveys: Based on surveys of experienced project managers, Boehm [19] has produced a list of the top 10 risk items likely to compromise the success of a software project. Though risks in a project are specific to the project, this list forms a good starting point for identifying such risks. Figure 5.5 shows these top 10 items along with the techniques preferred by management for managing these risks.


The top-ranked risk item is personnel shortfalls. This involves just having fewer
people than necessary or not having people with specific skills that a project might
require. Some of the ways to manage this risk are to get the top talent possible and to match the needs of the project with the skills of the available personnel.
Adequate training, along with having some key personnel for critical areas of the
project, will also reduce this risk
i. Unrealistic schedules and budgets: this happens very frequently due to business and other reasons. It is very common for high-level management to impose a schedule for a software project that is not based on the characteristics of the project and is unrealistic. Underestimation may also happen due to inexperience or optimism.


ii. The next few items are related to requirements. Projects run the risk of
developing the wrong software if the requirements analysis is not done properly
and if development begins too early.
iii. Often an improper user interface may be developed. This requires extensive rework of the user interface later, or the software benefits are not obtained because users are reluctant to use it.
iv. Gold plating refers to adding features in the software that are only marginally
useful. This adds unnecessary risk to the project because gold plating consumes
resources and time with little return.
v. Some requirement changes are to be expected in any project, but sometimes
frequent changes are requested, which is often a reflection of the fact that the
client has not yet understood or settled on its own requirements. The effect of
requirement changes is substantial in terms of cost, especially if the changes
occur when the project has progressed to later phases.
vi. Performance shortfalls are critical in real-time systems and poor performance can
mean the failure of the project. If a project depends on externally available
components—either to be provided by the client or to be procured as an off-the-
shelf component—the project runs some risks.
vii. The project might be delayed if the external component is not available on time.
The project would also suffer if the quality of the external component is poor or if
the component turns out to be incompatible with the other project components or
with the environment in which the software is developed or is to operate.
viii. If a project relies on technology that is not well developed, it may fail. This is a
risk due to straining the computer science capabilities.
 Meetings and brainstorming, and
 Reviews of plans, processes, and work products.

Using the checklist of the top 10 risk items is one way to identify risks. This approach is
likely to suffice in many projects. The other methods are decision driver analysis,
assumption analysis, and decomposition. Decision driver analysis involves questioning and
analyzing all the major decisions taken for the project. If a decision has been driven by
factors other than technical and management reasons, it is likely to be a source of risk in
the project. Such decisions may be driven by politics, marketing, or the desire for short-
term gain. Optimistic assumptions made about the project also are a source of risk. Some
such optimistic assumptions are that nothing will go wrong in the project, no personnel will
quit during the project, people will put in extra hours if required, and all external
components (hardware or software) will be delivered on time. Identifying such assumptions
will point out the source of risks.

An effective method for identifying these hidden assumptions is comparing them with past
experience. Decomposition implies breaking a large project into clearly defined parts and
then analyzing them. Many software systems have the phenomenon that 20% of the
modules cause 80% of the project problems. Decomposition will help identify these
modules. Risk identification merely identifies the undesirable events that might take place
during the project, i.e., enumerates the "unforeseen" events that might occur. It does not
specify the probabilities of these risks materializing nor the impact on the project if the risks
indeed materialize.

Hence the next tasks are risk analysis and prioritization.


In risk analysis, the probability of occurrence of a risk has to be estimated, along with the
loss that will occur if the risk does materialize. This is often done through discussion, using
experience and understanding of the situation. However, if cost models are used for cost
and schedule estimation, then the same models can be used to assess the cost and
schedule risk.

For example, in the COCOMO cost model, the cost estimate depends on the ratings of the different cost drivers. One possible source of cost risk is underestimating these cost drivers; the other is underestimating the size. Risk analysis can be done by estimating the worst-case value of the size and of all the cost drivers and then estimating the project cost from these values. This will give us the worst-case analysis. Using the worst-case effort estimate, the worst-case schedule can easily be obtained. A more detailed analysis can be done by considering different cases or a distribution of these drivers.

The other approaches for risk analysis include studying the probability and the outcome of possible decisions (decision analysis), understanding the task dependencies to decide critical activities and the probability and cost of their not being completed on time (network analysis), analyzing risks on the various quality factors like reliability and usability (quality factor analysis), and evaluating the performance early through simulation, etc., if there are strong performance constraints
on the system (performance analysis). The reader is referred to the literature for further
discussion of these topics. Once the probabilities of risks materializing and the losses due
to the materialization of different risks have been analyzed, they can be prioritized. One
approach to prioritization is through the concept of risk exposure (RE), which is sometimes
called risk impact. RE is defined by the relationship

RE = Prob(UO) × Loss(UO),

where Prob(UO) is the probability of the risk materializing (i.e., of the undesirable outcome)
and Loss(UO) is the total loss incurred due to the unsatisfactory outcome. The loss is not
only the direct financial loss that might be incurred but also any loss in terms of credibility,
future business, and loss of property or life. The RE is the expected value of the loss due to
a particular risk. For risk prioritization using RE, the higher the RE, the higher the priority
of the risk item.
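The RE relationship can be sketched as a small prioritization routine. This is only an illustrative sketch; the risk items, probabilities, and loss figures below are invented, not taken from the text:

```python
# Risk exposure: RE = Prob(UO) * Loss(UO), the expected loss due to a risk.
# The risk items and numbers below are hypothetical examples.

def risk_exposure(probability, loss):
    """Expected loss: probability of the undesirable outcome (UO)
    times the total loss incurred if it materializes."""
    return probability * loss

risks = [
    ("key developer quits",               0.30, 200_000),
    ("external component delivered late", 0.50,  50_000),
    ("size underestimated by 30%",        0.25, 300_000),
]

# The higher the RE, the higher the priority of the risk item.
ranked = sorted(risks, key=lambda r: risk_exposure(r[1], r[2]), reverse=True)
for name, prob, loss in ranked:
    print(f"{name}: RE = {risk_exposure(prob, loss):,.0f}")
```

Risk items whose RE falls below a locally chosen threshold can then be dropped from active tracking, leaving the project manager free to focus on the top few risks.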

It is not always possible to use models and prototypes to assess the probabilities of
occurrence and of loss associated with particular events. Due to the non-availability of
models, assessing risk probabilities is frequently subjective. A subjective assessment can be
done by the estimate of one person or by using a group consensus technique like the Delphi
approach. In the Delphi method, a group of people discusses the problem of estimation and
finally converges on a consensus estimate.

-------------------------------------------------------------------------------------------------------

Q. Write a short note on Risk Control?

 Risk Control

The main objective of risk management is to identify the top few risk items and then
focus on them. Once a project manager has identified and prioritized the risks, the top risks
can be easily identified. The question then becomes what to do about them. Knowing the
risks is of value only if you can prepare a plan so that their consequences are minimal; that
is the basic goal of risk management. One obvious strategy is risk avoidance, which entails
taking actions that will avoid the risk altogether, like the earlier example of shifting the
building site to a zone that is not earthquake-prone. For some risks, avoidance might be
possible. For most risks, the strategy is to perform actions that will either reduce the
probability of the risk materializing or reduce the loss due to the risk materializing. These
are called risk mitigation steps.

To decide which mitigation steps to take, a list of commonly used risk mitigation steps for
various risks is very useful. Generally, the compiled table of commonly occurring risks also
contains a compilation of the methods used for mitigation in the projects in which those
risks appeared. Note that unlike risk assessment, which is largely an analytical exercise,
risk mitigation comprises active measures that have to be performed to minimize the impact
of risks. In other words, selecting a risk mitigation step is not just an intellectual exercise;
the step must be executed (and monitored). To ensure that the needed actions are executed
properly, they must be incorporated into the detailed project schedule.

Risk prioritization and consequent planning are based on the risk perception at the time
the risk analysis is performed. Because risks are probabilistic events that frequently depend
on external factors, the threat due to risks may change with time as those factors change.
Clearly, then, the risk perception may also change with time. Furthermore, the risk
mitigation steps undertaken may themselves affect the risk perception. This dynamism
implies that risks in a project should not be treated as static; they must be monitored and
reevaluated periodically. Hence, in addition to monitoring the progress of the planned risk
mitigation steps, a project must periodically revisit the risk perception and modify the risk
mitigation plans if needed. Risk monitoring is the activity of monitoring the status of the
various risks and their control activities. One simple approach to risk monitoring is to
analyze the risks afresh at each major milestone and change the plans as needed.
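The reassessment loop described above can be sketched as follows. The risk names, probabilities, and milestone logic are all invented for illustration:

```python
# Sketch of risk monitoring: risk exposure (RE) is recomputed at each
# major milestone, since mitigation steps and external factors change
# the risk perception over time. All figures here are hypothetical.

def reprioritize(risks):
    """Order risk items by current risk exposure, highest first."""
    return sorted(risks, key=lambda r: r["prob"] * r["loss"], reverse=True)

risks = [
    {"name": "schedule slip",   "prob": 0.40, "loss": 100_000},
    {"name": "staff attrition", "prob": 0.30, "loss": 150_000},
]

# Initial analysis: attrition dominates (RE 45,000 vs. 40,000).
print([r["name"] for r in reprioritize(risks)])

# After a mitigation step (say, a retention bonus) the probability is
# reassessed at the next milestone, and the priority order changes.
risks[1]["prob"] = 0.10          # attrition RE drops to 15,000
print([r["name"] for r in reprioritize(risks)])
```

The point of the sketch is only that the ranking is recomputed from fresh probabilities at each milestone rather than fixed at planning time.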

Quick Revision
_________________________________________________________________________

 Configuration management (CM) or software configuration management (SCM) is the
discipline for systematically controlling the changes that take place during development.
 Software configuration management is a process independent of the development
process largely because most development models look at the overall picture and not on
changes to individual files.
 The main purpose of CM is to provide various mechanisms that can support the
functionality needed by a project to handle the types of scenarios discussed above
that arise due to changes.
 Planning for configuration management involves identifying the configuration items and
specifying the procedures to be used for controlling and implementing changes to these
configuration items.
 In a repeatable process (level 2), policies for managing a software project and
procedures to implement those policies exist. That is, project management is well
developed in a process at this level.
 Risk management is the area that tries to ensure that the impact of risks on cost,
quality, and schedule is minimal. Risk management can be considered as dealing with
the possibility and actual occurrence of those events that are not "regular" or
commonly expected, that is, they are probabilistic.
 Risk assessment is an activity that must be undertaken during project planning. This
involves identifying the risks, analyzing them, and prioritizing them on the basis of the
analysis.


CHAPTER 5: MANAGEMENT OF OO SOFTWARE

Topic Covered:

 Projects - Object oriented metrics, Use-Case Estimation


 Selecting development tools, Introduction to CASE

Q. Explain Use-Case-Oriented Metrics? (March 13) 5M.


 Use-Case-Oriented Metrics

Use cases are used widely as a method for describing customer-level or business
domain requirements that imply software features and functions. It would seem reasonable
to use the use case as a normalization measure similar to LOC or FP. Like FP, the use case
is defined early in the software process, allowing it to be used for estimation before
significant modeling and construction activities are initiated. Use cases describe (indirectly,
at least) user-visible functions and features that are basic requirements for a system. The
use case is independent of programming language. In addition, the number of use cases is
directly proportional to the size of the application in LOC and to the number of test cases
that will have to be designed to fully exercise the application. Because use cases can be
created at vastly different levels of abstraction, there is no standard "size" for a use case.
Without a standard measure of what a use case is, its application as a normalization
measure (e.g., effort expended per use case) is suspect.

Researchers have suggested use-case points (UCP) as a mechanism for estimating
project effort and other characteristics. The UCP is a function of the number of actors and
transactions implied by the use-case models and is analogous to the FP in some ways.

 Use cases are described using many different formats and styles—there is no
standard form.
 Use cases represent an external view (the user’s view) of the software and
can therefore be written at many different levels of abstraction.
 Use cases do not address the complexity of the functions and features that
are described.
 Use cases can describe complex behavior (e.g., interactions) that involves
many functions and features.

Unlike an LOC or a function point, one person's "use case" may require months of
effort while another person's use case may be implemented in a day or two.

Although a number of investigators have considered use cases as an estimation
input, no proven estimation method has emerged to date. Smith [Smi99] suggests that
use cases can be used for estimation, but only if they are considered within the context of
the "structural hierarchy" that they are used to describe.

Smith argues that any level of this structural hierarchy can be described by no more
than 10 use cases. Each of these use cases would encompass no more than 30 distinct
scenarios. Obviously, use cases that describe a large system are written at a much higher
level of abstraction (and represent considerably more development effort) than use cases
that are written to describe a single subsystem. Therefore, before use cases can be used for
estimation, the level within the structural hierarchy is established, the average length (in
pages) of each use case is determined, the type of software (e.g., real-time, business,
engineering/scientific, WebApp, embedded) is defined, and a rough architecture for the
system is considered. Once these characteristics are established, empirical data may be
used to establish the estimated number of LOC or FP per use case (for each level of the
hierarchy). Historical data are then used to compute the effort required to develop the
system.
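The use-case-point idea mentioned above can be sketched as follows, in the spirit of Karner's original UCP method: actors and use cases are weighted by complexity, and the raw sum is scaled by technical (TCF) and environmental (ECF) factors. The complexity weights are the commonly published ones; the project figures and factor values below are invented:

```python
# Use-case points (UCP) sketch: UCP = (UAW + UUCW) * TCF * ECF.
# Complexity weights follow the commonly published values from
# Karner's method; the example inputs are hypothetical.

ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

def use_case_points(actors, use_cases, tcf=1.0, ecf=1.0):
    uaw = sum(ACTOR_WEIGHTS[a] for a in actors)          # unadjusted actor weight
    uucw = sum(USE_CASE_WEIGHTS[u] for u in use_cases)   # unadjusted use-case weight
    return (uaw + uucw) * tcf * ecf

ucp = use_case_points(actors=["simple", "complex"],
                      use_cases=["average", "average", "complex"],
                      tcf=0.95, ecf=1.05)
print(round(ucp, 2))
```

Effort is then estimated by multiplying the UCP count by a historically calibrated productivity factor, often quoted in the range of 20 person-hours per UCP.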

-------------------------------------------------------------------------------------------------------
Q. Write a short note on Use-Case Estimation. (Oct 13) 5M.

Q. How to estimate LOC with the help of Use Cases? Explain with an example?

 Estimation with Use Cases

As I have noted throughout Part 2 of this book, use cases provide a software team
with insight into software scope and requirements. However, developing an estimation
approach with use cases is problematic for the following reasons:

 Use cases are described using many different formats and styles—there is no
standard form.
 Use cases represent an external view (the user’s view) of the software and
can therefore be written at many different levels of abstraction.
 Use cases do not address the complexity of the functions and features that
are described.
 Use cases can describe complex behavior (e.g., interactions) that involves
many functions and features.

Unlike an LOC or a function point, one person's "use case" may require months of
effort while another person's use case may be implemented in a day or two.

Although a number of investigators have considered use cases as an estimation
input, no proven estimation method has emerged to date. Smith [Smi99] suggests that
use cases can be used for estimation, but only if they are considered within the context of
the "structural hierarchy" that they are used to describe.

Smith argues that any level of this structural hierarchy can be described by no more
than 10 use cases. Each of these use cases would encompass no more than 30 distinct
scenarios. Obviously, use cases that describe a large system are written at a much higher
level of abstraction (and represent considerably more development effort) than use cases
that are written to describe a single subsystem. Therefore, before use cases can be used for
estimation, the level within the structural hierarchy is established, the average length (in
pages) of each use case is determined, the type of software (e.g., real-time, business,
engineering/scientific, WebApp, embedded) is defined, and a rough architecture for the
system is considered. Once these characteristics are established, empirical data may be
used to establish the estimated number of LOC or FP per use case (for each level of the
hierarchy). Historical data are then used to compute the effort required to develop the
system.

To illustrate how this computation might be made, consider the following


LOC estimate = N × LOCavg + [(Sa/Sh − 1) + (Pa/Ph − 1)] × LOCadjust    (26.2)

where

N = actual number of use cases

LOCavg = historical average LOC per use case for this type of subsystem

LOCadjust = an adjustment based on n percent of LOCavg, where n is defined locally and
represents the difference between this project and "average" projects

Sa = actual scenarios per use case

Sh = average scenarios per use case for this type of subsystem

Pa = actual pages per use case

Ph = average pages per use case for this type of subsystem

Expression (26.2) could be used to develop a rough estimate of the number of LOC based
on the actual number of use cases adjusted by the number of scenarios and the page length
of the use cases. The adjustment represents up to n percent of the historical average LOC
per use case.
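Expression (26.2) can be written directly as code. The sketch below plugs in the user interface subsystem figures quoted in the CAD example that follows (6 use cases with 10 scenarios and six pages each, against historical averages of 12 scenarios, five pages, and 800 LOC per use case, with n = 30 percent); the function name is ours:

```python
# Expression (26.2):
#   LOC estimate = N * LOCavg + [(Sa/Sh - 1) + (Pa/Ph - 1)] * LOCadjust
# where LOCadjust is n percent of LOCavg.

def loc_estimate(n_use_cases, loc_avg, sa, sh, pa, ph, n_percent):
    loc_adjust = (n_percent / 100.0) * loc_avg
    correction = (sa / sh - 1) + (pa / ph - 1)
    return n_use_cases * loc_avg + correction * loc_adjust

# User interface subsystem from the CAD example: 6 use cases with 10
# scenarios and 6 pages each; historical averages of 800 LOC per use
# case, 12 scenarios, and 5 pages; n = 30 percent.
ui_loc = loc_estimate(n_use_cases=6, loc_avg=800,
                      sa=10, sh=12, pa=6, ph=5, n_percent=30)
print(round(ui_loc))
```

Note how the correction term nearly cancels here: fewer scenarios than the historical average pulls the estimate down, while longer use-case descriptions push it back up.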

 An Example of Use-Case-Based Estimation

The CAD software introduced in Section 26.6.3 is composed of three subsystem


groups: user interface subsystem (includes UICF), engineering subsystem group (includes
the 2DGA, 3DGA, and DAM subsystems), and infrastructure subsystem group (includes
CGDF and PCF subsystems). Six use cases describe the user interface subsystem. Each use
case is described by no more than 10 scenarios and has an average length of six pages. The
engineering subsystem group is described by 10 use cases (these are considered to be at a
higher level of the structural hierarchy). Each of these use cases has no more than 20
scenarios associated with it and has an average length of eight pages. Finally, the
infrastructure subsystem group is described by five use cases with an average of only six
scenarios and an average length of five pages.

Using the relationship noted in Expression (26.2) with n = 30 percent, the table
shown in Figure 26.5 is developed. Considering the first row of the table, historical data
indicate that UI software requires an average of 800 LOC per use case when the use case
has no more than 12 scenarios and is described in less than five pages. These data conform
reasonably well for the CAD system. Hence the LOC estimate for the user interface
subsystem is computed using Expression (26.2). Using the same approach, estimates are
made for both the engineering and infrastructure subsystem groups. Figure 26.5 summarizes
the estimates and indicates that the overall size of the CAD system is estimated at 42,500 LOC.

Using 620 LOC/pm as the average productivity for systems of this type and a
burdened labor rate of $8000 per month, the cost per line of code is approximately $13.
Based on the use-case estimate and the historical productivity data, the total estimated
project cost is $552,000 and the estimated effort is 68 person-months.
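These figures follow from simple arithmetic, which can be checked as below; the small differences from the quoted totals come from rounding the cost per LOC up to $13 before multiplying:

```python
# Back-of-the-envelope check of the CAD cost/effort figures:
# 42,500 estimated LOC, productivity of 620 LOC per person-month,
# burdened labor rate of $8,000 per person-month.

size_loc = 42_500
productivity = 620        # LOC per person-month
labor_rate = 8_000        # dollars per person-month

cost_per_loc = labor_rate / productivity      # ~ $12.90, quoted as ~$13
effort_pm = size_loc / productivity           # ~ 68.5, quoted as 68 pm
total_cost = size_loc * round(cost_per_loc)   # 42,500 * 13 = $552,500

print(round(cost_per_loc), int(effort_pm), total_cost)
```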

-------------------------------------------------------------------------------------------------------

Q. Write a note on Tracking Progress for an OO Project?

 Tracking Progress for an OO Project

Although an iterative model is the best framework for an OO project, task parallelism
makes project tracking difficult. You may have difficulty establishing meaningful milestones
for an OO project because a number of different things are happening at once. In general,
the following major milestones can be considered "completed" when the criteria noted have
been met.

Technical milestone: OO analysis completed

 All classes and the class hierarchy have been defined and reviewed.
 Class attributes and operations associated with a class have been defined and
reviewed.
 Class relationships (Chapter 6) have been established and reviewed.
 A behavioral model (Chapter 7) has been created and reviewed.
 Reusable classes have been noted.

Technical milestone: OO design completed

 The set of subsystems has been defined and reviewed.


 Classes are allocated to subsystems and reviewed.
 Task allocation has been established and reviewed.
 Responsibilities and collaborations have been identified.
 Attributes and operations have been designed and reviewed.
 The communication model has been created and reviewed.
Technical milestone: OO programming completed

 Each new class has been implemented in code from the design model.
 Extracted classes (from a reuse library) have been implemented.
 Prototype or increment has been built.

Technical milestone: OO testing completed

 The correctness and completeness of the OO analysis and design models has been
reviewed.
 A class-responsibility-collaboration network (Chapter 6) has been developed and
reviewed.
 Test cases are designed, and class-level tests (Chapter 19) have been conducted for
each class.
 Test cases are designed, and cluster testing (Chapter 19) is completed and the classes
are integrated.
 System-level tests have been completed.


Recalling that the OO process model is iterative, each of these milestones may be revisited
as different increments are delivered to the customer.

-------------------------------------------------------------------------------------------------------
Q. Explain CASE tools?

 CASE Tools: CASE tools are a class of software that automate many of the activities
involved in various life cycle phases. For example, when establishing the functional
requirements of a proposed application, prototyping tools can be used to develop
graphic models of application screens to assist end users to visualize how an application
will look after development. Subsequently, system designers can use automated design
tools to transform the prototyped functional requirements into detailed design
documents. Programmers can then use automated code generators to convert the
design documents into code. Automated tools can be used collectively, as mentioned, or
individually. For example, prototyping tools could be used to define application
requirements that get passed to design technicians who convert the requirements
into detailed designs in a traditional manner using flowcharts and narrative documents,
without the assistance of automated design software.

Existing CASE tools can be classified along 4 different dimensions:

1. Life-cycle support
2. Integration dimension
3. Construction dimension
4. Knowledge-based CASE dimension

Let us take the meaning of these dimensions along with their examples one by one:

 Life-Cycle Based CASE Tools

This dimension classifies CASE Tools on the basis of the activities they support in the
information systems life cycle. They can be classified as Upper or Lower CASE tools.

 Upper CASE Tools support strategic planning and construction of concept-level
products and ignore the design aspect. They support traditional diagrammatic
languages such as ER diagrams, data flow diagrams, structure charts, decision
trees, decision tables, etc.
 Lower CASE Tools concentrate on the back end activities of the software life
cycle, such as physical design, debugging, construction, testing, component
integration, maintenance, reengineering and reverse engineering.

 Integration dimension

Three main CASE Integration dimensions have been proposed:

1. CASE Framework
2. ICASE Tools
3. Integrated Project Support Environment (IPSE)


 Workbenches

Workbenches integrate several CASE tools into one application to support specific software-
process activities. Hence they achieve:

 a homogeneous and consistent interface (presentation integration).
 easy invocation of tools and tool chains (control integration).
 access to a common data set managed in a centralized way (data integration).

CASE workbenches can be further classified into following 8 classes:

1. Business planning and modeling
2. Analysis and design
3. User-interface development
4. Programming
5. Verification and validation
6. Maintenance and reverse engineering
7. Configuration management
8. Project management
Quick Revision

 Use cases are used widely as a method for describing customer-level or business domain
requirements that imply software features and functions.
 Use cases are described using many different formats and styles—there is no standard
form.
 Use cases represent an external view (the user’s view) of the software and can therefore
be written at many different levels of abstraction.
 Use cases do not address the complexity of the functions and features that are
described.
 Use cases can describe complex behavior (e.g., interactions) that involves many
functions and features.
 CASE tools are a class of software that automate many of the activities involved in
various life cycle phases. For example, when establishing the functional requirements of
a proposed application, prototyping tools can be used to develop graphic models of
application screens to assist end users to visualize how an application will look after
development.

IMP Question Set

SR No.  Question                                                                      Page No.
1       Q. Explain Use-Case-Oriented Metrics?                                         52
2       Q. How to estimate LOC with the help of Use Cases? Explain with an example?   52
3       Q. Write a note on Tracking Progress for an OO Project?                       54
4       Q. Explain CASE tools?                                                        55


CHAPTER 6: CHANGING TRENDS IN SOFTWARE DEVELOPMENT

Topic Covered:

 Unified Process, Its phases & disciplines, Agile Development – Principles & Practices,
 Extreme programming- Core values & Practices , Frameworks, Components,
Services, Introduction to Design Patterns.

Q. Explain the Unified Process (with its phases)? (March 14) 5M.


 THE UNIFIED PROCESS
 The Unified Process (UP) is an object-oriented system development methodology
originally offered by Rational Software, which is now part of IBM. Developed by Grady
Booch, James Rumbaugh, and Ivar Jacobson—the three pioneers who are also behind
the success of the Unified Modeling Language (UML)— the UP is their attempt to
define a complete methodology that uses UML for system models and describes a
new, adaptive system development life cycle.
 In the UP, the term development process is synonymous with development
methodology.
 The UP is now widely recognized as a standard system development methodology for
object-oriented development, and many variations are in use.
 The original version of UP defined an elaborate set of activities and deliverables for
every step of the development process. More recent versions are streamlined, with
fewer activities and deliverables, simplifying the methodology.
 Adaptive methodologies—including the UP—are all based on an iterative approach to
development. Each iteration is like a miniproject, in which requirements are defined
based on analysis tasks, system components are designed, and those components are
then implemented, at least partially, through programming and testing.
 One of the big questions in adaptive development, however, is what the focus of each
iteration should be. In other words, do iterations early in the project have the same
objectives and focus as those done later?
 The UP answers this question by dividing a project into four major phases.
-------------------------------------------------------------------------------------------------------

Q. What are the phases of the Unified Process? (March 12) 5M.

 UP Phases:

A phase in the UP can be thought of as a goal, or major emphasis for a particular


portion of the project.

The four phases of the UP life cycle are named

1. Inception: The inception phase is essentially a project planning phase. In it, the
project manager develops and refines a vision for the new system to
show how it will improve operations and solve existing problems. The project
manager makes the business case for the new system, meaning that the benefits
of the new system must outweigh the cost of development. The scope of the
system must also be defined so that it is clear what the project will accomplish.
Defining the scope includes identifying many of the key requirements for the
system. The inception phase is usually completed in one iteration, and as with any
iteration, parts of the actual system might be designed, implemented, and tested.
As software is developed, team members must confirm that the system vision still
matches user expectations or that the technology will work as planned.
Sometimes prototypes are discarded after proving that point.


2. Elaboration: The elaboration phase usually involves several
iterations, and early iterations typically complete the identification and definition
of all of the system requirements. Because the UP is an adaptive approach to
development, the requirements are expected to evolve and change after work
starts on the project. Elaboration phase iterations also complete the analysis,
design, and implementation of the core architecture of the system. Usually the
aspects of the system that pose the greatest risk are identified and implemented
first. Until developers know exactly how the highest-risk aspects of the project will
work out, they cannot determine the amount of effort required to complete the
project. By the end of the elaboration phase, the project manager should have
more realistic estimates for a project’s cost and schedule, and the business case
for the project can be confirmed. Remember that the design, implementation, and
testing of key parts of the system are completed during the elaboration phase.
The elaboration phase is not at all the same as the traditional SDLC’s analysis
phase.
3. Construction: The construction phase involves several
iterations that continue the design and implementation of the system. The core
architecture and highest-risk aspects of the system are already complete. Now the
focus of the work turns to the routine and predictable parts of the system, for
example, detailing the system controls such as data validation, fine-tuning the
user interface’s design, finishing routine data maintenance functions, and
completing the help and user preference functions. The team also begins to plan
for deployment of the system.
4. Transition: During the transition phase, one or more final
iterations involve the final user-acceptance and beta tests, and the system is
made ready for operation. After the system is in operation, it will need to be
supported and maintained.

Each phase of the UP life cycle describes the emphasis or objectives of the project team
members and their activities at that point in time. The four phases provide a general
framework for planning and tracking the project over time. Within each phase, several
iterations are planned to allow the team flexibility to adjust to problems or changing
conditions. The emphases or objectives of the project team in each of the four phases are
described briefly in Figure 16-2.

-------------------------------------------------------------------------------------------------------

Q. Write a short note on UP disciplines?


 UP Disciplines

As we mentioned earlier, the four UP phases define the project sequentially by


indicating the emphasis of the project team at any point in time. To make iterative
development manageable, the UP defines disciplines to use within each iteration. A
discipline is a set of functionally related activities that together contribute to one aspect
of the development project. UP disciplines include business modeling, requirements, design,
implementation, testing, deployment, configuration and change management, project
management, and environment. Each iteration usually involves activities from all
disciplines.

Figure 16-3 shows how the UP disciplines are involved in each iteration, which is
typically planned to span four weeks.

The size of the shaded area under the curve for each discipline indicates the relative
amount of work included in each discipline during the iteration. The amount and nature of
the work differs from iteration to iteration. For example, in iteration 2 much of the effort
focuses on business modeling and defining requirements, with much less effort focused on
implementation and deployment. In iteration 5, very little effort is focused on modeling and
requirements, with much more effort focused on implementation, testing, and deployment.
But most iterations involve some work in all disciplines.

Figure 16-4 shows the entire UP life cycle—phases, iterations, and disciplines. This
figure includes all of the key UP life cycle features and is useful for understanding how a
typical information system development project is managed.

The previous figures illustrate how the phases include activities from each discipline.

But what about the detailed activities that occur within each discipline?

The disciplines can be divided into two main categories:

 System development activities
 Project management activities

The six main UP development disciplines are as follows:

1. Business modeling
2. Requirements
3. Design
4. Implementation
5. Testing
6. Deployment
 Each iteration is similar to a mini project, which completes a portion of the system.
 For each iteration, the project team must understand the business environment
(business modeling),
 Define the requirements that the portion of the system must satisfy
(requirements),
 Design a solution for that portion of the system that satisfies the requirements
(design),
 Write and integrate the computer code that makes the portion of the system
actually work (implementation),
 Thoroughly test the portion of the system (testing),
 Then in some cases put the portion of the system that is completed and tested into
operation for users (deployment).
Three additional support disciplines are necessary for planning and controlling the project:

 Configuration and change management


 Project management
 Environment


All nine UP disciplines are employed throughout the lifetime of a project but to
different degrees.

For example, in the inception phase there is one iteration. During the inception
phase iteration, the project manager might complete a model showing some aspect of the
system environment (the business modeling discipline). The scope of the system is
delineated by defining many of the key requirements of the system and listing use cases
(the requirements discipline).

To prove technological feasibility, some technical aspect of the system might be


designed (the design discipline), programmed (the implementation discipline), and tested to
make sure it will work as planned (the testing discipline). In addition, the project manager
is making plans for handling changes to the project (the configuration and change
management discipline), working on a schedule and cost/benefit analysis (the project
management discipline), and tailoring the UP phases, iterations, deliverables, and tools to
match the needs of the project (the environment discipline).

The elaboration phase includes several iterations. In the first iteration, the team
works on the details of the domain classes and use cases addressed in the iteration (the
business modeling and requirements disciplines). At the same time, they might complete
the description of all use cases to finalize the scope (the requirements discipline). The use
cases addressed in the iteration are designed by creating design class diagrams and
interaction diagrams (the design discipline), programmed using Java or Visual Basic .NET
(the implementation discipline), and fully tested (the testing discipline). The project
manager works on the plan for the next iteration and continues to refine the schedule and
feasibility assessments (the project management discipline), and all team members
continue to receive training on the UP activities they are completing and the system
development tools they are using (the environment discipline).

By the time the project progresses to the construction phase, most of the use cases
have been designed and implemented in their initial form. The focus of the project turns to
satisfying other technical, performance, and reliability requirements for each use case,
finalizing the design, and implementation. These requirements are usually routine and lower
risk, but they are key to the success of the system. The effort focuses on designing system
controls and security and on implementing and testing these aspects.

The Unified Process as a system development methodology must be tailored to the


development team and specific project. Choices must be made about which deliverables to
produce and the level of formality to be used. Sometimes a project requires formal
reporting and controls. Other times, it can be less formal. The UP should always be tailored
to the project.

-------------------------------------------------------------------------------------------------------
Q. Write a short note on agile development?(March 12,13,14)5M.

Agile Development:

The highly volatile marketplace has forced businesses to respond rapidly to new
opportunities. Sometimes new opportunities appear in the middle of implementing another
business initiative. To survive, businesses must be agile. Agility—being able to change


directions rapidly, even in the middle of a project—is the keystone of Agile Development.
Agile Development is a philosophy and set of guidelines for developing software in an
unknown, rapidly changing environment. It provides an overarching philosophy for specific
development approaches such as the Unified Process. The amount of agility in each
approach can vary. For example, we identified the UP as being somewhat adaptive. Some
UP projects may adopt many agile philosophies, and others may use fewer.

Related to Agile Development, Agile Modeling is a philosophy about how to build


models, some of which are formal and detailed and others sketchy and minimal. Figure 16-5
illustrates the relationships among an Agile Development philosophy, specific adaptive
approaches, and use of Agile Modeling.

Agile Development Philosophy and Values

The "Manifesto for Agile Software Development" identifies four basic values, which
represent the core philosophy of the agile movement. The four values emphasize

 Responding to change over following a plan


 Individuals and interactions over processes and tools
 Working software over comprehensive documentation
 Customer collaboration over contract negotiation

Note that each of the phrases in the list prioritizes the value on the left over the
value on the right. The people involved in system development, whether as team
members, users, or other stakeholders, all need to accept these priorities for a project to
be truly agile. Adopting an agile approach is not always easy.

Some industry leaders in the agile movement coined the term chaordic to describe
an agile project. Chaordic comes from two words, chaos and order. The first two values in
our list—responding to change over following a plan and individuals and interactions over
processes and tools—do seem to be a recipe for chaos. But they recognize that software
projects inherently have many unknowns and unpredictable elements, and hence a certain
amount of chaos. Developers need to accept the chaos but also need to use the specific
methodologies discussed later to impose order on this chaos to move the project ahead.

Managers and executive stakeholders frequently struggle to accept this less rigid
point of view, often wanting to impose more controls on development teams and to enforce
detailed plans and schedules. However, the agile philosophy takes the opposite approach,


providing more flexibility in project schedules and letting the project teams plan and
execute their work as the project progresses.

Another important value of Agile Development is that customers must continually be


involved with the project team. They do not sit down with the project team for a few
sessions to develop the specifications and then go their separate ways. Instead, customers
collaborate with and become part of the technical team. Because working software is being
developed throughout the project, customers are continually involved in defining
requirements and testing components.

Contracts also take on an entirely different flavor. Fixed prices and fixed deliverables
do not make sense. Contracts take more of a collaborative tack but include options for the
customer to cancel if the project is not progressing, as measured by the incremental
deliverables. Incremental deliverables in agile projects are working pieces of the new
system, not documents or specifications.

Models and modeling are critical to Agile Development, so we look next at Agile
Modeling. Many of the core values are illustrated in the principles and practices of building
models.

-------------------------------------------------------------------------------------------------------

Principles & Practices,

Q. Explain the agile development principles?(OCT 12,13)5M.

 Agile Modeling Principles

Your first impression might be that an agile approach means less modeling, or
maybe even no modeling. Agile Modeling (AM) is not about doing less modeling but about
doing the right kind of modeling at the right level of detail for the right purposes. Early in
this chapter, we identified two primary reasons to build models: (1) to understand what you
are building and (2) to communicate important aspects of the solution system. AM consists
of a set of principles and practices that reinforce these two reasons for modeling. AM does
not dictate which models to build or how formal to make those models but instead helps
developers to stay on track with their models—by using them as a means to an end instead
of building models as end deliverables. AM’s basic principles express the attitude that
developers should have as they develop software. Figure 16-6 summarizes Agile Modeling


principles. We discuss those principles next.

Develop Software as Your Primary Goal The primary goal of a software development
project is always to produce high-quality software. The primary measurement of progress is
working software, not intermediate models of system requirements or specifications.
Modeling is always a means to an end, not the end itself:

Any activity that does not directly contribute to the end goal of producing software
should be questioned and avoided if it cannot be justified.

Enable the Next Effort as Your Secondary Goal Focusing only on working software
can also be self- defeating, so developers must consider two important objectives. First,
requirements models might be necessary to develop design models. So, do not think that if
the model cannot be used to write code it is unnecessary. Sometimes several intermediate
steps are needed before the final code can be written. Second, although high-quality
software is the primary goal, long-term use of that code is also important. So, some models
might be necessary to support maintenance and enhancement of the system. Yes, the code
is the best documentation, but some architectural design decisions might not be easily
identified from the code. Look carefully at what other artifacts might be necessary to
produce high-quality systems in the long term.

Minimize Your Modeling Activity—Few and Simple Create only the models that are
necessary. Do just enough to get by. This principle is not a justification for sloppy work or
inadequate analysis. The models you create should be clear, correct, and complete. But do
not create unnecessary models. Also, keep each model as simple as possible. Normally, the
simplest solution is the best solution. Elaborate solutions tend to be difficult to understand
and maintain. However, we emphasize again that simplicity is not justification for being
incomplete.

Embrace Change and Change Incrementally Because the underlying philosophy of


Agile Modeling is that developers must be flexible and respond quickly to change, a good
agile developer willingly accepts—and even embraces—change. Change is seen as the
norm, not the exception. Watch for change and have procedures ready to integrate changes
into the models. The best way to accept change is to develop incrementally. Take small


steps and address problems in small bites. Change your model incrementally, and then
validate it to make sure it is correct. Do not try to accomplish everything in one big release.

Model with a Purpose We indicated earlier that the two reasons to build models are
to understand what you are building and to communicate important aspects of the solution
system. Make sure that your modeling efforts support those reasons. Sometimes
developers try to justify building certain models by claiming that (1) the development
methodology mandates the development of the model, (2) someone wants a model, even
though the person does not know why it is important, or (3) a model can replace a face-to-
face discussion of issues. Always identify a reason and an audience for each model you
develop. Then develop the model in sufficient detail to satisfy the reason and the audience.
Incidentally, the audience might be you.

Build Multiple Models UML, along with other modeling methodologies, has several
models to represent different aspects of the problem at hand. To be successful—in
understanding or communication—you will need to model various aspects of the required
solution. Don’t develop all of them; be sure to minimize your modeling, but develop enough
models to make sure you have addressed all the issues.

Build High-Quality Models and Get Feedback Rapidly Nobody likes sloppy work. It is
based on faulty thinking and introduces errors. One way to avoid error in models is to get
feedback rapidly, while the work is still fresh. Feedback comes from users, as well as
technical team members. Others will have helpful insights and different ways to view a
problem and identify a solution.

Focus on Content Rather than Representation Sometimes a project team has access
to a sophisticated CASE tool. CASE tools can be helpful, but at times they are distracting
because developers spend time making the diagrams pretty. Be judicious in the use of tools.
Some models need to be well drawn for communication or contracts or even to handle
expected changes and updates. In other cases, a hand- drawn diagram might suffice.

Learn from Each Other with Open Communication All of the adaptive approaches
emphasize working in teams. Do not be defensive about your models. Other team members
have good suggestions. You can never truly master every aspect of a problem or its models.

Know Your Models and How to Use Them Being an agile modeler does not mean that
you are not skilled. If anything, you must be more skilled to know the strengths and
weaknesses of the models, including how and when to use them. An expert modeler applies
the previous principles of simplicity, quality, and development of multiple models.

Adapt to Specific Project Needs Every project is different because it exists in a


unique environment; involves different users, stakeholders, and team members; and
requires a different development environment and deployment platform. Adapt your
models and modeling techniques to fit the needs of the business and the project.
Sometimes models can be informal and simple. For other projects, more formal,
complicated models might be required. An agile modeler is able to adapt to each project.

 Agile Modeling Practices


The following practices support the AM principles just expressed. The heart of AM is
in its practices, which give the practitioner specific modeling techniques. Figure 16-7
summarizes the Agile Modeling practices.

Iterative and Incremental Modeling Remember that modeling is a support activity,


not the end result of software development. As a developer, you should create small
models frequently to help you understand or solve a problem. New developers sometimes
have difficulty deciding which models to select. You should continue to learn about models
and expand your repertoire. UML has a large set of models that cover a lot of analysis and
design territory. However, they are not the only models you might find useful. Many
developers still use data flow diagrams and decomposition diagrams from the traditional
structured approach. The point is that models are a tool, and as a professional, you should
have a large set of tools.

Teamwork As shown in Figure 16-5, AM supports various
development methodologies. One of the tenets in all of these methodologies is that
developers work together in small teams of two to four members. In addition, users should
be integrally involved in modeling exercises. For example, suppose the task at hand is to
understand how a purchase order is created and processed. Good AM practice says to get
the right players together, including team members and users, and develop a detailed
model of the process, possibly on a whiteboard. Other teams could then take a
digital photograph of the whiteboard and post it in a repository on the project’s network
server. The model then becomes public; no one owns it, and all can access it. If it later
needs to be corrected, it can be annotated with software and reposted. An alternative
method, especially if the model will become a permanent document, is to develop the model
using a drawing tool such as Visio with a laptop and a projector. This process is not quite as
flexible as a whiteboard, but it yields a more permanent model. In any case, the model is
again posted for all to use, review, and update.


Simplicity The previous purchase order example illustrated an approach that is


simple and easy to support. Also, developers should create a set of models to help them
understand or solve a narrow problem. In the purchase order example, the model focused
on one business process or use case. In the first iteration, developers should only focus on
the typical process, one without all of the possible variations. Then later iterations can add
exception conditions, security and control requirements, and other details.

Validation Between modeling sessions, the team can begin to write code for the
solutions already conceived so that they can validate the models. Simplicity supports
frequent validation. Do not create too many or complex models until the simple ones have
been validated with code.

Documentation Many models are temporary working documents that are developed
to solve a particular problem. These models quickly become obsolete as the code evolves
and improves. Do not try to keep them up to date. Discard them. If they were posted to a
repository, date them so that everyone knows they show a history of decisions and progress
but are not now in sync with the code. Updating only when it hurts is a guideline that tells
us not to waste time trying to keep temporary models synchronized. During the first
iteration, when many models are developed concurrently, they should be consistent.
However, as development progresses, some models will become working documents that no
longer relate well to other models. Remember that the objective of the project is to develop
software, not to have a set of pretty models. Only update when it hurts—that is, when the
project team can’t work effectively without the information.

Motivation Remember the basic objectives of modeling. Only build a model if it helps
you understand a process or solve a problem or if you need to record and communicate
something. For example, the team members in a design session might make some design
decisions. To communicate these decisions, the team posts a simple model to make it
public. The model can be a very effective tool to document the decisions and ensure that all
have a common understanding and reference point. Again, a model is simply used as a tool
for communication, not as an end in itself.

Now that we have explored the basic philosophy, principles, and practices underlying
Agile Development, we turn to two methodologies that employ agile concepts: Extreme
Programming and Scrum.

Q. Discuss XP (Extreme Programming) (March 13)5M.

Extreme programming- Core values & Practices Extreme Programming (XP) is an


adaptive, agile development methodology that was created in the mid- 1990s. The word
extreme sometimes makes people think that it is completely new and that developers who
embrace XP are radicals. However, XP is really an attempt to take the best practices of
software development and extend them "to the extreme."

Extreme programming—

 Takes proven industry best practices and focuses on them intensely


 Combines those best practices (in their intense form) in a new way to produce a
result that is greater than the sum of the parts


 Figure 16-8 lists the core values and practices of XP.

 XP Core Values

The four core values of XP—communication, simplicity, feedback, and courage—drive its
practices and project activities. You will recognize the first three as best practices for any
development project. With a little thought, you should also see that the fourth is a desired
value for any project, even though it might not be stated explicitly.

Communication One of the major causes of project failure has been a lack of open
communication with the right players at the right time and at the right level. Effective
communication involves not only documentation but also open verbal discussion. The
practices and methods of XP are designed to ensure that open, frequent communication
occurs

Simplicity Even though developers have always advocated keeping solutions simple, they do
not always follow their own advice. XP includes techniques to reinforce this principle and
make it a standard way of developing systems.

Feedback As with simplicity, getting frequent, meaningful feedback is recognized as a best


practice of software development. Feedback on functionality and requirements should come
from the users, feedback on designs and code should come from other developers, and
feedback on satisfying a business need should come from the client. XP integrates feedback
into every aspect of development.

Courage Developers always need courage to face the harsh choice of doing things right or
throwing away bad code and starting over. But all too frequently they have not had the
courage to stand up to a too-tight schedule, resulting in bad mistakes. XP practices are
designed to make it easier to give developers the courage to "do it right."

XP Practices

XP’s 12 practices embody the basic values just presented. These practices are consistent
with the agile principles explained earlier in the chapter. Planning Some people describe XP
as glorified hacking or as the old "code and fix" methodology that was used in the 1960s.
That is not true; XP does include planning. However, XP is an adaptive technique that


recognizes that you cannot know everything at the start. As indicated earlier, XP embraces
change. XP planning focuses on making a rough plan quickly and then refining it as things
become clearer. This reflects the Agile Development philosophy that change is more
important than detailed plans. It is also consistent with the idea that individuals—and their
abilities—are more important than an elaborate process.
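The rough-plan idea can be sketched as a short list of user stories carrying the team's estimates. The story texts, field names, and numbers below are purely illustrative, not from the text:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an XP rough plan: users supply the stories
# (business issues); developers supply effort and risk estimates
# (technical issues). All names and numbers here are illustrative.
@dataclass
class Story:
    description: str            # written by users/clients
    effort_days: int            # estimated by the development team
    risk: str                   # e.g. "low", "medium", "high"
    depends_on: list = field(default_factory=list)

plan = [
    Story("Customer can view order status", effort_days=3, risk="low"),
    Story("Customer can pay online", effort_days=8, risk="high",
          depends_on=["Customer can view order status"]),
]

# The plan stays rough: totals like this are refined as the project
# progresses and things become clearer.
total_effort = sum(s.effort_days for s in plan)
print(total_effort)  # 11
```

The point of keeping the plan this lightweight is that stories can be added, re-estimated, or dropped at any time without reworking an elaborate schedule.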

-------------------------------------------------------------------------------------------------------

Q. Write a short note on XP core values?


 XP Core Values

The four core values of XP—communication, simplicity, feedback, and courage—drive its
practices and project activities. You will recognize the first three as best practices for any
development project. With a little thought, you should also see that the fourth is a desired
value for any project, even though it might not be stated explicitly.

Communication One of the major causes of project failure has been a lack of open
communication with the right players at the right time and at the right level. Effective
communication involves not only documentation but also open verbal discussion. The
practices and methods of XP are designed to ensure that open, frequent communication
occurs

Simplicity Even though developers have always advocated keeping solutions simple, they do
not always follow their own advice. XP includes techniques to reinforce this principle and
make it a standard way of developing systems.

Feedback As with simplicity, getting frequent, meaningful feedback is recognized as a best


practice of software development. Feedback on functionality and requirements should come
from the users, feedback on designs and code should come from other developers, and
feedback on satisfying a business need should come from the client. XP integrates feedback
into every aspect of development.

Courage Developers always need courage to face the harsh choice of doing things right or
throwing away bad code and starting over. But all too frequently they have not had the
courage to stand up to a too-tight schedule, resulting in bad mistakes. XP practices are
designed to make it easier to give developers the courage to "do it right."

XP Practices

XP’s 12 practices embody the basic values just presented. These practices are consistent
with the agile principles explained earlier in the chapter. Planning Some people describe XP
as glorified hacking or as the old "code and fix" methodology that was used in the 1960s.
That is not true; XP does include planning. However, XP is an adaptive technique that
recognizes that you cannot know everything at the start. As indicated earlier, XP embraces
change. XP planning focuses on making a rough plan quickly and then refining it as things
become clearer. This reflects the Agile Development philosophy that change is more
important than detailed plans. It is also consistent with the idea that individuals—and their
abilities—are more important than an elaborate process.


The basis of an XP plan is a set of stories that users develop. A story simply describes what
the system needs to do. XP does not use the term use case, but a user story and a use case
express a similar idea. Planning involves two aspects: business issues and technical issues.
In XP the business issues are decided by the users and clients, while technical issues are
decided by the development team. The plan, especially in the early stages of the project,
consists of the list of stories—from the users—and the estimates of effort, risk, and work
dependencies for each story—from the development team. As in Agile Development, the
idea is to involve the users heavily in the project, rather than requiring them simply to sign
off on specifications.

Testing Every new piece of software requires testing, and every
methodology includes testing. XP intensifies testing by requiring that the tests for each
story be written first—before the solution is programmed. There are two major types of
tests: unit tests, which test the correctness of a small piece of code, and acceptance tests,
which test the business function. The developers write the unit tests, and the users write
the acceptance tests. Before any code can be integrated into the library of the growing
system, it must pass the tests. By having the tests written first, XP automates their use and
executes them frequently. Over time, a library of required tests is created, so when
requirements change and the code needs to be updated, the tests can be rerun quickly and
automatically.
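Test-first development can be illustrated with a small unit test written before the code it exercises. The story, the function name, and the discount rule below are hypothetical, chosen only to show the order of work:

```python
# Hypothetical story: "An order gets a 10% discount once the subtotal
# exceeds 100." In XP the unit test is written first, while order_total
# does not yet exist.
def test_order_total():
    assert order_total([40, 60]) == 100      # at the limit: no discount
    assert order_total([80, 40]) == 108      # 120 less 10%

# The minimal implementation is written afterward, just enough to pass.
def order_total(amounts):
    subtotal = sum(amounts)
    if subtotal > 100:
        return subtotal - subtotal // 10     # 10% discount, integer money
    return subtotal

# The test joins the growing test library and is rerun after every change.
test_order_total()
```

Because the test exists before the code, it can be run automatically whenever the requirement changes and the code is updated.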

Pair Programming This practice, more than any other, is one for which XP is famous.
Instead of simply requiring one programmer to watch another’s work, pair programming
divides up the coding work. First, one programmer might focus more on design and double-
checking the algorithms while the other writes the code. Then they switch roles so that both
think about design, coding, and testing. XP relies on comprehensive and continual code
reviews. Interestingly, research has shown that pair programming is actually more efficient
than programming alone. It takes longer to write the initial code, but the long-term quality
is higher. Errors are caught quickly and early, two people become familiar with every part of
the system, all design decisions are developed by two brains, and fewer "quick-and-dirty"
shortcuts are taken. The quality of the code is always higher in a pair-programming
environment.

Simple Designs Opponents say that XP neglects design, but that is not true. XP conforms to
the principles of Agile Modeling expressed earlier by avoiding the "Big Design Up Front"
approach. Instead, it views design as so important that it should be done continually, but in
small chunks. As with everything else, the design must be verified immediately by reviewing
it along with coding and testing.

So what is a simple design? It is one that accomplishes the desired result with as few
classes and methods as possible and that does not duplicate code. Accomplishing all that is
often a major challenge.

Refactoring the Code Refactoring is the technique of improving the code without changing
what it does. XP programmers continually refactor their code. Before and after adding any
new functions, XP programmers review their code to see whether there is a simpler design
or a simpler method of achieving the same result. Refactoring produces high-quality, robust
code.
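A minimal refactoring sketch, using hypothetical pricing functions: duplication is removed while every result stays the same, which the existing unit tests confirm:

```python
# Before refactoring: the same tax rule is duplicated in two functions.
def invoice_line_old(qty, unit_price):
    return qty * unit_price * 1.2   # 20% tax repeated here...

def quote_line_old(qty, unit_price):
    return qty * unit_price * 1.2   # ...and here

# After refactoring: one shared rule; the behavior is unchanged.
TAX_RATE = 0.2

def line_total(qty, unit_price):
    return qty * unit_price * (1 + TAX_RATE)

def invoice_line(qty, unit_price):
    return line_total(qty, unit_price)

def quote_line(qty, unit_price):
    return line_total(qty, unit_price)

# The unit tests that guarded the old code still pass unchanged.
assert invoice_line(2, 10) == invoice_line_old(2, 10)
assert quote_line(3, 5) == quote_line_old(3, 5)
```

With the rule in one place, a tax change is now a one-line edit instead of a hunt for every duplicated copy.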

Owning the Code Collectively This practice requires all team members to have a new
mindset. In XP, everyone is responsible for the code. No one person can say, "This is my
code." Someone can say, "I wrote it," but everyone owns it. Collective ownership allows
anyone to modify any piece of code. However, because unit tests are run before and after
every change, if programmers see something that needs fixing, they can run the unit tests
to make sure that the change did not break something. This practice embodies the team
concept that developers are building a system together.

Continuous Integration This practice embodies XP’s idea of "growing" the software. Small
pieces of code—which have passed the unit tests—are integrated into the system daily or
even more often. Continuous integration highlights errors rapidly and keeps the project
moving ahead. The traditional approach of integrating large chunks of code late in the
project often resulted in tremendous amounts of rework and time lost while developers tried
to determine just what went wrong. XP’s practice of continuous integration prevents that.

On-Site Customer As with all adaptive approaches, XP projects require continual


involvement of users who can make business decisions about functionality and scope. Based
on the core value of communication, this practice keeps the project moving ahead rapidly. If
the customer is not ready to commit resources to the project, the project will not be very
successful.

System Metaphor This practice is XP’s unique and interesting approach to defining an
architectural vision. It answers the questions, "How does the system work? What are its
major components?" To answer those questions, developers identify a metaphor for the
system. For example, Big Three automaker Chrysler’s payroll system was built as a
production-line metaphor, with its system components using production-line terms.
Everyone at Chrysler understands a production line, so a payroll transaction was treated the
same way—developers started with a basic transaction and then applied various processes
to complete it. Of course, the metaphor should be easily understood or well known to the
members of the development team. A system metaphor can guide members toward a vision
and help them understand the system.

Small Releases A release is a point at which the new system can be turned over to users for
acceptance testing, and sometimes even for productive use. Consistent with the entire
philosophy of growing the software, small and frequent releases provide upgraded solutions
to the users and keep them involved in the project. They also facilitate other practices, such
as immediate feedback and continual integration.

Forty-Hour Week and Coding Standards These final two practices set the tone for how the
developers should work. The exact number of hours a developer works is not the issue. The
issue is that the project should not be a death march that burns out every member of the
team. Neither is the project a haphazard coding exercise. Developers should follow
standards for coding and documentation. XP uses just the engineering principles that are
appropriate for an adaptive process based on empirical controls.

-------------------------------------------------------------------------------------------------------

Q. What are XP project activities?

 XP Project Activities:

Figure 16-9 shows an overview of the XP system development approach. The XP
development approach is divided into three levels—system (the outer ring), release (the
middle ring), and iteration (the inner ring). System-level activities occur once during each
development project. A system is delivered to users in multiple stages called releases. Each
release is a fully functional system that performs a subset of the full system requirements.
A release is developed and tested within a period of no more than a few weeks or months.
The activities in the middle ring cycle multiple times—once for each release. Releases are
divided into multiple iterations. During each iteration, developers code and test a specific
functional subset of a release. Iterations are coded and tested in a few days or weeks.
There are multiple iterations within each release, so the iteration ring (inner) cycles
multiple times.
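The three nested rings can be sketched as a toy loop structure. This is illustrative only; the release and iteration contents are hypothetical:

```python
# Toy sketch of XP's nested rings: the system (outer) is delivered as
# releases (middle), and each release is built in short iterations (inner).
def develop_system(releases):
    delivered = []
    for release in releases:            # middle ring: cycles once per release
        for iteration in release:       # inner ring: cycles once per iteration
            # each iteration codes and tests one functional subset
            delivered.append(iteration)
        # at this point the release is a fully functional subset
        # of the system, ready for user acceptance testing
    return delivered

# Hypothetical plan: two releases, three iterations in total.
plan = [["login"], ["search", "reporting"]]
features = develop_system(plan)
```

The outer system level corresponds to calling develop_system once per project; the loops show why the release ring cycles once per release and the iteration ring cycles many times.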

-------------------------------------------------------------------------------------------------------

Q. Write a short note on object framework?


 OBJECT FRAMEWORKS

An object framework is a set of classes that are specifically designed to be reused in a
wide variety of programs. The object framework is supplied to a developer as a
precompiled library or as program source code that can be included or modified in new
programs. The classes within an object framework are sometimes called foundation classes.
Foundation classes are organized into one or more inheritance hierarchies. Programmers
develop application-specific classes by deriving them from existing foundation classes.
Programmers then add or modify class attributes and methods to adapt a "generic"
foundation class to the requirements of a specific application.
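This adaptation can be sketched as follows. The Window and PayrollWindow classes are hypothetical stand-ins for a framework's foundation class and an application-derived class:

```python
# A "foundation" class as an object framework might supply it:
# generic behavior, meant to be specialized by applications.
class Window:
    def __init__(self, title):
        self.title = title
        self.widgets = []

    def add_widget(self, widget):
        self.widgets.append(widget)

    def render(self):
        # Generic rendering; subclasses may extend or override it.
        return f"[{self.title}] with {len(self.widgets)} widget(s)"

# An application-specific class derived from the foundation class.
# The developer adds attributes and methods to adapt the generic
# Window to the requirements of one specific application.
class PayrollWindow(Window):
    def __init__(self):
        super().__init__("Payroll Entry")
        self.add_widget("employee-id field")
        self.add_widget("hours-worked field")

    def render(self):
        return "Payroll form: " + super().render()
```

The application class inherits everything generic from the foundation class and only adds or overrides what is application-specific, which is the reuse mechanism the text describes.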

Object Framework Types

Object frameworks have been developed for a variety of programming needs.

Examples include the following:


 User-interface classes: Classes for commonly used objects within a graphical


user interface, such as windows, menus, toolbars, and file open and save dialog
boxes.
 Generic data structure classes. Classes for commonly used data structures
such as linked lists, indices, and binary trees and related processing operations
such as searching, sorting, and inserting and deleting elements.
 Relational database interface classes. Classes that allow OO programs to
create database tables, add data to a table, delete data from a table, or query
the data content of one or more tables.
 Classes specific to an application area. Classes specifically designed for use
in application areas such as banking, payroll, inventory control, and shipping.
General-purpose object frameworks typically contain classes from the first three
categories. Classes in these categories can be reused in a wide variety of application areas.
Application-specific object frameworks provide a set of classes for use in a specific industry
or type of application. Third parties usually design application-specific frameworks as
extensions to a general-purpose object framework. An application- or company-specific
framework requires a significant development effort typically lasting several years. The
effort is repaid over time through continuing reuse of the framework in newly developed
systems.
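As a sketch of the "generic data structure" category above, here is a reusable singly linked list class with a search operation. It is hypothetical, not taken from any particular framework:

```python
# A generic, reusable data-structure class of the kind an object
# framework's foundation library might provide.
class LinkedList:
    class _Node:
        def __init__(self, value, next_node=None):
            self.value = value
            self.next = next_node

    def __init__(self):
        self._head = None
        self._size = 0

    def insert_front(self, value):
        # Insert a new element at the head of the list.
        self._head = self._Node(value, self._head)
        self._size += 1

    def search(self, value):
        # Walk the chain of nodes looking for a matching value.
        node = self._head
        while node is not None:
            if node.value == value:
                return True
            node = node.next
        return False

    def __len__(self):
        return self._size
```

Because the class makes no assumptions about what it stores, it can be reused unchanged across many application areas, which is why such classes belong in a general-purpose framework.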

The Impact of Object Frameworks on Design and Implementation Tasks

Developers need to consider several issues when determining whether to use object
frameworks. Object frameworks affect the process of systems design and development in
several different ways:

1. Frameworks must be chosen early in the project—within the first development
iteration.
2. Systems design must conform to specific assumptions about application program
structure and operation that the framework imposes.
3. Design and development personnel must be trained to use a framework
effectively.
4. Multiple frameworks might be required, necessitating early compatibility and
integration testing.
The process of developing a system using one or more object frameworks is
essentially one of adaptation. The frameworks supply a template for program construction
and a set of classes that provide generic capabilities. Systems designers adapt the generic
classes to the specific requirements of the new system. Frameworks must be chosen early
so that designers know the application structure imposed by the frameworks, the extent to
which needed classes can be adapted from generic foundation classes, and the classes that
cannot be adapted from foundation classes and thus must be built from scratch.

Of the three object layers typically used in Object Oriented system development
(view, business logic, and data access), the view and data layers most commonly derive
from foundation classes. User interfaces and database access tend to be the areas of
greatest strength in object frameworks, and they are typically the most tedious classes to
develop from scratch. It is not unusual for 80 percent of a system’s code to be devoted to
view and data classes. Thus, constructing view and data classes from foundation classes
provides significant and easily obtainable code reuse benefits. Adapting view classes from
foundation classes has the additional benefit of ensuring a similar look and feel of the user
interface across systems and across application programs within systems.


Successful use of an object framework requires a great deal of up-front knowledge
about its class hierarchies and program structure. That is, designers and programmers must
be familiar with a framework before they can successfully use it. Thus, a framework should
be selected as early as possible in the project, and developers must be trained in use of the
framework before they begin to implement the new system.

-------------------------------------------------------------------------------------------------------

Q. What is the component based approach?


 COMPONENTS:

A component is a software module that is fully assembled and tested, is ready to
use, and has well-defined interfaces to connect it to clients or other components.
Components can be single executable objects or groups of interacting objects. A
component can also be a non-OO program or system "wrapped" in an OO interface.
Components implemented with non-OO technologies must still implement object-like
behavior. In other words, they must implement a public interface, respond to messages,
and hide their implementation details.
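A public interface that hides its implementation details can be illustrated with a short sketch. The GrammarChecker class and its rules are hypothetical, chosen to echo the word-processing example that follows:

```python
# A component advertises a small public interface and hides how it is
# implemented; callers interact only through that interface.
class GrammarChecker:
    """Hypothetical component: the public interface is check() only."""

    def check(self, sentence):
        # Public method: returns a list of issue descriptions.
        return self._find_issues(sentence)

    def _find_issues(self, sentence):
        # Hidden implementation detail; it could be replaced by a
        # faster or more accurate algorithm without touching callers.
        issues = []
        if sentence and not sentence[0].isupper():
            issues.append("sentence should start with a capital letter")
        if sentence and not sentence.rstrip().endswith((".", "!", "?")):
            issues.append("sentence should end with punctuation")
        return issues
```

Because clients depend only on check(), the internal logic can be reimplemented freely, which is exactly the upgrade scenario the grammar-checker example describes.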

Components are standardized and interchangeable software parts. They differ from
objects or classes because they are binary (executable) programs, not symbolic (source
code) programs. This distinction is important because it makes components much easier to
reuse and re implement than source code programs.

For example, consider the grammar-checking function in most word processing
programs. A grammar-checking function can be developed as an object or as a
subroutine. Other parts of the word processing program can call the subroutine or object
methods via appropriate source code constructs (for example, a C++ method invocation or
a BASIC subroutine call). The grammar-checking function’s source code is integrated with
the rest of the word processor’s source code during program compilation and linking. The
executable program is then delivered to users.

Now consider two possible changes to the original grammar-checking function:

 The developers of another word processing program want to incorporate the
existing grammar-checking function into their product.
 The developers of the grammar-checking function discover new ways to
implement the function that result in greater accuracy and faster execution.

To integrate the existing function into a new word processor, the word processor
developers must be provided with the source code of the grammar-checking function. They
then code appropriate calls to the grammar checker into their word processor source code.
The combined program is then compiled, linked, and distributed to users. When the
developers of the grammar checker revise their source code to implement the faster and
more accurate function, they deliver the source code to the developers of both word
processors. Both development teams integrate the new grammar-checking source code into
their word processors, recompile and relink the programs, and deliver a revised word
processor to their users.


So what’s wrong with this scenario? Nothing in theory, but a great deal in practice.
The grammar-checker developers can provide their function to other developers only as
source code, which opens up a host of potential problems concerning intellectual property
rights and software piracy. Of greater importance, the word processor developers must
recompile and relink their entire word processing programs to update the embedded
grammar checker. The revised binary program must then be delivered to users and installed
on their computers. This is an expensive and time-consuming process. Delivering the
grammar-checking program in binary form would eliminate or minimize most of these
problems.

A component-based approach to software design and construction solves both of
these problems. Component developers, such as the developers of the grammar checker,
can deliver their product as a ready-to-use binary component. Users, such as the
developers of the word processing programs, can then simply plug in the component.
Updating a single component doesn’t require recompiling, relinking, and redistributing the
entire application. Perhaps applications already installed on user machines could query an
update site via the Internet each time they started and automatically download and install
updated components.

At this point, you might be thinking that component-based development is just
another form of code reuse. But systems design, object frameworks, and client/server
architecture all address code reuse in different ways. The following points are what make
component-based design and construction different:

 Components are reusable packages of executable code. Systems design and
object frameworks are methods of reusing source code.
 Components are executable objects that advertise a public interface (that is, a
set of methods and messages) and hide (encapsulate) the implementation of
their methods from other components. Client/server architecture is not
necessarily based on OO principles. Component-based design and construction
are an evolution of client/server architecture into a purely OO form.

Components provide an inherently flexible approach to systems design and
construction. Developers can design and construct many parts of a new system simply by
acquiring and plugging in an appropriate set of components. They can also make newly
developed functions, programs, and systems more flexible by designing and implementing
them as collections of components. Component-based design and construction have been
the norm in the manufacturing of physical goods (such as cars, televisions, and computer
hardware) for decades. However, it has only recently become a viable approach to
designing and implementing information systems.

COMPONENT STANDARDS AND INFRASTRUCTURE

Interoperability of components requires standards to be developed and readily available. For
example, consider the video display of a typical IBM-compatible personal computer. The
plug on the end of the video signal cable follows an interface standard. The plug has a
specific form, and each connector in the plug carries a well-defined electrical signal. Years
ago, a group of computer and video display manufacturers defined a standard that
describes the physical form of the plug and the type of signals carried through each
connector. Adherence to this standard guarantees that any video display unit will work with
any compatible personal computer and vice versa.

Components might also require standard support infrastructure. For example, video display
units are not internally powered. Thus, they require not only a standard power plug but also
an infrastructure to supply power to the plug. A component might also require specific
services from an infrastructure. For example, a cellular telephone requires the service
provider to assign a transmission frequency with the nearest cellular radio tower, to transfer
the connection from one tower to another as the user moves among telephone cells, to
establish a connection to another person’s telephone, and to relay all voice data to and from
the other person’s telephone via the public telephone grid. All cellular telephones require
these services. Software components have a similar need for standards. Components could
be hard-wired together, but this reduces their flexibility. Flexibility is enhanced when
components can rely on standard infrastructure services to find other components and
establish connections with them.

In the simplest systems, all components execute on a single computer under the control of
a single operating system. Connection is more complex when components are located on
different machines running different operating systems and when components can be
moved from one location to another. In this case, a network protocol independent of the
hardware platform and operating system is required. In fact, a network protocol is desirable
even when all components execute on the same machine because such a protocol
guarantees that systems can be used in different environments— from a single machine to a
network of computers.

SERVICES

The era of the Internet and high-speed networks has enabled a new method of software
reuse described by various names, including Web services and service-oriented architecture
(SOA). Unlike object frameworks that are inserted into an application when it is compiled or
components that are dynamically or statically linked to an application before execution, an
application interacts with a service via the Internet or a private network during execution.
Like object frameworks and components, services rely on a suite of standards that has
significant implications for software design, development, and performance.

-------------------------------------------------------------------------------------------------------

Q. What are the different service standards?

 Service Standards

Service standards have evolved from distributed object standards such as CORBA
and EJBs to include standards such as SOAP, .NET, and J2WS. The primary difference
between service standards and earlier distributed object standards is a decrease in the
amount of information that must be compiled or linked into an executable application and
an increased reliance on Web-based data interchange standards such as XML

Simple Object Access Protocol (SOAP) is a service standard based on existing
Internet protocols, including Hypertext Transfer Protocol (HTTP) and Extensible Markup
Language (XML). Messages between objects are encoded in XML and transmitted using
HTTP, which enables the objects to be located anywhere on the Internet. Because SOAP
components communicate using XML, they can be easily incorporated into applications that
use a Web-browser interface. Complex applications can be constructed using multiple SOAP
components that communicate via the Internet.

Figure 16-14 shows an application and service communicating with SOAP messages.
The SOAP encoder/decoder and HTTP connection manager are standard components of a
SOAP programmer’s toolkit. Applications can also use embedded scripts that use a Web
server to provide SOAP message-passing services.
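A minimal sketch of what a SOAP message looks like on the wire can be built with Python's standard XML library. The envelope namespace is the standard SOAP 1.1 URI; the CheckGrammar service name and its parameter are hypothetical:

```python
import xml.etree.ElementTree as ET

# SOAP 1.1 envelope namespace (a real, standardized URI).
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_request(method, params):
    """Encode a method call as a SOAP envelope (XML, sent over HTTP)."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, method)
    for name, value in params.items():
        ET.SubElement(call, name).text = str(value)
    return ET.tostring(envelope, encoding="unicode")

# A hypothetical call to a grammar-checking service.
message = build_soap_request("CheckGrammar", {"sentence": "hello world"})
```

Because the whole call is plain XML text, it can travel over ordinary HTTP to a service anywhere on the Internet, which is the point of the SOAP standard described above.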

Microsoft .NET is a service standard based on SOAP. The .NET applications and
services communicate using SOAP protocols and XML messages, and these applications and
services are installed on Microsoft’s Web/application server and rely on Microsoft’s Active
Directory for various naming, location, and security capabilities. The .NET applications and
services can be developed in several programming languages, including Visual BASIC and
C#.

Java 2 Web Services (J2WS) is a service standard for implementing applications and
services in Java. J2WS extends SOAP and several other standards to define a Java-specific
method of implementing Web services.


Quick Revision
________________________________________________________________________

 The Unified Process (UP) is an object-oriented system development methodology


originally offered by Rational Software, which is now part of IBM. Developed by Grady
Booch, James Rumbaugh, and Ivar Jacobson—the three pioneers who are also behind
the success of the Unified Modeling Language (UML)—the UP is their attempt to define a
complete methodology that uses UML for system models and describes a new, adaptive
system development life cycle.
 Inception Phase: as in any project planning phase, in the inception phase the project
manager develops and refines a vision for the new system to show how it will improve
operations and solve existing problems.
 Elaboration Phase: the elaboration phase usually involves several iterations, and early
iterations typically complete the identification and definition of all of the system
requirements.
 Construction Phase: the construction phase involves several iterations that continue the
design and implementation of the system.
 Transition Phase: during the transition phase, one or more final iterations involve the
final user-acceptance and beta tests, and the system is made ready for operation.
 Agile Development is a philosophy and set of guidelines for developing software in
an unknown, rapidly changing environment. It provides an overarching philosophy for
specific development approaches such as the Unified Process. The amount of agility in
each approach can vary.
 Agile Modeling (AM) is not about doing less modeling but about doing the right kind of
modeling at the right level of detail for the right purposes.
 The four core values of XP—communication, simplicity, feedback, and courage—drive its
practices and project activities.
 The XP development approach is divided into three levels—system (the outer ring),
release (the middle ring), and iteration (the inner ring). System-level activities occur
once during each development project. A system is delivered to users in multiple stages
called releases.
 An object framework is a set of classes that are specifically designed to be reused in a
wide variety of programs. The object framework is supplied to a developer as a
precompiled library or as program source code that can be included or modified in new
programs.
IMP Question Set

SR No.  Question                                        Reference page No.

1       Q. Explain the unified process?                 58
2       Q. What are the phases of unified process?      58
3       Q. Write a short note on UP disciplines?        60
4       Q. Write a short note on agile development?     62
5       Q. Write a short note on XP core values?        68
6       Q. Write a short note on object framework?      71
7       Q. What are the different service standards?    75

