
Software Metrics

Prof. Wei T. Huang Department of Computer Science and Information Engineering National Central University 2007

1 WTH07

Prerequisites
- Software Engineering
- Object-Oriented Software Engineering
- Engineering Mathematics

Note: This lecture note was developed for teaching purposes and is offered only to senior and graduate students at the National Central University. Any commercial use of this note is not allowed. Most of the material in this note, including the text, tables, and figures, is excerpted from the sources given in the References, especially [Fenton97].


Objective for Software Measurement


Measurement helps people to understand the world; without measurement you cannot manage anything. There are three important activities in a software development project:
- Understanding what is happening during development and maintenance
- Controlling what is happening on the projects
- Improving the processes and products

Thus, people must control their projects and predict the product attributes, not just run them. But:
- "You cannot control what you cannot measure." (DeMarco, 1982)
- "You can neither predict nor control what you cannot measure." (DeMarco's rule)


Software Measurement
Software developers get a sense of
- whether the requirements are consistent and complete
- whether the design is of high quality
- whether the code is ready to be tested

Project managers measure
- attributes of the process
- whether the product will be ready for delivery
- whether the budget will be exceeded

Customers measure
- whether the final product meets the requirements
- whether the product is of sufficient quality

Maintainers must be able
- to see what should be upgraded and improved

1. Measurement


1.1 What is Measurement? (1)


Measurement is essential to our daily life. It is the process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to describe them according to clearly defined rules.
- Entity: an object or an event in the real world
  Examples: a person; a room; a journey; the testing phase of a software project
- Attribute: a feature or property of an entity
  Examples: the area or color of a room; the cost of a journey; the elapsed time of a testing phase


1.1 What is Measurement? (2)


"Measure what is measurable, and make measurable what is not so." (Galileo)
- In the physical sciences, medicine, economics, and even some social sciences, we are now able to measure attributes that were previously thought unmeasurable.
- Measures of human intelligence, air quality, and economic inflation form the basis for important decisions that affect our everyday lives.
- Questions: (1) How can you measure to identify the best (or a good) individual soccer player? (2) How can you identify a good or bad software developer?

There are two kinds of quantification:
- Measurement: a direct quantification
  Examples: the height of a tree or the weight of a shipment of bricks
- Calculation: an indirect quantification

So, software engineers must continue to work on measuring important software attributes, such as dependability, quality, usability, and maintainability, in order to make software engineering as powerful as other engineering disciplines.

1.2 Measurement in Software Engineering


For most software development projects, we
- Fail to set measurement targets for our software products.
  Gilb's Principle of Fuzzy Targets: projects without clear goals will not achieve their goals clearly.
- Fail to understand and quantify the component costs of software projects.
- Do not quantify or predict the quality of the products we produce.
- Adopt new technology without a carefully controlled study to determine whether it is efficient and effective.


1.3 Understand and Control a Project (1)


Information needed to understand and control a software development project:
- Managers
  What does each process cost? How productive is the staff? How good is the code being developed? Will the user be satisfied with the product? How can we improve?
- Engineers
  Are the requirements testable? Have we found all the faults? Have we met our product or process goals? What will happen in the future?

Understand and Control a Project (2)


Measurement is important for three basic activities:
- Measures help us to understand what is happening during development and maintenance, establishing baselines to set goals for future behavior;
- Measurement allows us to control what is happening on our projects, predicting what is likely to happen and making changes to processes and products in order to meet our goals;
- Measurement encourages us to improve our processes and products, by increasing the number or type of design reviews, for instance.


1.4 The Scope of Software Metrics (1)


Cost and effort estimation
- Examples: COCOMO model (Boehm, 1981), SLIM model (Putnam, 1978), Albrecht's function point model (Albrecht, 1979)

Productivity measures and models
- Productivity is modeled as value divided by cost, where each side decomposes further:
  - Value: quality (reliability, defects) and quantity (size, functionality)
  - Cost: personnel (time, money) and resources (hardware, software)
  - Also influencing the model: complexity, environmental constraints, and problem difficulty

1.4 The Scope of Software Metrics (2)


Data collection
- The collected data are distilled into simple graphs or charts

Quality models and measures
- Product operation: usability, reliability, efficiency
- Product revision: maintainability, portability, testability

Reliability models
Performance evaluation and models
Structural and complexity metrics
Management metrics
- Using measurement-based charts or graphs to help customers and developers decide if the project is on track

Evaluation of methods and tools
- New methods and tools may make the organization or project more productive and the products better and cheaper.


Exercises
1. What are metrics? Name the reasons why metrics are useful.
2. Specifications, design, and code are entities of software products. What are the attributes of those entities?
   Hint: see next chapter.
3. Personnel is one of the resources for software development. Suppose you are a manager. What would you want to measure about such an entity?
   Hint: see next chapter.


2. Classifying Measures


2.1 Classifying Software Measures (1)


The first software-measurement activity is identifying the entities and attributes to measure.
- Process: collections of software-related activities. Measurable attributes include:
  - The duration of the process or one of its activities
  - The effort associated with the process or one of its activities
  - The number of incidents of a specified type arising during the process or one of its activities
- Products: any artifacts, deliverables, or documents that result from a process activity.
  - External product attributes depend on product behavior and environment: reliability, usability, integrity, efficiency, testability, reusability, portability, and interoperability are attributes that we can measure.
  - Internal product attributes: size, effort, cost, code complexity, structuredness, module coupling, and cohesiveness.


2.1 Classifying Software Measures (2)


Resources: entities required by a process activity.
- Personnel (individual or team), materials, tools (software and hardware), and methods are candidates for measurement.
- Cost and productivity (= amount of output / effort input)
- Staff: experience, age, or intelligence

Attributes
- Internal attributes: measured by examining the product, process, or resource on its own.
- External attributes: measured by how the product, process, or resource relates to its environment.


2.1 Classifying Software Measures (3)


Components of software measurement — product entities and their attributes:

Entity          Internal attributes                                         External attributes
Specifications  size, reuse, modularity, redundancy, functionality,         comprehensibility, maintainability, etc.
                syntactic correctness, etc.
Design          size, reuse, modularity, coupling, cohesiveness,            quality, complexity, maintainability, etc.
                functionality, etc.
Code            size, reuse, modularity, coupling, functionality,           reliability, usability, maintainability, etc.
                algorithmic complexity, control flow, structuredness, etc.
Test data       size, coverage level, etc.                                  quality, etc.

2.1 Classifying Software Measures (4)


Process entities and their attributes:

Entity                      Internal attributes                                       External attributes
Constructing specification  time, effort, number of requirements changes, etc.        quality, cost, stability, etc.
Detailed design             time, effort, number of specification faults found, etc.  cost, cost-effectiveness, etc.
Testing                     time, effort, number of coding faults found, etc.         cost, cost-effectiveness, stability, etc.

Resource entities and their attributes:

Entity     Internal attributes                              External attributes
Personnel  age, price, etc.                                 productivity, experience, intelligence, etc.
Teams      size, communication level, structuredness, etc.  productivity, quality, etc.
Software   price, size, etc.                                usability, reliability, etc.
Hardware   price, speed, memory size, etc.                  reliability, etc.
Office     size, temperature, light, etc.                   comfort, quality, etc.

2.2 Determine What to Measure (1)


The Goal-Question-Metric (GQM) paradigm
- To decide what your project should measure, you may use the GQM paradigm. The following high-level goals may be identified:
  - Improving productivity
  - Improving quality
  - Reducing risk

Templates for goal definition
- Purpose
  Example: To evaluate the maintenance process in order to improve it
- Perspective
  Example: Examine the cost from the viewpoint of the manager
- Environment
  Example: The maintenance staff are poorly motivated programmers who have limited access to tools


2.2 Determine What to Measure (2)


A framework for GQM:
1. List the major goals of the development or maintenance project.
2. Derive from each goal the questions that must be answered to determine if the goals are being met.
3. Decide what must be measured in order to answer the questions adequately.


2.2 Determine What to Measure (3)


Example of deriving metrics from goals and questions (figure omitted).

2.2 Determine What to Measure (4)


Examples of AT&T goals, questions, and metrics (Barnard and Price 1994) (figure omitted).

2.3 Applying the Framework (1)


Cost and effort estimation
- Focuses on predicting the attributes of cost or effort for the development process.

Productivity measures and models
- Measure a resource attribute.

Data collection
- Gathering accurate and consistent measures of process and resource attributes.

Quality models and measures
- Predicting cost and productivity depends on the quality of the products output during the various processes.
- Measuring internal product attributes, such as complexity and structure.
- Measuring external product attributes: attributes describing how the product relates to its environment.


2.3 Applying the Framework (2)


Reliability models
- Model successful operation during a given period of time.

Performance evaluation and models
- Measure the efficiency of the product.

Structural and complexity metrics
- Measure internal attributes of products that suggest what the external measures might be.

Capability-maturity assessment
- Example: the SEI CMM


2.3 Applying the Framework (3)


Management by metrics
- Use metrics to set targets for development projects. Example: US defense projects (NetFocus 1995)

Item                                                     Target                    Malpractice level
Defect removal efficiency                                > 95%                     < 70%
Original defect density                                  < 4 / function point      > 7 / function point (*)
Slip or cost overrun in excess of risk reserve           0%                        > 10%
Total requirements creep (function points or equivalent) < 1%/month                > 50% average
Total program documentation                              < 3 pages/function point  > 6 pages/function point
Staff turnover                                           1-3%/year                 > 5%/year

(*) Function points measure the amount of functionality in a system as described by a specification.

2.3 Applying the Framework (4)


Evaluation of methods and tools
- Use the proposed tool or method on a small project at first, and evaluate the results.
- Example: Code inspection statistics from AT&T (Barnard and Price 1994)

Metric                                                      First sample project  Second sample project
Number of inspections in sample                             27                    55
Total KLOC inspected                                        9.3                   22.5
Average LOC inspected (module size)                         343                   409
Average preparation rate (LOC/hour)                         194                   121.9
Average inspection rate (LOC/hour)                          172                   154.8
Total faults detected (observable and non-observable)/KLOC  106                   89.7
Percentage of re-inspections                                11                    0.5

2.4 Software Measurement Validation (1)


The measurement pattern (figure omitted).

2.4 Software Measurement Validation (2)


Two types of measuring
- Measures or measurement systems: used to assess an existing entity by numerically characterizing one or more of its attributes.
- Prediction systems: used to predict some attribute of a future entity, involving a mathematical model with an associated prediction procedure.

Validating prediction systems
- Deterministic prediction systems: always give the same output for a given input.
- Stochastic prediction systems: the output for a given input varies probabilistically.
- Example: Using COCOMO (to be explained later), with an acceptance range within 20% for predicted effort.

Validating software measures
- Measures should reflect the behavior of entities in the real world.
- Example: program length


Exercises
1. You are now taking the course Software Metrics. What are your goals in taking this course?
   Hint: use the template for goal definitions.
2. Use the GQM approach to explain the reasons you are taking the Software Metrics course.


3. Measuring Internal Product Attributes (1)


3.1 Measuring Size (1)


Simple measures of size are often rejected because they do not adequately reflect:
- Effort
- Productivity
- Cost

The important aspects (attributes) of software size:
- Length: physical size (including specifications, design, and final code)
- Functionality: the functions supplied by the product to the user


3.1 Measuring Size (2)


- Complexity: of the underlying problem that the software is solving
  - Problem complexity: the complexity of the underlying problem
  - Algorithmic complexity: the complexity of the algorithm that solves the problem (measuring the efficiency of the software)
  - Structural complexity: the structure of the software used to implement the algorithm
  - Cognitive complexity: the effort required to understand the software
- Reuse: how much of a product was copied or modified from a previous version or an existing product (including off-the-shelf products).


3.2 Software Size (1)


Software size applies at each stage:
- Specification: its size is a useful indicator of how large the design is likely to be
- Design: its size is a predictor of code length
- Code


3.2 Software Size (2)


Traditional code measures
- Number of lines of code (LOC)
- HP definition: NCLOC (non-commented lines of code), also called ELOC
- Total length: LOC = NCLOC + CLOC (commented lines of code)
- The density of comments, for example: CLOC/LOC


3.3 Software Science (1)


Maurice Halstead's software science
- A program P is a collection of tokens, classified as operators or operands:
  μ1 = number of unique operators
  μ2 = number of unique operands
  N1 = total occurrences of operators
  N2 = total occurrences of operands
- The length of P: N = N1 + N2
- The vocabulary of P: μ = μ1 + μ2
- The volume of P (a suitable metric for the size of any implementation of any algorithm): V = N log2 μ
- The program level of P of volume V (for the implementation of an algorithm): L = V*/V (L ≤ 1), where V* is the potential volume, the minimal-size implementation of P.

3.3 Software Science (2)


- The difficulty: D = 1/L
- The estimate of L: L̂ = 1/D̂ = (2/μ1)(μ2/N2)
- The estimated program length: N̂ = μ1 log2 μ1 + μ2 log2 μ2
- The effort required to generate P: E = V/L̂ = μ1 N2 N log2 μ / (2 μ2), where the unit of measurement of E is the number of elementary mental discriminations needed to understand P.
- John Stroud claimed that the human mind is capable of making a limited number β of elementary discriminations per second, with 5 ≤ β ≤ 20. Halstead claimed that β = 18, so the time required for programming is T = E/18 seconds.

3.3 Software Science (3)


Halstead's sample FORTRAN program:

      SUBROUTINE SORT (A, N)
      INTEGER A(100), N, I, J, SAVE, M
C ROUTINE SORTS ARRAY A INTO DESCENDING ORDER
      IF (N.LT.2) GOTO 40
      DO 30 I = 2, N
      M = I - 1
      DO 20 J = 1, M
      IF (A(I).GT.A(J)) GOTO 10
      GOTO 20
10    SAVE = A(I)
      A(I) = A(J)
      A(J) = SAVE
20    CONTINUE
30    CONTINUE
40    CONTINUE
      END

The measures for this program:
  N = N1 + N2 = 93
  μ = μ1 + μ2 = 27
  V = N log2 μ = 93 × 4.75 ≈ 442 bits
  V* = 11.6 bits (V > V*)
  L = V*/V = 11.6/442 = 0.026
  D = 1/L = 38.5
  L̂ = 1/D̂ = (2/μ1)(μ2/N2) = (2/14)(13/42) = 0.044
  E = V/L̂ = 442/0.044 ≈ 10045
  T = E/β = 10045/18 ≈ 558 seconds ≈ 10 minutes
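The calculations above can be sketched in code. The following is a minimal Python sketch (the function and variable names are mine, not Halstead's):

```python
from math import log2

def halstead(mu1, mu2, N1, N2, V_star=None):
    """Halstead's software-science measures from token counts:
    mu1/mu2 = unique operators/operands, N1/N2 = total occurrences."""
    N = N1 + N2                      # program length
    mu = mu1 + mu2                   # vocabulary
    V = N * log2(mu)                 # volume in bits
    L_hat = (2 / mu1) * (mu2 / N2)   # estimated program level
    E = V / L_hat                    # effort (elementary discriminations)
    m = {"N": N, "mu": mu, "V": V, "L_hat": L_hat, "E": E, "T": E / 18}
    if V_star is not None:           # potential volume, if known
        m["L"] = V_star / V          # program level
        m["D"] = V / V_star          # difficulty = 1/L
    return m

# The SORT subroutine: mu1 = 14, mu2 = 13, N1 = 51, N2 = 42, V* = 11.6
m = halstead(14, 13, 51, 42, V_star=11.6)
```

With these counts the sketch reproduces the slide's figures up to rounding: N = 93, V ≈ 442 bits, L̂ ≈ 0.044.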

3.4 Object Programming Environment (1)


An example
- In the Visual Basic programming environment, you can create a sophisticated Windows program, complete with menus, icons, and graphics, with almost no code in the traditional sense. For example, you point at a scrollbar object in the programming environment, and the executable code to produce a scrollbar is constructed automatically. You need to write code only to perform the specific actions that result from a click on a specific command button.
- In this kind of environment, it is not clear how you would measure the length of the program. A program with just five BASIC statements, say, can easily generate an executable program of 200 KB. The same issue arises with component-based software construction.

There are two separate measurement issues:
- How do we account in our length measures for objects that are not textual?
- How do we account in our length measures for components that are constructed externally?

3.4 Object Programming Environment (2)


In object-oriented development, a count of objects and methods led to more accurate productivity estimates than those using lines of code (Pfleeger 1989).

Ontological principles (Bunge's ontological terms) (*)
- Two objects are coupled if and only if at least one of them acts upon the other. X is said to act upon Y if the history of Y is affected by X.

(*) Ontology: the common words and concepts (the meaning) used to describe and represent an area of knowledge. If you look up "ontology" in the dictionary, you will find it under metaphysics: (1) a branch of philosophy that seeks to explain the nature of being and reality; (2) speculative philosophy in general (Webster's New World Dictionary). An ontology is the specification of a conceptualization for an engineering product.

3.4 Object Programming Environment (3)


The Metrics Suite for OOD (Chidamber and Kemerer) uses the notions mentioned above:
- Metric 1. Weighted Methods per Class (WMC) = Σ ci (i = 1 to n), for a class C with methods M1..Mn defined in the class, where c1..cn are the complexities of the methods. If all method complexities are taken to be unity, then WMC = n, the number of methods.
- Metric 2. Depth of Inheritance Tree (DIT): the length of the path from the node to the root of the inheritance tree.

3.4 Object Programming Environment (4)


- Metric 3. Number of Children (NOC): the number of immediate successors of the class.
- Metric 4. Coupling between Object Classes (CBO): the number of other classes to which the class is coupled.
- Metric 5. Response for a Class (RFC): the number of local methods plus the number of methods called by local methods.
- Metric 6. Lack of Cohesion in Methods (LCOM): the number of non-intersecting sets of local methods.
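Two of the metrics above are simple enough to sketch directly. A Python sketch (the class hierarchy and names are hypothetical, for illustration only):

```python
def wmc(method_complexities):
    """Metric 1: Weighted Methods per Class = sum of method complexities.
    With unit complexities this is simply the number of methods."""
    return sum(method_complexities)

def dit(cls, parent):
    """Metric 2: Depth of Inheritance Tree = path length from the class
    to the root, given a child -> parent mapping."""
    depth = 0
    while cls in parent:
        cls = parent[cls]
        depth += 1
    return depth

# Hypothetical hierarchy: Object <- Widget <- Button
parent = {"Button": "Widget", "Widget": "Object"}
```

For example, a class with five unit-complexity methods has WMC = 5, and Button above has DIT = 2.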

3.5 Specification and Design (1)


A specification or design document may consist of both text and diagrams, where the diagrams have a uniform syntax, such as DFDs, Z schemas, or class diagrams. We can define appropriate atomic objects for the different types of diagrams and symbols. For instance:
- For data flow diagrams, the atomic objects are processes, external entities, data stores, and data flows.
- For algebraic specifications, the atomic entities are sorts, functions, operations, axioms, etc.
- For Z schemas, the atomic entities are the various lines appearing in the specification, such as type declarations or predicates.

3.5 Specification and Design (2)


Example: Structured analysis components (DeMarco 1978)

View        Diagram                   Atomic objects
Functional  Data flow diagram         Bubbles
Data        Data dictionary           Data elements
            Entity-relation diagram   Objects, relations
State       State transition diagram  States, transitions

Supplement to Section 3.5 (1)


Data flow diagram: "Satisfy Material Request" as an example (figure omitted).

Supplement to Section 3.5 (2)


Algebraic specification: Coord as an example.

The general template:
  spec <Spec Name> (Generic Parameter)
  sort <name>
  imports <list of spec names>
  <operation signatures setting out the names and the types of the parameters to the operations defined over the sort>
  <axioms defining the operations over the sort>

The Coord example:
  spec COORD
  sort Coord
  imports INTEGER, BOOLEAN

  Create (Integer, Integer) -> Coord;
  X (Coord) -> Integer;
  Y (Coord) -> Integer;
  Eq (Coord, Coord) -> Boolean;

  X (Create (x, y)) = x
  Y (Create (x, y)) = y
  Eq (Create (x1, y1), Create (x2, y2)) = ((x1 = x2) and (y1 = y2))

Supplement to Section 3.5 (3)


Z schema: a phone database as an example.

  PhoneDB
  members, members' : P Person              (declarations)
  telephones, telephones' : Person <-> Phone
  dom telephones ⊆ members                  (predicates)
  dom telephones' ⊆ members'

A new state adds an entry:
  telephones' = telephones ∪ {huang |-> 0543}
  where telephones = {lee |-> 1234, wang |-> 2345, ...}

3.6 Predicting Length


Example: Halstead's software science
- LOC = N / ck, where ck is a constant depending on the programming language k; for FORTRAN, ck = 7.
- For the SORT example above, N = 93, so LOC = 93/7 ≈ 13. The actual length is 16 lines, so 13 is a reasonable estimate.

Length may also be predicted by considering the median expansion ratio from specification or design length to code length:
- Expansion ratio (design to code) = size of code / size of design

For the module design phase:
- LOC = α Σ Si (i = 1 to m), where Si is the size of module i, m is the number of modules, and α is the design-to-code expansion ratio.
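Both predictors are one-liners in code. A Python sketch (the function names are mine; the constant 7 is the FORTRAN value given above):

```python
def loc_from_halstead(N, c_k=7):
    """LOC estimate = N / c_k, where c_k is a language-dependent
    constant (c_k = 7 for FORTRAN)."""
    return N / c_k

def loc_from_design(module_sizes, expansion_ratio):
    """LOC estimate = alpha * sum(S_i): the design-to-code expansion
    ratio times the total design size over the m modules."""
    return expansion_ratio * sum(module_sizes)

# The SORT example: N = 93 gives an estimate of roughly 13 LOC
estimate = loc_from_halstead(93)
```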

3.7 Reuse (1)


The reuse of software (including requirements, designs, documentation, test data, scripts, code, etc.) improves productivity and quality, allowing developers to concentrate on new problems.

HP's example (Lim 1994):

Organization                  Quality               Productivity  Time to market
Manufacturing productivity    51% defect reduction  57% increase  NA
San Diego technical graphics  24% defect reduction  40% increase  42% reduction

Extent of reuse (NASA/Goddard's SE Lab):
- Reused verbatim: the code in the unit is reused without any change
- Slightly modified: fewer than 25% of the lines of code in the unit are modified
- Extensively modified: 25% or more of the lines of code are modified
- New: none of the code comes from a previously constructed unit

3.7 Reuse (2)


Example: Software Engineering Laboratory
- 20% reused lines of code overall
- 30% reused lines of Ada code

Reuse at HP: percent reuse plotted against program size in thousands of non-comment source lines (0.55, 0.73, 0.80, 2.21, 2.85, 3.09) (chart omitted).

3.7 Reuse (3)


Example: Programming Research Ltd.

Product             Reusable LOC  Total LOC  Reuse ratio (%)
QAC                 40 900        82 300     50
QA Fortran          34 000        73 000     47
QA Manager (X)      18 300        50 100     37
QA Manager (Motif)  18 300        52 700     35
QA C++              40 900        82 900     49

3.8 Functionality (1)


Albrecht's approach: function points are intended to measure the amount of functionality in a system as described by a specification. The following item types are counted to compute an unadjusted function point count (UFC):
- External inputs: items provided by the user that describe distinct application-oriented data (such as file names), not including inquiries.
- External outputs: items provided to the user that generate distinct application-oriented data (such as reports and messages).
- External inquiries: interactive inputs requiring a response.
- External files: machine-readable interfaces to other systems.
- Internal files: logical master files in the system.

3.8 Functionality (2)


The unadjusted function point count (UFC) weights each item type by a complexity weight wi:

Item                Low complexity  Medium complexity  High complexity
External inputs     3               4                  6
External outputs    4               5                  7
External inquiries  3               4                  6
External files      7               10                 15
Internal files      5               7                  10

3.8 Functionality (3)


Example: a simple spelling checker.
- The DFD (figure omitted).
- A = # external inputs = 2, B = # external outputs = 3, C = # inquiries = 2, D = # external files = 2, E = # internal files = 1

3.8 Functionality (4)


For the spelling checker example, the items are identified as follows:
- 2 external inputs: document file name, personal dictionary name
- 3 external outputs: misspelled-word report, number-of-words-processed message, number-of-errors message
- 2 external inquiries: words processed, errors
- 2 external files: document file, personal dictionary
- 1 internal file: dictionary

3.8 Functionality (5)


The complexity ratings are: simple, average, or complex.

For the spelling checker example, if we assume every item is of average complexity, then
  UFC = 4A + 5B + 4C + 10D + 7E = 58

If the dictionary file and the misspelled-word report are complex, then
  UFC = 4A + (5 × 2 + 7 × 1) + 4C + 10D + 10E = 63

With the technical complexity factor TCF = 0.93 (the range is 0.65 to 1.35; see the supplement to this section) for the spelling checker,
  FP = 63 × 0.93 ≈ 59

What is the FP count for? Suppose a task takes a developer an average of two person-days of effort per function point. Then we can estimate the effort to complete the spelling checker as 118 person-days (59 × 2).
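The UFC and FP arithmetic can be sketched as follows (a Python sketch; the dictionary keys and function names are mine, and the weights are the medium-complexity column of the table in this section):

```python
# Medium-complexity (average) weights from the FP weight table
AVERAGE_WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4,
                   "external_files": 10, "internal_files": 7}

def ufc(counts, weights=AVERAGE_WEIGHTS):
    """Unadjusted function point count: item counts times weights."""
    return sum(counts[item] * weights[item] for item in counts)

def tcf(factors):
    """Technical complexity factor from the 14 subfactors F1..F14."""
    return 0.65 + 0.01 * sum(factors)

# Spelling checker: A = 2, B = 3, C = 2, D = 2, E = 1
checker = {"inputs": 2, "outputs": 3, "inquiries": 2,
           "external_files": 2, "internal_files": 1}
# Six subfactors rated 3, two rated 5, six rated 0 -> TCF = 0.93
checker_tcf = tcf([3] * 6 + [5] * 2 + [0] * 6)
```

With all items at the medium weights the UFC comes to 58; rating the two complex items higher, as in the worked example, gives 63 and FP = round(63 × 0.93) = 59.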

3.8 Functionality (6)


Converting from function points to lines of code — programming language statements per function point (Note: [McConnell07] Table 18.3):

Language      Minimum (-1 SD)  Mode (most common)  Maximum (+1 SD)
C             60               128                 170
C#            40               55                  80
C++           40               55                  140
Java          40               55                  80
Smalltalk     10               20                  40
Visual Basic  15               32                  41

3.8 Simplified FP Techniques


The Dutch method:
  Indicative function point count = (35 × # internal files) + (15 × # external files)
The numbers 35 and 15 were derived through calibration; you can derive your own calibration for use in your environment.
- Use the Dutch method of counting function points to obtain a low-cost ballpark estimate early in the project.
- Example: for the spelling checker, the indicative count = 35 × 1 + 15 × 2 = 65, close to the UFC of 63 computed earlier.

Supplement to Section 3.8


Components of the technical complexity factor:

F1  Reliable back-up and recovery   F8   Online update
F2  Data communications             F9   Complex interface
F3  Distributed functions           F10  Complex processing
F4  Performance                     F11  Reusability
F5  Heavily used configuration      F12  Installation ease
F6  Online data entry               F13  Multiple sites
F7  Operational ease                F14  Facilitate change

TCF = 0.65 + 0.01 Σ Fi (i = 1 to 14)

For the spelling checker: F3, F5, F9, F11, F12, and F13 are 0 (the sub-factor is irrelevant); F1, F2, F6, F7, F8, and F14 are 3 (average); F4 and F10 are 5 (essential to the system being built). So TCF = 0.65 + 0.01 × (18 + 10) = 0.93.

Note: the factor ranges from 0.65 (if each Fi is set to 0) to 1.35 (if each Fi is set to 5).

3.9 Complexity
We define:
- Complexity of a problem: the amount of resources required for an optimal solution to the problem.
- Complexity of a solution: the resources needed to implement a particular solution.
  - Time complexity: where the resource is computer time
  - Space complexity: where the resource is computer memory

To measure and express complexity, we measure the efficiency of a solution, that is, algorithmic efficiency.
- Example: binary search. For a sorted list of n elements, the binary search algorithm terminates after at most O(log2 n) comparisons.

Big-O notation
- Example: the problem of searching a sorted list for a single item can be shown to have complexity O(log2 n); that is, the fastest algorithm that solves the problem requires O(log2 n) comparisons in the worst case.
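To make the binary-search example concrete, here is a Python sketch that also counts comparisons (assumption: one element comparison per loop iteration):

```python
def binary_search(items, target):
    """Search a sorted list, returning (index or None, comparisons made).
    The comparison count is at most about log2(n) + 1."""
    lo, hi, comparisons = 0, len(items) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if items[mid] == target:
            return mid, comparisons
        elif items[mid] < target:
            lo = mid + 1        # discard the lower half
        else:
            hi = mid - 1        # discard the upper half
    return None, comparisons

# Searching 1024 sorted elements takes at most 11 comparisons
index, count = binary_search(list(range(1024)), -1)
```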

Exercise
1. Explain very briefly the idea behind Albrecht's function points measure.
2. List the main applications of function points.
3. Compare function points with the lines-of-code measure.

Note: [Fenton97] Chapter 7.


4. Measuring Internal Product Attributes (2)


4.1 Structure
The structure of requirements, design, and code may help developers understand the difficulty they sometimes have in converting one product to another, in testing a product, or in predicting external software attributes (such as maintainability, testability, reusability, and reliability) from early internal product measures. The structure of a product plays a part not only in the development effort required but also in how the product is maintained.

Types of structural measures:
- Control-flow structure: the sequence in which instructions are executed in a program.
- Data-flow structure: the trail of a data item as it is created and handled by a program.
- Data structure: the organization of the data itself, independent of the program.

4.2 Control-Flow Structure (1)


McCabe's cyclomatic complexity measure
- Definition: the cyclomatic number V(G) of a graph G with n vertices, e edges, and p connected components is V(G) = e - n + p.
- Theorem: in a strongly connected graph G, the cyclomatic number is equal to the maximum number of linearly independent circuits.

(Note: graph excerpted from McCabe 1976; figure omitted.)

4.2 Control-Flow Structure (2)


Properties of cyclomatic complexity:
- V(G) ≥ 1.
- V(G) is the maximum number of linearly independent paths in G; it is the size of a basis set.
- Inserting or deleting functional statements in G does not affect V(G).
- G has only one path if and only if V(G) = 1.
- Inserting a new edge in G increases V(G) by unity.
- V(G) depends only on the decision structure of G.

The control graphs of the usual constructs in structured programming (figure omitted).
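For a program flowgraph with separate entry and exit nodes, McCabe's number is usually computed as V = e - n + 2p. A minimal Python sketch under that assumption (the edge representation is mine):

```python
def cyclomatic_complexity(edges, p=1):
    """V(G) = e - n + 2p for a program flowgraph given as a list of
    (source, target) edges; p is the number of connected components."""
    nodes = {v for edge in edges for v in edge}
    return len(edges) - len(nodes) + 2 * p

# if-then-else: one decision, so V = 2
edges = [("entry", "then"), ("entry", "else"),
         ("then", "join"), ("else", "join"), ("join", "exit")]
```

A straight-line sequence of statements gives V = 1, matching the property that only decisions raise the number.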

4.2 Control-Flow Structure (3)


The cyclomatic number is a useful indicator of how difficult a program or module will be to test and maintain. When V exceeds 10 in any one module, the module may be problematic.
- Example: Channel Tunnel rail system — a module is rejected if its V exceeds 20 or if it has more than 50 statements (Bennett 1994).

4.3 Modularity and Information Flow Attributes (1)


Module: "a contiguous sequence of program statements, bounded by boundary elements, having an aggregate identifier" (Yourdon and Constantine 1979).
- A module can be any object: a program, unit, procedure, or function.

Inter-module attributes
- Example: design charts (excerpted from [Fenton97]; figure omitted).

4.3 Modularity and Information Flow Attributes (2)


Module call-graph: an abstract model of the design. For example:
- Module a calls b, c
- Module b calls d
- Module c calls d, e

4.3 Modularity and Information Flow Attributes (3)


Coupling is the degree of interdependence between modules (Yourdon and Constantine 1979). Classification for coupling (Ri is worse than Rj for i > j):
- R0: modules x and y have no communication.
- R1, data coupling: x and y communicate by parameters, where each parameter is either a single data element or a homogeneous set of data items (no control element). This type of coupling is necessary for any communication between modules.
- R2, stamp coupling: x and y accept the same record type as a parameter.
- R3, control coupling: x passes a parameter (flag) to y with the intention of controlling its behavior.
- R4, common coupling: x and y refer to the same global data.
- R5, content coupling: x refers to the inside of y.

Loose coupling: i is 1 or 2. Tight coupling: i is 4 or 5.

4.3 Modularity and Information Flow Attributes (4)


Example: coupling-model graph (figure omitted). Each edge is labeled (n.m), where n identifies the coupling relation Rn and m counts the interconnections:
- M1 and M2 share two common record types: R2
- M1 passes to M3 a parameter that acts as a flag in M3: R3
- M2 branches into module M4 and passes two parameters that act as flags in M4: R3 and R5

Measuring coupling between x and y: c(x, y) = i + n/(n+1), where i is the number corresponding to the worst coupling relation Ri between x and y, and n is the number of interconnections between x and y (Fenton and Melton 1990).
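The Fenton-Melton pairwise measure translates directly into code. A sketch (the function name is mine):

```python
def coupling(worst_relation, interconnections):
    """c(x, y) = i + n/(n+1), where i is the index of the worst
    coupling relation R_i between x and y, and n is the number of
    interconnections between them."""
    return worst_relation + interconnections / (interconnections + 1)

# M2 and M4 above: worst relation R5, two interconnections
c_m2_m4 = coupling(5, 2)   # 5 + 2/3
```

The n/(n+1) term keeps the fractional part below 1, so the worst relation always dominates: any R5 pair scores higher than any R4 pair, however many interconnections the latter has.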

4.3 Modularity and Information Flow Attributes (5)


z

Cohesion (Yourdon and Constantine 1979):


Functional: the module performs a single well-defined function.
Sequential: the module performs more than one function, but they occur in an order prescribed by the specification.
Communicational: the module performs more than one function, but all on the same body of data.
Procedural: the module performs more than one function, and they are related only to a general procedure.
Temporal: the module performs more than one function, and they are related only by being performed within the same time span.
Coincidental: the module performs more than one function, and they are unrelated.

Cohesion ratio = # of modules having functional cohesion / total # of modules

70 WTH07

4.3 Modularity and Information Flow Attributes (6)


z

Information flow (Henry and Kafura's measure, 1981).


Information flow complexity(M) = length(M) x (fan-in(M) x fan-out(M))^2
Example:
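A sketch of the measure for a single module; the module data below is hypothetical:

```python
# Henry-Kafura information flow for one module M:
#   complexity(M) = length(M) * (fan_in(M) * fan_out(M))**2

def hk_complexity(length, fan_in, fan_out):
    return length * (fan_in * fan_out) ** 2

# Hypothetical module: 50 LOC, 3 information flows in, 2 flows out.
print(hk_complexity(50, 3, 2))   # 50 * (3*2)**2 = 1800
```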

71 WTH07

4.4 Data Structure


z

The amount of data


Halstead's software science: η2 (the number of distinct operands) or N2 (the total number of occurrences of operands) as the data measure.
COCOMO (Constructive Cost Model): D/P = database size in bytes or characters / program size in DSI, where DSI is the number of delivered source instructions.

Rating       D/P                 Data multiplier
Low          D/P < 10            0.94
Nominal      10 <= D/P < 100     1.00
High         100 <= D/P < 1000   1.08
Very high    D/P >= 1000         1.16

Example: If DATA is rated very high, then the cost of a project is increased by 16%. If DATA is rated low, the cost is reduced to 94%.
72 WTH07

Exercises
1. The following flowgraph is a truly unstructured spaghetti prime.
   What is its essential complexity?
2. A good design should exhibit high module cohesion and low module coupling. Briefly describe what you understand this assertion to mean.
3. McCabe's cyclomatic number is a classic example of a software metric. Which software entity and attribute do you believe it really measures?

73 WTH07

5. Measuring External Product Attributes

74 WTH07

5.1 External Attributes


z External attributes (Figure: functionality, software quality; quality impacts: time, quality, effort)

We predict external attributes by measuring and analyzing internal attributes, because:
The internal attributes are often available for measurement early in the life cycle, whereas external attributes are measurable only when the product is complete. Internal attributes are often easier to measure than external ones.

75 WTH07

5.2 Quality Model


z

ISO 9126 standard


Functionality
Reliability
Usability
Efficiency
Maintainability
Portability

76 WTH07

5.3 Definitions (1)


z z

Functionality: the functions supplied by the product to the user Portability (*):
A set of attributes that bear on the capability of software to be transferred from one environment to another.
Portability = 1 - ET/ER
where ET is a measure of the resources needed to move the system to the target environment, and ER is a measure of the resources needed to create the system for the resident environment.

(*) To be described in details in section 7.1.

77 WTH07

5.3 Definitions (2)


z

Reliability (*)
A set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time.
defect density = # of known defects / product size
where product size is measured in terms of LOC, and the known defects are discovered through testing, inspection, or other techniques.

(*) To be described in detail in the next section.

78 WTH07

Other quality measures:
system spoilage = time to fix post-release defects / total system development time
Hitachi example:

79 WTH07

5.3 Definitions (3)


z

Efficiency.
Example: Suppose we implement the Heapsort sorting algorithm in a machine environment where comparison operations are performed at the rate of 2^20 per second. If we sort a list of n = 2^25 items, we need about n log2(n) comparisons (log2(n) = 25 in this case), that is, 25 x 2^25 comparisons. The response time of the machine must therefore be at least 800 seconds (13.3 minutes). In software, efficiency can be expressed by the response time.
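The arithmetic of the example can be checked directly:

```python
# Sorting n = 2**25 items needs about n * log2(n) = 25 * 2**25 comparisons;
# at 2**20 comparisons per second the machine needs
# 25 * 2**25 / 2**20 = 25 * 32 = 800 seconds.
n = 2 ** 25
comparisons = 25 * n          # n * log2(n), since log2(n) = 25
rate = 2 ** 20                # comparisons per second
seconds = comparisons / rate
print(seconds, seconds / 60)  # 800.0 seconds, ~13.3 minutes
```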

80 WTH07

5.3 Definitions (4)


z

Usability:
The extent to which the software product is convenient and practical to use (Boehm 1978). Good usability includes:
Well-structured manuals
Good use of menus and graphics
Informative error messages
Help functions
Consistent interfaces

81 WTH07

5.3 Definitions (5)


z

Maintainability:
Many measures of maintainability are expressed in terms of MTTR (mean time to repair). Records needed to calculate this measure:
Problem recognition time
Administrative delay time
Maintenance tools collection time
Problem analysis time
Change specification time
Change time (including testing and review)

z The guideline: cyclomatic number < 10.
z Thus, maintainability in a real software system is affected by a wide range of system design decisions.

82 WTH07

Example: A decomposition of maintainability

83 WTH07

Exercise
z The most commonly used software quality measure in industry is the number of faults per thousand lines of product source code. Compare the usefulness of this measure for developers and users. List some possible problems with this measure.

84 WTH07

6. Software Reliability
Software reliability is a key concern of many users and developers of software. Reliability is defined in terms of failures, so it is impossible to measure before development is complete. However, several software reliability growth models may aid the estimation, by carefully collecting data on inter-failure times.

85 WTH07

6.1 Probability Density Function (1)


z

Probability density function (pdf) f(t) describes the uncertainty about when the component will fail :
Probability of failure between t1 and t2 = ∫[t1,t2] f(t) dt

86 WTH07

6.1 Probability Density Function (2)


Example: Uniform pdf f(t) = 1/x for t ∈ [0, x]; the failure time is bounded.

A component has a maximum life span of 10 hours, i.e., it is certain to fail within 10 hours of use. Suppose that the component is equally likely to fail during any two time periods of equal length within the 10 hours: it is just as likely to fail in the first two minutes as in the last two minutes. We can illustrate this behavior with the pdf f(t) as shown in the above figure. The function is defined to be 1/10 for any t between 0 and 10, and 0 for any t > 10. We say it is uniform in the interval of time from t=0 to t=10. For any x we can define the uniform pdf over the interval [0, x] to be 1/x for any t in that interval and 0 elsewhere.
87 WTH07

6.1 Probability Density Function (3)


Example: Failure occurs purely randomly, that is, failure is statistically independent of the past. The following figure shows the exponential pdf: f(t) = λe^(-λt)

The probability that the component fails in a given interval [t1, t2] is ∫[t1,t2] f(t) dt. For example, the probability of failure between time 0 and time 2 hours is 1 - e^(-2λ): when λ = 1 this probability of failure is 0.86, and when λ = 3 it is 0.998.
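A quick check of these probabilities for the exponential pdf:

```python
import math

# For f(t) = lam * exp(-lam * t), the probability of failure in [0, t]
# is F(t) = 1 - exp(-lam * t).
def p_fail_by(t, lam):
    return 1.0 - math.exp(-lam * t)

print(round(p_fail_by(2, 1), 2))   # 0.86
print(round(p_fail_by(2, 3), 3))   # 0.998
```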

88 WTH07

6.1 Probability Density Function (4)


z

The distribution function: the cumulative distribution function F(t) is the probability of failure between time 0 and a given time t:
F(t) = ∫[0,t] f(u) du

The reliability function (also called the survival function):

R(t) = 1 - F(t)

Distribution function

Survival function
89 WTH07

6.1 Probability Density Function (5)


z

Example: Distribution function and reliability function for the uniform density function. Consider the pdf that is uniform over the interval [0, 1]: f(t) = 1 for each t between 0 and 1. The cumulative distribution function F(t) and the survival function R(t) are:

F(t) = ∫[0,t] f(u) du = ∫[0,t] 1 du = t

R(t) = 1 - F(t) = 1 - t

90 WTH07

6.1 Probability Density Function (6)


z

Example: Distribution function and reliability function for f(t) = λe^(-λt):


F(t) = 1 - e^(-λt)
R(t) = e^(-λt)

91 WTH07

6.2 Mean Time to Failure (1)


z

The mean time to failure (MTTF): the mean of the probability density function, i.e., the expected value E(T) of T: E(T) = ∫ t f(t) dt

Examples:
For the uniform pdf 1/x on [0, x], the MTTF is x/2 (5 hours for the 10-hour component above).
For the exponential pdf f(t) = λe^(-λt), the MTTF is 1/λ.
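Both examples can be verified numerically; the midpoint-sum integrator below is a throwaway helper, not a standard routine:

```python
import math

# Approximate MTTF = integral of t * f(t) dt with a midpoint Riemann sum.
def mttf(pdf, upper, steps=100_000):
    h = upper / steps
    return sum(((i + 0.5) * h) * pdf((i + 0.5) * h) * h for i in range(steps))

uniform_pdf = lambda t: 1 / 10                # uniform on [0, 10]
exp_pdf = lambda t: 3 * math.exp(-3 * t)      # exponential with lambda = 3

print(round(mttf(uniform_pdf, 10), 3))        # 5.0   (x/2 with x = 10)
print(round(mttf(exp_pdf, 20), 3))            # 0.333 (1/lambda)
```

The upper limit 20 for the exponential case truncates a negligible tail.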

92 WTH07

6.2 Mean Time to Failure (2)


z

To fix failure after each occurrence:

93 WTH07

6.2 Mean Time to Failure (3)


z

Reliability growth: the new (repaired) component should fail less often than the old one; that is, the mean inter-failure times grow: f(i+1) > f(i).
Mean time between failures (MTBF) = MTTF + MTTR, where MTTR is the mean time to repair.
Availability: the probability that a component is operating at a given point in time.
Availability = MTTF / (MTTF + MTTR) * 100%
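A small sketch with hypothetical MTTF/MTTR figures:

```python
# Availability = MTTF / (MTTF + MTTR), expressed as a percentage.
def availability(mttf, mttr):
    return mttf / (mttf + mttr) * 100

# Hypothetical component: MTTF = 1000 hours, MTTR = 50 hours.
print(round(availability(1000, 50), 1))   # 95.2 (%)
```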

94 WTH07

Exercises
1. Why is reliability an external attribute of software?
2. List three internal software product attributes that could affect reliability.
3. Suppose you can remove 50% of all faults resident in an operational software system. What corresponding improvements would you expect in the reliability of the system?

95 WTH07

7. Resource Measurement

96 WTH07

7.1 Productivity (1)


z

Productivity
Productivity equation: productivity = size (lines of code) / effort (person-months)
Difficulty of measuring effort
Measuring productivity based on function points: # of function points implemented / person-months

Function points: a review
External inputs, external outputs, external inquiries, external files, internal files
The function-based measure more accurately reflects the value of output
It can be used to assess the productivity of software development staff at any stage in the life cycle
Measuring progress by comparing completed function points with incomplete ones

97 WTH07

7.1 Productivity (2)


z

Example: Distribution of US software project size in function points (Jones, 1991)

% of US software projects

Application size in function points


98 WTH07

7.1 Productivity (3)


z

Productivity ranges and modes for selected s/w project sizes


Productivity rate (FP per P-M)   100 FPs   1000 FPs   10000 FPs
>100                              1.0 %     0.01 %     0.0 %
75-100                            3.0 %     0.1 %      0.0 %
50-75                             7.0 %     1.0 %      0.0 %
25-50                            15.0 %     5.0 %      0.1 %
15-25                            40.0 %    10.0 %      1.4 %
5-15                             25.0 %    50.0 %     13.5 %
1-5                              10.0 %    30.0 %     70.0 %
<1                                4.0 %     4.0 %     15.0 %

99 WTH07

7.2 Method and Tool (1)


z

The original COCOMO model (*) includes two such cost drivers:


Use of software tools

Example: For the project rated low in use of tools, COCOMO includes an 8% increase in project effort compared to the normal (value 1.00) use of tools.

Note: The COCOMO model will be described later.

100 WTH07

7.2 Method and Tool (2)


Use of modern programming practices.

Tool use categories (COCOMO 2.0):

101 WTH07

Exercise
z

Other than personnel, which software development resources can be assessed in terms of productivity? How would you define and measure productivity for these entities?

102 WTH07

8. Process Prediction

103 WTH07

Good Estimates
z

Process predictions guide decision-making from before development begins, through the development process, during the transition of the product to the customer, and while the software is being maintained.

104 WTH07

8.1 Making Process Predictions


z

What is an estimate?
An ideal case: the probability density of the completion time is a normal distribution, with the estimate at the median. (Figure: pdf over the number of months to complete the project.)

For example, the probability of completing the project in [8 months, 16 months] is 0.9, while the probability of completing it in less than 12 months is 0.5.
105 WTH07

8.2 Models of Effort and Cost


z

Regression-based models: E = (a S^b) F; without adjustment, log E = log a + b log S.

F is the effort adjustment factor, e.g., based on staff experience:
Low experience (say < 8 years): F = 1.3
Medium experience (8-10 years): F = 1.0
High experience (> 10 years): F = 0.7
106 WTH07

8.3 COCOMO Effort (1)


z

Constructive Cost Model (Barry Boehm, 1970s), there are three models:
Basic model: used when little about the project is known.
Intermediate model: used after requirements are specified.
Advanced model: used when design is complete.
E = a S^b F
where E is effort in person-months, S is size in thousands of delivered source instructions (KDSI), and F is an adjustment factor (= 1 in the basic model).

107 WTH07

8.3 COCOMO Effort (2)


The values of a and b depend on the development mode:
Organic system: data processing (using databases and focusing on transactions and data retrieval), e.g., banking and accounting systems.
Embedded system: real-time software, e.g., a missile guidance system.
Semi-detached system: between organic and embedded systems.

Mode            a     b
Organic         2.4   1.05
Semi-detached   3.0   1.12
Embedded        3.6   1.20

z

Example: Telephone switching system (the system will require approximately 5000 KDSI):
E = 3.6 x (5000)^1.2 ≈ 100,000 P-M
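The example can be reproduced with the basic-model coefficients:

```python
# Basic COCOMO effort E = a * S**b (adjustment factor F = 1), for the
# telephone switching example: embedded mode, S = 5000 KDSI.
COEFFS = {"organic": (2.4, 1.05), "semi-detached": (3.0, 1.12),
          "embedded": (3.6, 1.20)}

def cocomo_effort(mode, kdsi):
    a, b = COEFFS[mode]
    return a * kdsi ** b

effort = cocomo_effort("embedded", 5000)
print(round(effort))   # roughly 100,000 person-months
```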

108 WTH07

8.3 COCOMO Effort (3)


z

Cost drivers for original COCOMO


Product attributes: Required software reliability Database size Product complexity Use of modern programming practices Use of software tools Required development schedule

Process attributes:

109 WTH07

8.3 COCOMO Effort (4)


Resource attributes: Computer attributes:

Personnel attributes:

Execution time constrains Main storage constrains Virtual machine volatility Computer turnaround time Virtual machine experience Analyst capability Applications experience Programming capability Language experience

110 WTH07

8.4 COCOMO - Duration


z

The duration model: D = a E^b


where D is duration in months and E is effort in person-months. The coefficients depend on the development mode:

Mode            a     b
Organic         2.5   0.38
Semi-detached   2.5   0.35
Embedded        2.5   0.32

Suppose an embedded project requires 3000 person-months of effort:
D = 2.5 x (3000)^0.32 ≈ 32 months
That is, the project requires about 93 (3000/32) staff working for 32 months to complete the software.
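A sketch of the duration model across all three modes for the same 3000 person-month effort, using the standard COCOMO coefficients (a = 2.5; b = 0.38, 0.35, 0.32 for organic, semi-detached, embedded):

```python
# COCOMO duration D = 2.5 * E**b for E = 3000 person-months.
EXPONENTS = {"organic": 0.38, "semi-detached": 0.35, "embedded": 0.32}

durations = {mode: 2.5 * 3000 ** b for mode, b in EXPONENTS.items()}
staffing = {mode: 3000 / d for mode, d in durations.items()}

for mode in EXPONENTS:
    print(mode, round(durations[mode], 1), "months,",
          round(staffing[mode]), "staff")
```

The embedded mode gives roughly 32 months with about 93 staff; the organic exponent would stretch the same effort over about 52 months.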

111 WTH07

8.5 COCOMO II (1)


z

Boehm and his colleagues have defined an updated COCOMO, called COCOMO II, because the original COCOMO is inflexible and inaccurate for newer development styles, such as heavy tool use, reengineering, application generators, and object-oriented approaches.

112 WTH07

8.5 COCOMO II (2)


z

COCOMO II estimation process:


Stage 1: The project usually builds prototypes to resolve high-risk issues involving user interfaces, software and system interaction, performance, or technological maturity. Little is known about the likely size of the final product, so COCOMO II estimates size in object points.
Stage 2: A decision has been made to move forward with development, but there is not enough information to support fine-grained effort and duration estimation. COCOMO II employs function points as a size measure; since function points estimate the functionality captured in the requirements, they offer a richer system description than object points.
Stage 3: This stage corresponds to the original COCOMO model: size can be expressed in terms of lines of code, and many cost factors can be estimated with some degree of comfort.

113 WTH07

8.5 COCOMO II (3)


z

Comparison of original and COCOMO II models:

Model aspect: Size
  Original COCOMO: Delivered source instructions or SLOC
  COCOMO II Stage 1: Object points
  COCOMO II Stage 2: FP and language
  COCOMO II Stage 3: FP and language, or SLOC

Model aspect: Reuse
  Original COCOMO: Equivalent SLOC
  COCOMO II Stage 1: Implicit in model
  COCOMO II Stage 2: % unmodified reuse, % modified reuse (determined by function)
  COCOMO II Stage 3: Equivalent SLOC as function of other variables

Model aspect: Scale (exponent)
  Original COCOMO: Organic: 1.05, Semi-detached: 1.12, Embedded: 1.20
  COCOMO II Stage 1: 1.0
  COCOMO II Stage 2: 1.02 to 1.26, depending on conformity, precedent, early architecture, SEI process, etc.
  COCOMO II Stage 3: 1.02 to 1.26, depending on conformity, precedent, early architecture, SEI process, etc.

114 WTH07

Model aspect: Product cost drivers
  Original COCOMO: Reliability, database size, product complexity
  COCOMO II Stage 1: None
  COCOMO II Stage 2: Complexity, required reusability
  COCOMO II Stage 3: Reliability, database size, documentation needs, product complexity

Model aspect: Platform cost drivers
  Original COCOMO: Execution time constraints, main storage constraints, computer turnaround time
  COCOMO II Stage 1: None
  COCOMO II Stage 2: Platform difficulty
  COCOMO II Stage 3: Execution time constraints, main storage constraints

Model aspect: Personnel cost drivers
  Original COCOMO: Analyst capability, applications experience, programmer capability, programming language experience
  COCOMO II Stage 1: None
  COCOMO II Stage 2: Personnel capability and experience
  COCOMO II Stage 3: Analyst capability, applications experience, programmer capability, language and tool experience, continuity

Model aspect: Project cost drivers
  Original COCOMO: Use of modern programming practices, use of software tools, required development schedule
  COCOMO II Stage 1: None
  COCOMO II Stage 2: Required development schedule, development environment
  COCOMO II Stage 3: Use of software tools, required development schedule

115 WTH07

8.6 Putnams SLIM Model (1)


z

Customer Perspective at Start of Project


A set of general functional requirements the system is supposed to perform;
A desired schedule;
A desired cost.

Certain things are unknown or very fuzzy


Size of the system;
Feasible schedule;
Minimum person-power and cost consistent with a feasible schedule.

116 WTH07

8.6 Putnams SLIM Model (2)


z

Assuming that technical feasibility has been established, the customer really wants to know:
Product size, ± a reasonable percentage variation;
A do-able schedule, ± a reasonable variation;
The person-power and dollar cost for development, ± a reasonable variation;
A projection of the software modification and maintenance cost during the operational life of the system.

117 WTH07

8.6 Putnams SLIM Model (3)


z

Four parameters of concern to a manager:


Effort
Development time
Elapsed time
A state-of-technology parameter

These parameters provide sufficient information to assess the financial risk and investment value of a new software development project.

118 WTH07

8.6 Putnams SLIM Model (4)


z

Rayleigh curves for SLIM model (*)

(*) Excerpted from [Putnam78]

119 WTH07

8.6 Putnams SLIM Model (5)


z

SLIM uses separate Rayleigh curves for


Design and code
Test and validation
Maintenance management

120 WTH07

8.6 Putnams SLIM Model (6)


z

The Effort-Time Tradeoff Law (the software equation): Size = Ck K^(1/3) td^(4/3), where K is the life-cycle effort, td is the development time, and Ck is a state-of-technology constant which depends on the development environment, e.g., Ck = 10040 for an environment with on-line interactive development, structured coding, less fuzzy requirements, and fairly unconstrained machine access.
Example: The SLIM software equation implies that a 10% decrease in elapsed time results in a 52% increase in total life-cycle effort.
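The 52% figure follows directly from the software equation: for fixed size and technology constant, K is proportional to 1/td^4.

```python
# Size = Ck * K**(1/3) * td**(4/3) with size and Ck fixed implies
# K ~ 1/td**4, so cutting the schedule td by 10% multiplies K by (1/0.9)**4.
increase = (1 / 0.9) ** 4
print(round((increase - 1) * 100))   # 52 (% more life-cycle effort)
```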

121 WTH07

8.6 Putnams SLIM Model (7)


z

To allow effort or duration estimation, Putnam introduced the manpower-acceleration equation: D0 = K / td^3, where D0 is a constant called manpower acceleration.
Example: manpower acceleration D0 = 12.3 for new software with many interfaces and interactions with other systems, 15 for stand-alone systems, and 27 for re-implementations of existing systems.
Combining the two equations (the software equation and the manpower-acceleration equation), we can derive the effort:
K = (Size/Ck)^(9/7) D0^(4/7)
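A quick numerical check of the derivation; the Size, Ck, and D0 figures below are hypothetical, chosen only to exercise the formulas:

```python
# From Size = Ck * K**(1/3) * td**(4/3) and D0 = K / td**3
# it follows that K = (Size/Ck)**(9/7) * D0**(4/7).
size, ck, d0 = 200_000.0, 10_040.0, 15.0   # D0 = 15: stand-alone system

K = (size / ck) ** (9 / 7) * d0 ** (4 / 7)
td = (K / d0) ** (1 / 3)                   # from D0 = K / td**3

check = ck * K ** (1 / 3) * td ** (4 / 3)  # should reproduce the size
print(round(K, 1), round(td, 2), round(check))
```

The derived K and td reproduce the original software equation, confirming the algebra.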

122 WTH07

8.6 Putnams SLIM Model (8)


z

The software differential equation


The rate of accomplishment is proportional to the pace of the work times the amount of work remaining to be done, that is, dy/dt = 2at(K - y), where 2at is the pace and (K - y) is the work remaining to be done. Differentiating once more with respect to time, and substituting a = 1/(2 td^2), we get the software differential equation:
d^2y/dt^2 + (t/td^2) dy/dt + y/td^2 = K/td^2 = D

123 WTH07

8.6 Putnams SLIM Model (9)


z

The software differential equation is very useful because it can be solved step-by-step using the Runge-Kutta method. For example, for SIDPERS, the Army's Standard Installation/Division Personnel System:

t (year)   Coding rate (size/yr, in 000)   Cumulative code (in 000)
0             0                                0
0.5          52.8                             13.6
1.0          89.2                             50.0
1.5         101.0                             98.6
2.0          90.8                            147.0
3.0          44.2                            215.0
3.5          24.9                            232.5
3.65         20.33                           236.0
4.0          12.3                            241.0

Actual size is 256K, which is pretty close to this result.

SIDPERS parameters: K = 700 MY, td = 3.65, D = 52.54 MY/yr

124 WTH07

Supplement
z

The overall life-cycle manpower curve can be well represented by the Norden/Rayleigh form: dy/dt = 2 K a t e^(-a t^2) MY/yr, where a = 1/(2 td^2), td is the time at which dy/dt is a maximum, and K is the area under the curve from t = 0 to infinity, representing the nominal life-cycle effort in man-years. The cumulative effort expended by time t is: y = K (1 - e^(-a t^2))
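A sketch of the curve using the SIDPERS parameters quoted earlier in this section (K = 700 MY, td = 3.65 yr):

```python
import math

# Norden/Rayleigh staffing curve: dy/dt = 2*K*a*t*exp(-a*t**2),
# with a = 1/(2*td**2) and cumulative effort y(t) = K*(1 - exp(-a*t**2)).
K, td = 700.0, 3.65
a = 1 / (2 * td ** 2)

def manpower(t):                    # dy/dt, in MY/yr
    return 2 * K * a * t * math.exp(-a * t ** 2)

def cumulative(t):                  # y(t), in MY
    return K * (1 - math.exp(-a * t ** 2))

# Staffing peaks at t = td; by then the fixed fraction 1 - e^(-1/2)
# of the total life-cycle effort K has been expended.
print(round(cumulative(td) / K, 4))   # 0.3935
```

This fixed fraction is the source of the ~40%-of-life-cycle development-cost figure used below.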

125 WTH07

Neglecting the cost of computer test time, inflation over time, etc., the development cost is simply the average $COST/MY times the development effort:
$DEV = $COST/MY * (0.3944 K) = 40% of $LC, where $LC is the life-cycle cost.

126 WTH07

Exercises
1. According to the Rayleigh curve model, what is the effect of extending the delivery date by 20%?
2. Suppose that you are developing the software for a nuclear power plant control system. Select the most appropriate mode for the project, and use the COCOMO model to give a crude estimate of the total number of person-months required for the development, assuming that the estimated software size is 10,000 delivered source instructions.

127 WTH07

9. Object-Oriented Metrics

128 WTH07

9.1 Object-Oriented Metrics (1)

OO Metrics can provide insight into software size.


Number of scenario scripts (NSS): The number of scripts or use cases is directly proportional to the number of classes required to meet requirements, the number of states for each class, and the number of methods and attributes.
Number of key classes (NKC): A key class focuses directly on the business domain for the problem and will have a lower probability of being implemented via reuse. High values for NKC indicate substantial development work. It is suggested that between 20 and 40 percent of all classes in a typical OO system are key classes.
Number of subsystems (NSUB): The NSUB provides insight into resource allocation, schedule, and overall integration effort.

129 WTH07

9.1 Object-Oriented Metrics (2)


z

Number of scenario scripts.


To measure the amount of work on a project. Scenario scripts are written in certain OO methodologies to document and leverage the expected uses of the system. Scenario scripts should directly relate to satisfying requirements, since scripts exercise major functionality of the system being built.
An example (inventory management system):
User requests item info on InventoryQueryWin.
InventoryQueryWin sends item: aNumber to Inventory.
Inventory returns anItem to InventoryQueryWin.
InventoryQueryWin requests price from Item.

130 WTH07

9.1 Object-Oriented Metrics (3)


The number of scenario scripts is an indication of the size of the application to be developed. Script steps should relate to the public responsibilities of the subsystems and classes to be developed.
z

Number of key classes


Key classes are central to the business domain being developed.
The number of key classes is an indicator of the volume of work needed in order to develop an application.
The number of key classes is an indicator of the amount of long-term reusable objects.

131 WTH07

9.1 Object-Oriented Metrics (4)


z

How to determine if a class is key by asking the following questions


Could I easily develop applications in this domain without this class?
Would a customer consider this object important?
Do many scenarios involve this class?

Examples of key classes for some problem domains:


Retail: SalesTransaction, LineItem, Currency
Telephony: Call, Connection, Switch
Banking: SavingAccount, Currency, Customer

132 WTH07

9.1 Object-Oriented Metrics (5)


z

Number of support classes


Support classes include user interface classes. They are not central to the business domain (UI, communications, database, file, collection, string, and so on). Support classes give us a handle on estimating the size of the effort.
Type of user interface:
The number of support classes varies from one to three times the number of key classes (in experience). The variance is primarily affected by the type of UI. UI-intensive projects have two to three times as many support classes as key classes. Non-UI-intensive projects have zero to two times as many support classes as key classes.
Example: If there are 100 key classes and a GUI is used, an early estimate might be for 300 total classes in the application.

133 WTH07

9.1 Object-Oriented Metrics (6)


z

Average number of support classes per key class


This metric can be used to help estimate the total number of classes that will result on a project, based on previous projects' results.
Example: If there are 100 key classes during the first weeks of analysis and the support-to-key ratio is 2.5, the total estimated number of classes for the final project is 100 + 250 = 350 classes.
User interface complexity:
The number of classes required to support a complex UI will be greater than for a simple interface, such as a GUI under Presentation Manager or Windows.

134 WTH07

9.1 Object-Oriented Metrics (7)

An estimating process
1. Use analysis techniques, such as parts of speech and scenario scripts, to discover a majority of the key classes in the problem domain.
2. Categorize the type of user interface found:
   No UI                        2.0
   Simple, text-based UI        2.25
   Graphical UI                 2.5
   Complex, drag-and-drop GUI   3.0

3. Multiply the number of key classes by the numbers from step 2. This is the early estimate of the total number of classes in the final system.

135 WTH07

9.1 Object-Oriented Metrics (8)

4. Multiply the total number of classes from step 3 by a number between 15 and 20 (person-days per class), based on factors such as:
The ratio of experienced to novice OO personnel
The number of reusable domain objects in the reuse library

This is an estimate of the amount of effort to build the system.
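The four steps above can be sketched as follows (the function name is illustrative, and 18 person-days per class is one arbitrary choice within the 15-20 range given in step 4):

```python
# Step 2 multipliers, keyed by user-interface type.
UI_MULTIPLIER = {"none": 2.0, "text": 2.25, "gui": 2.5, "drag-and-drop": 3.0}

def estimate(key_classes, ui_type, person_days_per_class=18):
    total_classes = key_classes * UI_MULTIPLIER[ui_type]   # steps 2-3
    effort_days = total_classes * person_days_per_class    # step 4
    return total_classes, effort_days

# 100 key classes behind a graphical UI:
print(estimate(100, "gui"))   # (250.0, 4500.0) -> ~4500 person-days
```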

136 WTH07

9.1 Object-Oriented Metrics (9)

Example from NASAs Software Engineering Laboratory (NASASEL).


The preliminary results of NASA-SEL research showed that OO represented the most important methodology studied by the SEL to date:
The amount of reuse had risen dramatically, from 20-30% before OO to 80% with OO.
OO programs were about 75% the length (in lines of code) of comparable traditional solutions (Stark 1993). But it was not clear which gains were due to Ada.

137 WTH07

9.2 Estimating Work with Use Cases (1)


[Schneider01] (*)
z

Example: Order Processing System use case diagram (extracted from OOSE courseware Chapter 4).

(*) The research by Gustav Karner of Objectory AB, 1993. Geri Schneider and Jason P. Winters, Applying Use Cases: A Practical Guide, 2nd Ed., Addison-Wesley, Boston, 2001.

138 WTH07

9.2 Estimating Work with Use Cases (2)


z

Weighting Actors.
Actor Type   Description                                  Factor
Simple       Program interface                            1
Average      Interactive, or protocol-driven, interface   2
Complex      Graphical interface                          3

where

A Simple Actor represents another system with a defined application programming interface.
An Average Actor is either another system that interacts through a protocol such as TCP/IP, or a person interacting through a text-based interface such as an ASCII terminal.
A Complex Actor is a person interacting through a graphical user interface.

139 WTH07

9.2 Estimating Work with Use Cases (3)

Order Processing System


Customer: complex
Inventory System: simple
Accounting System: simple
Customer Rep: complex
Warehouse Clerk: complex
Shipping Company: average

So,
2 simple * 1 = 2
1 average * 2 = 2
3 complex * 3 = 9
The total actor weight for OPS = 13

140 WTH07

9.2 Estimating Work with Use Cases (4)


z

Weighting Use Cases


Transaction-Based Weighting Factors
Use Case Type   Description                Factor
Simple          3 or fewer transactions    5
Average         4 to 7 transactions        10
Complex         More than 7 transactions   15

Analysis Class-Based Weighting Factors
Use Case Type   Description                     Factor
Simple          Fewer than 5 analysis classes   5
Average         5 to 10 analysis classes        10
Complex         More than 10 analysis classes   15
141 WTH07

9.2 Estimating Work with Use Cases (5)

Order Processing System


Create Order: average
Check Order: simple
Cancel Order: simple
Modify Existing Order: average
Confirm Order: simple
Check Customer's Credit: simple
Check Account: average
Fill Order: average
Shipping Order: simple
Send Email: simple

So,
6 simple * 5 = 30
4 average * 10 = 40
0 complex * 15 = 0
Total use case weight for OPS = 30 + 40 + 0 = 70

142 WTH07

9.2 Estimating Work with Use Cases (6)


z

Unadjusted Use Case Points.


The unadjusted use case points (UUCP) value is a raw number, to be adjusted to reflect the project's complexity and the experience of the people on the project: 13 + 70 = 83 UUCP for the Order Processing System.

143 WTH07

9.2 Estimating Work with Use Cases (7)


z

Weighting Technical Factors


Technical Factors for System and Weights
Factor   Description                                      Weight
T1       Distributed system                               2
T2       Response or throughput performance objectives    1
T3       End-user efficiency (online)                     1
T4       Complex internal processing                      1
T5       Code must be reusable                            1
T6       Easy to install                                  0.5
T7       Easy to use                                      0.5
T8       Portable                                         2
T9       Easy to change                                   1
T10      Concurrent                                       1
T11      Includes special security features               1
T12      Provides direct access for third parties         1
T13      Special user training facilities required        1
144 WTH07

9.2 Estimating Work with Use Cases (8)


z

Technical factors for system and weights for our OPS:


Factor Number & Description                                  Value (extended)
T1: Distributed system (yes)                                 2
T2: Response & throughput (likely limited by human input)    3
T3: Needs to be efficient (yes)                              5
T4: Complex internal processing (yes)                        1
T5: Code reusability (not yet, later)                        0
T6: Easy to install (yes)                                    2
T7: Easy for non-technical people to use (yes)               2
T8: Portability (not at this time)                           0
T9: Easy to change (yes)                                     3
T10: Concurrent (multi-user only)                            5
T11: Security (simple)                                       3
T12: Direct access for 3rd parties (yes)                     1
T13: Training (no)                                           0
145 WTH07

9.2 Estimating Work with Use Cases (9)

Technical Complexity Factor (TCP)


Rate each factor from 0 to 5. A rating of 0 means that the factor is irrelevant for this project; 5 means it is essential.

TFactor = Σ (Tlevel * Weight)
TCF = 0.6 + (0.01 * TFactor)

Rating the factors for National Widgets (say):
TFactor = 2 + 3 + 5 + 1 + 0 + 2 + 2 + 0 + 3 + 5 + 3 + 1 + 0 = 27
TCF = 0.6 + (0.01 * 27) = 0.87

146 WTH07

9.2 Estimating Work with Use Cases (10)


z

Environment Factor: the experience level of people on project.


Environmental Factors for Team and Weights
Factor   Description                      Weight
F1       Familiar with UP                 1.5
F2       Application experience           0.5
F3       OO experience                    1
F4       Lead analyst capability          0.5
F5       Motivation                       1
F6       Stable requirements              2
F7       Part-time workers                -1
F8       Difficult programming language   -1

147 WTH07

9.2 Estimating Work with Use Cases (11)

Environmental factors for team and weights for our OPS:

Factor number & description                                  Value (extended)
F1: Most of team unfamiliar with UP                          1.5
F2: Most of team has no application experience               0.5
F3: Most of team has no OO experience                        1
F4: Lead analyst capability (good)                           3
F5: Motivation (really eager)                                5
F6: Stable requirements (not enough)                         5
F7: Part-time workers (none)                                 0
F8: Difficult programming language, Java (looking for)       -1

148 WTH07

9.2 Estimating Work with Use Cases (12)


Rate each factor from 0 to 5.
For F1 through F4, 0 means no experience, 5 means expert, 3 means average.
For F5, 0 means no motivation for the project, 5 means high motivation, 3 means average.
For F6, 0 means extremely unstable requirements, 5 means unchanging requirements, 3 means average.
For F7, 0 means no part-time technical staff, 5 means all part-time technical staff, 3 means average.
For F8, 0 means easy-to-use programming language, 5 means very difficult programming language, 3 means average.
Now multiply each factor's rating by its weight from the table shown in the last slide to get the extended F factors.
EFactor = Σ (Flevel * Weight)
EF = 1.4 + (-0.03 * EFactor)

149 WTH07

9.2 Estimating Work with Use Cases (13)

The rating for OPS:
EFactor = 1.5 + 0.5 + 1 + 3 + 5 + 5 + 0 - 1 = 15
EF = 1.4 + (-0.03 * 15) = 0.95
Use case points: UCP = UUCP * TCF * EF
The use case points for National Widgets, and the basis for the final estimation of time to complete the project:
UCP = 83 * 0.87 * 0.95 = 68.6

150 WTH07

9.2 Estimating Work with Use Cases (14)

Project estimate.
Suppose we use a factor of 28 person-hours per UCP; then 68.6 * 28 ≈ 1921 hours ≈ 46 weeks at 42 hours a week for one person.

Note that the factor of 28 person-hours/UCP is used because we have 2 negative factors in the Environmental Factors for Team and Weights. If you have 4 people in a team and there are no problems of communication or synchronization of effort, you may assume that all of the team members work full-time. So, about 11 months of effort, plus 2 weeks for working out any team issues. The values of TCF, EF, and the factor of person-hours per UCP differ for every organization and are estimated according to experience.
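The whole use-case-points calculation can be replayed end to end with the counts and ratings given above (variable names are illustrative):

```python
# Karner's use-case-points estimate for the Order Processing System.
actor_w = {"simple": 1, "average": 2, "complex": 3}
case_w = {"simple": 5, "average": 10, "complex": 15}

actors = {"simple": 2, "average": 1, "complex": 3}   # actor counts
cases = {"simple": 6, "average": 4, "complex": 0}    # use case counts

uucp = (sum(actor_w[k] * n for k, n in actors.items()) +
        sum(case_w[k] * n for k, n in cases.items()))      # 13 + 70 = 83

# Extended technical and environmental factor values from the text:
tfactor = 2 + 3 + 5 + 1 + 0 + 2 + 2 + 0 + 3 + 5 + 3 + 1 + 0   # 27
efactor = 1.5 + 0.5 + 1 + 3 + 5 + 5 + 0 - 1                   # 15

tcf = 0.6 + 0.01 * tfactor     # 0.87
ef = 1.4 - 0.03 * efactor      # 0.95
ucp = uucp * tcf * ef

hours = ucp * 28               # 28 person-hours per UCP
print(round(ucp, 1), round(hours))   # 68.6 1921
```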

151 WTH07

10. Planning a Measurement Program

152 WTH07

10.1 A Measurement Program (1)

What is a metrics plan?


Why: the plan lays out the goals or objectives of the project.
Describing what questions need to be answered by project members and project management.

What: what will be measured.


For instance, productivity may be measured in terms of size and effort.

Where and When: measurement will be made during the process.


Some measurements are taken once, while others are made repeatedly and tracked over time.

How and Who: the identification of tools, techniques and staff available.

153 WTH07

10.1 A Measurement Program (2)


z

Why and what: Developing Goals-Questions-Metrics (GQM):


The GQM templates encourage managers to consider the context in which a question is being asked, as well as the viewpoint of the question. By deriving measurement from goals and questions, it is clear how the resulting data points will be related to one another and used in the larger context of the project.

154 WTH07

10.1 A Measurement Program (3)


z

Example: The importance of understanding the effects of tool use on productivity. Suppose that evaluating tool use is one of the major goals for a project. Several questions from the goals, including:
Which tools are used? Who is using each tool? How much of the project is affected by each tool? How much experience do developers have with the tool? What is productivity with tool use? Without tool use? What is product quality with tool use? Without tool use?

155 WTH07

10.1 A Measurement Program (4)


z

Example: Suppose that a project involves developing a complex piece of software, including a large database of sensor data. The sensor data capture routines are being written in C, while the data storage and manipulation routines are being written in Smalltalk. The project manager wants to track code productivity, so his/her metrics plan includes a GQM-derived question: Is the productivity for C development the same as for Smalltalk development? Productivity will be measured as size per person-day, where size can be counted in three ways: a count of objects and methods (operations), a count of lines of code, or a count of function points. Thus, the goals tell us why we are measuring, and the questions, metrics, and models tell us what to measure.
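The productivity comparison behind this question can be sketched as below; the size and effort numbers are illustrative placeholders, not data from the example.

```python
def productivity(size, person_days):
    """Productivity as size (in any agreed unit) per person-day."""
    return size / person_days

# Hypothetical counts for the two sub-systems:
c_productivity = productivity(12_000, 400)         # LOC/person-day for the C routines
smalltalk_productivity = productivity(6_000, 250)  # LOC/person-day for Smalltalk
ratio = c_productivity / smalltalk_productivity    # the GQM question in one number
```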

156 WTH07

10.2 CMM (1)


z

Capability Maturity Model (CMM):


Level 1 Initial: Few processes are defined; the success of development depends on individual efforts, not on team accomplishment.
Level 2 Repeatable: Basic project management processes track cost, schedule, and functionality. There is some discipline among team members. Success on earlier projects can be repeated on similar new ones.
Level 3 Defined: Management and engineering activities are documented, standardized, and integrated. The standard process is tailored to special needs; the result is a standard process for everyone in the organization.
Level 4 Managed: A managed process directs its efforts at product quality. The organization focuses on using quantitative information to make problems visible and to assess the effect of possible solutions.
Level 5 Optimizing: Quantitative feedback is incorporated in the process to produce continuous process improvement. New tools and techniques are tested and monitored to see how they affect the process and products.

157 WTH07

10.2 CMM (2)


z

Key process area in the CMM model (Paulk 1995)


Initial: none Repeatable:
Requirements management Software project planning Software project tracking and oversight Software subcontract management Software quality assurance Software configuration management

158 WTH07

10.2 CMM (3)


Defined:
Organization process focus Organization process definition Training program Integrated software management Software product engineering Intergroup coordination Peer reviews

Managed:
Quantitative process management Software quality management

Optimizing:
Defect prevention Technology change management Process change management

159 WTH07

10.2 CMMi (4)


z

Capability Maturity Model Integration (CMMI)

From Mike Philips: CMMIV1.1 Tutorial

160 WTH07

10.2 CMMi (5)


Level 5 Optimizing. Focus: continuous process improvement. Process areas: Organizational innovation and deployment; Causal analysis and resolution.

Level 4 Quantitatively Managed. Focus: quantitative management. Process areas: Organizational process performance; Quantitative project management.

Level 3 Defined. Focus: process standardization. Process areas: Requirements development; Technical solution; Product integration; Verification; Validation; Organizational process focus; Organizational process definition; Organizational training; Integrated project management; Integrated supplier management; Risk management; Decision analysis and resolution; Organizational environment for integration; Integrated teaming.

Level 2 Managed. Focus: basic project management. Process areas: Requirements management; Project planning; Project monitoring and control; Supplier agreement management; Measurement and analysis; Process and product quality assurance; Configuration management.

Level 1 Performed. Process areas: none.

161 WTH07

10.3 Extreme Programming (XP) and CMM (1)


z

XP's target is a small to medium-sized team (fewer than 10 people) building software with vague or rapidly changing requirements. The XP life cycle rests on four basic values:
Continual communication with the customer and within team; Simplicity, achieved by a constant focus on minimalist solutions; Rapid feedback through mechanisms such as unit and functional test; and The courage to deal with problems proactively.

162 WTH07

10.3 Extreme Programming (XP) and CMM (2)


z

XP method consists of 12 basic elements:


Planning game: Quickly determine the next release's scope, combining business priorities and technical estimates. Small releases: Put a simple system into production quickly; release new versions on a very short (say, 2-week) cycle. Metaphor: Guide all development with a simple, shared story of how the overall system works. Simple design: Design as simply as possible at any given moment. Testing: Developers continually write unit tests that must run flawlessly; customers write tests to demonstrate that functions are finished, that is, test, then code. Refactoring: Restructure the system without changing its behavior to remove duplication, improve communication, simplify, or add flexibility.

163 WTH07

Pair programming: All production code is written by two programmers at one machine. Collective ownership: Anyone can improve any system code anywhere at any time. Continuous integration: Integrate and build the system many times a day, every time a task is finished; run continual regression tests when requirements change. 40-hour weeks: Work no more than 40 hours per week whenever possible; never work overtime two weeks in a row. On-site customer: Have an actual user on the team full-time to answer questions. Coding standards: Have rules that emphasize communication throughout the code.

164 WTH07

XP satisfaction of key process area.


Level | Key process area | Satisfaction
2 | Requirements management | ++
2 | Software project planning | ++
2 | Software project tracking and oversight | ++
2 | Software subcontract management | --
2 | Software quality assurance | +
2 | Software configuration management | +
3 | Organization process focus | +
3 | Organization process definition | +
3 | Training program | --
3 | Integrated software management | --

165 WTH07

3 | Software product engineering | ++
3 | Intergroup coordination | ++
3 | Peer reviews | ++
4 | Quantitative process management | --
4 | Software quality management | +
5 | Defect prevention | +
5 | Technology change management | --
5 | Process change management | --

Note: ++ Largely addressed in XP; + partially addressed in XP; -- Not addressed in XP.

166 WTH07

Note that CMM supports a range of implementations through 18 key process areas (KPAs) and 52 goals that comprise the requirements for a fully mature software process. If systems grow, some XP practices become more difficult to implement. XP is targeted toward small teams working on small to medium-sized projects.

167 WTH07

10.3 Measurement in Practice (1)


z

Motorola improvement program


A product is built in the shortest time and based on lowest cost if no mistake is made in the process. If no defect can be found anywhere in the development process, then the customer is not likely to find one either. The more robust the design, the lower the inherent failure rate.

168 WTH07

10.3 Measurement in Practice (2)


z

Example: The process phases and steps of Siemens AG and/or Nixdorf AG


Process phase | Process steps
Planning and high-level design | Requirements study; Solution study; Functional design; Interface design; Detailed project plan
Detailed design and implementation | Component design; Code design; Coding; Component test
Quality control | Functional test; Product test; System test
Installation and maintenance | Pilot installation and test; Customer installation

169 WTH07

10.3 Measurement in Practice (3)


z

Siemens metrics
Quality metrics | Number of defects counted during code review, quality control, pilot test, and first year of customer installation, per KLOC; Total number of defects received per fiscal year; Total number of field problem reports received; Development cost per KLOC; Maintenance cost per defect
Productivity metrics | Total gross lines of code delivered to customers per total staff-month; KLOC per development effort in staff-months; KLOC per development time in months
Profitability metrics | Sales in DM per software development cost, for fiscal year and product line; Sales in DM per total cost for development, maintenance, and marketing, for fiscal year and product line

170 WTH07

10.3 Measurement in Practice (4)


z

Hitachi Software Engineering (HSE): 98% of the projects were completed on time, and 99% of the projects cost between 90% and 110% of the original estimate.

171 WTH07

10.3 Measurement in Practice (5)


z

HP: Effects of reuse on software quality.

172 WTH07

10.3 Measurement in Practice (6)


z

HP: Effects of reuse on software productivity.

173 WTH07

10.4 Successful Metrics Program (1-1)


z

Recommendation for successful metrics program (Rifkin and Cox 91)


Pattern type | Recommendations
Measures | Start small; Use a rigorously defined set; Automate collection and reporting
People | Motivate managers; Set expectations; Involve all stakeholders; Educate and train; Earn trust

174 WTH07

10.4 Successful Metrics Program (1-2)


Pattern type | Recommendations
Program implementation | Take an evolutionary approach; Plan to throw one away; Get the right information to the right people; Strive for an initial success; Add value; Empower developers to use measurement information; Take a whole-process view; Understand that adoption takes time

175 WTH07

10.4 Successful Metrics Program (2)


z

Key benefits of measurement programs:


They support management planning by providing insight into product development and by quantifying trade-off decisions. They support understanding of both the development process and development environment. They highlight areas of potential process improvement and characterize improvement efforts.

176 WTH07

10.5 Lessons Learned (1)


z

As software becomes more pervasive and software quality more critical, measurement programs will become more necessary. Rubin's report (1990): among 300 major US IT companies (no fewer than 100 IT staff each), only sixty had implemented successful measurement programs. Success means:
The measurement program results were actively used in decision making; The results were communicated and accepted outside of the IT department; The program lasted longer than two years.

177 WTH07

10.5 Lessons Learned (2)


z

Reasons for failure in the remaining 240 companies:


Management did not clearly define the purpose of the program and later saw the measures as irrelevant; Systems professionals resisted the program, perceiving it as a negative commentary on their performance; Already burdened project staff; Program reports failed to generate management action; Management withdrew support for the program.

178 WTH07

10.5 Lessons Learned (3)


z

Steps to success for a measurement program (Grady and Caswell):


Define the company and project objectives for the program; Assign responsibilities for each activity; Do research; Define the metrics to collect; Sell the initial collection of these metrics; Get tools for automatic data collection and analysis; Establish a training class in software measurement; Publicize success stories and encourage exchange of ideas; Create a metrics database; Establish a mechanism for changing the standard in an orderly way.

179 WTH07

10.6 Size, Structure, and Quality (by Examples)


z

Fault density by Akiyama (Fujitsu, 1971)


d = 4.86 + 0.018L, where d is the predicted number of faults and L is the size in LOC. For instance, a module of 1000 lines of code would be expected to have approximately 23 faults.

Lipow used Halstead's theory to define a relationship between fault density and size:
d/L = A0 + A1 ln L + A2 (ln L)^2
where d is the number of faults, L is the size in LOC, and each Ai depends on the average number of uses of operators and operands per line of code for a particular language. For instance, for Fortran A0 = 0.0047, A1 = 0.0023, and A2 = 0.000043; for assembly, 0.0012, 0.0001, and 0.000002, respectively.

Gaffney argued that the relationship between d and L was not language-dependent, thus d = 4.2 + 0.0015 L^(4/3)
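The three fault-prediction models can be compared directly. This sketch assumes Lipow's model uses natural logarithms (the standard form of his fault-density model); the function names are illustrative.

```python
import math

def akiyama(loc):
    """Akiyama (Fujitsu, 1971): d = 4.86 + 0.018 L."""
    return 4.86 + 0.018 * loc

def lipow(loc, a0, a1, a2):
    """Lipow: d/L = A0 + A1 ln L + A2 (ln L)^2, hence d = L * (...)."""
    ln_l = math.log(loc)
    return loc * (a0 + a1 * ln_l + a2 * ln_l ** 2)

def gaffney(loc):
    """Gaffney (language-independent): d = 4.2 + 0.0015 L^(4/3)."""
    return 4.2 + 0.0015 * loc ** (4 / 3)

# Predicted faults for a 1000-line Fortran module:
akiyama(1000)                           # -> 22.86
lipow(1000, 0.0047, 0.0023, 0.000043)   # -> about 22.6
gaffney(1000)                           # -> 19.2
```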
180 WTH07

10.7 Object Orientation


z

Example from NASAs Software Engineering Laboratory (NASASEL).


The preliminary results of NASA-SEL research showed that OO represented the most important methodology studies by the SEL to date:
The amount of reuse had risen dramatically, from 20-30% before OO to 80% with OO. OO programs were about 75% of the length (in lines of code) of comparable traditional solutions (Stark 1993). But it was not clear which gains were due to Ada.

181 WTH07

11 Software Estimation

Note: The materials in this chapter are excerpted from [McConnell06]. If readers are interested in software estimation, we strongly recommend they may read McConnells book to obtain more solid knowledge about.
182 WTH07

11.1 A Good Estimate


z

Definition of a good estimation.


A good estimate is an estimate that provides a clear enough view of the project reality to allow the project leadership to make good decisions about how to control the project to hit its targets. (*) The primary purpose of software estimation is to determine whether a project's targets are realistic enough that the project can be controlled to meet them, not to predict a project's outcome.

(*) [McConnell06], page 14.

183 WTH07

11.2 Estimate Influences (1)


z

Project Size.
Project size is the primary determinant of effort, cost, and schedule.
Relationship between project size and productivity. (*)

Project Size (in LOC) | LOC per Staff Year (COCOMO II nominal in parentheses)
10K | 2,000-25,000 (3,200)
100K | 1,000-20,000 (2,600)
1M | 700-10,000 (2,000)
10M | 300-5,000 (1,600)

(*) [McConnell06], Table 5-1

184 WTH07

11.2 Estimate Influences (2)


z

Personnel Factors.
Personnel factors exert significant influence on project outcomes. According to COCOMO II: a 100,000 LOC project:
Factor | Best Rank | Worst Rank (compared with nominal)
Requirements analysis capability | -29% | +42%
Programmer capability (general) | -24% | +34%
Personnel continuity (turnover) | -19% | +29%
Applications (business area) experience | -19% | +22%
Language and tools experience | -16% | +20%
Platform experience | -15% | +19%
Team cohesion | -14% | +11%

Example: the project with worst requirements analysis would require 42% more effort than nominal.
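The table can be applied to a nominal estimate as a simple multiplier. This is an illustrative sketch; the function name and the multiplicative combination of factors (as in COCOMO II) are assumptions.

```python
def adjusted_effort(nominal_effort, adjustments):
    """Apply personnel-factor deltas (e.g. +0.42 for the worst-ranked
    requirements analysis capability) to a nominal effort estimate."""
    effort = nominal_effort
    for delta in adjustments:
        effort *= (1 + delta)
    return effort

adjusted_effort(100, [0.42])   # worst requirements analysts -> 142 staff-months
adjusted_effort(100, [-0.29])  # best requirements analysts  -> 71 staff-months
```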

185 WTH07

11.2 Estimate Influences (3)


z

Programming Language Factors.


The team's experience with the specific language and tools used on the project has about a 40% impact on the overall productivity rate. Some languages generate more functionality per line of code than others:

Language | Level Relative to C
C | 1 to 1
C# | 1 to 2.5
C++ | 1 to 2.5
Cobol | 1 to 1.5
Fortran 95 | 1 to 2
Java | 1 to 2.5
Macro Assembly | 2 to 1
Smalltalk | 1 to 6
SQL | 1 to 10
Visual Basic | 1 to 4.5

Note: If you don't have a choice about the programming language, this point is not relevant to your estimate. Otherwise, using Java, C#, or VB would tend to be more productive than using C, Cobol, or Macro Assembly. 186 WTH07

11.3 Estimates, Targets, and Commitments


z

Estimation on software projects interplays with business targets, commitments, and control.
Estimate: a prediction of how long a project will take and how much it will cost. Target: a statement of a desirable business objective.
Example: We must limit the cost of the next release to $2 million, because that is the maximum budget we have for the release.

Commitment: a promise to deliver defined functionality at a specific level of quality by a promised date. Control: typical activities are to remove non-critical requirements, to redefine requirements, and to replace less-experienced staff with more-experienced staff.

187 WTH07

11.4 The Probability of Software Delivering


z

The probability of a software project delivering on or before a particular date.

188 WTH07

11.5 Estimation Error (1)


z

The Cone of Uncertainty (adapted from [McConnell06], Figure 4-4)

189 WTH07

11.6 Estimation Error (2)


z

Estimation error by software development activity (according to the figure in the last slide) [Boehm00].

Phase | Possible Error on Low Side | Possible Error on High Side | Range of High to Low Estimates
Initial Concept | 0.25x (-75%) | 4.0x (+300%) | 16x
Approved Product Definition | 0.50x (-50%) | 2.0x (+100%) | 4x
Requirements Complete | 0.67x (-33%) | 1.5x (+50%) | 2.25x
User Interface Design Complete | 0.80x (-20%) | 1.25x (+25%) | 1.6x
Detailed Design Complete (for sequential projects) | 0.90x (-10%) | 1.10x (+10%) | 1.2x
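The multipliers turn a single-point estimate into a range at each phase; a minimal sketch, in which the dictionary keys and function name are illustrative:

```python
# Cone-of-uncertainty multipliers (low, high) per phase, from the table.
CONE = {
    "Initial Concept": (0.25, 4.0),
    "Approved Product Definition": (0.50, 2.0),
    "Requirements Complete": (0.67, 1.5),
    "User Interface Design Complete": (0.80, 1.25),
    "Detailed Design Complete": (0.90, 1.10),
}

def estimate_range(point_estimate, phase):
    """Return the (low, high) range implied by the current phase."""
    low, high = CONE[phase]
    return point_estimate * low, point_estimate * high

estimate_range(20, "Requirements Complete")  # -> (13.4, 30.0), e.g. staff-weeks
```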

190 WTH07

11.6 Estimation Error (3)


z

Chaotic Development Processes.


Common examples of project chaos.
No well-investigated requirements in the first place. Lack of end-user involvement in requirements validation. Poor design. Poor coding practices. Inexperienced personnel. Incomplete or poor project planning. Prima donna team members. Abandoning planning under pressure. Gold-plating developers. Lack of automated source code control.

191 WTH07

11.6 Estimation Error (4)


z

Unstable Requirements.
The challenges of unstable requirements.
If requirements cannot be stabilized, estimate variability will remain high through the end of the project. Requirements changes are often not tracked and the project is often not reestimated.

So, in those cases, consider the development approaches that are designed to work in short iterations (agile methods), such as Scrum, Extreme Programming, time box development, etc.
Scrum: an agile method with strong promotion of self-organizing teams, daily team measurement, and avoidance of following predefined steps. Key practices: a daily stand-up meeting with special questions, 30-calendar-day iterations, and a demo to external stakeholders at the end of each iteration.

192 WTH07

12. Agile Estimating(*)


In this additional chapter we introduce agile estimating, because using agile methods to develop software systems has become more popular today. This chapter can be regarded as a supplement to this Software Metrics courseware. From my viewpoint, agile methods are quite different from traditional software development methods such as the Unified Process.
(*) Excerpted from Mike Cohn, User Stories Applied: For Agile Software Development, Addison-Wesley, Boston, 2004. 193 WTH07

12.1 Agility
z

Agility is dynamic, context-specific, aggressively change-embracing, and growth-oriented. (Goldman 1997). Agile process is both light and sufficient.
Lightness: staying maneuverable Sufficient: a matter of staying in the game, i.e., delivering software

Agile Software Development Manifesto:


Individuals and interactions over processes and tools. Working software over comprehensive documentation. Customer collaboration over contract negotiation. Responding to change over following a plan.

194 WTH07

12.2 User Stories (1)


z

User stories.
A user story is a description of functionality that will be valuable to either a user or purchaser of a system or software. Examples: buying books through the Internet(*)
A user can search for books by author, title or ISBN number. A user can view detailed information on a book. For example, number of pages, publication date and contents. A user can put books into a shopping cart and buy them when she is done shopping. A user can remove books from her cart before completing an order. A user enters her billing address, the shipping address and credit card information. A user can establish an account that remembers shipping and billing information. A user can edit her account information (credit card, shipping address, billing address and so on).
(*) Mike Cohn, User Stories Applied for Agile Software Development, Addison-Wesley, Boston, 2004. 195 WTH07

12.2 User Stories (2)


z

Some comments on stories:


Use story cards, each containing a short description of user- or customer-valued functionality. The customer team writes the story cards. Stories are prioritized based on their value to the organization. Releases and iterations are planned by placing stories into iterations. Velocity is the amount of work the developers can complete in an iteration. The sum of the estimates of the stories placed in an iteration cannot exceed the velocity the developers forecast for that iteration. If a story won't fit in an iteration, you can split it into two or more smaller stories (next slide). User stories are worth using because they emphasize verbal communication.
196 WTH07

12.2 User Stories (3)


z

Disaggregating into tasks


Stories are small enough to serve as units of work. If not, then a story may be disaggregated.
Example: The story "A user can search for a hotel on various fields" might be turned into the following tasks:
Code basic search screen
Code advanced search screen
Code results screen
Write and tune SQL to query the database for basic search
Write and tune SQL to query the database for advanced search
Document new functionality in help system and user's guide

197 WTH07

12.3 Story Points


z

Story points.
Story point as an ideal day of work (that is, a day without interruption whatsoever). Story point as an ideal week of work Story point as a measure of the complexity of the story

198 WTH07

12.4 Estimating (1)


z

Estimate as a team
Gather together the customer and the developers who will participate in creating the estimates. Estimate, and converge on a single estimate that can be used for the story.

199 WTH07

12.4 Estimating (2)


z

Using story points


Use the term velocity to refer to the number of story points a team completes in an iteration.
Suppose a project comes to a total of 300 story points and the estimators can complete 50 story points in each iteration; then they will finish the project in a total of 6 iterations. So they may plan on maintaining their measured velocity of 50, based on three conditions: Nothing unusual affected productivity this iteration; The estimates need to have been generated in a consistent manner (using a team estimate process); The stories selected for the first iteration must be independent.
The sum of a number of independent samples from any distribution is approximately normally distributed.

200 WTH07

12.4 Estimating (3)


z

The ways of estimating a teams initial velocity


Use historical values Take a guess Run an initial iteration and use the velocity of that iteration

From story points to expected duration


Suppose the team sums the estimates from each card and comes up with 100 story points, which are converted into a predicted duration for the project using velocity, which represents the amount of work that gets done in an iteration. For example, if we estimate a project at 100 ideal days (story points) with a velocity of 25, we estimate that it will take 100/25 = 4 iterations to complete the project.
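The duration arithmetic is a rounded-up division; as a sketch (the function name is illustrative):

```python
import math

def iterations_needed(total_story_points, velocity):
    """Iterations come in whole units, so always round up."""
    return math.ceil(total_story_points / velocity)

iterations_needed(100, 25)  # -> 4 iterations
iterations_needed(27, 4)    # -> 7 iterations (6.75 rounds up)
```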

201 WTH07

12.4 Estimating (4)


z

Question: Assuming one-week iterations and a team of four developers, how many iterations will it take the team to complete a project with 27 story points if they have a velocity of 4?
Answer: With a velocity of 4 and 27 story points in the project, it will take the team 7 iterations to finish. (Note that the number of iterations is an integer, so 27/4 = 6.75 rounds up to 7.)

202 WTH07

12.4 Estimating (5)


z

Responsibilities
Developers
Defining story points Giving honest estimates Estimating as a team All two-point stories should be similar.

Customer
Participating in estimation meetings. Playing the role of answering questions and clarifying stories. Don't estimate stories yourself.
z

Question: If three programmers individually estimate the story at 2, 4, and 5 story points, which estimate should they use?
Answer: They should continue discussing the story until their estimates get closer.
203 WTH07

12.5 Measuring and Monitoring Velocity (1)


z

The team completes stories during an iteration. For example:


Story | Story Points
A user can ... | 4
A user can ... | 3
A user can ... | 5
A user can ... | 3
Velocity | 15

The team's velocity is 15, which is the sum of the story points for the stories completed in the iteration. Note that partially completed stories cannot be included in velocity calculations, so fractional values like 12.3 do not occur.
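The velocity rule (finished stories only) can be sketched as follows; the data format is an assumption for illustration.

```python
def velocity(stories):
    """stories: list of (story_points, finished) pairs.
    Partially completed stories contribute nothing to velocity."""
    return sum(points for points, finished in stories if finished)

velocity([(4, True), (3, True), (5, True), (3, True)])  # -> 15
velocity([(4, True), (3, False), (5, True)])            # unfinished story ignored -> 9
```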

204 WTH07

12.5 Measuring and Monitoring Velocity (2)


z

Planned and Actual Velocity


Graphing planned and actual velocity for each iteration is a good way to monitor whether actual velocity is deviating from planned velocity. In the chart of planned and actual velocity after the first three iterations, actual velocity is slightly less than planned velocity.

205 WTH07

12.5 Measuring and Monitoring Velocity (3)


z

The answer: Figure in last page + the cumulative story point graph:

The cumulative story point chart (above) shows the total number of story points completed through the end of each iteration.
206 WTH07

12.5 Measuring and Monitoring Velocity (4)


z

Iteration burndown charts


An iteration burndown chart shows the amount of work, expressed in story points, remaining at the end of each iteration.

207 WTH07

12.5 Measuring and Monitoring Velocity (5)


z

Progress and changes during four iterations (an example).


To be noted: A strength of agile software development is that a project can begin without a lengthy, upfront, complete specification of the project's requirements. Stories will come and go, stories will change size, and stories will change in importance.

The team actually completed 45 - 10 - 18 = 17 story points, so 113 story points still remain. 208 WTH07

12.5 Measuring and Monitoring Velocity (6)


z

Burndown chart for the project in last slide.

From the slope of the burndown line after the 1st iteration, the project would not be finished after 3 iterations.

209 WTH07

12.5 Measuring and Monitoring Velocity (7)


z

Burndown charts during an iteration.


A daily burndown chart shows the estimated number of hours left (not hours expended) in the iteration. Example: The following chart shows a daily tracking of the hours remaining in an iteration:
Reflects the amount of work remaining.

210 WTH07

12.5 Measuring and Monitoring Velocity (8)


z

Question: What conclusions should you draw from the following figure? Does the project look like it will finish ahead, behind, or on schedule?

Answer: The team started out a little better than anticipated in the first iteration. They expect velocity to improve in the second and third iterations and then stabilize. After two iterations they have already achieved the velocity they expected after three iterations. At this point they are ahead of schedule but you should be reluctant to draw too many firm conclusions after only two iterations.
211 WTH07

12.5 Measuring and Monitoring Velocity (9)


z

Question: What is the velocity of the team that finished the iteration shown in the following table?
Story | Story Points | Status
Story 1 | 4 | Finished
Story 2 | 3 | Finished
Story 3 | 5 | Finished
Story 4 | 3 | Half finished
Story 5 | 2 | Finished
Story 6 | 4 | Not started
Story 7 | 2 | Finished
Velocity | 23 |

Answer: 16. Partially completed stories do not contribute to velocity.

212 WTH07

12.5 Measuring and Monitoring Velocity (10)


z

Question: Complete the following table by writing the missing values into the table.

| Iter-1 | Iter-2 | Iter-3
Story points at start of iteration | 100 | ? | ?
Completed during iteration | 35 | 40 | 36
Change in estimate | 5 | -5 | 0
Story points from new stories | 6 | 3 | 2
Story points at end of iteration | 76 | ? | ?

213 WTH07

12.5 Measuring and Monitoring Velocity (11)


z

Answer:
| Iter-1 | Iter-2 | Iter-3
Story points at start of iteration | 100 | 76 | 34
Completed during iteration | 35 | 40 | 36
Change in estimate | 5 | -5 | 0
Story points from new stories | 6 | 3 | 2
Story points at end of iteration | 76 | 34 | 0
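The bookkeeping follows one identity: points remaining at the end of an iteration equal points at the start, minus points completed, plus estimate changes, plus points from newly added stories. A sketch (function name illustrative):

```python
def end_of_iteration(start, completed, estimate_change, new_stories):
    """Story points remaining at the end of an iteration."""
    return start - completed + estimate_change + new_stories

end_of_iteration(100, 35, 5, 6)   # Iter-1 -> 76
end_of_iteration(76, 40, -5, 3)   # Iter-2 -> 34
end_of_iteration(34, 36, 0, 2)    # Iter-3 -> 0
```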

214 WTH07

12.6 Release Plan


z

The steps in planning a release [Cohn06]


Determine condition of satisfaction: defined by a combination of schedule, scope, and resource goals.
Date-driven: the product must be released by a certain date, for which the feature set is negotiable. Feature-driven: consider the completion of a set of features.

Estimate the user stories: estimate each new feature that has some reasonable possibility of being selected for inclusion in the upcoming release. Do in any sequence:
Select an iteration length: two or four weeks works for most agile teams. Estimate velocity: make an informed estimate of velocity based on past results. Prioritize user stories: prioritize the features the product owner wants to develop.

Select stories and a release date: estimate the teams velocity per iteration and assume the number of iteration.
Iterate until the conditions of satisfaction for the release can best be met.

215 WTH07

12.7 What User Stories Are Not


z

User stories are different from IEEE 830 software requirements specifications.
Documenting a system's requirements following IEEE 830 is tedious, error-prone, and very time-consuming.

User stories are not use cases


One of the most obvious differences between stories and use cases is their scope. They also differ in their level of completeness. Use cases are often intended as permanent artifacts of a project; user stories are not.

User stories are not scenarios


Use case scenarios are much more detailed than user stories, though the two are similar to each other. A scenario often describes a broader scope than does a user story.
216 WTH07

12.8 Why User Stories?


z

User stories
emphasize verbal communication. are comprehensible to everyone. are the right size for planning. work for iterative development. encourage deferring detail. support opportunistic design. encourage participatory design. build up tacit knowledge.

Good to express requirements

Drawbacks to using user stories:


On large projects it can be difficult to keep hundreds or thousands of stories organized. Verbal communication cannot scale adequately to entirely replace written documents on large projects.
217 WTH07

Some Comments
z

Martin Fowler and Kent Beck: Asking a developer for a percentage of completeness for a task generates a nearly meaningless answer.
The developers are often 90% complete in a matter of days, 95% complete in a month, 99% complete in six months, and leave the work 99.9% complete. As a manager, what can you do? Don't ask teams for a percentage of completeness.

But, better ask the teams what percentage of the features or user stories complete they are.
Feature Driven Development (FDD) (*) uses the percentage of completeness of each feature to produce summary progress reports.

(*) Refer to [Palmer02] and Supplement to Section 12.8

218 WTH07

12.9 An Example: User Story for Sailing Books(*) (1)


z z

z z

z z

A user can search for books by author, title or ISBN number. A user can view detailed information on a book. For example, number of pages, publication date and a brief description. A user can put books into a shopping cart and buy them when she is done shopping. A user can remove books from her cart before completing an order. To buy a book the user enters her billing address, the shipping address and credit card information. A user can rate and review books. A user can establish an account that remembers shipping and billing information. A user can edit her account information (credit card, shipping address, billing address and so on).
(*) [Cohn06]. 219 WTH07

12.9 An Example: User Story for Sailing Books(*) (2)


- A user can put books into a wish list that is visible to other site visitors.
- A user can place an item from a wish list (even someone else's) into his or her shopping cart.
- A repeat customer must be able to find one book and complete an order in less than 90 seconds. (Constraint)
- A user can view a history of all of his past orders.
- A user can easily re-purchase items when viewing past orders.
- The site always tells a shopper what the last 3 items she viewed are and provides links back to them.
- A user can see what books we recommend on a variety of topics.
- A user, especially a Non-Sailing Gift Buyer, can easily find the wish lists of other users.

12.9 An Example: User Story for Sailing Books(*) (3)


- A user can choose to have items gift wrapped.
- A Report Viewer can see reports of daily purchases broken down by book category, traffic, best- and worst-selling books and so on.
- A user must be properly authenticated before viewing reports.
- Orders made on the website have to end up in the same order database as telephone orders. (Constraint)
- An administrator can add new books to the site.
- An administrator needs to approve or reject reviews before they are available on the site.
- An administrator can delete a book.
- An administrator can edit the information about an existing book.

12.9 An Example: User Story for Sailing Books(*) (4)


- A user can check the status of her recent orders. If an order hasn't shipped, she can add or remove books, change the shipping method, the delivery address and the credit card.
- The system must support peak usage of up to 50 concurrent users. (Constraint)

Supplement to Section 12.8 (1)


Feature-Driven Development: the five processes and their deliverables.

1. Develop an Overall Model — an object model plus notes.
2. Build a Feature List — a list of features grouped into sets and subject areas.
3. Plan by Feature — a development plan; class owners; feature set owners.
4. Design by Feature — a design package (sequence diagrams; more content is added to the object model).
5. Build by Feature — a completed client-valued function.

Note: Readers who are interested in FDD may refer to www.featuredrivendevelopment.com.

Supplement to Section 12.8 (2)


Feature. A feature is a very specific, small, client-valued function expressed in the form:

  <action> the <result> <by|for|of|to> a(n) <object>

- Small: 1-10 days of effort are required to complete the feature; most take 1-3 days.
- Client-valued: the feature is relevant and has meaning to the business.

Examples:
- Calculate the total of a sale (a calculateTotal() operation in a Sale class).
- Validate the PIN number for a bank account (a validate() operation in a BankAccount class).
- Authorize a loan for a customer (an authorize() operation in a Customer class).

Review Articles
R1: GQM Trees
R2: Software Cost Estimation
R3: Function Points
R4: COCOMO Model
R5: Putnam Model
R6: Software Science Measurements

R1: GQM Trees


Example: GQM tree on software reliability.


R2: Software Cost Estimation (1)


Software cost estimation deserves special attention because:
- No two systems or projects are identical.
- With the increased size of software projects, any estimation mistake can cost a lot in terms of resources allocated to the project; such mistakes lead either to overestimation or underestimation.
- The uncertainty about cost estimates is usually quite high; the uncertainty is reduced over the course of the software project.

R2: Software Cost Estimation (2)


Methods of cost estimation (from Kitchenham 1994):
- Expert opinion. Estimation based on personal experience.
- Analogy. Exercise judgment based on previous projects: take the estimates obtained from previous projects and apply them to the current one.
- Decomposition. Break a product up into its smallest components, or decompose a project into its smallest subtasks, and estimate those.
- PERT models:
  Effort = (lower estimate + 4 * most likely estimate + upper estimate) / 6
- Mathematical models. The well-known models are the COCOMO effort model, Rayleigh curve models, and Albrecht's function point model.
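The PERT formula above is easy to apply directly; a minimal sketch (the function name is mine, not from the slides):

```python
def pert_effort(lower, likely, upper):
    """Three-point (PERT) estimate: a weighted mean that favors the most likely value."""
    return (lower + 4 * likely + upper) / 6
```

For example, lower/likely/upper estimates of 10, 20 and 40 person-months give a PERT estimate of about 21.7 person-months.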

R3: Function Points (1)


Function point models count five classes of items:
- External inputs: inputs from the user that provide distinct application-oriented data. Examples of such inputs are file names and menu selections.
- External outputs: outputs directed to the user; they come in the form of various reports and messages.
- User inquiries: interactive inputs requiring a response.
- External files: all machine-readable interfaces to other systems.
- Internal files: the master files in the system.

R3: Function Points (2)


Levels of complexity (weights per item).

Item             Simple  Average  Complex
External input      3       4        6
External output     4       5        7
User inquiry        3       4        6
External file       7      10       15
Internal file       5       7       10

R3: Function Points (3)


The Unadjusted Function Count (UFC) is the weighted sum of the counted items:

  UFC = sum over the five item classes of (number of items * complexity weight)

The Technical Complexity Factor (TCF) is computed using the experimentally derived formula:

  TCF = 0.65 + 0.01 * (f1 + f2 + ... + f14)

where the fi are detailed factors contributing to the overall notion of complexity. Each fi ranges from 0 to 5, with 0 being irrelevant and 5 standing for essential. So TCF = 0.65 means all factors are rated irrelevant; TCF = 1.35 means all factors are essential.

Adjusted function point count (FP):

  FP = UFC * TCF

R3: Function Points (4)


Example: Weather Information.

R3: Function Points (5)


Weather Information:
- External inputs: none
- External outputs: 1 (update display)
- User inquiries: 1 (update request)
- External files: 2 (weather sensor, weather data)
- Internal files: 1 (logs)

Rating every item as complex gives UFC = 1*7 + 1*6 + 2*15 + 1*10 = 53.

If we consider the adjusted function point count FP, the range of possible values spreads from 34.45 (0.65*53) to 71.55 (1.35*53).
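The weather-information computation can be sketched as follows (an illustrative helper: the weights come from the complexity table earlier, and the item and function names are mine):

```python
# Sketch of Albrecht's function point computation.
# Weights per item class: (simple, average, complex), from the complexity table.
WEIGHTS = {
    "external_input":  (3, 4, 6),
    "external_output": (4, 5, 7),
    "user_inquiry":    (3, 4, 6),
    "external_file":   (7, 10, 15),
    "internal_file":   (5, 7, 10),
}

def ufc(counts):
    """counts maps an item class to (n_simple, n_average, n_complex)."""
    return sum(n * w
               for item, ns in counts.items()
               for n, w in zip(ns, WEIGHTS[item]))

def fp(counts, factors):
    """factors: the 14 complexity ratings f1..f14, each 0 (irrelevant) to 5 (essential)."""
    tcf = 0.65 + 0.01 * sum(factors)
    return ufc(counts) * tcf

# Weather Information, every item rated complex:
counts = {
    "external_output": (0, 0, 1),
    "user_inquiry":    (0, 0, 1),
    "external_file":   (0, 0, 2),
    "internal_file":   (0, 0, 1),
}
```

Here ufc(counts) is 53; fp(counts, [0]*14) gives the lower bound 34.45 and fp(counts, [5]*14) the upper bound 71.55, matching the slide.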

R3: Function Points (6)


A general scheme of Albrecht's function point model.

R4: COCOMO Model (1)


The COCOMO model is the most complete and thoroughly documented model used in effort estimation. It is based on Boehm's analysis of a database of 63 software projects. It distinguishes three classes of systems:
- Embedded. Characterized by tight constraints, a changing environment, and unfamiliar surroundings; e.g., real-time software systems in avionics, aerospace, or medicine.
- Organic. Systems in this category have a stable environment, familiar surroundings, and relaxed interfaces; e.g., simple business systems, data processing, small software libraries.
- Semidetached. A mix of organic and embedded characteristics; e.g., operating systems, database management systems, and inventory management systems.

R4: COCOMO Model (2)


The basic form of the COCOMO model:

  Effort = a * KDLOC^b

where a and b are two parameters of the model whose values are selected according to the class of the software system:
- For organic systems: Effort = 2.4 * KDLOC^1.05
- For semidetached systems: Effort = 3.0 * KDLOC^1.12
- For embedded systems: Effort = 3.6 * KDLOC^1.20

R4: COCOMO Model (3)


Development schedule M (in months):
- For organic systems: M = 2.5 * Effort^0.38
- For semidetached systems: M = 2.5 * Effort^0.35
- For embedded systems: M = 2.5 * Effort^0.32

Maintenance effort:

  Effort_maintenance = ACT * Effort

where ACT (annual change traffic) is the fraction of KDLOC undergoing change during the year.

R4: COCOMO Model (4)


The intermediate COCOMO model is a refinement of the basic model. The improvement comes in the form of 15 attributes of the product, each rated on the following six-point scale:
- VL (very low)
- LO (low)
- NM (nominal)
- HI (high)
- VH (very high)
- XH (extra high)

R4: COCOMO Model (5)


The list of attributes:
- Product attributes: required reliability (RELY); data bytes per DSI (DATA); complexity (CPLX).
- Personnel attributes: analyst capability (ACAP); programmer capability (PCAP); application experience (AEXP); language experience (LEXP); virtual machine experience (VEXP).
- Computer attributes: execution time (TIME) and memory (STOR) constraints; virtual machine volatility (VIRT); development turnaround time (TURN).
- Project attributes: modern development practices (MODP); use of software tools (TOOL); schedule effects (SCED).

R4: COCOMO Model (6)


Intermediate COCOMO effort multipliers:

Attribute   VL     LO     NM     HI     VH     XH
RELY       0.75   0.88   1.00   1.15   1.40    -
DATA        -     0.94   1.00   1.08   1.16    -
CPLX       0.70   0.85   1.00   1.15   1.30   1.65
TIME        -      -     1.00   1.11   1.30   1.66
STOR        -      -     1.00   1.06   1.21   1.56
VIRT        -     0.87   1.00   1.15   1.30    -
TURN        -     0.87   1.00   1.07   1.15    -
ACAP       1.46   1.19   1.00   0.86   0.71    -
AEXP       1.29   1.13   1.00   0.91   0.82    -
PCAP       1.42   1.17   1.00   0.86   0.70    -
LEXP       1.14   1.07   1.00   0.95    -      -
VEXP       1.21   1.10   1.00   0.90    -      -
MODP       1.24   1.10   1.00   0.91   0.82    -
TOOL       1.24   1.10   1.00   0.91   0.83    -
SCED       1.23   1.08   1.00   1.04   1.10    -

R4: COCOMO Model (7)


Depending upon the product, each attribute is rated, and the corresponding multipliers are multiplied together, giving the final product multiplier P. The effort formula is then:

  Effort = Effort_nom * P

where the nominal effort Effort_nom is:
- Effort_nom = 3.2 * KDLOC^1.05 for organic systems
- Effort_nom = 3.0 * KDLOC^1.12 for semidetached systems
- Effort_nom = 2.8 * KDLOC^1.20 for embedded systems

The maintenance effort is calculated using:

  Effort_maintenance = ACT * Effort_nom * P

R4: COCOMO Model (8)


Example. Suppose a software system with an estimated size of 300 KDLOC. The software is part of the control system of a smart-vehicle initiative: it collects readings from various sensors, processes them, and develops a schedule of pertinent control actions. This is an embedded system. The basic form of the cost-estimation model gives the effort as Effort = 3.6 * 300^1.20 = 3379 person-months and the development time as M = 2.5 * 3379^0.32 = 33.66 months.
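The arithmetic in the example can be checked with a short sketch of the basic model (coefficients from the slides above; function and table names are mine):

```python
# Basic COCOMO sketch: effort in person-months, schedule in months.
# Coefficients (a, b, c) per system class: Effort = a*KDLOC^b, M = 2.5*Effort^c.
COEFFS = {
    "organic":      (2.4, 1.05, 0.38),
    "semidetached": (3.0, 1.12, 0.35),
    "embedded":     (3.6, 1.20, 0.32),
}

def basic_cocomo(kdloc, system_class):
    a, b, c = COEFFS[system_class]
    effort = a * kdloc ** b        # Effort = a * KDLOC^b
    schedule = 2.5 * effort ** c   # M = 2.5 * Effort^c
    return effort, schedule
```

For the 300-KDLOC embedded example this gives roughly 3379 person-months and a 33.7-month schedule.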

R4: COCOMO Model (9)


Refining the estimate using the intermediate COCOMO model with the following ratings (multipliers from the table):

RELY = HI (1.15), DATA = HI (1.08), CPLX = NM (1.00), TIME = VH (1.30), STOR = VH (1.21), VIRT = NM (1.00), TURN = LO (0.87), ACAP = HI (0.86), AEXP = HI (0.91), PCAP = NM (1.00), LEXP = NM (1.00), VEXP = NM (1.00), MODP = NM (1.00), TOOL = LO (1.10), SCED = VH (1.10)

The scaling factor is P = 1.6095. The nominal effort is 2.8 * 300^1.20 = 2628 person-months. The modified result is 2628 * 1.6095 = 4230 person-months.
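A minimal sketch of this refinement (the multiplier values are the ones rated in the example; function and variable names are mine):

```python
# Intermediate COCOMO sketch: multiply the attribute multipliers into P,
# then scale the nominal effort by P.
MULTIPLIERS = [1.15, 1.08, 1.00, 1.30, 1.21, 1.00, 0.87,   # RELY..TURN
               0.86, 0.91, 1.00, 1.00, 1.00,               # ACAP..VEXP
               1.00, 1.10, 1.10]                           # MODP, TOOL, SCED

def intermediate_effort(kdloc, a, b, multipliers):
    p = 1.0
    for m in multipliers:
        p *= m
    return a * kdloc ** b * p, p   # (adjusted effort, product multiplier P)

effort, p = intermediate_effort(300, 2.8, 1.20, MULTIPLIERS)
```

For the example this reproduces P ≈ 1.6095 and an adjusted effort of about 4230 person-months.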

R5: Putnam Model (1)

Manpower Loading


R5: Putnam Model (2)


The basic distribution (a Rayleigh distribution):

  dy/dt = 2*k*a*t * exp(-a*t^2)

where a = 1/(2*td^2) is a shape parameter for the distribution (dy/dt is at its maximum when t = td, the time at which the average team size is largest), and k is the area under the curve, which has the dimensions of effort. The area to the left of td is the effort (about 40%) for software specification and development, while the area to the right of td is the maintenance effort (about 60%) required after delivery of the software.
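The 40/60 split quoted above follows from the cumulative form of the Rayleigh curve, y(t) = k*(1 - exp(-a*t^2)); a quick sketch with k normalized to 1 (function name is mine):

```python
import math

def effort_fraction_by(t, td):
    """Fraction of total life-cycle effort expended by time t,
    for a Rayleigh curve with shape a = 1/(2*td^2)."""
    a = 1.0 / (2.0 * td ** 2)
    return 1.0 - math.exp(-a * t * t)
```

At t = td the fraction is 1 - e^(-1/2) ≈ 0.39, i.e. roughly 40% of the effort falls before the manpower peak, consistent with the split above.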

R5: Putnam Model (3)


Software equation:

  Ss = ck * k^(1/3) * t^(4/3)

where:
- Ss is the software size in source code statements;
- ck is a constant of proportionality reflecting how well the technical environment supports this type of effort (on-line interactive development, structured coding, less fuzzy requirements, machine access constraints, etc.);
- k is the area under the Rayleigh curve, representing effort; its exponent here is x = 1/3;
- t is the time; its exponent is y = 4/3.

This equation is at the heart of Putnam's parametric cost estimation model.

R5: Putnam Model (4)


By transposition of the software equation:

  Effort * Time^4 = (Ss/ck)^3

Given that, for a particular task and environment, Ss and ck can be regarded as properties of that task and therefore constants:

  E * T^4 = constant

This equation expresses the underlying relationship between effort and time-scale for software development: small incremental or decremental changes to time result in rather large concomitant changes in effort.

R5: Putnam Model (5)


Example. Suppose a project with an estimated effort of about 25 person-years over 2 elapsed years. If the duration must be reduced to 18 months, the predictable effects, according to Putnam's derivation, are:
- The intrinsic property of this estimate is 25 * 2^4 = constant = 400.
- In the new circumstance, Effort * (1.5)^4 = constant = 400.
- It follows that a new value of effort is required: E = 400/(1.5)^4 = 79.0 person-years.
- In other words, a 25% decrease in time-scale has led to an increase of about 216% in the effort required.
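The E*T^4 relationship makes this kind of what-if calculation trivial to script (an illustrative helper; the name is mine):

```python
def effort_after_reschedule(effort, time, new_time):
    """New effort implied by holding Effort * Time^4 constant
    when the schedule changes from `time` to `new_time`."""
    return effort * time ** 4 / new_time ** 4
```

For the example, effort_after_reschedule(25, 2, 1.5) gives about 79 person-years.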

R6: Software Science Measures (1)


Example: Bubble sort code in Java.

  public void paint (Graphics g)
  {
    print (g, "Sequence in original order", a, 30, 30);
    sort ( );
    print (g, "Results", a, 30, 60);
  }
  public void sort ( )
  {
    for (int pass = 1; pass < a.length; pass++)
      for (int i = 0; i < a.length; i++)
        if (a[i] > a[i+1])
        { hold = a[i];
          a[i] = a[i+1];
          a[i+1] = hold;
        }
  }

Operator   Occurrences     Operand    Occurrences
public          2          paint           1
void            2          Graphics        1
()              2          sort            1
;               6          30              3
int             3          60              1
++              2          a               5
[]              3          1               4
{}              4          pass            3
for             2          a.length        2
=               2          i               7
+               2          hold            2
>               1          g               2
                           0               1
                           print           2

η1 = 12 distinct operators, N1 = 31 total operator occurrences; η2 = 14 distinct operands, N2 = 35 total operand occurrences.

R6: Software Science Measures (2)


- Program length: N̂ = η1 log2 η1 + η2 log2 η2 = 12 log2 12 + 14 log2 14 = 96.32
- Program volume: V = N log2 η = 456 bits
- Potential volume: V* = (2 + η2*) log2 (2 + η2*) = (2 + 3) log2 (2 + 3) = 11.61
- Program level: L = V*/V = 0.025 (if V = V*, then L = 1; in general V > V*)
- Difficulty: D = 1/L = 40
- Estimated program level: L̂ = (2/η1) * (η2/N2) = 0.068
- Effort: E = V/L̂ = 6706
- Time: T = E/β = 373 sec ≈ 6 min, where β is the Stroud number (between 5 and 20; β = 18 is commonly used)
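The measures above can be reproduced with a small sketch (standard software-science formulas; note that the volume computed here uses the actual length N = N1 + N2, whereas the slide's V = 456 bits appears to be based on the estimated length, so only the length, potential-volume and level estimates are compared):

```python
import math

def halstead(eta1, eta2, n1, n2, eta2_star=3, beta=18):
    """Software science measures from operator/operand counts.
    beta is the Stroud number (5..20; 18 is the usual choice)."""
    length = n1 + n2                                              # N
    vocabulary = eta1 + eta2                                      # eta
    length_est = eta1 * math.log2(eta1) + eta2 * math.log2(eta2)  # N-hat
    volume = length * math.log2(vocabulary)                       # V
    potential = (2 + eta2_star) * math.log2(2 + eta2_star)        # V*
    level_est = (2 / eta1) * (eta2 / n2)                          # L-hat
    effort = volume / level_est                                   # E
    return {"N_hat": length_est, "V": volume, "V_star": potential,
            "L_hat": level_est, "E": effort, "T_sec": effort / beta}
```

For the bubble-sort counts (η1 = 12, η2 = 14, N1 = 31, N2 = 35) this gives N̂ ≈ 96.32, V* ≈ 11.61 and L̂ ≈ 0.067, matching the slide's values.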

References
[Boehm00] Boehm, Barry, et al., Software Cost Estimation with COCOMO II, Addison-Wesley, Reading, MA, 2000.
[Cohn06] Cohn, Mike, Agile Estimating and Planning, Prentice Hall Professional, Upper Saddle River, NJ, 2006.
[Fenton97] Fenton, Norman E., and Shari Lawrence Pfleeger, Software Metrics: A Rigorous & Practical Approach, PWS, Boston, 1997.
[Lorenz94] Lorenz, Mark, and Jeff Kidd, Object-Oriented Software Metrics, PTR Prentice Hall, Englewood Cliffs, NJ, 1994.
[McConnell06] McConnell, Steve, Software Estimation, Microsoft Press, Redmond, WA, 2006.
[Möller93] Möller, K. H., and D. J. Paulish, Software Metrics: A Practitioner's Guide to Improved Product Development, IEEE Press, London, 1993.
[Palmer02] Palmer, Stephen, and John M. Felsing, A Practical Guide to Feature-Driven Development, Prentice Hall PTR, Upper Saddle River, NJ, 2002.
[Putnam78] Putnam, Lawrence H., "A General Empirical Solution to the Macro Software Sizing and Estimating Problem," IEEE Transactions on Software Engineering, Vol. SE-4, No. 4, 1978, pp. 345-361.
[Schneider01] Schneider, Geri, and Jason P. Winters, Applying Use Cases: A Practical Guide, 2nd Edition, Addison-Wesley, Boston, 2001.
[Papers] Papers from IEEE Transactions on Software Engineering, IEEE Software, IEEE Computer, CACM, and JOOP.
