
SEA Side

Software Engineering Annotations - Annotation 11:

Software Metrics
A one-hour presentation to inform you of new techniques and practices in software development.

Professor Sara Stoecklin


Director of Software Engineering- Panama City Florida State University Computer Science sstoecklin@mail.pc.fsu.edu stoeckli@cs.fsu.edu 850-522-2091 850-522-2023 Ex 182
1

Express in Numbers
Measurement provides a mechanism for objective evaluation
2

Software Crisis
According to American Programmer, 31.1% of computer software projects are canceled before they are completed, 52.7% overrun their initial cost estimates by 189%, and 94% of project start-ups are restarts of previously failed projects.
Solution? A systematic approach to software development and measurement
3

Software Metrics
Software metrics refers to a broad range of quantitative measurements for computer software that enable us to
improve the software process continuously
assist in quality control and productivity
assess the quality of technical products
assist in tactical decision-making
4

Measure, Metrics, Indicators


Measure.
provides a quantitative indication of the extent, amount, dimension, capacity, or size of some attributes of a product or process.

Metrics.
relates the individual measures in some way.

Indicator.
a combination of metrics that provide insight into the software process or project or product itself.
5

What Should Be Measured?


The process (process metrics), the project (project metrics), and the product (product metrics). What do we use as a basis? Size? Function?
6

Metrics of Process Improvement


Focus on a manageable, repeatable process
Use of statistical SQA on the process
Defect removal efficiency

Statistical Software Process Improvement


All errors and defects are categorized by origin

The cost to correct each error and defect is recorded

The number of errors and defects in each category is counted and ranked in descending order

The overall cost in each category is computed

Resultant data are analyzed and the culprit category is uncovered

Plans are developed to eliminate the errors

Causes and Origin of Defects


Specification: 25%
Logic: 20%
User Interface: 12%
Error Checking: 11%
Data Handling: 11%
Hardware Interface: 8%
Standards: 7%
Software Interface: 6%

Metrics of Project Management


Budget / schedule
Resource management
Risk management
Project goals met or exceeded
Customer satisfaction

10

Metrics of the Software Product


Focus on deliverable quality:
Analysis products
Design product complexity: algorithmic, architectural, data flow
Code products
Production system


11

How Is Quality Measured?


Analysis Metrics
Function-based Metrics: Function Points (Albrecht), Feature Points (C. Jones)
Bang Metric (DeMarco): Functional Primitives, Data Elements, Objects, Relationships, States, Transitions, External Manual Primitives, Input Data Elements, Output Data Elements, Persistent Data Elements, Data Tokens, Relationship Connections.

12

Source Lines of Code (SLOC)


Measures the number of physical lines of active code. In general, the higher the SLOC in a module, the less understandable and maintainable the module is.

13
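A minimal sketch of a SLOC counter, assuming Python-style '#' line comments; blank lines and comment-only lines are not counted as active code:

```python
def count_sloc(path):
    """Count physical lines of active code: non-blank lines that are
    not comment-only (assuming '#' starts a line comment)."""
    sloc = 0
    with open(path) as src:
        for line in src:
            stripped = line.strip()
            if stripped and not stripped.startswith("#"):
                sloc += 1
    return sloc
```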

Function Oriented Metric Function Points


Function Points are a measure of how big the program is, independent of its actual physical size
It is a weighted count of several features of the program
Critics claim FPs make no sense with respect to the representational theory of measurement
There are firms and institutions that take them very seriously
14

Analyzing the Information Domain


measurement parameter          count   simple   avg.   complex
number of user inputs          ___     x 3      x 4    x 6     = ___
number of user outputs         ___     x 4      x 5    x 7     = ___
number of user inquiries       ___     x 3      x 4    x 6     = ___
number of files                ___     x 7      x 10   x 15    = ___
number of ext. interfaces      ___     x 5      x 7    x 10    = ___

Unadjusted Function Points = count-total: each count is multiplied by its complexity weighting factor (simple, average, or complex) and the products are summed. Complete formula for the Unadjusted Function Points:

UFP = Inputs x Wi + Outputs x Wo + Inquiries x Win + Internal Files x Wif + External Interfaces x Wei


15
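A minimal sketch of the unadjusted count, using the standard weight table above; the counts in the example are made up:

```python
# Weights per information-domain parameter: (simple, average, complex)
WEIGHTS = {
    "inputs":     (3, 4, 6),
    "outputs":    (4, 5, 7),
    "inquiries":  (3, 4, 6),
    "files":      (7, 10, 15),
    "interfaces": (5, 7, 10),
}

def unadjusted_fp(counts, complexity="average"):
    """Sum each count multiplied by its weight for the chosen complexity level."""
    col = {"simple": 0, "average": 1, "complex": 2}[complexity]
    return sum(counts[p] * WEIGHTS[p][col] for p in counts)

# Example with hypothetical counts:
print(unadjusted_fp({"inputs": 12, "outputs": 8, "inquiries": 5,
                     "files": 4, "interfaces": 2}))  # 12*4 + 8*5 + 5*4 + 4*10 + 2*7 = 162
```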

Taking Complexity into Account
Factors are rated on a scale of 0 (not important) to 5 (very important): data communications, distributed functions, heavily used configuration, transaction rate, on-line data entry, end-user efficiency, on-line update, complex processing, installation ease, operational ease, multiple sites, facilitate change

Formula:

FP = UFP x CM, where CM (complexity multiplier) = 0.65 + 0.01 x (sum of the factor ratings Fi)
16
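A minimal sketch of the adjustment step, assuming the standard 0.65 + 0.01 x sum(Fi) complexity multiplier; the factor ratings below are made up:

```python
def adjusted_fp(ufp, factor_ratings):
    """Apply the complexity adjustment: each factor is rated 0-5."""
    assert all(0 <= f <= 5 for f in factor_ratings)
    multiplier = 0.65 + 0.01 * sum(factor_ratings)
    return ufp * multiplier

# Example: 12 factors with hypothetical ratings
ratings = [3, 2, 4, 5, 3, 4, 2, 5, 1, 2, 0, 3]
print(adjusted_fp(162, ratings))  # 162 * (0.65 + 0.34) = 160.38
```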

Typical Function-Oriented Metrics


errors per FP
defects per FP
$ per FP
pages of documentation per FP
FP per person-month

17

LOC vs. FP
The relationship between lines of code and function points depends upon the programming language that is used to implement the software and the quality of the design. Empirical studies show an approximate relationship between LOC and FP.

18

LOC/FP (average)
Assembly language: 320
C: 128
COBOL, FORTRAN: 106
C++: 64
Visual Basic: 32
Smalltalk: 22
SQL: 12
Graphical languages (icons): 4

19
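A minimal sketch of converting between FP and LOC using the average LOC/FP ratios above; the language names come from the table, the FP count is illustrative:

```python
# Average LOC per function point, from the table above
LOC_PER_FP = {
    "Assembly": 320, "C": 128, "COBOL": 106, "FORTRAN": 106,
    "C++": 64, "Visual Basic": 32, "Smalltalk": 22, "SQL": 12,
}

def estimate_loc(function_points, language):
    """Rough size estimate: FP count times the language's average LOC/FP."""
    return function_points * LOC_PER_FP[language]

print(estimate_loc(160, "C++"))  # about 10,240 LOC
```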

How Is Quality Measured?


Design Metrics
Structural Complexity: fan-in, fan-out, morphology
System Complexity
Data Complexity
Component Metrics: Size, Modularity, Localization, Encapsulation, Information Hiding, Inheritance, Abstraction, Complexity, Coupling, Cohesion, Polymorphism

Implementation Metrics
Size, Complexity, Efficiency, etc.
20

Comment Percentage (CP)


Number of commented lines of code divided by the number of non-blank lines of code
Usually 20% indicates adequate commenting for C or Fortran code
The higher the CP value, the more maintainable the module is
21
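A minimal sketch of the CP calculation, assuming C-style '//' line comments; the sample source lines are made up:

```python
def comment_percentage(lines):
    """CP = commented lines / non-blank lines (assuming '//' line comments)."""
    non_blank = [l for l in lines if l.strip()]
    commented = [l for l in non_blank if l.lstrip().startswith("//")]
    return len(commented) / len(non_blank) if non_blank else 0.0

source = ["// update the total", "total += x;", "", "return total;"]
print(comment_percentage(source))  # 1 comment line / 3 non-blank lines = 0.33
```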

Size Oriented Metric - Fan In and Fan Out


The Fan In of a module is the amount of information that enters the module
The Fan Out of a module is the amount of information that exits the module
We assume all pieces of information have the same size
Fan In and Fan Out can be computed for functions, modules, objects, and also non-code components
Goal: low Fan Out for ease of maintenance
22

Size Oriented Metric - Halstead Software Science


Primitive Measures
number of distinct operators
number of distinct operands
total number of operator occurrences
total number of operand occurrences

Used to Derive
maintenance effort of software
testing time required for software
23
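A minimal sketch of collecting the four Halstead primitives for a toy expression language; the tokenizer and the operator set are simplifying assumptions:

```python
import re

OPERATORS = {"+", "-", "*", "/", "=", "(", ")"}

def halstead_primitives(code):
    """Count distinct/total operators and operands (toy tokenizer)."""
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|[+\-*/=()]", code)
    operators = [t for t in tokens if t in OPERATORS]
    operands = [t for t in tokens if t not in OPERATORS]
    return len(set(operators)), len(set(operands)), len(operators), len(operands)

n1, n2, N1, N2 = halstead_primitives("x = a + b * (a - 1)")
print(n1, n2, N1, N2)  # 6 distinct operators, 4 distinct operands, 6 and 5 total occurrences
```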

Flow Graph
if (a) { X(); } else { Y(); }
Predicate Nodes

V(G) = E - N + 2, where E = number of edges and N = number of nodes


24
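A minimal sketch computing V(G) = E - N + 2 from an edge list; the graph below is the flow graph of the if/else example and is illustrative:

```python
def cyclomatic_complexity(edges):
    """V(G) = E - N + 2 for a single connected flow graph."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# Flow graph of: if (a) { X(); } else { Y(); }
edges = [("if", "X"), ("if", "Y"), ("X", "end"), ("Y", "end")]
print(cyclomatic_complexity(edges))  # 4 - 4 + 2 = 2 (one predicate node)
```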

McCabe's Metric
The smaller the V(G), the simpler the module. Modules with a larger V(G) are a little unmanageable. A high cyclomatic complexity indicates that the code may be of low quality and difficult to test and maintain.

25

Chidamber and Kemerer Metrics


Weighted methods per class (WMC) Depth of inheritance tree (DIT) Number of children (NOC) Coupling between object classes (CBO) Response for class (RFC) Lack of cohesion metric (LCOM)
26

Weighted methods per class (WMC)


ci is the complexity of each method Mi of the class

Often, only public methods are considered

WMC = sum of ci over the n methods of the class (i = 1..n)

Complexity may be the McCabe complexity of the method
Smaller values are better
Perhaps the average complexity per method is a better metric?

The number of methods and the complexity of the methods involved is a direct predictor of how much time and effort is required to develop and maintain the class.
27
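A minimal sketch, assuming per-method McCabe complexities are already available; the class and the numbers are made up:

```python
def weighted_methods_per_class(method_complexities):
    """WMC: sum of the complexities ci of the class's methods."""
    return sum(method_complexities.values())

# Hypothetical class with per-method McCabe complexities
account_methods = {"deposit": 2, "withdraw": 4, "balance": 1, "apply_interest": 3}
print(weighted_methods_per_class(account_methods))                          # WMC = 10
print(weighted_methods_per_class(account_methods) / len(account_methods))   # average 2.5 per method
```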

Depth of inheritance tree (DIT)


For the system under examination, consider the hierarchy of classes
DIT is the length of the maximum path from the node to the root of the tree
Relates to the scope of the properties
How many ancestor classes can potentially affect a class

Smaller values are better


28

Number of children (NOC)


For any class in the inheritance tree, NOC is the number of immediate children of the class
The number of direct subclasses

How would you interpret this number? A moderate value indicates scope for reuse and high values may indicate an inappropriate abstraction in the design
29

Coupling between object classes (CBO)


For a class, C, the CBO metric is the number of other classes to which the class is coupled A class, X, is coupled to class C if
X operates on (affects) C or C operates on X

Excessive coupling indicates weakness of class encapsulation and may inhibit reuse
High coupling also indicates that more faults may be introduced due to inter-class activities
30

Response for class (RFC)


Mci = the number of methods called in response to a message that invokes method Mi
Fully nested set of calls

Smaller numbers are better

Larger numbers indicate increased complexity and debugging difficulties

RFC = sum of Mci over the n methods of the class (i = 1..n)

If a large number of methods can be invoked in response to a message, the testing and debugging of the class becomes more complicated
31
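A minimal sketch, assuming a call map from each method to the methods it invokes; RFC here is taken as the size of the class's response set (its own methods plus everything reachable through fully nested calls), and the class is made up:

```python
def response_for_class(own_methods, calls):
    """RFC: size of the response set -- the class's own methods plus every
    method reachable through (fully nested) calls from them."""
    response = set(own_methods)
    stack = list(own_methods)
    while stack:
        m = stack.pop()
        for callee in calls.get(m, ()):
            if callee not in response:
                response.add(callee)
                stack.append(callee)
    return len(response)

# Hypothetical class: two methods, with nested calls into helpers and other classes
calls = {"save": ["validate", "db.write"], "validate": ["log.warn"]}
print(response_for_class(["save", "load"], calls))  # {save, load, validate, db.write, log.warn} = 5
```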

Lack of cohesion metric (LCOM)


Number of methods in a class that reference a specific instance variable
A measure of the tightness of the code
If a method references many instance variables, then it is more complex and less cohesive
The larger the number of similar methods in a class, the more cohesive the class is
Cohesiveness of methods within a class is desirable, since it promotes encapsulation
32

Testing Metrics
Metrics that predict the likely number of tests required during various testing phases
Metrics that focus on test coverage for a given component

33

Views on SE Measurement

34

Views on SE Measurement

35

Views on SE Measurement

36

12 Steps to Useful Software Metrics


Step 1 - Identify Metrics Customers
Step 2 - Target Goals
Step 3 - Ask Questions
Step 4 - Select Metrics
Step 5 - Standardize Definitions
Step 6 - Choose a Model
Step 7 - Establish Counting Criteria
Step 8 - Decide On Decision Criteria
Step 9 - Define Reporting Mechanisms
Step 10 - Determine Additional Qualifiers
Step 11 - Collect Data
Step 12 - Consider Human Factors

37

Step 1 - Identify Metrics Customers


Who needs the information?
Who's going to use the metrics?
If the metric does not have a customer, do not use it.
38

Step 2 - Target Goals


Organizational goals
  Be the low cost provider
  Meet projected revenue targets
Project goals
  Deliver the product by June 1st
  Finish the project within budget
Task goals (entry & exit criteria)
  Effectively inspect software module ABC
  Obtain 100% statement coverage during testing

39

Step 3 - Ask Questions


Goal: Maintain a high level of customer satisfaction
What is our current level of customer satisfaction?
What attributes of our products and services are most important to our customers?
How do we compare with our competition?
40

Step 4 - Select Metrics


Select metrics that provide information to help answer the questions
Be practical, realistic, pragmatic
Consider the current engineering environment
Start with the possible
Metrics don't solve problems -- people solve problems
Metrics provide information so people can make better decisions
41

Selecting Metrics
Goal: Ensure all known defects are corrected before shipment
42

Metrics Objective Statement Template


To {understand | evaluate | control | predict} the {attribute} of the {entity} in order to {goal(s)}

Example - Metric: % defects corrected

To evaluate the % of defects found & corrected during testing in order to ensure all known defects are corrected before shipment
43

Step 5 - Standardize Definitions

Developer

User
44

Step 6 - Choose a Model


Models for code inspection metrics
Primitive Measurements:
Lines of Code Inspected = loc
Hours Spent Preparing = prep_hrs
Hours Spent Inspecting = in_hrs
Discovered Defects = defects

Other Measurements:
Preparation Rate = loc / prep_hrs
Inspection Rate = loc / in_hrs
Defect Detection Rate = defects / (prep_hrs + in_hrs)
45
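A minimal sketch of the derived inspection measurements above; the recorded numbers in the example are made up:

```python
def inspection_metrics(loc, prep_hrs, in_hrs, defects):
    """Derive the code-inspection rates from the four primitive measurements."""
    return {
        "preparation_rate": loc / prep_hrs,                        # LOC prepared per hour
        "inspection_rate": loc / in_hrs,                           # LOC inspected per hour
        "defect_detection_rate": defects / (prep_hrs + in_hrs),    # defects found per hour
    }

# Example inspection: 400 LOC, 2.5 h preparation, 2 h inspecting, 9 defects found
print(inspection_metrics(400, 2.5, 2.0, 9))  # rates: 160.0, 200.0, 2.0
```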

Step 7 - Establish Counting Criteria


Lines of Code
Variations in counting
No industry-accepted standard
SEI guideline - check sheets for criteria
Advice: use a tool
46

Counting Criteria - Effort


What is a Software Project? When does it start / stop? What activities does it include? Who works on it?
[Gantt chart: example project tasks (Task 1-16) scheduled across weeks W1-W14, months 2-4]
47

Step 8 - Decide On Decision Criteria


Establish Baselines
Current value
  Problem report backlog
  Defect-prone modules
Statistical analysis (mean & distribution)
  Defect density
  Fix response time
  Cycle time
  Variance from budget (e.g., cost, schedule)

48

Step 9 - Define Reporting Mechanisms


[Example reports: table of problem reports opened, fixed, and resolved per month (Jan-97 through Apr-97), plus monthly and quarterly bar and trend charts]
49

Step 10 - Determine Additional Qualifiers


A good metric is a generic metric
Additional qualifiers:
  Provide demographic information
  Allow detailed analysis at multiple levels
  Define additional data requirements
50

Additional Qualifier Example


Metric: software defect arrival rate
Release / product / product line
Module / program / subsystem
Reporting customer / customer group
Root cause
Phase found / phase introduced
Severity
51

Step 11 - Collect Data


What data to collect?
Metric primitives Additional qualifiers

Who should collect the data?


The data owner
Direct access to source of data
Responsible for generating data
Owners more likely to detect anomalies
Eliminates double data entry
52

Examples of Data Ownership


Owner: Management
Data owned: Schedule, Budget

Owner: Engineers
Data owned: Time spent per task, Inspection data including defects found, Root cause of defects

Owner: Testers
Data owned: Test cases planned / executed / passed, Problems, Test coverage

Owner: Configuration management specialists
Data owned: Lines of code, Modules changed

Owner: Users
Data owned: Problems, Operation hours
53

Step 12 - Consider Human Factors


The People Side of the Metrics Equation
How measures affect people
How people affect measures

Don't underestimate the intelligence of your engineers. For any one metric you can come up with, they will find at least two ways to beat it. [unknown]
54

Don't
Measure individuals
Use metrics as a stick

Use only one metric
  Quality
  Cost
  Schedule

Ignore the data


55

Do
Select metrics based on goals
[GQM diagram (Basili-88): Goals 1-2 decompose into Questions 1-4, which map to Metrics 1-5]

Provide feedback
[Feedback loop diagram: data providers supply data, metrics are derived, and feedback flows back to the processes, products & services being measured]

Obtain buy-in

Focus on processes, products & services


56

References
Chidamber, S. R. & Kemerer, C. F., "A Metrics Suite for Object Oriented Design," IEEE Transactions on Software Engineering, Vol. 20, No. 6, June 1994.
Hitz, M. and Montazeri, B., "Chidamber and Kemerer's Metrics Suite: A Measurement Theory Perspective," IEEE Transactions on Software Engineering, Vol. 22, No. 4, April 1996.
Lacovara, R. C., and Stark, G. E., "A Short Guide to Complexity Analysis, Interpretation and Application," May 17, 1994. http://members.aol.com/GEShome/complexity/Comp.html
Tang, M., Kao, M., and Chen, M., "An Empirical Study on Object-Oriented Metrics," IEEE, 0-7695-0403-5, 1999.
Tegarden, D., Sheetz, S., Monarchi, D., "Effectiveness of Traditional Software Metrics for Object-Oriented Systems," Proceedings: 25th Hawaii International Conference on System Sciences, January 1992, pp. 359-368.
Principal Components of Orthogonal Object-Oriented Metrics. http://satc.gsfc.nasa.gov/support/OSMASAS_SEP01/Principal_Components_of_Orthogonal_Object_Oriented_Metrics.htm
Behforooz & Hudson, Software Engineering Fundamentals, Oxford Press, 1996. Chapter 18: Software Quality and Quality Assurance.
Pressman, R., Software Engineering: A Practitioner's Approach, McGraw-Hill, 1997.
IEEE Standard on Software Quality Metrics Validation Methodology (IEEE Std 1061).
Henderson-Sellers, B., Object-Oriented Metrics, Prentice-Hall, 1996.

57
