
Lecture #3

CS 633 EX
February 20, 2016

Agenda
Course Structure and Expectations
Logistics
Term Project team, submissions
Assignments
Quizzes

Module 3
Agile
Peer Reviews

Learning Objectives
Upon successful completion of this course, you will be prepared to:

Justify, implement and manage a global product development effort.

Solicit, define and scope requirements as part of product backlog grooming.
Play an effective role as a Software Engineering Manager in the context of the IEEE CSDP.
Select an estimation method that is appropriate for a specific phase of a product life
cycle. Oversee adoption of a consistent methodology to narrow the Cone of Uncertainty.
Support the Scrum delivery framework; become aware of several agile certification paths.
Play a role in a peer review; request and provide constructive and concise comments.
Evaluate software development tools (approved, allowed, restricted) while following the
Magic Quadrant technique. Maintain a logical relationship between tools and processes to
optimize their variety throughout an organization.
Articulate the strategy for system and unit testing leading to continuous integration and
delivery.
Structure a project asset library aiming at single-click navigation to a requested artifact.
Provide leadership to a process program using SEI CMMI as an improvement model.

Introduction
Focus on learning. You should assume that you will be successful with the course
and, hence, that grades will reflect that. Focus on the best practices that you
learn here and can actually apply.
This is an overview course. It delves into multiple topics. Buckle up. You cannot
afford to spend all your time on a single topic.
Time-box your assignments. This is not about a PhD thesis. This is about becoming
aware of an important concept and then moving on to the next concept.
This course is not about a specific language or framework.
Some assignments and discussions are optional, aimed at stimulating your thinking.
The term project brings it all together. All predefined deliverables of the term project are
introduced in lectures.
How does it all fit together? It is all about how to do software better - about
software engineering and program management. You cannot function as a
professional software engineer or a project manager if you are unaware of the
key concepts covered in this class.

MET CS473
Principles

[Concept-map slide: each course module is paired with a one-line guiding principle.
Modules: Process Improvement; Process Architecture; Test Essentials; Unit Test; Continuous Delivery; Globalization / Offshoring Taxonomy; Requirements / Backlog Grooming; Engineering Management; Software Configuration Management; Estimation; Software Design; Agile; SW Tools Evaluation; Peer Reviews.
Principles: Process has no goal in itself. A glue, a heartbeat, a librarian. Data-driven design. Mocks are not stubs. Deployment pipeline. Low coupling and high cohesion. Scale optimization. Do not offshore if unable to monetize. If it is not on the backlog, it does not exist. Myth of organizational hierarchy. Identification, control, auditing, reporting. Precision equal to accuracy. Wisdom of vertical slices. It is cheaper to find defects early.]

MET CS473
Responsibilities

[Concept-map slide: each course module is paired with a one-line responsibility.
Modules: Unit Test; Continuous Delivery; Requirements / Backlog Grooming; Process Improvement; Engineering Management; Process Architecture; Test Essentials; Globalization / Offshoring Taxonomy; Software Configuration Management; Estimation; Software Design; Agile; SW Tools Evaluation; Peer Reviews.
Responsibilities: Find the shortest path. Separate released from drafts. Select the best tests. Merge each commit. Fail, Pass, Refactor. Maintain requirements in canonical form. Manage the flow of features. Focus your team. Document the project taxonomy. Control changes. Retain historical estimates. Fit components into the architecture. Facilitate convergence of tools. Split. Ask for and provide comments.]

[Concept-map slide: key terms and tools associated with the course modules - UML; Git; ISO; Test Essentials; Faults & Failures; Cost of Delay; Black Swan; Requirements; Pivotal; VersionOne; Rally; Backlog Grooming; 5/20 Rule; Story Points; Architecture; Asset Library; Motivation; Peer Reviews; Defect Density & Examination Rate; Estimation; Low Coupling and High Cohesion; Quality Center; Continuous Delivery; SW Tools Evaluation; Agile Manifesto; Scrum; CMM; Regression; Cone of Uncertainty; Software Configuration Management; MVC; Mocks & Stubs; Unit Test; Process Improvement; Agile.]

Agile

Agile
The purpose of this section is to delve into a very special attribute of software
development: its agility - the ability to respond to change quickly and efficiently. How would
you facilitate a software process that is nimble, adaptable, bottom-up, and reduces the
risk of a failed delivery? For the past decade, Agile has been an inspiration for many
generations of software developers.
As Agile has a very rich and distinct taxonomy, here are some relevant concepts:

Scrum framework

Sprint / Iteration

Daily stand up

Product / Iteration Backlog

Agile Manifesto / Principles

Scrum Master

Product Owner

SAFe (Scaled Agile Framework)

MVP (Minimum Viable Product)

PSPI (Potentially Shippable Product Increment)

Vertical Slices

Batch Size Reduction

Scrum Framework at a Glance

As a matter of introduction, I would like to examine the pictorial below from
KnowScrum, as it has all the key components of the Agile Scrum framework. Many companies
have a similar pictorial on their sites. Please go over each component and make sure
you have some initial familiarity with the topic. We shall approach the subject
systematically within the next few pages.

A Brief History of Agile

Agile is alive! It is documented and has a version number that is bound to increase! Here in
front of us are the key events of the Agile movement: (2001) the Agile Manifesto is signed by 17 early
adopters; (2002) the Scrum Alliance is formed; (2013) the Scrum Guide is revised and maintained.

Agile Manifesto
http://agilemanifesto.org

There has hardly been a document so brief that has produced such a profound effect on so many people.
1. Our highest priority is to satisfy the customer through early and
continuous delivery of valuable software
2. Welcome changing requirements, even late in development. Agile
processes harness change for the customer's competitive advantage
3. Deliver working software frequently, from a couple of weeks to a
couple of months, with a preference to the shorter timescale
4. Business people and developers must work together daily
throughout the project
5. Build projects around motivated individuals. Give them the
environment and support they need, and trust them to get the job done
6. The most efficient and effective method of conveying information to
and within a development team is face-to-face conversation
7. Working software is the primary measure of progress
8. Agile processes promote sustainable development. The sponsors,
developers, and users should be able to maintain a constant pace
indefinitely
9. Continuous attention to technical excellence and good design
enhances agility
10. Simplicity--the art of maximizing the amount of work not done--is
essential
11. The best architectures, requirements, and designs emerge from
self-organizing teams.
12. At regular intervals, the team reflects on how to become more
effective, then tunes and adjusts its behavior accordingly.

Interpreting the Agile Manifesto & Principles
for Your Specific Environment

Individuals and interactions over processes and tools

From Steve McConnell on the Four Drivers of a Software Project: the human variation
factors are by far the largest influence on project outcomes, according to COCOMO II.

Why Does Human Variation Occur?


Innate capability
Skills development
Motivation
Team Composition
Team dynamics

Working software is the primary measure of progress

Organizational alignment is built around software delivery
Faster, faster, faster - and with the highest quality
Breaking the addiction to process for its own sake
This is not about functional specs, test cases, agile or waterfall - this is about delivery of code
The software engineer / project manager is responsible for streamlining the process

HOW MANY COMMITS HAVE YOU DONE TODAY?


Folks are forgetting about the bottom line


Org chart is unimportant
Process is unimportant
Personal advancement is unimportant
Whatever else is unimportant
Software delivery is the priority

Self-Organizing Teams
Immature teams need a strong leader.
As the maturity of a team improves, the role of the leader has to change.
The Scrum Guide has no role of a manager.
Problems are in sight if:
a leader is clinging to his/her entitlements
a team is unable to resolve some inherent conflicts

Examine the Scrum Guide in detail

https://www.scrum.org/Portals/0/Documents/Scrum%20Guides/2013/Scrum-Guide.pdf

This is the "Definitive" Guide that is being quoted and followed all around the globe.
It is authored by Ken Schwaber and Jeff Sutherland.
Its wide adoption could be attributed to its fundamental assertion that "Scrum is Free".
Roles
Product Owner
Development Team
Scrum Master
Activities
Sprint Planning
Daily Scrum
Sprint Review
Sprint Retrospective
Key Artifacts
Product Backlog
Sprint Backlog
Product
Reports
Impediment Lists

In the true spirit of Agile, the level of commitment to the project
outcome has been divided into two categories. Participants with a strong
commitment belong to the so-called "pig" category; it includes the Product
Owner, the Scrum Master, and the Development Team. Remote participants
with a weaker commitment (Managers, Customer Stakeholders) belong
to the so-called "chicken" category. Refer to an insightful discussion by
McConnell, see the link.
http://www.construx.com/10x_Software_Development/Scrum_Chickens_and_Pigs

The definition of personas in the Term Project uses the industry-standard RASCI diagram. In that terminology,
"R - responsible" and "S - supporting" belong to the first Agile category;
"C - consulted" and "I - informed" belong to the second category.

What's Wrong with this Picture?

If you carefully examine the picture below, you might find some discrepancies with the Scrum Guide.
1) The Scrum Team consists of the PO, the SM, and the Development Team. There
is no such role as Developer or Tester. The official Scrum Guide
clearly distinguishes the Scrum Team from the Development Team.
2) "Remember that in Scrum estimates are only a part of the story."
The term "story" is overloaded; in fact, estimates are not a part of a "story".
3) Story points are a proxy measure of Size, not a measure of Complexity.
Complexity is included in Size.

4) The burn-down chart creates the visual misrepresentation that "burn-down velocity" is decreasing. In fact, Velocity - the number of
story points delivered per time period - is constant, or at least stable. What is decreasing is the amount of remaining work.
5) The chart also creates the visual misrepresentation that the same Stories that constitute the Product Backlog are advanced into the Release
and Sprint Backlogs. This might be true for the simple efforts of small organizations. Large organizations need to coordinate among
multiple Scrum teams and maintain a hierarchy of Stories: the Product Backlog consists of the highest-level stories, usually called
Features, while Sprint Backlogs consist of the lowest-level stories, usually called leaf nodes. This organizational hierarchy is missing.
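To make point 4 concrete, here is a minimal sketch with made-up numbers showing that a healthy burn-down has a roughly constant velocity, while what decreases is the remaining work:

    # Sketch: constant velocity versus decreasing remaining work (made-up numbers).
    total_story_points = 120          # size of the backlog being burned down
    velocity_per_iteration = 30       # story points completed per iteration (stable)

    remaining = total_story_points
    iteration = 0
    while remaining > 0:
        iteration += 1
        completed = min(velocity_per_iteration, remaining)
        remaining -= completed
        print(f"Iteration {iteration}: velocity={completed}, remaining work={remaining}")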

Agile Scrum Self Assessment

There are several certification paths available,


CSM - Certified Scrum Master
CPO - Certified Product Owner
PMI-ACP - Agile Certified Professional from PMI
Metropolitan College has a course
CS 634 "Agile Software Development"

Scrum.org offers an on-line self-assessment that is free of charge. One of the fastest
ways to learn Agile Scrum is to make several attempts to reach the passing grade of 85%.

https://www.scrum.org/Assessments/Open-Assessments/Scrum-Open-Assessment

You are encouraged to take the on-line test.

Here are several examples of questions and answers.

Better SAFe than sorry

http://scaledagileframework.com/

Dean Leffingwell maintains an important site with a wealth of useful information. The topic is how to fit Agile into a large organization: how do you coordinate among dozens and even hundreds
of teams? The site promotes a certain methodology that could be quite effective.

Build on Cadence,
Deliver on Demand

In the latest SAFe version 4.0, "PSI - Potentially Shippable Increment" has been
changed to "PI - Program Increment".
Other relevant terms:
Release Train Engineer
Architectural Runway
Architectural Feature
Swarming
Epic (note that in the implementation of
some companies "feature" is
positioned above "epic")
WSJF - Weighted Shortest Job First (a small sketch follows below)
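As a rough sketch of how WSJF is typically applied - SAFe computes it as the cost of delay divided by job size or duration - here is a small example; the job names and numbers below are hypothetical:

    # Sketch: Weighted Shortest Job First - sequence jobs by cost of delay / duration.
    jobs = [
        {"name": "Feature A", "cost_of_delay": 10, "duration": 5},
        {"name": "Feature B", "cost_of_delay": 8,  "duration": 2},
        {"name": "Feature C", "cost_of_delay": 3,  "duration": 1},
    ]

    for job in jobs:
        job["wsjf"] = job["cost_of_delay"] / job["duration"]

    # Highest WSJF first: B (4.0), then C (3.0), then A (2.0).
    for job in sorted(jobs, key=lambda j: j["wsjf"], reverse=True):
        print(f'{job["name"]}: WSJF = {job["wsjf"]:.1f}')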

Wisdom of Vertical Slices

Here is a simple example to illustrate correct affinities.

Stories should be grouped and sequenced in such a way that the team is able to ship multiple
times. This advances our process one step closer to Continuous Delivery.

A number of times I have participated in brainstorming sessions and iteration planning.

Everyone arrives at a large auditorium, usually without windows, so as to preserve the wall
space. A secretary greets you at the door with a pack of yellow sticky notes. Everyone
immediately starts writing stories on the sticky notes and gluing them to a wall. In fifteen
minutes, the whole room turns yellow.
Now is the time for someone to create affinities and merge groups of these stories
together. Usually it is an architect who says, "give me all development stories"; he
waves his right hand and the secretary attaches them to the right wall. Then the architect
says, "give me all test stories"; he waves his left hand and the secretary attaches them to
the left wall. This would be a bona fide waterfall session.
Alternatively, at an Agile session, the question being asked is: what is the actual delivery that
we are able to make this week? How about the next week, and the week after that? So all
stories are grouped by week, with a certain feature being earmarked specifically for that
week. This is a very different mindset.
The lecture slide called Wisdom of Vertical Slices implies that
dividing all stories vertically is not always the first thought that
comes to mind; it requires second-order thinking. If you ask
an individual UX engineer, she might very well tell you that she
wants all her stories to be sequential - why should she be
interrupted? However, from the top-level point of view, quick
customer feedback presents an unmatched advantage.
Horizontal and Vertical User Stories - Slicing the Cake, by DeltaMatrix Consulting

http://www.deltamatrix.com/horizontal-and-vertical-user-stories-slicing-the-cake
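The contrast between the two planning sessions described above can be sketched in code: the same backlog grouped by discipline (horizontal, waterfall-style) versus grouped by the week in which a slice can actually ship (vertical slices). The story names and fields are hypothetical:

    # Sketch: horizontal grouping (by discipline) versus vertical slicing (by shippable week).
    stories = [
        {"name": "Login UI",     "discipline": "development", "ship_week": 1},
        {"name": "Login API",    "discipline": "development", "ship_week": 1},
        {"name": "Login tests",  "discipline": "test",        "ship_week": 1},
        {"name": "Search UI",    "discipline": "development", "ship_week": 2},
        {"name": "Search tests", "discipline": "test",        "ship_week": 2},
    ]

    def group_by(items, key):
        """Group story names by the given attribute."""
        groups = {}
        for story in items:
            groups.setdefault(story[key], []).append(story["name"])
        return groups

    print("Horizontal (waterfall) grouping:", group_by(stories, "discipline"))
    print("Vertical slices (ship every week):", group_by(stories, "ship_week"))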

Obsession with Being Done

The most difficult part of a project is to finish it. All your missteps and the pieces that were
forgotten at the beginning are going to haunt you at the end. The very title of
McConnell's book "Code Complete" is a reminder of the 80/20 rule, where the last 20 percent
takes at least 80 percent of the effort.
The notion behind "Definition of Done" is to profess a razor-sharp focus toward completion
of a project. Mario Moreira refers to THREE sequential criteria - "done, done, done" - that read
like a music sheet from which we all have to sing together.
Big Visible has an insightful blog post about the "obsession of being done":
".... Which would you rather be able to say?
Option 1. For our new product, 90% of our features are 100% done, or
Option 2. For our new product, 100% of our features are 90% done?"
I think you'd prefer the first option. Even though, from a purely mathematical point of view, the amount
of effort left is the same, the last 10% could hide a significant risk - and it usually does. In the
first option, these 10% are concentrated in a single feature. In the second option, these 10%
are spread over all features. So the real question is: what would you prefer - to have
your risk well bounded in one place, or to have it spread around? Apparently, dealing with a
well-bounded risk is the lesser risk.
The culture of Agile teams is to be obsessed about things being done. There is also an
Acceptance Criteria guiding the completion of each story. The Definition of Done is a
collection of the common parts from the Acceptance Criteria of all stories.
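A minimal sketch of that last sentence, using set intersection over hypothetical acceptance criteria (the story names and criteria are invented for illustration):

    # Sketch: Definition of Done as the common part of all stories' Acceptance Criteria.
    acceptance_criteria = {
        "Story A": {"code reviewed", "unit tests pass", "demo to PO", "UI matches mockup"},
        "Story B": {"code reviewed", "unit tests pass", "demo to PO", "API documented"},
        "Story C": {"code reviewed", "unit tests pass", "demo to PO"},
    }

    definition_of_done = set.intersection(*acceptance_criteria.values())
    print("Definition of Done:", sorted(definition_of_done))
    # -> ['code reviewed', 'demo to PO', 'unit tests pass']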

Essence of Agile is in MVP-Minimum Viable Product


Let us consider the human factor driving the definition of MVPs. A Product Manager is
compelled to create constant pressure on Development, so as to make them work extra hard
and go the extra mile by stuffing as much as possible into each Release. In fact, to be fair to
Product Management, this pressure is not self-generated; it is transferred from the
marketplace, from global competition.
Donald Reinertsen has been at the forefront of the Lean movement for three decades. Here is the
link to his keynote speech at the well-attended conference YOW 2012. You can judge for
yourself, as Donald is an excellent speaker making a strong argument that lays a foundation for
the Agile mentality.
http://yow.eventer.com/yow-2012-1012/the-practical-science-of-batch-size-by-don-reinertsen-1269

Of all the ideas of Lean, batch size reduction is the most important economically, says Donald
Reinertsen. That is why the emphasis of the notion of MVP is on the first word: "minimum".

How not to build a minimum viable product.

How to build a minimum viable product: this process benefits from customer feedback.

Iteration Planning

From Product Backlog to Iteration Backlog

In order to execute the following strategy - items are pulled from the backlog to fill the team's capacity -
one needs, as a minimum:
Estimates of feature Sizes
The team's capacity
Here is the portrait of Leonardo Fibonacci along with his famous work
defining the sequence of numbers that we use in the estimation of stories.

In real life, each feature is taken from the product backlog and
thoroughly reviewed. Several considerations are taken into
account when scheduling a feature into a certain iteration.

Prioritized Product Backlog: Stories 1 through 24.

Rough calculation - fitting stories into iterations (iteration 1 through iteration 5):
Story sizes follow the Fibonacci sequence 1, 2, 3, 5, 8, 13, 21, 34; the count of stories at each size (50, 45, 30, 20, 15, 7, ...) is multiplied by its size and summed, giving a total backlog of 1428 story points.
Iteration capacity: 7 team members x 2 weeks per iteration x 28 story points (ideal hours) per week = 392.
Iteration schedule: 1428 / 392 = 3.64 iterations.

Note that, when calculating capacity this way, logical dependencies and architectural scenarios are not considered.
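The same rough calculation can be written down as a small script. The numbers mirror the slide; as noted above, this simplification ignores logical dependencies and architectural scenarios:

    # Sketch: rough iteration capacity and number of iterations (numbers from the slide).
    team_members = 7
    weeks_per_iteration = 2
    points_per_member_per_week = 28   # story points (ideal hours) per team member per week

    iteration_capacity = team_members * weeks_per_iteration * points_per_member_per_week
    total_backlog_points = 1428       # sum over all stories of (count x Fibonacci size)

    print("Iteration capacity:", iteration_capacity)                                  # 392
    print("Iterations needed: %.2f" % (total_backlog_points / iteration_capacity))    # 3.64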

Real-life example: when building a house, start with the pond in front of the house.
Such an unusual sequence of activities could be defined only by an architect
who has the complete picture in mind and who has certain hidden reasons. For example,
one needs to stand on a balcony to see the pond, and there is only one
place in the yard where such a pond could be positioned.
If we are to diminish the influence of dependencies and decouple
stories, then the methodology of filling the team's capacity reappears.
Here is another insightful case from the Scrum Alliance blog, titled "Requirement
Analysis and Design Flow in Scrum":

https://www.scrumalliance.org/community/articles/2009/december/requirement-analysis-and-design-flow-in-scrum

Example Standard Chart


Estimating Agile Projects

Preparing Dinner
(example of estimation)
350 degree oven - 40 minutes
700 degree oven - 20 minutes

How to estimate the whole MVP before coming down to estimating stories?

A simple answer is ... don't do it.

Session 1 of this course described a Grooming and Scoping process where features are
initially estimated. Such a process is actually practiced by several large organizations.
In a company with traditionally non-agile parts, such as manufacturing and service, one
might be under huge pressure to produce a commitment very early. This is a true test
for an Agilist: to resist the temptation and stick with a first-iteration-only estimate.
Exercise.
Given the front side of the template for a user story
below, which has two attributes,
Size and
Business Value,
how do these two attributes correlate?
Answer 1. If the first story has Size 8, then a
second story with Size 16 is bound to double the
Business Value of the first story.
Answer 2. The correlation between Business Value
and Size is not trivial. On one hand, UI changes
have a direct link to customer value that can be
understood and appreciated by a layman. On
the other hand, architectural enhancements are
transparent to the user community and have to be
specifically explained to be appreciated.

An insightful story
In 2005 I was teaching a class to a scrum team during iteration zero. The topic was
how to harmonize test cases in Quality Center with stories in VersionOne. Folks were
sitting in a dark auditorium and I was on stage showing dozens of slides. After the
presentation, the Scrum Master went to my manager, who had the title of Chief
Architect, and complained bitterly, saying that Alex was undermining the Agile culture.
The next day I organized the second part of the presentation in a very different setup. I
placed a few low folding chairs in the middle of a busy corridor. Some folks were standing,
others were sitting. I also ordered scrambled eggs and hot coffee for everyone. So
team members were schmoozing among themselves and with folks who were passing
through the corridor. I continued advancing the same topic of various degrees of test
coverage and burn-downs, but in the most unassuming way possible. Everyone liked it.
The Scrum Master went to my manager and said that Alex is a really devoted and true
Agilist.
The difficulty of this assignment is to respond in kind:
articulate the moral of the story, the lesson, the takeaway
provide another story in support of this one that has a similar underlying lesson

Peer Reviews

Defect Prevention through Peer Reviews

Peer Reviews are an industry best practice adopted throughout many successful organizations. The reason peer
reviews are so effective is that they have a single goal: revealing hidden defects close to the point of their origin.
Peer Reviews complement other verification methods, e.g. Unit Test and Feature Test, to find defects that would
be impossible to find otherwise.
Let me introduce a very important chart comparing various review techniques. The source for the chart is the book by Capers Jones, Applied
Software Measurement, McGraw-Hill, chapter "Evaluating the Quality Impact of Multiple Technologies". DRE (Defect Removal Efficiency)
is the percentage of all defects found through usage of a certain technique. Apparently, the lowest efficiency belongs to personal editing of one's own
work. The question is: why is it so difficult, almost impossible, to find defects in your own work product? Why is it so much more effective to let
other folks find your defects? There are many answers to that question, but the numbers on the chart speak for themselves: formal inspections,
or peer reviews, are about three times more effective than personal editing. So it is much cheaper to facilitate a peer review
than to attempt finding your own defects. The justification for the peer review process is in these numbers.

Type of Review                                    DRE
1  Personal editing of own work                   25%
2  Group informal design reviews                  30%
3  Manager's review of employee work              40%
4  Group structural walkthroughs                  50%
5  Formal design reviews (SPR inspections)        65%
6  Formal inspections (Fagan)                     80%
7  Unit, Integration, or Systems Testing (each)   50%

Here is the link to the book "Applied Software Measurement" by Capers Jones, McGraw-Hill,
chapter "Evaluating the Quality Impact of Multiple Technologies":
http://www.amazon.com/Applied-Software-Measurement-Assuring-Productivity/dp/0070328269

Here is the link to the LinkedIn discussion "From Black To White Box Testing", where Capers Jones
elaborates on the latest DRE numbers (DRE, Defect Removal Efficiency, is the percentage of all
defects found through usage of a certain technique):
https://www.linkedin.com/groupItem?view=&item=5891918620295979008&type=member&gid=1159917&trk=eml-group_discussion_new_comment-discussion-title link&fromEmail=fromEmail&ut=133El0OuNwHSk1
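A back-of-the-envelope way to read these DRE numbers is to compose several techniques in sequence. Under the simplifying assumption that each technique removes its DRE share of whatever defects reach it, the cumulative removal is 1 minus the product of the escape rates:

    # Sketch: cumulative defect removal when several techniques are applied in sequence.
    # Simplifying assumption: each technique removes its DRE share of the defects that reach it.
    dre = {
        "Personal editing of own work": 0.25,
        "Formal inspections (Fagan)":   0.80,
        "Unit testing":                 0.50,
    }

    remaining = 1.0
    for technique, efficiency in dre.items():
        remaining *= (1.0 - efficiency)

    print("Cumulative DRE: %.1f%%" % ((1.0 - remaining) * 100))  # 92.5%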

What is a Peer Review?


The definition of the process is as follows: Peer Reviews are a formal examination of work products by peers to identify
defects.
Let us elaborate on every word of this definition.

Terminology: Peer Reviews, Fagan Inspections, Structural Walkthroughs, Design Examinations

Formal - the process is documented and has a version number; it is not improvised each time it is enacted

Examination - the point of the exercise is to methodically go through a work product in great detail

Work Products - the Organizational Policy lists those work products that are supposed to be peer reviewed

Peers - the concept of peers is fundamental to the process; roles rotate; there is no role of a manager in
finding defects; managers leave their hat at the door; empirical studies show that effectiveness dips if a manager
is present, since folks are embarrassed to show their mistakes

To Identify Defects - the single goal of the process is to identify defects; not to brainstorm possible solutions; not to
prove that my design is better than yours; such a single goal makes the process most effective

Swim Lanes

Here is the workflow for one peer review. I could not overstate the importance of activity (3), "Preparation". Much too
often I have seen folks arrive at a meeting without a thorough examination of the artifact. The Moderator is encouraged to
postpone a meeting if preparation appears to be inadequate.

[Swim-lane workflow "A Peer Review", Version 10.1. Lanes: Manager, Author, Moderator, Reviewer, Scribe. Phases: Plan Review, Conduct Review, Review Follow-up.
Activities: (1) Plan for work product review; send the work product for review and schedule the review meeting; (3) Preparation - review the work product and provide on-line feedback; conduct the review meeting; (6) record the meeting; consolidate and analyze the review comments; update the work product; verify and close; archive / announce the work product and review records in the repository; (10) perform retrospective analysis / audit on the review records.]
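A minimal sketch of the workflow above as a simple state machine; the state names paraphrase the swim-lane boxes and the allowed transitions are an illustrative assumption, not the organizational standard:

    # Sketch: peer-review workflow modeled as a state machine (state names are paraphrased).
    TRANSITIONS = {
        "planned":                ["distributed"],
        "distributed":            ["in_preparation"],
        "in_preparation":         ["review_meeting"],
        "review_meeting":         ["comments_consolidated"],
        "comments_consolidated":  ["work_product_updated"],
        "work_product_updated":   ["verified_and_closed", "review_meeting"],  # rework may loop back
        "verified_and_closed":    ["archived"],
        "archived":               [],
    }

    def advance(state, next_state):
        """Move the review to next_state, enforcing the allowed transitions."""
        if next_state not in TRANSITIONS[state]:
            raise ValueError(f"illegal transition: {state} -> {next_state}")
        return next_state

    state = "planned"
    for step in ["distributed", "in_preparation", "review_meeting",
                 "comments_consolidated", "work_product_updated", "verified_and_closed"]:
        state = advance(state, step)
    print("Final state:", state)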

Rotating Roles

The rotating roles of a peer review are shown below as "hats", which the same person can wear and switch. This
ensures the democratic nature of the process.
There is no role of a "manager" - a manager leaves his/her hat at the door, so to speak.
Roles can be combined; for example, a Producer often plays several roles. In such cases, these roles should be
interpreted as "functions" performed by the same person, as someone always needs to lead the meeting (perform
the Moderator function) and someone needs to record defects (the Scribe function).

[Pictorial: hats labeled Moderator, Producer, Reader, Manager, Reviewer, Recorder, Consumer - roles the same person can wear and switch; the Manager hat is left at the door.]

Planning Multiple Reviews

The pictorial below focuses on the planning aspect of peer reviews. Without consistent planning, multiple defects are
usually omitted.

[Planning workflow: Complete Work Product -> plan -> Distribute for peer review (at least three days before the F2F meeting) -> respond / accept / reject -> consolidate -> incorporate -> Schedule F2F meeting -> Announce the new version in the Repository -> Close Request.]

Additional planning considerations - making sure that:

- people play different roles (see the table below)
- the peer review load is spread equally

[Example role-rotation table: six participants (John Smith, Eva Brown, Gerry White, Joe Alexander, Harry Block, Jeffrey Car) across five weekly reviews (1/1/98 through 1/29/98), with the roles rotating among M (Moderator), S (Scribe), R (Reader), P (Producer) and I (Inspector).]

Do not distribute an incomplete work product
Do not distribute a work product less than three days prior to the F2F meeting
Do not leave any comments unaddressed
Do not participate in a F2F meeting unprepared

Roles:
Moderator
Scribe
Reader
Inspector

The simplest way to plan peer reviews:

- allocate a standard time slot, for example, every Friday from 10 to 12
- book the same conference room for the whole length of the project

Do not ask the same person to always be the Scribe

(A key process parameter - should be tailored based on the team and product specifics.)
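One simple way to honor the two rules above - rotate the roles and spread the load, so that no one is always the Scribe - is a round-robin assignment. A sketch, reusing the example names from the table:

    # Sketch: round-robin rotation of peer-review roles across weekly reviews.
    people = ["John Smith", "Eva Brown", "Gerry White", "Joe Alexander", "Harry Block", "Jeffrey Car"]
    roles  = ["Moderator", "Scribe", "Reader", "Producer", "Inspector", "Inspector"]

    def schedule(week):
        """Shift the role list by one position each week so no one keeps the same role."""
        offset = week % len(roles)
        return {people[i]: roles[(i + offset) % len(roles)] for i in range(len(people))}

    for week in range(3):
        print(f"Week {week + 1}:", schedule(week))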

"Good Catch" versus "Inappropriate Comment


There is a world of difference between skillfully finding important defects versus continuously wasting precious time on
irrelevant chats. People are infinitely grateful when someone corrects their work in an unthreatening environment
where everything is focused on resolving issues. And folks could be hugely irritated when being criticized personally or
when they feel that their career is at stake. Having said this, here are several simple rules that go a long way in improving
effectiveness of peer reviews.
Comments are provided toward a work-product and not toward an Author. For example it is expected to say I see a
missing here". And it is inappropriate to say "John has missed this".
Respecting an Author is fundamental. Staying away from editorials is a key. It is prudent to provide editorials off-line
while focusing on major issues only. It is inappropriate e-mailing an author a torrent of minor misspellings while
copying his boss and a great number of other people. This is not about revealing major defects, this is about
something else.
Focus on issue not on solution. One could start with..."the way I would do this"..... and this is already stepping into
an Author's territory. A Reviewer is expected to state an issue and stop right there.
If we measure effectiveness as the number of major defects divided by the number of people and by the time
spent. Then whatever is not a part of this formula should be stopped. For example, all lengthy discussions about
how to improve effectiveness of peer reviews - do not belong to a peer review itself.
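The effectiveness measure mentioned above fits in one line; a sketch with made-up numbers:

    # Sketch: review effectiveness = major defects found / (participants x hours spent).
    def review_effectiveness(major_defects, participants, hours):
        return major_defects / (participants * hours)

    # Example: 6 major defects, 4 participants, 2-hour meeting -> 0.75 major defects per person-hour.
    print(review_effectiveness(major_defects=6, participants=4, hours=2))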

Tailoring Standard Process For Your Environment


It is important to be aware of the standard organizational process. It is equally important to tailor it to a specific project
or team. Key parts cannot be tailored out; for example, I do not see how "finding defects" could be tailored out.
Similarly, "recording defects" and "preparing for a review" are the pieces that hold it all together. On the other hand,
skipping the "review meeting" might prove useful, particularly in a situation where the same people examine sequential
classes of code day in and day out.
Considering the environment of an on-line BU offering, MET CS 473, here are some tailoring notes:
Students are offered a Drop Box to submit in-process and final deliverables
The Blackboard Discussions are used to provide and respond to comments. Hopefully your comments can benefit
from the simple rules provided earlier
There are Live Meetings scheduled at session 5 and session 6. Note that issues revealed during a live meeting are
very different from issues found during preparation
Several meetings have been scheduled to respond to questions and make sure all of us are on the same page with
this key process

Checklists

Checklists can play a significant role in peer reviews. Some organizations have extensive libraries of checklists. We shall
cover the "theory" behind checklists and then draft one checklist that could be used in our upcoming examination of the
Term Project.
Checklists are often compared to crutches that help someone who is new to the process - the understanding being that once
he/she has learned the skill of discovering major defects and is able to walk and even run, the crutches become unnecessary.
Several common attributes of a peer review checklist are:

Concise, agenda-like
Focuses on weak, error-prone, complex areas
Allows for joint discovery
Facilitates the group's synergy

Here are the three basic steps. First, the Author picks the most relevant checklist from the organizational library, then
tailors it to cover the critical areas of concern, and then includes it along with the work product that is being distributed for
review.

Select / Tailor a standard checklist -> Identify Critical Areas -> Record the Checklist in the Request

Here is an example of a checklist you are encouraged to use when submitting artifacts for a peer review.

Completeness
All items have been submitted for a peer review. Content of Drop Box matches CI list.
Personas include both Humans (e.g. operator) and Non-Humans (e.g. system)
Use Cases cover all requirements
Test cases cover all requirements and use cases
Correctness
Artifact's version is greater than 1.0: it has been reviewed, corrected, and the version increased
File names of artifacts in Drop Box follow the naming convention on CI list
Fields are defined to cover Defaults and Ranges
Estimation Record includes Size, Effort and Schedule and has at least two iterative values
State transitions are cross referenced with Fields and Reports
Style
All deliverables are done in a professional manner with appropriate level of detail
Requirements follow Canonical form
Requirements are verifiable through the INVEST checklist
Use Cases follow UML notation
Five test cases are selected to yield best coverage

Here is an insightful exchange.


- Hello John, I see you distributed a test plan. What do you expect from this review? What kind of comments?
- Whatever ... any comments are good.
- There are many different people on the distribution list. Wouldn't you want to give them some direction?
- It is not my responsibility to give reviewers a direction. I write the document, they do the review, that is it. If no one
provides any comments, I will ship it sooner, which works for me just fine.
.... The concept of a peer review checklist is the opposite of "throwing the doc over the wall". The agenda-like checklist
establishes an atmosphere of openness. An author is expected to say: this is the area I am most concerned with, and
this is the area I do not worry about. Another key aspect of the exchange is that the author should not rush the review, as the
quality of the work is at stake here.
Why are checklists so powerful? Because they enable a team to accumulate knowledge and improve its skill over
time. You can always refer to a previous checklist used at a similar peer review. It becomes part of an organizational
learning repository, reflecting the most frequent mistakes.
The PAL usually has more items than the CI list, which is a sign of a required cleanup
The State Transition Diagram usually has more states than the final product, which is a sign of a required baselining

Commercial Tools for Peer Reviews


There are many commercially available tools for peer reviews. For example, the following three competing vendors -
Crucible,
Collaborator (SmartBear), and
Review Board - offer the fundamental features in support of the industry-standard process:
retaining attributes about each event, e.g. date, artifact name, roles and names of participants
maintaining multiple comments from each peer review
deriving statistics for individual and multiple peer reviews

The following screenshots provide an example of the information that is collected and retained.
The initial screen has data about each event.

[Screenshot: inspection event record. Header fields: Project Name, Artemis #, Organization, Meeting Date, Work Product, Type ((I)nspection / (R)einspection), Project Classification, Lines of Code, Lines of Document, FSDM Module (BV / PPI / SRA / AIP / PER / PAI / SDS / TSD / TPD / UPD / SAT / TRA).
Participant rows: moderator, producer, recorder, reader, and up to three inspectors, each with preparation, inspection, and follow-up effort.
Derived fields: total effort, productivity, defect density, examination rate, inspection start and stop time (hh:mm).
Work product disposition: (A)ccept, (C)onditional, (R)einspect. Additional Comments.
Checklists used: Correctness, Completeness, Style, Rules of Construction, Multiple Views, Metrics, Technology.]
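The derived fields on that screen are straightforward ratios. Here is a sketch with made-up numbers; the exact formulas and units vary by organization (defects are often normalized per KLOC or per page, and examination rate per meeting hour):

    # Sketch: derived inspection metrics (made-up numbers; formulas vary by organization).
    lines_of_code_inspected = 400
    major_defects_found = 6
    meeting_hours = 2.0
    total_person_hours = 12.0   # preparation + inspection + follow-up, all participants

    defect_density = major_defects_found / (lines_of_code_inspected / 1000.0)   # defects per KLOC
    examination_rate = lines_of_code_inspected / meeting_hours                  # LOC examined per hour
    productivity = lines_of_code_inspected / total_person_hours                 # LOC per person-hour

    print(f"Defect density: {defect_density:.1f} per KLOC")
    print(f"Examination rate: {examination_rate:.0f} LOC/hour")
    print(f"Productivity: {productivity:.1f} LOC/person-hour")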

This screen has information about each comment.

[Screenshot: comment log for a single review (Data Entry Spreadsheet Version 2.3), keyed by Meeting Date and Work Product. Columns: component, page, line, description, defect type, category, severity, checklist origin, resolution. More rows are added automatically when the last row is used.
Defect Type: (DA) Data / (DC) Documentation / (FN) Functionality / (HF) Human Factors / (IF) Interface / (LO) Logic / (MN) Maintainability / (PF) Performance / (SN) Syntax / (ST) Standards / (OT) Other.
Resolution: (ATI) Added to Issues List / (COR) Corrected / (DEF) Deferred / (EIE) Entered In Error / (NAD) Not A Defect / (NAT) No Action Taken.
Defect Category: (M) Missing / (W) Wrong / (E) Extra.
Defect Severity: (J) Major / (N) Minor.]

This screen is read-only. It has derived statistics about an individual event.

[Screenshot: a matrix of defect types (data, documentation, functionality, human factors, interface, logic, maintainability, performance, syntax, standards, other) by category (Missing, Wrong, Extra), tallied separately for major and minor defects, with totals per row and overall.]

Pulling Strings

[Pictorial: defect LEAKAGE across Inspection 1, Inspection 2, Inspection 3. The two "strings" to pull are Examination Rate and Defect Density: inspect less of the volume; inspect more thoroughly; find fewer defects.]

Break-Even Point

[Chart: cumulative defect cost savings versus the cost of a major defect ($100 to $5,000 on the horizontal axis). Assumptions: 3 major defects per inspection; 1 inspection per day on average; $80 hourly loaded rate; 30 person-hours per inspection.]

The most venerated metrics of all time:
Cost of Delay
Cost of Quality
Cost of Confusion
Cost of Losing Customer Confidence
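Under the stated assumptions, the break-even point works out as follows; a sketch whose $800 result matches the break-even region on the chart:

    # Sketch: break-even cost of a major defect, using the assumptions on the slide.
    person_hours_per_inspection = 30
    hourly_loaded_rate = 80             # dollars
    major_defects_per_inspection = 3

    cost_per_inspection = person_hours_per_inspection * hourly_loaded_rate       # $2,400
    break_even_defect_cost = cost_per_inspection / major_defects_per_inspection  # $800

    print(f"Cost per inspection: ${cost_per_inspection:,}")
    print(f"Inspections pay off once a major defect costs more than ${break_even_defect_cost:,.0f} to fix later")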

SI Cost Avoidance & Number of Inspections

[Chart, Jan-96 through Jul-96. Left axis: Cumulative Cost Avoidance in $ Millions (0 to 12); right axis: Cumulative Number of Software Inspections Performed (100 to 1500). Average Cost Avoidance = $7,770 per inspection.]

Types of a Human Error

Skill-Based
Omissions Following Interruptions
Perceptual Confusion
Double-Capture Slips
Rule-Based
Information Overload
Redundancy of Rules
Knowledge-Based
Overconfidence
Illusory Correlation
Problems with Causality
Problems with Complexity
From James Reason, Human Error

Defect Models - Matching Types

[Pictorial: matching human errors (omissions following interruptions, slips, overconfidence, information overload, illusory correlation, problems with causality, problems with complexity) to defect types (completeness defects, correctness, recommended style, standards, reuse, presentation, metrics, portability).]
