
[Figure 1. The Performance Engineering Model (© Fred Nickols 2009). The diagram spans four levels of performance (Societal, Organizational, Operational, Individual) and flows from an analysis of required ("what should be & why") and realized ("what is & why") performance, through Selected Gaps, Solution Identification (what has to change and how do we change it?), Solution Configuration (obtain approval, develop a plan, build support, marshal resources), and Solution Execution, with Solution Evaluation (how is it going, how did it go, what did we learn?) linked by two-way arrows to the three solution boxes.]

Fred Nickols | nickols@att.net June 25 2009 | 1219 EDT


A Performance Engineering Model
The aim of this paper is to briefly describe the Performance Engineering Model shown in Figure 1. First comes some brief background, then a
description of the model.

Background
The model grew out of a request from a young woman who was trying to establish a performance consulting group in her organization. She
thought the model she found on the ISPI web site at the time was too complicated for her purposes and she wanted something simpler. What I
worked out for her was the model in Figure 1, minus the four levels of performance box on the left-hand side of the model. That box was added
during the course of a discussion among some ISPI members regarding ISPI’s future directions. Those discussions also led to some slight wording
changes (e.g., Selected Gaps instead of Gap and the notions of approval and lessons learned). One person suggested that the words outside the
boxes would make a good “elevator speech” and you will find such a speech at the end of this paper.

Description
The Performance Engineering Model in Figure 1 begins with an assessment of required and realized performance, what some people refer to as
“what should be” and “what is.” This assessment or analysis can be undertaken at any given level of performance (individual, operational,
organizational or societal). It can also encompass more than one level (e.g., individual and operational) or it can encompass all four levels. It can
also begin at the bottom and work upward or it can begin at the top and work its way down. It can even begin in the middle and fan out in both
directions. Decisions regarding how many levels of performance to include, where to start, and how to proceed are affected by many factors; in
particular, the restraints and constraints under which the analysis is being conducted. It is hoped that, on as many occasions as possible, the
analysis encompasses all levels and that actions and results obtained at lower levels contribute to results at higher levels, especially with respect
to beneficial societal impact. In all cases and at all levels, the analyses of required and realized performance are expressed in terms of results,
not conditions or resources or activities.

With the assessment or analysis of required and realized performance in hand, gaps between the two can be identified, some of which can be
selected for resolution. The world is full of gaps or discrepancies between the way things are and the way we would like them to be. Not all
gaps are worth bothering with and there aren’t enough resources to address all the gaps we might consider important. We must be selective.
Some of the criteria to consider in selecting gaps for resolution include the following:

• The cost of the gap itself
• The payoffs of closing the gap
• The costs of closing the gap
• The likelihood of successfully closing the gap
• The time it will take to close the gap
• The kinds of resources required to close the gap
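The selection criteria above can be combined into a rough priority score. The sketch below is a hypothetical illustration, not part of the model itself; the criterion names, weights, and example gaps are all invented for the purpose of the example.

```python
# Hypothetical sketch: scoring candidate gaps against the selection
# criteria named above. All figures and names are illustrative only.
from dataclasses import dataclass

@dataclass
class Gap:
    name: str
    cost_of_gap: float     # ongoing cost of leaving the gap open
    payoff: float          # payoffs expected from closing the gap
    cost_to_close: float   # costs of closing the gap
    likelihood: float      # estimated probability of success (0..1)
    time_months: float     # time it will take to close the gap

def score(gap: Gap) -> float:
    """Rough priority: expected net benefit, discounted by time to close."""
    expected_benefit = gap.likelihood * (gap.cost_of_gap + gap.payoff)
    return (expected_benefit - gap.cost_to_close) / max(gap.time_months, 1.0)

gaps = [
    Gap("order-entry errors", cost_of_gap=50_000, payoff=20_000,
        cost_to_close=30_000, likelihood=0.8, time_months=3),
    Gap("new-market launch", cost_of_gap=0, payoff=200_000,
        cost_to_close=150_000, likelihood=0.4, time_months=12),
]
selected = max(gaps, key=score)  # the gap selected for resolution
```

Any such formula is only a starting point; in practice the weighting of these criteria is a judgment call made with stakeholders, and some criteria (e.g., kinds of resources required) resist being reduced to a number at all.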

Once a gap has been selected for resolution, attention turns to engineering the solution, a course of action that will close the gap. The first step
is to identify the solution. In many cases, closing the gap requires changing some aspects of the situation in which it manifests itself. Two
questions are central here: (1) what has to change and (2) how can those things be changed? This is almost always the case when the aim is to
improve upon existing arrangements. On occasion, a gap in results occurs because brand-new goals or targets have been set. There are no
existing arrangements. Here, the task at hand is to design (and subsequently build) a performance system that will yield the desired results.

Identifying a solution is only part of the engineering task. That solution must also be configured. Approval must be obtained; a plan must be
developed; resources must be marshaled and support must be garnered. All this is by way of preparation for implementation or execution.

Assuming everything is ready (or at least as ready as it can be), attention turns to execution, to carrying out the plan, to implementing the
identified solution.

The model shows evaluation as what some might call the last step. However, evaluation is an ongoing process. The two-way arrows between
evaluation and the three solution-related boxes indicate that this part of the process is iterative and that subsequent steps can “feed back” and
affect earlier ones. This is reality in situations where circumstances and conditions are dynamic, fluid and ever-changing. So, all along the way,
evaluation is being conducted. Early on, this is for the purpose of keeping track of how things are going and, where necessary, adjusting and
adapting so as to keep things on track. Once the effort is completed, the focus of evaluation turns to how things turned out and what was learned.
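The execute-and-evaluate cycle described above can be sketched as a simple feedback loop. This is a minimal illustration of the structure only; the function names are placeholders invented here, not part of the model.

```python
# Minimal sketch of the iterative execute/evaluate loop described above.
# All names here are illustrative placeholders, not from the paper.
def engineer_performance(plan, execute_step, evaluate, adjust):
    """Carry out a plan step by step, evaluating along the way and
    feeding results back to adjust the remaining steps."""
    results = []
    remaining = list(plan)
    while remaining:
        results.append(execute_step(remaining.pop(0)))
        if not evaluate(results):                    # "how is it going?"
            remaining = adjust(remaining, results)   # feed back, adapt
    return results                                   # input to "how did it go?"

# Toy usage: each step doubles its input; evaluation always passes.
out = engineer_performance([1, 2, 3], lambda s: s * 2,
                           lambda r: True, lambda rem, r: rem)
```

The point of the sketch is the placement of `evaluate` inside the loop rather than after it: evaluation runs alongside execution and can reshape the remaining work, which is what the two-way arrows in Figure 1 depict.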

Is That All There Is?


Yep. Certainly more detail could be provided but it would necessarily take the form of this author’s experiences and preferences. Much is
known about how to carry out the steps and stages of the model shown in Figure 1. Indeed, entire books have been written about the subject.
Moreover, there are many capable practitioners who have their own notions about how these stages should be carried out. There is, then,
available to the interested reader a wealth of information, tools, methods and techniques for engineering performance. My goal in this paper is
simply to frame the practice of performance engineering in a way that will provide a common mental model on which many practitioners might
agree and still leave room for practitioners to practice their craft and further develop and refine the practice.



An Accompanying Elevator Speech

What do we do? We engineer performance – and we do it at and across four levels: Societal, Organizational, Operational and Individual.

We start by determining required performance and we compare that with current performance so as to identify any gaps worth pursuing.

From there, we figure out what to change and how to change it – what to do and how to do it. Next, it’s a matter of getting approval, putting
together a plan, building support and marshaling resources.

Then comes execution – we do what we figured out earlier. As we proceed, we evaluate – we keep track of how things are going and, if
necessary, adjust our course of action so as to keep things on track.

When all is done, we evaluate some more – this time to determine how well we succeeded and what was learned.

Acknowledgements

The model shown in this paper, although primarily of my making, owes much to the comments, suggestions and work of others. Clearly,
the four levels of performance reflect the “Mega” work of Roger Kaufman. Mariano Bernardez and Carol Panza both stressed the importance of
impact analysis as part of selecting gaps for resolution. Don Tosti pointed out that approval is a necessary ingredient and that evaluation should
also point to lessons learned. And Darlene van Tiem drew attention to the potential of the words above and below the boxes to serve as an
elevator speech. Thanks to all who contributed.
