
IT Service Management: A Guide for ITIL Foundation Exam Candidates, Second Edition
by Ernest Brewster, Richard Griffiths, Aidan Lawes and John Sansbury
BCS (c) 2012


Chapter 33: Measurement and Metrics


INTRODUCTION
While measurement and metrics are neither processes nor functions, this chapter earns its place here because it is difficult
to overestimate their importance.
Measurement is a prerequisite to improvement. Put simply, if you can't measure something, you can't improve it or show
that it has improved. The reason for this is that to make an improvement, you have to identify that something has gone
wrong or not happened and then understand why. Only then can you diagnose the root cause and apply a change to
eliminate it, preventing the same thing from happening again and thereby improving performance.
There are other reasons for measurement:

- To demonstrate that an operation or service has performed according to requirements or specification. An example of this would be the publication of a train company's performance against its service levels for the timetable (i.e. the percentage of trains that arrived on time).
- To prove to a stakeholder that they received what they commissioned and for which they might have paid (e.g. an independent audit of the performance of a third-party sales company engaged to generate new sales from a call centre).
- To compare the performance of one operation or service against another, as in a benchmark.
- To establish a baseline that represents the present situation and from which to demonstrate a variation in the future (e.g. the share price of a new company on the day it goes public and trading in its shares starts is a baseline).

These examples show that measurements are justified for reasons other than improvement. However, only when used to
create improvement can they tangibly increase value for an organisation and its customers. It is for this reason that
measurements and metrics as a topic is included within the continual service improvement part of the service lifecycle.
KEY PERFORMANCE INDICATORS AND METRICS
There is often confusion concerning the differences between key performance indicators (KPIs) and metrics. Here are
ITIL's definitions:
METRIC
Something that is measured and reported to help manage a process, IT service or activity.

KEY PERFORMANCE INDICATOR


A key performance indicator (KPI) is a metric that is used to help manage an IT service, process, plan, project or other
activity. Key performance indicators are used to measure the achievement of critical success factors. Many metrics may
be measured, but only the most important of these are defined as key performance indicators and used to actively
manage and report on the process, IT service or activity. They should be selected to ensure that efficiency,
effectiveness and cost effectiveness are all managed.
To expand slightly on these definitions, the following view of KPIs is helpful: 'KPIs are used to help an organisation define and evaluate how successful it is, typically in terms of making progress towards its long-term organisational goals.'
In summary therefore, a KPI is simply a more important metric because it references goals rather than just performance. As
such, there are typically fewer KPIs than metrics and we often find that KPIs are expressed in terms of two or more metrics.
For example, 'Number of incidents reported' and 'Number of incidents resolved by the service desk' are both metrics, but
'Percentage of incidents resolved by the service desk' expresses the latter as a proportion of the former to produce a
meaningful measure of quality or KPI for a customer.
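To make that arithmetic concrete, here is a minimal sketch of how such a KPI might be calculated from its two underlying metrics. The figures and variable names are invented for illustration and are not taken from ITIL.

```python
# Hypothetical sketch: deriving a KPI from two underlying metrics.
# The figures are invented for illustration only.

incidents_reported = 1250          # metric: 'Number of incidents reported'
incidents_resolved_by_desk = 900   # metric: 'Number of incidents resolved by the service desk'

# KPI: 'Percentage of incidents resolved by the service desk' expresses
# the second metric as a proportion of the first.
kpi = 100 * incidents_resolved_by_desk / incidents_reported
print(f"Percentage of incidents resolved by the service desk: {kpi:.1f}%")
```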
Three types of metric

ITIL recognises three distinct types of metric in support of continual service improvement:
- Technology metrics: These metrics are associated with technology components; an example is 'mean time between failures'. They are usually only used internally, to understand the capability of the components on which a service depends to remain in service.
- Process metrics: These are typically used to measure the quality, performance, value and compliance of a service management process as a way of identifying improvement opportunities. An example is 'Percentage of failed changes'. These metrics are used to ensure that the processes are conforming to documented procedures.
- Service metrics: These are used to measure and report on an end-to-end service, for example 'Percentage availability of the web service in the last month'. These are the metrics used in performance reports provided to customers.

In summary, measurements and metrics are focused on the key attributes of performance, compliance, quality and value.
Baselines
Baselines have two purposes:

- To provide a reference point against which to demonstrate future improvement.
- To measure the health of an operation or process to see if it requires attention.

Baselines can and should be established at strategic, tactical and operational levels. Initially, baseline measures may be difficult to compile and of questionable accuracy. However, the data is still valuable as a focus for improvement potential.
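As a simple illustration of the first purpose, the sketch below compares a current measurement against a recorded baseline. The figures are invented, and the metric chosen (average resolution time) is just an example.

```python
# Hypothetical sketch: using a baseline to demonstrate improvement.
# All figures are invented for illustration.

baseline_avg_resolution_hours = 9.5   # captured when the improvement initiative started
current_avg_resolution_hours = 7.2    # measured after the improvement was applied

change = current_avg_resolution_hours - baseline_avg_resolution_hours
percent_improvement = 100 * -change / baseline_avg_resolution_hours
print(f"Average resolution time changed by {change:+.1f} hours "
      f"({percent_improvement:.0f}% improvement against the baseline)")
```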
Critical success factors
Critical success factors (CSFs) identify areas that are critical to the success of the enterprise and, as such, they tend to be
high level and few in number. For example, the involvement of the business could be a CSF for IT service. CSI might have
senior management involvement or commitment as a CSF. Improved IT service quality could be a sensible CSF for IT
service management. We might see clearly defined roles and responsibilities as a CSF in any organisation.
For CSFs to be more than just a vague concept, there has to be some way of measuring them: a way of calibrating our performance against the things that are most important to our business and that define our success or failure. To achieve this, we need to break down each CSF into things that we can measure and that will help us assess how well we are doing: characteristics or numerical values that tell us whether our goals are being achieved.
Earlier, we described key performance indicators (KPIs) as important because they reference goals rather than just
performance. This discussion of CSFs shows how organisational goals, critical success factors, KPIs and metrics are
interrelated in our performance management and improvement framework.
Baselines tell us where we started from, or where we were the last time we checked, whereas goals, CSFs and KPIs tell us
where we are going and if we have arrived, or at least if we are still going in the right direction.
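One way to picture this interrelationship is as a simple hierarchy in which each CSF is supported by one or more KPIs, and each KPI is built from underlying metrics. The sketch below is purely illustrative; the names, targets and structure are assumptions, not an ITIL-defined model.

```python
# Illustrative sketch only: a goal broken down into CSFs, KPIs and metrics.
# Names and targets are invented examples, not ITIL definitions.

performance_framework = {
    "goal": "Improved IT service quality",
    "csfs": [
        {
            "csf": "Reduce the business impact of incidents",
            "kpis": [
                {
                    "kpi": "Percentage of incidents resolved within SLA target",
                    "target": 95.0,
                    "built_from_metrics": [
                        "Number of incidents resolved within SLA target",
                        "Total number of incidents resolved",
                    ],
                },
            ],
        },
    ],
}

for csf in performance_framework["csfs"]:
    for kpi in csf["kpis"]:
        print(f"CSF '{csf['csf']}' is measured by KPI '{kpi['kpi']}' "
              f"(target {kpi['target']}%)")
```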
USING METRICS AND KPIS TO IMPROVE PERFORMANCE
It is not the aim of this book to provide formal guidance on the use of measurement in IT service management. Many books
have been written on the subject and the ITIL Foundation syllabus does not require the candidate to achieve such
expertise. However, some guidance on constructing meaningful measures and reports is useful to help you establish a measurement framework that you can use for performance improvement and in whose integrity and value you can have confidence.
Measures should always encourage the correct behaviour. Measures that incentivise productivity without a corresponding quality measure usually fail to do this.
EXAMPLE
Having a target for 'Number of calls handled per service desk analyst per day' will only encourage analysts to shorten
calls, typically to the detriment of the caller. In a real-life example, one member of a sales team promoting a new product

consistently sold more products than her colleagues. However, when management monitored the calls to understand
the secret of her success, they were horrified to hear that she was promising full refunds if the buyer was in any way
unhappy. She achieved the highest sales but also created the highest refund levels, except she wasn't measured
against refunds!
Measures should be meaningful to those receiving the performance reports. It may be easy for IT to report 'mean time
between service incidents' or perhaps 'Percentage availability of service X', but the service recipient may only be interested
in or understand the number and duration of outages and, more importantly, what the service provider is doing to prevent
future outages.
Measures should be unambiguous. For example, service desks are increasingly recognising the importance of using 'First-line resolution rate' as a KPI. This is entirely appropriate, provided it is properly measured. However, there are significant variations in the way this KPI is measured that make it very hard to compare across organisations or to establish a target value. Most organisations measure first-line resolution rate by dividing the number of logged calls resolved by the service desk without escalation by the total number of logged calls, expressed as a percentage. The simplest reason for variation is that contacts to the service desk can be broadly divided into incidents, service requests (including password resets) and information requests. Incidents are much harder to fix at first line than either of the others. Therefore, by failing to separate out the incidents from the requests, the first-line resolution rate will vary according to the mix of contact types as well as the performance of the service desk, making the measure meaningless.
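As a minimal sketch of the calculation, the example below computes the first-line resolution rate separately for each contact type so that a changing mix does not distort the figure. The contact counts are invented for illustration.

```python
# Hypothetical sketch: first-line resolution rate calculated per contact type,
# so that a changing mix of incidents and requests does not distort the KPI.
# All counts are invented for illustration.

contacts = {
    # type: (logged, resolved at first line without escalation)
    "incident":            (600, 330),
    "service request":     (300, 270),
    "information request": (100,  95),
}

for contact_type, (logged, resolved_first_line) in contacts.items():
    rate = 100 * resolved_first_line / logged
    print(f"First-line resolution rate for {contact_type}s: {rate:.0f}%")

# A single blended figure hides the mix effect:
total_logged = sum(logged for logged, _ in contacts.values())
total_resolved = sum(resolved for _, resolved in contacts.values())
print(f"Blended first-line resolution rate: {100 * total_resolved / total_logged:.0f}%")
```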
A further distortion of measurement arises from variations in the period over which a measure is taken. Put simply, the longer the period, the easier it is to meet the target. For example, if the target availability of a service is expressed as 99 per cent, it is easier to meet this over a monthly period than over a weekly or daily period. Any of these periods is valid, but each organisation needs to decide which is most applicable.
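The arithmetic behind this is simple: 99 per cent availability permits downtime of 1 per cent of whatever period is measured, so the allowed outage grows with the length of the period. The short calculation below assumes a 24x7 service purely for illustration.

```python
# Illustrative calculation: allowed downtime at 99% availability
# over different measurement periods, assuming a 24x7 service.

target_availability = 0.99
periods_in_hours = {"day": 24, "week": 24 * 7, "month (30 days)": 24 * 30}

for period, hours in periods_in_hours.items():
    allowed_downtime_minutes = (1 - target_availability) * hours * 60
    print(f"Over one {period}: up to {allowed_downtime_minutes:.0f} minutes of downtime")
```

A single two-hour outage, for instance, would breach the daily and weekly allowances above but could still sit within the monthly one, which is why the choice of period matters.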
METRICS IN REPORTS
Focus on the exceptions. Where you are reporting measures internally, rather than to a customer, report the exceptions
rather than the conformance. For example, if 200 changes were implemented last month and four failed, instead of
reporting a 98 per cent success rate and patting each other on the back, report that there were four failures. These
represent the improvement opportunity and provide a focus for action.
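A minimal sketch of this approach, using invented change records, might list the failures and their causes rather than the conformance figure.

```python
# Hypothetical sketch: reporting the exceptions (failed changes) rather than
# the overall success rate. The change records are invented for illustration.

changes = [{"id": f"CHG{n:04d}", "failed": False} for n in range(1, 197)]
changes += [
    {"id": "CHG0197", "failed": True, "cause": "untested back-out plan"},
    {"id": "CHG0198", "failed": True, "cause": "clashing release window"},
    {"id": "CHG0199", "failed": True, "cause": "incomplete impact assessment"},
    {"id": "CHG0200", "failed": True, "cause": "wrong configuration baseline"},
]

failures = [c for c in changes if c["failed"]]
print(f"{len(failures)} of {len(changes)} changes failed last month:")
for change in failures:
    print(f"  {change['id']}: {change['cause']}")
```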
KPIs and metrics should therefore be constructed in a way that all parties to the agreement understand and accept as an
accurate measure of performance. When comparing or benchmarking such measures, ensure that the values against
which you compare are constructed on the same basis: they often aren't!
When constructing performance reports do not rely solely on charts or numeric tables but use both together for a full
picture. Charts are useful for showing trends and exceptions but are often manipulated to create a specific perception, for
example by starting the y-axis at a value other than zero to increase apparent variations, or by excluding exceptions to
better define a pattern. Numbers show absolute values and generally have higher integrity, but are less useful for showing
trends.
Most recipients of reports focus on exceptions, so where there is an exception, it is the responsibility of the report creator
to explain the cause. Where the exception is undesirable, the explanation should include the actions taken to prevent
future exceptions. The combination of charts, numbers and explanations in a report will always provide more value than a
report missing any one of these.
TEST QUESTIONS FOR CHAPTER 33
CSI 05, CSI 06, CSI 07, CSI 10
