AAMCoG Assets Performance Measure

Public Sector Asset


Performance
Measurement and
Reporting

© 2008 CIEAM

The Australian Asset Management Collaborative Group (AAMCoG)


The Australian Asset Management Collaborative Group (AAMCoG) was formed in August
2006 by the CRC for Integrated Engineering Asset Management (CIEAM). AAMCoG is a
collaboration of several of Australia’s peak bodies interested in work programmes in asset
management. AAMCoG's mission is to "facilitate collaboration between interested organisations to promote and enhance asset management for Australia".

Members of AAMCoG aim to:


• Collaborate nationally on asset management strategies between all asset management groups
• Coordinate transfer of technology and knowledge sharing of asset management R&D
• Promote skills development in asset management
• Host the annual National Forum for Asset Management
• Act as a communication channel between member bodies
• Inform asset owners/custodians of the critical aspects of whole-of-life asset management

For further details, please refer to the AAMCoG Website www.aamcog.com

Acknowledgments
The CRC for Integrated Engineering Asset Management (CIEAM) would like to
acknowledge the financial support from the Commonwealth Government’s Cooperative
Research Centres Programme and the contributions from our industry, government and
university participants.
CIEAM would also like to acknowledge the following contributions to this project:
Dr. Fred Stapelberg of CIEAM
Mr Graham Carter of the APCC
This project was undertaken under the guidance of Professor Joseph Mathew, Chair of AAMCoG, and Professor Kerry Brown, Executive Officer, AAMCoG.

Confidentiality
In accordance with Australian freedom of information legislation, all information collected as
part of this study will be retained for seven years in a safe and secure environment. Paper-
based data will be stored in a locked filing cabinet, and electronic data will be encrypted and
stored at CIEAM Head Office, Brisbane.

Disclaimer
AAMCoG members make use of this report or any information provided by CIEAM at their
own risk. CIEAM will not be responsible for the results of any actions taken by members or
third parties on the basis of the information in this report, or other information provided, nor
for any errors or omissions that may be contained in this report. CIEAM expressly disclaims
any liability or responsibility to any person in respect of anything done or omitted to be done
by any person in reliance on this report or any information provided.

Enquiries
Jane Davis, Communication Officer
CRC for Integrated Engineering Asset Management
Level 7, O Block, QUT Gardens Point campus
GPO Box 2434
BRISBANE QLD 4001
Phone: +617 3138 1471
Fax: +617 3138 4459
Email: jane.davis@cieam.com


CONTENTS

Page:
INTRODUCTION 1

1. Primary Assets Performance Measures 5

2. Level of Service in Infrastructure Assets 16

3. Primary Assets Key Performance Indicators 26

4. Principles of Assets Performance Measurement 37

5. Infrastructure Assets Performance Specifications 42

6. Infrastructure Assets Performance Standards 59

7. Assessing and Evaluating Assets Performance 63

8. Assets Performance Measurement Framework 85

9. Establishing Assets Performance Benchmarks 92

10. Assets Performance Measurement Reporting 114

REFERENCES 116


Introduction
Profit is the business driver in the private sector, and 'return on investment' is the key
measure of asset performance: the ratio of financial gains or losses on a capital asset
investment to the amount of money invested. The public sector, however, operates on a
different model, and the key driver for the ownership and management of assets is the
provision of 'service' to the community. Unlike the private sector, public sector
organisations do not normally have an associated income stream and are not intended to
generate a profit. The private sector is also affected by
different asset management imperatives associated with taxation and depreciation of
assets. The application of private sector asset performance measures that are based on
the premise of income and profit, for example Return on Assets, Revenue Ratio,
Revenue per m² etc., is therefore not appropriate to the management of public sector
assets. Consequently there is a need to establish a suite of performance measures for
the effective and efficient management of public sector assets.

The basic principles of asset management represent current thinking among national
asset owners as well as professional organisations representing asset owners. The
following includes some of the important asset management principles that have been
developed by the APCC to enable asset management to be integrated into the
mainstream of Government and Government Agency business planning:
• Asset management within Government Agencies must be consistent with the
concept of whole-of-government policy frameworks.
• Asset management decisions should meet the needs of the present without
compromising the needs of future generations.
• The responsibility for asset management decisions should reside with the Agencies
that control the assets.
• The strategic planning and management of assets are key corporate activities, to
be considered along with the strategic planning for other resources such as
Human Resources and Information Technology.
• Assets should exist to support service delivery. Before deciding to acquire new
assets, Agencies must consider all relevant factors including non-asset solutions,
full life cycle costing, risk analysis and the better use of existing assets.
• The cost of providing, operating and maintaining assets must be reflected in the
relevant Agency’s budget sheets.
• Government Agencies must report on the usage, maintenance and performance
of their assets.

Asset performance measurement goes hand in hand with asset management. The
application of asset management principles and practices requires adopting a
performance-based approach to asset management and resource allocation.
Performance-based approaches can strengthen both external accountability and the
effectiveness of internal decision-making. Performance measurement describes what
is accomplished. When a performance measurement system is established, many
decisions have to be made such as what to record, and when and how to gather
information. It is essential to know what guidelines can be used to make these
decisions and what the characteristics of good performance measurement systems are.


Good performance measures can demonstrate results during the time of service, and are
relevant, accurate, and feasible. They identify strengths and shortcomings, and measure
outcomes that are valued by stakeholders, including decision-makers. Asset performance
measures are specifically used to assess an asset's financial performance, function,
utilisation, and physical condition.
• Financial Performance: Are the asset's operating costs similar to those for other
comparable assets? (Use benchmarking to establish this.) Are the energy, cleaning and
maintenance costs reasonable? Are user charges being made, and how do they relate to
the total operating costs of the asset (including cost of capital)?
• Utilisation: How intensively is the asset used? Could it be used more productively
by extending its working hours, or by accommodating additional functions?
• Function: How well suited is the asset to the activities and functions it supports?
• Physical Condition: Is the asset adequately maintained? Is there a maintenance
backlog that requires attention? Are major replacements or refurbishments likely
to be required during the planning period?

The public sector considers the continuous inter-relationship between an asset's
capacity and utilisation, budget and actual expenditure, possible and actual condition,
and the asset's replacement cost and depreciated value as criteria for integrated
performance measurement and benchmarking, where (APCC, 2001):
• Asset Costs - budget versus actual expenditure.
• Asset Utilisation - capacity versus utilisation.
• Asset Value - replacement cost versus depreciated value.
• Asset Condition - possible condition versus actual condition.
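As an illustration only, the four paired criteria above can each be expressed as a simple ratio or gap. The field names and figures in this sketch are hypothetical, not drawn from the APCC source:

```python
# Illustrative sketch (not from the report): the four APCC paired
# criteria expressed as simple measures for one hypothetical asset.

def cost_variance(budget, actual):
    """Asset Costs: actual expenditure relative to budget (1.0 = on budget)."""
    return actual / budget

def utilisation_rate(capacity, utilisation):
    """Asset Utilisation: share of available capacity actually used."""
    return utilisation / capacity

def value_ratio(replacement_cost, depreciated_value):
    """Asset Value: remaining depreciated value as a share of replacement cost."""
    return depreciated_value / replacement_cost

def condition_gap(possible, actual):
    """Asset Condition: shortfall of actual condition against possible condition."""
    return possible - actual

# Hypothetical community hall
asset = dict(budget=200_000, actual=230_000,
             capacity=60.0, utilisation=42.0,        # opening hours per week
             replacement_cost=4_000_000, depreciated_value=2_600_000,
             possible_condition=5, actual_condition=3)  # graded 1 (poor) to 5 (as new)

print(round(cost_variance(asset["budget"], asset["actual"]), 2))            # 1.15
print(round(utilisation_rate(asset["capacity"], asset["utilisation"]), 2))  # 0.7
print(round(value_ratio(asset["replacement_cost"],
                        asset["depreciated_value"]), 2))                    # 0.65
print(condition_gap(asset["possible_condition"], asset["actual_condition"]))  # 2
```

Tracked together over time, a cost variance above 1.0 or a widening condition gap is the kind of signal that would trigger the review and evaluation activities described later in this report.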

It is the combination of these factors that will provide the most appropriate asset
performance measures, particularly in determining public sector service delivery.

The concept of service delivery is the foundation of asset management performance.
There are two performance criteria related to this concept of service delivery,
specifically:
• Level of Service (LOS): The Level of Service is an indicator of the extent or
degree of service provided by an asset, based on and related to the operational and
physical characteristics of the asset. Level of service indicates the capacity per
unit of demand for an asset, particularly for public infrastructure assets.
• Standard of Service (SOS): The Standard of Service (SOS) states in measurable
terms, how an asset will perform, including a suitable minimum condition grade in
line with the impact of asset failure.

Traditionally, performance specifications have given a preferred design solution to
define how a service is to be delivered, or an item is to be fabricated or constructed.
This has often been based on empirical evidence - ‘it worked before so it will work
again’. This method of specifying is inherently conservative with little incentive to
explore options for potential improvements. There are many different views on what
constitutes a performance specification.


Performance specifications that avoid prescriptive methods and focus on the final
product or service can be described either in terms of the delivery of service or in
terms of the benefits delivered – output and outcome driven measures, where output
measures define the end product or service, and outcome measures define the benefits
that should be delivered. This will usually take the form of the Level of Service.

Organisations set performance standards, which provide baselines for performance
expectations, compliance and management. They are the guidelines that underpin
monitoring, measuring, reviewing and providing feedback on assets performance.
Performance standards can be stated in terms of quantity and quality. Both can be
measured and monitored at intervals, and for outcomes. Performance standards related
to quantity specify what the asset has to achieve and when. If necessary, quantity
performance standards are monitored with incremental checks. Ultimately the results
must be measurable in quantifiable terms, to make a performance judgement
regarding whether the specified performance standard has been achieved. A
performance standard may be expressed as a competency requirement whereby a
competency checklist can be used to measure this in quantifiable terms. Performance
standards, either in terms of quantity or quality, must be able to provide verifiable
evidence which can be reviewed to identify performance achievements and
performance gaps. A performance gap is the difference between intended
performance and actual performance. Standards of conformity need to specifically
relate the asset’s performance to a standard, just as the standard must relate to the
nature of the asset’s performance.

A new paradigm of performance measure has been adopted by many asset owner
organisations. This is based on identifying what the business does in terms of levels
of service and attaching Key Performance Indicators (KPIs) to those services. The
recording and analysis of KPIs should significantly contribute to the achievement of
the organisation’s business goals. Key Performance Indicators determine how well
services are provided, i.e. service delivery performance, and how much time is taken
in addressing and correcting performance gaps between intended and actual
performance. Key Performance Indicators are those critical performance measures
which ultimately determine assets serviceability and stakeholder value.

Most asset management contracts in effect include the identification, planning,
budgeting, costing, scheduling, and delivery of routine and preventive maintenance,
as well as renewal of assets, or asset rehabilitation in the case of infrastructure assets,
and many ancillary services such as environmental management. Performance-based
contracts require Level of Service (LOS) requirements to be identified for each of the
assets being maintained. In order to be used successfully on this type of contract, LOS
requirements, which may also be called performance measures, must identify what is
being measured; how it is being defined; what the extent of measure is; what the
accepted tolerances are; and what the expected response time would be.
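The elements an LOS requirement must capture can be sketched as a simple record. The field names and example values below are hypothetical, intended only to show the shape such a requirement might take:

```python
# Illustrative sketch only: one way to record a Level of Service (LOS)
# requirement for a performance-based maintenance contract. Field names
# and example values are hypothetical, not drawn from the report.
from dataclasses import dataclass

@dataclass
class LOSRequirement:
    what: str           # what is being measured
    definition: str     # how the measure is defined
    extent: str         # extent of measure (where / how much is inspected)
    tolerance: str      # accepted tolerance before a defect is recorded
    response_time: str  # expected time to correct a recorded defect

pothole_los = LOSRequirement(
    what="Pavement surface defects (potholes)",
    definition="Surface depression deeper than 50 mm or wider than 300 mm",
    extent="All sealed carriageway lanes, inspected monthly",
    tolerance="No more than 1 defect per lane-kilometre",
    response_time="Make safe within 48 hours; permanent repair within 14 days",
)

print(pothole_los.what)
```

One such record would be defined for each asset (or asset class) covered by the contract, giving both parties an unambiguous basis for measuring compliance.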

The importance of the link between historical asset performance and the performance
standards established for outcome-based contracts cannot be overemphasised. The
performance standards must reasonably account for the condition of the assets at the
start of the contract, and should incorporate the criteria that are most important to the
agency when considering assets performance.


It is also important for the agency to be able to measure the performance standards
over time as a means of verifying the performance of the contractor. Performance
measures must be established for each asset included in the contract. As these
contracts become more prevalent in public asset-owner agencies, asset management
contractors will need to evaluate the usefulness of traditional assets condition surveys
for setting performance standards for this type of contract and for monitoring the
performance of the contractor over time.

An Assets Performance Measurement Framework provides a structure for performance
management, capable of consistent implementation by all service organisations. At the
initial stage it defines only the minimum number of generic
performance measures for performance assessment and reporting. Service
organisations are therefore required to review their performance management
requirements and use the framework to further develop their own asset performance
measures as appropriate. Monitoring performance is integral to the assets
management process and typically involves setting up procedures and assigning
resources to measure performance over time; monitoring performance; verifying that
targets and supporting standards can be measured and are relevant; and reviewing the
cost-effectiveness of the monitoring process. Asset management information systems
should support the performance monitoring process.

Performance reporting is an essential step in the management of asset and service
performance, because it provides information on the performance of the assets and the
asset management programs; on the physical, functional and financial performance of
the asset; on the achievement of planned program objectives, targets and budgets;
allows asset level performance information to be evaluated at the asset and service
delivery level; and for meeting statutory reporting requirements. In addition, uniform
and consistent reporting formats for financial sustainability performance measures
provide a high level summary of both operating and capital investment activities. The
formats incorporate all of the recommended key financial indicators. The formats also
facilitate meaningful comparisons of each council’s finances, and it is intended that
annual budgets, reports on financial outcomes, and long-term financial plans be
summarised on the same basis.


1. Primary Assets
Performance Measures
Description of Assets
Assets primarily take two distinct forms. The basic distinction made is between
financial assets and non-financial assets. Non-financial assets may have a physical or
tangible form such as buildings, roads, machinery and mobile equipment. They can
also be intangible such as computer software as well as legally enforceable rights
associated with copyright and patents. They can also be a combination of both
tangible and intangible, particularly where the elements operate as parts of the whole.
A common understanding of an asset is that it is an investment of enduring value. In
the public sector it is perhaps often more important to appreciate the non-monetary
aspects of an asset’s value. The term ‘service potential’ is used to describe the utility
of an asset in meeting program objectives and is a useful concept to employ where the
asset does not generate income (ANAO, 1996).
Non-financial or physical assets can be categorised into infrastructure and industrial
assets in both the public and private sectors. Infrastructure assets refer to roads and
bridges; storm water drains; municipal buildings such as libraries and community
halls; parks, reserves and playgrounds; and recreation facilities, including sporting
complexes and swimming pools (Department for Victorian Communities, 2003).
Primary Infrastructure assets are typically large, interconnected networks or portfolios
of composite assets, comprising components and sub-components that are usually
renewed or replaced individually to continue to provide the required level of service
from the network. They are generally long-lived, fixed in place and often have no
market value. Primary infrastructure assets include the built environment such as
major buildings, office blocks, roads, bridges and harbours, and facilities and utilities
related to water, sewage, power etc., as well as assets that relate to military facilities.
Industrial assets include all plant and equipment that industry uses for manufacturing,
mining, processing etc. and for producing a product.

Infrastructure Assets Performance and its Measurement


Infrastructure assets provide a broad range of services at national, state, and local
levels. Their performance is defined by the degree to which the assets serve multilevel
community objectives. Identifying these objectives and assessing and improving
infrastructure performance occur through an essentially political process involving
multiple stakeholders. Assets performance measurement, a technical component of
the broader task of assets performance assessment, is an essential step in effective
decision making aimed at achieving improved performance of valuable assets.
Despite the importance of performance measurement, current practices of measuring
comprehensive assets management performance are generally inadequate. Most
current measurement efforts are undertaken because they are mandated by Federal or
State governments, or as an ad hoc response to a perceived problem or the demands of
an impending short-term capital assets project (USNRC, 1996).


No adequate, single measure of assets performance has been identified, nor should
there be an expectation that one will emerge. Infrastructure assets are built and
operated to meet basic but varied and complex community needs. Their performance
must therefore be measured in the context of community objectives and the
multiplicity of stakeholders who use and are affected by infrastructure assets.
Performance should be assessed on the basis of multiple measures chosen to reflect
community objectives, which may conflict. Some performance measures are likely to
be location and situation specific, but others have broad relevance. Infrastructure
assets performance benchmarks based on broad experience can be developed as
helpful guides for decision makers. The specific measures that communities use to
characterise infrastructure assets performance may often be grouped into three broad
categories: effectiveness, reliability, and cost. Each of these categories is in itself
multi-dimensional, and the measures used will depend on the location and nature of
the problem to be decided.

Asset Performance Measures


The Role of Performance Measures in Asset Management:
Asset performance measurement goes hand in hand with asset management. The
application of asset management principles and practices requires adopting a
performance-based approach to asset management and resource allocation.
Performance-based approaches can strengthen both external accountability and the
effectiveness of internal decision-making. External accountability is improved by
using performance measures to provide a clear and compelling rationale for budget
requests and to regularly communicate progress in achievement of stated policy and
programmatic objectives. Internal effectiveness is enhanced by using performance
measures to provide a technical basis for decisions and a greater degree of focus,
consistency, and alignment in decision-making and operational management across
the organisation (AASHTO, 2005).
Selecting Suitable Performance Measures:
Rather than recommending a single set of performance measures suitable for asset
management, the U.S. National Cooperative Highway Research Program (NCHRP) Project 20-60 defined criteria for
what constitutes a useful performance measure within the context of infrastructure
asset management, and provided examples of suitable measures and how they can be
applied. A key conclusion of this project is that while the choice of specific
performance measures is important and can influence what gets done, the ways in
which performance measures are used and integrated into decision-making processes
deserve equal if not greater attention.
While Government Agencies face similar challenges of how to make the best use of
available resources, there are significant variations across agencies with respect to
policy objectives, organisational culture, management styles, decision-making
processes, staff capabilities, data and performance measures already in place. Each of
these factors has a bearing on what kinds of performance measurement approaches
will be most successful in a given agency. Simply stated, a performance measure is
suitable for asset management if it helps the organisation make better decisions about
where to invest its resources, and if actions taken by the organisation can influence
changes in the value of the measure.

In addition to these characteristics, the following criteria should be considered:



• Policy-sensitive – Does the measure reflect stated policy objectives?


• Easy to communicate – Is the measure meaningful to affected stakeholders,
decision-makers, implementers and customers?
• Feasible – Can the measure be monitored with sufficient accuracy and
reliability given available resources?
• Predictable – Is it feasible to predict the value of the measure under different
scenarios reflecting resource allocations or actions taken?

Getting the Most out of Performance Measurement:


Realising the benefits of performance measurement with respect to both external
accountability and internal effectiveness depends on having a well-designed set of
measures, buy-in from staff at all levels, and attention to integration of the measures
into decision processes and implementing actions. Key considerations include:
• Balance – Do the performance measures collectively reflect a broad and
balanced set of perspectives – do they cover key policy goals and objectives,
and do they take into account both customer and asset owner viewpoints?
• Long-term View – Are asset preservation measures used in a way that leads the
asset owner to making good decisions for the long term – doing the right thing
at the right time in order to minimise life-cycle costs?
• Consistency and Alignment – Are measures for different geographic areas or
parts of the assets network consistent to allow for comparison and
aggregation? Are measures for different types of physical assets consistent to
support tradeoffs in investments across asset classes?
• Resource Allocation – Are high-level resource allocation decisions being
made based on the same performance measures (or derivations thereof) as
those used by technical staff to identify and prioritise needs?
• Use for Decision-Making – Are there well defined procedures for using
performance measures within planning, budgeting, prioritisation, operations
and management, and annual reporting processes?
• Target Setting – Are performance targets established in conjunction with an
analysis of required resources to meet those targets? Quantitative analysis of
the relationship between investment levels and likely future performance has
proven to be a powerful tool for getting agreement on resource allocations.
• Relating Outcome Measures to Output or Delivery Measures – Are output or
program delivery measures used to provide early indications of progress
towards desired outcomes, and consistency between budget allocations and
stated system performance targets?
• Effective Data and Reporting Procedures and Communication Devices – Are
procedures in place to ensure that performance information is collected
efficiently and accurately, and that results are widely communicated to their
intended audiences in a useful and usable manner?


Assets Performance
Protecting service delivery potential and addressing health and safety concerns are
priorities when making decisions about asset use and maintenance. It is very
important, therefore, that asset performance be appropriately reviewed and evaluated
to verify that required outcomes are being achieved. The results of any performance
assessment need to be reported to management to identify any actions to be taken, and
to comply with ongoing reporting requirements, as well as with those forming part of
the corporate, business and asset planning processes. In addition to observing the
reporting requirements, the entity shall comply with the requirements of any
legislation that may apply specifically to its operations (Victorian Government, 1995).
Evaluating asset performance:
All assets currently being used to deliver the service under consideration need to be
identified and registered. How effectively these assets support service requirements
also has to be determined. As part of this process, there are a number of performance
measures used to assess asset performance, specifically the asset’s financial
performance, its function, utilisation, and its physical condition.
Financial Performance:
Are the asset’s operating costs similar to operating costs for other comparable assets?
(Use benchmarking to establish this.) Are the energy, cleaning and maintenance costs
reasonable? Are user charges being made, and how do they relate to the total
operating costs of the asset (including the cost of capital)?
Function:
How well suited is the asset to the activities and functions it supports?
Utilisation:
How intensively is the asset used? Could it be used more productively by extending
its working hours, or by accommodating additional functions?
Physical Condition:
Is the asset adequately maintained? Is there a maintenance backlog that requires
attention? Are major replacements or refurbishments likely to be required during the
planning period?

Assets Financial Performance


The financial performance of an asset must be evaluated to determine whether or not
it is providing economically viable services. To do this, the entity needs to monitor
and assess operating expenses and current and projected cash flows, including capital
expenditures. This information is then used to determine the current and projected
economic return of the asset or portfolio. Discounted Cash Flow analysis can be used
to provide a measure of the Net Present Value and the internal rate of return for
assets. The Department of Treasury and Finance sets a level for charge on capital and
hurdle rates against which return on investment is measured. When assessing project
viability, these rates should be used as one of the criteria set for ranking proposals in
priority order. Another important aspect of an asset's financial performance which
must be assessed is the maintenance of equity. This measure provides a basis for
evaluating the performance of both assets and entities. It is also a major consideration
in establishing approaches to service pricing and revenue.
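The Discounted Cash Flow analysis described above can be sketched directly. The cash flows and discount rate in this example are hypothetical, and the IRR is found here with a simple bisection search rather than any particular Treasury method:

```python
# Illustrative DCF sketch (hypothetical cash flows and rates, not from
# the report): Net Present Value and a bisection search for the
# Internal Rate of Return.

def npv(rate, cash_flows):
    """NPV of cash_flows[t] received at the end of period t (t = 0, 1, ...)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """Rate at which NPV = 0, by bisection (assumes one sign change in [lo, hi])."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid  # root lies in the lower half
        else:
            lo = mid  # root lies in the upper half
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Hypothetical asset: $1m outlay, then five years of net service revenue
flows = [-1_000_000, 260_000, 260_000, 260_000, 260_000, 260_000]
print(round(npv(0.08, flows)))  # NPV at an 8% discount rate
print(round(irr(flows), 4))     # internal rate of return
```

In the process described above, the computed IRR would then be compared against the hurdle rate set by the Department of Treasury and Finance when ranking proposals.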

Assets Function
The most fundamental feature of an asset is its function, which determines its strategic
importance. The functionality of an asset is a measure of the effectiveness of the asset
in supporting the activities to be carried out. To monitor and assess an asset’s
function, the entity needs to determine (Victorian Government, 1995):
• the role that the asset plays in achieving service delivery outcomes;
• the functional characteristics required of the asset to support the specified
activities (for example, the functional requirements for constructed assets).
The functionality of assets should be regularly reviewed. This will enable any
significant impacts on services to be identified. It will also allow timely changes to be
made to improve both service delivery and functional standards. Furthermore, the
results of regular asset functionality reviews are used in the formulation of
organisational asset strategies.

Assets Utilisation
Asset utilisation is a measure of how intensively an asset is being used to meet the
entity's service delivery objectives, in relation to the asset's potential capacity. To
assess utilisation, criteria and benchmarks appropriate to the services being delivered
and to the class of asset being considered firstly need to be established. The criteria
should have regard to (Victorian Government, 1995):
¾ the value of the asset’s unit of service potential that is being used relative
to the units of service being delivered;
¾ the physical measures of asset capacity relative to the units of service
being delivered;
¾ the use being made of the asset relative to the optimal availability for the
type of asset.
The utilisation criteria should be based, wherever appropriate, on best practice data as
well as on the results of analyses undertaken in the private and public sectors.
Under-utilised assets should be identified, and the reasons for this examined. It may
be, for example, that the asset is no longer effective in performing the activities
required of it or that it is in less than optimum condition. It may also be that the need
for the services it delivers or supports has reduced. The following examples illustrate
some of the reasons for under-utilisation:
• physical constraints, such as poor lighting for night-time use;
• technological obsolescence;
• management constraints.
Action should be taken either to improve the asset’s utilisation or to redeploy it
(provided that service delivery needs can be met by alternative means). Where asset
utilisation is low, entities should consider whether the cost of holding the asset
exceeds the cost of transferring the services it delivers, and whether there is a more
economical way of delivering the services. Alternative or additional uses of assets
should also be considered. The utilisation of each asset should be reviewed annually.

Asset Utilisation Defined:


Before addressing the components that comprise an asset utilisation program, it is
important to establish a common point of reference regarding what exactly is meant
by asset utilisation. The concept behind asset utilisation is often disguised behind
terms such as uptime, maximum equipment uptime, minimum equipment downtime,
and maximum equipment capacity. But regardless of how it is referenced, the purpose
behind this measurement is the same - to measure the difference between what an
asset is capable of delivering and what it actually delivers in terms of output, and with
this data to calculate the ‘opportunity gap’. Properly measured and understood, asset
utilisation, or more correctly, the opportunity gap, can be used as a metric for
focusing on asset performance. While there is no accepted industry definition for asset
utilisation, the definitions used by most companies are similar to the following:
‘Asset utilisation is the ratio of actual output to the output that could be achieved at
maximum capacity’. From this definition, it is clear that at the most basic level,
implementing an asset utilisation program requires the capture of only two types of
data - actual output and maximum capacity. With this information, it is then possible
to calculate asset utilisation and the opportunity gap using the following equations
(ICPPR, 1998):
AU = (actual output / maximum capacity) × 100
Opportunity Gap = maximum capacity − actual output
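These two equations can be sketched directly in code; the plant figures used below are hypothetical illustrations, not from the source:

```python
def asset_utilisation(actual_output: float, maximum_capacity: float) -> float:
    """Asset utilisation as a percentage of maximum capacity (ICPPR, 1998)."""
    return actual_output / maximum_capacity * 100

def opportunity_gap(actual_output: float, maximum_capacity: float) -> float:
    """Output forgone relative to maximum capacity, in output units."""
    return maximum_capacity - actual_output

# Hypothetical plant: capacity 1200 units/day, actual 900 units/day.
print(asset_utilisation(900, 1200))  # 75.0 (% utilisation)
print(opportunity_gap(900, 1200))    # 300 (units/day of unrealised output)
```

Captured consistently, these two numbers per asset are all that a basic asset utilisation program requires.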
Asset Utilisation Models:
For asset utilisation data to be useful to multiple levels of an organisation, it is
necessary to provide the ability to summarize, or roll-up the data captured by the
program to various hierarchical levels. This need drives the requirement to create
specific levels of measurement. The reason for this is that the level of detail required
by an asset manager is different from the level of detail needed by an engineer who is
challenged with solving the problem. The following provides an overview of the
different levels that an asset utilisation model may contain.
Levels of Measurement:
Industry - At a corporate level, management of diversified companies may wish to
develop asset utilisation models by industry segment such as: agriculture, chemical,
food, petroleum or power. This level can give them insight into how each of the
segments is operating, and quantify how significant the opportunities are within each
respective segment.
Business - A business level asset utilisation model is another corporate level model
where the names used to segment the businesses are unique to each company.
Site or Division - A site or division based model sorts asset utilisation measurements
based on geography.
Plant - The plant level is the most fundamental level of any asset utilisation model
and is typically the level at which most asset utilisation programs begin.
Unit Operations or Process Areas - Most plants are sub-divided into unit operations
or process areas for the purpose of assigning resources.
System - By system, reference in this case is to a process system, such as that which is
developed when designing a new plant, or found in manuals used for training.
Assembly – The lowest level in the asset utilisation model hierarchy is the assembly
level at which the measure of utilisation can logically relate to output and capacity.
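The roll-up described above can be sketched as a simple aggregation over the hierarchy. The level names and output figures below are hypothetical illustrations, not from the source:

```python
# Each record tags an assembly-level measurement with its place in the
# hierarchy; utilisation rolls up by summing output and capacity per level.
records = [
    # (business, site, plant, actual_output, maximum_capacity) - hypothetical
    ("chemical", "site-a", "plant-1", 800, 1000),
    ("chemical", "site-a", "plant-2", 450, 900),
    ("power",    "site-b", "plant-3", 700, 700),
]

def rollup(records, level):
    """Utilisation (%) summarised at a hierarchy level (0=business, 1=site, 2=plant)."""
    totals = {}
    for rec in records:
        key = rec[level]
        out, cap = totals.get(key, (0, 0))
        totals[key] = (out + rec[3], cap + rec[4])
    return {k: out / cap * 100 for k, (out, cap) in totals.items()}

print(rollup(records, 0))  # business-level view, e.g. chemical ≈65.8%, power 100%
print(rollup(records, 2))  # plant-level view for engineers solving the problem
```

The same records thus serve both the corporate summary and the detailed view, which is the point of designing the measurement levels up front.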
Assets Condition
An asset should be able to be used safely and effectively. This means that it needs to
be maintained in a condition that is adequate for the purpose for which it is intended,
and that it complies with the relevant health and safety standards. If this is not the
case, the asset's ability to deliver services to the level and standard required will be
compromised. Condition assessment involves (Victorian Government, 1995):
• setting the required condition of the asset relative to its service delivery
requirements and value (criteria should include those relating to
operational efficiency, health and safety, and amenity);
• inspecting the asset and comparing its actual condition with that required;
• forecasting the future condition of the asset.
Required Assets Condition:
It is important to be clear on what condition the asset needs to be in to perform at an
appropriate level of service. The required condition will vary between assets
according to the asset's strategic importance, its specific function and its particular
physical requirements. The purpose of establishing required condition is to provide a
benchmark against which actual condition can be compared. Required condition is the
acceptable physical condition needed of an asset for effective service delivery. It
should perform its functions without unacceptable disruption; provide the expected
level of service appropriate for its functions; and provide a safe environment that
meets statutory requirements. Required condition varies according to function. It will
vary not only between Asset Categories but also between individual assets within the
same Asset Category. Variations within a single asset can arise as a result of assets
that have a number of functions. Physical infrastructure assets or constructed assets
are often complex and support a number of functions. Required condition is simply a
judgement of the main physical requirements that must be met. It will depend on the
specific functions and physical requirements of those features of the asset with most
strategic importance. However, careful and objective identification of required
condition is a very important part of conducting assets inspections in the assets
condition assessment process. If the required condition identified is too high or low,
the result can be either unnecessary expenditure on maintenance or refurbishment, or
deterioration of the asset and loss of value through under-expenditure. Basically, in
establishing required condition, the emphasis should be on those elements of the asset
most important in meeting business needs.
Actual Assets Condition:
Assessing the actual condition of an asset is the active part of conducting assets
inspections in the assets condition assessment process, in preparation for analysis. An
asset's actual physical condition and the acceptability of that condition can fluctuate
considerably over its useful life, particularly if there is a change in its function.
Information on actual condition is needed at any time to be able to make effective
decisions on the management of assets. The focus of actual condition assessment
during assets inspections is on key elements. All physical assets consist of a number
of elements or components that can be identified and measured. In assessing actual
condition it is important to identify and focus on those elements of the asset most
important to business needs.
Asset Performance, Functionality, Utilisation and Condition Criteria
The assets performance perspective focuses on how well an asset is performing in its
intended use, in a way generally meant to be independent of the asset’s relationship to
organisational strategy. In brief, asset performance can be separated into three criteria:
• functionality;
• utilisation;
• condition.
Each criterion is a product of different data sources and methodologies and, similar to
the mission-alignment metric, decision-making risks are reduced by including
independent sources of information (adapted from FFCF, 2006).
The first major criterion used to measure assets performance is functionality. One way
to view functionality is to consider all objectives and criteria used to determine if an
asset can acceptably fulfil its needed purpose. This is however a broad view and
includes not only functional performance from an organisational mission perspective,
but also functional performance from a legal, regulatory, and stewardship perspective.
Traditional decision-making grossly undervalues this area, and by doing so
organisations may absorb large avoidable risks. The simple approach to defining
assets functionality is with a functionality index which is used to establish a value tree
of criteria that are important to asset performance outcomes. This should include
compliance with safety and/or statutory codes. Required or value-contributing
operational parameters are included, such as minimum functional criteria, as
determined for a highway, building, industrial system or maintenance facility. Other
typical facility asset categories include occupant safety (liability mitigation),
productivity, environmental stewardship objectives, energy conservation goals, and
public image. In all cases, the qualification and quantification of asset functionality
categories is best determined when documented and reinforced by asset configuration
profiles, and/or standard operating procedures. If done this way, it is possible to
weight the different criteria, such as using an analytic hierarchy process (AHP) to
calculate a global functionality index. Given the explicit definition of each functional
criterion, gaps between actual and desired values can be used as a basis for determining
assets deficiency, which is discussed later.
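A weighted functionality index of the kind described can be sketched as a simple weighted sum. The criteria, weights and scores below are hypothetical, and the AHP pairwise-comparison step that would produce the weights is omitted for brevity:

```python
# Hypothetical functionality criteria with weights (summing to 1.0, as an AHP
# pairwise comparison would yield) and assessed scores on a 0-100 scale.
criteria = {
    "safety_code_compliance": (0.40, 95),
    "occupant_safety":        (0.25, 88),
    "energy_conservation":    (0.20, 70),
    "public_image":           (0.15, 60),
}

def functionality_index(criteria):
    """Global functionality index: weighted sum of per-criterion scores."""
    assert abs(sum(w for w, _ in criteria.values()) - 1.0) < 1e-9
    return sum(weight * score for weight, score in criteria.values())

print(functionality_index(criteria))  # 83.0 on a 0-100 scale
```

The gap between each criterion's actual score and its desired value is then the raw material for recording functionality deficiencies.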
The second assets performance criterion is utilisation. In pure terms, assets utilisation is
independent of assets condition. Although there is a commonly observed association
between low utilisation and poor condition, this is often the result of some third cause
rather than a direct effect of assets condition. Utilisation can apply to all types of
assets, but is most often used for facility assets, for example, space utilisation.
For many facility users, space utilisation criteria are essential. The calculation of a
space utilisation index is simply a summary comparison between demand and supply.
Space demands or needs can be defined in two ways; the occupant can determine
them, or they can be established by policy. Having the occupant determine space
needs works well when the occupant also directly pays for the space used, thus
imposing a self-governing behaviour. However, this is not the case for many large
and/or public organisations. Most decision-makers making space consumption
decisions, do so without knowledge of the impact these decisions will have on the
organisation in terms of mission/operational tradeoffs or total asset ownership costs.
This is generally an organisational complexity issue, and in order to adequately
address it, many organisations use some form of utilisation guidelines or standards.
The strategy is to drive towards consistently using a reasonable and well-quantified
standard. When employed, these standards can be used to calculate a utilisation index.
Measuring assets utilisation achieves a number of valuable outcomes in addition to
producing a simple assets performance metric that can be used for relative
comparisons. Valuable outcomes include the equitable distribution of resources and
funding, and identifying assets usage needs to avoid or mitigate functional and/or
operational impacts - all of which contribute to lowering total ownership costs.
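The comparison between policy-defined demand and actual supply can be sketched as follows; the space standard and figures are hypothetical illustrations:

```python
def space_utilisation_index(demand_m2: float, supply_m2: float) -> float:
    """Space utilisation index: policy-justified space demand as a
    percentage of the space actually held."""
    return demand_m2 / supply_m2 * 100

# Hypothetical office: a policy standard of 12 m² per occupant for 50 staff,
# while the organisation actually holds 800 m².
demand = 12 * 50   # 600 m² justified by the standard
supply = 800       # m² actually held
print(space_utilisation_index(demand, supply))  # 75.0 - i.e. 25% over-supply
```

The 25-point shortfall is exactly the kind of utilisation gap that, once documented against the standard, becomes a recordable utilisation deficiency.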
The third criterion, assets condition, is a broad and complex field. There are a number
of competing methods to quantify assets condition ranging from general service life
prediction estimates to scientifically defined degradation models. The common
approach is to apply a distress-based inspection method, often performed in the
context of an engineering management system (EMS). This method uses
mathematically defined degradation curves to predict a system’s physical
performance. These curves model performance as a function of the type of asset
system, certain distress criteria, time, and various other statistically relevant variables.
An EMS-based inspection includes the collection of basic asset system attributes,
such as the type of construction and age, along with data collected on observable
distresses, specifically type, severity and density. This information is used to calculate
individual deduct values that, when summed, are compared to established degradation
profiles to determine the asset’s remaining service life. This data is used to predict the
optimal time for asset system rehabilitation or renewal, and then in turn a deficiency
is recorded in a project management system for the replacement of the asset system.
The sum of deduct values is also used to calculate a Condition Index (CI), which is
typically reported on the scale of 100-0 where 100 is a distress free system. This
methodology is fundamentally different and vastly superior to the traditional facility
condition index (FCI), which is simply calculated as the sum of maintenance project
costs divided by the present replacement value of the system, building or assets
portfolio being evaluated. The problem with the FCI is in the definitions used for the
numerator and denominator. Where CI uses very explicit, auditable definitions, FCI
definitions are known to vary widely or are inconsistently used across the industry or
even at individual locations. This introduces great uncertainty when using FCI in
support of decision-making such as capital assets funding allocation and prioritisation.
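The two indices can be contrasted in a short sketch. The deduct values and costs are hypothetical, and the simple summation here stands in for the corrected deduct-value procedure a full EMS would apply:

```python
def condition_index(deduct_values):
    """Condition Index on a 100-0 scale; 100 is a distress-free system.
    Simplified: a real EMS applies corrected deduct curves, not a raw sum."""
    return max(0.0, 100.0 - sum(deduct_values))

def facility_condition_index(maintenance_project_costs, replacement_value):
    """Traditional FCI: sum of maintenance project costs over the present
    replacement value of the system or portfolio."""
    return sum(maintenance_project_costs) / replacement_value

# Hypothetical roof system: three observed distresses, and a cost backlog.
print(condition_index([12.5, 8.0, 4.5]))                       # 75.0
print(facility_condition_index([40_000, 25_000], 1_000_000))   # 0.065
```

The CI is driven by explicit, auditable field observations, whereas the FCI depends entirely on how "maintenance project costs" and "replacement value" are defined, which is the source of its inconsistency across the industry.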
Determining Assets Deficiencies:
Each assets performance criterion discussed previously provides a principal source for
asset deficiencies. Deficiencies are fact-based statements of correctable problems,
meaning their correction is absolute and auditable. Specifically, when a deficiency is
corrected it can be confirmed, and the metric reporting performance, i.e. functionality,
utilisation or condition, must have a direct and observable sympathetic response. In a
decision-making strategy, assets deficiencies are defined using highly structured data
in order to support efficient and accurate acquisition, logic testing, analysis, and rapid
decision-making. This includes disciplined use of performance standards and
conventions to define the deficiency (type of correctable problem), its location
(unique geospatial referencing), the system affected, and the deficiency’s category
(maintenance, alteration, improvement, code compliance, or disposal). Additional
data may also be collected as necessitated by the deficiency’s source.
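The highly structured deficiency data described above can be sketched as a record type. The field names and category list follow the text; the validation logic and example values are illustrative assumptions:

```python
from dataclasses import dataclass

CATEGORIES = {"maintenance", "alteration", "improvement",
              "code compliance", "disposal"}

@dataclass
class Deficiency:
    """A fact-based, auditable statement of a correctable problem."""
    description: str       # type of correctable problem
    location: str          # unique geospatial reference
    system_affected: str
    category: str          # one of CATEGORIES
    source_criterion: str  # "functionality", "utilisation" or "condition"
    corrected: bool = False

    def __post_init__(self):
        # Disciplined use of conventions: reject unknown categories outright.
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown deficiency category: {self.category}")

d = Deficiency("roof membrane failure", "BLDG-07/roof/grid-C4",
               "roofing system", "maintenance", "condition")
print(d.corrected)  # False until correction is confirmed as auditable
```

Structuring the record this way supports the logic testing and rapid analysis the text calls for, since every field is constrained and machine-checkable.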
Deficiencies are to be recorded in a capital assets project management system to
include scope, estimated correction costs based on cost estimating systems, targeted
time for correction, and other asset performance attributes. Condition-based
deficiencies are the products of a distress-based asset inspection and analysis that
selects the optimal maintenance, repair or renewal alternative. Utilisation deficiencies
require the evaluation of assets utilisation assessment data, the identification of a
utilisation problem, and a concept correction scope with a parametrically generated
cost estimate. Assets functional deficiencies are more complicated and must be
sourced back to the specific objective not achieved, which as discussed previously,
includes compliance with any number of specific goals, standards and/or rules.
Therefore, assets functionality objectives must include definitions to trigger a failure
or functional gap requiring a documented deficiency. Each assets functionality
objective must also be correctable and its correction verifiable. All possible assets
deficiencies, contained in the capital assets project management system, are then
evaluated by asset managers as source material for capital asset investment projects.
Assets performance deficiencies communicate the needs of existing assets or the need for
changes to the assets performance criteria, i.e. function, utilisation and condition.
Additionally, as incorporated elements in a project’s scope, the correction of assets
deficiencies represents the project’s criteria for success. A simple deficiency of an
existing asset, such as a road pavement rehabilitation, or building facility roof
replacement, may translate directly into a capital assets replacement project.
Typically, multiple deficiencies are bundled to define a project scope, which due to
timing issues, economies of scale, and other variables, may only approximate the sum
of its component deficiency estimates. The universe of deficiencies can also communicate
the need for new assets or needed changes to existing ones. Assets functionality and
utilisation criteria are also frequently applied to existing assets, but deficiencies in
these areas may also communicate the need for new assets. For instance, a utilisation
deficiency may indicate the need for more space in the case of facilities assets.
Additionally, a new strategic goal, or regulatory requirement may be the source for an
asset functionality deficiency. It is very important to note that the three assets
performance criteria used can communicate performance with respect to current
needs, but their measure does not necessarily indicate the best correction strategy. For
instance, poor assets condition may not indicate the need for more maintenance where
a better solution may be new capital assets funding to demolish or dispose of the
existing asset and replace it with a new asset that may be superior in quality or
technology. Likewise, poor asset utilisation may not indicate the need for more
capacity with a new asset, but it might mean conversions of use. Deciphering the
assets deficiency universe can be a complex matter and is best managed by
professional planners, engineers, environmental specialists, and asset managers who
can assess the larger picture to find more favourable solutions. The processes relative
to capital assets project development and management are better equipped to handle
the many variables and externalities encountered, including assets user preferences as
well as new technologies and strategic objectives. All projects, though, must have
criteria for success, specifically absolute and auditable correction of the deficiencies.
Lastly, once the preferred project alternative is selected, it is entered onto the business
case pro forma and competes for funding and resources, which can include
recognising certain deficiencies as contract alternatives or options that may or may
not be awarded based on their competitive cost and the availability of funding.
Engineering Management Systems and Assets Condition:
The following is an overview of the general theory behind an engineering
management system (EMS). First, a degradation curve is defined, which may be
modelled using a Pearl Curve as illustrated in Figure 1. The figure presents a
degradation curve for an asset system with an anticipated 15-year life cycle, as
determined when the curve crosses the CI failure threshold shown. Point A represents
the calculated CI based on a facility assessment conducted in year 10. The anticipated
CI was 75, but the actual CI was 70 due to the type, severity and density of distresses
observed. This translates into an asset system performance lag of one year,
represented by the horizontal separation between Point A and the grey line
representing the generic degradation profile shown. Point B represents the updated
projection for system failure, or the prediction for a system renewal deficiency, which
is roughly targeted for year 14. Line C represents a repair project alternative performed
in year 11 that adjusts the CI to approximately 90 and extends predicted performance
to year 19 identified by Point C. Line D is another alternative that allows the system
to continue to degrade until year 14 and then upon predicted failure a replacement in-
kind system is installed. The product of this alternative extends systems performance
to year 29, combining the 14 year life cycle of the first system and a 15 year predicted
life cycle of the second system (FFCF, 2006).
The advantage of using mathematically modelled degradation curves is that each
alternative can be economically evaluated by comparison of the uniform annual costs
of each maintenance, repair or renewal alternative, which considers the organisation’s
cost of capital and allows the consideration of different investment horizons.
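A minimal sketch of such a degradation model follows, using a logistic (Pearl) form. The parameters and the failure threshold of 40 are hypothetical stand-ins for the calibrated curves an EMS would supply:

```python
import math

# Hypothetical Pearl-curve parameters, chosen so the CI crosses the
# failure threshold of 40 at roughly year 15 (a 15-year life cycle).
A = 200.0
B = math.log(1.5 * A) / 15.0

def pearl_ci(t: float) -> float:
    """Pearl (logistic) degradation curve: CI falls from ~100 toward 0."""
    return 100.0 * A / (A + math.exp(B * t))

def predicted_failure_year(threshold: float = 40.0, step: float = 0.01) -> float:
    """First year at which the modelled CI drops below the failure threshold."""
    t = 0.0
    while pearl_ci(t) >= threshold:
        t += step
    return round(t, 2)

print(round(pearl_ci(10), 1))    # modelled CI at a year-10 assessment
print(predicted_failure_year())  # modelled service life, roughly year 15
```

With such a curve in hand, each repair or renewal alternative can be replotted and compared on uniform annual cost, as the text describes.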
Figure 1. Engineering Management Systems and Assets Condition (FFCF, 2006)
2. Level of Service in Infrastructure Assets
Infrastructure Asset Management and Service Delivery
Infrastructure Asset Management is the discipline of managing infrastructure assets
such as transportation infrastructure (roads, bridges, culverts, railways, harbours etc.);
facilities infrastructure (water supply, dams, barrages, treatment plant, pump stations,
waste treatment plant, sewers, storm water drains and pipes, flood management etc.);
utility infrastructure (power supply, generation, distribution, reticulation, gas plant,
gas storage and piping); as well as community recreational infrastructure and assets.
In the past, these assets have typically been owned and managed by local or central
government. Investment in these assets is made with the intention of meeting
community demands for infrastructure services and improved living conditions.
The concept of service delivery is the foundation of Infrastructure Asset Management.
There are two performance criteria related to this concept of service delivery,
specifically the Level of Service (LOS) and Standard of Service (SOS). The Level of
Service (LOS) is an indicator of the extent or degree of service provided by, or
proposed to be provided by, an infrastructure asset, based on and related to the
operational and physical characteristics of the asset. Level of service indicates the
capacity per unit of demand for a public infrastructure asset (i.e. transportation
infrastructure, facilities infrastructure, and/or utility infrastructure). The Standard of
Service (SOS) states in objective and measurable terms, how an asset will perform,
including a suitable minimum condition grade in line with the impact of asset failure.
An Acceptable Level of Service is the minimum standard adopted for a service level
on public infrastructure assets and services.
Levels of Service (LOS):
LOS includes the defined service parameters or requirements for a particular
infrastructure asset against which service performance may be measured. Service
levels usually relate to the asset’s responsiveness in terms of quality, quantity, and
reliability, as well as environmental acceptability and cost. The measurable outputs of
the LOS of infrastructure assets consist of:
• quality
• quantity
• reliability
• safety / risk
For mature assets, the key inputs that impact on the level of service are planning,
maintenance and renewal. The priority or impact of these inputs will vary for each
owner and asset. They will depend on the status of the assets and their key corporate
objectives or business viability. The key performance indicators or quality assurance
issues in relation to Level of Service are:
• Clear indicators and standards exist for each service delivery programme.
• Indicators are public knowledge, available to stakeholders and customers.
The most successful organisations have:
• Regular customer surveys and associated analysis.
• An ‘informed customer group’ separate from members and the general public.
• An effective feedback system and complaints management process.
• Regular public meetings and discussions with special interest groups,
describing outcomes of the asset management program, etc.
Level of Service Strategy:
A Level of Service strategy begins with a concise description of a LOS objective,
followed by legislative and organisational requirements, viz:
• Objective – To match the LOS provided with the expectation of the customer.
• Factors – Legislative requirements; organisational goals and objectives;
resources availability; and customer expectations.
Figure 2 illustrates the various steps required for developing an infrastructure assets
LOS strategy, beginning with data collection and performance measurement,
identification of the LOS, evaluating options in determining the LOS, and ending as
input into the Asset Management Plan (AMP) or in asset stewardship contracts.
Figure 2. Infrastructure Assets Level of Service Strategy (WBPC, 2006)
Level of Service Procedure:
The procedure for developing optimum LOS for infrastructure assets includes the
following steps (NRCTRB, 1984):
1. List the asset elements – The LOS procedure begins with a categorisation of the
asset elements or components (for example, the asset elements of a highway
infrastructure asset would include the flexible pavement, shoulders and approaches,
roadside, drainage, and ancillary structures such as culverts).
2. Assign Considerations for the Asset Elements – A list of considerations is
established that can be used to evaluate the performance of the asset elements in terms
of their ability to contribute towards the asset’s service delivery. Appropriate
considerations such as safety, reliability, availability and maintainability, aesthetics
etc., are then assigned to each asset element with respect to its service potential.
Safety is an important consideration against which the performance of most
infrastructure asset elements can be evaluated. However, safety would, for example,
not be an appropriate consideration for the roadside element of a highway asset.
3. Select an Attribute for Each Consideration – In this step, one, and only one,
attribute is selected to express the level of each assigned consideration whereby the
performance of the infrastructure asset element can be evaluated on a numerical scale.
An attribute is a numerical scale for measuring the effects of alternate Levels of
Service for each assigned consideration. There are two general types of attributes;
natural attributes and constructed attributes. A natural attribute has levels that are
physically measurable. For example, for the consideration of safety, a natural attribute
may be the percent change in the number of incidents or accidents, relative to a
particular asset element. This is a natural attribute as it can be physically measured,
even if data of incidents or accidents may not be readily available, and estimates have
to be used. A constructed attribute is one for which physical measurement is not
possible. In such cases, a subjective scale or index must be constructed to define the
various degrees of the effect of the attribute. For example, the consideration of
aesthetics in evaluating the performance of an infrastructure asset element such as a
waste water sewer cannot be physically measured and a subjective attribute scale has
to be constructed to define a range for the degree of aesthetical appearance. An
important point in the use of constructed attributes is that each level on the subjective
scale should be described in sufficient detail so that the associated level of impact is
communicated clearly and unambiguously. Establishing scales is presented in step 7.
Furthermore, the selection of attributes should involve an iterative procedure whereby
a preliminary list of attributes is prepared, followed by a group review including
various specialised expertise. A typical preliminary list of attributes may include the
following:
• percent change in the number of incidents or accidents;
• percent change in the reliability of the asset element;
• percent change in the availability of the asset;
• frequency of maintenance of the asset element;
• frequency of rehabilitation of the asset;
• percent serviceability index;
• percent increase in usage costs; etc.
4. Review the Preliminary List of Attributes – The objectives of the specialist
expertise group review would be to assess:
• whether the preliminary list of attributes includes all issues of concern;
• whether the attributes on the list are practical, i.e. whether the effects of
alternate Levels of Service could be measured in terms of these attributes;
• whether there are appropriate additions, deletions, or modifications
necessary for the preliminary list of attributes.
5. Establish a Parameter for Element Condition – In this step, one, and only one,
parameter is designated for each degree of condition of the asset element to define
alternate Levels of Service of the asset. For example the condition of rutting in the
asset element of flexible paving of a road asset, or the condition of cracking in the
asset element of isolators of a power transformer asset. The parameters of condition
should be capable of being expressed numerically or descriptively. The numerical or
descriptive parameters should be able to differentiate clearly between different levels
of an asset element’s condition. A parameter may consist of a single definitive value
such as skid resistance in terms of a specific number at a specific speed in the case of
a smooth surface of road pavement, or it may consist of a combination of values such
as depth of rut and percent of road surface affected in the case of a rutted surface of
road pavement. In the case of cracking in transformer isolators, the parameter may
consist of a combination of values such as average width of cracks and percent of
surface area affected. Where designation of a numerical parameter is not feasible, a
descriptive parameter has to be used. For example, if in the case of cracking in
transformer isolators a measure of crack width or percent of surface area affected is
not feasible, then a description such as ‘20% of isolators badly cracked’ or ‘some
isolators visibly cracked’ would suffice to designate the degree of condition of the
asset element to determine alternate Levels of Service of the asset.
6. Specify Alternate Levels of Service for Each Condition – This step establishes
numerical values of the parameters used to define alternate Levels of Service based on
the condition of the asset elements. The number of alternative Levels of Service
defined for each condition should range between two and five. As defined previously,
the Level of Service is an indicator of the extent or degree of service provided by, or
proposed to be provided by, an infrastructure asset, based on and related to the
operational and physical characteristics of the asset. The Level of Service thus
specifies a threshold value of a condition parameter that indicates the extent or degree
of service that can be provided by an asset, based on the asset’s ability to function
according to its designed operational and physical characteristics. Such a threshold
value of condition would trigger the scheduling of an appropriate maintenance action.
Some general guidelines for generating appropriate alternate Levels of Service are:
• the description of each Level of Service should be definitive and
unambiguous - in other words, it should communicate clearly the extent or
degree of service that can be provided by an asset, as well as the asset’s
designed operational and physical characteristics;
• the description of a Level of Service should include performance measures
that can be easily and quickly assessed;
• each alternate Level of Service should be feasible - for example, if the
analysis of the extent or degree of service that can be provided by an asset,
based on the asset’s ability to function according to its designed
operational and physical characteristics, results in a selection of the lowest
Level of Service for a particular condition, then such a Level of Service
should be able to be adopted;
• the resource requirements to maintain a specific Level of Service at a
specific threshold value of condition should be significantly different from
other levels so that different options of maintenance action are represented.
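The threshold mechanism described in this step can be sketched as a simple trigger. The rut-depth parameter, threshold values and LOS labels below are hypothetical illustrations:

```python
# Alternate Levels of Service for one condition parameter (rut depth, mm)
# on a flexible pavement element. Thresholds and labels are hypothetical.
LOS_THRESHOLDS = [  # (maximum rut depth in mm, Level of Service)
    (10, "A"),      # full designed service
    (20, "B"),
    (30, "C"),      # minimum acceptable level
]

def level_of_service(rut_depth_mm: float) -> str:
    """Map a measured condition parameter to its alternate Level of Service."""
    for max_depth, los in LOS_THRESHOLDS:
        if rut_depth_mm <= max_depth:
            return los
    return "unacceptable"

def maintenance_triggered(rut_depth_mm: float) -> bool:
    """True when the parameter crosses the acceptable-service threshold,
    which should trigger scheduling of a maintenance action."""
    return level_of_service(rut_depth_mm) == "unacceptable"

print(level_of_service(8))        # "A"
print(maintenance_triggered(34))  # True - schedule a maintenance action
```

Keeping the thresholds explicit and few (between two and five per condition, per the guideline) keeps each Level of Service definitive, quickly assessable, and feasible to adopt.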
7. Determine Effects of Alternate Levels of Service – This step determines the
effect on the relevant consideration for each of the alternate Levels of Service
established for a specific condition. The effect on a consideration (i.e. safety) is
estimated in terms of the attribute that is selected to express the level of each assigned
consideration. As previously indicated, an attribute is a numerical scale for measuring
the effects of alternate Levels of Service for each assigned consideration. Ideally, the
procedure for estimating the effects should be based on objective data. However, the
available data might often not be adequate for directly estimating the effects of
alternate Levels of Service. In such a case, estimating effects involves specialist
expertise review groups to supplement whatever data may be available.

8. Estimate Resource Needs to Maintain Each Level of Service – In this step, the
resources required to maintain the asset condition at each alternate Level of Service are
determined. The results of these estimates can be conveniently tabulated from
information provided by experienced operations and maintenance personnel. For
alternate Levels of Service not previously used or considered for use, data for
estimation of resource requirements will be lacking and judgemental estimates will be
necessary. Best estimates must be made from available data and from experienced
expertise. The estimating process can be approached as follows;
¾ estimate the total annual resource requirements for each condition that
defines alternate Levels of Service;
¾ estimate the increased amount of resources that would be required to
maintain the condition for each Level of Service that is higher than the
acceptable practice level, as well as the decreased amount of resources that
would be required to maintain the condition for each Level of Service that
is lower than the acceptable practice level.
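The two-part estimating process above can be sketched as a simple tabulation; this is an illustration only, with entirely hypothetical Levels of Service and resource figures:

```python
# Illustrative sketch (hypothetical figures): tabulating annual resource
# estimates for alternate Levels of Service, expressed relative to the
# acceptable practice level as described in the text.

acceptable_practice_cost = 100.0  # annual resource units (hypothetical)

# Estimated total annual resources to maintain each alternate Level of Service.
level_costs = {
    "LOS 1 (highest)": 160.0,
    "LOS 2": 120.0,
    "LOS 3 (acceptable practice)": 100.0,
    "LOS 4 (lowest)": 70.0,
}

# Increase (+) or decrease (-) in resources relative to the acceptable
# practice level, for Levels of Service above and below it.
deltas = {lvl: cost - acceptable_practice_cost for lvl, cost in level_costs.items()}
```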

9. Assess the Desirability for Each Level of Attribute – The relative desirability
(value) is assessed for the different levels of each attribute that is selected for
measuring the effects of alternate Levels of Service for each assigned consideration
(as determined in step 7). For example: How much better or worse is one level of an
attribute such as the frequency of maintenance of an asset element in the case of a
maintainability consideration for an asset, relative to another level of the same
attribute in relation to resources costs? The relative desirability in this case is thus
determined by assessing how much it would cost in order to maintain each level of the
attribute of the frequency of maintenance of an asset element. This step therefore
requires the completion of the following three sequential tasks;
¾ preparation of group value assessments;
¾ conducting specialist expertise group reviews;
¾ analysis of assessment data.

Each task is described in detail as follows:


Preparation of group value assessments:
In this task, assessment forms are compiled as background information to facilitate
specialist expertise group assessments. The basic assessment question would be: What
maximum proportion of the total available (maintenance) budget would an
organisation be willing to spend for a particular level of the attribute?

Conducting specialist expertise group reviews:


In this task, the relative weights of different attributes are determined. It is important
to use the percent of total available (maintenance) budget as an indication of the value
placed on maintaining the attribute at each of the levels described, and not the actual
cost of maintaining each level. The ‘willingness to pay’ should be used as an
expression of the relative value of the results of maintaining the attribute at a specific
level, and not an estimate of the cost of doing so.
Analysis of assessment data:
For each given attribute, the following analysis procedure is followed;
¾ arrange all estimates regarding a given attribute in ascending order;
¾ find the median of all estimates for each level of a particular attribute;
¾ calculate the relative value of each attribute level;
as follows:
attribute level estimate (a): relative value of level (a) = 0
attribute level estimate (b): relative value of level (b) = [(b) – (a)] / [(e) – (a)]
attribute level estimate (c): relative value of level (c) = [(c) – (a)] / [(e) – (a)]
attribute level estimate (d): relative value of level (d) = [(d) – (a)] / [(e) – (a)]
attribute level estimate (e): relative value of level (e) = [(e) – (a)] / [(e) – (a)] = 1
¾ plot the attribute levels on the x-axis, and the corresponding relative values
on the y-axis;
¾ draw a smooth curve through the plotted points;
¾ find the attribute level corresponding to a relative value of 0.5 – this is
called the mid value level of the attribute;
¾ determine the worst and best attribute levels corresponding to a relative
value of 0 and a relative value of 1 respectively;
¾ determine the relative weight of each attribute;
as follows:
The relative weight of an attribute is proportional to the increase in that portion of the
budget an organisation would be willing to spend in order to improve the level of the
attribute from worst to the best. The relative weights are normalised to sum up to 1.
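The analysis procedure above (median of the group estimates, scaling from worst to best, and normalised weights) can be sketched in code; the data below is entirely hypothetical and the function names are illustrative, not from the report:

```python
# Illustrative sketch (hypothetical data): computing relative values of
# attribute levels and relative attribute weights from group estimates of
# the percentage of budget an organisation is willing to spend.
from statistics import median

def relative_values(level_estimates):
    """level_estimates: {level: [group estimates of % budget]}.
    Returns {level: relative value}, scaled so the worst level is 0
    and the best level is 1, per the procedure in the text."""
    medians = {lvl: median(ests) for lvl, ests in level_estimates.items()}
    lo, hi = min(medians.values()), max(medians.values())
    return {lvl: (m - lo) / (hi - lo) for lvl, m in medians.items()}

def relative_weights(attribute_ranges):
    """attribute_ranges: {attribute: (worst-level %, best-level %)}.
    Each weight is proportional to the budget increase from worst to best,
    normalised so the weights sum to 1."""
    spans = {attr: best - worst for attr, (worst, best) in attribute_ranges.items()}
    total = sum(spans.values())
    return {attr: s / total for attr, s in spans.items()}

# Hypothetical example: five levels (a..e) of one attribute, two attributes.
vals = relative_values({"a": [5, 6], "b": [10, 12], "c": [15, 16],
                        "d": [20, 22], "e": [25, 27]})
weights = relative_weights({"frequency of maintenance": (5.5, 26.0),
                            "downtime": (3.0, 10.0)})
```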

10. Formulate Recommendations from an Analysis of the Results – Once all the
factors have been tabulated (the asset elements and their related considerations,
levels of attributes, conditions, parameters, alternate Levels of Service, effects,
estimated resource needs, and the estimated desirability of each level of attribute),
the results can be compiled on a typical spreadsheet for the formulation of
recommendations. The spreadsheet serves as a layout for the results and for the
recommendations drawn from them. These are presented in terms of two basic
applications, specifically;
¾ in the selection of optimum Levels of Service for the related amounts and
costs of available resources;
¾ in an assessment of the effects of changes in the portion of the budget an
organisation would be willing to spend in order to improve the level of an
attribute for alternate Levels of Service for each assigned consideration.
With sensitivity analysis, the optimum Levels of Service for different
combinations of available resources can be determined.

Standard of Service (SOS):


There are two main objectives of Infrastructure Asset Management relating to
Standard of Service (SOS) (NZALGE, 1998):
• Sustain SOS: to sustain or deliver an agreed Standard of Service in the most
cost-effective way through the operation, maintenance, refurbishment, and
replacement of assets. Management of this objective is the subject of Asset
Management Plans.
• Improve SOS: to make strategic changes and improvements to the Standard
of Service of the asset portfolio through the creation, acquisition, improvement
and disposal of assets. Changes to the SOS are usually managed as a
programme based on strategic objectives regarding the asset portfolio.

Sustain SOS:
The key components of the sustain SOS objective are:
• A defined standard of service
• A whole-life cost approach
• Asset Management Plan.

Defined Standard of Service (SOS):


Minimum condition grade:
Without a defined Standard of Service (SOS) there is no
means of knowing what service customers expect, and no control on the whole-life
cost. With a defined SOS, the asset manager is clear about how success or failure will
be measured and the customer understands what to expect in return for the
expenditure on the asset system. With a defined SOS the need to maintain, repair,
refurbish or replace is dependent on the condition of the asset. With a performance-
based asset management approach, decisions are flexible and depend predominantly
on the current condition of the asset. This differs from a planned maintenance
approach (which may not reflect the actual condition) by responding to the actual
deterioration and performance of an asset. For example, an asset is corrected after a
partial loss of function condition, rather than scheduled maintenance action
irrespective of its functional condition. The minimum condition grade needs to be set
objectively, in line with the scale of impacts or consequences of asset failure during
the design event. The minimum condition grade provides a key boundary condition
for making investment decisions. The second part of Standard of Service is a
specification of how the asset should perform. This would normally include a
specification of the attributes of the asset which are important to its function e.g.
location, type, height, capacity. By managing asset usage against a defined SOS,
which couples the performance specification with the condition grade as a measure of
reliability, the potential complication of trying to optimise maintenance over a short
timeframe can be avoided, as well as the need to determine the outcome or benefit
associated with each individual intervention (NZALGE, 1998).

Whole-Life Cost Approach:


Whole-life cost is also referred to as total cost of ownership (TCO) or lifecycle cost.
Management takes a whole-life cost approach to decisions regarding the operation,
maintenance, refurbishment and replacement of assets.

Over the life of an asset, there are likely to be hundreds of individual interventions,
which together sustain the agreed SOS. Undertaking a formal investment appraisal to
assess options or the relative benefit for each individual intervention or even an
annual programme of interventions would be very complicated, and prohibitively
expensive. It is the sum of the cost of all the individual interventions, and their effect
on the whole-life cost of providing the SOS that is of major concern. By
implementing a system that keeps a historic record of past expenditure, coupled with a
forecast of expenditure to sustain the SOS, a solid foundation can be provided from
which to assess the asset manager’s performance. The objective of sustained
investment decisions can therefore be stated as providing the agreed Standard of
Service for the minimum whole-life cost. Delivering the agreed Standard of Service is
a relatively simple concept on its own. A key objective is to minimise the whole-life
cost. At its most simple, Asset Management attempts to optimise the trade-off
between maintenance and replacement. For a given standard of service every asset in
a system requires a decision about how best to manage the asset, and at which point
replacement or refurbishment may represent the most cost effective approach.

Three alternative asset management approaches, A, B and C, can be envisaged.
Option A is a maintenance regime where the annualised maintenance costs are low,
but the annualised capital costs are relatively high. In this option the asset is
effectively allowed to deteriorate faster with minimal maintenance before being
replaced. This scenario could be expected in a situation where maintenance costs
come out of a local revenue budget but capital costs are controlled centrally, or where
an organisation is under pressure to minimise revenue costs with specific incentives
for capital expenditure. Option B is a maintenance regime where the annualised
maintenance costs are high, preventing the deterioration of the asset and reducing the
annualised capital cost of sustaining the SOS. Whilst the asset life is much longer,
sustaining it in this way is ultimately more expensive than replacing the asset earlier.
Delaying or avoiding the cost of replacing the asset requires more frequent and
expensive maintenance and repair in order to sustain the SOS. This scenario could be
expected in a situation where it may be difficult to justify replacement expenditures,
or where the prioritisation system favours improvement projects. Option C represents
a maintenance regime that minimises the whole-life cost of providing the required
SOS. This is the point at which the sum of the two major components of whole-life
cost is lowest: the optimum balance between maintenance and replacement costs.
This scenario is most likely in an organisation that is operating with effective asset
management with clear responsibilities for whole-life costs and good information
systems to support the best asset management decisions based on the whole-life cost.
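The trade-off behind Options A, B and C can be made concrete with a simple annualised cost model; this is an illustration only, with entirely hypothetical cost figures, not data from the report:

```python
# Illustrative sketch (all figures hypothetical): annualised whole-life cost
# as the sum of annual maintenance cost and annualised replacement cost.
# Options A, B and C in the text correspond to minimal, intensive and
# balanced maintenance regimes respectively.

def annualised_whole_life_cost(annual_maintenance: float,
                               replacement_cost: float,
                               asset_life_years: float) -> float:
    """More maintenance extends asset life (lowering the annualised capital
    cost) but raises the annual maintenance bill; the optimum regime
    minimises the sum of the two."""
    return annual_maintenance + replacement_cost / asset_life_years

# Hypothetical regimes: (annual maintenance $k, resulting asset life in years)
regimes = {
    "A: minimal maintenance": (10, 20),
    "B: intensive maintenance": (60, 60),
    "C: balanced maintenance": (25, 40),
}
replacement_cost = 1000  # $k, hypothetical
costs = {name: annualised_whole_life_cost(m, replacement_cost, life)
         for name, (m, life) in regimes.items()}
best = min(costs, key=costs.get)  # the Option C scenario in this example
```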

Asset Management Plan:


Asset Management Plans (AMP) include strategic, tactical and operational planning
for managing an organisation's assets in order to achieve strategic objectives. Usually
an Asset Management Plan will cover more than a single asset, taking a system
approach - especially where a number of assets must work together in order to deliver
an agreed Standard of Service. It is in the Asset Management Plan that the Standard of
Service is recorded and compared against the Level of Service, along with a long-term
plan that shows how an organisation can deliver the defined Level of Service against
the relative Standard of Service for the minimum whole-life cost.

Level of Service Requirements


Performance-based contracts require Level of Service (LOS) requirements to be
identified for each of the assets being maintained. In order to be used successfully on
this type of contract, LOS requirements, which may also be called performance
measures, must identify the following (USTRB, 2001):
• What is being measured?
• How is it being defined?
• What is the sampling rate and extent of measure?
• What are the accepted tolerances?
• What is the expected response time?
In order to determine the feasibility of the LOS requirements, an agency may verify
the LOS established in the contract with existing data from a source such as the
agency’s asset management system or maintenance management system. By
comparing existing conditions to the LOS established in the contract, it may be
possible to check whether it is reasonable to expect the contractor to attain the
intended LOS within the contract period. For example, it is probably not reasonable
for an agency that currently maintains a particular asset at an average condition of 60
(on a 100-point scale) to establish a Level of Service that requires the contractor to
maintain the asset at a condition index of 90 over a certain contract period, because
the contractor’s bid price will likely be much higher than what the agency currently
spends to maintain the asset. A more reasonable LOS requirement for those
conditions might be a condition index of 70 by the last year of the contract. The
contract should however, state the interim conditions that are expected in each of the
other years of the contract.
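The feasibility check described above (the 60-to-90 condition-index example) can be sketched as a simple comparison; the achievable-uplift parameter below is a hypothetical planning assumption, not a figure from the report:

```python
# Illustrative sketch (hypothetical figures): verifying that a contract's
# Level of Service target is feasible against the agency's historical
# condition data, as in the 60-to-90 condition-index example above.

def feasible_los_target(current_condition: float, target_condition: float,
                        max_uplift_per_year: float, contract_years: int) -> bool:
    """A target is treated as feasible when the required improvement does not
    exceed what a contractor could plausibly deliver over the contract period.
    max_uplift_per_year is an assumed planning parameter (hypothetical)."""
    required_improvement = target_condition - current_condition
    return required_improvement <= max_uplift_per_year * contract_years

# At roughly 2.5 points of uplift per year over 5 years, a 60-to-90 target
# is not feasible, whereas a 60-to-70 target is.
print(feasible_los_target(60, 90, 2.5, 5))  # False
print(feasible_los_target(60, 70, 2.5, 5))  # True
```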
The LOS topic is covered in detail in NCHRP Report 422, Maintenance QA Program
Implementation Manual (USTRB, 1999), as it relates to the implementation of a
quality assurance program for transportation agencies.
The Level of Service approach to asset management contracts presents several
implications for contractors. Issues to consider include:
• Is the LOS rating system objective and repeatable?
• Does the agency have experience with realistic links between LOS and
performance?
• Does the agency have experience determining reasonable funding levels to
achieve specified LOS?
An agency must exercise care in selecting the factors that will be used to establish
performance measures on this type of contract: a credible and properly established
evaluation system is essential. The historical condition information maintained in the
agency’s asset management database is an excellent source for assets condition data
that can be used to establish baseline conditions, such as the distribution of assets
network performance in various condition categories. However, the use of this data
assumes consistency in the contractor’s data collection and reporting methods and
agreement on the distribution of conditions at the beginning of the contract.

Alternatively, the information can be valuable in establishing the specific
performance standards that must be maintained. In any case, the contractor has an
acute interest in any LOS standards and condition information that the agency
proposes to use in these contracts. It is highly likely that the contractor will perform
its own baseline survey to compare against the agency’s data, and will perform QC surveys
throughout the contract period to track its own performance. Any discrepancies
between the two results, especially discrepancies that might cost the contractor, could
lead to challenges to the agency’s data, collection method, and analysis procedures.
The importance of the link between historical assets performance and the performance
standards established for outcome based contracts cannot be overemphasized. The
performance standards must reasonably account for the condition of the assets at the
start of the contract, and should incorporate the criteria that are most important to the
agency when considering assets performance. It is also important for the agency to be
able to measure the performance standards over time as a means of verifying the
performance of the contractor. Performance measures must be established for each
asset included in the contract. As these contracts become more prevalent in public
asset-owner agencies, asset management contractors will need to evaluate the
usefulness of traditional assets condition surveys for setting performance standards for
this type of contract and for monitoring the performance of the contractor over time.

Monitoring Contractor Performance:


Once the contract is in place, it is the agency’s responsibility to monitor the
performance of the contractor to ensure that the intended Levels of Service are being
met. The manner in which the monitoring will take place should be defined in the
contract, including the frequency of monitoring.
Depending on the performance measure criteria established, monitoring could be
required very frequently but should be conducted at least annually. Monitoring may
occur at different frequencies for different elements of the asset. In order to track the
contractor’s performance over the duration of the contract, the agency should
establish a documentation system. This system is essential in order to challenge the
contractor’s performance if Levels of Service are not being met; without such a
system, it is useless to set performance measure standards for the contractor to meet.
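One minimal form such a documentation system could take is a log of periodic inspection records against the agreed Levels of Service; this is a hypothetical sketch, with illustrative structure and data, not a system specified by the report:

```python
# Illustrative sketch (hypothetical structure and data): a minimal
# documentation system recording periodic inspections so the agency can
# evidence whether the contractor is meeting the agreed Levels of Service.
from dataclasses import dataclass, field

@dataclass
class InspectionRecord:
    asset_element: str
    inspection_date: str       # ISO date string
    measured_condition: float  # e.g. a 0-100 condition index
    required_condition: float  # the contracted LOS threshold

    @property
    def conforms(self) -> bool:
        return self.measured_condition >= self.required_condition

@dataclass
class MonitoringLog:
    records: list = field(default_factory=list)

    def non_conformances(self):
        """Records where the contractor failed to meet the agreed LOS."""
        return [r for r in self.records if not r.conforms]

# Hypothetical inspections of two asset elements.
log = MonitoringLog()
log.records.append(InspectionRecord("drainage culvert", "2008-03-01", 55.0, 60.0))
log.records.append(InspectionRecord("pavement section 4", "2008-03-01", 72.0, 60.0))
```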

Outcome Performance Measure Standards:


Performance-based contracts require a public sector agency to establish a set of
criteria for each asset that is being maintained. The performance criteria should
describe the outcome that is being sought from the contractor in each year of the
contract period. Key to the success of this type of contract is developing performance
standards that provide the contractor with the autonomy needed to achieve the results
specified. For instance, when describing performance criteria on an outcome based
contract, the criteria should not include references to material or qualifications of the
people who are to perform the work. Rather, the performance standards should
establish the outcomes that are sought by the agency, such as the duration above a
certain performance measure, or the percentage frequency of non-conformance of the
minimum Level of Service. To a large degree, a contractor’s bid price will be heavily
influenced by the performance standards that are set in the contract. These standards
will impact many of the contractor’s decisions, including the materials that are
purchased and used, and the equipment and procedures applied to use the materials.

3. Primary Assets Key Performance Indicators

Key Performance Indicators
A new paradigm of performance measurement has been adopted by many asset owner
organisations. This is based on identifying what the business does in terms of levels
of service and attaching Key Performance Indicators (KPIs) to those services. The
recording and analysis of KPIs should significantly contribute to the achievement of
the organisation’s business goals. Key Performance Indicators determine how well
services are provided, i.e. service delivery performance, and how much time is taken
in addressing and correcting performance gaps between intended and actual
performance. Key Performance Indicators are those critical performance measures
which ultimately determine assets serviceability and stakeholder value.

Traditionally, performance management systems have had a financial bias and only
produced information for management. As a result, they have ignored the key issues
of linking assets functional performance to strategic objectives and communicating
these objectives and performance results to all levels of the organisation, not just
management. In addition, assets performance standards need improvement.

There have been two very different schools of thought as to how this can be achieved:
• One school advocates that traditional financial measurements should be
revised and improved to make them more relevant and effective within a
modern business environment.
• The other urges that businesses should ignore financial measures altogether
and focus instead on functional and operational parameters.

Both schools of thought accept that no single measure can provide a clear picture of
an organisation’s assets performance. The complexity of managing assets today,
whether they are industrial or infrastructure assets, requires asset managers to be able
to view performance in several areas simultaneously. The concept developed by
Robert Kaplan and David Norton (1996), specifically the Balanced Scorecard,
measures corporate performance by addressing four basic questions:
• How do customers see us?
• What must the organisation excel at?
• How can the organisation continue to improve and create value?
• How does the organisation appear to existing and potential stakeholders?

The Balanced Scorecard approach requires an organisation to focus on factors which
create long term economic value and strategic benefits such as customers; staff;
processes; stakeholders; and the community. The Balanced Scorecard obliges
management to consider all of the operational measures affecting the business.

The Balanced Scorecard as a Performance Measurement Method


The Balanced Scorecard is a technique developed by Kaplan and Norton (1992) that
helps organisational decision makers to navigate the organisation towards success.
The technique enables organisations to translate their mission and strategy into a
comprehensive set of performance measures that provide the framework for a
strategic performance measurement system. Organisations have used the Balanced
Scorecard to (Kaplan and Norton, 1992; 1996; 2000);
¾ clarify and translate vision and strategy;
¾ communicate and link strategic objectives and measures;
¾ plan, set targets and align strategic initiatives;
¾ enhance strategic feedback and learning;
¾ succeed in realising both tangible and intangible investment benefits.

The Balanced Scorecard measures organisational performance, with emphasis on
financial objectives. However, it also includes the performance drivers of these financial
objectives, and measures performance across four balanced perspectives;
¾ financial perspective;
¾ customer perspective;
¾ internal business processes;
¾ learning and growth.

Developers of the Balanced Scorecard argue that traditional financial measures “tell
the story of the past” (Kaplan and Norton, 1992), and try to address this inadequacy
by complementing past performance measures (financial measures) with the drivers of
future performance indicators (customers, suppliers, employees, processes,
technologies and innovation). The fundamental concept of the Balanced Scorecard is
to derive the objectives and measures from the overall corporate vision and strategy
and to use the four perspectives as a ‘balanced’ framework to monitor and achieve
these objectives. A properly developed Balanced Scorecard should:
• Represent financial and non-financial measures from all levels of the
organisation (front line to executives).
• Maintain an equilibrium between;
¾ external measures (developed for the stakeholders and customers);
¾ internal measures (developed for the business processes, innovation,
learning and growth);
¾ outcome measures (results from the past) and measures that are for future
performance;
¾ objective (easily quantifiable outcome measures) and subjective
(judgmental performance drivers) outcome measures.
• Include only measures that are elements in a chain of cause-and-effect
relationships that communicate the meaning of the organisation’s (or business
unit’s) strategy.

The Balanced Scorecard as a Public Sector Performance Evaluation Method:


A fundamental feature of the Balanced Scorecard is that it requires each measure to
relate to corporate strategies, and to the other measures, in a cause-and-effect
relationship. The individual measures at each instance would be unique, depending on
corporate goals and strategies (Kaplan and Norton, 1996). Thus, identifying corporate
goals and strategies in relation to the core perspectives is a critical preliminary step in
a Balanced Scorecard approach.
The Balanced Scorecard is applied in private and public sectors from two different
viewpoints. In the private sector, the main emphasis is on financial indicators for
managing the organisation. The private sector responds to fluctuations in market
share, share prices, dividend growth and other changes in the financial perspective. In
the public sector however, entities must respond mainly to legislative acts and are
responsible to Government Agencies and Authorities. The most common difference
between a private sector Balanced Scorecard and a public sector Balanced Scorecard
lies in the purpose of utilising the Balanced Scorecard.
The public sector focuses on cost reduction and customer satisfaction, while the private
sector is mainly focused on revenue generation and profitability. For example, performance
measurement in the Australian Department of Primary Industries (DPI) includes
evaluation of the Department’s outputs against State wide trends. This broad
approach, based on the Balanced Scorecard, takes both internal and external
perspectives of DPI into consideration in measuring performance. In this way, the
department can link their actual performance with the expectations of stakeholders
(DPI, 1999; 2000). As in the standard Balanced Scorecard, DPI uses the same four
perspectives to measure performance and seek improvements.
The Queensland Government’s Balanced Scorecard approach is tightly integrated with
its Managing for Outcomes (MFO) budgetary system. The expected benefits from the
Queensland Government Balanced Scorecard are to;
¾ understand the management approach in a holistic manner;
¾ relate strategy to performance and action;
¾ set performance targets;
¾ focus, communicate and coordinate effort;
¾ reduce/eliminate blind spots;
¾ improve management and performance of the organisation.

The Queensland Government Benefits Realisation Plan consists of six main steps
(Queensland Treasury, 2000a, 2000b);
¾ specifying the appropriate business drivers;
¾ identifying key stakeholders;
¾ determining the Balanced Scorecard perspectives;
¾ identifying and applying method/s of measuring benefit;
¾ identifying initiatives to achieve the recognised benefits;
¾ deploying a risk management strategy including potential risks, constraints
and dependencies.

The Balanced Scorecard and Strategy Maps


Kaplan and Norton further extend the Balanced Scorecard concepts (2000; 2001).
Strategy Maps are combined with Balanced Scorecards to provide a framework for
describing and implementing strategy. According to Kaplan and Norton (2001), a
strategy map is "a logical comprehensive architecture for describing strategy. It
provides the foundation for designing a Balanced Scorecard that is the cornerstone of
a strategic management system". Strategy Maps reveal the cause-and-effect linkages
needed to transform intangible assets into tangible financial outcomes. Strategy Maps,
if designed correctly, may provide the solution to the problems discussed by Ittner and
Larcker in the section above.
Strategy Maps:
A strategy map is a tool for translating strategy into operational terms. Strategy maps,
combined with Balanced Scorecards, provide a framework for describing and
implementing strategy. The key for implementing strategy is to have everyone in the organisation
clearly understand the strategic decision-making hypotheses, to align resources with
these hypotheses, to test the hypotheses continually, and to adapt as required. A
strategy map makes the outcome more apparent and helps the organisation build a
cause-and-effect perspective. When looking at the financial outcomes of growth and
productivity, it clarifies the contribution of non-financial measures such as quality and availability.
Vision and strategy complement each other, but there needs to be a strategy in order
for any outcomes to be achieved. The Balanced Scorecard helps keep the organisation
focused on its strategy in making the vision and the desired outcomes a reality.
The Balanced Scorecard is a good indicator of success in a company because it
identifies and aligns the components of an organisation. Some strategies such as the use
of KPI (key performance indicators) and constituent/stakeholder scorecards may not
lead to successful outcomes because they do not reflect the strategy of the
organisation. You cannot look at the scorecard and understand the strategy. The
Balanced Scorecard helps to adapt vision, strategy, and outcomes so that they are more
comprehensible to those who have to implement them.

Strategy Scorecards in the Public Sector:


Government Agencies in general have trouble defining strategy as a result of their
integrated funding for capital assets investment and service delivery based on
community demand. Using the Balanced Scorecard, they must change their focus
from product leadership and customer intimacy to local processes and improvement.
Unfortunately, many of these organisations use KPI (key performance indicators)
scorecards as a guiding map instead of concentrating on specific objectives.
These agencies have trouble deciding which goals to place at the top of their hierarchy. In
general, agencies have three objectives: costs incurred, value created, and funded
support. Overall, the Balanced Scorecard helps these organisations to highlight the
importance of the human resources interface with their asset management functions.
The human resources function does not typically align to strategy. In order to bridge
this gap between company and personnel objectives, strategy-focused organisations
have linked the Balanced Scorecard with the assets management cycle.

Key Performance Indicator Dimensions


The alignment of Key Performance Indicators (KPIs) with an organisation’s
vision/mission and strategies/objectives is the key to realising bottom-line impact.
The challenge is to develop KPIs that provide a holistic and balanced view of the
business. Faced with potentially hundreds (if not thousands) of candidate metrics,
how does one select those that are most meaningful? One potential approach is to
think of individual KPIs not just as a singular metric, but as a balanced metric that
incorporates several alternative dimensions. These dimensions include business
perspectives (customer, financial, process and development), measurement families
(cost, productivity, quality) and measurement categories (direct, additive, composite).
By overlaying these various dimensions, one can create a framework for building
KPIs that succinctly captures the most critical business drivers (DMRM, 2004).
The KPI Perspective Dimension:
Since the early 1990s when Robert Kaplan and David Norton (1996) introduced the
Balanced Scorecard methodology for performance management, the conceptual
framework has been enthusiastically embraced by corporate America. As a
performance management tool, the Balanced Scorecard is designed to assist
management in aligning, communicating and tracking progress against ongoing
business strategies, objectives and targets. The Balanced Scorecard is unique in that it
combines traditional financial measures with non-financial measures to measure the
health of the company from four equally important perspectives:
• Financial: Measures the economic impact of actions on growth, profitability
and risk from the shareholder's perspective (net income, ROI, ROA, cash flow).
• Customer: Measures the ability of an organisation to provide quality goods
and services that meet customer expectations (customer retention, profitability,
satisfaction and loyalty).
• Internal Business Processes: Measures the internal business processes that
create customer and shareholder satisfaction (project management, total
quality management, Six Sigma).
• Learning and Growth: Measures the organisational environment that fosters
change, innovation, information sharing and growth (staff morale, training,
knowledge sharing).
Although the focus of each perspective is distinctly different, there is a common
thread of causality that provides a universal linkage between the four perspectives.
For example, if a company invests in learning and growth to improve employee skills
and elevate morale, then those results will be translated into improved internal
business processes by leveraging best practices and change management programs
such as Six Sigma and TQM.
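The four perspectives above can be pictured as a simple data structure. The sketch below is an illustrative assumption only (the measure names are taken loosely from the bulleted examples, the structure itself is invented), showing how a scorecard might be checked for coverage of all four perspectives:

```python
# A minimal sketch of a four-perspective scorecard. All names are
# illustrative assumptions, not a prescribed format.
PERSPECTIVES = ("financial", "customer", "internal_process", "learning_growth")

scorecard = {
    "financial": ["net income", "ROI", "ROA", "cash flow"],
    "customer": ["customer retention", "satisfaction", "loyalty"],
    "internal_process": ["project management", "TQM", "Six Sigma"],
    "learning_growth": ["staff morale", "training", "knowledge sharing"],
}

def is_balanced(card):
    """A scorecard is 'balanced' only if every one of the four
    perspectives carries at least one measure."""
    return all(card.get(p) for p in PERSPECTIVES)

print(is_balanced(scorecard))  # True: all four perspectives are covered
```

A card listing only financial measures would fail this check, which is precisely the imbalance the Balanced Scorecard methodology is designed to avoid.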
The KPI Family Dimension:
Another important consideration in the development of KPIs is the selection of the
appropriate measurement family to capture operational performance over time and
then relate these KPIs to internal business and external industry benchmarks.
Although the following list reflects common measurement families, different
industries will have their own specific business drivers and related measures.


• Productivity: Measures employee output (units/transactions/dollars), the
uptime levels and how employees use their time (sales-to-assets ratio, dollar
revenue from new customers, sales pipeline).
• Quality: Measures the ability to meet and/or exceed the requirements and
expectations of the customer (customer complaints, percent returns, DPMO or
defects per million opportunities).
• Profitability: Measures the overall effectiveness of the management
organisation in generating profits (profit contribution by segment/customer,
margin spreads).
• Timeliness: Measures the point in time (day/week/month) when management
and employee tasks are completed (on-time delivery, percent of late orders).
• Process Efficiency: Measures how effectively the management organisation
incorporates quality control, Six Sigma and best practices to streamline
operational processes (yield percentage, process uptime, capacity utilisation).
• Cycle Time: Measures the duration of time (hours/days/months) required by
employees to complete tasks (processing time, time to service customer).
• Resource Utilisation: Measures how effectively the management organisation
leverages existing business resources such as assets, bricks and mortar, and
investments (sales per total assets, sales per channel, win rate).
• Cost Savings: Measures how successfully the management organisation
achieves economies of scale and scope with its people and practices to control
operational and overhead costs (cost per unit, inventory turns, cost of goods).
• Growth: Measures the ability of the management organisation to maintain a
competitive economic position in the growth of the economy and industry
(market share, customer acquisition/retention, account penetration).
• Innovation: Measures the capability of the organisation to develop new
products, processes and services to penetrate new markets and customer
segments (new patents, new product rollouts, R&D spend).
• Technology: Measures how effectively the IT organisation develops,
implements and maintains information management infrastructure and
applications (IT capital spending, CRM technologies implemented, Web-
enabled access).

The perspectives and measurement families can now be combined to develop a KPI
profile matrix, which provides a construct for balancing the number and types of KPIs
that are developed. The profile matrix also ensures the proper mix of financial and
non-financial measures - typically a shortfall of most performance management
implementations.
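As a hedged illustration of such a profile matrix, the following sketch (all KPI tags are invented) counts KPIs by perspective-family pair and checks the share of financial measures in the mix:

```python
# Hypothetical KPI profile matrix: each KPI is tagged with a
# (perspective, family) pair; cells count KPIs per combination.
from collections import Counter

kpis = [
    ("financial", "profitability"),
    ("financial", "cost_savings"),
    ("customer", "quality"),
    ("internal_process", "cycle_time"),
    ("internal_process", "process_efficiency"),
    ("learning_growth", "innovation"),
]

profile_matrix = Counter(kpis)

def financial_share(tagged):
    """Share of KPIs in the financial perspective, a quick check that
    the mix is not skewed toward financial measures."""
    financial = sum(1 for perspective, _ in tagged if perspective == "financial")
    return financial / len(tagged)

print(profile_matrix[("financial", "profitability")])  # 1
print(financial_share(kpis))  # 2 of 6, roughly 0.33
```

A share close to 1.0 would signal exactly the shortfall described above: a scorecard dominated by financial measures.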


The KPI Category Dimension:


Once the KPI perspective and KPI family dimensions are identified, the next task is to
determine what form the measures should take; i.e. the KPI category. An effective
KPI is rarely just a raw data point; it is usually a data derivative such as a ratio,
index or weighted average. The first step in creating a derived measure is to
standardise the measures so that comparisons across different
divisions/functions/departments are consistent. Normalisation, the most common
technique, places all the measures on a similar basis by equalising them across a
common organisational base (e.g., per square meter, etc.). Critical to successful
implementation of such measures is an enterprise-wide commitment to this
standardisation process. Business silos and function-specific metrics need to be
eliminated and replaced with new enterprise standards that ensure enterprise-wide
optimisation. After normalisation, development of the KPI category dimension is the
next critical step. Potential options include variations such as direct, percentage, ratio,
index, composite and statistical categories:
• Direct: The actual raw data value as measured.
• Percent: The comparison of the changes in performance of one value relative
to the same value at a different time, geography, etc.
• Simple Ratio: The comparison of one value relative to another to provide a
benchmark for comparison of performance.
• Index: A combination of several separate measures added together that result
in an overall indicator of performance for a specific geography.
• Composite Average: The addition of the weighted averages of several similar
measures that result in an overall composite indicator of performance (e.g.,
customer satisfaction is a mixture of results from surveys and focus groups).
• Statistics: Multiple measures such as the mean, variance and standard
deviation that capture the spread and distribution of the performance measures.

In most situations, the direct data elements that need to be incorporated in a specific
KPI are quite apparent up front. The real challenge is in translating the data elements
into meaningful derived metrics that reflect true business drivers.
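The derived-metric categories above can be illustrated with a few small calculations. The figures and names below are invented for the example, following the normalisation idea (e.g. cost per square metre) described earlier:

```python
# Illustrative sketches of the KPI categories, applied to made-up
# maintenance figures. All values are assumptions for the example.

def percent_change(current, previous):
    """Percent: change of one value relative to the same value earlier."""
    return (current - previous) / previous * 100.0

def simple_ratio(value, base):
    """Simple ratio: one value relative to another, e.g. cost per m2."""
    return value / base

def composite_average(measures_weights):
    """Composite average: weighted average of several similar measures."""
    total_weight = sum(w for _, w in measures_weights)
    return sum(m * w for m, w in measures_weights) / total_weight

# Normalise maintenance cost per square metre so two buildings compare.
cost_a = simple_ratio(120_000, 4_000)   # $/m2 for building A -> 30.0
cost_b = simple_ratio(90_000, 2_500)    # $/m2 for building B -> 36.0

growth = percent_change(110, 100)       # 10.0 percent year on year

# e.g. customer satisfaction mixing a survey score (weight 0.7) and a
# focus group score (weight 0.3), as in the composite example above.
satisfaction = composite_average([(80, 0.7), (70, 0.3)])  # 77.0
```

Without the per-square-metre normalisation, the raw costs would suggest building A is the more expensive asset; the normalised ratio shows the opposite.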

The KPI Focus Dimension:


After incorporating the KPI perspective, family and category dimensions into the
development of KPIs, one needs to consider the final overlay; i.e. the focus. The focus
dimension reflects an eclectic mixture of views that further balance the development
and selection of KPIs, such as:
• Time Horizon - short-term vs. long-term
• Planning - strategic vs. tactical
• Indicator - lead vs. lag
• Type - qualitative vs. quantitative
• View - internal vs. external
• Level - process vs. outcome
• Purpose - planning vs. control.


It is important to screen the final KPIs to ensure that they are not all skewed toward
short-term, quantitative, tangible and lag indicators, which are easiest to develop. For
example, tangible assets such as investments are a lot easier to quantify with a
monetary value than intangible assets such as employees’ skill, talent, knowledge and
teamwork. Values for the latter are much more difficult to capture, but they are
typically a much better indicator of the company’s future potential. The bottom line is
that the creation of effective KPIs requires an extensive commitment in time and
resources. This effort can be streamlined by incorporating these KPI dimensions.
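One possible way to screen a KPI set for such skew, sketched here with invented tags and an arbitrary threshold, is to compute the share of lag indicators and flag a set that exceeds it:

```python
# Hypothetical screen over the focus dimension: flag a KPI set that is
# skewed toward lag indicators, which are the easiest to develop.

def lag_share(kpi_tags):
    """Fraction of KPIs tagged as lag indicators."""
    lags = sum(1 for tag in kpi_tags if tag["indicator"] == "lag")
    return lags / len(kpi_tags)

kpi_tags = [
    {"name": "net income", "indicator": "lag", "horizon": "short"},
    {"name": "customer complaints", "indicator": "lag", "horizon": "short"},
    {"name": "training hours", "indicator": "lead", "horizon": "long"},
    {"name": "R&D spend", "indicator": "lead", "horizon": "long"},
]

# Flag the set if more than, say, 70% of indicators are lag measures;
# the threshold is an assumption, not a prescribed value.
skewed = lag_share(kpi_tags) > 0.7
print(skewed)  # False: this set mixes lead and lag indicators
```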

Use of KPIs for Decision Making:


One of the most important uses of KPIs is to provide guidance for business decisions.
Improvement will only occur if the corresponding KPIs are considered when
important strategic and operational decisions are made. Decision-makers need to
examine the implications of business initiatives, as well as the cost/benefit trade-offs
of performance improvement. Therefore, implementation of KPIs should be
coordinated with decision-making processes as follows:
• Both financial and non-financial KPIs should be included in the company
strategy for decision-making regarding important objectives. This will
encourage decision analysts to include consideration of specific
performance goals.
• A ‘business case’ should attempt to describe how a proposed decision
increases (or decreases) value for internal and external stakeholders, and to
identify the implied enterprise value using various types of indicators. The
business case should try to articulate the linkages between performance
improvement and business value creation.
• Major decisions that are implemented, such as capital asset investment
projects, should be tracked as part of the company’s performance
measurement process to assure that the expected benefits in terms of KPI
improvement are realised.

Process for KPI Selection:


Most leading companies pursue a careful, deliberate process for selection of
performance indicators. The following describes a process whereby companies can
select aggregate indicators and flow them down to the business unit or facility level.
The process involves a series of steps, listed below:
• Consider Stakeholder Needs.
• Identify Important Aspects.
• Establish Company Goals and KPIs.
• Select Performance Indicators and Metrics.
• Set Targets and Track Performance.
• Track Performance.
• Review Company Goals and KPIs.
• Revise Indicators and Targets.


Step 1. Consider Stakeholder Needs. This step examines the needs and expectations
of various stakeholder groups, an essential activity for any performance improvement
program. It provides a useful starting point for identifying important performance
aspects and worthwhile goals. At this stage, it is useful to perform a baseline
assessment of current performance for any performance-related goals that may have
been established previously.
Step 2. Identify Important Aspects. This step addresses the question: What aspects of
the enterprise are most important in fulfilling organisational commitment to
performance? It involves identifying performance-related aspects and selecting those
that are most significant in view of stakeholder expectations, emerging issues,
industry trends, and the company’s strategic goals.
Step 3. Establish Company Goals and KPIs. This step involves choosing a high-
priority subset of those aspects identified in Step 2, and establishing broad goals and
KPIs for performance improvement. The setting of goals is a critical part of the
company and business unit strategic planning process.
Step 4. Select Performance Indicators and Metrics. This step takes the company
goals and KPIs from Step 3 and determines how they will be implemented throughout
the company’s operations. It includes selection of focused performance indicators and
corresponding operational metrics.
Step 5. Set Targets and Track Performance. This is actually the beginning of an
ongoing, continuous improvement process. Managers periodically establish specific,
measurable targets that represent milestones for short and long-term performance
improvement. Then they monitor performance relative to these targets, and update the
targets or performance indicators as needed.
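Step 5 can be sketched as a simple comparison of actuals against targets. The metrics, figures and threshold directions below are illustrative assumptions, not drawn from the source:

```python
# Illustrative sketch of Step 5: set measurable targets, then monitor
# performance against them. All metric names and figures are invented.
targets = {"on_time_delivery_pct": 95.0, "cost_per_unit": 12.0}
actuals = {"on_time_delivery_pct": 91.5, "cost_per_unit": 11.4}

# Whether a miss means "too low" or "too high" depends on the metric.
higher_is_better = {"on_time_delivery_pct": True, "cost_per_unit": False}

def shortfalls(targets, actuals):
    """Return the metrics currently missing their targets."""
    missed = []
    for name, target in targets.items():
        actual = actuals[name]
        if higher_is_better[name]:
            if actual < target:
                missed.append(name)
        elif actual > target:
            missed.append(name)
    return missed

print(shortfalls(targets, actuals))  # ['on_time_delivery_pct']
```

In the ongoing improvement loop described above, a manager would periodically re-run this comparison and revise either the targets or the indicators themselves.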

Performance Indicator Frameworks


A performance indicator can be defined as any quantifiable attribute of an enterprise’s
activities that characterises the potential implications of these activities with respect to
performance. It is common to distinguish between indicators (which define what is to
be measured) and metrics (which define how it will be measured). Thus, an indicator
might be energy consumption, and an associated metric might be BTU/ton. Although
environmental conditions may not be directly linked to company activities, they often
reflect important stakeholder concerns that may influence the company’s choice of
performance indicators. The following summarises a number of conceptual
frameworks that have been proposed by various organisations to facilitate selection of
performance indicators (Battelle, 2002).
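The indicator/metric distinction might be captured in a tiny record type. The energy consumption and BTU/ton pair comes from the text above; the second entry is an invented example:

```python
# Sketch of the indicator (WHAT is measured) versus metric (HOW it is
# measured) distinction. The second record is a hypothetical example.
from dataclasses import dataclass

@dataclass(frozen=True)
class PerformanceIndicator:
    indicator: str   # what is to be measured
    metric: str      # how it will be measured

energy = PerformanceIndicator("energy consumption", "BTU/ton")
condition = PerformanceIndicator("asset condition", "condition index 0-100")

print(energy.metric)  # BTU/ton
```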

ISO 14031:
The International Organization for Standardization (ISO) has released ISO 14031, a
guideline document for evaluating corporate environmental performance. It suggests a
simple three-step process – Plan, Do (also called Implement), and Review. The plan
step involves assessing the current situation and designing an evaluation system that
will be effective for the enterprise. During implementation the plan is put into action
and integrated with existing processes. Finally, the review step allows managers to
collect information and improve the process. Most companies will need to customise
this approach to ensure that stakeholder concerns are addressed adequately and that
the system is compatible with existing company practices.


Shortcomings of Key Performance Indicators


Key Performance Indicators employ a faulty definition of performance and do not
measure the business. One of the main causes of 20th century management problems
is the definition of performance. Performance is defined as both the actions executed
and the results accomplished. This definition is used in performance management,
Key Performance Indicators (KPIs), and other business performance measures.
However, the definition prevents results being separated from performance, so that
the actual utilisation of capital (performance) that incurs costs and produces value
(results) cannot be managed.
Key performance indicators do not measure actual business performance. In order to
organise and manage the actual business it is necessary to use the concept of Results
Performance Management to separate the three components of the business:
• Capital defined as the set of assets and capabilities in specific performance
solutions of positive (asset) or negative (liability) worth invested in and
supported for utilisation to produce specific business results.
• Performance defined as the utilisation of capital by the business in specific
performance solutions at a level of effectiveness to incur costs and create
value and quality in specific results.
• Results defined as the set of economic outputs of positive or negative value
that can be measured and counted and are produced over time against goals.
Many of the measures of capital, performance, and results are mixed together as KPIs
or are defined as performance. KPIs are a mixture of result, capital, performance, and
other measures that are produced without the framework of the actual business. Most
KPIs measure the enterprise related to overlaid structures, rather than the business,
and report artificial entities like a monolithic process, department, activity, task,
centre, object, etc. Very few actual business performance indicators are measured and
produced for management. Most capital, results, actual performance, costs, worth,
value, effectiveness, quality, etc. remain unknown.
Capital must be defined and organised as performance solutions to manage utilisation
by the business. Many key performance indicators describe the tangible or known
capital utilised by the business. Capital is all the specific tangible and intangible assets
utilised as specific performance solutions to produce results. Results Performance
Management describes capital assets as a set of specific performance solutions with a
unique solution identifier key. Capital asset descriptors or metrics describe or measure
the attributes of performance, such as (Greene, 2007):
• Category of capital asset for support.
• Class of capital asset for utilisation.
• Capacity to produce results in general.
• Capital worth and the assessed cost.
• Utilisation indicators in capacity, time, amount consumed, etc.
• Effectiveness, both general and actual, in performance.
• Expectations from the utilisation and effectiveness over time.
• Uncertainty due to lower availability, utilisation, effectiveness, etc.
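Under stated assumptions (all field names and values below are invented), the descriptor set above could be held as one keyed record per performance solution:

```python
# Hypothetical record for one capital asset organised as a performance
# solution with a unique solution identifier key, carrying descriptors
# like those listed above. Every field name and value is an assumption.
solution = {
    "solution_key": "SOL-0042",          # unique solution identifier
    "category": "infrastructure",        # category of capital asset
    "asset_class": "pumping station",    # class of capital asset
    "capacity": 500,                     # e.g. megalitres/day
    "worth": 2_400_000,                  # capital worth / assessed cost ($)
    "utilisation_pct": 72.0,             # utilisation indicator
    "effectiveness_pct": 88.0,           # actual effectiveness
    "uncertainty": "lower availability during summer peak",
}

def utilisation_gap(record):
    """Spare capacity implied by the utilisation indicator."""
    return 100.0 - record["utilisation_pct"]

print(utilisation_gap(solution))  # 28.0
```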


The descriptors and measures are examples of those common to all capital assets, but
are not in any sense complete. Furthermore, they can never be known or managed if
capital assets are not organised, recorded, and utilised as a set of performance
solutions. Key performance indicators describe output results; however, some results
are managed as isolated entities, and most results are not defined or managed at all.

Results Performance Management identifies results within the set of results
performance measures. These performance descriptors and metrics describe and
measure the attributes of results, such as:
• Group: Revenue (service), Capital assets (delivery), and Investment (Projects).
• Level: End-result from performance, or set-result containing end-results.
• Goal: The planned volume or value of results by time period.
• Volume: The count or measured quantity of results produced in a time period.
• Value: The value per result and the total value produced in a time period.
• Added value: Value increase to the result through performance improvement.
• Costs: The costs of all performance solutions utilised to produce the result.
• Value-added: The result value less total performance costs for specific results
in a time period.
• Productivity: The actual service delivery against a standard in a time period.
• Quality: The determinant of the quality planned and the actual quality produced.
• Risk: The potential of the result not being produced as planned.
• Symptom: The impact of performance problems on producing the result.
• Customer: The internal or external customer that is willing to pay a value for a
determined level of quality.

Very few of today’s key performance indicators actually indicate performance.
Performance is the utilisation of a specific performance solution to produce a specific
result. Results Performance Management sets up a performance record in the
performance set when a performance solution is deployed to produce a specific result.
Performance is identified by a specific solution key and a specific result key.
Performance descriptors and indicators describe and measure the performance of a
solution to produce a result. Performance is aggregated for both the performance
solution and the result.
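A minimal sketch of such a performance record follows, keyed by a solution key and a result key, with value-added computed as the result value less total performance costs. All keys and figures are invented for the example:

```python
# Hypothetical performance records identified by a specific solution
# key and a specific result key, as described above. Figures invented.
performance = {
    ("SOL-0042", "RES-0007"): {"result_value": 150_000.0, "cost": 110_000.0},
    ("SOL-0042", "RES-0008"): {"result_value": 60_000.0, "cost": 75_000.0},
}

def value_added(record):
    """Value-added: result value less total performance costs."""
    return record["result_value"] - record["cost"]

def solution_value_added(perf, solution_key):
    """Aggregate performance for one solution across all its results."""
    return sum(value_added(r)
               for (sol, _), r in perf.items() if sol == solution_key)

print(value_added(performance[("SOL-0042", "RES-0007")]))  # 40000.0
print(solution_value_added(performance, "SOL-0042"))       # 25000.0
```

Note that the second result is produced at a loss; aggregating by the solution key surfaces this, which is the kind of visibility the text argues conventional KPIs do not provide.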

Key performance indicators must be replaced by actual business metrics. Performance
measurement and performance management require extensive resources and
information systems, but most performance reported is against contrived measures.
The reports are difficult to use for asset management and become superfluous when
the actual business is reported and managed. Results Performance Management
replaces measurement and management of the enterprise with focused measurement
and management of the business. Results Performance Management solution modules
can be used to manage routine performance for a set of results or results under a
business organisation unit solution.


4. Principles of Assets
Performance Measurement
Asset Management and Asset Performance Measurement
Asset Management Decisions
Asset management is a dynamic activity and throughout their life cycle all assets are
constantly subjected to changing levels of usage, costs, physical condition, and value.
All public sector organisations should have plans for the management of their assets
throughout their life cycle. The day to day management of these plans involves a wide
range of business decisions about utilisation, maintenance, investment, and disposal
that enable agencies to exercise their rights of asset ownership and discharge the
associated responsibilities. In essence all major asset management decisions are made
on the basis of information about the asset’s capacity and usage, condition, costs and
value. The regular measurement, reporting and evaluation of this information enables
an agency to determine if its assets are being managed in the most efficient and
effective manner to achieve its service delivery goals.
The focus of performance measurement in asset management is the relationship and
value an asset has to organisation-wide desired outcomes. An asset life cycle
management approach incorporates asset management practices to monitor and assess
asset performance using operational and financial criteria (e.g. functionality,
utilisation, condition, value, and cost of asset ownership). This is achieved based on
two precepts in an organisation’s decision-making process. First, asset usage
planning contains only actionable, well-defined criteria that provide auditable
verification of an asset’s deficiency correction with a corresponding direct response to
the related performance metrics. Second, the assets performance data is highly
structured and can be logically integrated with other highly structured datasets to
enable the calculation of performance outcome metrics. This information in turn can
be used as assets performance targets or objectives.
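The second precept, joining highly structured datasets to calculate outcome metrics, might look like the following sketch (asset IDs, condition scores and costs are invented for illustration):

```python
# Illustrative join of two structured asset datasets - condition scores
# and annual ownership costs - to produce an outcome metric (cost per
# condition point) that could serve as an asset performance target.
condition = {"A-001": 85, "A-002": 60}               # condition index, 0-100
ownership_cost = {"A-001": 42_500, "A-002": 51_000}  # $/year

def cost_per_condition_point(cond, cost):
    """Join the datasets on asset ID and compute the outcome metric."""
    return {asset: cost[asset] / cond[asset]
            for asset in cond if asset in cost}

metric = cost_per_condition_point(condition, ownership_cost)
print(round(metric["A-001"], 1))  # 500.0
print(round(metric["A-002"], 1))  # 850.0
```

Here the asset in worse condition also costs more per condition point, flagging it as a candidate for renewal or disposal analysis.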

Principles of Performance Measurement


Performance measurement is an integral component in accountability. Despite its
complexity, six principles guide performance measurement of outcomes (OSU, 1998).
Principle One: Know What Performance Measurement Is:
Performance measurement determines the success of a specific program or initiative
by comparing plans to actual activities and outputs, and outcomes. Performance
measurement is characterised by its inclusion in ongoing operations as part of
everyday management. Measures are used that both inform service delivery and
demonstrate results. Performance measurement essentials include;
¾ a clear vision of intended activities and results;
¾ systematically gathered information about actual participants, activities
and outcomes;
¾ careful analysis of efforts and outcomes to inform future decision-making.


Principle Two: Useful, Accurate and Feasible:


Performance measurement describes what is accomplished. When a performance
measurement system is established, many decisions have to be made such as what to
record, and when and how to gather information. It is essential to know what
guidelines can be used to make these decisions, and what the characteristics of a good
performance measurement system are. Good performance measures can
demonstrate results during the time of service, and are relevant, accurate, and feasible.
They help identify strengths and shortcomings, and measure outcomes that are
valued by stakeholders, including decision-makers. Keys to a good performance
measurement system include the following characteristics;
¾ it is useful;
¾ it is accurate;
¾ it is feasible.
Useful information:
Good performance measurement provides information that is timely and relevant to
service planning and delivery. Useful reports should summarise findings and be
accessible for decision-making. Most importantly, useful information is able to show
the impact on outcomes.
Accurate information:
Good performance measurement provides information that is believable and correct.
Accurate information builds on valid and impartial standards, reliable procedures, and
reasonable interpretations and conclusions. A limited amount of accurate information
is better than a lot of inaccurate and incomplete information. Effective performance
measurement strives to gather information that is accurate.
Feasibility:
In judging what is feasible, it is critical to balance the value of the information that
will be gathered to the costs of collecting that information. Good performance
measurement uses resources realistically. This means that information is gathered in a
manner that is manageable over time.

Principle Three: Begin with a Logic Model:


Performance measurement compares what was intended against what has occurred.
Thus, the first critical step in performance measurement is the creation of a logic
model that clarifies the exact inputs, activities and outputs, and planned outcomes.
The logic model shows a ‘chain’ of activities and outputs that are intended to lead to
the planned outcomes. Usually, logic models are created in several steps:
Step one:
Define the long-term goal and/or benchmark to be reached.
Step two:
Define the basic components that will be measured. Be sure these components reflect
essential approaches that are known to lead to the desired outcomes.
Step three:
Define the consecutive outcomes that should occur. This is called the ‘so that’ chain.
Essential knowledge and skills that lead to long term goals and benchmarks can help
define the outcomes in a ‘so that’ chain.
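The ‘so that’ chain can be written down as an ordered list of steps, each intended to lead to the next, ending at the long-term goal. The step wording below is an invented example, not taken from the source:

```python
# Illustrative 'so that' chain for a hypothetical asset management
# logic model: activity -> output -> outcomes -> long-term goal.
so_that_chain = [
    "staff are trained in condition assessment",       # activity
    "asset condition data is collected consistently",  # output
    "maintenance is targeted at the worst assets",     # intermediate outcome
    "service interruptions decline",                   # outcome
    "community service delivery goals are met",        # long-term goal
]

def narrate(chain):
    """Render the chain as a readable 'so that' statement."""
    return " so that ".join(chain)

print(narrate(so_that_chain[:2]))
```

Reading the rendered chain aloud is a quick test of the logic model: if any "so that" link sounds implausible, the chain of activities and outputs needs revising.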


Principle Four: Know the Capacity for Assessment:


In considering capacity to conduct performance measurement, many decisions need to
be taken. Trade-offs will be made between what would produce the best, most
accurate information, and what is actually possible and practically feasible to do.
Accuracy and feasibility have to be balanced. Once the capacity for assessment has
been realistically defined, a better selection of appropriate measures for planned
outcomes, as well as a realistic design for collecting information with these measures,
can be made. More information on selecting measures is considered later; Principle Four
addresses the design of performance measurement by considering various levels of
capacity to conduct performance measurement and outcome assessment. For instance:
• Level one capacity is limited to the most simple of data collection methods.
These data collection programs should probably rely on simple assessments
conducted at the end of the program to identify outcomes.
• Level two capacity for performance measurement means that data collection
programs are able to collect all the level one information and more. For
example, at level two, a data collection program might add observations to
outcome assessments or compare observations. Simple goal attainment scaling
is also a level two strategy.
• Level three capacity exceeds levels one and two. At level three, data collection
programs are ready to assess multiple outcomes, use multiple assessments,
and/or use more complicated approaches such as observation scales and goal
attainment scales. Some of these may be collected in a ‘pre-evaluation’ design
which requires more complicated record-keeping and analysis.

The most rigorous evaluation designs include the following:


• Define and consistently apply various performance conditions.
• Apply random sampling with similar performance conditions.
• Utilise sophisticated measures and collect information at multiple points.
• Utilise sophisticated statistical analysis.

Principle Five: Know the Design for Performance Measurement:


Based on the capacity for performance measurement, the appropriate performance
measures as well as the design for collecting the appropriate performance information
can be selected. The design is the master plan for conducting performance
measurement, or performance evaluation. At a minimum, the design specifies:
• When, how, and what information or data will be collected.
• From where information or data will be collected.
• How information or data will be analysed.
Some designs are very complicated. These designs are used in research and evaluation
in order to understand causality by answering the question: ‘Does this service result in
a particular outcome?’. Unlike experimental design, performance measurement does
not seek to prove causality. Rather, performance measurement assesses how well
intended outcomes are achieved. Performance measurement relies on simpler designs
and must fit in the ‘real world’ of service delivery.


Principle Six: Use and Report What is Assessed:


The purpose of performance measurement in an asset management context in the
public sector is to improve service delivery. Performance measurement is therefore
integrally tied to comprehensive assets strategic planning and community needs
assessment. These close ties make the effective utilisation of information from
performance measurement possible. Nevertheless, utilising performance measurement
findings is dependent upon several factors, including:
• Commitment by senior management with resources and technical assistance.
• Commitment to ‘continuous improvement’ in which findings are used
primarily to strengthen organisational services programs and initiatives – not
primarily to inform budget rationalisation.
• Clear relevance of performance information to assets strategic planning and
implementation decisions.
• Effective communication of findings.

Basically, performance measurement findings are utilised when;


¾ senior management is committed;
¾ continuous improvement is emphasised;
¾ information is relevant and timely;
¾ reports are accessible.

Effective reports clarify facts, meanings and interpretations, and make recommendations.


To be useful, information has to be timely, focused, and accessible. Accessible reports
are focused and clearly separate facts, meaning, judgements, and recommendations. In
creating any report, written or verbal, first report the facts of the performance
assessment, then describe the logic model of the performance measurement program.
Thereafter, describe what, how, and from where information is collected (the design)
with a focus on whether the program achieved its intended activities and outcomes.
Report frequencies, percentages, average scores, or other evidence in summary form.


Public Sector Asset Management Principles


Over recent years the concept of ‘asset management’ in the public sector has been
evolving as the “systematic approach to the procurement, maintenance, operation,
rehabilitation and disposal of assets, which integrates the utilisation of assets and their
performance with the business requirements of the owner and users” (APCC, 2001).
For its part the Australasian Procurement and Construction Council (APCC) has
established the following principles to enable asset management to be integrated into
the mainstream of public sector business planning:
• Assets should exist to support production or service delivery;
• Responsibility for asset management decisions should reside with the
organisation that controls the assets;
• Asset management within agencies must be consistent with whole-of-
government policy frameworks;
• Strategic planning and management of assets are key corporate activities, to be
considered along with the strategic planning for other resources such as
Human Resources and Information Technology;
• Full costs of providing, operating and maintaining assets should be reflected in
the organisation’s budgets;
• Public sector organisations should report on the usage, maintenance and
overall performance of their assets;
• Before deciding to acquire new assets, agencies must consider all relevant
factors including non-asset solutions, full life cycle costing, risk analysis and
the better use of existing assets;
• Asset management decisions should meet the needs of the present without
compromising the needs of future generations; and
• Public sector organisations should preserve our heritage, cultural and
environmental values.


5. Infrastructure Assets
Performance Specifications
Framework for Performance Specifications
This section describes a suggested framework for the contents of a specification.
However, all procurements are different and it is not intended that this framework
should be prescriptive; the specification for any procurement should reflect the
requirements of the customer organisation and the circumstances of the procurement.
The headings and contents lists will therefore need to be tailored for each
procurement situation (OGC, 2007).

Performance Specification Headings and Contents:


1. Introduction:
This section gives suppliers an introduction to the department and explains the
purpose of the Specification. Things to include are:
• an introduction to the customer organisation;
• an introduction to the specification, its purpose and composition;
• disclaimers, caveats, etc.

2. Scope:
This section sets out the broad scope of the procurement. It covers:
• what is included;
• what is excluded;
• what is optional: extensions for which proposals will be considered;
• treatment of assets, and of staff where transfers are anticipated.

3. Background to the Requirements:


This section provides background information to help suppliers see the requirements
in context. Subjects to cover can include:
• an overview of the business of the organisation, including an outline of
business strategy, an overview of business objectives relevant to the
procurement, a description of the business activities in the area affected by
the procurement, and the role of the procurement in it;
• the history relevant to the procurement - recent developments and future
developments;
• objectives of the procurement;
• business functions and processes;
• information flows;
• organisation and staffing: roles and responsibilities, stakeholders;
• current service support and quantitative aspects of current operations;
• policies and standards.


4. Requirements:
This section sets out the detailed requirements the supplier is to meet. Keep
background and supporting material separate from requirements, and ideally, make
the requirements easy to find. Requirements are often classified as:
• 'Mandatory' - essential requirements that suppliers must meet;
• 'Desirable' - requirements that bring benefits but are not essential;
• 'Information' - requirements that request information from the supplier for
evaluation purposes, but which are not transferred to the contract.
Ensure mandatory requirements really are essential, because suppliers can be rejected
for failing to meet them. Mandatory requirements can be paired with desirable ones;
the mandatory requirement sets out the basic requirement, the desirable expands on it,
or specifies higher performance. If using desirable requirements consider how these
will be evaluated. In some cases meeting desirable requirements is a quality issue and
would be handled by the scoring system used in qualitative evaluation. In other cases,
if a desirable requirement is not met the organisation will need to provide the function
itself, or obtain it from a third party.

5. Functional Requirements:
This section defines the task or desired result usually by focusing on what is to be
achieved, not by describing the way it is to be achieved. This challenges suppliers to
use their skills and develop smart, creative solutions. There are some cases, however,
where it may be appropriate to specify particular types of equipment and provide
drawings, but this should be avoided as far as possible. Specifying requirements in
terms of outputs or functions gives potential suppliers the opportunity to propose
innovative solutions (or simply be more creative in their proposals), and also means
the responsibility for ensuring the solution meets the requirement rests with the
supplier rather than the customer. Use a heading structure that subdivides the
requirement into logical areas that map onto the evaluation model.

6. Performance Requirements:
This section specifies the performance required by setting out details of inputs and
outputs. Example performance measures are:
• throughput - the volume of inputs that can be handled within a specified time;
• accuracy - the number of outputs that are error free (usually expressed as a
percentage);
• availability - actual usable time as a percentage of scheduled time.
Some performance measures are easily defined by reference to existing operations.
Where this is not the case they need to be defined with users and can be informed by
benchmarking information. It is important to set performance measures at the right
level - too high and they can be costly, the cost of meeting the higher performance
level can be higher than the additional benefit obtained; too low and users’
expectations will not be met, and there may be a detrimental effect on the business.
For procurements following the negotiated route it can be beneficial to explore
performance measures and the cost of different levels of service with suppliers. In this
case the requirements in the specification should be indicative rather than fixed.
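To make these example measures concrete, the sketch below computes each one from simple operational counts. The function names and figures are illustrative assumptions only, not values drawn from this guidance.

```python
def throughput(inputs_handled: int, hours: float) -> float:
    """Volume of inputs handled per unit of time."""
    return inputs_handled / hours

def accuracy(outputs_total: int, outputs_with_errors: int) -> float:
    """Percentage of outputs that are error free."""
    return 100.0 * (outputs_total - outputs_with_errors) / outputs_total

def availability(hours_in_use: float, hours_scheduled: float) -> float:
    """Time in use as a percentage of time the asset was supposed to be used."""
    return 100.0 * hours_in_use / hours_scheduled

# Hypothetical figures for a document-processing service:
print(throughput(1200, 8))       # items per hour
print(accuracy(1200, 30))        # percent of outputs error free
print(availability(22.5, 24.0))  # percent of scheduled time available
```

In practice each measure would be reported against a target level agreed with users and, where possible, benchmarked against existing operations.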

7. Other Requirements:

Security - Describe any specific security requirements appropriate to the requirement.


Standards - Set out any standards relating to the goods or services being procured, for
example health and safety, electrical etc. However take care when requiring
conformance to standards, as the European Commission believes that organisations
should consider any proposal that provides technical equivalence to, if not conformity
with, the standard. As a rule of thumb, contracting authorities must therefore:
• specify by reference to national standards which implement European
standards, or other European specifications, where relevant;
• consider offers which purport to offer equivalent functionality or
performance even though they do not conform to the standard in question.
The burden of proving technical equivalence will fall on the supplier.
Training - List the training needs both in-company and through commercial courses.
Constraints - Include any requirements that may constrain the supplier's proposal.

8. Implementation Requirements:
This section covers requirements for the period between awarding the contract and the
entry of the goods or services into use, and includes acceptance. In complex
procurements it can be useful to request information on methodologies and processes
the supplier will use in implementing its proposal, such as:
• project management;
• risk and issue management;
• application development in IT projects.

9. Contract/Service Management Requirements:


Any requirements covering contract/service management, for example:
• management information;
• change management.

10. Procurement and Contractual Requirements:


These requirements differ from the others in that they relate to the procurement
process, not to the resulting contract with the successful supplier. They include:
• the expected nature of contracts - proposed terms and conditions;
• opportunities for suggesting different contract scopes;
• proposed arrangements for management of service contracts;
• roles and responsibilities.

11. Procurement Procedures:


This section provides the suppliers with information on the remainder of the
procurement process. Areas to cover include:
• a procurement timetable;
• evaluation criteria and process;
• contact(s) for further information.

12. Format and Content of Responses:


This section sets out how suppliers are to respond to the Specification. Being clear
about what is required minimises queries, helps suppliers understand what they have
to do, and facilitates the evaluation process by ensuring responses map onto the
evaluation model.
A typical layout for a proposal includes:
• Management Summary - describes the scope covered, gives a resume of the
proposal, highlights the benefits of the proposed solution, and summarises
total costs.
• Understanding of requirements - concisely sets out the supplier's view of
the requirement and the overall aims of the procurement, gained from the
specification and any involvement in the procurement to date.
• Response to requirements - sets out the response to each of the
requirements in the requirements section of the specification. Suppliers
should respond to requirements individually in full, explaining how they
are met (not simply stating 'met'), using the same headings and paragraph
numbering as the specification.
• Costs - sets out how the supplier is to present cost information.
• Further information - any other information that suppliers wish to add.
• Annexes - supporting information that can include details of business
activities; business facts and figures; organisational details; and details of
current services and the technical environment.

Table 1. Typical Contents Page for an Assets Performance Specification


Output Performance Specifications and Performance Measures


Output Specifications:
The specification and definition of outputs, output groups and performance measures
are fundamental to the application of output budgeting and costing in asset management.
Poor specification of outputs, or poor definition of output groups and performance
measures, will obscure management decision-making, leading to sub-optimal decisions.
In the context of public sector service provision, this means knowing the desired
government outcomes, and aligning departmental objectives to the achievement of
those outcomes to which outputs contribute (VGDTF, 2001).
Major Output Categories:
Output categorisation will assist with monitoring and reporting against service
delivery, particularly with regard to comparison and benchmarking. Outputs can
generally be classified into four categories, namely:
• Products and Services
• Capacity
• Projects
• Funding and Grants.
Products and Services:
The majority of departmental outputs fall into the provision of products and services
category. Many of the outputs in this category are the tangible outputs associated with
infrastructure assets. Outputs in the provision of products and services category have
the following characteristics:
• uniquely identifiable;
• quantifiable in units of production; and
• a unit price can be attached.
Capacity:
Capacity outputs have the following characteristics:
• not easily quantifiable as separate units of production;
• the service provided is a response to an unpredictable level of demand;
• departments receive revenue to maintain effective capacity;
• quality is usually judged by the client.
An example of capacity output would include the provision of policy advice. This
advice is in the form of reports, policy papers, policy reviews and policy submissions.
Projects:
These outputs have the following characteristics:
• a single product produced by a long-term process;
• one-off, with a defined start and end date;
• milestones are set during the process;
• milestones achieved are measured with regard to quality, timeliness and cost.
An example of a project output would include major public construction projects.


Funding and Grants:


Outputs relating to funding fall into two categories, namely:
• administration costs;
• grant payments.
Ideally, separate targets are negotiated for grants administration and grant payments.

Output Groups:
Outputs need to be aggregated into manageable amounts of output information or
output groups to assist planning, budgeting, performance monitoring and reporting.
Departments will need to use judgement when defining output groups. Aggregation of
outputs at too high a level may compromise the usefulness of information, while too
low an aggregation may obscure strategic direction. Government bodies may also
wish to see outputs reported which may either represent a material proportion of a
Department's total outputs or may be of particular interest to the community. With
outputs generally classified into categories, performance measures can be used to
assess these outputs for government funding. However, some important questions
first need to be answered: What are performance measures, and why use them?

Performance Measures and Targets:


Performance measures are units of measurement, expressed as targets. They are used
to assess the outputs that are targeted for funding. They seek to measure impact,
quantity, quality, timeliness, and cost. Performance measures provide the tools for
examining organisational performance against delivery of outputs. Performance
measures help establish the building blocks of an accountability system that provides
the drive for continuous improvement and the basis for the certification of revenue.

Developing Performance Measures (Targets):


Performance targets set the quantity, quality, timeliness and cost levels which
departments aim to achieve for delivery of outputs. Measures should be selected on
the basis of the ability to indicate successful delivery of outputs. Quantity measures
describe outputs in terms of how much, or how many. Quantities will conceptually be
different for each output type. However, quantity could take the form of the number
of discrete deliverables or capacity provided. Quality measures are usually defined by
customer expectations. A fundamental aspect of quality is the assumption that the
product or service is defect-free and fits the purpose for which it was intended.
Timeliness measures provide parameters for how often, or within what time frame,
outputs will be delivered, and may be a measure of either:
• efficiency, measured by turnaround times; or
• effectiveness, measured by waiting or response times.
Cost measures should reflect the full accrual cost of producing an output.
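One possible way of recording such targets and checking achievement is sketched below. The output, the target values and the field names are hypothetical; an actual scheme would define its measurement rules with users.

```python
# Hypothetical targets for a single departmental output.
targets = {
    "quantity":   {"target": 500,  "actual": 512,  "higher_is_better": True},   # units delivered
    "quality":    {"target": 95.0, "actual": 96.2, "higher_is_better": True},   # % defect free
    "timeliness": {"target": 10.0, "actual": 11.5, "higher_is_better": False},  # days turnaround
    "cost":       {"target": 4.20, "actual": 4.05, "higher_is_better": False},  # $ full accrual cost per unit
}

def target_met(measure: dict) -> bool:
    """A target is met when the actual value falls on the right side of the target."""
    if measure["higher_is_better"]:
        return measure["actual"] >= measure["target"]
    return measure["actual"] <= measure["target"]

results = {name: target_met(m) for name, m in targets.items()}
print(results)  # in this example only the timeliness target is missed
```

Reporting against all four dimensions together discourages meeting one target (e.g. cost) at the expense of another (e.g. quality).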

Specific checklists are established to include the following:


• Output Specification
• Output Groups
• Performance Measures


Performance Specifications Strategic Roadmap


Performance Specifications (PS) is an umbrella term incorporating performance
related specifications (PRS), performance-based specifications (PBS), and warranties.
In the broadest terms, a performance specification defines the performance
characteristics of the asset and usually links them to items under contracted control.
Performance characteristics (PC) may include items that describe the functionality of
an asset or aspects of functional failure, for example in the case of a road
infrastructure asset, items such as pavement smoothness and strength, or bridge deck
cracking and corrosion, etc. (FHADT, 2007).

When performance of an asset can be estimated using key tests and measurements
linked to the original design via modelling and life cycle costs, the asset specification
structure is commonly described as performance-related or performance-based. When
the condition of the asset is measured after some predetermined time, the specification
structure is commonly known as a warranty. When the asset is described in terms of
component materials, dimensions, tolerances, weights, and required construction
methodology (equipment type, size, etc.), the specifications are commonly known as
method or prescriptive specifications. Currently, method specifications are the
predominant specification type used for infrastructure assets such as highway
construction.

Performance Related Specifications


Performance related specifications (PRS) are specifications that use quantified quality
characteristics and life cycle cost relationships correlated to asset performance. In
management terms, a PRS is the bridge between design, construction quality, and
long-term asset performance. So how does one determine that a specification is
performance related? Some fundamental requirements are offered to determine this.
PRS Fundamental Requirements:
Performance-related specifications should be written simply, clearly, and succinctly
with the following fundamental requirements:
Quality Characteristics and Accountability - Critical quality characteristics should be
readily measurable and clearly tied to asset performance. Infrastructure assets
construction contractors should be held accountable only for those quality
characteristics under their control.
Performance Predictions - Prediction tools, including modelling and databases,
should be verified, calibrated, validated, and otherwise made appropriate for local
conditions.
Life Cycle Cost Analyses (LCCA) - Life cycle cost analyses should be used to
compare the as-designed asset element section to the as-built section. The LCCA
should be based on a clear, well-documented, and realistic preservation, rehabilitation
and maintenance decision tree.
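As a minimal sketch of the LCCA comparison described above, the fragment below discounts hypothetical maintenance and rehabilitation cash flows for the as-designed and as-built sections. The discount rate, cash flows and function names are assumptions for illustration, not values from any agency method.

```python
def present_value(cost: float, year: int, discount_rate: float = 0.04) -> float:
    """Discount a future cost back to present-day dollars."""
    return cost / (1 + discount_rate) ** year

def life_cycle_cost(cash_flows: list, discount_rate: float = 0.04) -> float:
    """Sum of discounted (year, cost) cash flows over the analysis period."""
    return sum(present_value(cost, year, discount_rate) for year, cost in cash_flows)

# Hypothetical decision-tree cash flows (year, cost) for a pavement section.
# The as-built section is assumed to need earlier intervention than designed.
as_designed = [(0, 1_000_000), (12, 150_000), (20, 400_000)]
as_built    = [(0, 1_000_000), (8, 150_000), (15, 400_000)]

lcc_designed = life_cycle_cost(as_designed)
lcc_built = life_cycle_cost(as_built)
print(lcc_built - lcc_designed)  # positive: the as-built section costs more over its life
```

The difference between the two life cycle costs is one rational basis for a pay adjustment to the contractor.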
Acceptance Plans - Acceptance plans should be statistically based with clearly
defined risks. If necessary, cost determinations should be made in a timely fashion to
allow for prompt corrective action. Sampling and testing plans should properly
address testing variability and improve confidence in the results.


PRS Suggested Requirements:


Add Performance and Subtract Method – With performance related specifications in
infrastructure assets contracts, as PRS end-result criteria are added to a contract for a
specific quality characteristic, they should be accompanied by a corresponding
reduction in prescriptive or method elements, giving the contractor more freedom to
innovate, improve quality, and clarify roles and responsibilities.
Quick and Timely Testing - Testing should incorporate standardised tests using
non-destructive techniques to measure the asset in situ, better quantifying the quality
characteristics and enhancing turnaround of information. This also could be the driver
to harness computer technology, such as PDAs (personal digital assistants), wifi
(wireless fidelity) networks, voice recognition, and high-speed linkage to asset
management systems.
Process Control - With performance related specifications in infrastructure assets
contracts, the contractor should be given reasonable latitude to develop and
implement a process control plan that can be verified by the asset owner agency,
especially for those quality characteristics included in the acceptance plan.
Mechanistic Models - Performance prediction techniques used in PRS should be
based on mechanistic models and be the same models used in the design process.
Asset management systems should track the same assumptions used in the design and
construction process.
LCCA and User Costs - User costs should be considered in developing appropriate
cost factors. The impact can be high, however, and will require sound judgment when
applied. Both the asset owner and the contractor need to understand the impact on
customer satisfaction.

Warranties
Warranties can be divided into two areas:
Materials and workmanship (M&W) warranties:
With infrastructure assets contracts, materials and workmanship warranties call for
contractors to correct defects in work elements within their control. The M&W
concept is referenced in many State regulations and codes, but it is not directly
referenced in certain infrastructure assets specifications, such as highway
specifications, and has rarely been invoked.
Product performance warranties:
The performance warranty is a relatively recent concept and requires the supplier (or
contractor) to correct defects if the product does not perform to some desired quality
level over a certain time in service. Product performance warranties are somewhat
controversial, increasingly so as the length of the warranty period extends beyond
three years. The controversy stems from the concept of risk allocation and the
financial burdens that accompany partial or complete product failures. M&W
warranties of less than three years generally require the contractor to focus on
construction quality. With a performance warranty, the contractor may have more
latitude in selecting materials, processes, and design choices. This requires the
contractor to have much more than a working knowledge of the product. This means
sorting through various combinations of materials or manufactured products and
pricing alternate products.


A step-by-step process for developing a warranty includes the following:


• Establish what gain is expected and how success of the program will be
measured.
• Define the product service life.
• Establish a warranty period and describe the condition of the product at the
end of the warranty, including expected remaining service life.
• Describe the sampling and testing plan that will be used to monitor quality
during construction and measure quality at the end of the warranty period.
• Eliminate method or prescriptive requirements that conflict with performance
requirements or intent. This includes material selection, mix designs, etc.
• Establish some thresholds where warranties are invalidated, i.e. natural
disasters, weather, inadvertent maintenance, etc.
• Establish a contract bonding, insurance, or retainer requirement to hold the
contractor financially accountable.
• Establish a repair protocol should the product show early distress.
• Establish a mediation board to resolve conflicts.
• Pay according to a pre-determined pay schedule, including incentives and
disincentives.
• Monitor, measure, and feed results back into the performance models.
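The pre-determined pay schedule with incentives and disincentives might, purely for illustration, be expressed as a capped adjustment around the warranty threshold. The rates, cap and condition index below are hypothetical assumptions, not values from any agency schedule.

```python
def warranty_pay_adjustment(measured: float, threshold: float,
                            incentive_rate: float = 0.02,
                            disincentive_rate: float = 0.04,
                            cap: float = 0.05) -> float:
    """Fraction of the contract price added (positive) or deducted (negative)
    per point the measured condition index sits above or below the warranty
    threshold, capped in both directions."""
    delta = measured - threshold
    if delta >= 0:
        return min(delta * incentive_rate, cap)
    return max(delta * disincentive_rate, -cap)

contract_price = 2_500_000
# Condition index measured at the end of the warranty period vs a threshold of 80:
adjustment = warranty_pay_adjustment(measured=83.0, threshold=80.0)
print(contract_price * adjustment)  # incentive payment owed to the contractor
```

Note the asymmetric rates: falling below the threshold is penalised more heavily than exceeding it is rewarded, one common way of allocating performance risk to the contractor.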

Performance Related Specifications and Warranties:


The comparison between performance-related specifications and warranties is a
natural one. Both address product performance and improvement in contractor end
product compliance and innovation, and both have an impact on the interrelated issues
mentioned previously. The impacts on the contractor and the asset owner agency,
however, are different in each scenario. The following table outlines the issues and
the requirements under each.

Table 2. Performance-Related Specifications and Warranties Comparison


(FHADT, 2007)


With infrastructure assets contracts, should the contractor provide a higher-cost,
longer-life, more-than-meets-the-warranty-threshold product, or should the contractor
provide a lower-cost, shorter-life, just-meets-the-warranty product? What is the risk
versus costs? What impact will this have on contract award? Price obviously matters
in a low-bid contract, but it also matters in emerging procurement options such as
design-build and best-value contracting. This process is obviously the reverse of the
PRS process, in which the asset owner agency makes the decisions on material type,
construction requirements, etc. Not surprisingly, however, both parties need a
working knowledge of what drives performance. The fundamental approach in PRS
may be applied by a contractor in response to a warranty requirement as well.

Method Specifications
One of the most difficult issues facing the adoption of performance specifications is
the impact they have on method or prescriptive specifications. A recent review of
select transportation agency standard specifications showed that use of method
specifications remains common, with more than 400 prescriptive requirements in the
standard specification book. The difficulty comes when the specification includes
both a prescriptive and end-result requirement. Method specifications have been a
mainstay in transportation construction for many years.
What is the most commonly accepted principle behind a method specification?
If the contractor follows the prescription, the work product will be accepted by the
asset owner agency, with a good probability of performing well in service.
What are some of the other impacts of method specifications?
Decision Aids - A method specification tells the contractor exactly what the asset
owner agency has decided about a certain topic.
Knowledge Tools - Method specifications tell both parties what is considered good
practice and, by omission, what is not good practice.
Minimum Acceptable Values - Terms like "no less than" or "at least" show the lowest
allowable value that will be accepted by the agency.
Restrain Decision Makers and Force Fair Treatment - Method specifications give
both parties protection over arbitrary decision making. In fact, they serve to prevent
arbitrary decision-making by the agency as much as the contractor.
Difficult to Change - Method specifications are difficult to change once imposed and
set into practice, which is both good and bad. It is good in that training, equipment
procurement, and testing programs can be developed around the concepts, but it is bad
in that a minor or insignificant method specification is often difficult to remove.
Unintended Negative Consequences - It may be that the asset owner agency wants to
allow flexibility but is constrained by the method requirements. The contractor, in
turn, may want to introduce an innovative concept but is inhibited by having to
address each method specification point by point.
Red Tape - While one method specification may be judged as a safeguard to both
parties, a series of method specifications may become overbearing.
Minimum Quality Equals Maximum Quality - While method specifications clearly
define minimum acceptable performance, they may also, as a result of a low-bid
process, define maximum performance levels as well.


Specifications and Contracts


Several key questions need to be answered about how the three types of
specifications, namely performance related specifications, warranties, and method
specifications, will work in the future. Will infrastructure assets specifications, such
as the highway specification book, be filled with performance specifications and void
of all method requirements? Will the specification book contain a blend of
specifications? Or will it have different types of specifications for different types of
contracts: method specifications for less-critical capital asset projects and performance
specifications for design-build projects, for example? Or will method specifications
always be used to control those elusive long-term durability issues?

A window to the future might be the European Union (EU) process for improving
trade and competition among European countries. For example, the EU is providing
the stimulus for the highway industry to develop functional highway specifications for
contracts (tenders). Functional specifications (FS) are a cross between end-result and
performance specifications and define the final in-place product with some
specificity. Method specifications gradually are being removed, especially those that
relate to material composition and installation procedures. Industry and government
are working on these specifications and acknowledge the complexities of these issues.
In addition, many European countries have moved to functional contracts with
specific language on performance of the in-place product over time.

The United Kingdom's Highways Agency bases 80 percent of a contract decision on
quality factors and 20 percent on cost. In 2003, this changed to a 100 percent
quality award, with costs negotiated after the award and all the specifications
being functional specifications. However, some European countries are increasingly
using design-build-operate-maintain (DBOM) contracts that may extend for 20 to 30
years. These contracts are performance based, including eventual turn back to the
asset owner agency at a required performance standard or benchmark.

The drivers in Europe for these types of contracts are the same as elsewhere:
• To pull the private sector into the innovation equation.
• To address the reduction in government personnel.
• To allow the remaining governmental workforce to focus more on
performance requirements for the transportation system.

Is everybody in Europe happy about this movement? No. Is everybody in Europe
seeing the long-range vision the same way? No. But they are working on the issue and
already are seeing fruits of their labour in several key technology areas. What does
this mean to the United States and to Australia? Is Europe a window to the future?
Maybe. Should the United States and Australia copy what Europe is doing? Not at all.
The European construction industry is structured differently than the U.S. and
Australian industries, and the social implications cannot be dismissed. But it does
mean that a real-life example is available to learn about performance specifications
and performance contracts. With a watchful eye, the United States and Australia could
learn from Europe's organisational efforts, experiment with its specifications, and
dismiss those that would bear little fruit.


Expected Benefits:
It makes no sense to start something without clear reasons and expected benefits.
Developing and implementing performance specifications offers many potential
benefits. The following are some of the most important:
Improved Design-to-Construction Communication - Performance specifications could
more directly connect design requirements with construction, assuring that both
parties communicate effectively.
Rational Pay Factors - Pay factors could be more accurate, rational and defensible, as
they would be based more on processes and less on bartering.
Improved and Focused Testing - Testing would focus on those characteristics that
relate to performance.
Improved Trade-off Analyses - Performance, quality, and costs could be uniquely
connected through modelling and life cycle cost analyses with a much better way to
analyse tradeoffs.
Improved Understanding of Performance - Performance specifications could lead to a
better understanding of those quality characteristics that relate more directly to
product performance.
Improved Quality Focus - Performance specifications could lead to improvement in
the overall quality of the product in areas that caused problems previously.
Clearer Distinction in Roles and Responsibilities - Performance specifications could
help clarify changes in roles and responsibilities between the asset owner agency and
the contractor, as well as define the levels of risk that each would carry.
More Innovative Environment - By being less prescriptive, performance specifications
could create an environment that encourages innovation.

Performance Specifications for Assets Services Contracts


Defining Assets Performance Specifications:
Traditionally, specifications have given a preferred design solution to define how a
service is to be delivered, or how an item is to be fabricated or constructed. This
method of specifying is inherently conservative, with little incentive for suppliers
to explore options for potential improvements. There are many different views on
what constitutes a performance specification. In this section, performance
specifications that avoid prescriptive methods and focus on the final product or
service are considered. These can be described either in terms of the delivery or of the
benefits delivered – output and outcome driven measures, where (HACS, 2003):
• Output measures define the end product of works carried out on the network.
This is usually in the form of a series of outputs that will deliver the desired
outcome. For example, meeting road surface skid resistance requirements is
one output that will help enable the safety outcome to be realised.
• Outcome measures define the benefits that should be delivered as a
consequence of the works carried out on the network. This will usually take
the form of the Level of Service required; for example, an asset's level of
safety or reliability.
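The output/outcome distinction above can be sketched as a simple mapping from an outcome (Level of Service) to the output measures that contribute to it. The measures shown are hypothetical illustrations, not requirements drawn from any specification:

```python
# Hypothetical mapping of outcomes to contributing output measures,
# following the HACS (2003) distinction described above.
outcome_to_outputs = {
    "Safety (Level of Service)": [
        "Skid resistance at or above specified minimum",
        "Defect response within agreed timescale",
    ],
    "Reliability (Level of Service)": [
        "Planned-maintenance completion rate",
        "Unplanned closure hours per year",
    ],
}

for outcome, outputs in outcome_to_outputs.items():
    print(outcome, "->", outputs)
```

Each output here is something measurable at works level; the outcome is the benefit the asset owner actually cares about.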

© 2008 CIEAM
AAMCoG Assets Performance Measure Page | 54

Benefits in Using Performance Specifications:


A key driver for the development of performance specifications is the potential to
optimise service levels whilst at the same time offering better value for money. The
driver for service suppliers is the ability to innovate more cost effective methods of
work, reduce administration costs and increase efficiency.

The objective in widening the use of performance specifications is to ensure that best
value is achieved throughout the life of an asset by:
• aligning service suppliers with the objectives;
• encouraging innovation at all levels in the supply chain;
• offering incentives to improve efficiency and effectiveness of processes;
• providing a driver for continuous improvement in service delivery;
• improving whole life value decision making processes;
• achieving greater cost savings as the industry gains experience and confidence
in this relatively new concept.
The adoption of an output/outcome-based performance specification will enable
suppliers to be rewarded for achieving stated objectives, rather than simply by
reference to the amount of work done, thereby promoting better value and improved
price certainty against service delivery.
Government Agencies in general aim to widen the use of performance specifications
on the majority of their assets service delivery contracts.

Risks in Using Performance Specifications:


There are also perceived risks in introducing performance-based specifications across
a broader spectrum of work, where:
• inappropriate application of performance measures results in service
suppliers meeting targets without achieving the desired outcome;
• the ability to change contract requirements may be reduced;
• consistency of approach may be lost across all assets;
• control of the technical governance role may be lost; and
• the assets services industry may lack preparedness.

Overall, the greater flexibility of performance specifications should bring a general
improvement in value for money, and it is therefore worth pursuing their wider use.

Impacts in Using Performance Specifications:


Widening the use of performance specifications will impact on the way an asset
owner works together with its service suppliers. Potential changes likely to take place
include the following:
• more reliance on risk management;
• increased trust between partners;
• greater involvement of the supply chain;
• changes in working culture.


Assets Performance Measure Generic Framework


A generic framework of performance measures at both output and outcome levels that
can be applied to different services contracts should be commensurate with the level
of transfer of responsibility and risk. One model for specifying outcomes based on
understanding needs and processes is known as the ‘4D Model’ (HACS, 2003).
The ‘Deliver’ level of the model states the desired outcomes. These represent the
overall requirements of a services contract and would be focused on the asset owner’s
stated aims and objectives or vision. This is the level at which a performance based
contract would aim to measure the service delivered by the supplier.
The ‘Demonstrate’ level of the model represents the processes that are carried out to
support delivery of the outcomes. These processes equate generally to ‘outputs’. The
asset owner would require assurance that these issues have been properly considered
but would not specify how they are to be addressed.
The ‘Detail’ level represents the quality management procedures that will need to be
in place as part of any services contract. The asset owner would not need to be
involved in this detail but would need to know that the system was in place and
available for auditing.
The ‘Develop’ level represents the need for continuous improvement over time. A
process model in the format described above provides a line of sight between detailed
procedures and high-level outcomes.
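As a rough sketch, the four levels of the 4D Model described above can be listed programmatically; the one-line descriptions paraphrase the text and are not drawn from the HACS (2003) source itself:

```python
# The 4D Model hierarchy (HACS, 2003), from high-level outcomes down to
# detailed procedures, as described in the surrounding text.
FOUR_D = [
    ("Deliver",     "desired outcomes: the owner's stated aims, objectives or vision"),
    ("Demonstrate", "processes supporting delivery of the outcomes - broadly, outputs"),
    ("Detail",      "quality management procedures, in place and available for audit"),
    ("Develop",     "continuous improvement over time"),
]

for level, meaning in FOUR_D:
    print(f"{level}: {meaning}")
```

Reading the list top to bottom gives the 'line of sight' the model is meant to provide between outcomes and procedures.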

Key Performance Indicators:


Six generic measures can initially be identified and proposed as key performance
indicators for service delivery. The measures will need to be made specific to reflect
the contribution that is made in achieving the required overall objectives. The asset
owner therefore needs to consider a range of options for applying the 4D model
performance regime:
Option 1: Developing Existing Specifications:
The asset owner could retain the present mix of prescriptive and output based
specifications. Current documentation could be maintained, with changes in line with
European and International Standards. Improvements to the process for departures
from any standard could be introduced, together with a change in culture to encourage
greater innovation. In addition, the asset owner would continue to improve current
Performance Specifications for Assets Maintenance. This approach would represent
the status quo and is seen as a low risk but low benefit strategy. A potential risk for
the asset owner and its suppliers is that of lost opportunities to make substantial
improvements. This represents specifications aimed at the detailed procedures level in
the 4D model.
Option 2: Performance Specification for Maintenance Only:
Current Performance Specifications for routine and maintenance work could be
expanded to include design and delivery of major maintenance and renewal of
network assets e.g. earthworks, pavements, drainage, structures, environmental
aspects and other elements. New specifications will build on the existing standards
and specifications. Current documentation would be used for guidance purposes and
for testing ‘equivalence’ where necessary. This targets the output of the 4D model.


Option 3: Full Performance Specifications:


This option is an outcome based performance specification for all assets design,
construction and network management activities. It includes defining high level
measures such as those referred to above. This is expected to substantially change the
existing level of risk transfer between the asset owner and the services suppliers. At
the same time, this is expected to offer more freedom for suppliers to innovate. This
option would aim to specify the outcomes that the asset owner needs to achieve
without reference to inputs or procedures.
Limitations on Full Performance Specifications:
There are a number of perceived limitations and challenges on how far an asset owner
can go down the road to full performance specifications, and these are set out below.
These issues need to be addressed in the future development and use of assets
performance specifications.

Technical Governance:
The asset owner is responsible for technical governance for its assets management.
This role cannot be passed on. However, the asset owner can approach assets stewards
to carry out the tasks to help it meet its duties. What role the assets stewards should
take to support the asset owner is a major area for consideration, and is covered in the
module on Assets Ownership and Stewardship.

Assets stewards and suppliers look to the asset owner’s current standards and
specifications to limit their exposure to risk. If these define outputs and outcomes
rather than inputs and methodologies, then the responsibility of ensuring technical
excellence will rest with the stewards and suppliers. The existing standards and
specifications would be used as a guide to existing best practice, but the issue then
arises about who would be responsible for updating the guidance and defining best
practice. European and International codes and standards introduce new forms of
specifications that will need to be followed.

Issues for Assets Stewards and Services Suppliers


Risk Management/Analysis:
The asset owner usually carries most of the risks associated with specifications and
standards. Under a performance specification regime it is expected that a number of
these risks and responsibilities will be transferred to the assets stewards or services
suppliers. The asset owner will need assurance that assets design, construction and
maintenance processes are developed in line with industry best practice principles. An
understanding of the risks involved in the use of performance specifications is
essential.
Development of a comprehensive risk management system will therefore be critical to
enable risks to be managed and their impact minimised as far as possible. The
conventional method of specifications has been claimed to stifle innovation and create
a barrier to improving assets performance. Services suppliers are potentially in the
best position to instigate innovations for improvement in assets Levels of Service and
reduction in costs. Ideas from services suppliers could play a key role in defining
performance specifications.


Issues for the Asset Owner:


Under its Procurement Strategy, an asset owner would consider extending the use of
performance specifications to other forms of services contracts. In doing this it must
ensure that consistency of outcomes is maintained. The asset owner provides technical
and corporate governance and the use of outcome-based performance specifications
must not compromise this role. Possible means of assurance for the asset owner
include the use of quality management systems, regular auditing procedures, product
guarantees and liability insurance. A major issue is that of durability and future
performance of the assets. The asset owner will need to know that work carried out
today will perform in the future. This will require forward-looking indicators, most
likely based on suitability of process.

Issues common to both Assets Stewards and Asset Owners:


Some assets performance specification issues common to assets stewards,
services suppliers and asset owners include the following:
• The development of a performance specification will need to ensure quality
and durability of the end product.
• A robust set of performance indicators will need to be developed to ensure that
realistic sustainable targets are set and can be easily measured.
• Performance specifications will need to be easily tailored to the specific
requirements of a particular area, while ensuring a consistency of outcome
across the network.
• There is a need to develop lead indicators that point to successful performance
in the future rather than identifying past performance or failure. Indicators,
such as predicted durability of infrastructure assets could be developed.
• Performance specifications will need to be flexible enough to accommodate
future changes in standards and best practice in infrastructure assets
construction and maintenance.
• The use of performance specifications could have an impact on tender periods,
methods of services supplier evaluation/selection, and the amount and quality
of information needed within tender documents.

Outcome Based and Method Performance Specifications


There are significant changes taking place in infrastructure asset management
stewardship contracts with a shift away from method or prescriptive performance
specifications towards outcome based or end-result performance requirements.
Construction and rehabilitation contracts of most infrastructure assets initially
involved method type specifications. These specifications nominate the certified
materials that are to be used, as well as particular equipment and construction
techniques. Typically, payment is made once the majority of the construction is
completed, with a 5% - 10% retention held pending successful completion of the
construction, with no defects, at the end of the stewardship contract period. Failures
or defects that occur after the contract period have generally remained the
responsibility of the asset owner.
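The retention arrangement described above can be illustrated with a small calculation; the contract value and retention rate are made-up figures within the 5% - 10% range mentioned:

```python
def progress_payment(contract_value: float, retention_rate: float = 0.05):
    """Split a contract value into the interim payment and the retention
    held pending defect-free completion. A retention_rate of 5%-10% is
    typical under the method-type contracts described above."""
    retention = contract_value * retention_rate
    return contract_value - retention, retention

# Hypothetical $1,000,000 contract with a 10% retention:
paid, held = progress_payment(1_000_000, 0.10)
print(paid, held)  # 900000.0 100000.0
```

The retention is released only once the works are confirmed defect-free at the end of the stewardship contract period.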


Under this type of contract the asset owner cannot expect the contractor to accept the
risks when the contractor does not have control of externalities that could impact upon
the initial asset design and possibly the effectiveness of the final result. This set of
performance measure criteria, generally based on past experience and empirical
historical records, has limitations when a different method or material is used,
or a new design technique is introduced in an effort to improve asset performance.
The contractor’s risks are directly related to ‘time’ and ‘profit’ and are linked to the
level of responsibility. The increase in the level of risk with increased responsibility
impacts upon the level of profitability. As a result, there has been a move to introduce
new performance criteria and measures into assets stewardship contracting, in
particular for infrastructure asset rehabilitation such as general road reconstruction
and resurfacing. Performance specifications have been developed where a minimum
Level of Service is defined using various performance criteria with the Level of
Service to be maintained at all times. This provides the asset owner with confidence
that the minimum level, or greater, is being achieved regardless of the methods used
to achieve this end result.
This allows the contractor to use innovative methods, new technology and techniques
that provide greater efficiencies and thereby greater profitability to achieve the
specified end result. With performance-based specifications, the profitability and
credibility or goodwill of the contractor play an important role. The contractor's
performance is analysed to provide attribute ratings used to assess performance and
relevant experience when considering future tender proposals. The
contractor must be acutely aware of factors that may affect the final result, as these
will directly impact upon the level of payment. These may include variables such as
(TNZ, 1998):
• Quality control
• Personnel expertise
• Existing conditions/situation
• Monetary/economic influences
• Climate and weather conditions.

Significant variations and externalities beyond the contractor’s control are generally
covered by the General Conditions of Contract, but the foreseeable variables must be
considered and accounted for in the contractor’s methodology. This is a change from
traditional conditions where the asset owner’s prescriptive specifications both
nominated and included these variables within the contract, or the asset owner
compensated for innovation as a variation to the contract.

Significant Features of Outcome Performance Measure Criteria:


Level of Service is a predetermined criterion of required performance that is specified
as a minimum, and is to be maintained throughout. The contractor must determine
asset rates of deterioration to be able to calculate the extent of assets performance
above the minimum Level of Service that would be necessary to ensure the assets do
not deteriorate too rapidly or fall below this level over the specified contract period.
The time period of a contract is therefore an extremely important feature when
considering Level of Service.
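As a minimal sketch of the calculation described above, assuming a simple linear deterioration model (real deterioration curves are usually non-linear and asset-specific), the performance level the contractor needs at handover might be estimated as:

```python
def required_initial_level(min_los: float, deterioration_per_year: float,
                           contract_years: float) -> float:
    """Minimum performance level needed at handover so that, under a
    linear deterioration rate, the asset does not fall below the specified
    Level of Service during the contract period. Illustrative model only:
    actual deterioration behaviour must come from asset-specific data."""
    return min_los + deterioration_per_year * contract_years

# e.g. a measure that must stay at or above 50 units over a 10-year
# contract, deteriorating at roughly 1.5 units per year:
print(required_initial_level(50, 1.5, 10))  # 65.0
```

The sketch makes the text's point concrete: the longer the contract period, the further above the minimum Level of Service the contractor must perform.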


6. Infrastructure Assets Performance Standards

Assets Performance Standards
Organisations set performance standards, which provide baselines for performance
expectations, compliance and management. They are the guidelines that underpin
monitoring, measuring, reviewing and providing feedback on assets performance.
Performance standards can be stated in terms of quantity and quality. Both can be
measured and monitored at intervals, and for outcomes. Performance standards related
to quantity specify what the asset has to achieve and when. If necessary, quantity
performance standards are monitored with incremental checks. Ultimately the results
must be measurable in quantifiable terms, to make a performance judgement
regarding whether the specified performance standard has been achieved. A
performance standard may be expressed as a competency requirement whereby a
competency checklist can be used to measure this in quantifiable terms.
Quality standards are usually in the form of safety, statutory or legal requirements.
Quality standards are more difficult to measure than quantity standards. Neither is it
easy to determine standards for indefinable characteristics. Codes assist with
qualitative performance measurement. They usually cover issues relating to quality
performance criteria. Quality performance standards and codes arise from the
expectations of required asset service delivery capability. They may also emerge as a
response to legislative action, such as statutory requirements covering assets safety
practices and procedures.
Performance standards, either in terms of quantity or quality, must be able to provide
verifiable evidence which can be reviewed to identify performance achievements and
performance gaps. A performance gap is the difference between intended
performance and actual performance. Standards of conformity need to specifically
relate the asset’s performance to a standard, just as the standard must relate to the
nature of the asset’s performance.

Assets Performance Management Systems:


Assets performance management systems need to be designed objectively, with
an objective means of measuring, monitoring and reviewing asset performance. The
key to providing an objective performance management system, one which eliminates
value judgements and individual bias, is procedures and methodology that are:
• valid;
• reliable;
• free from bias;
• practical.

To be valid, the performance management system must measure asset performance
that is directly related to the asset's service delivery capability and capacity, and to
performance standards that specifically relate to such service delivery.


To be reliable, the performance management system must generate consistent results;
that is, the same results over time, regardless of who or what device is measuring,
reviewing or providing feedback. A performance competency standard should leave
no room for inconsistent measurement. The review document could be in the form of
a competency checklist.

Freedom from bias has two components. The first concerns performance issues,
whereby measurement and review must relate to the requirements of asset service
delivery capability and not focus on factors such as physical capability. Performance
of service delivery capability must be judged on the needs for such service. When
asset service delivery performance goals, objectives and action plans are developed,
bias is reduced. This must apply equally to setting performance goals, objectives and
action plans in the event that a performance gap arises from the difference between
intended performance and actual performance. The second component of freedom
from bias is freedom from rating error, which results from the subjectivity of the
judgement of asset service delivery capability. Quantitative measures reduce rating
errors, while qualitative measures tend to be more subjective. This emphasises the
need for well-established performance goals, objectives and performance plans that
can be measured and reviewed with a minimum of subjectivity.

Practicality refers to the asset performance capability that is to be measured, the ease
with which instruments can be used in measuring performance, and the availability of
employees or assessors to conduct assets performance measurement.
A performance management system has the following components for monitoring,
measuring and reviewing assets performance:
• key performance indicators linked to key performance results;
• standards for performance measurement;
• performance targets and performance objectives;
• action plans to close gaps between intended and actual performance.
Figure 3 illustrates the links between organisational goals and objectives and key
performance areas with key performance indicators.

Figure 3. Organisational and Functional/Operational Performance Links (OLILG, 2003)
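A minimal sketch of how the four components of a performance management system listed above might be linked in a single record; the KPI, standard and figures are hypothetical illustrations, not taken from any agency's framework:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class KPI:
    """One key performance indicator tied to a key performance result,
    a measurement standard, a target, and an action plan for gaps."""
    name: str
    key_result_area: str           # key performance result the KPI links to
    standard: str                  # standard for performance measurement
    target: float                  # performance target / objective
    actual: Optional[float] = None
    action_plan: str = ""          # plan to close intended-vs-actual gaps

    def gap(self) -> Optional[float]:
        # Performance gap: intended (target) minus actual performance.
        return None if self.actual is None else self.target - self.actual

kpi = KPI("Road surface skid resistance", "Network safety",
          "Minimum skid resistance standard (hypothetical)",
          target=55.0, actual=52.0,
          action_plan="Resurface sections below target")
print(kpi.gap())  # 3.0
```

A positive gap triggers the action plan; a gap of zero or less means the target is being met.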


Assets Performance Standards and Performance Specifications

Outcome-based Specifications:
Outcomes are the broad high-level requirements of the asset owner. Outcomes can be
financial (such as return on investment), safety, risk or reliability objectives, and
address both the current and future requirements of the asset. In order to deliver these
long-term requirements, a partnering approach is typically developed between the
service provider and the asset owner. Service level agreements based on outcomes
necessitate both the service provider and the asset owner having common objectives.

Output-based Specifications:
Outputs are specific objectives that have been developed to meet the outcomes
required of the asset owner. Output specifications specifically address the questions of
what and where. Output specifications are typified by quantities, performance
standards, timeliness, etc. Risk for the effectiveness of the strategic end-result rests
with the asset manager, while the operational risk in performing the tasks to the
specified technical standards rests with the service provider.

Input-based Specifications:
Input-based specifications require an asset manager to specify who is to perform the
task for a service provider, how, and by when. The service provider is merely required
to do what is specified, with the majority of risks relating to the inputs, output
performance standards, and outcomes resting with the asset manager. A typical
example of this is where the asset owner might ask the service provider to provide a
certain number of service officers to undertake inspections of distributed electrical
installations. Input-based specifications provide the service provider with relatively
little freedom for innovation compared to output or outcome specifications in the area
of service delivery, and rarely provide any financial driver to improve productivity.

Performance Standards in the Public Sector


Performance standards in the public sector represent the standard of operation
required of Government Agencies. Compliance at all times with all standards, and
with the measures that apply to them, is mandatory. It is the responsibility
of the agency to provide supporting evidence to show that the standards have been
met. One of the principal mechanisms for measuring performance and improvement
will be the agency’s annual business plan and the measures of performance contained
in that document. The business plan will not be static, and the agency will be required
to demonstrate continuous improvement against the performance measures.

Performance Standards to Ensure Service Delivery:


The performance standards in the public sector can be viewed as a set of operating
rules for an agency. It is a condition of being registered that the agency complies with
the performance standards and other requirements of the Government Registrar at all
times. The focus of the performance standards is to ensure that the agency delivers the
best possible services to the community. Doing so requires effective, ethical
governance, competent management and a sound business strategy, incorporating all
elements of asset management and maintenance.


Application of the Performance Standards:


Government Agencies range from small to large-scale operations. As such,
performance standards apply, regardless of the size of the agency. However, the
standards are broad, and the measures of performance will be assessed with reference
to the individual agency’s scale and complexity of operation, the concomitant risks,
and the extent to which it has achieved improvement over the previous years’
performance against its business plan.

Regulatory Framework to Monitor Performance:


A regulatory framework establishes a Registrar of Agencies (the Registrar) and invests
it with powers to register and monitor the performance of registered Government
Agencies. It also provides the Registrar with inspection and enforcement powers. The
registration criteria must be met and maintained if the agency is to be registered and
remain registered. In addition to maintaining registration eligibility, a registered
agency is required to demonstrate compliance with a set of performance standards.
Failure to comply with such standards may provide the Registrar with grounds to
consider intervention in the agency’s affairs.

Performance Measurement for Sustainable Development


Sustainable development performance evaluation practices will become increasingly
important as environmental responsibility continues to influence business strategy and
decision-making. Already, global companies in the automotive, chemical, energy,
food production, packaging, and other industries are using sustainable development
performance information to improve their decision-making. For sustainable
development to become truly integrated into company business processes, a
systematic sustainable development performance evaluation process is essential.
Trying to achieve this type of integration raises challenging organisational issues,
including:
• How to establish company policies and performance-based incentives.
• How to modify existing business processes to account for sustainable
development considerations.
• How to capture and disseminate sustainable development knowledge.
• How to achieve consistent practices across diverse business units.

The following observations are made with regard to assets performance measurement
for sustainable development (Battelle, 2002):
• The vast majority of asset owner organisations tend to focus on conventional
environmental, health and safety indicators associated with operations, and do
not address economic, environmental, and social value creation.
• While assets performance evaluation has received a great deal of attention in
recent years, it remains a challenging field. The main focus has been on assets
condition, and most performance indicators used today are quantitative and
direct and not so much qualitative and indirect.
• Stakeholder dialogues show that clear commitment to transparent performance
measurement is a high priority for non-governmental organisations (NGOs),
especially with industry stakeholders.


7. Assessing and Evaluating Assets Performance

Assets Performance Assessment
The Performance Assessment Process:
The performance assessment process by which objectives are defined, specific
measures specified, and performance measure criteria identified, is crucial. It is
through this process that community values are articulated and decisions made about
infrastructure assets development and management. Methodologies do exist for
structuring multiple stakeholder decision making that involve performance measure
criteria, but experience is limited in applying these methodologies to infrastructure
assets. Performance assessment requires good data. Continuing, coordinated data
collection and measurement are needed to establish benchmarking and performance
assessment. The subsystems of infrastructure, i.e. built environment, transportation,
power, water, waste water, hazardous and solid waste management, etc., exhibit both
important physical interactions and relationships in budgeting and management.

Effective assets performance management requires a broad systems perspective that
encompasses these interactions and relationships. Most infrastructure institutions and
analytical methodologies currently do not reflect this broad systems perspective.
Unfortunately, in practice, performance assessment in asset management is often
limited to the condition performance of an asset.

Monitoring Performance
Monitoring performance is integral to the assets management process and related
application of an Assets Performance Management Plan, and typically involves the
following activities:
• setting up procedures and assigning resources to measure performance
over time;
• monitoring performance;
• verifying that targets and supporting standards can be measured and are
relevant;
• reviewing the cost-effectiveness of the monitoring process.
Asset management information systems should support the performance monitoring
process. These systems should have a comprehensive structure, and be linked to the
financial management system and asset register. This will allow asset owners and
asset service providers to:
• monitor performance of assets by type, scheme or facility;
• analyse and evaluate the effectiveness and cost efficiency of programs;
• report and advise on program performance to stakeholders;
• evaluate performance and implement strategies to improve performance.


Typical information in asset management information systems should include:
• performance against planned targets;
• performance against benchmarks and standards;
• checklists, customer surveys, and other means of gathering information;
• planned targets and actual performance.

Assessing and Evaluating Performance


Asset managers should continuously assess and evaluate the performance both of the
assets and of the whole service, to verify that:
• the asset management program supports the service delivery program;
• improvement strategies address differences between planned and actual
performance;
• the planning process applied in the asset management program is effective;
• stated targets are achieved and used as a basis for the current planning
period;
• the asset management program achieves its budget objectives;
• the asset management program includes the relevant policies and customer
service agreements.

Performance assessment and evaluation should enable asset managers to:
• identify any deviation from the plan;
• understand the cause of the deviation;
• identify minimum acceptable performance levels;
• identify and establish benchmarks for acceptable performance levels;
• develop strategies to solve the problem.

Steps in Performance Assessment and Evaluation


Review performance:
The Performance Management Framework and related performance measures should
be comprehensively analysed. This includes reviewing current performance and any
significant problems identified, in order to:
• develop a comprehensive understanding of current performance;
• identify and document the areas of non-performance.
Identify issues:
The issues for detailed analysis should be identified and classified into inputs,
process, and outputs issues, and any other asset management system components.
These issues should be selected on the basis of being:
• major areas of non-performance;
• minor areas of non-performance.


Analyse issues:
In the context of the asset management system, a standard process analysis
methodology should be used to:
• gather relevant data;
• analyse the data for signals, trends and variations;
• identify and describe the problem;
• define the boundaries;
• identify key participants.
Analyse causes:
In the context of the asset management system, the issues should be analysed to find
causes, by considering:
• policies and service standards relevant to the issue or problem;
• motivators of behaviours;
• inappropriate processes;
• technologies being applied/required;
• level of management.

Cause-and-effect diagrams should be developed. It is necessary to consider the
impacts of causes in isolation and how they interact with each other to affect
performance. The following sections provide information on performance assessment
and performance evaluation (ANAO, 1996).

Performance Assessment:
Performance assessment is an important step in understanding the extent of
performance achievement. It is based on comparisons using a range of targets,
benchmarks, standards and milestones. The following factors relate to an
organisation’s overall performance assessment:
Targets:
Targets express quantifiable performance levels, or changes of level, to be attained in
future, rather than a minimum level of performance.
Benchmarking:
Benchmarking involves:
¾ searching for best practice;
¾ comparing best practice with existing practice;
¾ introducing best practice.
Benchmarking can concentrate on comparing:
¾ the same activity between different parts of the same organisation;
¾ the same activity in other organisations that deliver a similar service;
¾ similar processes with other organisations which may have different
services or processes.

Performance information is used to compare and evaluate practices within and
between organisations.
Standards:
Standards are predefined levels of excellence or performance specifications. They can
be set for any performance measure criteria of an organisation or asset management
program. Standards are set to define the expected level of performance. Progress in
delivering the service can be measured against the standard.
Milestones:
Milestones help asset managers determine whether a program or activity is:
¾ heading in the right direction;
¾ making the most efficient use of resources.
Milestones mark the achievement of critical stages of program implementation.
Milestones are particularly important for large and/or complex asset service activities.

Performance Evaluation:
Performance evaluation is the systematic, objective assessment of the efficiency,
effectiveness and compliance of a service or part of a service. Performance evaluation
should be part of the asset management performance program, to ensure that the asset
investment, operations and maintenance and renewal/replacement programs are
evaluated. The continued evaluation of these programs will lead to an improved
understanding of the asset management program’s performance and its link with the
organisation’s service delivery requirements; asset life cycle planning and costing;
and asset strategic planning with external service providers, such as energy, water and
waste water utilities and facilities.

Service performance evaluations can lead to improved service delivery outcomes,
assist decision-making, and help account for service performance. However,
evaluations need good performance information so that they can focus on key issues.
Evaluations can be particularly useful to examine a service provider’s asset
management system. Data needs for program evaluations should be planned in
advance; otherwise data collection can be very expensive.

Improving Assets Performance

Performance improvement is a fundamental component of a service organisation’s
asset management system. Performance improvement involves:
¾ identifying strengths and weaknesses in the asset management program
and related management programs and systems;
¾ developing strategies to improve these programs and systems so that they
deliver efficient and effective asset management performance measures.
Improvement strategies will impact on all resources that are inputs to the asset
management process. These include human resources, business information systems
and financial resources. The gains from continuous improvement are essential to
maximise value for service.

© 2008 CIEAM
AAMCoG Assets Performance Measure P a g e | 67

Developing Performance Improvement Strategies:


Developing strategies to improve performance may include revising performance
measures, and involve:
¾ generating potential solutions;
¾ evaluating possibilities;
¾ identifying barriers;
¾ developing implementation strategies;
¾ developing improvement milestones;
¾ approving performance solutions; and
¾ implementing solutions.

Reviewing the Performance Management System:

Setting up a performance management system will allow regular monitoring and
review of the improvement strategies and targeted performance milestones.
Performance Outcomes:
Assets performance outcomes include:
¾ measured impact of assets performance improvement solutions;
¾ analysis of deviations from the Assets Performance Management Plan.
Standardisation:
Assets performance standardisation should:
¾ define and document the standardised process/procedure;
¾ develop compliance standards;
¾ monitor results of the standardised process.

Assets Performance Assessment Criteria

The following list of questions relates to assessment criteria for assets performance
(HACS, 2003):
1. How do asset Levels of Service relate to performance specifications?
2. What are the benefits of a Balanced Scorecard in asset performance measures?
3. How can Key Performance Indicators relate to assets performance standards?
4. What elements of performance should be measured on an outcome basis?
5. Are there any outputs or outcomes that should not be measured?
6. What is the ability of the organisation to assume the risks inherent in outcome
based performance specifications?
7. Are there any risks that should not be transferred?
8. What risks are perceived in working under a performance specification?
9. How would the organisation propose to manage such risks?
10. What cultural changes will be necessary to facilitate the implementation of
performance specification based contracts?

11. Should performance specifications be extended to all asset management
contracts, or are there specific areas where the traditional prescriptive type of
specification should be retained?
12. How best can a technical governance role be supported by suppliers in
outcome based performance measure contracts?
13. What impact and implications would there be on suppliers if a public sector
agency were to fulfil its technical governance role by passing responsibility for
performance standards and/or specifications to suppliers?
14. What use could be made of future-looking performance indicators as a
measure of performance?
15. Without a detailed specification to follow, how would service suppliers
guarantee continued quality and best value?
16. What assurance could be established that service delivery is being provided to
an acceptable standard and that the objectives are being met?
17. What mechanisms should be included in asset management contracts to ensure
that performance specifications and standards are updated as required to
reflect current best practice and to deliver continuous improvement?
18. Will service suppliers be prepared to share information with other suppliers in
relation to performance specification issues, and if so, on what terms?
19. How will the use of outcome performance based contracts impact on the asset
management contract bidding process?
20. What risks can be identified in outcome versus risk based performance
measure criteria in asset management contracts?

Contracted Asset Management in the Public Sector

Many Government Agencies have experienced inadequate funding levels to support
their infrastructure asset needs. Funding shortages and personnel reductions
have caused some agencies to turn to private industry to provide assets management
and maintenance services on a contract basis. In accomplishing comprehensive asset
management, the contractor is responsible for assets maintenance as well as any other
asset management activities necessary to maintain the assets at or above a
predetermined Level of Service (LOS) over a multi-year period. The use of such
comprehensive asset management contracts inevitably has a significant impact on
asset performance when compared with in-house asset management.
Government Agencies can utilise historical asset management information to establish
performance criteria to be used to assess the performance of the contractor through
the duration of the contract period. Assets management inventory maintained by the
agency can also serve as an important resource to the contractor for predicting assets
performance. The asset management contractor thus utilises asset management tools
in a slightly different manner than in traditional asset management implementation by
the asset owner, to determine the most effective combination of assets maintenance
and rehabilitation activities, to achieve specific goals, to enhance performance of the
agency’s assets, and to ensure contractor sustainability (USTRB, 2001).

Performance of Contracted Asset Management:

Asset management contracts are an attractive option to Government Agencies and
Local Government Authorities (LGAs) for a number of reasons. First, the contracts
are performed for a multi-year period (such as 5 years) at a fixed sum of money. This
allows the agency to fix its costs for maintaining its assets over the contract period
(regardless of whether costs for construction increase) and concentrate its remaining
resources on asset issues such as capital assets planning. Since maintenance of the
assets is being conducted by a private contractor, the contractor is generally
expected to operate more efficiently than its government counterpart, so the direct
costs of maintaining the assets should be lower than the costs to the agency, whereby
more innovative maintenance approaches can be implemented. Additionally, the
contractor has greater ability to react quickly to any changes in procedures that might
be needed to better meet contract objectives.
The importance of the link between assets performance and performance standards
established for contract performance cannot be overemphasised. The performance
standards must be reasonable when taking into account the condition of the assets at
the beginning of a contract, and should be representative of the criteria that are most
important to the agency when considering performance measures. It is also important
that the agency be able to measure the performance standards over time as a means of
verifying the performance of the contractor. Performance measures must be
established for each asset included in the contract. An important requirement of an
assets management system at the initiation of a contract is to establish the current
assets conditions, if the agency chooses to include a performance standard that states
that the overall condition of the assets must be equal to or better than existing
conditions. This implies a consistency in the way in which data will be collected and
reported by the contractor, and an agreement on the distribution of condition at the
start of the contract. In most instances, the contractor will perform an independent
evaluation of assets conditions in preparing its fixed-price bid, so the verification of
assets conditions can be done as part of the contract development process.
Performance also plays an important role with the contractor as the maintenance of
the assets is being carried out. The contractor conducts performance checks on a
regular basis to verify conformance with the performance standards.

Agencies have two primary uses of asset management information as part of an asset
management contract. First, the agency must provide to any contractor interested in
bidding on an asset management contract, a complete inventory of the assets to be
included in the contract. Agencies with asset register databases can easily provide this
information, but will have to find other sources to establish the inventory of any
remaining assets. Second, the agency must establish the performance standards that
will be maintained by the contractor during the contract period, and assess the
performance of the contractor during the contract period. The asset inventory
information to be provided to interested contractors will normally be provided as part
of the Request for Proposals (RFP). Included in the asset inventory should be
information regarding the location of the assets, asset elements, functional
classifications, and other types of similar information such as assets utilisation and
condition. Each of the bidding contractors will conduct independent surveys to verify
the information provided by the agency, since the contractor will be assuming the risk
for the assets during the contract period.

The historical assets condition information maintained in the agency’s asset
management system provides an excellent source for condition data that may be used
to establish the baseline conditions that must be maintained by the contractor. For
example, if a contractor is expected to at least maintain assets, such as a road
pavement network, at the same condition in which it was delivered, a pavement
management system’s (PMS) information can be used to establish the distribution of
pavement network condition that will be used as the baseline. The information can be
valuable in establishing the specific performance standards that must be maintained.
As comprehensive asset management contracts become more prevalent in public
sector agencies, an asset owner’s in-house maintenance function will have to evaluate
the usefulness of traditional network-level assets condition surveys for setting
performance standards for these contracts, and for assessing contractor performance.
Since asset management information has historically been used for planning and
programming purposes within agencies, the use of somewhat subjective condition
surveys has been acceptable. In addition, to expedite assets condition surveys,
surrogate performance measures are often used to represent assets structural condition
in the absence of more detailed survey information such as non-destructive testing
(NDT) results. Since a network-wide conduct of NDT is impractical for most agency
assets, surrogate representations of assets structural strength are considered adequate.
In the future, surrogate measures may not provide the necessary level of quality in
order to assess the performance of asset management contractors. In the past, since
assets condition information has been used in-house for primarily planning and
programming activities, the quality of the data has not been challenged. Because a
contractor’s performance evaluation under an asset management contract is dependent
on the initial results of an assets condition survey, the potential exists for the
contractor to challenge these results if the contractor’s independent condition survey
results differ. In reality, any differences in assets condition surveys should be worked
out equitably between the contractor and the agency, but an asset owner’s in-house
maintenance function must realise that any survey results could be challenged,
especially if surrogate performance measures are used to represent assets condition,
and should plan to enhance the quality of assets condition surveys in the future.

Although an agency is under no obligation to provide the contractor any information
other than the assets inventory information, other types of information from the
agency’s asset management system are invaluable to the contractor in maintaining the
assets. Information that includes performance models is very useful to the contractor
in managing the assets. Without this type of information, it is difficult for the
contractor to estimate the future performance of an asset over the contract period
because of the lack of knowledge as to when the asset last received the necessary
amount of maintenance work or even rehabilitation over time. At the very least, the
information should be provided to the contractor at the beginning of the contract
period. During the duration of the contract period, the contractor should share
maintenance and rehabilitation information with the agency so that it can continue to
update its asset management system. This information will complete the history of
asset performance that will be necessary should the agency assume responsibility for
the maintenance of its assets at some point in the future. Sharing this type of
information between contractor and agency strengthens the effectiveness of decisions
made by the agency, and serves to improve the Level of Service that can be provided.

Application of Asset Management Concepts by the Contractor

The proper application of asset management is important to the contractor throughout
the duration of the asset management contract. Assets condition surveys are one
method of evaluating the contractor’s success at meeting (or exceeding) the assets’
performance standards. The information can also be used in preparing cost-effective
asset maintenance and rehabilitation programs. However, there are some differences
in the application of certain asset management principles that must be recognised in
order to be used effectively (USTRB, 2001).

Network Inventory and Condition Assessment:

As discussed previously, inventory information provided by the agency is a valuable
supplement to the inventory information collected independently by the contractor as
part of the bid preparation activities. Especially important is information about the
asset structural condition and estimates of asset useful and residual life. Figure 10.26
illustrates typical asset residual life as an agency supplied performance measure.
Without this type of information from the agency, the contractor is at a disadvantage
in trying to prepare its future assets maintenance and rehabilitation plans. In
developing an assets database for the contractor, it is important that the asset
management system support a method of storing condition-related information that
does not require fixed beginning and end points. Database features, such as dynamic
segmentation and concurrent transformations, are important for the contractor to be
able to manipulate data stored in different ways in the database. Simply put, dynamic
segmentation is a process that allows data to be stored by the smallest common
denominator between data elements. Since it is often impractical to perform analyses
on the smallest common denominator, concurrent transformations are used to
transform data into useful data sets for asset management analysis. Common
approaches used to transform data include;
a) weighted average,
b) weighted sum,
c) maximum value,
d) minimum value,
e) statistical average,
f) first occurrence, and
g) largest portion.
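The transformation approaches listed above can be sketched briefly; the length-weighted average (approach a) is shown in full. The segment lengths and condition values are hypothetical, invented purely for illustration:

```python
# Illustrative sketch of a "concurrent transformation": condition records
# stored against fine-grained segments (the smallest common denominator)
# are rolled up into one analysis value. Segment data are invented.

def weighted_average(segments):
    """Length-weighted average condition over (length_km, condition) pairs."""
    total_length = sum(length for length, _ in segments)
    return sum(length * cond for length, cond in segments) / total_length

# Fine-grained pavement segments: (length in km, condition index 0-100)
segments = [(0.4, 82.0), (1.1, 74.0), (0.5, 60.0)]

print(round(weighted_average(segments), 1))  # length-weighted roll-up

# Other transformations from the list above are equally direct, e.g.:
worst = min(cond for _, cond in segments)  # "minimum value" transformation
```

The same pattern (reduce a list of fine-grained records to one analysis value) covers the weighted sum, maximum value, and statistical average cases.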

These tools provide the contractor with flexibility in determining asset elements for
inclusion in the maintenance and rehabilitation programs.

Assets condition surveys will be performed by the contractor periodically throughout
the duration of the contract in order to monitor the performance of the asset network
or portfolio, and to report conditions to the asset owner. If the performance standards
include a requirement to maintain the condition of the asset network at, or above, the
condition of the network at the beginning of the contract, an objective and repeatable
survey procedure will have to be agreed to by both the contractor and the agency to
measure and report on asset conditions. If the performance standards include specific
measures of distress, these distress types should also be incorporated into the asset
condition surveys, so the occurrence of the distress can be monitored over time.

Asset Condition Analysis:

The development of a multi-year maintenance and rehabilitation plan for an asset
management contract differs from the traditional approaches that are used by agencies
in conducting asset condition analysis. Some of the major differences between the two
approaches are influenced by the nature of the asset management contract. For
example, an asset management contract is conducted on a fixed price basis, providing
the contractor with a fixed budget that can be used for the maintenance and
rehabilitation of the assets over the contract period. Few agencies can claim to know,
with complete certainty, the amount of money that is available to them for the
maintenance and rehabilitation of assets under their care. Additionally, the contractor
has flexibility in how the funds are distributed from one year to the next, providing
the contractor with the ability to accelerate projects as needed, and to apply cost
effective improvement treatments such as preventive maintenance on a timely basis.
Finally, the contractor can implement an improvement plan as soon as the asset
condition analysis is complete, rather than having asset improvement projects delayed
because of funding. All of these factors have a bearing on the development of asset
condition models used and the manner in which asset condition analysis is conducted.

Deterioration models used in asset condition analysis may differ from traditional
approaches to sophisticated mathematical model development. In traditional asset
management applications, assets with similar performance characteristics,
environmental or operational conditions, and design, are typically grouped into asset
families and general performance trends are established for the asset family, using
regression analysis. This is achieved by plotting asset condition information (obtained
from historical and current condition surveys) on the vertical axis and the age of the
asset at the time of the survey (or since the last major rehabilitation was performed)
on the horizontal axis. Regression techniques are applied to the data set to identify the
equation that best fits the family information. The resulting equation represents the
deterioration curve that is stored in the database for all assets fitting that family
description. In a contracted asset management application, it is difficult to obtain
information needed to develop family deterioration models unless the information has
been provided by the agency. As a result, the contractor may be more likely to
develop deterioration models for individual asset elements rather than asset families.
Similarly, since many asset management contracts include performance criteria for
specific distress severities, assets performance – condition models may be needed to
project when any one of the performance criteria will be reached during the contract
period. Over time, these enhanced models may be developed to improve the
predictive capability of asset management systems used for contract maintenance.
Figure 4 illustrates a typical performance – condition model using specific road
pavement asset condition characteristics such as a pavement condition index (PCI).

As in traditional asset management applications, the contractor must also define
treatment models for the asset condition analysis. For the most part, development of
maintenance and rehabilitation treatment models for an asset management contract
should closely resemble the process followed by the agency, as illustrated in Figure 5.
Perhaps the greatest difference lies in the increased use of preventive maintenance
strategies by the contractor as a cost-effective means of maintaining the assets. With a
known budget for the duration of the contract period, the contractor is at an advantage
with its ability to develop multi-year programs that maintain the conditions of the
asset network over the duration of the contract period.

Figure 4. Performance – Condition Model Pavement Condition Index (PCI) (QGTMP, 2000)

Figure 5. Maintenance and Rehabilitation Treatment Models (Stapelberg, 2004)

Perhaps the greatest difference from traditional assets condition analyses lies in the
overall approach used to develop the maintenance and rehabilitation programs that are
submitted to the agency each year by the contractor. The primary features of the
analysis are listed below:
• To determine the most appropriate maintenance treatment for each asset
element over the contract period. Instead of being based simply on a cost
effectiveness ratio as in traditional multi-year prioritisation analyses, the ratio
is linked with performance criteria for the remainder of the contract period.
• Rather than being a budget-driven analysis as in traditional asset management
applications, an asset management contract is primarily needs driven. In
traditional asset management applications, available funding levels are input
into the asset management system and the most cost-effective capital
improvement program is developed to match the budget constraints. Under an
asset management contract, asset management is condition driven rather than
budget driven with performance standards being the primary drivers.

As comprehensive asset management contracts become more accepted by agencies as
a means of addressing shortages in resources, modifications to the application of
assets condition analyses are needed to provide the necessary types of information.
This approach provides agencies with several potential benefits, including:
• A fixed price contract that protects an agency from escalating costs and
uncertain budget fluctuations.
• The assurance that the assets included in the contract will be maintained at or
above the pre-defined performance standards.
• A reduction in the cost of maintaining the facilities.

The acceptance of contract maintenance will also have a significant effect on asset
management in the future. From the contractor’s point of view, the availability of
assets inventory and condition information on the assets is invaluable. This
information is essential for establishing asset useful and residual life, and is an
important source of information for the development of accurate asset deterioration
models. In addition, contractors will need somewhat modified tools to enable them to
adopt a condition-based asset management approach rather than a budget-based asset
management approach. To be most effective, the analysis will need to be able to take
into account the length of the contract period, and should be able to evaluate the
financial implications of program expenditures to the contractor. But due to the
availability of a multi-year commitment of funds over the contract period, the
contractor has more of an ability to efficiently manage the assets and implement cost
effective maintenance and rehabilitation programs such as preventive maintenance.
However, because the risk associated with the contract has been transferred to the
contractor, the contractor bears the risk associated with a maintenance and
rehabilitation program that may exceed the amount of the fixed price contract. As a
result, the incorporation of risk into the analysis is an important analysis component.
The following chapter looks at outcome versus risk based performance measure
criteria in asset management contracts.

Outcome Based Asset Management Contracts

Most Government Agencies, particularly those responsible for transportation
infrastructure assets such as roads and highways, have developed a series of fully
outcome based asset management contracts as part of public sector strategy for
contracting out assets maintenance and rehabilitation services. In effect, these
contracts form public/private alliances of common intent whereby a significant
portion of risk in maintaining the life-cycle performance of public assets is transferred
to a private contractor for a negotiated lump sum over a contract period. In general,
agencies and the relevant government authorities are accountable for the management
of public assets, and for the management of risks associated with the ownership of
those assets. With outcome based asset management contracts, significant risks that
have traditionally been held by the relevant government authority are transferred to
the contractor. However, risk transfer does not necessarily mean better risk
management. The transfer of risk to the contractor only provides a benefit if the
contractor is able to manage the risk. Under traditional method specification contracts,
the contractor took the risk of being able to deliver the required services in accordance
with pre-determined specifications. Under outcome based contracts, risks associated
with asset management are transferred to the contractor. These outcome-based
contracts offer the relevant government authority the opportunity of shedding or
sharing a variety of risks previously borne solely by that authority (Stapelberg, 2004).

Identifying Risk Allocation in Outcome Based Contracts:
Most current asset management contract agreements offer options to the contractor in
terms of how risks are allocated and managed. For example, a Government Agency
would invite contractors to propose the costs associated with transfer of a table of
identified risks from the government authority to the contractor, which then forms
part of the contract negotiation with the successful contractor. This method has the
advantage of allowing the authority to evaluate the cost effectiveness of reducing its
risk exposure. Furthermore, a significant benefit of outcome based contracts is that
preventive maintenance of assets becomes the rule rather than reactive corrective
maintenance, although both are equally considered. Preventive maintenance can
therefore be regarded as an insurance premium against the underlying risks associated
with the condition of the asset. The aim is thus to select the type and level of
maintenance which results in minimal overall cost.
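Selecting the type and level of maintenance with minimal overall cost, as described above, can be sketched as a comparison of preventive spend against the expected (risk) cost of corrective work. All option names and figures here are hypothetical:

```python
# Toy illustration of selecting the maintenance level with minimal overall
# cost: each option pairs an annual preventive spend with an assumed
# failure probability and corrective cost. All figures are invented.

options = {
    # level: (annual preventive cost, failure probability, corrective cost)
    "run to failure": (0, 0.30, 500_000),
    "basic": (40_000, 0.10, 500_000),
    "intensive": (120_000, 0.02, 500_000),
}

def expected_total_cost(preventive, p_fail, corrective):
    """Preventive spend plus the expected (risk) cost of corrective work."""
    return preventive + p_fail * corrective

# The preferred option is the one with minimal combined cost.
best = min(options, key=lambda level: expected_total_cost(*options[level]))
print(best)
```

In practice the failure probabilities would come from the deterioration and risk models discussed in this report, not from fixed assumptions.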
However, there is an inherent difference in the manner in which government
authorities and contractors manage risks. Government authorities are required to be
expressly conservative in how they handle risk. In contrast, contractors are generally
amenable to taking on additional risks, provided they are accompanied by a
commensurate opportunity. Current asset management contract agreements thus either
allocate all the risks associated with the contract to the contractor, or provide a
framework for risk sharing on a negotiated basis. The benefit of this is that risk may
be managed better as a result of a much more explicit recognition of what the risks
are, than in traditional contract arrangements. However, assessing whether risk is
better managed through outcome based contracts is difficult, and could only be
quantified by a somewhat subjective assessment if some of the risks did eventuate.
Thus the most significant risk transferred to the contractor is the condition of the
asset network.

At the same time, the government authority takes on the risk that certain performance
measures, expressed as Key Performance Indicators, may not reflect actual
contractual performance. The implementation and acceptance of outcome based
contracts also have a significant effect on asset management. From the contractor’s
point of view, the accumulation of inventory and condition information on the assets
is essential. This information is important for establishing assets useful and residual
life, and is an important source of information needed to develop accurate
deterioration models. Contractors would therefore need the skills and tools to enable
them to conduct whole-of-life asset management and costing which includes
condition based asset management in addition to the normal budget / cost analysis. To
be most effective though, such an approach must take into account the length of the
contract period in contrast to the expected or residual life of the asset. Contractors
must also be able to evaluate the financial risk implications of program expenditures
over the short and long term. Due to a multi-year commitment of funds over the
contract period, the contractor should be able to efficiently implement cost-effective
preventive maintenance and rehabilitation plans over the long term.

Long Term Preventive Maintenance and Rehabilitation:


The development of long-term maintenance and rehabilitation plans for outcome
based asset management contracts differs significantly from the traditional approaches
that are used by government authorities. Some of the major differences between the
two approaches are influenced by the nature of the asset management contract. For
example, where an asset management contract is conducted on a lump sum or fixed
price basis, it should provide the contractor with a fixed budget (modified by rise and
fall rates) that can be used for the maintenance and rehabilitation of the assets over the
contract period. However, few agencies can claim to know, with complete certainty, the
amount of money available to them for the maintenance and rehabilitation of the assets
under their care, and very few contractors would know either. Furthermore, the
contractor does not always know how the funds are to be distributed from one year to
the next, which limits the ability to accelerate a rehabilitation program as needed, or
to reallocate funds to cost-effective treatments such as preventive maintenance on a
timely basis.
Long-term preventive maintenance, particularly in the case of infrastructure assets,
basically comprises periodic maintenance and rehabilitation works, where the work
activities can be planned and executed relatively independently of the day-to-day
operational performance of the assets. In this context, periodic maintenance means
recurring work carried out on the asset to limit the rate of deterioration, to maintain
functional performance, and to maintain or enhance the required condition of the asset.
Rehabilitation means the restoration of an asset substantially consistent with its
original design and condition, so that it may reasonably be expected to function in the
required condition until the expiration of its design life. In other words,
rehabilitation work is carried out to restore the condition
of an asset to the initial state of its life-cycle performance. It is thus important to
determine whether the asset has reached its cost-effective usage (i.e. performance),
whereby rehabilitation works are initiated to restore the condition of the asset to the
initial state of its life-cycle performance, in contrast to corrective maintenance in the
form of holding treatments. The contractor’s approach to long-term preventive
maintenance and rehabilitation is required to be an on-going continuous cycle.


Long-term preventive maintenance and rehabilitation thus comprise the following:


• Condition monitoring / measurement
• Asset integrity assessment
• Whole-of-life costing
• Maintenance works planning
• Works execution programming
• Asset integrity verification and validation.

Contracted Asset Management Planning for the Future:


The contractor must implement a planning process for long-term preventive
maintenance and rehabilitation works to ensure continuing asset integrity over the
term of the contract. This must be an interactive process between the acquisition of
asset condition data (and hence evaluation of deterioration rates that guide the
planning process), the planning process, and the execution of a works programme
which impacts upon asset condition. The contractor’s planning process must include:
• A Long-Term Operational Plan which includes annual budgets for corrective
and preventive action and projected costs, location, and timing of
rehabilitation works. The long-term plan must comply with outcome
performance criteria such as Maintenance Intervention Parameters and Asset
Condition Profiles (which will be considered in the following chapter).
• An Annual Works Programme of committed work in the short-term to an
accuracy of 10% for the first year, and 20% for the second.
• A Future Plan to allow the asset owner to consider long-term emerging issues
or potential requirements for change in the future.
A Condition Measurement Programme (CMP) to determine asset distress conditions
also needs to be carried out prior to the commencement of the Annual Works
Programme (AWP). However, the contractor must verify the AWP by comparing
visual condition assessment data of the assets network with the asset condition data
from the Condition Measurement Programme. This will allow for timely resolution by
the contractor of apparent differences between the two sets of data and allow
appraisals to be made on the reliability of the asset condition analysis. The asset
condition analysis is used to model and forecast deteriorating conditions of the assets
based on the condition data obtained through the CMP, and to develop the appropriate
rehabilitation treatment strategies based on budget / cost constraints. The steps
involved in planning for the future, and developing a whole-of-life costs programme
for the projected contract period, as well as establishing a readjusted budget spread for
the projected whole-of-life costs over the contract period, constitute the following:
• Estimating a maintenance and rehabilitation budget (top-down budgeting).
• Allocating a budget spread over the contract period (bottom-up budgeting).
• Developing asset distress conditions for initialising rehabilitation works.
• Optimising budgeted rehabilitation works using condition modelling methods.
• Determining whole-of-life costs based on asset deterioration / rehabilitation.
• Adjusting budgets over the contract period for projected whole-of-life costs.
• Preparing Asset Condition Profiles over the contract period.
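The first two steps above, estimating a total budget top-down and allocating a bottom-up spread across asset elements, can be illustrated with a short Python sketch. The figures, function names and data shapes below are illustrative assumptions, not part of the AAMCoG framework:

```python
def top_down_budget(network_value: float, annual_rate: float, years: int) -> float:
    """Estimate an indicative total maintenance and rehabilitation budget
    as a percentage of network replacement value per year, over the whole
    contract period (top-down)."""
    return network_value * annual_rate * years

def bottom_up_spread(total_budget: float, element_weights: list[float]) -> list[float]:
    """Allocate the total budget across individual asset elements in
    proportion to their assumed maintenance demand (bottom-up)."""
    total_weight = sum(element_weights)
    return [total_budget * w / total_weight for w in element_weights]

# A hypothetical $500m network budgeted at 2% of value per year over a
# 10-year contract, spread across three asset elements.
total = top_down_budget(500e6, 0.02, 10)
spread = bottom_up_spread(total, [3.0, 1.0, 1.0])
print(round(total))                # 100000000
print([round(s) for s in spread])  # [60000000, 20000000, 20000000]
```

In practice the spread would then be iterated against the Asset Condition Profiles, as described in the steps that follow.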


Estimating a total maintenance and rehabilitation budget (top-down budgeting):


An indicative annual budget for all routine and preventive maintenance, inclusive of
periodic and asset rehabilitation works, is initially estimated on a top-down basis to
incorporate all of the assets network.
Allocating a budget spread over the contract period (bottom-up budgeting):
A bottom-up budget spread is established for each individual asset element over the
contract period, by ultimately determining the preferred rehabilitation treatment for
each element (inclusive of a ‘do nothing’ strategy), subject to budget constraints from
the total maintenance and rehabilitation budget (i.e. top-down budgeting). The steps
involved in such a bottom-up budgeting strategy, whereby rehabilitation treatments
are optimised in an Assets Condition Model, and a whole-of-life costs programme for
the projected contract period, as well as an adjusted budget spread for the projected
whole-of-life costs over the contract period are established, comprise the following:
• Select a budget spread over the long-term for each asset.
• Optimise the budget treatments using model optimisation.
• Produce expected asset conditions for budgeted treatments.
• Prepare Asset Condition Profiles and ensure they meet or exceed the estimated
Worst Allowable Condition and Target Condition Asset Condition Profiles.
• Refine the bottom-up budget spread by repeating these steps until the budget
results in a long-term Operational Plan and Annual Works Plan that meets the
Worst Allowable Condition and Target Condition Asset Condition Profiles.
• Use an incremental benefit-cost technique to optimise treatments in order to
meet the maintenance and rehabilitation budget.
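The incremental benefit-cost technique mentioned in the last step can be sketched as a greedy selection under a budget constraint. The treatment names, costs and benefit figures below are hypothetical, and a real implementation would rank true incremental benefit-cost ratios between successive treatment levels:

```python
def select_treatments(options, budget):
    """options: {element: [(treatment, cost, benefit), ...]}, where each
    list includes a ('do nothing', 0.0, 0.0) alternative.
    Greedily adopt the treatment with the best benefit/cost ratio for each
    element while the remaining budget allows the upgrade."""
    chosen = {e: ('do nothing', 0.0, 0.0) for e in options}
    remaining = budget
    # Rank all paid treatments by benefit per dollar, best first.
    ranked = sorted(
        ((b / c, e, (t, c, b))
         for e, opts in options.items()
         for t, c, b in opts if c > 0),
        reverse=True)
    for ratio, element, (t, c, b) in ranked:
        # Upgrade an element only if the extra cost still fits the budget.
        extra = c - chosen[element][1]
        if extra > 0 and b > chosen[element][2] and extra <= remaining:
            remaining -= extra
            chosen[element] = (t, c, b)
    return chosen, remaining

options = {
    'A': [('do nothing', 0.0, 0.0), ('reseal', 40.0, 30.0), ('overlay', 100.0, 55.0)],
    'B': [('do nothing', 0.0, 0.0), ('patch', 20.0, 18.0)],
}
plan, left = select_treatments(options, budget=70.0)
print(plan['A'][0], plan['B'][0], left)  # reseal patch 10.0
```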
Developing asset distress conditions for initialising rehabilitation works:
Fundamental to infrastructure asset management is the planning and programming of
preventive maintenance and rehabilitation. Planning of asset rehabilitation treatments,
and programming the delivery of preventive maintenance and rehabilitation works to
ensure that the asset satisfies the required life-cycle performance standards, is largely
based on asset condition data evaluation and analysis. To ensure that assets are not
‘consumed’, infrastructure asset management must include, inter alia, the following;
¾ inspection and assessment of the asset condition;
¾ planning and scheduling maintenance and rehabilitation works;
¾ meeting specified maintenance and rehabilitation standards;
¾ monitoring and verifying the achievement of the specified standards.
Maintenance and rehabilitation works must be initiated before the condition of any
asset deteriorates beyond a specified condition. Outcome performance criteria such as
Maintenance Intervention Parameters (MIPs) are used to both initiate the maintenance
and rehabilitation works and to verify that the works have achieved the specified
standards. Consequently, the outcome based contract specifies Maintenance
Intervention Parameters for asset condition properties and Asset Condition Profiles
for asset condition attributes, to give an overall picture of the condition of the assets.
The ACPs are used to plan and monitor the effectiveness of long term preventive
maintenance and rehabilitation works in respect of a Target Asset Condition Profile.


Figure 6 illustrates a typical Distress Condition Strip Map of an infrastructure asset
element, specifically a highway pavement.

Figure 6. Distress Condition Strip Map of an Infrastructure Asset (Stapelberg, 2004)

Optimising budgeted rehabilitation works using condition modelling methods:


Figure 7 shows the variation in asset condition as a function of time measured in
asset life. The chart indicates roughly a 40% drop in asset condition within 75% of its
design life, and thereafter a further 40% drop in condition (i.e. an 80% drop in total)
within 87.5% of design life. Figure 7 also shows the required maintenance types in
relation to the change in asset condition, where:
• Routine Maintenance – Ongoing services performed on the assets, likely to be
required on a day-to-day basis to maintain the required condition, and not
amenable to detailed planning.
• Preventive Maintenance – Planned strategy of cost-effective treatments to an
existing asset that preserves the asset, retards future deterioration, and
maintains or improves the functional condition of the asset’s inherent systems,
without increasing the asset’s capacity.
• Rehabilitation – The restoration of an asset consistent with its original design
and condition and original geometry so that it may reasonably be expected to
function in the required condition until the expiration of its design period.
• Reconstruction – The re-establishment of an asset to a different design or
capacity, or using different material content.
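The deterioration curve quoted above (a 40% drop in condition by 75% of design life, and roughly an 80% drop by 87.5%) can be expressed as a simple piecewise model. The breakpoints follow the figures in the text, but the maintenance-type thresholds are assumptions for demonstration only:

```python
def condition(life_fraction: float) -> float:
    """Return remaining condition (1.0 = as new) at a given fraction of
    design life, using the piecewise rates quoted in the text."""
    if life_fraction <= 0.75:
        return 1.0 - 0.40 * (life_fraction / 0.75)
    if life_fraction <= 0.875:
        return 0.60 - 0.40 * ((life_fraction - 0.75) / 0.125)
    return max(0.0, 0.20 - 0.20 * ((life_fraction - 0.875) / 0.125))

def maintenance_type(cond: float) -> str:
    """Map a condition score to the maintenance types defined above
    (threshold values are assumed, not specified in the text)."""
    if cond > 0.60:
        return 'routine / preventive maintenance'
    if cond > 0.20:
        return 'rehabilitation'
    return 'reconstruction'

print(round(condition(0.75), 2))   # 0.6  (40% drop at 75% of life)
print(round(condition(0.875), 2))  # 0.2  (80% drop at 87.5% of life)
```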


Figure 7. Condition and Pavement Maintenance Types (Stapelberg, 2004)

Determining whole-of-life costs based on asset deterioration / rehabilitation:


It is essential in the management of infrastructure assets that all costs occurring during
the life of the asset be accounted for. When making economic comparisons, this has
not always been understood or practiced by design engineers because comparisons are
often made over a fixed, equal design period. Whole-of-life costing should account
for all costs and benefits associated with the planning, design, construction, use,
deterioration, maintenance and rehabilitation of the asset over its design life prior to
reconstruction. In whole-of-life-costing, these costs are calculated in today's dollars
(i.e. NPW). Modern design procedures, combined with accurate assets historical
databases, can be interfaced with assets deterioration models to develop predictive
asset condition analysis. With a combination of these approaches, engineers can
design infrastructure assets complete with appropriate maintenance and rehabilitation
regimes, and compare costs of various options. The basic concepts used in whole-of-
life costing for infrastructure assets are:
• A life period is selected for the asset under evaluation. This period is usually
chosen to coincide with the asset’s expected life prior to reconstruction, so that
the asset’s residual life can be assessed. Alternative rehabilitation treatment
types are then considered over the contract period, taking cognisance of their
impact upon the asset’s residual life. All alternatives must be compared over
the same period. When comparing various asset improvement treatments, the
life period chosen is very important to get a true indication of the asset’s
whole-of-life costs.
• Economic comparisons are based on net present worth (NPW) and equivalent
annualised cash flow (EACF), using an appropriate discount rate. Discounting
reflects the fact that greater weight is placed on costs incurred in the near
future than on costs incurred at a later date.
• An economic evaluation should consider all the possible alternative
rehabilitation treatment types within forecast budget constraints.
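The NPW and EACF concepts above can be shown in a short worked sketch. The cash flows, discount rate and analysis period below are illustrative assumptions; both alternatives are compared over the same period, as the text requires:

```python
def npw(cash_flows: dict[int, float], rate: float) -> float:
    """Net present worth of costs occurring in given years (year 0 = today)."""
    return sum(cost / (1 + rate) ** year for year, cost in cash_flows.items())

def eacf(present_worth: float, rate: float, years: int) -> float:
    """Equivalent annualised cash flow over the analysis period."""
    annuity_factor = (1 - (1 + rate) ** -years) / rate
    return present_worth / annuity_factor

# Two hypothetical treatment strategies compared over the same 20-year
# period at a 6% discount rate.
option_a = {0: 100.0, 10: 40.0}          # heavy initial treatment, one reseal
option_b = {0: 60.0, 7: 35.0, 14: 35.0}  # lighter treatment, two reseals
for name, flows in (('A', option_a), ('B', option_b)):
    pw = npw(flows, 0.06)
    print(name, round(pw, 1), round(eacf(pw, 0.06, 20), 2))
# A 122.3 10.67
# B 98.8 8.61
```

On these assumed figures the lighter, more frequent treatment strategy has the lower whole-of-life cost; a different discount rate or cash flow profile could reverse the ranking.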


Guidelines for Whole-of-Life Costing of Infrastructure Assets:


Guidelines can be developed for whole-of-life costing of infrastructure assets, such as:
• Whole-of-life costing analysis should not be separated from considerations of
alternative asset rehabilitation treatment plans. Care should be taken that
rehabilitation treatment plans are balanced according to forecast budget
constraints and estimated costs for each treatment option, and benchmarked to
a required quality standard.
• The accuracy of unit cost data for both maintenance and rehabilitation
activities should be carefully considered with respect to inflation increases and
provision for rise and fall in life-cycle costing. Provision for rise and fall in
life-cycle costing could typically be based on seasonal variations of
maintenance activities in a single year, and the extent of seasonal variations
such as excessive weather conditions from one year to the next. This is usually
difficult to predict, and a pattern of weather fluctuations is applied instead.
• The performance models used should reflect current and projected assets
condition over the life-cycle of the asset (i.e. life-cycle performance). To a
reasonable extent, the performance models should reflect design and
construction risks, as well as optimal rehabilitation treatments.
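The rise and fall provision mentioned in the guidelines above is typically an index-based adjustment of tendered unit costs. The base cost and index values below are assumptions for illustration:

```python
def adjust_unit_cost(base_cost: float, base_index: float, current_index: float) -> float:
    """Scale a tendered unit cost by the movement of a published cost index
    (a simple rise and fall adjustment)."""
    return base_cost * current_index / base_index

# A unit cost tendered at $85.00 against an index of 100.0, adjusted after
# the index moves to 103.0.
print(round(adjust_unit_cost(85.0, 100.0, 103.0), 2))  # 87.55
```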
Adjusting budgets over the contract period for projected whole-of-life costs:
A planning process for long term maintenance and rehabilitation works is
implemented to ensure continuing asset integrity over the term of the contract. This
process is an interaction between the evaluation of asset condition deterioration rates,
and the planning and execution of the Annual Works Program, which impacts the
condition of the assets. Refining the bottom-up budget spread is achieved by iterating
the budgeting steps of firstly optimising rehabilitation treatments using the Assets
Condition Model, producing expected asset conditions for budgeted treatments, and
then preparing Asset Condition Profiles (ACPs), until the planned budget results in an
Annual Works Plan (AWP) that meets the Worst Allowable Condition and Target
Condition Asset Condition Profiles.
Preparing Asset Condition Profiles (ACPs) over the contract period:
During the term of the contract period, regular condition measurements of the assets
are made, including the following two types of performance measurements:
• Regular day-to-day surveillance of the asset to ensure that the contractor is
complying with the specified Maintenance Intervention Parameters (MIPs) for
both short term and long term road maintenance and rehabilitation works.
• Asset condition surveys to measure the nominated attributes for each asset
element. This allows for the determination of calculated Asset Condition
Profiles (ACPs) to which the asset is being maintained, and to predict the
condition of the asset in future years. These measurements are undertaken
annually during the initial years of the outcome based contract; however, the
frequency may be increased or decreased during the term of the contract.
The results of the measurements are linked to Key Performance Indicators to provide
incentives in the contract to encourage the contractor to achieve or exceed the
minimum specified condition of the asset. These performance measures are primarily
undertaken to provide an objective assessment for audit and surveillance.
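The link between measured condition and Key Performance Indicators described above is often implemented as a lump sum payment adjustment. The bands and percentages below are invented for illustration and are not drawn from any actual contract:

```python
def payment_adjustment(measured: float, minimum: float, target: float) -> float:
    """Return a hypothetical payment adjustment factor: a deduction below
    the minimum specified condition, neutral between minimum and target,
    and a bonus at or above the target condition."""
    if measured < minimum:
        return -0.05   # assumed 5% deduction for non-compliance
    if measured >= target:
        return 0.02    # assumed 2% incentive for exceeding the target
    return 0.0

# Measured condition 0.58 against a minimum of 0.60 and a target of 0.75.
print(payment_adjustment(0.58, minimum=0.60, target=0.75))  # -0.05
```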


Maintenance Intervention Parameters and Asset Condition Profiles:


Most asset management contracts encompass infrastructure asset management, which
in effect includes the identification, planning, budgeting, costing, scheduling, and
delivery of routine and preventive maintenance as well as rehabilitation of
infrastructure assets, and many ancillary services such as environmental management.
Fundamental to these contracts are outcome performance criteria such as Maintenance
Intervention Parameters (MIPs), predominantly to gauge and control the efficiency of
short-term corrective maintenance in compliance with contractual requirements, and
Asset Condition Profiles (ACPs) to determine the effectiveness of long-term
preventive maintenance and asset rehabilitation.
Contractors must identify, plan and programme works to meet performance outcomes
defined through the application of MIPs and compliance to ACPs, covering a wide
range of infrastructure asset defects and asset condition attributes. Performance with
respect to contract obligations and compliance with the outcome performance criteria
of MIPs and ACPs is assessed annually, whereby a number of Key Performance
Indicators (KPIs) directly linked to lump sum payment adjustments are used as part of
contract administration. The transfer of many of the asset management risks and
responsibilities to the asset management contractors has introduced new challenges.
These challenges are being met with an audit and surveillance methodology focusing
on the contractors’ Quality Management Systems (QMS) to determine the efficiency
and effectiveness of their obligations, rather than the traditional approach of inspections
of every item of work delivered.
The efficiency of short-term corrective maintenance (defect maintenance) is based on
quantifiable attributes of the performance criteria of Maintenance Intervention
Parameters (MIPs):
• Maximum intervention levels – maximum severity of a defect at or before
which corrective action must be taken.
• Maximum defective condition – maximum deterioration in condition beyond
maximum intervention level.
• Maximum response time - maximum time allowed for corrective action from
the time of observation of a defect.
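The three MIP attributes above can be combined into a simple compliance check. The field names and the sample defect figures are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Defect:
    severity: float  # measured severity of the defect
    days_open: int   # days since the defect was observed

def mip_compliant(defect: Defect, max_intervention: float,
                  max_defective: float, max_response_days: int) -> bool:
    """A defect breaches the MIPs if it has deteriorated beyond the maximum
    defective condition, or if it is at or above the intervention level and
    the maximum response time has elapsed without corrective action."""
    if defect.severity > max_defective:
        return False
    if defect.severity >= max_intervention and defect.days_open > max_response_days:
        return False
    return True

# A defect at severity 0.7, open for 10 days, against an intervention level
# of 0.5, a defective condition limit of 0.9 and a 7-day response time.
print(mip_compliant(Defect(0.7, 10), 0.5, 0.9, 7))  # False
```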
The effectiveness of long-term preventive maintenance and rehabilitation is based on
quantifiable attributes of the performance criteria of Asset Condition Profiles (ACPs):
• Asset life-cycle performance – deterioration / rehabilitation cycle based on the
rate of deterioration and the effect of the asset rehabilitation programme.
• Asset rehabilitation programme – programmed rehabilitation treatments based
on the asset’s distress conditions and useful life since the last treatment.
• Whole-of-life costing – asset rehabilitation based on life-cycle performance,
budget / cost constraints and asset residual life prior to reconstruction.
Asset Condition Profiles are used as a regular performance measure of the condition
of the asset to ensure that the objectives of the outcome based contract relating to the
integrity of the assets are being achieved. Thus, during the term of an outcome based
contract, the contractor must not allow the calculated Asset Condition Profiles to
deteriorate to a level lower than the Worst Allowable Asset Condition Profile.
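Verifying that the calculated profiles stay at or above the Worst Allowable Asset Condition Profile over the contract term amounts to a year-by-year comparison. The condition values below are illustrative:

```python
def profiles_acceptable(calculated: list[float], worst_allowable: list[float]) -> bool:
    """Every year's calculated Asset Condition Profile value must not fall
    below the Worst Allowable Asset Condition Profile for that year."""
    return all(c >= w for c, w in zip(calculated, worst_allowable))

print(profiles_acceptable([0.82, 0.78, 0.74], [0.70, 0.70, 0.70]))  # True
print(profiles_acceptable([0.82, 0.68, 0.74], [0.70, 0.70, 0.70]))  # False
```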


Risk Based Performance Measure Criteria


As indicated previously, current asset management contract agreements either allocate
all the risks associated with the contract to the contractor, or provide a framework for
risk sharing on a negotiated basis. Risk can thus be managed as a result of an explicit
recognition of what the risks are. However, assessing whether risk is better managed
through outcome based contracts is difficult, consequently the most significant risk
transferred to the contractor is the condition of the asset network. The risks associated
with a risk based performance measure framework need to be carefully considered
and addressed. This involves a significant shift in the risks and roles of those
involved. The types of risks with which a contractor may not have been concerned
under a prescriptive method contract, but which must now be considered, include (TNZ, 1998):
• Poor brief or outline of the required outcome.
• Extended asset management contract periods.
• Unrealistic or too tight tolerances.
• Unproven confidence limits on specified performance values.
• Use of unproven/untried materials.
• Condition of the assets.
• Poor quality control producing variable results.
• Poor communication between parties.
• Lack of a clear understanding of the performance specifications.
• Methods of measurement, and variability between performance measures.

Measurement Methods and Limitations:


A desirable feature of a risk based performance contract is the ability to measure the
in-situ (constructed) asset performance and to determine the asset’s expected life. This
involves new measurement methods, and in some cases specialised equipment involving
new condition assessment techniques. The method of measuring in-situ performance
may entail critical measurements with very small variations between a pass or fail.
This is a risk that must be realised and managed by the contractor, who must be
confident with both design and construction to ensure that the final performance is
above the minimum. Disputes increase the level of risk and uncertainty with the
additional costs of using (usually expensive, higher quality and more accurate) tools
to measure more precisely the asset’s performance when it is close to the minimum. If
asset performance fails compared to a performance measure standard, it is usually the
contractor who bears the cost for possible remedial work, along with reduced payment
for non-compliance. To avoid such circumstances it is advisable for both parties
(agency and contractor) to conduct a joint final compliance performance measure, and
where possible to reach agreement on mutual terms. As more in-situ performance
measures are developed to measure the asset’s predicted life, maintenance cycle
periods can be set to reflect the risk for both parties, since most defects usually
become evident within that time.
contract, where specifications were set to fixed methods, and the contractor was
relieved of responsibility after a relatively short contract period, to risk based
performance contracts where contractors are required to stand by their materials and
methods for an extended contract duration.

Initiatives to Manage Risk Based Performance Contracts:


Experience to date has highlighted key factors that help eliminate disputes and assist in
managing the risks associated with delivery of a minimum Level of Service. Stating the
required objectives up-front goes a long way to improving the understanding of roles,
responsibilities and expectations at the pre-tendering stage. The agency must
endeavour to provide as much historic information as necessary along with areas
where difficulties may be encountered, to ensure all tenderers consider the risks and
include these in their contract price, thereby eliminating possible misunderstandings
at a later date. This information will help to reduce ‘over-designing’ and avoid
excessive detailed examination of the assets by each tenderer. The agency must avoid
placing too many prescriptive (traditional) requirements into the tender document and
allow sufficient scope for the introduction of innovative ideas by the contractor.
Experience has shown that agencies tend to include prescriptive clauses to avoid
problems of the past that will no longer be their responsibility. Prescriptive
performance specifications, together with end result requirements, restrict the
contractor from utilising innovative new materials and methods that may not comply
with previous (traditional) prescriptive type specifications.
Initiatives to manage risk based performance contracts include the following:
• Quality assurance system and quality control.
• Contract quality management plan.
• Plans submitted by the contractor outlining methodology for each activity.
• Partnering concept whereby all parties sign an agreement stating goal, mission
statement and shared objectives at the commencement of the contract.
• Training to improve understanding of philosophy and shift in responsibilities.
• Technical working group meetings to convey information, explain the
technical details and discuss proposed improvements to specifications.
• Produce guidelines, manuals and advice on experiences to date.
• Longer term contracts, with performance criteria for all activities.
• Progressive specification implementation for industry to adapt to the changes.
• Refinement of acceptance measures/procedures/test methods/confidence limits
of data obtained from trials to reconfirm performance criteria.
• Independent monitoring/auditing for double-check on performance outcomes.
Risk based performance measure criteria are significant in today’s asset management
contract environment where contractors are becoming more responsible for delivery
of outcomes, to the required standard, on time, and within specification. It is vitally
important to clearly identify both stated and potential risks and to ensure they are
adequately managed. Any recognised difficulties encountered during asset design and
construction must be absorbed and solved, generally for no extra payment, unlike the
traditional variations expected under a method specified contract. A successful
contractor will endeavour to anticipate and manage these unforeseen ‘extras’ without
causing unnecessary concern and at the same time ensuring the final result complies
with the specified brief, while maintaining profitability. Risk based performance
measure criteria determine performance specifications that have been shown to be
successful in asset management contracts.


8. Assets Performance
Measurement Framework
The Performance Management Plan
Performance management is a fundamental component of any business management
system. A Performance Management Plan provides a formal, regular, rigorous process
for performance criteria data collection, analysis and usage. Thus changes in
efficiency and effectiveness can be measured, enabling comparison of performance
over time and against that of other similar entities. A Performance Management Plan
provides a performance process whereby efficiency is based on quantifiable attributes
of performance criteria such as Maintenance Intervention Parameters (MIPs) and
effectiveness is based on quantifiable attributes of performance criteria such as Asset
Condition Profiles (ACPs), in the case of infrastructure assets (GAMS, 1997).
Outcomes of Performance Management:
A Performance Management Plan;
¾ provides a comprehensive picture of how an organisation is progressing
towards achieving its performance goals;
¾ provides a mechanism for responding to emerging issues/cost pressures
that may require remedial action;
¾ establishes a basis for a service standard, and resource and pricing
negotiations between stakeholders;
¾ identifies potential to improve the cost effectiveness of services (through
comparison with other organisations);
¾ forms an integral part of the asset management process, linked to service
standards and assets, financial and environmental management.
Outputs of Performance Management:
Outputs from a Performance Management Plan include;
¾ performance monitoring and performance benchmarking reports in the
asset management process;
¾ assets strategic/detailed planning reports;
¾ capital assets investment program revised and updated annually.

Performance Management Framework


A Performance Management Framework allows service organisations to address:
¾ desired performance management outcomes and objectives;
¾ establishing a performance model with performance measures;
¾ setting performance targets and monitoring performance;
¾ performance assessment and evaluation and performance improvement;
¾ statutory performance reporting requirements.


The Performance Management Plan in a Total Management Planning process
provides the framework for performance management, capable of consistent
implementation by all service organisations. At the initial stage it defines only the
minimum number of generic performance measures for performance assessment and
reporting. Service organisations are therefore required to review their performance
management requirements and use the Performance Management Plan to further
develop their own asset management measures as appropriate (GAMS, 1997).
Features of a Performance Management Framework:
The Performance Management Framework identifies a number of key objectives for
operational and managerial responsibility within the organisation. The Performance
Management Framework has the following features:
• Measurement, evaluation and reporting of performance starts at the asset level.
• Asset performance is summarised to a whole-of-service level.
• All planning performance is assessed and reported.
• Asset program performance for investment, management and renewal or
disposal is assessed and reported.
• Asset performance for investment and management-in-use is assessed, taking
into consideration;
¾ physical performance;
¾ functional performance;
¾ financial performance.
• Performance of the asset management system is also assessed, and
improvement strategies included in the assets strategic planning process.
• An organisation’s performance focus is on improvement strategies developed
and implemented by its individual services.

Performance Measures:
Performance measures are tools developed by organisations to measure work
performed and results achieved against a given objective. Performance measures in
asset management should focus on the performance of assets and asset programs for
capital assets investment, operations, maintenance and disposal. The performance
measures should;
¾ identify the performance of the asset or asset program;
¾ align with the key objectives of efficiency, effectiveness, and compliance;
¾ be used to assess whether objectives are achieved;
¾ be able to be monitored;
¾ provide the necessary information for asset and whole-of-business reporting;
¾ be reviewed as part of the planning process to ensure relevance.

Asset-owner and service provider organisations are required to set Levels of Service
in their Asset Management Plan and monitor/measure their performance against those
Levels of Service for key outcome categories with the appropriate performance
measures, and reporting requirements outlined in the Performance Management Plan.


Developing Performance Measures:


To develop performance measures, the underlying logic of asset programs should be
analysed. The suggested approach (detailed below) is to identify the key areas and
issues within an asset management program, particularly in relation to outputs.
Table 3. Asset Management Performance Objectives
(GAMS, 1997)


Understanding the Objectives:


The key objectives of the asset-owner and/or service provider organisation’s corporate
(or business) plan need to be stipulated, and the outputs of the asset management
program should be aligned to the objectives. The asset management performance
objectives are shown in Table 3.

Setting the Scene:


Before starting to develop performance measures it is good practice to;
¾ assess the environment in which the asset management program operates;
¾ involve asset management staff and other stakeholders in decisions on the
outputs to be measured and the performance information to be collected.
This is particularly important if asset management staff are concerned that the links
between asset program inputs and outputs are not sufficiently within their control. It
also promotes acceptance of responsibility for the asset management program outputs.
Understanding the Asset Management Process:
It is important to understand the links between inputs and outputs of the asset
management program. This analysis is best undertaken in consultation with the major
stakeholders so that a shared understanding of the program is developed. The assets
and the asset management program outputs should be assessed for efficiency,
effectiveness and compliance. When deciding on the appropriate method for
measuring performance information, asset managers should consider;
¾ the level of detail that should be measured;
¾ the length of time it takes for the program outputs to be achieved;
¾ the degree to which external factors may influence achievement of outputs;
¾ the possibility of unanticipated program outputs;
¾ the extent to which managers are responsible for achievement of results.
When assessing assets effectiveness for investment and management-in-use the
following characteristics are to be addressed;
¾ functional performance (functionality);
¾ physical performance (condition);
¾ financial performance (investment).

Determining Performance Measures:


The following questions will help asset managers to determine the appropriate
performance measures and assess their relevance and validity:
¾ Will the measures assess performance of the asset and/or program?
¾ Will the measures provide ongoing monitoring and evaluation
information?
¾ What happens if the information is inappropriate?
¾ Can the performance measures identify performance trends?
¾ Can the performance measures be used for comparison with similar
organisations?


Characteristics of Good Performance Measures:


From the Australian National Audit Office publication: ‘Better Practice Guide:
Performance Information Principles’, (ANAO, 1996), the characteristics of good
performance measures include the following;
¾ they are qualitative or quantitative as appropriate;
¾ they achieve appropriate balance;
¾ data is valid, reliable and accurate;
¾ they include an appropriate number of measures;
¾ cost is balanced against benefit;
¾ they are repeatable for consistent use.
Qualitative and quantitative performance measures:
Performance measures may be quantitative (numerical value) or qualitative
(descriptive). It is often only through qualitative performance measures that objectives
and strategies are directly linked and cause/effect (impact) relationships demonstrated.
Achieving the appropriate balance:
All performance measures should be considered together to assess the overall
performance of the program. If only one aspect of an asset management program’s
performance is measured, it is likely that this is what asset managers will focus on,
and as a result, overall program performance could deteriorate. Performance measures
should be balanced to facilitate;
¾ management and accountability;
¾ analysis and improvement of all factors that influence outputs.
Each performance measure should provide a different perspective on program
performance. It is important that the elements of a set of performance measures are
selected because they measure something that is significant and useful, not just
because they are easy to measure.
Validity, reliability and accuracy of data:
Any performance measure used should be of a high quality. Therefore, it should be;
¾ valid, in that it actually measures the characteristic it purports to measure;
¾ reliable, in that under a given set of conditions the information will not vary significantly;
¾ accurate and timely.
It is necessary to ensure that the information collection techniques are appropriate and
that the information is not biased.
Appropriate numbers of measures:
There is no ‘ideal’ number of performance measures. The emphasis should be on
balance, quality, usefulness and timeliness. A small set of key performance measures
is likely to be more manageable and consequently more useful. It is important not to
collect large volumes of performance data that are not strategic or timely, or are
simply too difficult to interpret and manage. However, it may be necessary for people
at different management levels, or in different geographical areas, to have measures
on different aspects of performance.

Balancing cost against benefit:



The cost/benefit of collecting key data items or improving existing data collections is
an important consideration. The benefits of collecting additional or more accurate
measures need to outweigh the costs of collecting, storing and using the information.
It is useful to consider;
¾ the risk that investment in performance information collection may not
produce long-term benefits;
¾ the possibility that policy or program changes may result in performance
information becoming inadequate or irrelevant;
¾ the risk that poor data collection processes may render the resulting
performance information unreliable and unusable;
¾ the collection costs for individual items of performance information.
Consistency of use:
An important aspect of performance measures is that they be used consistently to
determine what trends exist (e.g. whether performance is improving over time).

Setting Performance Targets


Performance targets specify the desired level of performance, and often involve some
improvement on the existing level of performance. Asset managers should set
performance targets during the planning process, when they specify the required level
of performance for each of the asset life cycle activities. Achievable targets should be
set that focus on overall asset performance and asset management program
performance rather than on the achievement of individual targets. Targets should;
¾ relate to overall program performance, or the components of that
performance;
¾ encourage improved performance;
¾ include at least one target for each measure;
¾ be able to be monitored and measured;
¾ make it possible to identify and resolve problems readily;
¾ be achievable and yet challenging;
¾ provide benchmarks for continuous improvement;
¾ be reviewed as part of the planning process to ensure relevance and
practicality.
Performance Target Characteristics:
Performance targets should have the following characteristics:
• They should focus on outcomes achieved rather than action taken. Measuring
whether asset management programs are succeeding is more important than
knowing how much activity is taking place.
• They should focus on outcomes that can be influenced by the asset managers
without themselves becoming individual targets.
• They need to be used consistently, so that results can be analysed and
compared over time and across programs.


Setting Performance Targets:


Simplicity of performance targets should be as highly valued as reliability. Complex
approaches are expensive and often involve a high level of expertise. Qualitative as
well as quantitative information should be considered. As numbers rarely tell the
whole story, qualitative performance information may be just as important as
quantitative performance measures. The following are several possible approaches to
setting performance targets for the assets and their programs;
• Frontier practice to achieve projected performance, based on leading edge
technology, where there is no benchmark.
• Best practice through benchmarking.
• Management decisions and/or calculated decisions, given resource and
staffing limitations based on previous levels of performance.
• Performance standards and external standards.
• Benchmark performance comparisons with other state/national/international
asset-owner organisations.
• Maintaining current performance (i.e. keep the status quo).
• Current performance plus a percentage increase to achieve incremental
improvement (i.e. continual improvement).

Asset Level Performance Targets:


The setting and/or recording of asset level performance targets will occur at different
asset life cycle stages of the total asset management process to suit the asset planning
being undertaken, as follows:
Strategic planning:
A planning improvement program is part of the Assets Strategic Plan where the
performance measures and targets required to assess and evaluate the performance of
the planning processes are recorded. Review of performance targets follows the
monitoring, evaluation and improvement of planning performance.
Investment:
Setting investment performance targets involves setting asset investment performance
measures and targets, after analysing and selecting the preferred option. For each new
major asset, investment measures and targets are established during the preparation of
the asset investment proposal. The Capital Asset Investment Plan identifies the
performance target for each new major asset and investment program.
Operations:
Setting asset performance targets involves setting asset operations performance
measures and targets. This follows identifying the required asset and the requirement
to measure its performance.
Maintenance:
Setting asset maintenance performance measures and targets for both asset
performance and maintenance performance in the Asset Maintenance Plan.
Renewal/Replacement:
The Asset Disposal Plan identifies and sets the performance measures and targets for
the asset renewal/replacement program.


9. Establishing Assets
Performance Benchmarks
Aligning Assets with Performance:
An important aspect of asset management strategic planning and asset performance is
identifying assets that do not have the necessary capacity or functionality to
adequately address the required service delivery standards; identifying the assets that
have capacity or functionality in excess of required delivery standards; and
identifying assets that do not support service delivery objectives and should be
disposed of. The aspects that address service delivery requirements, besides analysis
of alternative methods of assets usage and non-asset solutions, include determining
the utilisation of existing assets and establishing assets performance benchmarks.

Establishing Assets Performance Benchmarks


Performance Measures:
During development of measures against which performance will be assessed, three
broad measures are identified, specifically capacity, suitability, and condition (with
compliance a possible fourth measure). These broad measures are sufficiently generic
to accommodate the requirements of most public sector agencies, and
form the basis of a Performance Framework Hierarchy within which further measures
can be articulated, as shown in Figure 8 (SAM, 2007).

Figure 8. Performance Framework Hierarchy


(SAM, 2007)

Rating Scales:
For each performance measure, levels of performance with associated descriptive
attributes need to be identified against which performance can be assessed. The levels
of performance are listed to form a rating scale. Various rating scales are available
e.g. hierarchical, or relative ranking scales such as +2, +1, 0, -1, -2.


Performance Targets or Benchmarks:


A series of asset performance targets or benchmarks need to be determined before
measurement of actual performance can take place. It is important that these asset
targets reflect the service delivery requirements of the organisation or department.
Delineation of these business or service requirements can help an agency define the
required performance or targets for its assets. Using the relative rating scale chosen
previously, zero always represents the target performance. Under the simple
hierarchical scale, however, any level may be selected as the appropriate target.
For purposes of illustrating the ability to change the target level of performance,
assume that the initial target performance level in the hierarchical scale is set at 3.
Both these rating scales can be seen in relation to a Performance Rating Continuum in
Figure 9. Here the attributes describing a rating of 6 on the continuum reflect the
initial target performance level shown as Performance Level 1. It may be that, after
review, the target performance is re-set at 7 on the continuum. This is reflected in the
absolute scale by 4, and requires that the zero target in the relative scale be aligned to
seven on the continuum as shown in Figure 9.

Figure 9. Comparing Rating Scales on a Performance Rating Continuum


(SAM, 2007)

The hierarchical scale has been selected to demonstrate the performance assessment
process, but other scales are equally valid. In the hierarchical scale, a level of 5
indicates the 'most' or highest performance, whereas 1 indicates 'least' or lowest
performance. The ascending rating scale indicates least to most, not worse to better.
Hence, the target does not always need to be set at 5 (highest performance). A
medium level of performance (say 3) or lower may be quite adequate or appropriate
as a target, depending upon the service requirements and their criticality.
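The alignment between the hierarchical and relative scales described above can be sketched in code. This is a minimal illustration under the assumption that the relative rating is a simple offset from the chosen target level; the function name and the conversion rule are illustrative, not drawn from SAM (2007).

```python
# Illustrative sketch: expressing a 5-level hierarchical rating on the
# relative (+2 .. -2) scale, given a chosen target level. The offset rule
# below is an assumption for illustration only.

def to_relative(hierarchical_rating: int, target_level: int) -> int:
    """Express a hierarchical rating (1..5) relative to the target,
    so that the target level itself always maps to zero."""
    if not 1 <= hierarchical_rating <= 5:
        raise ValueError("hierarchical rating must be between 1 and 5")
    return hierarchical_rating - target_level

# With the target set at 3, a rating of 3 reads as 0 (on target),
# 5 as +2 (above target) and 1 as -2 (below target).
print(to_relative(3, target_level=3))  # 0
print(to_relative(5, target_level=3))  # 2
print(to_relative(1, target_level=3))  # -2
```

This makes the earlier point concrete: re-setting the target simply shifts which hierarchical level corresponds to zero on the relative scale.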

Target Performance Profile:


For a particular asset, a performance profile will emerge in relation to the range of
performance measures against which the asset is to be assessed e.g., location,
utilisation, functionality, environment, physical and financial. The profile represents
the target performance levels established at a point in time and is represented in
Figure 10. As discussed above, over time, the targets can be reassessed and re-set to
accommodate changing user requirements.
Profiles can be established for various business types, e.g. education, health and justice
sectors. The concept can be equally applicable to further differentiate service
requirements within a particular sector.


Figure 10. Establishing a Target Performance Profile


(SAM, 2007)

Corporate Priorities and Criticality Weightings:


Having previously established targets or benchmarks for each performance measure,
it is then important for an agency to consider the relative importance of each measure
in supporting its business service strategy. This relativity should reflect corporate
priorities such as business imperatives, health and safety, security, environmental
sustainability, legal and financial imperatives. Performance measures can therefore be
assigned weightings to signify their relative criticality. In Figure 11 below, suitability
is given a high weighting whilst condition is assigned a lower weighting. Although
priorities have only been assigned to the high level performance measures i.e.,
Capacity, Suitability and Condition, priorities can be given to any measure against
which performance is being assessed.

Figure 11. Assigning Criticality Weightings


(SAM, 2007)
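The idea of criticality weightings can be illustrated with a short sketch that combines ratings for the three high-level measures into a single weighted score. The ratings and weights below are invented for illustration; the source only indicates that Suitability carries a higher weighting than Condition.

```python
# Illustrative sketch of criticality weightings: combining ratings for
# Capacity, Suitability and Condition (each on a 1-5 scale) into one
# weighted score. Ratings and weights are assumed values, not from the source.

def weighted_performance(ratings: dict, weights: dict) -> float:
    """Weighted average of performance ratings."""
    total_weight = sum(weights.values())
    return sum(ratings[m] * weights[m] for m in ratings) / total_weight

ratings = {"capacity": 4, "suitability": 3, "condition": 5}   # assumed audit results
weights = {"capacity": 0.3, "suitability": 0.5, "condition": 0.2}  # assumed priorities
score = weighted_performance(ratings, weights)
print(round(score, 2))  # 3.7
```

Because Suitability is weighted most heavily here, a mediocre suitability rating pulls the overall score down more than a poor condition rating would.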


Integrated Measurement and Benchmarks


Throughout an asset’s life cycle there is a continuous inter-relationship between the
asset’s Capacity and Utilisation, Budget and Actual Expenditure, Possible and Actual
Condition, and the asset’s Replacement Cost and Depreciated Value. In order to
successfully discharge their rights and responsibilities, owners and managers of public
sector assets must be able to successfully manage the gaps and tensions that naturally
exist between these factors i.e.:

Asset Utilisation - capacity versus utilisation;


Asset Costs - budget versus actual expenditure;
Asset Condition - possible condition versus actual condition; and
Asset Value - replacement cost versus depreciated value.

It is the individual, and collective interplay, between these factors at a given point in
time that gives rise to the need to make decisions about the asset’s usage, operation,
maintenance, refurbishment, redevelopment or disposal. The typical pattern of these
factors over an asset’s lifecycle is shown in Figure 12.

Figure 12. Typical Life Cycle Pattern of Asset Performance Factors

Asset Capacity and Usage


The management of individual assets and portfolios should be based on ensuring they
maintain their capacity and are used to deliver the intended service to the community.
Although the unit for measuring the capacity and use of a particular type of asset will
be unique, all assets have a nominal specified capacity for the service that they are
intended to deliver.


For example, a building’s capacity would be the number of occupants it could
accommodate per floor or per room, a pumping station’s capacity would be to pump a
specified volume of water per day, and a road will have a capacity for a specified
number of vehicles per year. Regardless of the unit of measure, it is possible to
determine an asset’s capacity and regularly measure and maintain data about actual
levels of utilisation. This information is critical for charting historical changes in
utilisation, predicting future levels of usage and determining any need to reduce or
increase the level of the asset’s capacity.

Asset Costs and Budgets


There is a direct correlation between the usage of an asset and its ongoing costs for
operation and maintenance. The management of asset costs relative to an annual
budget is an ongoing task and the full costs of providing, operating and maintaining
assets should be reflected in the organisation’s budgets. This information is critical
for assessing historical variation in costs and predicting future levels of expenditure.

Asset Condition
The condition of all assets declines naturally as a result of usage, wear and tear and
natural aging of surfaces and materials. It is however desirable for the condition of an
asset to be maintained at the level which is commensurate with the intended quality
and quantity of service that is provided by the asset. Depending on the type of asset
and the quality of service to be delivered, it might be necessary for some assets, or
component parts, to always be maintained in a high standard of condition while for
other assets, or in some circumstances, it might be appropriate for the asset to be
maintained in a lower standard of condition.

In all circumstances however it is essential that the desired level of condition standard
for the asset be established and for any changes in condition to be measured on a
regular basis through a condition audit. This should be done at least every 5 years.
Such a regular condition audit program provides consistent, quantitative and
qualitative information relating to asset performance in terms of condition and
associated risks. The subsequent evaluation of the audit establishes the amount of
maintenance or investment necessary to meet the standard, and defines the base-line
for determining the adequacy and effectiveness of maintenance over both the
preceding and subsequent cycles. The condition of an asset is determined by the
following factors shown in Table 4:

Table 4. Determining Factors for the Condition of an Asset

Condition Factor     Definition

Physical Condition   The physical state of the asset, including the weather-tightness,
                     structural stability/integrity and security that is required.

Functionality        The way in which an asset has been designed, modified and/or
                     developed, and the extent to which it currently meets the
                     contemporary functional needs of the users.

Compliance           The extent to which an asset complies with statutory design and
                     operational requirements.


Assessing a nominal scale value for assets condition involves the application of the
criteria given in Table 5.

Table 5. Condition Scale for Assessing Assets Condition

Condition Scale    Physical Condition                Functionality                      Compliance

5. Excellent       As new                            Fully meets designed function      Complies with laws, standards and codes
4. Good            No sign of deterioration          Good working order                 Complies with laws, standards and codes
3. Satisfactory    Minor non-critical deterioration  Functionally adequate              Complies with laws, standards and codes
2. Poor            Significant material damage       Functionally impaired              Does not comply with laws, standards and codes
1. Unsatisfactory  Extreme material damage or decay  No longer meets designed function  Does not comply with laws, standards and codes
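Table 5 lends itself to a direct transcription as a lookup, for example where condition audit results are recorded in software. The data structure and function below are an implementation choice for illustration, not part of the source.

```python
# Table 5 transcribed as a lookup: nominal score -> (label, physical
# condition, functionality, compliance). Structure is illustrative only.

CONDITION_SCALE = {
    5: ("Excellent", "As new", "Fully meets designed function", "Complies"),
    4: ("Good", "No sign of deterioration", "Good working order", "Complies"),
    3: ("Satisfactory", "Minor non-critical deterioration",
        "Functionally adequate", "Complies"),
    2: ("Poor", "Significant material damage",
        "Functionally impaired", "Does not comply"),
    1: ("Unsatisfactory", "Extreme material damage or decay",
        "No longer meets designed function", "Does not comply"),
}

def describe_condition(score: int) -> str:
    """Render a condition score with its Table 5 descriptive attributes."""
    label, physical, functionality, compliance = CONDITION_SCALE[score]
    return (f"{score} ({label}): {physical}; {functionality}; "
            f"{compliance.lower()} with laws, standards and codes")

print(describe_condition(3))
```

Recording the descriptive attributes alongside the numeric score keeps condition audit data self-explanatory when it is later aggregated or reviewed.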

Asset Values
The value of assets depreciates over time as a natural function of aging, usage and
obsolescence, and all public sector organisations are required to record and update
this information on a regular basis to satisfy financial reporting requirements.

In reality an asset’s Depreciated Value is also a monetary reflection of the asset’s
condition. From an asset management perspective it is desirable for the cycle of
condition audits to be synchronised to occur immediately before the revaluation of
assets so as to provide the Valuer with more comprehensive information about the
asset’s current condition and thereby assist in improving the standard of the valuation.
As part of this process the Replacement Value of the asset should also be recorded.
Replacement Value is the current cost of replacing the existing asset with the most
appropriate and up-to-date replacement, on the assumption that the existing asset is
replaced by a new, improved asset of the same size and standard.
Essential knowledge of the gap between an asset’s Depreciated Value and the asset’s
Replacement Value greatly assists decisions about assets investment planning against
current asset expenses such as operating and maintenance costs.

Regular measurement of the asset’s Capacity and Utilisation, Budget and Actual
Expenditure, Possible and Actual Condition, as well as the asset’s Replacement Cost
and Depreciated Value, and conversion of this data into percentage terms, enables
asset owners and managers to examine the performance of an asset at any point in
time throughout its life cycle, as illustrated in Table 6.


Table 6. Measurements of Performance in Percentage Terms for Asset A

Measure      Reference                      Actual                          Percent
Utilisation  Capacity: 30,000               Actual Usage: 25,500            85%
Costs        Budget: $165,000               Actual Expenditure: $156,750    95%
Condition    Possible Condition: 5          Actual Condition: 3             60%
Value        Replacement Cost: $1,800,000   Depreciated Value: $1,170,000   65%
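The percentage figures in Table 6 follow directly from the raw measurements: each actual value is expressed as a percentage of its reference value. A minimal sketch:

```python
# Converting Table 6 measurements into percentage terms:
# actual value as a percentage of its reference value.
measurements = {
    "Utilisation": (25_500, 30_000),        # actual usage vs capacity
    "Costs":       (156_750, 165_000),      # actual expenditure vs budget
    "Condition":   (3, 5),                  # actual vs possible condition
    "Value":       (1_170_000, 1_800_000),  # depreciated value vs replacement cost
}
percentages = {name: round(100 * actual / reference)
               for name, (actual, reference) in measurements.items()}
print(percentages)
# {'Utilisation': 85, 'Costs': 95, 'Condition': 60, 'Value': 65}
```

Expressing all four factors on a common percentage scale is what makes them directly comparable on a single radar chart.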

The information shown in the above table can be produced in graphical form as a
radar chart, which more clearly demonstrates the relationships between the individual
performance measures, as illustrated in Figure 13.

Figure 13. Graph of Percentage Measurements of Performance for Asset A

Index of Performance:
By calculating the area inside the bound portion of the radar chart shown above, and
dividing it by 20,000, the maximum possible area of the chart (4 x ½ x 100 x 100 =
20,000), it is possible to designate a ‘Performance Index’ to the asset i.e.:
½ (85x60) + ½ (60x95) + ½ (95x65) + ½ (65x85) = 11,250; 11,250 ÷ 20,000 = 0.5625
This Performance Index for the four performance factors can now be represented as a
percentage value i.e.: Performance Index = 56.25%.
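The calculation above generalises into a small function: the index is the area enclosed by the radar plot, normalised by the maximum possible area (4 x ½ x 100 x 100 = 20,000). The factor ordering below follows the chart layout used in the text.

```python
# Performance Index as the normalised radar-chart area for the four
# percentage factors, in the order Utilisation, Condition, Costs, Value.

def performance_index(utilisation, condition, costs, value):
    """Area of the radar polygon divided by the maximum area (20,000)."""
    corners = [utilisation, condition, costs, value]
    # Each adjacent pair of axes bounds a right triangle of area 1/2 * a * b.
    area = sum(0.5 * corners[i] * corners[(i + 1) % 4] for i in range(4))
    return area / 20_000

print(performance_index(85, 60, 95, 65))  # 0.5625
```

Note the index depends on the order of the axes around the chart, so the same ordering must be used for every asset being compared.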
Performance Benchmarks:
Once the data for all assets in a portfolio has been collected and arranged into this
format, comparisons can be made with other assets and against individual assets, as
well as against the portfolio as a whole, and can be charted and monitored over time.


[Radar chart: comparative measurement of asset performance in percentage terms
(Utilisation, Condition, Costs, Value) for Assets A to F]

Figure 14. Multiple Radar Charts for a Portfolio of Assets

In larger portfolios of assets, the bound areas of the radar charts allow the assets to
be compared visually within a single chart. The best and worst performing assets
become readily apparent, as shown in the multiple radar charts of Figure 14
(Asset C is the best performing asset, while Asset F is the worst).

Analysis of this information can be used to determine relative priorities about the
assets’ usage, operation, maintenance, refurbishment, redevelopment or disposal.
This is done by combining Utilisation and Condition data as shown in Table 7.
Calculating the Performance Index for each asset from the four factors gives:
Asset A = [½ (85x60) + ½ (60x95) + ½ (95x65) + ½ (65x85)]/20,000 = 0.5625
Asset B = [½ (80x60) + ½ (60x80) + ½ (80x57) + ½ (57x80)]/20,000 = 0.4680
Asset C = [½ (84x100) + ½ (100x94) + ½ (94x89) + ½ (89x84)]/20,000 = 0.8411
Asset D = [½ (93x80) + ½ (80x92) + ½ (92x44) + ½ (44x93)]/20,000 = 0.5735
etc.
Table 7. Determining Priorities with Asset Utilisation and Condition Data

Asset      Utilisation %   Condition %   Costs %   Value %   Performance Index
Asset A    85              60            95        65        56.25%
Asset B    80              60            80        57        46.80%
Asset C    84              100           94        89        84.11%
Asset D    93              80            92        44        57.35%
Asset E    63              60            92        48        41.85%
Asset F    43              30            87        36        21.45%
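Applying the same index across the portfolio in Table 7 reproduces the ranking described in the text (Asset C best, Asset F worst). This sketch assumes the same factor ordering as the single-asset example.

```python
# Ranking a portfolio by Performance Index, using the Table 7 data.
portfolio = {  # (utilisation, condition, costs, value) in percent
    "Asset A": (85, 60, 95, 65),
    "Asset B": (80, 60, 80, 57),
    "Asset C": (84, 100, 94, 89),
    "Asset D": (93, 80, 92, 44),
    "Asset E": (63, 60, 92, 48),
    "Asset F": (43, 30, 87, 36),
}

def performance_index(factors):
    """Normalised radar-chart area for a 4-tuple of percentage factors."""
    return sum(0.5 * factors[i] * factors[(i + 1) % 4] for i in range(4)) / 20_000

ranked = sorted(portfolio, key=lambda a: performance_index(portfolio[a]),
                reverse=True)
print(ranked[0], ranked[-1])  # Asset C Asset F
```

A ranked list of this kind provides a starting point for the usage, maintenance, refurbishment and disposal priority decisions discussed above.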


Linking Asset Performance and Service Delivery

Asset Service Cost:


Fundamentally it is the cost of operating and maintaining assets that has the second
largest impact on an agency’s capacity to deliver its services (the largest is the cost of
personnel). There is a direct link between the cost of operating and maintaining an
asset and the asset’s level of utilisation. The combination of the data for determining
the performance measures detailed in the previous section enables the asset
manager to calculate the cost per unit of service, or Asset Service Cost, that is
attributable to the asset’s operation and maintenance costs and the asset’s utilisation.
This measure should be a critical consideration in all aspects of an agency’s business
planning, and included in its Asset Management Plan. An example of Asset Service
Cost for Asset A is shown in the following Table 8 and graphically illustrated in
Figure 15.

Table 8. Asset Service Cost as a Ratio of Annual Costs and Utilisation

Asset A                  Year 1     Year 2     Year 3     Year 4     Year 5
Annual Asset Costs       $300,000   $350,000   $380,000   $420,000   $450,000
Utilisation (Services)   1,500      1,800      2,150      2,200      2,300
Cost / Service           $200       $194       $176       $191       $196
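The Asset Service Cost figures in Table 8 are simply the annual asset costs divided by the services delivered. The sketch below reproduces them; note that conventional rounding gives $177 for Year 3, where the table shows $176 (presumably truncated).

```python
# Asset Service Cost for Asset A: annual cost divided by utilisation
# (services delivered), per Table 8.
annual_costs = [300_000, 350_000, 380_000, 420_000, 450_000]
services = [1_500, 1_800, 2_150, 2_200, 2_300]

cost_per_service = [round(cost / n) for cost, n in zip(annual_costs, services)]
print(cost_per_service)  # [200, 194, 177, 191, 196]
```

The same calculation applied to each asset in a portfolio yields the per-asset comparison of Table 9.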


Figure 15. Asset Service Cost as a Ratio of Annual Costs and Utilisation

Once the Asset Service Cost has been determined as a ratio of the asset’s annual costs
and its utilisation, these Service Costs can be compared between assets as a basis for
determining which is the most efficient asset within a portfolio. This is shown in the
following Table 9 and illustrated in Figure 16.


Table 9. Assets Service Costs as a Ratio of Annual Costs and Utilisation

Cost per Service   Year 1   Year 2   Year 3   Year 4   Year 5
Asset A            $200     $194     $176     $191     $196
Asset B            $175     $160     $155     $170     $180


Figure 16. Assets Service Costs as a Ratio of Annual Costs and Utilisation

The data about the costs of operating and maintaining assets should be integrated with
the other costs of service delivery (for example, human resources, information
technology and other overheads) to determine the aggregate cost per service.

Assets Performance Measurement Best Practice


Key Ingredients to Implementing a Performance Measurement System
(ILGARD, 2003):
• A shared belief that performance evaluation information is useful in improving
asset management programs, and that all partners/stakeholders are willing to
provide feedback.
• Assets performance measurement is an essential activity in the organisation.
• Management is fully supportive of objective performance measurement.
• Training in assets performance measurement to build confidence and skills.
• Stakeholders are included in the planning and design of assets performance
measurement system, ensuring it is practical, relevant, and useful.
• Willingness of management to act on assets performance measurement results
and demonstrate its commitment in assets planning and budgeting.
• Stakeholders who review the results, discuss the implications, and use assets
performance measurement information for asset management program
planning and improvement.


Understanding the Nature of Assets Performance Measurement:


Assets performance measurement is the ongoing monitoring and reporting of asset
management program accomplishments, particularly in regard to progress towards
pre-established service delivery goals and objectives and assets Level of Service. The
concept of service delivery is the foundation of asset management, particularly
infrastructure asset management. The two performance criteria related to the concept
of service delivery are Level of Service (LOS) and Standard of Service (SOS). The
Level of Service (LOS) is an indicator of the extent of service provided by an asset,
based on and related to the operational and physical characteristics of the asset, and
indicates the capacity of an asset. The Standard of Service (SOS) states how an asset
will perform, including a minimum condition in line with the impact of asset failure.

An Acceptable Level of Service is the minimum standard adopted for a service level.
Assets performance measures may address the type or level of asset management
program activities conducted (activities), the direct products and services delivered by
the program (outputs), and/or the results of those products and services (outcomes).
The asset management program may include any activity, project, function, or policy
that has an identifiable purpose or set of objectives related to an organisation’s assets.
Assets performance measurement focuses on whether an asset management program
has achieved its objectives, expressed as measurable performance standards. Because of
its ongoing and cyclic nature, performance measurement serves as a closing link to
assets strategic planning whereby goals and performance measures are kept on target.

Performance measurement information is used for a variety of internal and external
purposes such as;
¾ contributing to improved organisational performance;
¾ clarifying assets performance expectations;
¾ communicating the organisation’s goals and results;
¾ reducing uncertainty in assets performance measurement;
¾ providing a decision-making mechanism when information is lacking;
¾ apportioning asset resources in an outcome-based manner;
¾ transferring risk in outcome-based asset management contracts;
¾ fostering accountability for assets performance.

The Asset Performance Outputs / Outcomes Link


Performance outputs are narrower in scope than performance outcomes. Performance
can be gauged by examining service outputs, in terms of efficiency based on
quantifiable attributes of performance criteria such as Maintenance Intervention
Parameters (MIPs) and effectiveness based on quantifiable attributes of performance
criteria such as Asset Condition Profiles (ACPs), in the case of infrastructure assets.
Performance outputs can also refer to specific impacts that assets service providers
have on their customers. The performance of service providers is gauged by
examining outputs in terms of how efficiently and how effectively they are able to
satisfy customer needs, and to what extent this can be associated with the attainment
of desired outcomes for the community in the case of public sector service providers.

© 2008 CIEAM
AAMCoG Assets Performance Measure Page | 103

Performance outcome defines the benefits that should be delivered as a consequence
of the service. This will usually take the form of the Level of Service required, for
example an asset’s level of safety or reliability. Outcomes are also sought-after
improvements in the general conditions of a community’s well-being in the case of
public sector service providers. They refer to the broad impact of all service delivery.
Community-wide indicators are the proxy measures of the attainment over time of
community service outcomes, and can include statistics related to economic
prosperity, health, education, safety, etc.

Stakeholders of Asset Performance Measures:


Stakeholders in the total asset management process include asset owners, asset
managers, clients, customers, consumers, users, etc. Stakeholders will include not
only the intended beneficiaries of the assets service strategies, but also the users of the
performance measurement information such as decision makers, managers,
consumers, public, etc. The combination of an asset performance outputs / outcomes
link with activities, strategies, and services related to stakeholders, which in turn is
linked to performance input resources needed to achieve the desired outcomes,
constitutes the first important distinction in asset performance outputs / outcomes.
Measuring outputs (i.e. performance measures) includes quantitative and qualitative
measures of effort and measures of effect.

These outputs relate to four sets of interrelated performance measures:


• Fiscal Performance Measures:
Fiscal performance measures include the management of monetary resources
(inputs) required by the service provider to deliver services. This can include a
comparison of the actual resources used with the contracted amount.
• Service Delivery Performance:
Service delivery performance includes the assets service provider’s output of
services to the targeted customers. This can include a comparison of the actual
service delivery output to the contracted amounts. There are two types of
delivery output measures:
¾ the quantity of service delivered;
¾ the quality of service delivered.
• Level of Service Performance:
Level of Service performance can be gauged by examining service outputs, in
terms of efficiency based on quantifiable performance criteria such as
Maintenance Intervention Parameters, and effectiveness based on quantifiable
performance criteria such as Asset Condition Profiles.
• Output-Effectiveness Performance:
Output-effectiveness performance includes the effects the service delivery has on
the customers. This can include a comparison of the actual impact on the
clients served to the contracted impacts. There are two types of output
effectiveness measures:
¾ the quantity of output-effectiveness;
¾ the quality of output-effectiveness.
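Each of these measure sets turns on comparing an actual amount against a contracted amount. A minimal sketch of that comparison is given below; the function, field names, and figures are illustrative only and are not drawn from any cited framework.

```python
# Minimal sketch of the actual-versus-contracted comparison that recurs in
# the fiscal, service delivery and output-effectiveness measures above.
# All names and figures are illustrative only.

def variance(actual, contracted):
    """Return the absolute and proportional variance of actual vs contracted."""
    diff = actual - contracted
    return diff, diff / contracted

# Fiscal: monetary inputs consumed versus the contracted amount ($'000)
fiscal_diff, fiscal_pct = variance(actual=1_050, contracted=1_000)  # +50, +5%

# Service delivery: quantity of service delivered versus contracted quantity
qty_diff, qty_pct = variance(actual=9_500, contracted=10_000)       # -500, -5%
```

The same calculation applies to the quality measures once quality is scored on a numeric scale.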


Best Practice in Asset Management Performance Measurement


Defining Asset Management Performance in the Public Sector:
Generally, performance in public sector asset management means adequately
providing or enabling services to the community such as buildings, roads,
transportation, clean water supplies, waste disposal, and a variety of other services
that support the community’s economic and social activities; all within a safe and
healthy environment for a sustainable quality of life. Asset management in the public
sector is a means to sustainable community development through the use and care of
public assets; and the efficiency, effectiveness, and reliability of its contribution to
society and the community at large must ultimately be the measures of performance.
Judging whether performance is good or bad, adequate or inadequate, depends on the
community's needs and objectives. These objectives are generally set locally but
include elements of State and Federal Government requirements and statutory
obligations, as well as widely accepted standards of practice. Asset performance must
ultimately be assessed by the people who build, own, operate, use, or benefit from
physical assets, whether they be industry or infrastructure assets.

Benchmarks or measures of asset performance are needed that can generally be
applied to all aspects of performance of any one type of asset, to give decision makers
a broad basis for judgment about the performance of physical assets. Judgments about
the adequacy of performance are typically made in a public setting. Many individuals
and institutional and corporate entities make up the community that has a stake in
asset performance, especially infrastructure asset performance, and each stakeholder’s
view must be considered in assessing that performance. These stakeholders include
providers of services who are exposed to output-outcome performance measurement.

The asset management performance measurement process must ensure broad
participation in making the judgment and determining the bases on which judgment is
to be made. In short, assets performance is defined by the users or community, and its
measurement must start with the needs of the users or community. This community,
however, is inevitably broad and diverse, including national and regional as well as
local perspectives. Because of these many facets of assets performance and its
assessment, no single performance measure has been devised, nor is it likely that one
can be, except for an attempt at some universal cost performance measure. Asset
management performance measurement must be multi-dimensional, reflecting the full
range of objectives set for the assets. The degree to which physical assets provide
services that the users/community expect may be defined as a function of efficiency,
effectiveness, reliability, and cost. Physical assets that reliably meet or exceed
user/community expectations, at an acceptably low cost, usually perform well. The
three principal dimensions of performance (efficiency, effectiveness and reliability) are each, in turn, complex and typically
require several measures to indicate how well assets are performing. The challenge
decision makers face in seeking to develop and manage assets for high performance is
one of applying money, land, infrastructure, energy, and other resources to deliver
efficient, effective and reliable services to the broad community. These resources are
used in planning, designing, construction, manufacturing, operating, maintaining, and
sometimes demolishing or disposing of assets; monitoring and regulating the safety
and environmental consequences of these activities; and mitigating adverse risks
associated with the assets’ use.


Selecting asset management performance measures, and measuring assets
performance, are central tasks in an organisation’s performance assessment and may
sometimes involve a substantial amount of senior management planning. Specific
performance measures may have general use, but should always be appropriate for the
particular situation of assets usage being assessed. Specific organisational values have
overarching influences on the selection of asset management performance measures,
and on all other steps in the organisation’s performance assessment process. This
performance assessment process will yield a multi-dimensional set of specific
measurements (qualitative as well as quantitative) of key performance indicators.

Asset Management Performance Measurement in the Public Sector:


The Department for Victorian Communities, Local Government Victoria (LGV), with
the local government peak bodies, the Municipal Association of Victoria (MAV) and
Local Government Professionals (LGPro), coordinated a program of initiatives to
assist councils with their asset management responsibilities and develop an Asset
Management Plan. A major use for Asset Management Plans is to communicate
information about assets and actions required to provide a defined Level of Service.
Defining Level of Service:
An Asset Management Plan should define the Level of Service required of the asset.
Service levels are defined in the International Infrastructure Management Manual
(IPWEA, 2002) as ‘…defined service quality for an activity or service area against
which service performance may be measured’. Service levels offered should be
determined through community and/or customer consultation consistent with the
council’s ‘Best Value’ program. Service levels relate to, for example, quality,
quantity, safety, capacity, fitness for purpose, aesthetics, reliability,
responsiveness, environmental acceptability and costs.
The impact of changes in demand over time on the service level offered should be
regularly established and accounted for to provide a clear understanding of cost
implications across the whole life cycle of a higher or lower service quality.
Expressing services and quantifying different Levels of Service in user terms
helps to examine the range of service levels. The range of service levels
provides a measure of the different levels of ‘outcomes’ that can be provided,
recognising budgetary constraints.
Defining the time frame:
The Asset Management Plan generally requires three planning horizons, as identified
in the Council Planning Framework depicted in Figure 17, i.e. 20+ years for forecasts,
4+ years tied to the council plan, and annual planning tied to the council budget.
Describing the asset adequately:
The Asset Management Plan is to include data and information on:
¾ physical identification – quantity, location, construction materials, year
built (or estimated to the closest five (5) year time block), condition, capacity,
performance, estimate of remaining life;
¾ financial information – original cost (if known), renewal cost,
written-down current replacement cost (see also section 4.5);
¾ performance information – Levels of Service and assets performance.


Figure 17: Council Strategic Planning Framework (LGVC, 2004)

Incorporating strategies for the management of risk:


Every council is exposed to considerable political, managerial and financial risks due
to its scale of investment in infrastructure assets. The type of risk events that might
impact on assets include:
¾ natural events, for example fires;
¾ external impacts, for example, power supply failures;
¾ operational and physical failure risks.
A council or agency is better able to manage these risks, sustain development, and
obtain better value for money in the delivery of services to the community by
applying a strategic approach to asset management. The Asset Management Plan
should incorporate strategies for the management of risk associated with the assets. The
strategies should be consistent with the overall risk policy.

The Australian Standard for Risk Management (AS/NZS 4360:1999) is a useful guide.
Recognising changes in service potential of assets:
The Asset Management Plan should include information about likely changes to
service potential. Service potential describes the output or service capacity of an asset.
Decline in service potential is usually a function of usage or time.

Service potential can change through the following factors:


¾ changes in the service level to be provided;
¾ the impact of technical or commercial obsolescence;
¾ the maintenance given to the asset;
¾ improvements in the technology applied to maintain the asset.


Including financial information:


The Asset Management Plan should include financial estimates and cash-flow
forecasts in relation to the assets for at least the ensuing 20 years (preferably longer).
The forecasts should include all life cycle costs and cover both maintenance and
capital expenditures. Assumptions underlying the financial forecasts are to be made
explicit and the degree of confidence placed in them should be made transparent.
These forecasts assist the preparation of the annual budget and 4+ years planning.
Where financial information about critical assets is subject to uncertainty, sensitivity
analysis should be undertaken. Estimated costs must:
¾ provide clear links to the council plan;
¾ be based on known and provable unit asset costs;
¾ be logically and clearly compiled, with clear audit trails;
¾ be updated regularly;
¾ be recorded in present day (real) costs and cash flows;
¾ use real discount rates consistent with investment analysis guidelines;
¾ be assimilated into financial recording systems.

Setting assumptions and confidence levels:


The Asset Management Plan should:
¾ list all assumptions and provisos under which it is prepared;
¾ indicate the degree of confidence of the reliability of the data underpinning
the information presented, for example, accuracy of asset inventory,
accuracy of data on condition of assets, accuracy of asset performance data,
and demand/growth forecasts;
¾ confirm the estimates for remaining useful lives of assets;
¾ on the basis of the preceding assumptions and confidence of underlying
data, provide a level of precision, or confidence, on the forecasts of
renewal and maintenance expenditure for the asset.
Outlining an improvement program:
The Asset Management Plan should outline options and recommendations for
necessary actions to improve procedures, systems, data, and performance by
considering the following:
¾ what are the strong areas;
¾ what are the weak areas;
¾ what is being done well;
¾ what are the improvement and sustainability targets;
¾ the actions needed to address the ‘gaps’;
¾ the timeframe over which the improvements are required to take place;
¾ the resources (technical, human and financial) needed;
¾ the contingent plan of action for critical and essential priorities if resource
shortfalls occur.
The improvement program should be consistent with a continuous improvement
program, required by ‘best practice’.


Best Practice Performance Measures for Financial Sustainability


Three examples of State local government authorities (LGAs) that have published
reports on the sustainability of local government services are cited (NAMS, 2007).

South Australia Performance Measures for Financial Sustainability:


The Report of the Financial Sustainability Review Board recommends the adoption
of a standard set of key financial indicators for use in assessing a council’s financial
sustainability (NAMS, 2007 op. cit. Financial Sustainability Review Board, 2005).
(i) The net financial liabilities measure of a council’s financial position, as the key
indicator of the council’s indebtedness to other sectors of the economy;
(ii) The operating surplus/(deficit) measure of a council’s annual operating financial
performance, as the key indicator of the intergenerational equity of the funding of the
council’s operations;
(iii) The net outlays on the renewal or replacement of existing assets measure of a
council’s annual capital financial performance, as a key indicator of the
intergenerational equity of the funding of the council’s infrastructure or replacement
activities; and
(iv) The net borrowing/(lending) measure (or what councils term the ‘overall funding
surplus/(deficit)’ measure) of a council’s overall annual fiscal performance, as the
key indicator of the impact of the council’s annual transactions – both operating and
capital – upon the council’s indebtedness to other sectors of the economy.

A uniform and consistent reporting format was developed by the SA LGA
together with the Office for State/Local Government Relations for councils, which
provides a high level summary of both operating and capital investment activities.
The format is shown in Table 10 and incorporates all of the key financial indicators
recommended by the Financial Sustainability Review Board. It is intended that annual
budgets, reports on financial outcomes and long-term financial plans be summarised
on the same basis. The format was endorsed at a meeting of all councils and will
facilitate meaningful comparisons of each council’s finances.

Financial Management KPIs:


A standard set of four KPIs is recommended by the SA Financial Sustainability
Review Board for use in assessing performance of a council’s financial sustainability
(NAMS, 2007 op. cit. Financial Sustainability Review Board, 2005, Vol 2, 2.2.1 p 18):
(i) Net financial liabilities,
(ii) Operating surplus/(deficit),
(iii) Net outlays on the renewal or replacement of existing assets,
(iv) Net borrowing/(lending).

Where Net Financial Liabilities equal Total Liabilities less Financial Assets
(excluding equity-type assets). The amount of Net Lending in any one year decreases
the level of Net Financial Liabilities in the year by that amount. Conversely, the
amount of Net Borrowing increases the level of Net Financial Liabilities.
Net Lending / (Borrowing) equals Operating Surplus / (Deficit) before Capital, less
Net Outlays on Existing Assets, less Net Outlays on New and Upgraded Assets.
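These definitions reduce to simple arithmetic, sketched below. The figures are hypothetical ($'000) and the function names are our own, not part of the SA reporting format.

```python
# Illustrative sketch of the SA financial sustainability indicators.
# All figures are hypothetical ($'000); the names are illustrative only.

def net_financial_liabilities(total_liabilities, financial_assets):
    """Net Financial Liabilities = Total Liabilities less Financial Assets
    (excluding equity-type assets)."""
    return total_liabilities - financial_assets

def net_lending(operating_surplus_before_capital,
                net_outlays_existing_assets,
                net_outlays_new_upgraded_assets):
    """Net Lending/(Borrowing) = Operating Surplus/(Deficit) before Capital,
    less Net Outlays on Existing Assets,
    less Net Outlays on New and Upgraded Assets."""
    return (operating_surplus_before_capital
            - net_outlays_existing_assets
            - net_outlays_new_upgraded_assets)

nfl_opening = net_financial_liabilities(total_liabilities=12_000,
                                        financial_assets=3_000)    # 9_000
lending = net_lending(operating_surplus_before_capital=1_500,
                      net_outlays_existing_assets=1_200,
                      net_outlays_new_upgraded_assets=800)         # -500 (borrowing)

# Net Lending decreases Net Financial Liabilities; Net Borrowing increases them.
nfl_closing = nfl_opening - lending                                # 9_500
```

In this hypothetical year the council is a net borrower of 500, so its Net Financial Liabilities rise from 9,000 to 9,500, exactly as the relationship in the text describes.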


Table 10. SA LGA Reporting Format for Financial Sustainability: Uniform Presentation of Finances (NAMS, 2007)

New South Wales Performance Measures for Financial Sustainability:


The NSW Local Government Inquiry Final Report, in its discussion of councils’
financial KPIs and benchmarks, states that councils’ financial reports should be
accompanied by disclosure of key financial performance indicators to indicate the
state of both (NAMS, 2007):
• A council’s financial position, which involves the state of its balance sheet, and
so the relative levels – and composition – of its assets and liabilities.
• A council’s annual financial performance, which involves the state of its annual
operating statement, and especially the size of relevant annual surpluses or
deficits.

The financial KPIs should have a strong predictive relationship with the degree to
which a council’s finances are likely to be sustainable in the long term. Examples of
KPIs, with details for KPIs relating to asset management, are shown in Table 11.


Table 11. NSW LGA Reporting Format for Financial Sustainability: Key Analytical Balances for Financial Key Performance Indicators (NAMS, 2007 Source: LGI, 2006, Table 11.2, p 272)

The report suggests that councils set target values and minimum and maximum values
based on the following broad principles (NAMS, 2007 op. cit. LGI, 2006, pp 272-3).
• “A council’s financial position is in a healthy state if its net financial liabilities
(and associated debt) are at levels where the resultant net interest expense can be
met from a council’s annual income (by ratepayers) at the existing rating effort.
• A council’s general government operating financial performance is appropriate
if it is running a modest operating surplus before capital revenues indicating that
costs incurred in the year in question (including both routine maintenance and
annual depreciation of physical assets) are at least being met by today’s
ratepayers and not being transferred to tomorrow’s ratepayers, with rates
revenues more than sufficient to finance current operations.
• Where an operating deficit persists, rates revenues are insufficient to finance
current operations, and liabilities must be incurred or financial assets liquidated
in order to finance those operations.
• The operating financial performance of a council’s commercial entities is
appropriate if its earnings before interest and taxes are around the weighted
average cost of capital. (EBIT = operating surplus before net interest expenses
and taxes and dividends paid)
• A council’s capital performance is appropriate if its capital expenditure on the
renewal or replacement of non-financial assets broadly matches the cash flows
generated to cover annual depreciation expense.
• A council’s overall (i.e. capital and operating) financial performance is
satisfactory if its annual net borrowings as a proportion of capital expenditure on
new (growth) non-financial assets does not put any long-term pressure on
achievement of the council’s target net debt or net financial liability ratios.”


Chapter 11.4 of the report covers councils’ financial situation and financial
performance. Commentary for financial situation criteria is shown in Table 12.

Table 12. NSW LGA Reporting Format for Financial Sustainability: Financial Situation Criteria for Financial Position and Financial Performance (NAMS, 2007 Source: LGI, 2006, p 276)

The NSW LGI Report proposes financial key performance indicators and a range of
council targets. These KPIs include three of the KPIs recommended by the SA Financial
Sustainability Review Board ((i), (ii) and (iv)) and exclude (iii), net outlays on the
renewal or replacement of existing assets. The report also includes proposed targets
and upper and lower limits for financial benchmarks. Key Analytical balances for
Financial Key Performance Indicators are indicated in Table 13.

Table 13. NSW LGA Reporting Format for Financial Sustainability: Key Analytical Balances for Financial Key Performance Indicators (NAMS, 2007 Source: LGI, 2006, Table 11.3, pp 273-4)


Chapter 11.5 of the report makes recommendations for councils’ financial outlook,
suggesting that each council should develop and annually update 10-year financial
plans. A council should be able to provide answers to the following questions
(NAMS, 2007 op. cit. LGI, 2006, pp 283-4):
• Does the council have the long term ability to finance its statutory and
accountability obligations to the community and to fund its future activities?
• Can the community be convinced to accept a lower level of service if the
council’s future financing requirements will outstrip future financial capacity?
• Does the council currently have the financial capacity to sustain its
infrastructure?

The report further proposes a Council Financial Governance Statement Framework,
for which key reporting items are summarised in Table 14.

Table 14. NSW LGA Reporting Format for Financial Sustainability: Council Financial Governance Statement Framework (NAMS, 2007 Source: LGI, 2006, Appendix B, Table 1, pp 337-8)

Western Australia Performance Measures for Financial Sustainability:


The Systemic Sustainability Study Report of the Independent Systemic
Sustainability Study Panel (2006) includes a template for Financial Sustainability
Self-Assessment, developed by Access Economics for individual councils to assess
their own status. The template is reproduced as Table 15. The report further
comments on the results of use of the template (NAMS, 2007 op. cit. p 52):
“If the resultant backlog- and revenue-adjusted operating surplus/deficit ratio is
negative, and the negative value is greater than 10%, then substantial or disruptive
revenue (or expenditure) adjustments would seem inevitable – based upon
continuation of current policies – if the council’s long-term finances are to be put onto
a sustainable basis going forward.”
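The threshold test quoted above can be sketched as follows. The actual Access Economics template (Table 15) defines the backlog and revenue adjustments in detail; the inputs and names here are simplified, hypothetical stand-ins.

```python
# Simplified sketch of the WA self-assessment threshold test quoted above.
# The real template defines the backlog and revenue adjustments precisely;
# these inputs are hypothetical stand-ins ($'000).

def adjusted_operating_ratio(operating_surplus, backlog_adjustment,
                             revenue_adjustment, operating_revenue):
    """Backlog- and revenue-adjusted operating surplus/(deficit) expressed
    as a proportion of operating revenue."""
    adjusted = operating_surplus - backlog_adjustment - revenue_adjustment
    return adjusted / operating_revenue

def needs_major_adjustment(ratio, threshold=-0.10):
    """Per the report: a negative ratio worse than -10% suggests substantial
    or disruptive revenue (or expenditure) adjustments are inevitable."""
    return ratio < threshold

ratio = adjusted_operating_ratio(operating_surplus=200,
                                 backlog_adjustment=900,
                                 revenue_adjustment=300,
                                 operating_revenue=8_000)
flag = needs_major_adjustment(ratio)   # ratio = -0.125, so True
```

Here the adjusted deficit is 12.5% of operating revenue, worse than the 10% threshold, so the sketch flags the hypothetical council for substantial adjustment.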


Table 15. WA LGA Reporting Format for Financial Sustainability: Financial Sustainability Self-Assessment Template (NAMS, 2007, Source: WA SSS Report, Appendix 3, p 51)

Other Financial Sustainability Performance Measures:


Renewal Annuities:
A renewal annuity is a stream of expenditures required over a period of time for
renewal of assets to meet agreed service levels. The DVC Asset Management
Performance Measures maintenance and renewal expenditure projections, and the
MAV STEP Renewal Gap projections, are examples of renewal annuities. Several
KPIs can be developed from a renewal annuity, such as (NAMS, 2007 op. cit. DVC, 2004):
• Total renewal funds required over 10 or 20 years.
• Net Present Value of funds required for renewal over 10 or 20 years.
• Gap between total funds required over 10 or 20 years and available funds.
• Net present value of gap between total funds required for renewal over 10 or 20
years and available funds.
• Ratio of total funds required for renewal over 10 or 20 years to available funds.
• Ratio of net present value of total funds required for renewal over 10 or 20 years
to net present value of available funds.
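A minimal sketch of these KPIs, assuming a hypothetical 10-year stream of required renewal expenditure and available funds ($'000), and an assumed real discount rate; none of the figures are drawn from any council.

```python
# Sketch of the renewal-annuity KPIs listed above, with hypothetical figures.

def npv(cash_flows, rate):
    """Net present value of an annual stream, discounted at a real rate,
    with the first year's amount discounted one period."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

required = [500, 520, 540, 560, 580, 600, 620, 640, 660, 680]   # renewal need
available = [450] * 10                                          # budgeted funds
rate = 0.04   # assumed real discount rate

total_required = sum(required)                     # 5_900
funding_gap = total_required - sum(available)      # 1_400
npv_gap = npv(required, rate) - npv(available, rate)
funding_ratio = total_required / sum(available)    # ~1.31
```

The same four quantities correspond directly to the first, third, fourth and fifth KPIs in the list above; the NPV variants simply discount each stream before comparing them.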


10. Assets Performance Measurement Reporting

Assets Performance Reporting
Reporting on asset management performance should be in accordance with policy
requirements (in the case of the public sector, with government service providers).
Performance reporting is an essential step in the management of asset and service
performance, because it provides information:
¾ on the performance of the assets and the asset management programs;
¾ on the physical, functional and financial performance of the asset;
¾ on the achievement of planned program objectives, targets and budgets;
¾ that allows asset level performance information to be evaluated at the asset
and service delivery level;
¾ for meeting statutory reporting requirements.
Internal Performance Reporting:
Internal performance reporting should:
¾ be in accordance with the service organisation’s reporting requirements;
¾ be structured according to the asset management information system;
¾ allow the asset manager to take timely action to improve the asset’s
performance, avoid potential difficulties and resolve problems.
The information in the following section on external reporting of program
performance is from the Australian National Audit Office publication: ‘Better Practice
Guide: Performance Information Principles’, (ANAO, 1996).
External Performance Reporting:
External performance reporting should:
¾ relate performance to objectives;
¾ make it clear whether or not service delivery strategies have met their
performance targets, and the reasons for any significant variations from the
expected performance;
¾ be balanced (i.e. cover effectiveness in terms of key objectives and
outputs, and cover performance in terms of efficiency and compliance);
¾ provide an honest coverage of successes and failures;
¾ explain the significance of the results reported, including comparisons (e.g.
over time, or against standards), with reference to external factors affecting
the results;
¾ draw on both quantitative and qualitative information and provide details
of trends, problems and progress against specific initiatives;
¾ be in accordance with statutory reporting requirements.


Annual Reporting:
It is essential that the annual reporting requirements for all public sector agencies
include a requirement to disclose full details of the current performance
of all assets in terms of utilisation, costs, condition and value. The reports should
include historical trends in each of these areas and projections for the next 10-20 years.

It is also essential that all agencies declare in their annual reports the standards that
have been set for the operation and maintenance of their assets as a basis for
determining if the level of service is satisfactory. The manager’s objectives should be
to ensure that assets are used to the optimal level, that the costs of operation are
contained within the available budget and that the asset is maintained to an
appropriate standard.


References
AASHTO (2005), ‘Performance Measures and Targets for Transportation Asset
Management’, NCHRP Project 20-60, AASHTO Transportation Asset Management
Guide, U.S.A.
ANAO (1996), ‘Better Practice Guide: Performance Information Principles’,
Australian National Audit Office, Canberra.

APCC (2001), ‘Asset Management’, public document of the Australian Procurement


and Construction Council Inc. (APCC), P.O Box 106 DEAKIN WEST, ACT 2600,
6/42 Geils Court DEAKIN, ACT 2600.
Battelle (2002), ‘Key Performance Indicators’, sub-study 5, eds. Fiksel J. Spitzley D.
and Brunetti T., commissioned by the World Business Council for Sustainable
Development.
BPAO (2006), ‘Asset Management Enterprise Process Improvement’, Bonneville
Power Administration, 905 NE 11th Ave. Portland, Oregon 97232 U.S.A.
DPI (1999), ‘Operating Environment’, Department of Primary Industries, Australia.
DPI (2000), ‘Corporate Plan 2000’, Department of Primary Industries, Australia.
DMRM (2004), ‘Key Performance Indicators Multiple Dimensions: The Power of
Metrics’, DM Review Magazine, October 2004, Data Management Review and
Source Media, Inc. Investcorp, ed. Bauer K. Performance Management Practice, GRT
Corporation, Stamford, CT U.S.A.
DVC, (2004), ‘Asset Management Initiatives for Local Government’, Status Report,
Department for Victorian Communities, Local Government Victoria, Melbourne.
FFCF (2006), ‘Facility Asset Management Doctrine: A Strategy for Making Better
Decisions at Lower Risk and Costs’, ed. Dempsey J.L. Federal Facilities Council’s
Forum Engineering, Construction, and Facilities Asset Management, Oct. 31, 2006.
FHADT (2007), ‘Performance Specifications Strategic Roadmap: A Vision for the
Future’, Report Chapter 2. Performance Specifications, U.S. Department of Transport,
Federal Highway Administration, U.S.A.
FHATRB (2000), ‘Asset Management Decision Support Design Model for
Management System Integration’, eds. Thompson P.D. and Pinkerton B., Colorado
Department of Transportation for the Transportation Research Board, U.S. Federal
Highway Administration, U.S.A.
Fiksel, J. (2000), ‘Measuring Sustainability in Eco-Design’, in ‘Sustainable Solutions:
Developing Products and Services for the Future’, eds. Charter M. and Tischner U.,
Greenleaf Publishing, Global Reporting Initiative (GRI), Sustainability Reporting
Guidelines, June, 2000.


Financial Sustainability Review Board, (2005), “Rising to the Challenge, Towards


Financially Sustainable Local Government in South Australia”, Volumes 1 and 2, ref.
Vol 2, Rec. 2.2(1), p18, August 2005, Local Government Association of South
Australia, Adelaide.
GAMS (1997), ‘Performance Management Implementation Guide’, Total
Management Plan (TMP) Performance Management Framework, Guidelines for
Implementing Total Management Planning, Queensland Government Asset
Management System (GAMS), Brisbane Queensland.
Greene H. (2007), ‘Key Performance Indicators Mix Performance and Results of the
Enterprise, Not the Business’, 21st Century Management Conventions and Standards,
Results Performance Management R-pM, Phayathai, Bangkok 10400 Thailand.
HACS (2003), ‘Developing Performance Specifications’, Consultation paper by the
Highways Agency Contracts and Specifications, Heron House, 49/53 Goldington
Road, Bedford MK40 3LL.
ICPPR (1998), ‘Asset Utilisation: A Metric for Focusing Reliability Efforts’, by
Richard Ellis, RE Group, paper for the Seventh International Conference on Process
Plant Reliability, Houston, Texas, October 25-30, 1998, organised by the Gulf
Publishing Company and Hydrocarbon Processing.
ILGARD (2003), ‘Performance Measurement’, ed. Longo P., Executive Master of
Public Administration Program, Institute for Local Government Administration and
Rural Development, Ohio, U.S.A.
Independent Systemic Sustainability Study Panel (2006), ‘In Your Hands, Shaping
the Future of Local Government in Western Australia’, Western Australian Local
Government Association, Perth.
IPWEA (2002), ‘International Infrastructure Management Manual’, Institute of Public
Works Engineering Australia, Australia/New Zealand Edition Version 2.0, October 2002.
Ittner C. D. and Larcker D. F. (2003), ‘Coming Up Short on Non-Financial
Performance Measurement’, Harvard Business Review (November): 88-95.
Kaplan R. and Norton D. P. (1992), ‘The Balanced Scorecard - Measures that Drive
Performance’, Harvard Business Review, January-February.
Kaplan R. and Norton D. P. (1996), ‘The Balanced Scorecard: Translating Strategy
into Action’, Harvard Business School Press, Boston, Massachusetts.
Kaplan R. and Norton D. P. (2000), ‘Having Trouble with your Strategy? Then Map
It’, Harvard Business Review, September-October.
Kaplan R. and Norton D. P. (2001), ‘The Strategy-Focused Organisation: How
Balanced Scorecard Companies Thrive in the New Business Environment’, Harvard
Business School Press, Boston, MA.
LGI, (2006), ‘Are Councils Sustainable?’, Final Report: Findings and
Recommendations, Independent Inquiry into the Financial Sustainability of New
South Wales Local Government.

LGVC (2004), ‘Guidelines for Developing an Asset Management Policy, Strategy and
Plan’, Local Government Victoria, Department for Victorian Communities, 1 Spring
Street, Melbourne Vic 3000, Australia.
Martinsons M, Davison R., and Tse D. (1999), ‘The Balanced Scorecard: A
Foundation for the Strategic Management of Information Systems’, Decision Support
Systems (25): 71-88.
NAMS, (2007), ‘Performance Measures’, Australian Infrastructure Financial
Management Guidelines, Position Paper 3, Version 6, 9/1/2007, National Asset
Management Strategy (NAMS) Committee and the National Local Government
Financial Management Forum, Melbourne, Australia.
NHMRC (1996), ‘Australian Drinking Water Guidelines’, National Health and
Medical Research Council; Agricultural and Resource Management Council of
Australia and New Zealand, Canberra.
Norreklit H. (2003), ‘The Balanced Scorecard: What is the Score? A Rhetorical
Analysis of the Balanced Scorecard’, Accounting, Organisations and Society 28(6): 591-619.
NRCTRB (1984), ‘Manual for the Selection of Optimal Levels of Service’, manual
developed by Woodward-Clyde Consultants for the National Research Council,
Transportation Research Board, Washington DC, U.S.A.
NZALGE (1998), ‘Infrastructure Asset Management Manual’, Edition 1.1,
Association of Local Government Engineers, Auckland, New Zealand.
NZEC (2007), ‘Asset Management Plan’, Top Energy, North Island, New Zealand.
OGC (2007), ‘Framework for a Specification’, Office of Government Commerce,
OGC Gateway Management Reviews, U.K.
OLILG (2003), ‘Developing Performance Management Systems’, adapted from the
Open Learning Institute Learner's Guide BSBHR503A, TAFE Queensland, Australia.
OSU (1998), ‘Principles of Performance Measurement’, white paper by Oregon State
University, U.S.A.
QGTMP, (2000), ‘Asset Evaluation and Renewal’, Guidelines for Implementing Total
Management Planning, Asset Management Implementation Guide, Queensland Gov.
Queensland Treasury, (2000a), ‘QGFMS Benefits Realisation Guidelines: Translating
Strategy into Performance Benefits and Initiatives’, The Office of Financial
Management Systems and Training, Queensland Government, Brisbane.
Queensland Treasury, (2000b), ‘Understanding Managing For Outcomes’, The Office
of Financial Management Systems and Training, Queensland Government, Brisbane.
SAM (2007), ‘The SAM Performance Management Model’, Strategic Asset
Management, Queensland State Government, Brisbane, Queensland.
Stapelberg R.F. (2004), ‘Effects of Long-Term Maintenance and Rehabilitation on
Whole Life Costs of Infrastructure Assets’, IQPC Conference paper, Sydney.

TNZ (1998), ‘Managing the Risk in a New Performance Based Environment’, white
paper by Owen M. Transit New Zealand, Wellington, New Zealand.
USNHI (1999), ‘Pavement Preventive Maintenance’, Participant’s Manual, eds.
Peshkin, D.G., Smith K.D., Geoffroy D., and Zimmerman K., National Highway
Institute, United States Department of Transportation, Federal Highway
Administration, Washington, D.C.
USNRC (1996), ‘Measuring and Improving Infrastructure Performance’, Committee
on Measuring and Improving Infrastructure Performance, National Research Council,
Federal Infrastructure Strategy (FIS), directed and administered by the U.S. Army
Corps of Engineers (USACE).
USTRB (1999), ‘Maintenance QA Program Implementation Manual’, eds. Stivers M.,
Smith K., Hoerner T., and Romine A., National Cooperative Highway Research
Program Report 422, Transportation Research Board, Washington, DC.
USTRB (2001), ‘The Role of Pavement Management in Comprehensive Asset
Management Contracts’, Paper No. 01-3478, eds. Zimmerman K.A., Wolters A.S.,
and Kallman H., Transportation Research Board, Washington, D.C.
Verfaillie H. A. and Bidwell R. (2000), ‘Measuring Eco-Efficiency: A Guide to
Reporting Company Performance’, World Business Council for Sustainable
Development (WBCSD).
VGDTF (2001), ‘Output Specification, Performance Measures, Output Groups -
BFMG 09’, Output Specifications Guidelines (1997), Department of Treasury and
Finance, State Government of Victoria.
Victorian Government (1995), ‘Asset Management Series’, Department of Treasury
and Finance, Victoria Government, Australia.
WBPC (2006), ‘Level of Service Development’, white paper by McCaw A., Western
Bay of Plenty District Council, New Zealand.
