
ITIL Process: Incident Management

REPORT NAME / BUSINESS USER / DESCRIPTION

OPEN AND CLOSED INCIDENTS BY CATEGORY
Business user: Service Management process managers, IT Management team
Description: This report enables the user to review, for a determined period, a breakdown of open and closed incidents by categories and by their associated areas.

OPEN AND CLOSED INCIDENTS BY SERVICE
Business user: Service Management process managers, IT Management team
Description: This report provides an overview of the number of reported incidents by service in a given time period.

BACKLOG OF INCIDENTS
Business user: Service Management process managers, IT Management team
Description: This report enables the user to review the number of incidents that are not closed in a given time period.

REOPENED INCIDENTS
Business user: Service Management process managers, IT Management team
Description: This report enables the user to review the percentage of reopened incidents by service in a given time period.

INCIDENTS CLOSED MEETING SLA TARGET
Business user: Service Management process managers, IT Management team
Description: This report enables the user to review the number of closed incidents that meet the SLA targets in a given time period, relative to the number of all closed incidents.

INCIDENT AGING REPORT
Business user: Service Management process managers, IT Management team
Description: This report enables the user to review the number of all closed incidents opened in the last 30 days by priority and by incident duration.

INCIDENT REASSIGNMENT ANALYSIS
Business user: Service Management process managers, IT Management team
Description: This report enables the user to review the count of incidents opened in the last 13 months (including the current month) by number of reassignments and by open date.

PERCENTAGE OF INCIDENTS BY PRIORITY
Business user: Service Management process managers, IT Management team
Description: This report enables the user to review the count of incidents submitted in the last 13 months (including the current month) by open date and by priority.

OPEN INCIDENTS MONTHLY ANALYSIS BY CATEGORY
Business user: Service Management process managers, IT Management team
Description: This report enables the user to review a breakdown of monthly opened incidents for a determined period by categories and by their associated areas.
Incident Analytics by Numerify360™
Numerify360™ for IT Incident Management Analytics enables you to assess your overall
Incident process health by analyzing the current backlog along with the trend of open and
resolved Incidents by parameters such as the Incident category, resolution group, location, and so
on.

The available dashboards for the Incident Management Analytics are as follows:

1) Opened Incidents Trend Dashboard:

The Opened Incidents Trend dashboard enables you to view the trend of opened incidents over a period of time based on status, assignment group, category, sub category, and severity.

Viewing Opened Incidents Trend Dashboard

You can use the following time slice tabs to view the Opened Incidents details in every section
of this dashboard:

 Days - displays the Opened Incidents trend based on data from the past 14 days.

 Weekly - displays the Opened Incidents trend based on data from the past 13 weeks (the current week plus the past 12 weeks).

 Monthly - displays the Opened Incidents trend based on data from the past 13 months (the current month plus the past 12 months).
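As a sketch of how these window starts might be computed (assuming weeks start on Monday and the monthly slice snaps to the first of the month; the product's exact boundary rules may differ):

```python
from datetime import date, timedelta

def time_slice_start(today: date, slice_tab: str) -> date:
    """First day included in a time-slice tab, per the definitions above."""
    if slice_tab == "days":
        # Past 14 days, counting today as the most recent day.
        return today - timedelta(days=13)
    if slice_tab == "weekly":
        # Current week plus the past 12 weeks (13 weeks total).
        week_start = today - timedelta(days=today.weekday())  # assumes Monday start
        return week_start - timedelta(weeks=12)
    if slice_tab == "monthly":
        # Current month plus the past 12 months (13 months total).
        year, month = today.year, today.month - 12
        if month < 1:
            month += 12
            year -= 1
        return date(year, month, 1)
    raise ValueError(f"unknown time slice: {slice_tab}")
```

For example, viewed on April 15, 2016, the Monthly slice would reach back to April 1, 2015.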

Sections in Opened Incidents Trend Dashboard

You can view the following sections in the Opened Incidents Trend dashboard:

 Opened Incidents Volume:

This section displays the trend of reopened and escalated Incidents as compared to the opened incidents volume.

View by Status

You can choose one of the following tabs to view the related Opened Incident details:
 Reopened: to view the Reopened Incidents trend compared to the Opened Incidents volume.

 Escalated: to view the Escalated Incidents trend compared to the Opened Incidents volume.

Viewing Opened Incidents Volume

This section displays the following details:

 KPIs – volume of opened incidents, reopened incidents, and escalated incidents.

 Incident Volume Trend Graph – a trend graph of the reopened or escalated incidents as compared to the volume of the opened incidents. In the graph, the horizontal axis represents the time slice and the vertical axis represents the volume of opened and reopened incidents.

Note: The horizontal axis contributors are based on your choice of the time slice tabs in the Opened Incidents Trend dashboard.

Metrics Used

Opened Incidents: The total number of incidents raised (opened) in a time frame.

MTTR: The elapsed time between Incident creation and resolution. The elapsed time is measured as the total time open.

Opened Incidents Escalated: The number of open Incidents with the Incident status Escalated. This is a configurable metric. Based on your business requirements, the Numerify Administrator can configure the initial setup to tailor the metric suitably.
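The MTTR metric above is a plain elapsed-time average. A minimal sketch (the `created_at` and `resolved_at` field names are illustrative, not the product's schema):

```python
from datetime import datetime

def mttr_hours(incidents) -> float:
    """Mean time to resolve: average elapsed time from Incident creation
    to resolution (total time open), over resolved incidents only."""
    durations = [
        (inc["resolved_at"] - inc["created_at"]).total_seconds() / 3600.0
        for inc in incidents
        if inc.get("resolved_at") is not None  # skip unresolved incidents
    ]
    return sum(durations) / len(durations) if durations else 0.0
```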

Opened Incidents Breakdown by Volume:

In this section, you can use the following dimension tabs to view the volume of opened incidents and reopened incidents based on the dimension of your choice (Assignment Group, Category, or Sub Category).

You can also use the Severity selector to view volume of the opened incidents and reopened
incidents based on their severity.

You can further restrict the report to show only the following:

 Top 10 or All - For example, you can select Assignment group to view opened incidents
breakdown based on the assignment groups and click All to view all the assignment
groups and not just the top 10.

 Reopened or Escalated - For example, you can click Reopened to view the breakdown
of reopened incidents in the chosen dimension.

Viewing Opened Incidents Breakdown by Volume Report

The Opened Incidents Breakdown by Volume report displays the opened incident summary based on severity. You can filter the incident summary based on any one of the Severity options (for example, Severity 1 - Critical, Severity 2 - High, Severity 3 - Moderate, Severity 4 - Normal, or UNKNOWN).

You can also use the Assignment Group, Category, and Subcategory options to filter the contents of the Opened Incidents Breakdown by Volume report.

The Opened Incidents Breakdown by Volume report is displayed in the form of the following
two graphs:

 Opened Incident Summary by Dimension - Displays the incident summary based on the selected Assignment Group, Category, or Sub Category. In the graph, the horizontal axis depicts the volume of opened incidents and the vertical axis depicts the contributors of the chosen dimension.

You can also filter contents of the graph using the following two selectors:

o Top 10 - displays the top 10 contributors from your chosen dimension.

o All - displays all the contributors from your chosen dimension.

 Incident Volume Trend - Displays the reopened or escalated incident volume trend compared to
the opened incidents. Content of this graph is driven by your choice of dimension contributors
(Assignment Group, Category, or Sub Category) in the Opened Incident Summary by Dimension
graph. In the graph, the horizontal axis depicts the time period and the vertical axis depicts
volume of the opened, reopened, and escalated incidents.
You can also filter contents of this graph using the following Status options:

o Reopened - To view trend of the reopened incidents compared to the opened incidents.

o Escalated - To view trend of the escalated incidents compared to the opened incidents.

Metrics Used

Opened Incidents: The total number of incidents raised (opened) in a time frame.

Closed Incidents: The total number of Incidents closed in a period of time. Closed Incidents mark the final status in the life cycle of an Incident.

Reopened Incidents: The sum of Incidents that were reopened after they were closed. This is a configurable metric. Based on your business requirements, the Numerify Administrator can configure the initial setup to tailor the metric suitably.

Opened Incidents Escalated: The number of open Incidents with the Incident status Escalated. This is a configurable metric. Based on your business requirements, the Numerify Administrator can configure the initial setup to tailor the metric suitably.
2) Opened Incidents Overview Dashboard:

The Opened Incidents Overview dashboard enables you to view the Opened Incidents volume based on their Groups, Categories, Distribution, Status, and Severity.

You can use the following tabs to decide the level of detail displayed on the Opened Incidents
Overview dashboard:

 Overview – To display the Opened Incidents volume based on Group, Category, and Distribution.

 Detail – To display the Opened Incidents volume based on Severity, Status, and Categorization.
You can view additional details such as Incident ID, Description, and so on for further analysis.

Select a Time Period

In addition to the Overview and Detail tabs, you have the following time period options:

 Yesterday

 Last 7 Days

 Last 30 Days

These options are relative to the Data Refreshed date and time that is displayed in the upper right
corner of the dashboard as Data Refreshed at Month DD, YYYY, hh:mm:ss AM/PM.

Data is extracted from ServiceNow at least once per day. Typically, a data extract occurs during
a time period that is close to midnight in the main time zone of your company. Under normal
circumstances, Yesterday corresponds to your normal expectation, which is the day that
precedes the day you are viewing the dashboard.

As an example, assume you are looking at this dashboard on April 15, 2016 at 9:00am EST and
your company’s main time zone is EST.

 If the Data Refreshed time is just prior to midnight between April 14 and April 15, Yesterday
includes data from April 14, 2016, 12:00:00 AM through the date and time displayed as the Data
Refreshed date; in other words, it will be somewhat less than a full 24 hours. Any data from
ServiceNow that is more recent than the Data Refreshed date will not be included in the
dashboard for Yesterday even if it is from April 14.

 If the Data Refreshed time is midnight between April 14 and April 15 or later, Yesterday includes data from April 14, 2016, 12:00:00 AM through April 14, 2016, 11:59:59 PM; in other words, a full 24 hours' worth of data.

Under normal circumstances, Last 7 Days corresponds to Yesterday plus the 6 days preceding
Yesterday, and Last 30 Days corresponds to Yesterday plus the 29 days preceding Yesterday.

Although rare, the data extract may not successfully complete. In this case, the above definitions
will continue to be relative to the Data Refreshed date and time, which may not correspond to
your normal expectation of Yesterday and Last X Days.
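One way to read the rules above in code (an illustrative interpretation only; the product computes these windows internally): round the Data Refreshed timestamp to the nearest midnight to find the day boundary the extract targets, take the day before it as Yesterday, and clip the window at the refresh time.

```python
from datetime import datetime, time, timedelta

def yesterday_window(refreshed: datetime):
    """Return the (start, end) datetimes covered by 'Yesterday',
    relative to the Data Refreshed timestamp."""
    # Round the refresh time to the nearest midnight: an extract just
    # before midnight targets the same boundary as one just after it.
    boundary = datetime.combine(refreshed.date(), time.min)
    if refreshed - boundary > timedelta(hours=12):
        boundary += timedelta(days=1)
    day_start = boundary - timedelta(days=1)
    # Yesterday starts at that day's midnight and ends at the refresh
    # time, but never later than the day's own end.
    return day_start, min(refreshed, boundary)
```

Last 7 Days and Last 30 Days then simply extend the start of this window back by 6 and 29 further days, respectively.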

Note: By default, the dashboard displays the Opened Incidents volume based on data from the
past 30 days.
The Overview tab

You can view the following dashboard sections using the Overview tab:

Opened Incidents Volume:

This section displays the total volume of opened incidents, the volume of opened incidents based on their severity (Critical or High), and a percentage indicator that shows the increase or decrease in the current opened incidents volume compared to the opened incidents volume from the previous day, week, or month.

Note: The reference time slice for the percentage indicator depends on your choice of the time
frame in the Opened Incidents Overview dashboard.
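The percentage indicator is a straightforward period-over-period change; as a sketch:

```python
def volume_change_percent(current: int, previous: int) -> float:
    """Increase (positive) or decrease (negative) in the current opened
    incidents volume versus the previous day, week, or month."""
    if previous == 0:
        return 0.0  # simplification: no reference volume to compare against
    return 100.0 * (current - previous) / previous
```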

Metrics Used

Opened Incidents: The total number of incidents raised (opened) in a time frame.

Percentage of Opened Incidents: The percentage of opened incidents in a given Category or Dimension as compared with the total number of open Incidents.

Opened Incidents by Group and Category:

This section enables you to view the opened incidents volume by their assignment group, category, and sub category.

View by Assignment Group

You can use the Assignment Group tab to view volume of the opened incidents by each
assignment group, total percentage of opened incidents for the selected assignment group, and
percentage of opened Incidents that met SLA out of the total opened incidents for that group.

Assignment Groups are classified hierarchically using Levels. You can use the Level selectors to
filter and view the Opened Incidents by their Assignment Group levels. A higher number level
indicates a higher position in the hierarchy. Assignment Groups in lower levels are subsets of
groups in the higher levels. For example, if we consider two levels, Level 2 and Level 1, where Level 2 is higher than Level 1, your choice of Assignment Group in Level 2 filters the Assignment Groups displayed in Level 1. By default, you can choose All in the Level selectors to display the Incident summary corresponding to all the Assignment Groups.

For more information on the contents in the Opened Incidents by Group and Category section,
see Viewing Opened Incidents by Group and Category.

View by Category/SubCategory

You can use the Category/SubCategory tab to view volume of opened incidents by each
category or sub category, total percentage of opened incidents for the selected category or sub
category, and percentage of opened incidents that met SLA out of the total opened incidents for
that category or sub category.

You can also use the following tabs to filter and view the opened incidents volume by category
or sub category:
 Top 10 - To view the volume of the opened incidents by the top 10 category or sub category.

 All - To view volume of the opened incidents by all the categories or sub categories.

Viewing Opened Incidents by Group and Category

You can use Assignment Group, Category, or SubCategory tabs to view the following details:

 Volume of Opened Incidents – A horizontal bar graph displays the breakdown of the Opened Incidents volume by one of the following dimensions: Assignment Group, Category, or Sub Category. The horizontal axis represents the volume of Opened Incidents and the vertical axis represents the contributors of the chosen dimension. You can choose any one of the contributors from this graph to view the following related Opened Incident details:

o Percentage of Total – Displays the percentage of the Opened Incidents volume for the chosen contributor out of the total Opened Incidents volume across all contributors of the dimension. For example, if 5 Incidents are opened in Category A out of a total of 100 Opened Incidents, the Percentage of Total is 5%.

o Percentage Met SLA – Displays the percentage of the Opened Incidents volume that met SLA in the dimension of your choice. For example, if the Opened Incidents count is 5 and the count of Opened Incidents that met SLA is 3, the Percentage Met SLA is 60%.

o Volume of Opened Incidents by Severity – A doughnut graph displays the breakdown of the Opened Incidents volume by Severity. You can view the volume of Opened Incidents for each Severity type.
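The two percentage figures above are plain ratios; restated as a sketch, using the document's own worked examples:

```python
def percentage_of_total(contributor_count: int, total_count: int) -> float:
    """Share of the total Opened Incidents volume held by one contributor.
    For example, 5 incidents in Category A out of 100 opened -> 5.0 (%)."""
    return 100.0 * contributor_count / total_count if total_count else 0.0

def percentage_met_sla(met_sla_count: int, opened_count: int) -> float:
    """Share of a contributor's Opened Incidents that met the SLA.
    For example, 3 of 5 opened incidents met SLA -> 60.0 (%)."""
    return 100.0 * met_sla_count / opened_count if opened_count else 0.0
```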

Metrics Used

Opened Incidents: The total number of incidents raised (opened) in a time frame.

Opened Incidents Met SLA: The number of open Incidents that have met the SLA. This is a configurable metric. Based on your business requirements, the Numerify Administrator can configure the initial setup to tailor the metric suitably.

Percentage of Opened Incidents: The percentage of opened incidents in a given Category or Dimension as compared with the total number of open Incidents.

Opened Incidents Distribution:

This section displays the distribution of the opened incidents volume based on their contact type, location, and severity.

View by Contact Type

You can view distribution of the opened incidents volume by their contact type in a vertical bar
graph. In the graph, the horizontal axis represents the contact type contributors and the vertical
axis represents the opened incidents volume.
View by Location

You can view distribution of the opened incidents volume by their location in a vertical bar
graph. In the graph, the horizontal axis represents the location contributors and the vertical axis
represents the opened incidents volume.

View by Severity

You can view distribution of the opened incidents volume by their severity in a vertical bar
graph. In the graph, the horizontal axis represents the severity contributors and the vertical axis
represents the opened incidents volume.

Metrics Used

Opened Incidents: The total number of incidents raised (opened) in a time frame.

The Detail tab

You can view the following dashboard sections using the Detail tab:

Incidents Breakdown:

Using the Detail tab in the Opened Incidents Overview dashboard, you can view the Incidents Breakdown section.

In the Incidents Breakdown section, you can view the Opened Incidents volume by their
severity, status, and categorization.

View by Severity

The By Severity graph in this section displays the total opened incidents volume, the volume of incidents based on their severity level (Critical or High), and a percentage indicator that shows the increase or decrease in the current incident volume compared to the volume of incidents from the previous day, week, or month.

View by Status

The By Status graph in this section displays opened incidents volume by their status. In the
graph, the horizontal axis represents status of the opened incidents and the vertical axis
represents the opened incidents volume.

View by Categorization

The By Categorization graph in this section displays the volume of the Reopened Incidents.

Metrics Used

Reopened Incidents: The sum of Incidents that were reopened after they were closed. This is a configurable metric. Based on your business requirements, the Numerify Administrator can configure the initial setup to tailor the metric suitably.

Incidents Details:

In this section, you can view opened incident details such as created date, case ID, and so on in a tabular format.

You can use the Priority, Status, Assignment Group, and Categorization selectors to filter the
Incident details by the selected parameter.

Metrics Used

Opened Incidents: The total number of incidents raised (opened) in a time frame.

3) Incident Backlog Trend Analysis Dashboard:

The Incident Backlog Overview dashboard enables you to view the breakdown of the incident backlog volume by its assignment groups, the trend of the incident backlog for the past 12 months for a chosen assignment group, and the monthly incident backlog summary corresponding to the assignees in that assignment group.

Viewing Incident Backlog Trend Analysis Dashboard

You can modify and view the contents of this dashboard using the following options:

 Select Classification Level Here - To view the Incident Backlog details based on their
Classification Level.

 Select Parent Group Here - To view the Incident Backlog details based on their Parent Group.

 Level 4, Level 3, Level 2, and Level 1. The Assignment Groups are segregated into a hierarchy of
levels. Level 4 is the highest level and Level 1 is the lowest. The lower level Assignment Groups
are filtered based on your choice of the higher level Assignment Group. The filters will be
displayed only if there are values other than "Unknown" and "Unspecified".

Sections in Incident Backlog Trend Analysis Dashboard

You can view the following sections in this dashboard:

3.1) Assignment Groups by Incident Backlog


In this section, you can view the volume of the incident backlog based on its assignment groups.

The summary is displayed as a horizontal bar graph. In the graph, the horizontal axis represents the incident backlog volume and the vertical axis represents the assignment group names.

Metrics Used

Incident Backlog: The total number of open incidents that are pending resolution as of a specific point in time, such as the end of the working day. In a monthly report, you can use the Beginning and Ending Backlog to obtain the Incident Backlog as of the first and last day of the month, respectively. By default, this metric includes all Incidents not yet resolved or closed as of the last Data Refresh.

Note: The metric cannot be summarized across time because it is a snapshot metric.

3.2) Incident Backlog Trend for the Last 12 Months:

In this section, you can view the last 12 months' trend of the Incident Backlog corresponding to an Assignment Group. You can drive the contents of this section by choosing an Assignment Group from the Assignment Groups by Incident Backlog section.

The Incident Backlog trend is displayed as a bar graph. In the graph, the horizontal axis represents the past 12 months and the vertical axis represents the volume of opened and closed incidents.

 A trend line overlaid on the Opened and Closed Incidents represents the Incident Backlog
trend.
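The overlaid trend line can be thought of as carrying the backlog forward month by month: each month's ending backlog is the previous backlog plus incidents opened minus incidents closed. A sketch, assuming the backlog changes only through opens and closes:

```python
def monthly_backlog_trend(starting_backlog, opened_by_month, closed_by_month):
    """Ending backlog for each month in sequence. Backlog is a snapshot
    metric: it is carried forward, never summed across months."""
    backlog = starting_backlog
    trend = []
    for opened, closed in zip(opened_by_month, closed_by_month):
        backlog += opened - closed
        trend.append(backlog)
    return trend
```

For example, a backlog of 10 followed by two months of (5 opened, 3 closed) and (8 opened, 4 closed) yields ending backlogs of 12 and 16.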

Metrics Used

Opened Incidents: The total number of incidents raised (opened) in a time frame.

Closed Incidents: The total number of Incidents closed in a period of time. Closed Incidents mark the final status in the life cycle of an Incident.

Incident Backlog: The total number of open incidents that are pending resolution as of a specific point in time, such as the end of the working day. In a monthly report, you can use the Beginning and Ending Backlog to obtain the Incident Backlog as of the first and last day of the month, respectively. By default, this metric includes all Incidents not yet resolved or closed as of the last Data Refresh.

Note: The metric cannot be summarized across time because it is a snapshot metric.

3.3) Assignee Backlog Monthly Summary:

In this section, you can view the volume of the incident backlog for each assignee in a particular assignment group.

Viewing Assignee Backlog Monthly Summary

The contents of this section are driven by your choice of the assignment group in the Assignment Groups by Incident Backlog section and the month of the incident backlog in the Incident Backlog Trend for the Last 12 Months section. This section is displayed in a tabular format.

Tip: You can also adjust the Ending Backlog Threshold value to modify and view the contents
of the section.

Metrics Used

Beginning Backlog: The number of open Incidents that are pending resolution at the beginning of a specific time period.

Closed Incidents: The total number of Incidents closed in a period of time. Closed Incidents mark the final status in the life cycle of an Incident.

Current Backlog: The number of open Incidents pending resolution as of the current date. Although identical to Incident Backlog, Current Backlog differs by ignoring all date-based filters.

Note: The metric cannot be summarized across time because it is a snapshot metric.

Ending Backlog: The number of open Incidents in the Beginning Backlog that are still pending resolution at the end of a time period.

Incident Backlog: The total number of open incidents that are pending resolution as of a specific point in time, such as the end of the working day. In a monthly report, you can use the Beginning and Ending Backlog to obtain the Incident Backlog as of the first and last day of the month, respectively. By default, this metric includes all Incidents not yet resolved or closed as of the last Data Refresh.

Note: The metric cannot be summarized across time because it is a snapshot metric.

Opened Incidents: The total number of incidents raised (opened) in a time frame.

4) Incident SLA Summary Dashboards:

Numerify's Incident SLA Summary dashboards provide an overview and details of the incident SLA achievement summary (the percentage of logged incidents that are either responded to or resolved within the SLA time frame) for the last 6 months.

Service Level Agreements (SLAs) are used to ensure that an incident is addressed (responded to or resolved) within a certain amount of time. Once the SLAs are in place, the information in the system can be gathered to create Service Level Management-specific reports. Any data contained within the system on the incident SLAs and related records can be reported on, and actions can be triggered at different times during the SLA life cycle.
For more information on the dashboards, refer to the pages below.

4.1) SLA Summary Document Overview Dashboard:

Using the SLA Summary Document Overview dashboard, you can view the trend of SLA achievement rates for incidents over the past six months, corresponding to an Assignment Group.

When you load the SLA Summary Overview dashboard, a high-level view of SLA achievement
month-over-month is displayed. You can limit your analysis by selecting the required
Assignment Group(s) from the filter options.

 Click a month to view the corresponding Opened and Resolved incidents and percentage of
Response and Resolution that met the SLA time frame.

 Click the tab to see a more detailed view of the SLA achievement.

 The legend depicts the percentage of responses and resolutions that met the SLA time frame.

Metrics Used

Opened Incidents: The total number of incidents raised (opened) in a time frame.

Resolved Incidents: The number of Incidents with the Incident status Resolved as of a specific time period. Incidents that are resolved and subsequently moved to the Closed state within the time period are also included in the count.

Percentage Response Met SLA: The percentage of responses that met the Service Level Agreement time frame. Note: This is based on the SLA flag given by the source system.

Percentage Resolution Met SLA: The percentage of resolutions that met the Service Level Agreement time frame. Note: This is based on the SLA flag given by the source system.

4.2) SLA Summary Document Detail Dashboard:

Using the SLA Summary Document Detail dashboard, you can view the Incident SLA achievement summary and trend based on the Incident priorities.

This section displays the SLA achievement response and resolution rates for all the resolved
incidents in the last 6 months according to your selection of incident priorities.

The Incident SLA Trend graph exhibits the incident response and resolution trend percentage for the last 6 months according to your selection of incident Priority. In the graph, the X-axis represents the months and the Y-axis represents the incident response and resolution percentage.
 Click the tab to view the incident SLA achievement summary overview.

Metrics Used

Percentage Response Met SLA: The percentage of responses that met the Service Level Agreement time frame. Note: This is based on the SLA flag given by the source system.

Percentage Resolution Met SLA: The percentage of resolutions that met the Service Level Agreement time frame.

5) Resolved Incident Overview Dashboard:

Numerify's Resolved Incident Overview dashboard enables you to view resolved incident summary details corresponding to their assignment groups, categories, sub categories, and severity.

Viewing Resolved Incident Overview Dashboard

You can view the sections in the Resolved Incident Overview dashboard using the following
time period options:

 Yesterday

 Last 7 Days

 Last 30 Days

These options are relative to the Data Refreshed date and time that is displayed in the upper right
corner of the dashboard as Data Refreshed at Month DD, YYYY, hh:mm:ss AM/PM.

Data is extracted from ServiceNow at least once per day. Typically, a data extract occurs during
a time period that is close to midnight in the main time zone of your company. Under normal
circumstances, Yesterday corresponds to your normal expectation, which is the day that
precedes the day you are viewing the dashboard.

As an example, assume you are looking at this dashboard on April 15, 2016 at 9:00am EST and
your company’s main time zone is EST.

 If the Data Refreshed time is just prior to midnight between April 14 and April 15, Yesterday
includes data from April 14, 2016, 12:00:00 AM through the date and time displayed as the
Data Refreshed date; in other words, it will be somewhat less than a full 24 hours. Any data from
ServiceNow that is more recent than the Data Refreshed date will not be included in the
dashboard for Yesterday even if it is from April 14.

 If the Data Refreshed time is midnight between April 14 and April 15 or later, Yesterday includes data from April 14, 2016, 12:00:00 AM through April 14, 2016, 11:59:59 PM; in other words, a full 24 hours' worth of data.
Under normal circumstances, Last 7 Days corresponds to Yesterday plus the 6 days preceding
Yesterday, and Last 30 Days corresponds to Yesterday plus the 29 days preceding Yesterday.

Although rare, the data extract may not successfully complete. In this case, the above definitions
will continue to be relative to the Data Refreshed date and time, which may not correspond to
your normal expectation of Yesterday and Last X Days.

Note: By default, the dashboard displays the contents in the section based on data from the past
30 days.

Sections in Resolved Incident Overview Dashboard

You can view the following sections in this dashboard:

5.1) Resolved Incidents Volume:

This section displays the total volume of resolved incidents, the volume of resolved incidents based on their Severity (Critical or High), and a percentage indicator that shows the increase or decrease in the current volume of resolved incidents compared to the volume of resolved incidents from the previous day, week, or month.

Metrics Used

Resolved Incidents: The number of Incidents with the Incident status Resolved as of a specific time period. Incidents that are resolved and subsequently moved to the Closed state within the time period are also included in the count.

5.2) Resolved Incidents by Group and Category:

This section displays the resolved incidents volume by their assignment group, category, and sub category.

View by Assignment Group

The Assignment Group tab displays summary of the resolved incidents based on their
assignment groups.

You can further filter summary of the resolved incidents based on the assignment group level
options: Level 4, Level 3, Level 2, and Level 1.

The assignment groups are divided into a hierarchy of levels. Level 4 is the highest level and
Level 1 is the lowest. The lower level assignment groups are filtered based on your choice of the
higher level Assignment Group. The filters will be displayed only if there are values other than
"Unknown" and "Unspecified".

Note: By default, you can choose All in the level options to display incident summary
corresponding to all the assignment groups.
View by Category or SubCategory

The Category or SubCategory tabs display summary of the resolved incidents based on its
category or sub-category.

You can further filter the summary of the resolved incidents based on the following options:

 Top 10 – To view volume of the resolved incidents for the top 10 categories or sub categories.

 All – To view volume of the resolved incidents for all the categories or sub categories.

Viewing Resolved Incidents by Group and Category

You can view the following details of the Resolved Incidents in the Resolved Incidents by Group
and Category report:

 Incident Volume by dimensions - Volume of the Resolved Incidents corresponding to one of the following dimensions: Assignment Group, Category, or Sub-Category. You can view the report as a horizontal bar graph where the X axis represents the volume of resolved incidents and the Y axis represents the contributors of the chosen dimension.

 Incident Volume by Severity - Volume of the resolved incidents by their severity. You can view
the details in the form of a doughnut graph.

 Percentage of Total - Ratio of the resolved incident volume for one contributor in a dimension to
the total resolved incident volume across all contributors in the dimension. The ratio is represented
as a percentage.

 Percentage met SLA – Percentage of resolved incidents that met SLA in the chosen dimension.

Metrics Used

Resolved Incidents: The number of Incidents with the status Resolved in a specific time
period. Incidents that are resolved and subsequently moved to the Closed state within the time
period are also included in the count.

Resolved Incidents Met SLA Percent: The ratio between the number of Resolved Incidents that
met SLA and the total number of Resolved Incidents. Incidents that are resolved within the
stipulated time are termed Incidents that met SLA. An Incident can have more than one SLA;
only when an Incident meets all of its SLAs is it considered to have met SLA.

Note: This is based on the SLA flag given by the source system.
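The "must meet all SLAs" rule above can be sketched as follows. The field names (`status`, `sla_flags`) are illustrative, not the product's actual schema; in the product the flags come from the source system.

```python
# Sketch of the "met all SLAs" rule, assuming each incident carries a list
# of per-SLA met/breached flags supplied by the source system.
# Field names here are illustrative, not the product's actual schema.

def met_sla_percent(incidents):
    """Percentage of resolved incidents whose every SLA flag is True."""
    resolved = [i for i in incidents if i["status"] in ("Resolved", "Closed")]
    if not resolved:
        return 0.0
    met = sum(1 for i in resolved if all(i["sla_flags"]))  # must meet ALL SLAs
    return 100.0 * met / len(resolved)

incidents = [
    {"status": "Resolved", "sla_flags": [True, True]},   # met both SLAs
    {"status": "Closed",   "sla_flags": [True, False]},  # breached one, so not met
    {"status": "Open",     "sla_flags": [True]},         # not resolved, excluded
]
pct = met_sla_percent(incidents)  # 1 of 2 resolved incidents met all SLAs
```

Note how an incident with one breached SLA counts against the percentage even if its other SLAs were met.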

6) Resolved Incidents Trend Dashboard


This dashboard enables you to view the trend of resolved incidents over a period of time based
on parameters such as Status, Assignment Group, Category, and Sub-Category.

You can view the content of the sections in this dashboard for the following time periods:
 Days – Displays the resolved incidents trend based on data from the past 14 days.

 Weekly – Displays the resolved incidents trend based on data from the past 13 weeks,
including the current week and the past 12 weeks.

 Monthly – Displays the resolved incidents trend based on data from the past 13 months,
including the current month and the past 12 months.
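The three windows described above (14 days; 13 weeks and 13 months including the current period) can be sketched with plain dates. This is only an illustration of the windowing, not the dashboard's implementation:

```python
# Sketch of the three trend windows: 14 days, 13 weeks, 13 months,
# where weeks and months include the current period.
from datetime import date, timedelta

def days_window(today):
    """Past 14 days, oldest first, ending on `today`."""
    return [today - timedelta(days=d) for d in range(13, -1, -1)]

def weeks_window(today):
    """Start (Monday) of each of the past 13 weeks, including the current week."""
    monday = today - timedelta(days=today.weekday())
    return [monday - timedelta(weeks=w) for w in range(12, -1, -1)]

def months_window(today):
    """(year, month) pairs for the past 13 months, including the current month."""
    out = []
    y, m = today.year, today.month
    for _ in range(13):
        out.append((y, m))
        m -= 1
        if m == 0:
            y, m = y - 1, 12
    return list(reversed(out))

today = date(2024, 3, 15)
```

For `today = 2024-03-15`, the monthly window runs from (2023, 3) through (2024, 3).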

Sections in Resolved Incidents Trend Dashboard

You can view the following sections in this dashboard:

6.1) Resolved Incidents Volume Trend

In this section, you can view the trend of escalated incident volume compared to the volume
of resolved incidents.

This section displays incident summary as:

 KPIs - Volume of resolved and escalated incidents.

 Incident Volume Trend graph - Escalated incident volume trend compared to the volume of
resolved incidents. In this graph, the horizontal axis represents the time period and the vertical
axis represents the volume of resolved and escalated incidents.

Metrics Used

Resolved Incidents: The number of Incidents with the status Resolved in a specific time
period. Incidents that are resolved and subsequently moved to the Closed state within the time
period are also included in the count.

Escalated Incidents: Incidents that have passed, or are about to cross, the SLA timeline and are
escalated to the next level for timely attention.

Resolved Incidents Breakdown by Volume

In this section, you can view the volume of resolved incidents based on assignment group,
category, sub-category, and severity.

You can further restrict the contents of this section to show the breakdown for the top 10 or
all resolved incidents based on their assignment groups. For example, you can select
Assignment Group to view the resolved incidents breakdown by assignment group, and click
All to view all assignment groups rather than just the top 10.

The Resolved Incidents Breakdown by Volume section displays Resolved Incident summary
based on their Severity. You can filter the Incident summary based on any one of the following
Severity selector options:

 Severity 1 – Critical

 Severity 2 – High
 Severity 3 – Moderate

 Severity 4 – Normal

 UNKNOWN

You can use the following tabs to filter the contents of this section:

 Assignment Group

 Category

 Sub-Category

The content of this section is displayed in the Resolved Incident Summary by Dimension
graph and the Incident Volume Trend graph.

Resolved Incident Summary by Dimension graph

This graph displays the Incident summary based on Assignment Group, Category, and Sub-Category.
You can filter the contents of the graph using the following two selectors:

Top 10 - Displays the top 10 contributors from your chosen dimension.

All - Displays All the contributors from your chosen dimension.

In the graph, the horizontal axis represents the volume of resolved incidents and the vertical
axis represents the contributors of the chosen dimension.
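The Top 10 / All selector over a chosen dimension amounts to a ranked count. A minimal sketch, assuming incidents are plain records keyed by the dimension (the dict keys here are hypothetical):

```python
# Minimal sketch of the Top 10 / All selector for a chosen dimension.
# Incident records and the "assignment_group" key are illustrative.
from collections import Counter

def breakdown(incidents, dimension, top=None):
    """Incident volume per contributor of `dimension`, largest first.

    top=10 mimics the Top 10 selector; top=None mimics All.
    """
    counts = Counter(i[dimension] for i in incidents)
    return counts.most_common(top)  # most_common(None) returns every contributor

incidents = [
    {"assignment_group": "Network"},
    {"assignment_group": "Network"},
    {"assignment_group": "Database"},
]
top_all = breakdown(incidents, "assignment_group")  # the "All" view
```

`Counter.most_common` already returns contributors sorted by volume, which matches the bar-graph ordering.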

Incident Volume Trend graph

This graph displays the escalated incident volume trend compared to the resolved incident
volume. The content of this graph is driven by your choice of dimension contributors in the
Resolved Incident Summary by Dimension graph.

In the graph, the horizontal axis represents the time period and the vertical axes represent the
volume of resolved and escalated incidents.

Metrics Used

Resolved Incidents: The number of Incidents with the status Resolved in a specific time
period. Incidents that are resolved and subsequently moved to the Closed state within the time
period are also included in the count.

Escalated Incidents: Incidents that have passed, or are about to cross, the SLA timeline and are
escalated to the next level for timely attention.

7) Incident Scorecard Dashboard

Use this dashboard to view the following details based on your choice of customer:

 Incident volume for the past 30 days

 Monthly trend of Opened Incidents

 Opened Incidents for the past 30 days based on Priority

 Top 10 Configuration Items types with the highest volume of Opened Incidents

 Top 10 Configuration Items with the highest volume of Opened Incidents

The subsequent topics provide detailed information on each section.

Sections in Incident Scorecard dashboard

Navigate to the following sections to find detailed information on each of them:

Incident Volume Past 30 Days

Use this section to find the volume of opened, reopened, and escalated incidents for the past
30 days. Each metric also shows the percentage increase or decrease in volume compared to
the previous month. Note that you can filter this information based on customers.

Metrics Used

Opened Incidents: The total number of incidents raised (opened) in a time frame.

Reopened Incidents: The number of Incidents that were reopened after they were closed. This
is a configurable metric; based on your business requirements, the Numerify Administrator can
configure the initial setup to tailor the metric suitably.

Opened Incidents Escalated: The number of open Incidents with the status Escalated. This is a
configurable metric; based on your business requirements, the Numerify Administrator can
configure the initial setup to tailor the metric suitably.

Monthly Trend Opened Incidents

This section displays the opened incidents trend and the trend of the average time it takes to
resolve incidents over the last year. In addition to viewing this section for a specific customer,
you can also filter the trend information based on Incident Priority or Category.

In the Monthly Trend Opened Incidents graph, the horizontal axis represents the calendar
months in a year and the vertical axis represents the volume of opened incidents and the time
taken, in days, to resolve an incident.
Metrics Used

Opened Incidents: The total number of incidents raised (opened) in a time frame.

MTTR: The elapsed time between Incident creation and resolution. The elapsed time is
measured as the total time open.
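MTTR as "total time open" is simply creation-to-resolution elapsed time, averaged over resolved incidents. A minimal sketch with illustrative timestamps (the field names are not the product's schema):

```python
# Sketch of MTTR as mean creation-to-resolution elapsed time.
# Field names (created_at, resolved_at) are illustrative.
from datetime import datetime

def mttr_days(incidents):
    """Mean creation-to-resolution time, in days, over resolved incidents."""
    durations = [
        (i["resolved_at"] - i["created_at"]).total_seconds()
        for i in incidents
        if i.get("resolved_at") is not None  # still-open incidents are excluded
    ]
    if not durations:
        return 0.0
    return (sum(durations) / len(durations)) / 86400.0  # seconds per day

incidents = [
    {"created_at": datetime(2024, 1, 1), "resolved_at": datetime(2024, 1, 3)},
    {"created_at": datetime(2024, 1, 2), "resolved_at": datetime(2024, 1, 6)},
    {"created_at": datetime(2024, 1, 5), "resolved_at": None},  # still open
]
avg = mttr_days(incidents)  # (2 days + 4 days) / 2
```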

Priority Split Opened Incidents Past 30 Days

This section enables you to view the number of opened incidents, the current trend of opened
incident volume, and the percentage increase or decrease in incident volume compared to the
last 30 days. Note that you can filter this information based on customers.

Metrics Used

Opened Incidents: The total number of incidents raised (opened) in a time frame.

Top 10 CI Types


Among the many CI types used by a Customer, this section displays the following details for the
top 10 CI types using three different graphs:

 Volume of Opened Incidents grouped by their Priority for the last 30 days: You can select a CI
type to drive the content of the other two graphs in this section.

 Mean time it takes to resolve Incidents belonging to a CI type. You can also compare it with a
wide range of MTTR values:

o High - Displays the highest mean time it took to resolve Incidents of a particular CI type
among all the CI types. For example, if there are 5 CI types, the application displays the
highest MTTR value among the 5 CI types.

o Mean - Displays the average of all the MTTR values for Incidents belonging to all CI
types.

o Median - Displays the median of all the MTTR values for Incidents belonging to all CI
types.

o Low - Displays the lowest mean time it took to resolve Incidents of a particular CI type
among all the CI types.

 Trend of Opened Incidents belonging to the Configuration Item (CI) type for the last 30 days.

Note: You can select a CI type from the left graph to view the MTTR benchmark and Opened
Incident trend for that particular Configuration Item (CI) type.

Metrics Used

Opened Incidents: The total number of incidents raised (opened) in a time frame.

MTTR: The elapsed time between Incident creation and resolution. The elapsed time is
measured as the total time open.
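The High / Mean / Median / Low benchmarks above are order statistics over the per-CI-type mean resolution times. A sketch, assuming the per-CI-type MTTR values (in days) have already been computed; the CI-type names are made up:

```python
# Sketch of the High / Mean / Median / Low MTTR benchmarks across CI types.
# Per-CI-type MTTR values are assumed precomputed (in days); names are made up.
from statistics import mean, median

def mttr_benchmarks(mttr_by_ci_type):
    """Benchmark stats over the per-CI-type mean resolution times."""
    values = list(mttr_by_ci_type.values())
    return {
        "high": max(values),    # slowest CI type's mean, as in the 5-CI example
        "mean": mean(values),   # average across all CI types
        "median": median(values),
        "low": min(values),     # fastest CI type's mean
    }

mttr_by_ci_type = {"Server": 4.0, "Laptop": 1.0, "Printer": 2.5}
bench = mttr_benchmarks(mttr_by_ci_type)
```

Selecting a CI type in the left graph would then be compared against these four values.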

Top 10 Configuration Items


Among the many Configuration Items used by a Customer, this section displays the following
details for the top 10 Configuration Items using three different graphs:

 Volume of Opened Incidents grouped by their Priority for the last 30 days: You can select a
Configuration Item (CI) to drive the content of the other two graphs in this section.

 Mean time it takes to resolve Incidents belonging to the Configuration Item (CI). You can also
compare it with a wide range of MTTR values:

o High - Displays the highest mean time it took to resolve Incidents of a particular
Configuration Item (CI) among all the Configuration Items. For example, if there are 5 CIs,
the application displays the highest MTTR value among the 5 Configuration Items.

o Mean - Displays the average of all the MTTR values for Incidents belonging to all
Configuration Items.

o Median - Displays the median of all the MTTR values for Incidents belonging to all
Configuration Items.

o Low - Displays the lowest mean time it took to resolve Incidents of a particular
Configuration Item (CI) among all the Configuration Items.

 Trend of Opened Incidents belonging to the Configuration Item (CI) for the last 30 days.

Note: You can select a Configuration Item (CI) from the left graph to view the MTTR
benchmark and Opened Incident trend for that particular Configuration Item (CI).

Metrics Used

Opened Incidents: The total number of incidents raised (opened) in a time frame.

MTTR: The elapsed time between Incident creation and resolution. The elapsed time is
measured as the total time open.

8) Incident Daily Trends Dashboard


This dashboard enables you to analyze incident performance based on the incident fix rate
trend. You can also analyze incident performance based on assignment groups, the incident
assignee ratio within the assignment group, and Service Level Agreement (SLA) target
fulfillment numbers.
Available sections in this dashboard are described below.

 Incident Daily Trends

 Incidents for Date

Tip: You can use the Level 1, Assignment Group, and Priority selectors to filter and view the
corresponding incidents' details.

Incident Daily Trends

In this section, you can view the daily trend of incidents based on their fix rate. The bar graph
provides a comparative view of opened and resolved incidents. In the graph, the horizontal
axis represents the time period and the vertical axis represents the volume of opened and
resolved incidents and the incident fix rate.

Incidents for Date

In this section, you can view a breakdown of incidents by their assignment groups for the
selected time period. You can further analyze the incidents based on their assignee ratio and
Met SLA target values.

Tip:

 You can choose the date from the graph in the Incident Daily Trends section.

 Click an assignment group to view the incident details specific to the assignment group.

Metrics Used

Opened Incidents: The total number of incidents raised (opened) in a time frame.

Resolved Incidents: The number of Incidents with the status Resolved in a specific time
period. Incidents that are resolved and subsequently moved to the Closed state within the time
period are also included in the count.

Incident Fix Rate: The rate at which Incidents are fixed. The incident fix rate is the ratio
between the number of Incidents fixed out of the available open Incidents and the number of
open Incidents. All the calculations are based on the same time period, which is determined by
the report. For example, if the report shows a weekly trend, the Incident Fix Rate is calculated
for the week.

Incident Assignee Ratio: The ratio of the number of assignees assigned to the incidents to the
total number of opened incidents.

Met the SLA Targets: The total number of incidents that have met the defined SLA targets.
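The fix rate and assignee ratio definitions above reduce to two simple ratios over one reporting period. A sketch with illustrative aggregate inputs (the function names are not the product's API):

```python
# Sketch of the Incident Fix Rate and Incident Assignee Ratio definitions,
# for one reporting period. Function names and inputs are illustrative.

def incident_fix_rate(fixed, open_incidents):
    """Incidents fixed out of the available open incidents, as a ratio."""
    return fixed / open_incidents if open_incidents else 0.0

def assignee_ratio(distinct_assignees, opened_incidents):
    """Assignees working the incidents relative to opened incident volume."""
    return distinct_assignees / opened_incidents if opened_incidents else 0.0

# For a weekly trend, both ratios use that week's numbers only.
fix_rate = incident_fix_rate(fixed=30, open_incidents=40)
ratio = assignee_ratio(distinct_assignees=8, opened_incidents=40)
```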

9) Incident Assignee Details Dashboard

This dashboard enables you to view the assignee details of incidents. You can view a
breakdown of the incidents by their status and priority.

Tip:

 You can use the Level 1, Assignment Group, and Priority selectors to filter and view the
corresponding Incidents' assignee details.

 Click the Incident Summary Dashboard button in the top left of your screen to get an overview
of the incidents.

Metrics Used

# of Incidents: The total number of incidents raised in a period of time.

10) Incident SLA Details Dashboard


This dashboard enables you to view the Service Level Agreement (SLA) details of incidents for
any specific date. You can view a breakdown of the incidents by their assignment group,
assignee, reassignment count, and Met SLA status.

Tip:

 You can use the Level 2, Level 1, Assignment Group, and Priority selectors to filter and view
the corresponding Incidents' SLA details.

 Click the Incident Daily Trends button in the top left of your screen to get an overview of the
incidents.

Metrics Used

Reassignment Count: The number of times an Incident has been reassigned within the
assignment groups.

Incidents Met SLA: The number of incidents that have met their SLA criteria within a defined
time period.

This completes the Numerify360 Incident reports.


INCIDENT reports (BMC) names and descriptions

All Incidents > Incident Details (Dynamic – By Status and Assigned Groups)
Lists details of all incidents. Details include summary and work information.

All Incidents by Status and Assigned Groups
Provides a summary of all incidents by status. You can drill down to the assigned groups for
the selected incident status, and you can select an assigned group to see incident details. For
additional details about an incident, select the incident record in the report to view the
Incident form and take the required action.

Open Incidents > Count By Assignee Group
Open Incident Count by Assignee Group and Assignee: Provides a count of incidents by
assigned group and by each assignee in the group. Management can use this report to review
the current workload.

Open Incidents > Count By Product Categorization
Open Incident Count by Product Categorization: Provides a breakdown of the number of
incidents for each product category (for example, under Hardware, the count for Processing
Unit).

Resolved Incidents > Resolved Incidents
Resolved Incident Volume by Product Categorization: Displays details of all resolved
incidents based on Tier 1 product categorization.

Crystal Reports — names and descriptions

Asset > Configuration Items with Open Incidents
Configuration Items with Open Incidents: Lists CIs that have open incidents on them.

Incident Information > Aging
Aging Incidents By Activity Time: Lists all open incidents and the amount of time since the
reported date.

Incident Information > All Incidents
High Volume Incident by Company Chart: Displays a pie chart of all incidents based on the
company. This report is intended for use with multi-tenancy.
High Volume Incident by Departments Chart: Displays a pie chart of all incidents based on
the department.
High Volume Incident Requester Chart: Displays a pie chart of all incidents based on the user.
Incident Details by Date Range: Lists details of all incidents based on a specified date range.
Details include summary and work information.
Incident Volume By Product Categorization Chart: Displays a bar graph illustrating all
incidents based on Tier 1 product categorization.
Monthly Incident Volumes: Provides a count of all incidents by month.
Weekly Incident Volume Chart: Provides a count of all incidents by week.

Incident Information > Assignee Charts
Open Incident Volume by Assignee: Displays a bar chart of the number of open incidents for
each assignee.
Resolved and Closed Incident Volume by Assignee: Displays a bar chart of the number of
resolved and closed incidents for each assignee.

Incident Information > Assignment Log Data
Group Assignment to Incidents: Displays a history of the groups assigned to each incident
request.

Incident Information > Open Incidents
Incident Volume By Priority and Status Charts: Displays a bar graph illustrating all open
incidents based on Tier 1 product categorization.
My Open Incidents: Reports all open incidents that are assigned to the ID from which the
report is run.
Open Incidents – Current / by Date Range: Provides a list of all open, current incidents or a
list of incidents based on a particular date range.

Incident Information > Resolved Incidents
My Resolved Incidents: Displays all resolved incidents that are assigned to the ID under
which the report is run.
Resolved Incident Counts by Product Categorization: Provides a count of all resolved
incidents based on product categorization.
Resolved Incident Volume by Company Charts: Displays a pie chart illustrating all resolved
cases based on company. This report is for multi-tenancy clients.
Resolved Incident Volume By Department Charts: Displays a pie chart illustrating all
resolved cases based on department.
Resolved Incident Volume By Priority and Status Charts: Displays pie charts illustrating all
resolved and closed cases, one based on status and the other based on priority of all resolved
cases.
Resolved Incident Volume By Product Categorization Chart: Displays a pie chart illustrating
all resolved incidents based on Tier 1 product categorization.

Relationship Information > Change
Change Induced Incidents: Lists incidents that were caused by changes. Note: This report is
available only if BMC Change Management is installed.

Incident Information > Related Configuration Items
Incidents with Related Configuration Items: Returns a list of incident requests that have a
related CI. Included is the type of CI, a summary of the incident request, and the reported
date.

Backlog Evolution This report uses unsolved tickets as a baseline to compare against incoming new
tickets and the daily rate of solved tickets over the last three months.
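The backlog arithmetic this report implies (the unsolved-ticket baseline, plus new tickets, minus solved tickets, per day) can be sketched as follows. The function name and inputs are illustrative, not the reporting tool's API:

```python
# Sketch of a backlog-evolution series: the unsolved-ticket baseline,
# plus new tickets, minus solved tickets, tracked per day.

def backlog_series(starting_backlog, new_per_day, solved_per_day):
    """Running unsolved-ticket count for each day in the window."""
    backlog = starting_backlog
    series = []
    for new, solved in zip(new_per_day, solved_per_day):
        backlog += new - solved  # net change for the day
        series.append(backlog)
    return series

series = backlog_series(100, new_per_day=[10, 12, 8], solved_per_day=[9, 15, 8])
```

A rising series means the daily solve rate is not keeping pace with incoming tickets.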
High & Urgent Priority Tickets
This report uses high and urgent unsolved tickets as a baseline to compare against incoming new high
and urgent tickets and the daily rate of solved high and urgent tickets over the last three months.
Incident Evolution
This report displays tickets with the type set to Incident, comparing new incident tickets with resolved
and unsolved incident tickets over the last three months.
Resolution Times
This report displays resolution times for solved and closed tickets over the last three months using
three measurements of time: less than 2 hours, less than 8 hours, and less than 24 hours.
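The three time measurements above partition tickets into cumulative-style buckets. A minimal sketch, with an extra overflow bucket for tickets that took a day or more (the bucket labels are illustrative):

```python
# Sketch of the three resolution-time buckets used by the report:
# under 2 hours, under 8 hours, under 24 hours, plus an overflow bucket.

def bucket_resolution_times(hours_list):
    """Count solved/closed tickets per resolution-time bucket."""
    buckets = {"<2h": 0, "<8h": 0, "<24h": 0, ">=24h": 0}
    for h in hours_list:
        if h < 2:
            buckets["<2h"] += 1
        elif h < 8:
            buckets["<8h"] += 1
        elif h < 24:
            buckets["<24h"] += 1
        else:
            buckets[">=24h"] += 1
    return buckets

counts = bucket_resolution_times([0.5, 3, 10, 30, 1.5])
```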
Ticket Priorities
This report displays tickets by priority groupings over the last three months. Tickets with low and
normal priorities are grouped together as are tickets with high and urgent priorities.

Building data series


Table 1. Data series conditions
Condition Description

Priority There are four values for priority: Low, Normal, High, and Urgent.
As with status, you can use the field operators to select tickets that span different
priority settings. For example, this statement returns all tickets that are not urgent:
Priority is less than urgent.

Type The ticket type values are:

Question
Incident: indicates that there is more than one occurrence of the same problem.
When this occurs, one ticket is set to Problem and the other tickets reporting the
same problem are set to Incident and linked to the problem ticket.
Problem: a support issue that needs to be resolved.
Task: used by support agents to track various tasks.

Group Use this condition to narrow down tickets by group name.

Assignee Use this condition to narrow down tickets by agent.

Organization Use this condition to narrow down tickets by organization.

Tags Use this condition to determine whether tickets contain a specific tag or tags. You
can include or exclude tags in the condition statement by using the operators contains at
least one of the following and contains none of the following. More than one tag can
be entered; separate tags with a space.

Ticket channel The ticket channel is where and how the ticket was created and can be any of the
following options:
Web form
Email
Chat
Twitter
Twitter DM (direct message)
Twitter Favorite
Voicemail
Phone call (incoming)
Get Satisfaction
Feedback Tab
Web service (API)
Trigger or automation
Forum topic
Closed ticket
Ticket sharing
Facebook Post

Resolution time in Use this condition to narrow down tickets by the number of hours from when the
hours ticket was created to Closed.

Ticket Satisfaction Ticket satisfaction rating is available on Professional and Enterprise plans. This
condition returns the following customer satisfaction rating values:
Unoffered means that the survey has not previously been sent
Offered means that the survey has already been sent
Bad means that the ticket has received a negative rating
Bad with comment means that the ticket has received a negative rating with a
comment
Good means that the ticket has received a positive rating
Good with comment means that the ticket has received a positive rating with a
comment

Requester's language Returns the language preference of the person who submitted the request.

Reopens Available on Professional and Enterprise. The number of times a ticket has moved
from Solved to Open or Pending.

Agent replies Available on Professional and Enterprise. The number of public agent comments.

Group stations Available on Professional and Enterprise. The number of different groups to which a
ticket has been assigned.

Assignee stations Available on Professional and Enterprise. The number of different agents to which a
ticket has been assigned.

First reply time in Available on Professional and Enterprise. The time between ticket creation and the
hours first public comment from an agent. You can specify either calendar hours or
business hours.

First resolution time in Available on Professional and Enterprise. The time from when a ticket is created to
hours when it is first solved. You can specify either calendar hours or business hours.

Full resolution time in Available on Professional and Enterprise. The time from when a ticket is created to
hours when it is solved for the last time. You can specify either calendar hours or business
hours.

Agent wait time in Available on Professional and Enterprise. The cumulative time a ticket has been in a
hours Pending state (awaiting customer response). You can specify either calendar hours
or business hours.

Requester wait time in Available on Professional and Enterprise. The cumulative time that a ticket is in a
hours New, Open or On-hold state. You can specify either calendar hours or business
hours.

On-hold time in hours Available on Professional and Enterprise. The cumulative time that a ticket is in the
On-hold status. You can specify either calendar hours or business hours.

Custom fields Custom fields that set tags (drop-down list and checkbox) are available as
conditions. You can select the drop-down list values and Yes or No for checkboxes.
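Several of the conditions above reduce to simple set checks. A minimal sketch of the two tag operators, assuming the condition's tags are entered as a space-separated string as described for the Tags condition (the function names are illustrative):

```python
# Sketch of the two tag operators from Table 1, assuming condition tags
# are entered as a space-separated string. Function names are illustrative.

def contains_at_least_one(ticket_tags, condition_tags):
    """True if the ticket has any of the space-separated condition tags."""
    return bool(set(ticket_tags) & set(condition_tags.split()))

def contains_none_of(ticket_tags, condition_tags):
    """True if the ticket has none of the space-separated condition tags."""
    return not contains_at_least_one(ticket_tags, condition_tags)

ticket = {"tags": ["vpn", "urgent_review"]}
include = contains_at_least_one(ticket["tags"], "vpn wifi")   # matches "vpn"
exclude = contains_none_of(ticket["tags"], "billing refund")  # no overlap
```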

Knowledge Management Reports


Knowledge Management Activity
Knowledge Management Documents Summary
Knowledge Management: User Demand
Knowledge Management Usage by Department
Self-Service Knowledge Search History
Self-Service Escalated Knowledge Management Search Escalation
Problem Management Reports
Open and Closed Problems by Area
Open and Closed Problems by Service
Problems Closed Meeting SLA Target
Average Time to Diagnose Problems

Problem reports — names and descriptions:

Report name Description


Problem Investigation > All
All Problem Investigations by Lists all problem investigations, based on the
Coordinator Group problem coordinator's group
Problem Investigation > Open by
Service
Lists open problem investigation records grouped by
Open Problem Investigations by Service
service
Problem Investigation > Resolved by
Product Categorization
Resolved Problem volume by Product Lists resolved and closed problem investigation
Categorization records grouped by product category
Known Error > All by Coordinator
Group
Lists all known errors, based on the problem
All Known Errors by Coordinator Group
coordinator's group

Crystal Reports- ProblemManagement— names and descriptions

Report name Description


Known Error > Resolved >
Resolved Known Errors by
Coordinator Group
Lists resolved known errors, which includes known errors
Resolved Known Errors by
with a status of canceled, closed, and corrected. The listing is
Coordinator Group
grouped by status and problem coordinator group
Known Error > Open > Open
Known Errors by Coordinator
Group
Open Known Errors by Lists open known errors records grouped by status and
Coordinator Group problem coordinator group
Problem Investigation > Open
Open Problem Investigations by Lists open problem investigation records grouped by status
Coordinator Group and problem coordinator group
Problem Investigation >
Resolved
Resolved Problem Investigations Lists resolved and closed problem investigation records
by Coordinator Group grouped by status and problem coordinator group
Problem Investigation > Root
Cause
Problem Investigations by Root
Lists problem investigations grouped by root cause
Cause
Resolved Problem Investigations Lists all resolved problem investigations, grouped by the
by Root Cause root cause of the problem
Request Management Reports
Request Aging Report *
Service Desk Reports
Escalated Interactions
Open and Closed Service Desk Interactions
First Time Fixed Interactions
Interactions Resulting in Related Issues
Top 20 Operators by Average Interaction Time in Last 90 Days
Interactions Closed in a Given Year
Number of Service Desk Requests by Department
Service Level Management Reports
SLM: Response SLO Metrics
SLM: Summary
SLM: Availability-Duration Metrics
SLM: Availability-Uptime Metrics
Change Management Reports
Open and Closed Change Requests
Percentage of Rejected Changes
Percentage of Emergency Changes
Percentage of Successful Changes
Changes Scheduled for This Week
Configuration Management Reports
Configuration Item Relationships *
Configuration Item Summary *
Percentage of Configuration Items Related to Other Configuration Items *

REPORTS
 System: Provides a complete report on all the system-related activities of all the
devices. This category of reports includes All Events, All Down Events, SNMP Trap Log,
Windows Event Log, Performance Monitor Log, Notification Profiles Triggered,
Downtime Scheduler Log, Schedule Reports Log, All Alerts, and All Down Alerts.
 Health and Performance: Gives you a detailed report on the health and
performance of all or the top N devices.
 Availability (Average Uptime) and Response: Gives you a detailed report on the
availability and the response time of all or the top N devices.
 Inventory: Inventory reports are available for servers, desktops, all devices, SNMP-
enabled devices, and non-SNMP devices.
 Custom Report Builder: The Custom Report Builder is the easiest way to generate a
report using only the data that you want. It proceeds in stages: Category, Devices,
Monitors, Time Period, and Graph or Table view.

Key Performance Indicators


KU Information Technology monitors and assesses hundreds of metrics to ensure our systems
and processes are working as efficiently and effectively as possible to meet the needs of our
customers.

KPI Reporting
To provide transparency and hold ourselves accountable, we share Key Performance Indicators
(KPIs) with KU leadership each month. Here are some examples of the KPIs we report on:

Change Request (CR) Records


We track all changes we make to systems and determine if they caused a problem. We strive to
plan and make changes and updates to our services with a minimum of disruption to customers.
As you can see by the graph below, most changes are completed with no problems. It is
important to identify, track and investigate changes that do cause a problem, to assess if there is
something we can do to improve the service, and if change procedures need to be reviewed or
revised.

Change Approval Process


Our Change management process follows best practice according to Information Technology
Infrastructure Library (ITIL) for service management. A proven procedure for change review
and approval that is repeatable is more likely to be successful each time a change is made. We
track all changes to see if they followed the established review and approval process. We also
keep track of the types of changes to see if there are trends that can inform our knowledge and
processes.



All Incidents by Sub-Service
An "incident" is a problem that has been reported by a customer or identified by IT staff. We
track each incident and the correlated service, and report on the top 20 services with the most
reported incidents. This helps us identify issues and trends, so we can investigate solutions to
correct the issues and avoid future problems.



IT Metrics Categorization

For each IT function, there should be regular operational metrics for all key dimensions of
delivery, as well as quality management or verification metrics. Further, each area should
have unit measures to enable an understanding of performance. These three types of metrics
are detailed as:

 operational or primary metrics – those metrics used to monitor, track, and make decisions
on the daily or core work against key delivery parameters:
1) quality,
2) availability,
3) cost,
4) delivery against SLAs,
5) schedule.
Operational metrics are the basis of effective management and are the fundamental
measures of the activity being done.
 verification or secondary metrics – those metrics used to verify that the work completed meets
standards or is functioning as designed. Verification metrics should be collected and reviewed
by the same operational team, though potentially by different members of the team, or as part
of a broader activity (e.g., a DR test). Verification metrics provide an additional measure of
either overall quality or critical activity effectiveness.
 performance or tertiary metrics – those metrics that provide insight into the performance of the
function or activity. Performance metrics give insight into the team's efficiency, timeliness, and
effectiveness, for example:
1) unit cost,
2) defects per unit,
3) productivity, etc.

Operational Metrics Subset:


 Server asset counts (by type, by OS, by age, location, business unit, etc)
 Server configurations by version (n, n-1, n-2, etc.), virtualized or non-virtual, individual or
grouped, and whether EOL or obsolete
 Overall server utilization, performance, etc by component (CPU, memory, etc)
 Server incidents, availability, customer impacts by time period, trended with root cause
and chronic or repeat issues areas identified
 Server delivery time, server upgrade cycle time
 Server cost overall and by type of server, by cost area (admin, maintenance, HW, etc) and
cost by vendor
 Server backup attempt and completion, server failover in place, etc

Verification metrics:

 Monthly sample of the configuration management database server records for accuracy
and completeness; ongoing scan of the network for servers not in the configuration
management database; regular reporting of all obsolete server configs with callouts on
those exceeding planned service or refresh dates
 Customer transaction times; regular (every six months) capacity planning and
performance reviews of critical business service stacks, including servers
 Root cause review of all significant customer-impacting events; ratio of auto-detected
server issues versus manual or user detection
 DR tests; server privileged access and log reviews; regular monthly server recovery or
failover tests (for a sample)

Performance metrics:
 Level of standardization or virtualization, level of currency/obsolescence
 Level of customer-impacting availability, customer satisfaction with performance, amount of
headroom to handle business growth
 Administrators per server, cost per server, cost per business transaction
 Server delivery time, man-hours required to deliver a server

Obviously, if you are just setting out, you will collect only some of these metrics at first. As
you incorporate their collection and automate the associated work and reporting, you can
then tackle the additional metrics. And you will vary them according to the importance of
different elements in your shop. If cost is critical, then reporting on cost and efficiency plays
such as virtualization will naturally be more important. If time to market or availability is
critical, then those elements should receive greater focus.
Below is a diagram that reflects the construct of the three types of metrics and their
relationship to the different metrics areas and scorecards:
So, in addition to the metrics framework, what else is required to be successful leveraging
the metrics?

 First and foremost, the culture of your team must be open to alternate views and
support healthy debate. Otherwise, no amount of data (metrics) or facts will enable the
team to change directions from the party line. If you and your management team do not
lead regular, fact-based discussions where course can be altered and different alternatives
considered based on the facts and the results, you likely do not have the openness needed
for this approach to be successful. Consider leading by example here and emphasize fact
based discussions and decisions.
 Also, you must have defined processes that are generally adhered to. If your group's work is
heavily ad hoc and different each time, measuring what happened the last time will not
yield any benefits. If this is the case, you need to first focus on defining, even at a high
level, the major IT processes and help your teams adopt them. Then you can
proceed to metrics and the benefits they will accrue.
 Accountability, sponsorship and the willingness to invest in the improvement
activities are also key factors in the speed and scope of the improvements that can
occur. As a leader you need to maintain a personal engagement in the metrics reviews
and scorecard results. They should feed into your team's goals, and you should monitor the
progress in key areas. Your sponsorship, and senior business sponsorship where
appropriate, will be major accelerators to progress. And hold teams accountable for their
results and improvements within their domain.
Types of Indicators and Metrics

The need for metrics and indicators is underlined by many organizations and frameworks, such
as the Information Technology Infrastructure Library (ITIL), ISACA (COBIT 5) and ISO. ITIL
defines three types of metrics:

1) technology metrics,
2) process metrics,
3) service metrics.

Note that technology and process metrics are also referred to as operational metrics.

1) Technology Metrics
Technology metrics measure specific aspects of the IT infrastructure and equipment:

 central processing unit (CPU) utilization of servers,
 storage space utilized, network status (e.g., speed, bandwidth utilization),
 average uptime (availability of technology).

Most technology metrics provide inputs on IT utilization, which is a very small part of service, to
the chief information officer (CIO) or data center manager; however, unless a technology metric
is compared with an expectation or another metric, it may not provide meaningful information
for top management. For example, consider a network response of 100 milliseconds (i.e., a
message reaches its destination in 100 milliseconds). If management expects network response to
be 10 milliseconds, the response time requires attention; if management expects network response
to be 300 milliseconds, the response time is more than satisfactory.
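The point that a technology metric is only meaningful relative to an expectation can be sketched in code. The helper below and its thresholds are illustrative, not part of ITIL or COBIT:

```python
# Hypothetical sketch: classify a measured technology metric against
# management's expectation, as in the network-response example above.

def assess_metric(measured: float, target: float, lower_is_better: bool = True) -> str:
    """Return a verdict on a measurement relative to its target."""
    meets = measured <= target if lower_is_better else measured >= target
    return "satisfactory" if meets else "requires attention"

# The same 100 ms network response against two different expectations:
print(assess_metric(100, 10))   # prints "requires attention" (10 ms expected)
print(assess_metric(100, 300))  # prints "satisfactory" (300 ms expected)
```

The same raw number yields opposite conclusions depending on the expectation, which is why the metric alone is not meaningful to top management.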

2) Process Metrics
Process metrics measure specific aspects of a process:

 number of changes that are rolled back within a month,
 average incident response time in a month,
 percentage of employees who attended to tasks on time,
 average time to complete a process.

Process metrics provide information about the functioning of processes. These metrics are
generally used to monitor compliance related to internal controls. However, too many process
metrics may not serve the purpose of monitoring. Metrics that are related to critical processes
may be considered for management reporting.

3) Service Metrics
The primary focus of ITIL is on providing service. Service metrics are essential metrics for
management to monitor. They provide an end-to-end measurement of service performance.

Examples of service-level metrics include the following:

 Results of a customer satisfaction survey indicating how much IT contributes to customer satisfaction
 Cost of executing a transaction (banks use this metric to measure the cost of a transaction that is
carried out via different service channels, such as Internet, mobile, ATM and branch)
 Efficiency of service, which is based on the average time to complete a specific service. A service is
not just a process; a service can consist of multiple processes.

Enterprise Goals and Sample Metrics


COBIT 5 identifies 17 generic enterprise goals that are based on dimensions of a balanced
scorecard (BSC).

The COBIT 5 process reference model identifies 37 IT-related generic processes. COBIT 5
suggests using the following metrics:

 Number of customer service disruptions due to IT service-related incidents (reliability)


 Percent of business stakeholders satisfied that customer service delivery meets agreed-on
levels
 Number of customer complaints
 Trend of customer satisfaction survey results

Depending on the customer services the organization offers using IT solutions, the following
metrics (a subset of the metrics defined previously) may be considered:

 Impact on customer satisfaction due to service disruptions because of IT-related incidents


 Percent of business stakeholders satisfied that customer service delivery meets agreed-on
levels
 Reduction or increase in number of customer complaints related to nonavailability of IT-
based services

There are two IT-related goals that primarily map to the enterprise goal of a customer-oriented
service culture.14 They are IT-related goal 01, Alignment of IT and business strategy, and goal
07, Delivery of IT services in line with business requirements. The metrics suggested for IT-related
goal 07 from COBIT 5 are (for simplicity, only those IT goals that primarily map to the enterprise
goal in the example have been considered):

 Number of business disruptions due to IT service incidents


 Percent of business stakeholders satisfied that IT service delivery meets agreed-on
service levels
 Percent of users satisfied with the quality of IT service delivery

Based on business requirements, the following metrics may be considered:

 Number of IT incidents affecting business service


 Percent of IT incidents affecting business service to total IT incidents
 Number of customer complaints related to service delivery due to issues related to IT
Process-Specific Measures
• Cost per help desk ticket
• Cost per terabyte of network storage
• Network storage refresh year (Y/N)
• Cost per desktop end user
• Desktop services refresh year (Y/N)
• Cost per e-mail inbox
• E-mail refresh year (Y/N)

‘Quality of Service’ Measures

Operational Quality
• %, Help desk tickets escalated above Tier 1
• Help desk first contact resolution rate
• Help desk abandonment rate
• Help desk response time (speed) to answer
• %, Network storage uptime
• Total number of desktop configurations
• %, E-mail uptime
• E-mail inbox, maximum storage

Table 3: Illustrative Metrics to Assess Operations Management

Servers/Network
• %, Server uptime
• %, End-to-end server performance
• Root cause of outage category (e.g., power lines down, human accidentally unplugged server, glitch, etc.)
• System load
• Disk utilization
• Disk utilization by application priority
• Memory utilization
• Disk I/O statistics
• CPU utilization
• Scheduled availability
• Network performance
• E-mail performance
• Core systems performance
• Website availability
• Mean time to resolve incident
• Average number of affected users by type of incident
• Maximum number of affected users by type
• Average system login time
• Average response time
• Average response time (peak hours)
• %, Peak infrastructure utilization

Applications
• Application incidents
• Application incidents (by application priority)
• Application incidents (by incident severity)
• Mean time to resolve
• Mean time to resolve (by incident severity)
• Mean time to resolve (by type)
• Average number of affected users
• Average number of affected users (by incident severity)
• Average number of affected users for a single incident
• Application incident resolution index
• %, Downtime
• %, Downtime (by application priority)
• Maximum peak, downtime
• Average application response time
• Average number of users per day

Creating a Balanced Portfolio of Information Technology Metrics
www.businessofgovernment.org

Table 3: Illustrative Metrics to Assess Operations Management (continued)

Personnel
• Employees on staff
• Employee salaries
• Retirees and new hires
• Personnel leaving the department for voluntary reasons
• Personnel leaving the department for involuntary reasons
• Employee sick days
• Ratio of management to staff
• Average duration of positions open
• Turnover by position
• Open positions
• Average duration of open positions
• Total contractors
• Average length of contractor assignment
• Average staff cost
• Average contractor cost
• Training and professional development courses completed by employees
• Number of employees pursuing advanced degrees (e.g., MBAs, graduate courses in data analytics, security, etc.)
• Number of employees holding professional certifications such as CISA (Certified Information Systems Auditor)

Continuous Improvement
• Cost efficiency improvements
• Cost savings from efficiency improvements
• %, Costs for value-added services
• Cost by business initiative
• Anticipating future benefits from technical investments

Security
• Availability of assigned systems
• Security systems health score
• User password changes
• % of systems with known high vulnerabilities
• % of systems without known high vulnerabilities
• Mean time to patch systems
• Number of security incidents
• Number of discovered malware types
• Number of malware agents remediated
• Number of compromised clients
• Number of risk assessments
• Number of connected devices (clients, servers, network, other)

End User Customer Satisfaction Metrics
• Adoption rates of software applications
• Usability of software applications
• IT help desk service experience
• IT help desk responsiveness
• Availability of training and development programs
• Experience with training and development programs
• Software application usage rates
• Software application obsolescence

Table 4: Illustrative Metrics to Assess IT Innovation

Personnel
• Budget allocated to training and development, especially on new technologies, programs, and practices dealing with IT management and governance
• Number of times senior IT leaders are invited to participate in strategic projects of the agency
• Number of projects where IT is playing a leadership role
• Number of ideas submitted by employees (over 30, 60, 90 days)
• Amount of knowledge increased
• R&D budgeted project funding per employee
• Number of experimental projects underway with emerging technologies
• Number of successful new business process re-engineering projects completed

Projects
• Planned value
• Earned value
• Actual cost, to date
• Project success rate
• Project change success rate
• % Late
• % Over budget
• Total scope changes
• Average scope changes per project

Budget
• Amount of budget spent on new IT projects
• Amount of budget spent on prototyping and experimenting with emerging technologies
• % of IT budget spent on innovation when compared to overall % of agency budget spent on innovation

Stakeholders
• Number of awards received from associations, magazines, forums, etc.
• % of IT workforce on strategic agency projects
• % of CIOs’ and key functional managers’ time spent on charting the future (strategic innovation) rather than on day-to-day operations
• Membership on advisory boards
• Number and quality of innovative strategic engagements with academia, NGOs, and the private sector

Core Data Center Metrics and Targets

Power usage effectiveness (Energy): The amount of total power consumed at a facility divided
by the total amount of IT power consumed. Target (to be achieved by the end of FY 2015):
1.5 or lower.

Cost per operating system per hour (Cost per operating system): The total costs of a data center
divided by the number of operating systems, figured for an hourly cost basis. Target: Not yet
established.

Full-time equivalent ratio (Labor): The total number of servers divided by the total number of
data center personnel (government and contract employees). Target: At least 25 servers per
full-time equivalent.

Facility utilization (Facility): The total number of server racks multiplied by 30 square feet and
then divided by the total square feet reported in the data center. Target: At least 80 percent.

Storage utilization (Storage): The total storage used divided by the total storage available.
Target: 75 percent for in-house storage utilization and/or 80 percent for cloud
computing/outsourced facilities.

Core to non-core physical server ratio (Facility): The number of physical servers in core data
centers vs. the number of physical servers in non-core data centers. Target: At least 65 percent.

Core to non-core operating system ratio (Virtualization): The number of operating systems in
core data centers vs. the number of operating systems in non-core data centers. Target: At least
65 percent.

Virtualized operating systems (Virtualization): The number of virtualized operating systems
divided by the total number of operating systems. Target: 75 percent of operating systems
virtualized.

Virtualization density (Virtualization): The number of virtual operating systems per virtual host.
Target: 10 operating systems per virtual host.

Virtual hosts (Virtualization): The number of virtualized hosts divided by the total number of
servers. Target: At least 20 percent.

Virtualization optimization percent (Virtualization): Average of the preceding three metrics
(virtualized operating systems, virtualization density, and virtual hosts). Target: Not
applicable (average of the three metrics above).
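Two of the formulas above can be sketched directly. The input figures below are invented, and averaging the three virtualization metrics follows the table's own definition of virtualization optimization:

```python
# Illustrative sketch of the power-usage-effectiveness and virtualization-
# optimization formulas from the table above; all inputs are made up.

def power_usage_effectiveness(total_facility_kw: float, it_kw: float) -> float:
    """PUE: total power consumed at a facility divided by IT power consumed."""
    return total_facility_kw / it_kw

def virtualization_optimization(virtualized_os_pct: float,
                                density: float,
                                virtual_hosts_pct: float) -> float:
    """Average of the three preceding virtualization metrics, per the table."""
    return (virtualized_os_pct + density + virtual_hosts_pct) / 3

pue = power_usage_effectiveness(total_facility_kw=600, it_kw=400)
print(f"PUE {pue:.2f} -> {'meets' if pue <= 1.5 else 'misses'} the 1.5 target")
```

A PUE of exactly 1.5 (600 kW facility power over 400 kW IT power) just meets the FY 2015 target.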

===================================================================

Metrics can play an important role in achieving excellence as they force the organization to pay
attention to their performance and prompt management to make adjustments when goals are not
being achieved.

Operational metrics
Online application performance. The average time it takes to render a screen or page. It is also
important to measure the variability of performance (discussed further in the supplemental
operational metrics section).
Online application availability. The percentage of time the application is functioning properly.
This can be difficult to define. If the application is available for some users but not all, is it
"available?" What if most functions are working but a minor function is not? To address this
problem, I like to define the primary functions an application performs. Then, if any of these
functions are unavailable, the application is considered down even if most of the application is
usable. Also, if the application is primarily used during business hours, I like to have separate
metrics for that time versus other times. So, the metrics might be: primary functions during
business hours; all functions during business hours; primary functions 24x7; and all functions
24x7.
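The availability definition above (the application is "down" whenever any primary function is unavailable, with business hours tracked separately) might be computed along these lines; the function names and the outage log are hypothetical:

```python
# Sketch of the availability metric described above; sample data is invented.

PRIMARY = {"login", "checkout"}  # the application's primary functions

# (function, outage minutes, occurred during business hours?)
outages = [
    ("login", 30, True),       # primary function down during business hours
    ("reporting", 120, True),  # minor function down: app still counts as up
    ("checkout", 15, False),   # primary function down off-hours
]

def availability(outages, minutes_in_period, business_hours_only):
    """Percent of the period during which all primary functions were up."""
    down = sum(mins for fn, mins, bh in outages
               if fn in PRIMARY and (bh or not business_hours_only))
    return 100 * (1 - down / minutes_in_period)

# Primary functions during business hours (22 workdays x 10 hours):
print(f"{availability(outages, 22 * 600, True):.2f}%")
```

Swapping `PRIMARY` for the set of all functions and toggling the business-hours flag yields the four variants listed above.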

Batch SLAs met. The percentage of key batch jobs that finish on time.

Production incidents. The number of production problems by severity.

Supplemental operational metrics. Other metrics that might be used to enhance operational
effectiveness include the number of unscheduled changes to the production systems, the
throughput of batch processes, complexity scores for major applications (indicating how difficult
they are to maintain), architectural integrity (the percent of applications on preferred
technologies, another indication of how difficult applications are to maintain) and the variability
of online application performance.

This last item requires a quick but important note. Business users can get quite frustrated when
online production issues are simply defined as "we are experiencing slowdowns." One technique
to solve this problem is to set a target for each screen or page in the application with the target
defined as the time 90 percent of the screen or page occurrences will render. Then, actual
performance can be compared to this goal and the percentage of time the goal is hit provides a
good indicator of the customer service level (CSL).

With this technique, both the business and technology know how the application is doing when
the CSL number is reported. If an application's CSL is 90 percent, the application is running
exactly as expected whereas a CSL of 85 percent or 50 percent describe different degrees of not
achieving expected results.
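A minimal sketch of the CSL calculation, assuming per-page render-time targets and a log of observed render times (both invented here):

```python
# Sketch of the customer service level (CSL) technique: each page has a target
# it should meet 90 percent of the time; the CSL is the percentage of actual
# renders that hit their page's target. All data below is hypothetical.

targets_ms = {"search": 800, "detail": 1200}  # per-page render-time targets

renders = [  # (page, observed render time in ms)
    ("search", 650), ("search", 720), ("search", 910), ("search", 790),
    ("detail", 1100), ("detail", 1400), ("detail", 950), ("detail", 1180),
]

def customer_service_level(renders, targets_ms):
    """Percentage of renders that met their page's target time."""
    hits = sum(1 for page, ms in renders if ms <= targets_ms[page])
    return 100 * hits / len(renders)

csl = customer_service_level(renders, targets_ms)
print(f"CSL: {csl:.0f}%")  # 90% means the app is running exactly as expected
```

Because the targets are set at the 90th percentile, a CSL of 90 percent is "on plan," and lower values quantify how far performance has slipped.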

Delivery metrics
Project satisfaction. The average score from post project surveys completed by business
partners. After each project, it is important to solicit feedback from the business. The survey
should contain one summary question for the project satisfaction metric (e.g., what is your
overall satisfaction with this project on a scale of one to five?), a few more specific questions and
an area for written comments. The survey should also be completed by the technology group to
gain further insights on the areas that could be improved moving forward, but these scores are
not included in the metric as they tend to be biased on the high side.

Project delivery. The percentage of projects delivered on time. "On time" is another tricky
concept. For projects using the waterfall methodology, the projected delivery date can vary
greatly once the team engages in the design process. I have found it useful to make sure business
partners know that the delivery date is not set until design is done and therefore, this metric uses
that date for a target. For Agile projects, this metric is not relevant as the delivery date is almost
always met by adjusting scope.

Project cost. The percentage of projects delivered within the cost estimate. For this metric, I also
use the post design cost estimate for the same reasons noted in the previous section. Again, Agile
projects are less likely to benefit from this metric.

Defect containment. The percentage of defects contained to the test environments. It is well
known that defects are much more expensive to fix in production. This metric counts the defects
corrected during the development process and compares this count to any defects found in the
first 30 days of production. While 30 days may seem like a short period, I have tried using the
first 90 days of production for this, but the wait to determine the metric was more problematic
than the additional information provided by the longer timeframe.
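One common reading of the defect-containment metric is the share of all defects caught before production; the counts below are made up:

```python
# Sketch of the defect-containment metric: defects corrected during development
# versus defects found in the first 30 days of production. Counts are invented,
# and the ratio is one plausible interpretation of the description above.

def defect_containment(found_in_test: int, found_in_prod_30d: int) -> float:
    """Percentage of all defects that were contained to the test environments."""
    total = found_in_test + found_in_prod_30d
    return 100 * found_in_test / total if total else 100.0

print(f"{defect_containment(47, 3):.1f}% contained")  # 47 of 50 caught in test
```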

Supplemental delivery metrics. Additional metrics that might be included in this area: how
well interim deliverables, such as the completion of design, are hit on time; how well first
estimates compare to the final project cost; how many changes are made during the freeze
between project completion and the production install; and how many projects require an
unscheduled change after installation.

Organizational metrics
Attrition. The percentage of employees who move to other jobs. For this metric, it is important
to include only voluntary separations, as you do not want to give managers an incentive
to retain poor performers. It is also important to differentiate between employees who leave the
company and those who leave to take another position within the company.

Performance reviews. The percentage of employees with current written reviews. Providing
employees with constructive feedback is one of the most important steps an organization can
take to improve productivity. Unfortunately, in many organizations, managers and employees
dread this process and it is often neglected. The problem is often the enforcement of a grading
system, which becomes the focus rather than the specific feedback. If you can do it, skip the
grade and have the manager focus on what needs to happen for the employee to get to the next
level — a discussion everyone should find useful.
Supplemental organizational metrics. There are many other metrics that can be useful in
creating an engaged workforce. Examples include making sure employees have written
performance expectations and goals at the start of the year, tracking the amount of training
provided to employees (e.g., setting targets just like CPAs and other professionals mandate) and
highlighting the number of employees in formal mentoring relationships.

Financial metrics
Budget variance. Actual costs compared to budgeted costs. This should be done for both direct
expenses (salaries) and inter-company expenses (allocations from other areas) since direct
expenses are more controllable.

Resource cost. The average cost of a technology resource. This metric provides a good view of
how well managers are controlling costs by using cheaper outsourcing labor, being thoughtful in
the use of higher priced temporary labor and managing an organization that is not top heavy with
expensive employees (discussed in more detail in the Supplemental Financial Metrics section).
Some organizations set targets for outsourcing (e.g., 30 percent of the workforce), but I think the
overall resource cost metric is much more powerful. If managers believe they can be more
productive and keep costs down using a variety of techniques, why not let them rather than focus
on a single strategy?

Supplemental financial metrics. There are several other metrics that can be useful for
organizations. Simply keeping a running total of the dollars saved from cost initiatives (e.g.,
moving to cheaper technologies) can help keep the focus on these projects. Tracking costs by
activity (e.g., development versus maintenance versus running the systems versus other costs)
can highlight areas for improvement.

Finally, as alluded to above, many organizations have a tendency to become top heavy over time
so it is useful to track this in a metric. For example, if an organization has eight levels starting
with new college graduates and going to vice presidents, a simple metric can be created by
assigning a number to each employee (e.g., new college graduates = 1, VPs = 8), adding up the
numbers and dividing the sum by the number of employees to determine the average level in the
group.
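The average-level calculation described above can be sketched as follows; the level assignments and head counts are invented:

```python
# Sketch of the top-heaviness metric: assign each employee a level number
# (e.g., new college graduate = 1, VP = 8) and report the average level.

LEVELS = {"new graduate": 1, "senior engineer": 4, "manager": 6, "vp": 8}

headcount = {"new graduate": 10, "senior engineer": 12, "manager": 5, "vp": 1}

def average_level(headcount, levels):
    """Sum each employee's level number and divide by total employees."""
    total_points = sum(levels[title] * n for title, n in headcount.items())
    total_people = sum(headcount.values())
    return total_points / total_people

print(f"average level: {average_level(headcount, LEVELS):.2f}")
```

Tracking this number over time shows whether the organization is drifting toward the top of its level structure.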

Critical metrics for IT success: Summary


It might be useful to provide an example of how metrics can help organizations be more
proactive in identifying and solving issues. The online performance of an application I once
supported jumped from an average of 1.2 seconds per screen render to 1.6 seconds in a single
month. There were no complaints from users and our team probably wouldn’t have noticed this
degradation without our metrics.

After some investigation, we found that a central support group had installed new security
software that was interacting with our application in an inefficient manner. We worked with that
group and were able to get our performance back down to 1.2 seconds by using the new software
in a different way. While a 0.4 second increase in response time is not the end of the world, this
could have become a problem if a few different issues happened over time and were not caught.

Successful IT organizations solve issues like this before they become problems, while less
successful organizations are caught off guard when the business complains about a degradation
in their technology. Metrics are key to making sure your organization proactively addresses
symptoms before they become real problems, rather than reacting once there is a crisis.
This article is published as part of the IDG Contributor Network

IT Metrics to Assess Operations Management

Operations management metrics give CIOs and other IT managers situational awareness of their
department's performance, resources, personnel, and strategic activities. These metrics should:

• Provide a gauge for the health of the overall IT department

• Be able to track how each ingredient within the IT department is advancing the goals of the unit

• Signal areas in need of intervention to address current challenges and be ready to take advantage of future opportunities

All CIOs interviewed noted that they have a very good inventory of metrics when it comes to
sensing and measuring the performance of their technical infrastructure. As one CIO told us:

We know every detail of our technical system, up to the very
minute how a given system is performing… I wish we had the
same level [of] detail on our non-tech assets.

At the operational level, the focus of operations management metrics is to ensure
that resources and assets are being deployed optimally. For instance, the
Arkansas Department of Information Systems (DIS) developed metrics for its 28
categories of services. One, in particular, was for its telephone data networking
services. Through operations metrics that measured how much the agency pays
for long-distance services and how much it billed for services, it found that it was
underbilling for services provided. The DIS found that there were call minutes
unaccounted for because its system did not have the appropriate customer
identifier in the system to assign the charges.23 Without the operations metrics,
underbilling probably would have continued.

For operations to improve, metrics such as those related to continuous improvement are
paramount. Continuous improvement places attention on cost, quality, and the delivery of
projects and services. For instance, cost efficiency improvement metrics measure savings or
improvements to projects where cost savings are measured against the cost between baseline
options and alternative options. This allows managers to see if their efforts are really paying off,
are not making a difference, or are counterproductive. Organizations can often
implement cost-savings efforts by reducing human capital or lowering the amount of maintenance
or services offered. For example, an input in many organizations is the offering of training
and professional development. In lean times, these offerings might be seen as an employee
perk and could be done away with. However, these efforts could be counterproductive, as they
likely will end up costing the organization more money, because increasing skill and knowledge
levels within the organization is essential to meeting new demands. The cost efficiency
improvement metric will help to spot this trend.

At a tactical and more strategic level, these metrics are used to uncover why key ingredients
(e.g., infrastructure, personnel, etc.) are either not performing up to par, or what might be
needed to increase their functionality and capabilities. Consider personnel metrics: These
metrics can be as basic as employee salary and number of employees on staff. Other, more
complex metrics can capture a number of bottom-line impacts within the organization. For instance,
metrics such as employee absenteeism can measure absence rate, unscheduled absence rate,
overtime expense, employee performance/productivity index, and overtime as a percentage of
labor costs. These metrics reveal impacts on the organization related to the cost of
replacement workers, decreased employee morale, and overtime pay due to absence.
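As an illustration, a basic absence-rate metric of the kind mentioned above might be computed like this (the figures and the 21-workday month are assumptions):

```python
# Sketch of an absence-rate calculation; all inputs are hypothetical.

def absence_rate(absence_days: float, employees: int, workdays: int) -> float:
    """Absence days as a percentage of total scheduled person-days."""
    return 100 * absence_days / (employees * workdays)

# 42 absence days across 100 employees in a 21-workday month:
print(f"{absence_rate(absence_days=42, employees=100, workdays=21):.1f}%")
```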

23. Waxer, C. How governments are developing better performance metrics. (July 27, 2011). e.Republic, Folsom, CA.
http://www.govtech.com/policy-management/How-Governments-Are-Developing-Better-Performance-Metrics.html.

IBM Center for The Business of Government

Table 3: Illustrative Metrics to Assess Operations Management

Overall Department Performance
• Total budget allocations per year
• % of budget on strategic priorities
• % of total projects exceeding planned value
• % of total projects over budget
• % of total projects on or below budget
• Number of awards/accolades received by department
• Cost performance index
• % of hours by business priority
• % of hours by business initiative
• Cost by business initiative



Personnel

• Employees on staff

• Employee salaries

• Retirees and new hires

• Personnel leaving the department for voluntary reasons

• Personnel leaving the department for involuntary reasons

• Employee sick days

• Ratio of management to staff

• Average duration of positions open

• Turnover by position

• Open positions

• Average duration of open positions

• Total contractors

• Average length of contractor assignment

• Average staff cost

• Average contractor cost

• Training and professional development courses completed by employees

• Number of employees pursuing advanced degrees (e.g., MBAs, graduate courses in data analytics, security, etc.)

• Number of employees holding professional certifications such as CISA (Certified Information Systems Auditor)

Continuous Improvement

• Cost efficiency improvements

• Cost savings from efficiency improvements

• %, Costs for value-added services

• Cost by business initiative

• Anticipating future benefits from technical investments

Security

• Availability of assigned systems

• Security systems health score

• User password changes

• % of systems with known high vulnerabilities

• % of systems without known high vulnerabilities

• Mean time to patch systems

• Number of security incidents

• Number of discovered malware types

• Number of malware agents remediated

• Number of compromised clients

• Number of risk assessments

• Number of connected devices (clients, servers, network, other)

End User Customer Satisfaction Metrics

• Adoption rates of software applications

• Usability of software applications

• IT help desk service experience

• IT help desk responsiveness

• Availability of training and development programs

• Experience with training and development programs

• Software application usage rates

• Software application obsolescence

As noted in Challenge One (page 15), IT performance on operations is not exclusively within the department’s control. IT is an enabler for many organizational functions, but only when paired with the proper managerial and oversight processes. Everyone must share in the development of goals and not think of themselves as separate entities. This type of alignment requires leaders to be engaged in shared governance.


As discussed in Finding Three (page 18), CIOs acknowledged that linking the performance of their IT unit to the overall goals of the organization is no easy feat. For one, IT is just one of the components within an overall program and only rarely is in the driver’s seat. Yet, when things go wrong with the overall project, the IT department often takes a large share of the blame. Leading CIOs are not giving up on this challenge, and are working to “prove their salt.” Consider Results Minneapolis, a public online dashboard linking 34 pages’ worth of IT departmental performance measures to larger city values. One of the dashboard headings highlights the goal of having IT services and operations customer-focused and well managed. Metrics—like the number of IT projects in progress, the number of IT projects on budget, and expenditure per full-time IT employee compared to other departmental employees—are visualized as evidence of how well the IT department is performing. The IT department goals strive to push larger city goals forward and establish IT’s worth in concrete terms. Other CIOs are looking at how to track the contribution of the IT department to projects that are transforming their agency, or how IT is fundamental to building new programs or implementing new policy options. CIOs are comparing the percentages of their budgets and resources that are allocated for these efforts versus those that are allocated for standard IT maintenance and basic computing resource provisioning.
Specifically, the analytics are grouped into the following two categories:
 Basic operational analytics: calculates KPIs (key performance indicators) and
their trends directly from the ticket data. The analytics performed include:
1. ticket volume
2. incident resolution time (i.e., the time spent solving a ticket)
3. trends over time by any ticket group
4. volume share and workload of any ticket group or individual resource
5. SLA (service level agreement) performance
6. ticket arrival distribution by time of day or day of the week
7. ticket resolution performance in terms of the trends in arrivals, completions, and backlogs.
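A minimal sketch of the basic KPIs above, assuming each ticket record carries open/close timestamps and an SLA target in hours; the field names and sample values are illustrative, not taken from any particular ticketing tool:

```python
from datetime import datetime
from collections import Counter

# Illustrative ticket records; real data would be exported from the ITSM tool.
tickets = [
    {"opened": datetime(2024, 3, 4, 9, 0),  "closed": datetime(2024, 3, 4, 13, 0), "sla_hours": 8},
    {"opened": datetime(2024, 3, 5, 10, 0), "closed": datetime(2024, 3, 6, 10, 0), "sla_hours": 8},
    {"opened": datetime(2024, 3, 6, 14, 0), "closed": datetime(2024, 3, 6, 16, 0), "sla_hours": 4},
]

def resolution_hours(ticket):
    """Incident resolution time: elapsed hours from open to close."""
    return (ticket["closed"] - ticket["opened"]).total_seconds() / 3600

def mean_resolution_time(tickets):
    """Mean resolution time across all tickets, in hours."""
    return sum(resolution_hours(t) for t in tickets) / len(tickets)

def sla_attainment(tickets):
    """Share of tickets resolved within their SLA target."""
    met = sum(1 for t in tickets if resolution_hours(t) <= t["sla_hours"])
    return met / len(tickets)

def arrivals_by_weekday(tickets):
    """Ticket arrival distribution by day of the week."""
    return Counter(t["opened"].strftime("%A") for t in tickets)

print(mean_resolution_time(tickets))   # → 10.0
print(sla_attainment(tickets))
print(arrivals_by_weekday(tickets))
```

Trend reporting then reduces to grouping these same computations by month, ticket category, or resource group.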
 Advanced operational analytics: dives deeper into ticket data and applies
techniques such as clustering, modeling, and simulation to further derive business
intelligence and identify potential saving opportunities. The analytics performed
include:
1. ticket resolution effort estimation
2. resource load analysis
3. ticket volume forecasting
4. resource utilization measurement
5. resource sharing and pooling
6. staffing analysis
7. group right-sizing
8. cross-skilling recommendations.
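As a rough illustration of two of the techniques above (ticket volume forecasting and staffing analysis), the sketch below uses a simple moving average and a naive workload calculation; the monthly volumes, effort per ticket, and productive hours per FTE are invented for the example, and real implementations would use richer models:

```python
import math

# Hypothetical monthly ticket volumes for the last six months.
monthly_volumes = [120, 135, 128, 150, 160, 155]

def moving_average_forecast(volumes, window=3):
    """Forecast next month's ticket volume as the mean of the last `window` months."""
    recent = volumes[-window:]
    return sum(recent) / len(recent)

def staffing_estimate(forecast_volume, effort_hours_per_ticket, hours_per_fte):
    """Naive staffing analysis: FTEs needed to absorb the forecast workload."""
    return math.ceil(forecast_volume * effort_hours_per_ticket / hours_per_fte)

next_month = moving_average_forecast(monthly_volumes)   # (150+160+155)/3 = 155.0
fte_needed = staffing_estimate(next_month, effort_hours_per_ticket=2.5, hours_per_fte=140)
print(next_month, fte_needed)
```

The same forecast feeds resource-sharing and right-sizing questions: comparing `fte_needed` against current headcount per group shows where pooling or cross-skilling could absorb peaks.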
