The available dashboards for the Incident Management Analytics are as follows:
You can use the following time slice tabs to view the Opened Incidents details in every section
of this dashboard:
Days - displays the Opened Incidents trend based on data from the past 14 days.
Weekly - displays the Opened Incidents trend based on data from the past 13 weeks. The 13
weeks period includes the current week and the past 12 weeks.
Monthly - displays the Opened Incidents trend based on data from the past 13 months. The 13
months period includes the current month and the past 12 months.
You can view the following sections in the Opened Incidents Trend dashboard:
View by Status
You can choose one of the following tabs to view the related Opened Incident details:
Reopened: to view the Reopened Incidents trend compared to the Opened Incidents volume.
Escalated: to view the Escalated Incidents trend compared to the Opened Incidents volume.
Incident Volume Trend Graph – A trend graph of the reopened or escalated incidents compared to
the volume of the opened incidents. In the graph, the horizontal axis represents the time slice
and the vertical axis represents the volume of the opened and reopened incidents.
Note: The horizontal axis contributors are based on your choice of the time slice tabs in the
Opened Incidents Trend dashboard.
Metrics Used
Opened Incidents – The total number of incidents raised (opened) in a time frame.
MTTR – The elapsed time between the Incident creation and resolution. The elapsed time is measured as the total time open.
Opened Incidents Escalated – The number of open Incidents with the Incident status Escalated. This is a configurable metric. Based on your business requirements, the Numerify Administrator can configure the initial setup to tailor the metric suitably.
You can also use the Severity selector to view volume of the opened incidents and reopened
incidents based on their severity.
You can further restrict the report to show only the following:
Top 10 or All - For example, you can select Assignment Group to view the opened incidents
breakdown by assignment group, and click All to view all the assignment groups rather than
just the top 10.
Reopened or Escalated - For example, you can click Reopened to view the breakdown
of reopened incidents in the chosen dimension.
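The Top 10 or All restriction amounts to a simple counting step. A minimal sketch follows; the records and group names are hypothetical, not your actual ServiceNow data:

```python
from collections import Counter

# Hypothetical opened-incident records; the real report reads these from the warehouse.
incidents = [{"assignment_group": g} for g in
             ["Network"] * 7 + ["Database"] * 5 + ["Desktop"] * 3 + ["Security"] * 1]

def breakdown(records, dimension, top=None):
    """Count incidents per value of a dimension; keep only the top-N when requested."""
    counts = Counter(r[dimension] for r in records)
    return dict(counts.most_common(top))  # most_common(None) returns every group

top_groups = breakdown(incidents, "assignment_group", top=2)  # the "Top 10" idea, with N=2
all_groups = breakdown(incidents, "assignment_group")         # the "All" tab
```

Selecting All simply removes the top-N cutoff while keeping the same per-group counts.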
The Opened Incidents Breakdown by Volume report displays the opened incident summary
based on their severity. You can filter the incident summary based on any one of the Severity
options (for example, Severity 1 - Critical, Severity 2 - High, Severity 3 - Moderate, Severity
4 - Normal, or UNKNOWN).
You can also use the Assignment Group, Category, and Subcategory options to filter the contents
of the Opened Incidents Breakdown by Volume report.
The Opened Incidents Breakdown by Volume report is displayed in the form of the following
two graphs:
You can also filter contents of the graph using the following two selectors:
Incident Volume Trend - Displays the reopened or escalated incident volume trend compared to
the opened incidents. The content of this graph is driven by your choice of dimension contributors
(Assignment Group, Category, or Sub Category) in the Opened Incident Summary by Dimension
graph. In the graph, the horizontal axis depicts the time period and the vertical axis depicts the
volume of the opened, reopened, and escalated incidents.
You can also filter contents of this graph using the following Status options:
o Reopened - To view trend of the reopened incidents compared to the opened incidents.
o Escalated - To view trend of the escalated incidents compared to the opened incidents.
Metrics Used
Opened Incidents – The total number of incidents raised (opened) in a time frame.
Closed Incidents – The total number of Incidents closed in a period of time. Closed Incidents mark the final status in the life cycle of an Incident.
Reopened Incidents – The sum of Incidents that were reopened after they were closed. This is a configurable metric. Based on your business requirements, the Numerify Administrator can configure the initial setup to tailor the metric suitably.
Opened Incidents Escalated – The number of open Incidents with the Incident status Escalated. This is a configurable metric. Based on your business requirements, the Numerify Administrator can configure the initial setup to tailor the metric suitably.
2) Opened Incidents Overview Dashboard:
The Opened Incidents Overview dashboard enables you to view the Opened Incidents volume
based on their Groups, Categories, Distribution, Status, and Severity.
You can use the following tabs to decide the level of detail displayed on the Opened Incidents
Overview dashboard:
Overview – To display the Opened Incidents volume based on Group, Category, and
Distribution.
Detail – To display the Opened Incidents volume based on Severity, Status, and Categorization.
You can view additional details such as Incident ID, Description, and so on for further analysis.
In addition to the Overview and Detail tabs, you have the following time period options:
Yesterday
Last 7 Days
Last 30 Days
These options are relative to the Data Refreshed date and time that is displayed in the upper right
corner of the dashboard as Data Refreshed at Month DD, YYYY, hh:mm:ss AM/PM.
Data is extracted from ServiceNow at least once per day. Typically, a data extract occurs during
a time period that is close to midnight in the main time zone of your company. Under normal
circumstances, Yesterday corresponds to your normal expectation, which is the day that
precedes the day you are viewing the dashboard.
As an example, assume you are looking at this dashboard on April 15, 2016 at 9:00am EST and
your company’s main time zone is EST.
If the Data Refreshed time is just prior to midnight between April 14 and April 15, Yesterday
includes data from April 14, 2016, 12:00:00 AM through the date and time displayed as the Data
Refreshed date; in other words, it will be somewhat less than a full 24 hours. Any data from
ServiceNow that is more recent than the Data Refreshed date will not be included in the
dashboard for Yesterday even if it is from April 14.
If the Data Refreshed time is midnight between April 14 and April 15 or later, Yesterday
includes data from April 14, 2016, 12:00:00 AM through April 14, 2016, 11:59:59 PM; in other
words, a full 24 hours' worth of data.
Under normal circumstances, Last 7 Days corresponds to Yesterday plus the 6 days preceding
Yesterday, and Last 30 Days corresponds to Yesterday plus the 29 days preceding Yesterday.
Although rare, the data extract may not successfully complete. In this case, the above definitions
will continue to be relative to the Data Refreshed date and time, which may not correspond to
your normal expectation of Yesterday and Last X Days.
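The window arithmetic described above can be sketched as follows. This is a minimal illustration; the `report_window` helper and the example dates are hypothetical, not part of the product:

```python
from datetime import date, datetime, time, timedelta

def report_window(viewing_date: date, data_refreshed: datetime, days: int = 1):
    """Time window for Yesterday (days=1), Last 7 Days (days=7), or Last 30 Days (days=30).

    The window ends at the earlier of the Data Refreshed time and the midnight
    that closes Yesterday, so Yesterday can cover slightly less than 24 hours
    when the extract finishes before midnight.
    """
    yesterday = viewing_date - timedelta(days=1)
    start = datetime.combine(yesterday - timedelta(days=days - 1), time.min)
    end = min(data_refreshed, datetime.combine(viewing_date, time.min))
    return start, end

# Viewing on April 15: an extract just before midnight cuts Yesterday short,
# while an extract after midnight yields the full 24 hours of April 14.
short_day = report_window(date(2016, 4, 15), datetime(2016, 4, 14, 23, 55))
full_day = report_window(date(2016, 4, 15), datetime(2016, 4, 15, 0, 30))
```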
Note: By default, the dashboard displays the Opened Incidents volume based on data from the
past 30 days.
The Overview tab
You can view the following dashboard sections using the Overview tab:
Note: The reference time slice for the percentage indicator depends on your choice of the time
frame in the Opened Incidents Overview dashboard.
Metrics Used
Opened Incidents – The total number of incidents raised (opened) in a time frame.
Percentage of Opened Incidents – The percentage of opened incidents in a given Category or Dimension as compared with the total number of open Incidents.
Opened Incidents by Group and Category: This section enables you to view the opened incidents
volume by their assignment group, category, and sub category.
You can use the Assignment Group tab to view volume of the opened incidents by each
assignment group, total percentage of opened incidents for the selected assignment group, and
percentage of opened Incidents that met SLA out of the total opened incidents for that group.
Assignment Groups are classified hierarchically using Levels. You can use the Level selectors to
filter and view the Opened Incidents by their Assignment Group levels. A higher number level
indicates a higher position in the hierarchy. Assignment Groups in lower levels are subsets of
groups in the higher levels. For example, if we consider two levels, Level 2 and Level 1, where
Level 2 is higher than Level 1, your choice of Assignment Group in Level 2 filters the
Assignment Groups displayed in Level 1. By default, you can choose All in the Level selectors
to display the incident summary corresponding to all the Assignment Groups.
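The level filtering can be illustrated with a small sketch. The two-level hierarchy and group names below are hypothetical examples, not your actual ServiceNow groups:

```python
# Hypothetical two-level hierarchy: Level 2 parent groups map to Level 1 child groups.
HIERARCHY = {
    "Infrastructure": ["Network", "Server Ops"],
    "Applications": ["CRM Support", "ERP Support"],
}

def level1_options(level2_choice="All"):
    """Choosing a Level 2 group filters the Level 1 selector; All keeps every group."""
    if level2_choice == "All":
        return sorted(g for children in HIERARCHY.values() for g in children)
    return sorted(HIERARCHY[level2_choice])
```

Picking a parent group in the higher-level selector narrows the lower-level selector to that parent's subset, exactly as described above.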
For more information on the contents in the Opened Incidents by Group and Category section,
see Viewing Opened Incidents by Group and Category.
View by Category/SubCategory
You can use the Category/SubCategory tab to view volume of opened incidents by each
category or sub category, total percentage of opened incidents for the selected category or sub
category, and percentage of opened incidents that met SLA out of the total opened incidents for
that category or sub category.
You can also use the following tabs to filter and view the opened incidents volume by category
or sub category:
Top 10 - To view the volume of the opened incidents by the top 10 category or sub category.
All - To view volume of the opened incidents by all the categories or sub categories.
You can use Assignment Group, Category, or SubCategory tabs to view the following details:
Volume of Opened Incidents – A horizontal bar graph displays the breakdown of the
Opened Incidents volume by one of the following dimensions: Assignment Group,
Category, or Sub Category. The horizontal axis represents the volume of the Opened
Incidents and the vertical axis represents the contributors of the chosen dimension. You
can choose any one of the contributors from this graph to view the following related
Opened Incident details:
o Percentage Met SLA – Displays the total percentage of the Opened Incidents volume
that met SLA in the dimension of your choice. For example, if we consider the Opened
Incidents count to be 5 and the count of Opened Incidents that met SLA to be 3, then the
Percentage Met SLA will be 60%.
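The worked example above reduces to a single ratio; the `percentage_met_sla` helper is illustrative, not a product API:

```python
def percentage_met_sla(met_count: int, opened_count: int) -> float:
    """Share of the opened incidents that met their SLA, as a percentage."""
    if opened_count == 0:
        return 0.0  # avoid dividing by zero when no incidents were opened
    return 100.0 * met_count / opened_count

# The example from the text: 3 of 5 opened incidents met SLA.
rate = percentage_met_sla(3, 5)
```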
Metrics Used
Opened Incidents – The total number of incidents raised (opened) in a time frame.
Opened Incidents Met SLA – The number of open Incidents that have met the SLA. This is a configurable metric. Based on your business requirements, the Numerify Administrator can configure the initial setup to tailor the metric suitably.
Percentage of Opened Incidents – The percentage of opened incidents in a given Category or Dimension as compared with the total number of open Incidents.
View by Contact Type
You can view distribution of the opened incidents volume by their contact type in a vertical bar
graph. In the graph, the horizontal axis represents the contact type contributors and the vertical
axis represents the opened incidents volume.
View by Location
You can view distribution of the opened incidents volume by their location in a vertical bar
graph. In the graph, the horizontal axis represents the location contributors and the vertical axis
represents the opened incidents volume.
View by Severity
You can view distribution of the opened incidents volume by their severity in a vertical bar
graph. In the graph, the horizontal axis represents the severity contributors and the vertical axis
represents the opened incidents volume.
Metrics Used
Opened Incidents – The total number of incidents raised (opened) in a time frame.
You can view the following dashboard sections using the Detail tab:
In the Incidents Breakdown section, you can view the Opened Incidents volume by their
severity, status, and categorization.
View by Severity
The By Severity graph in this section displays the total opened incidents volume, volume of the
incidents based on their severity level (Critical or High), and percentage indicator that indicates
the increase or decrease in the current volume of the incidents compared to the volume of
incidents from the previous day, week, or month.
View by Status
The By Status graph in this section displays opened incidents volume by their status. In the
graph, the horizontal axis represents status of the opened incidents and the vertical axis
represents the opened incidents volume.
View by Categorization
The By Categorization graph in this section displays the volume of the Reopened Incidents.
Metrics Used
Reopened Incidents – The sum of Incidents that were reopened after they were closed. This is a configurable metric. Based on your business requirements, the Numerify Administrator can configure the initial setup to tailor the metric suitably.
Incident Details:
In this section, you can view the opened incidents detail such as created date, case ID, and so on
in a tabular format.
You can use the Priority, Status, Assignment Group, and Categorization selectors to filter the
Incident details by the selected parameter.
Metrics Used
Opened Incidents – The total number of incidents raised (opened) in a time frame.
You can modify and view the contents of this dashboard using the following options:
Select Classification Level Here - To view the Incident Backlog details based on their
Classification Level.
Select Parent Group Here - To view the Incident Backlog details based on their Parent Group.
Level 4, Level 3, Level 2, and Level 1. The Assignment Groups are segregated into a hierarchy of
levels. Level 4 is the highest level and Level 1 is the lowest. The lower level Assignment Groups
are filtered based on your choice of the higher level Assignment Group. The filters will be
displayed only if there are values other than "Unknown" and "Unspecified".
The summary is displayed as a horizontal bar graph. In the graph, the horizontal axis represents
the incident backlog volume and the vertical axis represents the assignment group names.
Metrics Used
Incident Backlog – Total number of open incidents that are pending resolution as of a specific point in time, such as the end of the working day. In a monthly report, you can use the Beginning and Ending Backlog to obtain the Incident Backlog as of the first and last day of the month, respectively. By default, this metric includes all Incidents not yet resolved or closed as of the last Data Refresh.
Note: The metric cannot be summarized across time because it is a snapshot metric.
The Incident Backlog trend is displayed as a bar graph. In the graph, the horizontal axis
represents the past 12 months and the vertical axis represents the volume of the opened incidents
and closed incidents.
A trend line overlaid on the Opened and Closed Incidents represents the Incident Backlog
trend.
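The relationship between the three metrics can be sketched as a running total: each month's backlog is the previous level plus the incidents opened, minus the incidents closed. The monthly figures below are hypothetical:

```python
def backlog_trend(starting_backlog, opened_by_month, closed_by_month):
    """Running backlog level: each month adds its opened and subtracts its closed incidents."""
    trend, level = [], starting_backlog
    for opened, closed in zip(opened_by_month, closed_by_month):
        level += opened - closed
        trend.append(level)
    return trend

# Hypothetical three-month series: the backlog grows whenever opened outpaces closed.
trend = backlog_trend(100, [40, 35, 50], [30, 45, 50])
```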
Metrics Used
Opened Incidents – The total number of incidents raised (opened) in a time frame.
Closed Incidents – The total number of Incidents closed in a period of time. Closed Incidents mark the final status in the life cycle of an Incident.
Incident Backlog – Total number of open incidents that are pending resolution as of a specific point in time, such as the end of the working day. In a monthly report, you can use the Beginning and Ending Backlog to obtain the Incident Backlog as of the first and last day of the month, respectively. By default, this metric includes all Incidents not yet resolved or closed as of the last Data Refresh.
Note: The metric cannot be summarized across time because it is a snapshot metric.
Assignee Backlog Monthly Summary : In this section, you can view volume of the incident
backlog for the assignee in a particular assignment group.
The contents of this section are driven by your choice of the assignment group in the Assignment
Groups by Incident Backlog section and the month of the incident backlog in the Incident Backlog
Trend for the Last 12 Months section. This section is displayed in a tabular format.
Tip: You can also adjust the Ending Backlog Threshold value to modify and view the contents
of the section.
Metrics Used
Beginning Backlog – The number of open Incidents that are pending resolution at the beginning of a specific time period.
Closed Incidents – The total number of Incidents closed in a period of time. Closed Incidents mark the final status in the life cycle of an Incident.
Current Backlog – The number of open Incidents pending resolution as of the current date. Although identical to Incident Backlog, Current Backlog differs by ignoring all date-based filters.
Note: The metric cannot be summarized across time because it is a snapshot metric.
Ending Backlog – The number of open Incidents in the Beginning Backlog that are still pending resolution at the end of a time period.
Incident Backlog – Total number of open incidents that are pending resolution as of a specific point in time, such as the end of the working day. In a monthly report, you can use the Beginning and Ending Backlog to obtain the Incident Backlog as of the first and last day of the month, respectively. By default, this metric includes all Incidents not yet resolved or closed as of the last Data Refresh.
Note: The metric cannot be summarized across time because it is a snapshot metric.
Opened Incidents – The total number of incidents raised (opened) in a time frame.
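As an illustration of the snapshot metrics above, the following sketch computes a Beginning and Ending Backlog from hypothetical incident records; the `backlog_as_of` helper is illustrative, not a product API:

```python
from datetime import date

# Hypothetical incidents: opened date and resolved date (None means still open).
INCIDENTS = [
    {"opened": date(2016, 3, 20), "resolved": date(2016, 4, 10)},
    {"opened": date(2016, 4, 5),  "resolved": None},
    {"opened": date(2016, 4, 12), "resolved": date(2016, 4, 20)},
]

def backlog_as_of(records, snapshot):
    """Snapshot metric: incidents opened on or before the date and not yet resolved."""
    return sum(1 for r in records
               if r["opened"] <= snapshot
               and (r["resolved"] is None or r["resolved"] > snapshot))

beginning = backlog_as_of(INCIDENTS, date(2016, 4, 1))   # first day of the month
ending = backlog_as_of(INCIDENTS, date(2016, 4, 30))     # last day of the month
```

Because each value is tied to one snapshot date, adding the metric across dates would double-count incidents, which is why it cannot be summarized across time.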
Service Level Agreements (SLAs) are used to ensure that an incident is addressed (responded to
or resolved) within a certain amount of time. Once the SLAs are in place, it is possible to
gather the information in the system to create Service Level Management-specific reports. Any
data contained within the system on the incident SLAs and related records can be reported on,
and actions can be triggered at different times during the SLA life cycle.
For more information on the dashboards, refer to the below pages.
When you load the SLA Summary Overview dashboard, a high-level view of SLA achievement
month-over-month is displayed. You can limit your analysis by selecting the required
Assignment Group(s) from the filter options.
Click a month to view the corresponding Opened and Resolved incidents and percentage of
Response and Resolution that met the SLA time frame.
Click the tab to view a more detailed breakdown of the SLA achievement.
The legend depicts the percentage of responses and resolutions that met the SLA time frame.
Metrics Used
Opened Incidents – The total number of incidents raised (opened) in a time frame.
Resolved Incidents – The number of Incidents with the Incident status Resolved as of a specific time period. Incidents that are resolved and subsequently moved to the Closed state within the time period are also included in the count.
Percentage Response Met SLA – Percentage of responses that met the Service Level Agreement time frame. Note: This is based on the SLA flag given by the source system.
Percentage Resolution Met SLA – Percentage of resolutions that met the Service Level Agreement time frame. Note: This is based on the SLA flag given by the source system.
This section displays the SLA achievement response and resolution rates for all the resolved
incidents in the last 6 months according to your selection of incident priorities.
The Incident SLA Trend graph exhibits the incident response and resolution trend percentage for
the last 6 months according to your selection of incident Priority. In the graph, X-axis represents
the months and Y-axis represents the Incidents response and resolution percentage.
Click the tab to view the incident SLA achievement summary overview.
Metrics Used
Metric Name Description
You can view the sections in the Resolved Incident Overview dashboard using the following
time period options:
Yesterday
Last 7 Days
Last 30 Days
These options are relative to the Data Refreshed date and time that is displayed in the upper right
corner of the dashboard as Data Refreshed at Month DD, YYYY, hh:mm:ss AM/PM.
Data is extracted from ServiceNow at least once per day. Typically, a data extract occurs during
a time period that is close to midnight in the main time zone of your company. Under normal
circumstances, Yesterday corresponds to your normal expectation, which is the day that
precedes the day you are viewing the dashboard.
As an example, assume you are looking at this dashboard on April 15, 2016 at 9:00am EST and
your company’s main time zone is EST.
If the Data Refreshed time is just prior to midnight between April 14 and April 15, Yesterday
includes data from April 14, 2016, 12:00:00 AM through the date and time displayed as the
Data Refreshed date; in other words, it will be somewhat less than a full 24 hours. Any data from
ServiceNow that is more recent than the Data Refreshed date will not be included in the
dashboard for Yesterday even if it is from April 14.
If the Data Refreshed time is midnight between April 14 and April 15 or later, Yesterday
includes data from April 14, 2016, 12:00:00 AM through April 14, 2016, 11:59:59 PM; in
other words, a full 24 hours' worth of data.
Under normal circumstances, Last 7 Days corresponds to Yesterday plus the 6 days preceding
Yesterday, and Last 30 Days corresponds to Yesterday plus the 29 days preceding Yesterday.
Although rare, the data extract may not successfully complete. In this case, the above definitions
will continue to be relative to the Data Refreshed date and time, which may not correspond to
your normal expectation of Yesterday and Last X Days.
Note: By default, the dashboard displays the contents in the section based on data from the past
30 days.
Metrics Used
Resolved Incidents – The number of Incidents with the Incident status Resolved as of a specific time period. Incidents that are resolved and subsequently moved to the Closed state within the time period are also included in the count.
The Assignment Group tab displays a summary of the resolved incidents based on their
assignment groups.
You can further filter summary of the resolved incidents based on the assignment group level
options: Level 4, Level 3, Level 2, and Level 1.
The assignment groups are divided into a hierarchy of levels. Level 4 is the highest level and
Level 1 is the lowest. The lower level assignment groups are filtered based on your choice of the
higher level Assignment Group. The filters will be displayed only if there are values other than
"Unknown" and "Unspecified".
Note: By default, you can choose All in the level options to display incident summary
corresponding to all the assignment groups.
View by Category or SubCategory
The Category or SubCategory tabs display a summary of the resolved incidents based on their
category or sub-category.
You can further filter the summary of the resolved incidents based on the following options:
Top 10 – To view volume of the resolved incidents for the top 10 categories or sub categories.
All – To view volume of the resolved incidents for all the categories or sub categories.
You can view the following details of the Resolved Incidents in the Resolved Incidents by Group
and Category report:
Incident Volume by dimensions - Volume of the Resolved Incidents corresponding to one of the
following dimensions: Assignment Group, Category, or Sub-Category. You can view the report as
a horizontal bar graph where the X-axis represents the volume of the resolved incidents and the
Y-axis represents the contributors of the chosen dimension.
Incident Volume by Severity - Volume of the resolved incidents by their severity. You can view
the details in the form of a doughnut graph.
Percentage of Total - Ratio of the resolved incident volume for one contributor in a dimension to
the total resolved incident volume across all contributors in the dimension. The ratio is represented
as a percentage.
Percentage met SLA – Percentage of resolved incidents that met SLA in the chosen dimension.
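The Percentage of Total calculation above can be sketched as a simple normalization; the assignment-group volumes below are hypothetical:

```python
def percentage_of_total(volume_by_contributor):
    """Each contributor's resolved-incident volume as a share of the dimension total."""
    total = sum(volume_by_contributor.values())
    return {name: 100.0 * count / total for name, count in volume_by_contributor.items()}

# Hypothetical resolved-incident volumes per assignment group.
shares = percentage_of_total({"Network": 30, "Database": 50, "Desktop": 20})
```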
Metrics Used
Resolved Incidents – The number of Incidents with the Incident status Resolved as of a specific time period. Incidents that are resolved and subsequently moved to the Closed state within the time period are also included in the count.
Resolved Incidents Met SLA Percent – The ratio between the sum of Resolved Incidents that met SLA and the total number of Incidents. Incidents that are resolved within the stipulated time are termed Incidents that met SLA. An Incident can have more than one SLA; only when an Incident meets all of its SLAs is it considered to have met the SLA. Note: This is based on the SLA flag given by the source system.
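The all-SLAs rule for Resolved Incidents Met SLA Percent can be sketched as follows; the per-incident flags are hypothetical stand-ins for the SLA flags from the source system:

```python
def resolved_met_sla_percent(sla_flags_per_incident):
    """An incident counts as met only when every one of its SLAs was met."""
    if not sla_flags_per_incident:
        return 0.0
    met = sum(1 for flags in sla_flags_per_incident if all(flags))
    return 100.0 * met / len(sla_flags_per_incident)

# Hypothetical flags per incident, e.g. (response SLA met, resolution SLA met).
rate = resolved_met_sla_percent([(True, True), (True, False), (True, True), (False, True)])
```

Two of the four incidents met every SLA, so the metric reports 50 percent even though most individual SLAs were met.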
You can view content of the sections in this dashboard for the following time periods:
Days – Displays the resolved incidents trend based on data from the past 14 days.
Weekly – Displays the resolved incidents trend based on data from the past 13 weeks. The 13
weeks period includes the current week and the past 12 weeks.
Monthly – Displays the resolved incidents trend based on data from the past 13 months. The 13
months period includes the current month and the past 12 months.
Incident Volume Trend graph - Escalated incidents volume trend compared to volume of the
resolved incidents. In this graph, the horizontal axis represents the time period and the vertical axis
represents volume of the resolved incidents and escalated incidents.
Metrics Used
Resolved Incidents – The number of Incidents with the Incident status Resolved as of a specific time period. Incidents that are resolved and subsequently moved to the Closed state within the time period are also included in the count.
Escalated Incidents – The incidents that have passed or are about to cross the SLA time line are escalated to the next level for timely attention.
Resolved Incidents Breakdown by Volume: In this section, you can view the volume of the resolved
incidents based on assignment group, category, subcategory, and severity.
You can further restrict the contents of this section to show the top 10 or all of the resolved
incidents breakdown based on their assignment groups. For example, you can select Assignment
Group to view the Resolved Incidents breakdown by assignment group, and click All to view all
the Assignment Groups rather than just the top 10.
The Resolved Incidents Breakdown by Volume section displays Resolved Incident summary
based on their Severity. You can filter the Incident summary based on any one of the following
Severity selector options:
Severity 1 – Critical
Severity 2 – High
Severity 3 – Moderate
Severity 4 – Normal
UNKNOWN
You can use the following tabs to filter the contents of this section:
Assignment Group
Category
Sub-Category
Content of this section is displayed in the form of Resolved Incident Summary by Dimension
graph and Incident Volume Trend graph.
This graph displays Incident summary based on Assignment Group, Category, and SubCategory.
Furthermore, you can filter the contents of the graph using the following two selectors:
In the graph, the horizontal axis represents volume of the resolved incidents and the vertical axis
represents contributors of the chosen dimension.
This graph displays the Escalated Incident volume trend as compared to the Resolved Incident
volume. Content of this graph is driven by your choice of dimension contributors in the
Resolved Incidents Breakdown by Dimension graph.
In the graph, the horizontal axis represents the time period and the vertical axis represents the
volume of the resolved incidents and escalated incidents.
Metrics Used
Resolved Incidents – The number of Incidents with the Incident status Resolved as of a specific time period. Incidents that are resolved and subsequently moved to the Closed state within the time period are also included in the count.
Escalated Incidents – The incidents that have passed or are about to cross the SLA time line are escalated to the next level for timely attention.
Top 10 Configuration Item types with the highest volume of Opened Incidents
Metrics Used
Opened Incidents – The total number of incidents raised (opened) in a time frame.
Reopened Incidents – The sum of Incidents that were reopened after they were closed. This is a configurable metric. Based on your business requirements, the Numerify Administrator can configure the initial setup to tailor the metric suitably.
Opened Incidents Escalated – The number of open Incidents with the Incident status Escalated. This is a configurable metric. Based on your business requirements, the Numerify Administrator can configure the initial setup to tailor the metric suitably.
In the Monthly Trend Opened Incidents graph, the horizontal axis represents the calendar months
in a year and the vertical axis represents the volume of the opened incidents and the time taken,
in days, to resolve an incident.
Metrics Used
Opened Incidents – The total number of incidents raised (opened) in a time frame.
MTTR – The elapsed time between the Incident creation and resolution. The elapsed time is measured as the total time open.
Metrics Used
Opened Incidents – The total number of incidents raised (opened) in a time frame.
Volume of Opened Incidents grouped by their Priority for the last 30 days: You can select a CI
type to drive the content of the other two graphs in this section.
Mean time it takes to resolve Incidents belonging to a CI type. You can also compare it with a
wide range of MTTR values:
o High - Displays the highest mean time it took to resolve Incidents of a particular CI type
among all the CI types. For example, if there are 5 CI types, the application displays the
highest MTTR value among the 5 CI types.
o Mean - Displays the average of all the MTTR values for Incidents belonging to all CI
types.
o Median - Displays the median of all the MTTR values for Incidents belonging to all CI
types.
o Low - Displays the lowest mean time it took to resolve Incidents of a particular CI type
among all the CI types.
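The High, Mean, Median, and Low benchmarks can be sketched from a set of per-CI-type MTTR values; the CI types and hour figures below are hypothetical:

```python
from statistics import mean, median

# Hypothetical mean resolution time (hours) per CI type.
MTTR_BY_CI_TYPE = {"Server": 12.0, "Router": 30.0, "Laptop": 6.0, "Switch": 18.0, "Printer": 9.0}

benchmarks = {
    "High": max(MTTR_BY_CI_TYPE.values()),     # slowest CI type to resolve
    "Mean": mean(MTTR_BY_CI_TYPE.values()),
    "Median": median(MTTR_BY_CI_TYPE.values()),
    "Low": min(MTTR_BY_CI_TYPE.values()),      # fastest CI type to resolve
}
```

Comparing a selected CI type against these four values shows whether it resolves faster or slower than its peers.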
Trend of Opened Incidents belonging to the Configuration Item (CI) type for the last 30 days.
Note: You can select a CI type from the left graph to view the MTTR benchmark and Opened
Incident trend for that particular Configuration Item (CI) type.
Metrics Used
Opened Incidents – The total number of incidents raised (opened) in a time frame.
MTTR – The elapsed time between the Incident creation and resolution. The elapsed time is measured as the total time open.
Volume of Opened Incidents grouped by their Priority for the last 30 days: You can select a
Configuration Item (CI) to drive the content of the other two graphs in this section.
Mean time it takes to resolve Incidents belonging to the Configuration Item (CI). You can also
compare it with a wide range of MTTR values:
o High - Displays the highest mean time it took to resolve Incidents of a particular
Configuration Item (CI) among all the Configuration Items. For example, if there are 5 CIs,
the application displays the highest MTTR value among the 5 Configuration Items.
o Mean - Displays the average of all the MTTR values for Incidents belonging to all
Configuration Items.
o Median - Displays the median of all the MTTR values for Incidents belonging to all
Configuration Items.
o Low - Displays the lowest mean time it took to resolve Incidents of a particular
Configuration Item (CI) among all the Configuration Items.
Trend of Opened Incidents belonging to the Configuration Item (CI) for the last 30 days.
Note: You can select a Configuration Item (CI) from the left graph to view the MTTR
benchmark and Opened Incident trend for that particular Configuration Item (CI).
Metrics Used
Opened Incidents – The total number of incidents raised (opened) in a time frame.
MTTR – The elapsed time between the Incident creation and resolution. The elapsed time is measured as the total time open.
Tip: You can use the Level 1, Assignment Group, and Priority selectors to filter and view the
corresponding incidents' details.
In this section, you can view the daily trend of the incidents based on their fix rate. The bar graph
provides a comparative view of the opened and resolved incidents. In the graph, the
horizontal axis represents the time period and the vertical axis represents the volume of the
opened and resolved incidents and the incident fix rate.
In this section, you can view breakdown of the incidents by their assignment groups for the
selected time period. You can further analyze the incidents based on their assignee ration and
Met SLA target values.
Tip:
You can choose the date from the graph in the Incident Daily Trends section.
Click an assignment group to view the incident details specific to the assignment group.
Metrics Used
Opened Incidents – The total number of incidents raised (opened) in a time frame.
Resolved Incidents – The number of Incidents with the Incident status Resolved as of a specific time period. Incidents that are resolved and subsequently moved to the Closed state within the time period are also included in the count.
Incident Fix Rate – The rate at which Incidents are fixed: the ratio between the number of Incidents fixed out of the available open Incidents and the number of open Incidents. All the calculations are based on the same time period. The time period is determined by the report. For example, if the report shows a weekly trend, then the Incident Fix Rate is calculated for the week.
Incident Assignee Ratio – The ratio of the number of assignees assigned to the incidents as compared with the total number of opened incidents.
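A minimal sketch of how the Incident Fix Rate and Incident Assignee Ratio can be computed from raw incident records for one reporting period; the records and field names below are hypothetical:

```python
# Hypothetical incident records opened in one reporting week.
incidents = [
    {"id": 1, "status": "Resolved", "assignee": "ana"},
    {"id": 2, "status": "Open",     "assignee": "ben"},
    {"id": 3, "status": "Resolved", "assignee": "ana"},
    {"id": 4, "status": "Closed",   "assignee": "cid"},
    {"id": 5, "status": "Open",     "assignee": "ben"},
]

opened = len(incidents)  # all incidents opened in the period
# Resolved incidents that subsequently moved to Closed are also counted.
fixed = sum(1 for i in incidents if i["status"] in ("Resolved", "Closed"))

fix_rate = fixed / opened                 # fixed out of available open incidents
assignees = {i["assignee"] for i in incidents}
assignee_ratio = len(assignees) / opened  # distinct assignees vs. opened incidents

print(fix_rate, assignee_ratio)
```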
Tip:
You can use the Level 1, Assignment Group, and Priority selectors to filter and view the
corresponding Incidents' assignee details.
Click the Incident Summary Dashboard button in the top left of your screen to get an overview
of the incidents.
Metrics Used
Metric Name Description
Tip:
You can use the Level 2, Level 1, Assignment Group, and Priority selectors to filter and view
the corresponding Incidents' SLA details.
Click the Incident Daily Trends button in the top left of your screen to get an overview of the
incidents.
Metrics Used
Reassignment Count – The number of times an Incident has been reassigned within the assignment groups.
Incidents Met SLA – The number of incidents that have met their SLA criteria within a defined time period.
Backlog Evolution
This report uses unsolved tickets as a baseline to compare against incoming new tickets and the
daily rate of solved tickets over the last three months.
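The baseline-versus-daily-rates comparison can be sketched as follows; the starting backlog and daily counts are hypothetical, and a real report would cover three months rather than five days:

```python
# Hypothetical daily counts (a real report would span three months).
new_per_day =    [10, 12, 8, 15, 9]
solved_per_day = [ 9, 10, 9, 11, 12]

backlog = 40  # unsolved tickets at the start of the window (the baseline)
evolution = []
for new, solved in zip(new_per_day, solved_per_day):
    backlog += new - solved  # net daily change in unsolved tickets
    evolution.append(backlog)

print(evolution)  # [41, 43, 42, 46, 43]
```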
High & Urgent Priority Tickets
This report uses high and urgent unsolved tickets as a baseline to compare against incoming new high
and urgent tickets and the daily rate of solved high and urgent tickets over the last three months.
Incident Evolution
This report displays tickets with the type set to Incident, comparing new incident tickets with resolved
and unsolved incident tickets over the last three months.
Resolution Times
This report displays resolution times for solved and closed tickets over the last three months using
three measurements of time: less than 2 hours, less than 8 hours, and less than 24 hours.
Ticket Priorities
This report displays tickets by priority groupings over the last three months. Tickets with low and
normal priorities are grouped together as are tickets with high and urgent priorities.
Priority There are four values for priority: Low, Normal, High, and Urgent.
As with status, you can use the field operators to select tickets that span different
priority settings. For example, this statement returns all tickets that are not urgent:
Priority is less than urgent.
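The "less than" operator implies an ordering over the four priority values. A sketch of that semantics, where the ticket data and helper function are illustrative rather than the product's actual API:

```python
# Assumed ascending order for the "less than" comparison.
PRIORITY_ORDER = ["low", "normal", "high", "urgent"]

def priority_less_than(tickets, threshold):
    """Return tickets whose priority sorts below the threshold."""
    limit = PRIORITY_ORDER.index(threshold)
    return [t for t in tickets if PRIORITY_ORDER.index(t["priority"]) < limit]

tickets = [
    {"id": 1, "priority": "low"},
    {"id": 2, "priority": "urgent"},
    {"id": 3, "priority": "high"},
]

# "Priority is less than urgent" returns all tickets that are not urgent.
not_urgent = priority_less_than(tickets, "urgent")
print([t["id"] for t in not_urgent])  # [1, 3]
```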
Tags You use this condition to determine if tickets contain a specific tag or tags. You can
include or exclude tags in the condition statement by using the operators contains at
least one of the following or contains none of the following. More than one tag can
be entered. They must be separated with a space.
Ticket channel The ticket channel is where and how the ticket was created and can be any of the
following options:
Web form
Email
Chat
Twitter
Twitter DM (direct message)
Twitter Favorite
Voicemail
Phone call (incoming)
Get Satisfaction
Feedback Tab
Web service (API)
Trigger or automation
Forum topic
Closed ticket
Ticket sharing
Facebook Post
Resolution time in Use this condition to narrow down tickets by the number of hours from when the
hours ticket was created to Closed.
Ticket Satisfaction Ticket satisfaction rating is available on Professional and Enterprise plans. This
condition returns the following customer satisfaction rating values:
Unoffered means that the survey has not previously been sent
Offered means that the survey has already been sent
Bad means that the ticket has received a negative rating
Bad with comment means that the ticket has received a negative rating with a
comment
Good means that the ticket has received a positive rating
Good with comment means that the ticket has received a positive rating with a
comment
Requester's language Returns the language preference of the person who submitted the request.
Reopens Available on Professional and Enterprise. The number of times a ticket has moved
from Solved to Open or Pending.
Agent replies Available on Professional and Enterprise. The number of public agent comments.
Group stations Available on Professional and Enterprise. The number of different groups to which a
ticket has been assigned.
Assignee stations Available on Professional and Enterprise. The number of different agents to which a
ticket has been assigned.
First reply time in Available on Professional and Enterprise. The time between ticket creation and the
hours first public comment from an agent. You can specify either calendar hours or
business hours.
First resolution time in Available on Professional and Enterprise. The time from when a ticket is created to
hours when it is first solved. You can specify either calendar hours or business hours.
Full resolution time in Available on Professional and Enterprise. The time from when a ticket is created to
hours when it is solved for the last time. You can specify either calendar hours or business
hours.
Agent wait time in Available on Professional and Enterprise. The cumulative time a ticket has been in a
hours Pending state (awaiting customer response). You can specify either calendar hours
or business hours.
Requester wait time in Available on Professional and Enterprise. The cumulative time that a ticket is in a
hours New, Open or On-hold state. You can specify either calendar hours or business
hours.
On-hold time in hours Available on Professional and Enterprise. The cumulative time that a ticket is in the
On-hold status. You can specify either calendar hours or business hours.
Custom fields Custom fields that set tags (drop-down list and checkbox) are available as
conditions. You can select the drop-down list values and Yes or No for checkboxes.
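The cumulative wait-time conditions above (agent wait, requester wait, on-hold) all sum the hours a ticket spends in a given state. A sketch in calendar hours, assuming a simple chronological status-change log (the event format is hypothetical):

```python
from datetime import datetime

def hours_in_status(events, status, now):
    """Cumulative calendar hours a ticket spent in `status`.

    `events` is a chronological list of (timestamp, new_status) pairs.
    """
    total = 0.0
    entered = None
    for ts, new_status in events:
        if new_status == status and entered is None:
            entered = ts                                   # entered the status
        elif new_status != status and entered is not None:
            total += (ts - entered).total_seconds() / 3600  # left the status
            entered = None
    if entered is not None:  # ticket is still in the status
        total += (now - entered).total_seconds() / 3600
    return total

events = [
    (datetime(2023, 5, 1, 9, 0), "open"),
    (datetime(2023, 5, 1, 12, 0), "pending"),
    (datetime(2023, 5, 1, 15, 0), "open"),
    (datetime(2023, 5, 1, 18, 0), "pending"),
]
now = datetime(2023, 5, 1, 20, 0)
print(hours_in_status(events, "pending", now))  # 5.0 (3h + 2h)
```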
REPORTS
System: Provides a complete report on all the system-related activities of all the
devices. This category of reports includes All Events, All Down Events, SNMP Trap Log,
Windows Event Log, Performance Monitor Log, Notification Profiles Triggered,
Downtime Scheduler Log, Schedule Reports Log, All Alerts, and All Down Alerts.
Health and Performance: Gives you a detailed report on the health and
performance of all/top N devices.
Availability (Average uptime) and Response: Gives you a
detailed report on the availability and the response time of all/top N devices.
Inventory: Inventory reports are available for servers, desktops, all devices, SNMP-
enabled devices, and non-SNMP devices.
Custom Report Builder: The custom report builder is the easiest way to generate a report
using only the data that you want. It proceeds in stages: Category, Devices,
Monitors, Time Period, and Graph or Table view.
KPI Reporting
To provide transparency and hold ourselves accountable, we share Key Performance Indicators
(KPIs) with KU leadership each month. Here are some examples of the KPIs we report on:
operational or primary metrics – those metrics used to monitor, track, and make decisions on the daily or
core work against key delivery parameters:
1) quality,
2) availability,
3) cost,
4) delivery against SLAs,
5) schedule.
Operational metrics are the base of effective management and are the fundamental
measures of the activity being done.
verification or secondary metrics – those metrics used to verify that the work completed meets
standards or is functioning as designed. Verification metrics should also be collected and reviewed
by the same operational team, though potentially by different members of the team or as
part of a broader activity (e.g., a DR test). Verification metrics provide an additional
measure of either overall quality or critical activity effectiveness.
performance or tertiary metrics – those metrics that provide insight into the performance of the
function or activity. Performance metrics enable insight into the team's efficiency, timeliness, and
effectiveness:
1) unit cost,
2) defects per unit,
3) productivity, etc.
Verification metrics:
Monthly sample of the configuration management database server records for accuracy
and completeness; ongoing scan of the network for servers not in the configuration
management database; regular reporting of all obsolete server configs with callouts on
those exceeding planned service or refresh dates.
Customer transaction times; regular (every six months) capacity planning and
performance reviews of critical business service stacks, including servers.
Root cause review of all significant customer-impacting events; ratios of auto-detected server
issues versus manual or user detection.
DR tests; server privileged access and log reviews; regular monthly server recovery or
failover tests (for a sample).
Performance metrics:
Level of standardization or virtualization; level of currency/obsolescence.
Level of customer-impacting availability; customer satisfaction with performance; amount of
headroom to handle business growth.
Administrators per server; cost per server; cost per business transaction.
Server delivery time; man-hours required to deliver a server.
Obviously, if you are just setting out, you will collect on some of these metrics first. As
you incorporate their collection and automate the work and reporting associated with
them you can then tackle the additional metrics. And you will vary them according to the
importance of different elements in your shop. If cost is critical, then reporting on cost
and efficiency plays such as virtualization will naturally be more important. If time to
market or availability is critical, then those elements should receive greater focus.
Below is a diagram that reflects the construct of the three types of metrics and their
relationship to the different metrics areas and score cards:
So, in addition to the metrics framework, what else is required to be successful leveraging
the metrics?
First and foremost, the culture of your team must be open to alternate views and
support healthy debate. Otherwise, no amount of data (metrics) or facts will enable the
team to change directions from the party line. If you and your management team do not
lead regular, fact-based discussions where course can be altered and different alternatives
considered based on the facts and the results, you likely do not have the openness needed
for this approach to be successful. Consider leading by example here and emphasize fact
based discussions and decisions.
Also, you must have defined processes that are generally adhered to. If your group's work is
heavily ad hoc and different each time, measuring what happened the last time will not
yield any benefits. If this is the case, you need to first focus on defining, even at a high
level, the major IT processes and help your teams adopt them. Then you can
proceed to metrics and the benefits they will accrue.
Accountability, sponsorship and the willingness to invest in the improvement
activities are also key factors in the speed and scope of the improvements that can
occur. As a leader you need to maintain a personal engagement in the metrics reviews
and score card results. They should feed into your team's goals, and you should monitor the
progress in key areas. Your sponsorship and senior business sponsorship where
appropriate will be major accelerators to progress. And hold teams accountable for their
results and improvements within their domain.
Types of Indicators and Metrics
The need for metrics and indicators is underlined by many organizations, such as the Information
Technology Infrastructure Library (ITIL), ISACA (COBIT 5), and ISO. ITIL defines three types of
metrics:
technology metrics,
process metrics
service metrics.
Note that technology and process metrics are also referred to as operational metrics.
1) Technology Metrics
Technology metrics measure specific aspects of the IT infrastructure and
equipment
Most technology metrics provide inputs on IT utilization, which is only a small part of service, to
the chief information officer (CIO) or data center manager; however, unless such a metric is
compared with another metric, it may not provide meaningful information for top management.
For example, consider a network response of 100 milliseconds (i.e., a message reaches its
destination in 100 milliseconds). If management expects network response to be 10 milliseconds,
the response time requires attention, and if management expects network response to be 300
milliseconds, the response time is more than satisfactory.
2) Process Metrics
Process metrics measure specific aspects of a process
Process metrics provide information about the functioning of processes. These metrics are
generally used for conformance with compliance requirements related to internal controls. However, too
many process metrics may not serve the purpose of monitoring. Metrics that are related to critical
processes may be considered for management reporting.
3) Service Metrics
The primary focus of ITIL is on providing service. Service metrics are essential metrics for
management to monitor. They provide an end-to-end measurement of service performance.
Results of a customer satisfaction survey indicating how much IT contributes to customer satisfaction
Cost of executing a transaction (banks use this metric to measure the cost of a transaction that is
carried out via different service channels, such as Internet, mobile, ATM and branch)
Efficiency of service, which is based on the average time to complete a specific service. A service is
not just a process; a service can consist of multiple processes.
The COBIT 5 process reference model identifies 37 IT-related generic processes. COBIT 5
suggests using the following metrics:
Depending upon the organization’s customer services offered using IT solutions, the following
metrics (which shall be a subset of metrics defined previously) may be considered:
There are two IT-related goals that primarily map to the enterprise goal of a customer-oriented
service culture. They are IT-related goals 01, Alignment of IT and business strategy, and 07,
Delivery of IT services in line with business requirements. Metrics suggested for IT-related goal
07 from COBIT 5 are (for simplicity, only those IT goals that primarily map to the enterprise
goal in the example have been considered):
'Quality of Service' Measures

Operational Quality
• Help desk response time (speed) to answer
• % E-mail uptime
• % Server uptime
• Root cause of outage category (e.g., power lines down, human accidentally …)
• System load
• Disk utilization
• Memory utilization
• CPU utilization
• Scheduled availability

Servers/Network
• Network performance
• E-mail performance
• Website availability
• Application incidents
• % Downtime
Creating a Balanced Portfolio of Information Technology Metrics
www.businessofgovernment.org
Personnel
• Employees on staff
• Employee salaries
• Turnover by position
• Open positions
• Average duration of open positions
• Total contractors

End User Customer Satisfaction Metrics
• IT help desk responsiveness
• Availability of training and development programs
Budget
• Planned value
• Earned value
• % Over budget

Stakeholders
• % of CIOs' and key functional managers' time spent on charting the future
(strategic innovation) rather than on day-to-day operations
Data center metrics (metric – category – definition – target):
Power usage – Energy – The amount of total power consumed divided by IT power consumed – 1.5 or lower
Cost per … – Cost – The total costs of a data center divided by … – Not yet established
Servers per full-time employee – Labor – The total number of servers divided by … – At least 25 servers per FTE
Facility utilization – Facility – The total number of server racks … – At least 80 percent
Storage utilization – Storage – The total storage used divided by … – 75 percent for in-house computing/outsourced facilities
Core to non-core – Facility – The number of physical servers in core centers … operating systems – At least 65 percent
Virtual hosts – Virtualization – The number of virtualized hosts divided by … – At least 20 percent
===================================================================
Metrics can play an important role in achieving excellence as they force the organization to pay
attention to their performance and prompt management to make adjustments when goals are not
being achieved.
Operational metrics
Online application performance. The average time it takes to render a screen or page. It is also
important to measure the variability of performance (discussed further in the supplemental
operational metrics section).
Online application availability. The percentage of time the application is functioning properly.
This can be difficult to define. If the application is available for some users but not all, is it
"available?" What if most functions are working but a minor function is not? To address this
problem, I like to define the primary functions an application performs. Then, if any of these
functions are unavailable, the application is considered down even if most of the application is
usable. Also, if the application is primarily used during business hours, I like to have separate
metrics for that time versus other times. So, the metrics might be: primary functions during
business hours; all functions during business hours; primary functions 24x7; and all functions
24x7.
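The "down if any primary function is down" rule can be sketched as follows, assuming hypothetical per-sample health data in which each sample records the set of functions currently working:

```python
# Functions designated as primary for this (hypothetical) application.
PRIMARY = {"login", "checkout"}

# Each health sample is the set of functions observed working at that moment.
samples = [
    {"login", "checkout", "reports"},  # fully up
    {"login", "reports"},              # checkout down -> app counts as down
    {"login", "checkout"},             # up (a minor function being down is OK)
    {"login", "checkout", "reports"},
]

# The app is "available" for a sample only if every primary function works.
up = sum(1 for working in samples if PRIMARY <= working)
availability = 100.0 * up / len(samples)
print(availability)  # 75.0
```

The same loop can be run separately over business-hours samples and 24x7 samples to produce the four metric variants described above.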
Batch SLAs met. The percentage of key batch jobs that finish on time.
Supplemental operational metrics. Other metrics that might be used to enhance operational
effectiveness include the number of unscheduled changes to the production systems, the
throughput of batch processes, complexity scores for major applications (indicating how difficult
they are to maintain), architectural integrity (the percent of applications on preferred
technologies, another indication of how difficult applications are to maintain) and the variability
of online application performance.
This last item requires a quick but important note. Business users can get quite frustrated when
online production issues are simply defined as "we are experiencing slowdowns." One technique
to solve this problem is to set a target for each screen or page in the application with the target
defined as the time 90 percent of the screen or page occurrences will render. Then, actual
performance can be compared to this goal and the percentage of time the goal is hit provides a
good indicator of the customer service level (CSL).
With this technique, both the business and technology know how the application is doing when
the CSL number is reported. If an application's CSL is 90 percent, the application is running
exactly as expected whereas a CSL of 85 percent or 50 percent describe different degrees of not
achieving expected results.
Delivery metrics
Project satisfaction. The average score from post project surveys completed by business
partners. After each project, it is important to solicit feedback from the business. The survey
should contain one summary question for the project satisfaction metric (e.g., what is your
overall satisfaction with this project on a scale of one to five?), a few more specific questions and
an area for written comments. The survey should also be completed by the technology group to
gain further insights on the areas that could be improved moving forward, but these scores are
not included in the metric as they tend to be biased on the high side.
Project delivery. The percentage of projects delivered on time. "On time" is another tricky
concept. For projects using the waterfall methodology, the projected delivery date can vary
greatly once the team engages in the design process. I have found it useful to make sure business
partners know that the delivery date is not set until design is done and therefore, this metric uses
that date for a target. For Agile projects, this metric is not relevant as the delivery date is almost
always met by adjusting scope.
Project cost. The percentage of projects delivered within the cost estimate. For this metric, I also
use the post design cost estimate for the same reasons noted in the previous section. Again, Agile
projects are less likely to benefit from this metric.
Defect containment. The percentage of defects contained to the test environments. It is well
known that defects are much more expensive to fix in production. This metric counts the defects
corrected during the development process and compares this count to any defects found in the
first 30 days of production. While 30 days may seem like a short period, I have tried using the
first 90 days of production for this, but the wait to determine the metric was more problematic
than the additional information provided by the longer timeframe.
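The containment percentage is simply test-phase defects over all defects found through the first 30 days of production; a sketch with hypothetical counts:

```python
defects_in_test = 47     # defects found and corrected during development
defects_in_prod_30d = 3  # defects found in the first 30 days of production

containment = 100.0 * defects_in_test / (defects_in_test + defects_in_prod_30d)
print(f"Defect containment: {containment:.0f}%")  # 94%
```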
Supplemental delivery metrics. Additional metrics that might be included in this area: how
well interim deliverables, such as the completion of design, are hit on time; how well first
estimates compare to the final project cost; how many changes are made during the freeze
between project completion and the production install; and how many projects require an
unscheduled change after installation.
Organizational metrics
Attrition. The percentage of employees who move to other jobs. For this metric, it is important
to only include voluntary separations, as you do not want to provide managers with an incentive
to retain poor performers. It is also important to differentiate between employees who leave the
company versus those that leave to take another position within the company.
Performance reviews. The percentage of employees with current written reviews. Providing
employees with constructive feedback is one of the most important steps an organization can
take to improve productivity. Unfortunately, in many organizations, managers and employees
dread this process and it is often neglected. The problem is often the enforcement of a grading
system, which becomes the focus rather than the specific feedback. If you can do it, skip the
grade and have the manager focus on what needs to happen for the employee to get to the next
level — a discussion everyone should find useful.
Supplemental organizational metrics. There are many other metrics that can be useful in
creating an engaged workforce. Examples include making sure employees have written
performance expectations and goals at the start of the year, tracking the amount of training
provided to employees (e.g., setting targets just like CPAs and other professionals mandate) and
highlighting the number of employees in formal mentoring relationships.
Financial metrics
Budget variance. Actual costs compared to budgeted costs. This should be done for both direct
expenses (salaries) and inter-company expenses (allocations from other areas) since direct
expenses are more controllable.
Resource cost. The average cost of a technology resource. This metric provides a good view of
how well managers are controlling costs by using cheaper outsourcing labor, being thoughtful in
the use of higher priced temporary labor and managing an organization that is not top heavy with
expensive employees (discussed in more detail in the Supplemental Financial Metrics section).
Some organizations set targets for outsourcing (e.g., 30 percent of the workforce), but I think the
overall resource cost metric is much more powerful. If managers believe they can be more
productive and keep costs down using a variety of techniques, why not let them rather than focus
on a single strategy?
Supplemental financial metrics. There are several other metrics that can be useful for
organizations. Simply keeping a running total of the dollars saved from cost initiatives (e.g.,
moving to cheaper technologies) can help keep the focus on these projects. Tracking costs by
activity (e.g., development versus maintenance versus running the systems versus other costs)
can highlight areas for improvement.
Finally, as alluded to above, many organizations have a tendency to become top heavy over time
so it is useful to track this in a metric. For example, if an organization has eight levels starting
with new college graduates and going to vice presidents, a simple metric can be created by
assigning a number to each employee (e.g., new college graduates = 1, VPs = 8), adding up the
numbers and dividing the sum by the number of employees to determine the average level in the
group.
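That average-level calculation can be sketched as follows, with hypothetical headcounts:

```python
# Hypothetical headcount by level: 1 = new college graduate ... 8 = VP.
levels = [1, 1, 2, 3, 3, 4, 5, 8]

# Average level: sum of per-employee levels divided by headcount.
average_level = sum(levels) / len(levels)
print(average_level)  # 3.375
```

Tracking this number over time shows whether the organization is drifting top-heavy.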
After some investigation, we found that a central support group had installed new security
software that was interacting with our application in an inefficient manner. We worked with that
group and were able to get our performance back down to 1.2 seconds by using the new software
in a different way. While a 0.4 second increase in performance is not the end of the world, this
could have become a problem if a few different issues happened over time and were not caught.
Successful IT organizations solve issues like this before they become problems, while less
successful organizations are caught off guard when the business complains about a degradation
in their technology. Metrics are key to making sure your organization proactively addresses
symptoms before they become real problems rather than reacting when a problem is real and
there is a crisis.
This article is published as part of the IDG Contributor Network
Operations management metrics give CIOs and other IT managers situational awareness of their
department's performance, resources, personnel, and strategic activities. These metrics should:
All CIOs interviewed noted that they have a very good inventory of metrics when it comes to
sensing and measuring the performance of their technical infrastructure. As one CIO told us:
At a tactical and more strategic level, these metrics are used to uncover why key ingredients
(e.g., infrastructure, personnel, etc.) are either not performing up to par, or what might be
needed to increase their functionality and capabilities. Consider personnel metrics: These
metrics can be as basic as employee salary and number of employees on staff. Other, more
complex metrics can capture a number of bottom-line impacts within the organization. For instance,
metrics such as employee absenteeism can measure absence rate, unscheduled absence rate,
overtime expense, employee performance/productivity index, and overtime as a percentage of
labor costs. These metrics reveal impacts on the organization related to the cost of
replacement workers, decreased employee morale, and overtime pay due to absence.
Overall Department
• % of total projects on or below budget
As discussed in Finding Three (page 18), CIOs acknowledged that linking the performance of their IT
unit to the overall goals of the organization is no easy feat. For one, IT is just one of the components
within an overall program and only rarely is in the driver's seat. Yet, when things go wrong with the
overall project, the IT department often takes a large share of the blame. Leading CIOs are not
giving up on this challenge, and are working to "prove their salt." Consider Results Minneapolis, a
public online dashboard linking 34 pages' worth of IT departmental performance measures to larger
city values. One of the dashboard headings highlights the goal of having IT services and operations
customer-focused and well managed. Metrics, like the number of IT projects in progress, the
number of IT projects on budget, and expenditure per full-time IT employee compared to other
departmental employees, are visualized as evidence of how well the IT department is performing.
The IT department goals strive to push larger city goals forward and establish IT's worth in concrete
terms. Other CIOs are looking at how to track the contribution of the IT department to projects that
are transforming their agency, or how IT is fundamental to building new programs or implementing
new policy options. CIOs are comparing the percentages of their budgets and resources that are
allocated for these efforts versus those that are allocated for standard IT maintenance and basic
computing resource provisioning.
Specifically, the analytics are grouped into the following two categories:
Basic operational analytics: calculate KPIs (key performance indicators) and
their trends directly from the ticket data. The analytics performed include
1.1. ticket volume and incidents
1.2. resolution time (i.e., the time spent solving a ticket)
1.3. trends over time by any ticket group
1.4. volume share and workload of any ticket group or individual resource
1.5. SLA (service level agreement) performance
1.6. ticket arrival distribution by time of the day or day of the week
1.7. ticket resolution performance in terms of the trends in arrivals, completions, and backlogs.
Advanced operational analytics: dive deeper into ticket data and apply
techniques such as clustering, modeling, and simulation to further derive business
intelligence and identify potential saving opportunities. The analytics performed
include
2.1. ticket resolution effort estimation
2.2. resource load analysis
2.3. ticket volume forecasting
2.4. resource utilization measurement
2.5. resource sharing and pooling
2.6. staffing analysis
2.7. group right-sizing
2.8. cross-skilling recommendation.
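As an illustration of the basic operational analytics (resolution time and SLA performance, items 1.2 and 1.5), both KPIs can be derived per ticket group; the ticket tuples below are hypothetical:

```python
from collections import defaultdict

# Hypothetical tickets: (group, resolution_hours, sla_target_hours).
tickets = [
    ("network", 3.0, 8), ("network", 10.0, 8),
    ("desktop", 2.0, 4), ("desktop", 3.5, 4), ("desktop", 6.0, 4),
]

by_group = defaultdict(list)
for group, hours, sla in tickets:
    by_group[group].append((hours, sla))

results = {}
for group, rows in sorted(by_group.items()):
    avg = sum(h for h, _ in rows) / len(rows)            # mean resolution time
    met = sum(1 for h, s in rows if h <= s) / len(rows)  # SLA performance
    results[group] = (avg, met)
    print(f"{group}: avg {avg:.1f}h, SLA met {met:.0%}")
```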