Corporate Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 526-4100
THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT
SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE
OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.
The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB’s public
domain version of the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.
NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED “AS IS” WITH
ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT
LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF
DEALING, USAGE, OR TRADE PRACTICE.
IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING,
WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO
OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
CCSP, CCVP, the Cisco Square Bridge logo, Follow Me Browsing, and StackWise are trademarks of Cisco Systems, Inc.; Changing the Way We Work, Live, Play, and Learn, and
iQuick Study are service marks of Cisco Systems, Inc.; and Access Registrar, Aironet, ASIST, BPX, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, Cisco, the Cisco
Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity, Empowering the Internet Generation,
Enterprise/Solver, EtherChannel, EtherFast, EtherSwitch, Fast Step, FormShare, GigaDrive, GigaStack, HomeLink, Internet Quotient, IOS, IP/TV, iQ Expertise, the iQ logo, iQ
Net Readiness Scorecard, LightStream, Linksys, MeetingPlace, MGX, the Networkers logo, Networking Academy, Network Registrar, Packet, PIX, Post-Routing, Pre-Routing,
ProConnect, RateMUX, ScriptShare, SlideCast, SMARTnet, StrataView Plus, TeleRouter, The Fastest Way to Increase Your Internet Quotient, and TransPath are registered
trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or Website are the property of their respective owners. The use of the word partner does not imply a partnership relationship
between Cisco and any other company. (0502R)
Preface xi
Revision History xi
Combining IP Telephony and IPCC in the Same Cisco CallManager Cluster 1-15
Multi-Channel Design Considerations (Cisco Email Manager Option and Cisco Collaboration Server
Option) 3-15
Cisco Email Manager Option 3-17
CTI OS 5-11
Summary 5-14
IPCC Enterprise with Cisco CallManager Releases 3.1 and 3.2 6-3
IPCC Enterprise with Cisco CallManager Releases 3.3 and Later 6-4
GLOSSARY
INDEX
This document provides design considerations and guidelines for implementing Cisco IP Contact Center
(IPCC) Enterprise Edition solutions based on the Cisco Architecture for Voice, Video, and Integrated
Data (AVVID).
This document builds upon ideas and concepts presented in the Cisco IP Telephony Solution Reference
Network Design (SRND), which is available online at
http://cisco.com/go/srnd
This document assumes that you are already familiar with basic contact center terms and concepts and
with the information presented in the Cisco IP Telephony SRND. To review IP Telephony terms and
concepts, refer to the documentation at the preceding URL.
Revision History
Unless stated otherwise, the information in this document applies specifically to Cisco IP Contact Center
Enterprise Edition Releases 5.0 and 6.0.
This document may be updated at any time without notice. You can obtain the latest version of this
document online at
http://cisco.com/go/srnd
Visit this Cisco.com website periodically and check for documentation updates by comparing the
revision date (on the front title page) of your copy with the revision date of the online document.
The following table lists the revision history for this document.
Obtaining Documentation
Cisco documentation and additional literature are available on Cisco.com. Cisco also provides several
ways to obtain technical assistance and other technical resources. These sections explain how to obtain
technical information from Cisco Systems.
Cisco.com
You can access the most current Cisco documentation at this URL:
http://www.cisco.com/univercd/home/home.htm
You can access the Cisco website at this URL:
http://www.cisco.com
You can access international Cisco websites at this URL:
http://www.cisco.com/public/countries_languages.shtml
Ordering Documentation
You can find instructions for ordering documentation at this URL:
http://www.cisco.com/univercd/cc/td/doc/es_inpck/pdi.htm
You can order Cisco documentation in these ways:
• Registered Cisco.com users (Cisco direct customers) can order Cisco product documentation from
the Ordering tool:
http://www.cisco.com/en/US/partner/ordering/index.shtml
• Nonregistered Cisco.com users can order documentation through a local account representative by
calling Cisco Systems Corporate Headquarters (California, USA) at 408 526-7208 or, elsewhere in
North America, by calling 1 800 553-NETS (6387).
Documentation Feedback
You can send comments about technical documentation to bug-doc@cisco.com.
You can submit comments by using the response card (if present) behind the front cover of your
document or by writing to the following address:
Cisco Systems
Attn: Customer Document Ordering
170 West Tasman Drive
San Jose, CA 95134-9883
We appreciate your comments.
Note Use the Cisco Product Identification (CPI) tool to locate your product serial number before submitting
a web or phone request for service. You can access the CPI tool from the Cisco Technical Support
Website by clicking the Tools & Resources link under Documentation & Tools. Choose Cisco Product
Identification Tool from the Alphabetical Index drop-down list, or click the Cisco Product
Identification Tool link under Alerts & RMAs. The CPI tool offers three search options: by product ID
or model name; by tree view; or for certain products, by copying and pasting show command output.
Search results show an illustration of your product with the serial number label location highlighted.
Locate the serial number label on your product and record the information before placing a service call.
• Internet Protocol Journal is a quarterly journal published by Cisco Systems for engineering
professionals involved in designing, developing, and operating public and private internets and
intranets. You can access the Internet Protocol Journal at this URL:
http://www.cisco.com/ipj
• World-class networking training is available from Cisco. You can view current offerings at
this URL:
http://www.cisco.com/en/US/learning/index.html
The Cisco IP Contact Center (IPCC) solution consists of four primary Cisco software components:
• IP Communications infrastructure: Cisco CallManager
• Queuing and self-service: Cisco IP Interactive Voice Response (IP IVR) or Cisco Internet Service
Node (ISN)
• Contact center routing and agent management: Cisco Intelligent Contact Management (ICM)
Software
• Agent desktop software: Cisco Agent Desktop or Cisco Toolkit Desktop (CTI Object Server)
In addition to these core components, the following Cisco hardware products are required for a complete
IPCC deployment:
• Cisco IP Phones
• Cisco voice gateways
• Cisco LAN/WAN infrastructure
Once deployed, IPCC provides an integrated Automatic Call Distribution (ACD), IVR, and Computer
Telephony Integration (CTI) solution.
The following sections discuss each of the software products in more detail and describe the data
communications between each of these components. For more information on a particular product, refer
to the specific product documentation available online at
www.cisco.com
Cisco CallManager
Cisco CallManager, when combined with the appropriate LAN/WAN infrastructure, voice gateways,
and IP phones, provides the foundation for a Voice over IP (VoIP) solution. Cisco CallManager is a
software application that runs on Intel Pentium-based servers running Microsoft Windows Server
operating system software and Microsoft SQL Server relational database management software. The
Cisco CallManager software running on a server is referred to as a Cisco CallManager server. Multiple
Cisco CallManager servers can be grouped into a cluster to provide for scalability and fault tolerance.
For details on Cisco CallManager call processing capabilities and clustering options, refer to the
Cisco IP Telephony Solution Reference Network Design (SRND) guide, available at
www.cisco.com/go/srnd
Call reporting uses the IPCC reporting infrastructure, which provides both standard reports and a
development environment for custom reporting via Cisco Consulting Engineering Services or a certified
reseller.
Note Because the IP IVR and IP Queue Manager are so similar, the remainder of this document refers to the
IP IVR only.
(Figure: IPCC call flow. A call from the PSTN enters through a voice gateway into the Cisco CallManager cluster; the ICM and IP IVRs participate in routing and queuing, and numbered steps show the agent-available notification, the screen pop, and the call being answered at the agent IP phone and desktop. Legend: IP voice, TDM voice, call control and CTI data.)
(Figure: IP IVR 2 connects to the ICM through the IVR PIM over the Service Control Interface (SCI), with PSTN calls arriving through a voice gateway. Legend: IP voice, TDM voice, CTI/call control data.)
In larger, multi-site (multi-cluster) environments, multiple PGs are usually deployed. Each
Cisco CallManager cluster requires a co-located PG. When multiple Cisco CallManager clusters are
deployed, the ICM software makes them all appear to be part of one logical enterprise-wide contact
center with one enterprise-wide queue.
Three agent and supervisor desktop options are available:
• Cisco Agent Desktop, an out-of-the-box agent desktop (shown in Figure 1-3).
• Cisco Toolkit Desktop, which is a software development toolkit built on the CTI Object Server
(CTI OS). The Cisco Toolkit Desktop is implemented for custom desktops or desktops integrated
with other applications (for example, a customer database application).
• Embedded CRM desktops such as the Cisco Siebel Desktop. The Cisco Siebel Desktop is an IPCC
agent desktop that is fully embedded within the Siebel desktop application. The Cisco Siebel
Desktop is offered by Cisco, and a number of other embedded CRM desktops are available from
Cisco partners.
In addition to an agent desktop, a supervisor desktop is available with each of these options.
The chapter on Agent Desktop and Supervisor Desktop, page 7-1, covers details on desktop selection
and design considerations.
Administrative Workstation
The Administrative Workstation (AW) provides a collection of administrative tools for managing the
ICM software configuration. The two primary configuration tools on the AW are the Configuration
Manager and the Script Editor. The Configuration Manager tool is used to configure the ICM database
to add agents, add skill groups, assign agents to skill groups, add dialed numbers, add call types, assign
dialed numbers to call types, assign call types to ICM routing scripts, and so forth. The Script Editor tool
is used to build ICM routing scripts. ICM routing scripts specify how to route and queue a contact (that
is, the script identifies which agent should handle a particular contact).
For details on the use of these tools, refer to the IP Contact Center Administration Guide, available at
http://www.cisco.com/univercd/cc/td/doc/product/icm/ipccente/index.htm
The AW is the only software module that must run on a separate server from all of the other IPCC
software modules. An ICM installation supports an unlimited number of AWs, which can be deployed
co-located with or remote from the ICM Central Controller. Each AW is independent of other AWs, and
redundancy is provided by deploying multiple AWs.
AWs that communicate directly with the ICM Central Controller are called distributor AWs. An ICM
deployment must have at least one distributor AW. Additional AWs (distributors or clients) are also
allowed for redundancy (primary and secondary distributors) or for additional access by the AW clients
at a site. At any additional site, at least one distributor and any number of client AWs can be deployed;
however, client AWs should always be local to their AW distributor.
Client AWs communicate with a distributor AW to view and modify the ICM Central Controller
database and to receive real-time reporting data. Distributor AWs relieve the Central Controller (the
real-time call processing engine) from the task of constantly distributing real-time contact center data to
the client AWs.
JTAPI Communications
In order for JTAPI communications to occur between Cisco CallManager and external applications such
as the IPCC and IP IVR, a JTAPI user ID and password must be configured within Cisco CallManager.
Upon startup of the Cisco CallManager PIM or upon startup of the IP IVR, the JTAPI user ID and
password are used to log in to Cisco CallManager. This login process by the application (Cisco
CallManager PIM or IP IVR) establishes the JTAPI communications between the Cisco CallManager
cluster and the application. A single JTAPI user ID is used for all communications between the entire
Cisco CallManager cluster and the ICM. A separate JTAPI user ID is also required for each IP IVR
server. In an IPCC deployment with one Cisco CallManager cluster and two IP IVRs, three JTAPI user
IDs are required: one JTAPI user ID for the ICM application and two JTAPI user IDs for the two IP
IVRs.
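As a quick check on sizing, the number of JTAPI user IDs follows directly from the component counts described above. The following sketch (the function name is illustrative, not Cisco configuration syntax) captures the rule:

```python
def required_jtapi_user_ids(num_ip_ivrs: int) -> int:
    """One JTAPI user ID for the ICM application (shared by the entire
    Cisco CallManager cluster) plus one per IP IVR server."""
    return 1 + num_ip_ivrs

# One CallManager cluster with two IP IVRs needs three JTAPI user IDs.
print(required_jtapi_user_ids(2))  # 3
```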
The Cisco CallManager software includes a module called the CTI Manager, which is the layer of
software that communicates via JTAPI to applications such as the ICM and IP IVR. Every node within
a cluster can execute an instance of the CTI Manager process, but the Cisco CallManager PIM on the
PG communicates with only one CTI Manager (and thus one node) in the Cisco CallManager cluster.
The CTI Manager process communicates CTI messages to/from other nodes within the cluster. For
example, suppose a deployment has a voice gateway homed to node 1 in a cluster, and node 2 executes
the CTI Manager process that communicates to the ICM. When a new call arrives at this voice gateway
and needs to be routed by the ICM, node 1 sends an intra-cluster message to node 2, which will send a
route request to the ICM to determine how the call should be routed.
Each IP IVR also communicates with only one CTI Manager (or node) within the cluster. The Cisco
CallManager PIM and the two IP IVRs from the previous example could each communicate with
different CTI Managers (nodes) or they could all communicate with the same CTI Manager (node).
However, each communication uses a different user ID. The user ID is how the CTI Manager keeps track
of the different applications.
When the Cisco CallManager PIM is redundant, only one side is active and in communication with the
Cisco CallManager cluster. Side A of the Cisco CallManager PIM communicates with the CTI Manager
on one Cisco CallManager node, and side B of the Cisco CallManager PIM communicates with the CTI
Manager on another Cisco CallManager node. The IP IVR does not have a redundant side, but the IP IVR
does have the ability to fail over to another CTI Manager (node) within the cluster if its primary CTI
Manager is out of service. For more information on failover, refer to the chapter on Design
Considerations for High Availability, page 3-1.
The JTAPI communications between the Cisco CallManager and IPCC include three distinct types of
messaging:
• Routing control
Routing control messages provide a way for Cisco CallManager to request routing instructions from
IPCC.
• Device and call monitoring
Device monitoring messages provide a way for Cisco CallManager to notify IPCC about state
changes of a device (IP phone) or a call.
• Device and call control
Device control messages provide a way for Cisco CallManager to receive instructions from IPCC
on how to control a device (IP phone) or a call.
A typical IPCC call includes all three types of JTAPI communication within a few seconds. For example,
when a new call arrives, Cisco CallManager requests routing instructions from the ICM. When Cisco
CallManager receives the routing response, it attempts delivery of the call to the agent phone by
instructing the phone to begin ringing. At that point, Cisco CallManager notifies the ICM that the device
(IP phone) has started ringing, and that notification enables the Answer button on the agent's desktop
application. When the agent clicks the Answer button, the ICM instructs Cisco CallManager to take the
device (IP phone) off-hook and answer the call.
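The message sequence in this example can be summarized as follows. The table below is a simplified model of the exchange; the labels are descriptive only and do not correspond to actual JTAPI message names:

```python
# Simplified model of the JTAPI exchange for one IPCC call.
# (message type, direction, event) -- descriptive labels only.
CALL_FLOW = [
    ("routing control",   "CallManager -> ICM", "route request for new call"),
    ("routing control",   "ICM -> CallManager", "routing response (label)"),
    ("device monitoring", "CallManager -> ICM", "agent phone ringing; enables Answer button"),
    ("device control",    "ICM -> CallManager", "agent clicked Answer; take phone off-hook"),
]

for msg_type, direction, event in CALL_FLOW:
    print(f"{msg_type:18} {direction:22} {event}")
```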
In order for the routing control communication to occur, Cisco CallManager requires the configuration
of a CTI Route Point. A CTI Route Point is associated with a specific JTAPI user ID, and this association
enables Cisco CallManager to know which application provides routing control for that CTI Route Point.
Directory (Dialed) Numbers (DNs) are then associated with the CTI Route Point. When a DN is associated
with a CTI Route Point that is in turn associated with the ICM JTAPI user ID, Cisco CallManager can
generate a route request to the ICM when a new call to that DN arrives.
In order for the IP phones (devices) to be monitored and controlled, they also must be associated in Cisco
CallManager with a JTAPI user ID. In an IPCC environment, the IP phones are associated with the ICM
JTAPI user ID. When an agent logs in from the desktop, the Cisco CallManager PIM requests Cisco
CallManager to allow the PIM to begin monitoring and controlling that device (IP phone). Until the
login has occurred, Cisco CallManager does not allow the ICM to monitor or control that IP phone. If
the device has not been associated with the ICM JTAPI user ID, then the agent login request will fail.
Because the IP IVR also communicates with Cisco CallManager using the same JTAPI protocol, these
same three types of communication also occur with the IP IVR. Unlike the ICM, the IP IVR provides
both the application itself and the devices to be monitored and controlled.
The devices that the ICM monitors and controls are the physical IP phones. The IP IVR does not have
real physical ports like a traditional IVR. Its ports are logical ports (independent software tasks or
threads running on the IP IVR application server) called CTI Ports. For each CTI Port on the IP IVR,
there needs to be a CTI Port device defined in Cisco CallManager.
Unlike a traditional PBX or telephony switch, Cisco CallManager does not select the IP IVR port to
which it will send the call. Instead, when a call needs to be made to a DN that is associated with a CTI
Route Point that is associated with an IP IVR JTAPI user, Cisco CallManager asks the IP IVR (via JTAPI
routing control) which CTI Port (device) should handle the call. Assuming the IP IVR has an available
CTI Port, the IP IVR will respond to the Cisco CallManager routing control request with the Cisco
CallManager device identifier of the CTI Port that is going to handle that call.
When an available CTI Port is allocated to the call, an IP IVR workflow is started within the IP IVR.
When the IP IVR workflow executes the accept step, a JTAPI message is sent to Cisco CallManager to
answer the call on behalf of that CTI Port (device). When the IP IVR workflow wants the call transferred
or released, it again instructs Cisco CallManager accordingly. These scenarios are examples of device
and call control performed by the IP IVR.
When a caller releases the call while interacting with the IP IVR, the voice gateway detects the caller
release and notifies Cisco CallManager via H.323 or Media Gateway Control Protocol (MGCP), which
then notifies the IP IVR via JTAPI. When DTMF tones are detected by the voice gateway, it notifies
Cisco CallManager via H.245 or MGCP, which then notifies the IP IVR via JTAPI. These scenarios are
examples of device and call monitoring performed by the IP IVR.
In order for the CTI Port device control and monitoring to occur, the CTI Port devices on Cisco
CallManager must be associated with the appropriate IP IVR JTAPI user ID. If you have two 150-port
IP IVRs, you would have 300 CTI ports. Half of the CTI ports (150) would be associated with JTAPI
user IP IVR #1, and the other 150 CTI ports would be associated with JTAPI user IP IVR #2.
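The port-to-user association from this example can be expressed as a simple mapping. The user ID names below are illustrative placeholders, not actual Cisco CallManager user names:

```python
def cti_port_assignments(ports_per_ivr: int, num_ivrs: int) -> dict:
    """Each IP IVR's CTI Ports must be associated with that IVR's own
    JTAPI user ID in Cisco CallManager (user names here are illustrative)."""
    return {f"jtapi_user_ipivr_{i + 1}": ports_per_ivr for i in range(num_ivrs)}

assignments = cti_port_assignments(150, 2)
print(assignments)                 # {'jtapi_user_ipivr_1': 150, 'jtapi_user_ipivr_2': 150}
print(sum(assignments.values()))   # 300 CTI Port devices in total
```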
While Cisco CallManager can be configured to route calls to IP IVRs on its own, routing of calls to the
IP IVRs in an IPCC environment should be done by the ICM (even if you have only one IP IVR and all
calls require an initial IVR treatment). Doing so will ensure proper IPCC reporting. For deployments
with multiple IP IVRs, this routing practice also allows the ICM to load-balance calls across the multiple
IP IVRs.
Device Targets
Each IP phone must be configured in the ICM Central Controller database as a device target. Only one
extension on the IP phone may be configured as an ICM device target. Additional extensions may be
configured on the IP phone, but those extensions will not be known to the ICM software and, thus, no
monitoring or control of those additional extensions is possible. Because the ICM provides call treatment
for Reroute On No Answer (RONA), it is not necessary to configure call forwarding on ring-no-answer
in the Cisco CallManager configuration for the IP phones. Unless call center policy permits warm
(agent-to-agent) transfers, the IPCC extension should not be published or dialed directly; only the ICM
software should route calls to this IPCC phone extension.
At agent login, the agent ID and IP phone extension are associated, and this association is released when
the agent logs out. This feature allows the agent to log in at another phone and allows another agent to
log in at that same phone. At agent login, the Cisco CallManager PIM requests Cisco CallManager to
allow it to begin monitoring that IP phone and to provide device and call control for that IP phone. As
mentioned previously, each IP phone must be mapped to the ICM JTAPI user ID in order for the agent
login to be successful.
Labels
Labels are the response to a route request from a routing client. The label is a pointer to the destination
where the call is to be routed (basically, the number to be dialed by the routing client). Many labels in
an IPCC environment correspond to the IPCC phone extensions so that Cisco CallManager and IP IVR
can route or transfer calls to the phone of an agent who has just been selected for a call.
Often, the way a call is routed to a destination depends upon where the call originated and where it is
being terminated. This is why IPCC uses labels. For example, suppose we have an environment with two
regionally separated Cisco CallManager clusters, Site 1 and Site 2. An IP phone user at Site 1 will
typically just dial a four-digit extension to reach another IP phone user at Site 1. In order to reach an IP
phone user at Site 2 from Site 1, users might have to dial a seven-digit number. To reach an IP phone
user at either site from a PSTN phone, users might have to dial a 10-digit number. This example shows
how a different label is needed, depending upon where the call originates and terminates.
Each combination of device target and routing client must have a label. For example, a device target in
an IPCC deployment with a two-node Cisco CallManager cluster and two IP IVRs will require three
labels. If you have 100 device targets (IP phones), you would need 300 labels. If there are two regionally
separated Cisco CallManager clusters, each with two IP IVRs and 100 device targets per site, then we
would need 1200 labels for the six routing clients and 200 device targets (assuming we wanted to be able
to route a call from any routing client to any device target). If calls are to be routed to device targets only
at the same site as the routing client, then we would need only 600 labels (three routing clients to 100
device targets, and then doubled for Site 2).
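The label counts in these examples all follow the same rule: one label per combination of device target and routing client. A minimal sketch of the arithmetic:

```python
def labels_required(device_targets: int, routing_clients: int) -> int:
    """Each (device target, routing client) combination needs one label."""
    return device_targets * routing_clients

# One cluster plus two IP IVRs = 3 routing clients.
print(labels_required(100, 3))    # 300 labels for 100 device targets

# Two sites, each with 100 device targets and 3 routing clients
# (6 routing clients and 200 device targets in total):
print(labels_required(200, 6))        # 1200 labels, any-to-any routing
print(2 * labels_required(100, 3))    # 600 labels, site-local routing only
```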
Labels are also used to route calls to IP IVR CTI Ports. Details on configuring labels are provided in the
IPCC Installation Guide, available on Cisco.com. A bulk configuration tool is also available to simplify
the configuration of the labels.
Agents
Agents are configured within the ICM and are associated with one specific Cisco CallManager PIM (that
is, one Cisco CallManager cluster). Within the ICM configuration, you also configure the password for
the agent to use at login.
Skill Groups
Skill groups are configured within the ICM so that agents with similar skills can be grouped together.
Agents can be associated with one or more skill groups. Changes made to an agent’s skill group
association while the agent is logged in are not activated until the agent logs out and logs in again.
Skill groups are associated with a specific Cisco CallManager PIM. Skill groups from multiple PIMs
can be grouped into Enterprise Skill Groups. Creating and using Enterprise Skill Groups can simplify
routing and reporting in some scenarios.
IPCC Routing
The example routing script in Figure 1-4 illustrates how IPCC routes calls. In this routing script, the
Cisco CallManager PIM (or cluster) is the routing client. Upon receipt of the route request, the ICM
maps the DN to a call type and then maps the call type to this routing script. In this routing script, the
ICM router first uses a Select node to look for the Longest Available Agent (LAA) in the BoatSales skill
group on the CCM_PG_1 peripheral (or cluster). The ICM router determines that agent 111 is the LAA.
Agent 111 is currently logged in from device target 1234 (Cisco CallManager phone extension 1234 in
this scenario). The ICM router then determines the label to be returned, based upon the device target and
routing client combination. The appropriate label is then returned to the routing client (Cisco
CallManager cluster) so that the call can be routed properly to that IP Phone (device target).
Translation Routing and Queuing
If no agents are available, then the router exits the Select node and transfers the call to an IP IVR to begin
queuing treatment. The transfer is completed using the Translation Route to VRU node. The Translation
Route to VRU node returns a unique translation route label to the original routing client, the Cisco
CallManager cluster. The translation route label will equal a DN configured in Cisco CallManager. In
Cisco CallManager, that DN is mapped to a CTI Route Point that is associated with the JTAPI user for
the IP IVR to which the call is being transferred.
Cisco CallManager and IP IVR will execute the JTAPI routing control messaging to select an available
CTI Port.
When the call is successfully transferred to the IP IVR, the IP IVR translation routing application first
sends a request instruction message to the ICM via the SCI between the IP IVR and the ICM. The ICM
identifies the DN as being the same as the translation route label and is then able to re-associate this call
with the call that was previously being routed. The ICM then re-enters the routing script that was
previously being run for this call. The re-entry point is the successful exit path of the Translation Route
to VRU node. (See Figure 1-5.) At this point, the routing client has changed from the Cisco CallManager
cluster to IPIVR1.
While the call was being transferred, the routing script was temporarily paused. After the transfer to the
IP IVR is successfully completed, the IP IVR becomes the routing client for this routing script. Next the
routing script queues the call to the BoatSales skill group and then instructs the IP IVR to run a specific
queue treatment via the Run VRU Script node. Eventually agent 111 becomes available, and as in the
previous example, the label to be returned to the routing client is identified based upon the combination
of device target and routing client. Note that the routing client is now the IP IVR. The label returned
(1234) when agent 111 becomes available causes the IP IVR to transfer the call to agent 111 (at
extension 1234).
(Figure: Translation routing. The original route request comes from the original routing client, the Cisco CallManager cluster.)
For each combination of Cisco CallManager cluster and IP IVR, a translation route and a set of labels is
required. For example, if a deployment has one Cisco CallManager cluster and four IP IVRs, then four
translation routes and sets of labels are required.
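This rule also reduces to a simple product, sketched below for illustration:

```python
def translation_routes(num_clusters: int, num_ivrs: int) -> int:
    """One translation route (and its set of labels) is required per
    Cisco CallManager cluster / IP IVR combination."""
    return num_clusters * num_ivrs

# One cluster and four IP IVRs require four translation routes.
print(translation_routes(1, 4))  # 4
```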
For deployments with multiple IP IVRs, the ICM routing script should select the IP IVR with the greatest
number of idle IP IVR ports and then translation-route the call to that specific IP IVR. If no IP IVR ports
are available, then the script should execute a Busy node. If a large number of calls are executing Busy
nodes, you should increase your IP IVR port capacity.
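The selection logic described above can be sketched as follows. This is an illustrative model of the routing-script behavior, not ICM script syntax, and the IVR names are hypothetical:

```python
def select_ip_ivr(idle_ports: dict):
    """Pick the IP IVR with the most idle ports. Returning None models
    the case where no ports are available and the routing script
    should execute a Busy node. (IVR names are illustrative.)"""
    available = {ivr: n for ivr, n in idle_ports.items() if n > 0}
    if not available:
        return None
    return max(available, key=available.get)

print(select_ip_ivr({"IPIVR1": 12, "IPIVR2": 37}))  # IPIVR2
print(select_ip_ivr({"IPIVR1": 0, "IPIVR2": 0}))    # None -> Busy node
```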
While the IVR is playing the queuing treatment (announcements) to the caller, the ICM waits for an
available agent of a particular skill (as defined within the routing script for that call). When an agent
with the appropriate skill becomes available, the ICM reserves that agent and then instructs the IVR to
transfer the voice path to that agent's phone.
Note Cisco recommends that all call control (answer, release, transfer, conference, and so on) be done from
the agent desktop application.
When a transferring agent wants to transfer a call to another skill group or agent, the transferring agent
clicks the Transfer button on the IPCC Agent Desktop. A dialog box allows the transferring agent to
enter the dialed number of a skill group or agent. An alphanumeric dialed number string (such as "sales"
or "service") is also valid. The transferring agent also selects whether this transfer is to be a single-step
(blind) transfer or a consultative transfer. (Single-step transfer is the default.) The transferring agent then
clicks OK to complete (single-step) or initiate (consultative) the transfer. The transfer request message
flows from the transferring agent desktop to the CTI Server and then to the Cisco CallManager PIM.
Any call data that was delivered to the transferring agent or added by the transferring agent is sent along
with the transfer request to the Cisco CallManager PIM.
For help with designing a dial plan for your IPCC deployment, consult your Cisco Systems Engineer
(SE).
Note Changes to the agent desk settings profile do not take effect until the agent logs out and logs in again.
Post Route
Entries in the Dialed Number Plan must also be configured to indicate whether a post-route is required.
For dialed numbers to be used in transfer scenarios, Cisco recommends that the post-route option be set
to Yes for transfers. When this field is set to Yes, the dialed number to be used for the route request must
be supplied in the Dialed Number column of the Dialed Number Plan Editor.
Route Request
Assuming a match is found in the DNP for the transfer, the DNP type is allowed for the transferring
agent, and the post-route option is set to Yes, the PIM logic will then generate a route request to the ICM
central controller using the dialed number specified in this same DNP entry.
Upon receipt of the route request, the ICM router matches the dialed number to a call type and executes
the appropriate routing script to find an appropriate target agent for the call. Within the routing script,
any of the call data collected so far could be used in the intelligent routing of the call. The ICM router
will determine which device target (phone extension and desktop) the agent is logged into and will then
return the label that points to that device target to the Cisco CallManager PIM.
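The DNP-match and route-request sequence above can be sketched as a short decision pipeline. This is a conceptual illustration only; the DNP table, type names, and target table below are hypothetical stand-ins for ICM configuration, not a real API.

```python
# Illustrative sketch of the Route Request flow: DNP match -> permission
# check -> post-route check -> route request -> label for the device target.
def handle_transfer(dialed: str, allowed_types: set, dnp: dict, targets: dict):
    entry = dnp.get(dialed)
    if entry is None:
        return None                      # no DNP match: no ICM route request
    if entry["type"] not in allowed_types:
        return None                      # DNP type not allowed for this agent
    if not entry["post_route"]:
        return None                      # post-route No: no route request
    # The PIM sends a route request using the DNP entry's dialed number; the
    # router maps it to a call type, runs the routing script, and returns a
    # label pointing at the device target where the agent is logged in.
    return targets.get(entry["dialed_number"])

dnp = {"sales": {"type": "PBX", "post_route": True, "dialed_number": "8005551000"}}
targets = {"8005551000": "label:ext4501"}
print(handle_transfer("sales", {"PBX"}, dnp, targets))  # label:ext4501
```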
At this point there are numerous scenarios that can occur, depending upon the type of transfer being
performed, as described in the following sections:
• Single-Step (Blind) Transfer, page 1-17
• Consultative Transfer, page 1-18
placed on hold. When the target agent's phone begins ringing, the original caller hears the ringing
(assuming auto-answer is not enabled). The target agent receives a screen pop with all call data, and the
Answer button on their agent desktop is enabled when the phone begins ringing. Upon answering the
call, the target agent is speaking with the original caller and the transfer is then complete. If the target
agent does not answer, then RONA (reroute on no answer) call rerouting logic will take over.
If auto-answer is enabled, the original caller and the target agent do not hear any ringing; the call is just
connected between the original caller and the target agent.
If the agent is transferring the call to a generic (skill-group) DN to find an available agent with a
particular skill, but no such agent is currently available, then the ICM routing script should be configured
to translation-route the call to an IP IVR for queuing treatment. The call would still be released from the
transferring agent desktop almost immediately. Any call data collected by the transferring agent would
automatically be passed to the IVR. The caller will not hear any ringback tones because the IP IVR CTI
Port will answer immediately. When the target agent becomes ready, the ICM will instruct the IVR to
transfer the call, and the ICM will pop the agent desktop with all call data.
If the agent has transferred the call to a number that is not within the ICM Dialed Number Plan, then the
caller will be transferred anyway. The destination for the transferred call depends upon the number that
was dialed and what is configured in the Cisco CallManager dial plan. Transfers not using the dialed
number plan are not recommended because of agent roaming restrictions, call data not following the call,
and reporting limitations.
Consultative Transfer
Some parts of the message flow for a consultative transfer are similar to the message flow for a blind
transfer. When the Cisco CallManager PIM receives the label from the ICM router indicating where to
transfer the call, the Cisco CallManager PIM tells Cisco CallManager to initiate a consultative transfer
to the number specified in the label. Cisco CallManager places the original caller on hold and makes a
consultative call to the number specified in the label. The caller generally hears tone on hold while the
transfer is being completed.
When the target agent phone begins ringing, Cisco CallManager generates a Consult Call Confirmation
message and a Device Ringing message.
The consult call confirmation message causes the Cisco CallManager PIM to notify the transferring
agent's desktop that the call is proceeding, and it enables the Transfer Complete button. The transferring
agent can hear the target agent's phone ringing (assuming auto-answer is not enabled for the target
agent). At any time after this, the agent can click the Transfer Complete button to complete the transfer
(before or after the target answers their phone).
The Device Ringing message causes the Cisco CallManager PIM to pop the target agent's desktop with
call data and to enable their Answer button (assuming auto-answer is not enabled). When the target agent
clicks the Answer button (or auto-answer is invoked), a voice path between the transferring agent and
target agent is established (assuming the transferring agent has not clicked the Transfer Complete
button).
Generally the transferring agent will not click the Transfer Complete button before the target agent
answers, because the usual reason for choosing a consultative transfer is to speak with the target agent
before completing the transfer. However, the transferring agent can click the Transfer Complete button
at any time after it is enabled.
If the agent is transferring the call to a generic DN to find an available agent with a particular skill, but
no such agent is currently available, then the ICM routing script should be configured to route the call
to an IVR for queuing. In this scenario, the transferring agent would hear the IP IVR queue
announcements. The transferring agent could press the Transfer Complete button at any time to complete
the transfer. The caller would then begin hearing the IP IVR queuing announcements. Upon availability
of an appropriately skilled agent, the IP IVR transfers the call to this target agent and pops any call data
onto their screen.
If the agent is transferring the call to a number that is not in the ICM Dialed Number Plan and a number
that is not valid on the Cisco CallManager, the transferring agent will hear the failed consultation call
and will be able to reconnect with the original caller, as explained in the section on Reconnect, page
1-19.
Reconnect
During the consultation leg of a consultative transfer, the transferring agent can reconnect with the caller
and release the consult call leg. To do so, the agent simply clicks the Reconnect button. This action
causes the agent desktop to instruct the Cisco CallManager PIM to instruct Cisco CallManager to release
the consultation call leg and to reconnect the agent with the original caller.
This is basically the process an agent should use when they want to make a consultation call but do not
plan to complete the transfer. After a call is successfully reconnected, the transferring agent’s desktop
functionality will be exactly the same as before they requested the transfer. Therefore, the transferring
agent can later request another transfer, and there is no limit to the number of consultation calls an agent
can make.
Consultative transfers and reconnects are all done from the agent desktop and use the single Cisco
CallManager extension that is associated with the IPCC. The IPCC system does not support allowing the
transferring agent to place the original caller on hold and then use a second extension on their hardware
phone to make a consultation call. The hardware phone offers a button to allow this kind of transfer, but
it is not supported in an IPCC environment. If an agent transfers a call in this way, any call data will
be lost because the ICM did not route the call.
Alternate
Alternate is the ability for the agent to place the consultation call leg on hold and then retrieve the
original call leg while in the midst of a consultative transfer. The agent can then alternate again to place
the original caller back on hold and retrieve the consultation call leg. An agent can alternate a call as
many times as they would like.
When the transferring agent has alternated back to the original caller, the only call controls (buttons) that
are enabled are Release and Alternate. The Transfer (Complete) and Reconnect controls will be disabled.
The Alternate control will alternate the transferring agent back to talking with the consulted party. When
the agent has alternated back to the consultation leg, the Release, Alternate, Transfer, and Reconnect call
controls will be enabled. The Alternate control will alternate the transferring agent back to talking with
the original caller. The Transfer control will complete the transfer, and the Reconnect button will drop
the consulted party and reconnect the agent with the original caller.
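The enabled-control rules described above amount to a small state table keyed on which call leg the transferring agent is currently talking to. The following Python sketch restates them; the control names come from the text, but the data structure itself is just an illustration.

```python
# Which desktop call controls are enabled, by active call leg, during a
# consultative transfer (restating the rules in the text above).
ENABLED_CONTROLS = {
    "original": {"Release", "Alternate"},                            # talking to original caller
    "consult":  {"Release", "Alternate", "Transfer", "Reconnect"},   # talking to consulted party
}

def controls_for(active_leg: str) -> set:
    return ENABLED_CONTROLS[active_leg]

# After alternating back to the original caller, Transfer and Reconnect are disabled:
print(sorted(controls_for("original")))  # ['Alternate', 'Release']
```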
Non-ICM Transfers
Transfers to numbers not in the DNP or to numbers configured in the DNP with post-route set to No are
allowed but do not result in an ICM-routed call. In these scenarios, the PIM simply sends a call transfer
request directly to Cisco CallManager and uses the dialed number from the transfer dialog on the agent
desktop. Call data is lost if the ICM does not route the call. Cisco recommends that any dialed number
for a transfer should have a match in the DNP, that it be marked for post-route, and that it have a DNP
type that is allowed for the transferring agent (based on their agent desk settings).
Agent-to-Agent Transfers
If the transfer is to a specific agent, then the agent requesting the transfer must enter the agent ID into
the transfer dialog box. The DNP entry matching the dialed number (agent ID) must have DNP type
equal to PBX. This causes the PIM to place the dialed number (agent ID) into the CED field before it
sends the route request to the ICM router. In the script editor, use the agent-to-agent routing node and
specify the CED field as the location of the agent ID so that the ICM router will route this call properly.
Agent IDs should not match any of the extensions on the Cisco CallManager cluster. If you begin all
agent IDs with the same number and they all have the same length, you could set up a generic wildcard
string that matches all agent IDs so that you need only one entry in the DNP for agent-to-agent routing.
If your environment has multiple PIMs, then you must use an agent ID number plan to determine which
PIM contains this agent. Agent IDs by themselves are not unique. Agent IDs are associated with a
specific PIM and can be reused on other PIMs. By not repeating agent IDs across the enterprise and by
setting up a consistent agent ID assignment plan (such as all PIM 1 agent IDs begin with a 1, all PIM 2
agent IDs begin with a 2, and so on), you can parse the CED field in the script editor to determine which
PIM contains the agent. The parsing may be done via a series of "if" nodes in the script editor or via a
"route select" node. The agent-to-agent node requires the PIM to be specified.
In the event that the target agent is not in a ready state, the agent-to-agent script editor node allows
alternative routing for the call.
Transfer Reporting
After a call transfer is completed, a call detail record for the original call leg will exist and a new call
detail record will be opened for the new call leg. The two call records are associated with one another
via a common call ID assigned by the ICM. The time during the consultation call leg, before the transfer
is completed, is considered as talk time for the transferring agent.
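The relationship between the two call detail records can be pictured as a join on the common ICM call ID. The record shapes and field names below are hypothetical, purely to illustrate the linkage; actual IPCC reporting schemas differ.

```python
# Illustrative join of two call detail records by the common ICM call ID.
# Field names are invented for this example, not actual schema columns.
records = [
    {"icm_call_id": 7001, "leg": "original", "agent": "1001", "talk_secs": 180},
    {"icm_call_id": 7001, "leg": "transfer", "agent": "2002", "talk_secs": 95},
]

def legs_for_call(call_id, recs):
    return [r for r in recs if r["icm_call_id"] == call_id]

# Agent 1001's 180 s of talk time includes the consultation leg that took
# place before the transfer completed, per the attribution rule above.
print([r["leg"] for r in legs_for_call(7001, records)])  # ['original', 'transfer']
```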
For more details, refer to the IPCC Reporting Guide, available online at Cisco.com.
Gatekeeper Controlled
In the gatekeeper-controlled model, an independent device (the gatekeeper) performs call admission
control. This model is used for distributed call processing deployments. Before sending the call
out the gateway or intercluster trunk, Cisco CallManager asks the gatekeeper whether there is enough
bandwidth for the call to go through the WAN to another site. (See Figure 1-6.)
(Figure 1-6 diagram: a Cisco CallManager cluster with a voice mail server and a gatekeeper, connected through routers/gateways to IP phones and servers at multiple sites.)
If the gatekeeper rejects the call, then Cisco CallManager can perform digit manipulation on the dialed
digits and send this call transparently out the PSTN.
For IPCC, it is important to define this alternate route and digit manipulation within the dial plan in
case the gatekeeper does not allow the call onto the WAN. Calls are sent to agents and IVRs via routing
clients (CTI Desktop, IVR, or CTI Route Point), which cannot hang up and redial the call. Without the
alternate route, the caller would receive busy tone and would not be routed to the peripheral target
(agent or IVR).
Sending calls out to the PSTN consumes two gateway ports per call, because each call must both enter
and leave the main or branch site through voice gateway ports. These ports remain in use if the call is
later transferred to another agent or IVR port at another site within the network.
The gatekeeper should be configured to allow enough bandwidth for call center traffic to go through.
The total amount of bandwidth needed would depend on whether incoming traffic from the PSTN is
routed through the WAN or if the WAN is used for inter-site transfers and conferences between agents.
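The admit-or-fallback behavior described in this section can be sketched as a simple decision function. The bandwidth figures and the digit-manipulation rule (prefixing a PSTN access code) below are illustrative assumptions, not configuration taken from this document.

```python
# Sketch of gatekeeper-style admission with transparent PSTN fallback.
# The "91" prefix stands in for whatever digit manipulation the dial plan
# defines; all numbers here are illustrative.
def admit_or_fallback(dialed: str, call_bw_kbps: int, available_kbps: int):
    if call_bw_kbps <= available_kbps:
        return ("WAN", dialed)            # gatekeeper admits the call onto the WAN
    # Gatekeeper rejected the call: CallManager manipulates the dialed
    # digits and sends the call out the PSTN instead.
    return ("PSTN", "91" + dialed)

print(admit_or_fallback("5551234", call_bw_kbps=24, available_kbps=16))
# -> ('PSTN', '915551234')
```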
Locations Controlled
For centralized call processing deployments, the locations-controlled model is used. In this model, Cisco
CallManager (not the gatekeeper) decides if there is enough bandwidth available on the WAN to send
the call. If there is not enough bandwidth, the call will fail. Transparent failover to the PSTN is not
available with locations-based call admission control. (See Figure 1-7.)
(Figure 1-7 diagram: a central site with a Cisco CallManager cluster and voice mail server, connected through routers/gateways across the IP WAN to a remote site; the WAN links are the areas where bandwidth must be provisioned.)
For IPCC, if the call fails due to insufficient bandwidth, the caller receives busy tone because the call is
routed by IVR or the CTI Desktop application, and there is no mechanism for the routing client to
disconnect the call and then dial again.
Therefore, it is important to calculate the bandwidth allocation for each branch office properly. The
number of simultaneous calls to each branch should be calculated. Inter-site transfer and conference
situations as well as normal office traffic should also be taken into account. Ideally, agent phones should
be allocated as one "location" within the location configuration of Cisco CallManager to make sure that
traffic generated to and from office workers' phones does not interfere with the bandwidth allocated to
call center traffic.
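The per-branch calculation described above can be sketched as simple arithmetic. The per-call figures below are common planning values (roughly 24 kbps for G.729 and 80 kbps for G.711 including IP overhead) and the 20% transfer/conference headroom is an assumption for illustration; they are not taken from this document.

```python
# Rough per-branch bandwidth provisioning arithmetic (values illustrative).
PER_CALL_KBPS = {"g729": 24, "g711": 80}   # approximate, including IP overhead

def branch_bandwidth_kbps(simultaneous_calls: int, codec: str,
                          transfer_overhead: float = 0.2) -> float:
    """Peak simultaneous calls plus headroom for inter-site transfers
    and conferences, per the considerations above."""
    base = simultaneous_calls * PER_CALL_KBPS[codec]
    return base * (1 + transfer_overhead)

print(branch_bandwidth_kbps(10, "g729"))  # 288.0 kbps for 10 G.729 calls + 20% headroom
```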
There are numerous ways that IPCC can be deployed, but the deployments can generally be categorized
into the following major types or models:
• Single Site
• Multi-Site Centralized Call Processing
• Multi-Site Distributed Call Processing
• Clustering over the WAN
Many variations or combinations of these deployment models are possible. The primary factors that
cause variations within these models are as follows:
• Locations of IPCC servers
• Locations of voice gateways
• Choice of inter-exchange carrier (IXC) or local exchange carrier (LEC) trunks
• Pre-routing availability
• IVR queuing platform and location
• Transfers
• Traditional ACD, PBX, and IVR integration
• Sizing
• Redundancy
This chapter discusses the impact of these factors (except for sizing) on the selection of a design. With
each deployment model, this chapter also lists considerations and risks that must be evaluated using a
cost/benefit analysis. Scenarios that best fit a particular deployment model are also noted.
A combination of these deployment models is also possible. For example, a multi-site deployment may
have some sites that use centralized call processing (probably small sites) and some sites that use
distributed call processing (probably larger sites). Examples of scenarios where combinations are likely
are identified within each section.
Also in this chapter is a section on integration of traditional ACD and IVR systems into an IPCC
deployment, with considerations on hybrid PBX/ACD deployments. Sizing and redundancy are
discussed in later chapters of this IPCC design guide. For more information on the network infrastructure
required to support an IPCC solution, refer to the Cisco Network Infrastructure Quality of Service
Design guide, available at
http://www.cisco.com/go/srnd
For more information on deployment models for IPCC and IP Telephony, refer to the Cisco IP Telephony
Solution Reference Network Design (SRND) guide, available at
http://www.cisco.com/go/srnd
Single Site
A single-site deployment refers to any scenario where all voice gateways, agents, desktops, IP Phones,
and call processing servers (Cisco CallManager, Intelligent Contact Management (ICM), and IP IVR or
Internet Service Node (ISN)) are located at the same site and have no WAN connectivity between any
IPCC software modules. Figure 2-1 illustrates this type of deployment.
(Figure 2-1 diagram: a single site containing the CallManager cluster, ICM, IVR/ISN, AW/HDS, a voice gateway to the PSTN, and an agent IP phone; the legend distinguishes signaling/CTI, IP voice, and TDM voice. Components are described below.)
Figure 2-1 shows two IP IVRs, a Cisco CallManager cluster, redundant ICM Proggers (combined
Router, Logger, and PG servers), an Administrative Workstation (AW) and Historical Data Server
(HDS), and a direct connection to the PSTN from the voice gateways. The ICM Progger in this scenario
is running the following major software processes:
• Router
• Logger
• Cisco CallManager Peripheral Interface Manager (PIM)
• Two IVR or ISN PIMs
• CTI Server
• CTI Object Server (CTI OS) or Cisco Agent Desktop Servers
Within this model, many variations are possible. For example, the ICM Central Controller and
Peripheral Gateways (PGs) could be split onto separate servers. For information on when to install the
ICM Central Controller and PG on separate servers, refer to the chapter on Sizing IPCC Components
and Servers, page 5-1.
The ICM could also be deployed in a simplex fashion instead of redundantly. For information on the
benefits and design for IPCC redundancy, refer to the chapter on Design Considerations for High
Availability, page 3-1.
This model does not specify the number of Cisco CallManager nodes, the server hardware models used,
or the number of IP IVR or ISN servers. For information on determining the number and type of servers
required, refer to the chapter on Sizing IPCC Components and Servers, page 5-1.
Also not specified in this model is the specific data switching infrastructure required for the LAN, the
type of voice gateways, or the number of voice gateways and trunks. Cisco campus design guides and
IP Telephony design guides are available to assist in the design of these components. The chapter on
Sizing Call Center Resources, page 4-1, discusses how to determine the number of gateway ports.
Another variation in this model is to have the voice gateways connected to the line side of a PBX instead
of the PSTN. Connection to multiple PSTNs and a PBX all from the same single-site deployment is also
possible. For example, a deployment can have trunks from a local PSTN, a toll-free PSTN, and a
traditional PBX/ACD. For more information, see Traditional ACD Integration, page 2-34, and
Traditional IVR Integration, page 2-35.
This deployment model also does not specify the type of signaling (ISDN, MF, R1, and so on) to be used
between the PSTN and voice gateway or the specific signaling (H.323 or MGCP) to be used between the
voice gateway and Cisco CallManager.
The amount of digital signal processor (DSP) resources required for placing calls on hold, consultative
transfers, and conferencing is also not specified in this model. For information on sizing of these
resources, refer to the Cisco IP Telephony Solution Reference Network Design (SRND) guide, available
at
http://www.cisco.com/go/srnd
The main advantage of the single-site deployment model is that there is no WAN connectivity required.
Given that there is no WAN in this deployment model, there is generally no need to use G.729 or any
other compressed Real-Time Transport Protocol (RTP) stream, so transcoding would not be required.
Transfers
In this deployment model (as well as in the multi-site centralized call processing model), both the
transferring agent and target agent are on the same PIM. This also implies that both the routing client
and the peripheral target are the same peripheral (or PIM). The transferring agent generates a transfer to
a particular dialed number (for example, looking for any specialist in the specialist skill group).
Assuming that a match is found in the Dialed Number Plan (DNP) for the transfer request, that the DNP
type is allowed for the transferring agent, and that the post-route option is set to yes, the Cisco
CallManager PIM logic will generate a route request to the ICM router. The ICM router will match the
dialed number to a call type and activate the appropriate routing script. The routing script looks for an
available specialist.
If a target agent (specialist) is available to receive the transferred call, then the ICM router will return
the appropriate label to the routing client (the Cisco CallManager PIM). In this scenario, the label is
typically just the extension of the phone where the target agent is currently logged in. Upon receiving
the route response (label), the Cisco CallManager PIM will then initiate the transfer by sending a JTAPI
transfer request to the Cisco CallManager.
At the same time that the label is returned to the routing client, pre-call data (which includes any call
data that has been collected for this call) is delivered to the peripheral target. In this scenario, the routing
client and peripheral target are the same Cisco CallManager PIM. This is because the transferring agent
and the target agent are both associated with the same PIM. In some of the more complex scenarios to
be discussed in later sections, sometimes the routing client and peripheral target are not the same.
If a target agent is not available to receive the transferred call, then the ICM routing script is typically
configured to transfer the call to an IVR so that queue treatment can be provided. In this scenario, the
label is a dialed number that will instruct the Cisco CallManager to transfer the call to an IVR. Also in
this scenario, the routing client and peripheral target are different. The routing client is the Cisco
CallManager PIM, while the peripheral target is the specific IVR PIM to which the call is being
transferred.
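The label decision described in the last two paragraphs can be summarized in a short sketch: an available agent yields an extension label (routing client and peripheral target are the same CallManager PIM), while no available agent yields an IVR dialed number (peripheral target becomes the IVR PIM). All values and names below are illustrative.

```python
# Sketch of the router's label decision for a transfer route request.
def route_label(available_agents: dict, ivr_dn: str):
    """Return (label, peripheral_target) per the scenarios above."""
    for agent_id, extension in available_agents.items():
        # Target agent available: the label is simply the extension where
        # the agent is logged in; routing client and peripheral target are
        # both the Cisco CallManager PIM.
        return (extension, "ccm_pim")
    # No agent ready: the label directs the call to an IVR for queue
    # treatment; the peripheral target is now the IVR PIM.
    return (ivr_dn, "ivr_pim")

print(route_label({}, ivr_dn="7800"))             # ('7800', 'ivr_pim')
print(route_label({"2101": "4501"}, "7800"))      # ('4501', 'ccm_pim')
```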
Figure 2-2 Multi-Site Deployment with Centralized Call Processing and Centralized Voice Gateways
Advantages
• Only a small data switch and router, IP Phones, and agent desktops are needed at remote sites where
only a few agents exist, and only limited system and network management skills are required at
remote sites.
• No PSTN trunks are required directly into these small remote sites and offices, except for local
POTS lines for emergency services (911) in the event of a loss of the WAN link.
• PSTN trunks are used more efficiently because the trunks for small remote sites are aggregated.
• IPCC Queue Points (IP IVR or ISN) are used more efficiently because all Queue Points are
aggregated.
• No VoIP WAN bandwidth is used while calls are queuing (initial or subsequent).
As with the single-site deployment model, all the same variations exist. For example, multi-site
deployments can run the ICM software all on the same server or on multiple servers. The ICM software
can be installed as redundant or simplex. The number of Cisco CallManager and IP IVR or ISN servers
is not specified by the deployment model, nor are the LAN/WAN infrastructure, voice gateways, or
PSTN connectivity. For other variations, see Single Site, page 2-2.
Best Practices
• VoIP WAN connectivity is required for RTP traffic to agent phones at remote sites.
• RTP traffic to agent phones at remote sites may require compression to reduce VoIP WAN
bandwidth usage. It may be desirable for calls within a site to be uncompressed, so transcoding
might also be required depending upon how the IP Telephony deployment is designed.
• Skinny Client Control Protocol (SCCP) call control traffic from IP Phones to the Cisco CallManager
cluster flows over the WAN.
• CTI data to and from the IPCC Agent Desktop flows over the WAN. Adequate bandwidth and QoS
provisioning are critical for these links.
• Because there are no voice gateways at the remote sites, customers might be required to dial a
long-distance number to reach what would normally be a local PSTN phone call if voice gateways
with trunks were present at the remote site. This situation could be mitigated if the business
requirements are to dial 1-800 numbers at the central site. An alternative is to offer customers a
toll-free number to dial, and have those calls all routed to the centralized voice gateway location.
However, this requires the call center to incur toll-free charges that could be avoided if customers
had a local PSTN number to dial.
• The lack of local voice gateways with local PSTN trunks can also impact access to 911 emergency
services, and this must be managed via the Cisco CallManager dial plan. In most cases, local trunks
are configured to dial out locally and for 911 emergency calls.
• Cisco CallManager locations-based call admission control failure will result in a routed call being
disconnected. Therefore, it is important to provision adequate bandwidth to the remote sites. Also,
an appropriately designed QoS WAN is critical.
Transfers
In this scenario, the transferring agent and target agent are on the same Cisco CallManager cluster and
Cisco CallManager PIM. Therefore, the same call and message flows will occur as in the single-site
model, whether the transferring agent is on the same LAN as the target or on a different LAN. The only
differences are that QoS must be enabled and that appropriate LAN/WAN routing must be established.
For details on provisioning your WAN with QoS, refer to the Cisco Network Infrastructure Quality of
Service Design guide, available at
http://www.cisco.com/go/srnd
During consultative transfers where the agent (not the caller) is routed to an IP IVR port for queuing
treatment, transcoding is required because the IP IVR can generate only G.711 media streams.
Figure 2-3 Multi-Site Deployment with Centralized Call Processing and Distributed Voice Gateways
with IP IVR
In this deployment model, shown with IP IVR for queuing and treatment, it might be desirable (but is
not required) to restrict calls arriving at a site to agents within that site. Restricting calls to the site
where they arrived has the following effects:
• VoIP WAN bandwidth is reduced for calls going to agents.
• Customer service levels for calls arriving into that site might suffer due to longer queue times and
handle times.
• Longer queue times can occur because, even though an agent at another site is available, the IPCC
configuration may continue to queue for an agent at the local site only.
• Longer handle times can occur because, even though a more qualified agent exists at another site,
the call may be routed to a local agent to reduce WAN bandwidth usage.
It is important for deployment teams to carefully assess the trade-offs between operational costs and
customer satisfaction levels to establish the right balance on a customer-by-customer basis. For example,
it may be desirable to route a specific high-profile customer to an agent at another site to reduce their
queue time and allow the call to be handled by a more experienced representative, while another
customer may be restricted to an agent within the site where the call arrived.
An IPCC deployment may actually use a combination of centralized and distributed voice gateways. The
centralized voice gateways can be connected to one PSTN carrier providing toll-free services, while the
distributed voice gateways can be connected to another PSTN carrier providing local phone services.
Inbound calls from the local PSTN could be both direct inward dial (DID) and contact center calls. It is
important to understand the requirements for all inbound and outbound calling to determine the most
efficient location for voice gateways. Identify who is calling, why they are calling, where they are calling
from, and how they are calling.
In multi-site deployments with distributed voice gateways, the ICM's pre-routing capability can also be
used to load-balance calls dynamically across the multiple sites. A list of PSTN carriers that offer ICM
pre-routing services can be found in the ICM product documentation available at
http://www.cisco.com/univercd/cc/td/doc/product/icm/
In multi-site environments where the voice gateways have both local PSTN trunks and separate toll-free
trunks delivering contact center calls, the ICM pre-routing software can load-balance the toll-free
contact center calls around the local contact center calls. For example, suppose you have a two-site
deployment where Site 1 currently has all agents busy and many calls in queue from locally originated
calls, and Site 2 has only a few calls in queue or maybe even a few agents currently available. In that
scenario, you could have the ICM instruct the toll-free provider to route most or all of the toll-free calls
to Site 2. This type of multi-site load balancing provided by the ICM is dynamic and automatically
adjusts as call volumes change at all sites.
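The two-site example above amounts to a routing decision over per-site state: prefer a site with agents available, otherwise the site with the shortest queue. The following sketch is illustrative only; the site-data shapes are hypothetical and real pre-routing uses ICM routing scripts, not this function.

```python
# Illustrative pre-routing choice across sites, mirroring the example above.
def choose_site(sites: dict) -> str:
    def score(name):
        s = sites[name]
        # Prefer sites with available agents; break ties by shortest queue.
        return (-s["agents_available"], s["calls_in_queue"])
    return min(sites, key=score)

sites = {
    "site1": {"agents_available": 0, "calls_in_queue": 42},   # all agents busy
    "site2": {"agents_available": 3, "calls_in_queue": 2},    # agents free
}
print(choose_site(sites))  # site2
```

The real mechanism is dynamic in the same spirit: as queue depths and agent availability change, the ICM's instruction to the toll-free provider changes with them.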
Just as in the two previous deployment models, much variation exists in the number and type of ICM,
Cisco CallManager, and IP IVR or ISN servers; LAN/WAN infrastructure; voice gateways; PSTN
connectivity; and so forth.
Advantages
• Only limited systems management skills are needed for the remote sites because most servers,
equipment, and system configurations are managed from a centralized location.
• The ICM pre-routing option can be used to load-balance calls across sites, including sites with local
PSTN trunks in addition to toll-free PSTN trunks.
• No WAN RTP traffic is required for calls arriving at each remote site that are handled by agents at
that remote site.
Best Practices
• The IP IVR or ISN, Cisco CallManager, and PGs (for both Cisco CallManager and IVR/ISN) are
co-located. In this model, the only IPCC communications that can be separated across a WAN are
the following:
– ICM Central Controller to ICM PG
– ICM PG to IPCC Agent Desktops
– Cisco CallManager to voice gateways
– Cisco CallManager to IP Phones
• If calls are not going to be restricted to the site where calls arrive, or if calls will be made between
sites, more RTP traffic will flow across the WAN. It is important to determine the maximum number
of calls that will flow between sites or locations. Cisco CallManager locations-based call admission
control failure will result in a routed call being disconnected (rerouting within Cisco CallManager
is not currently supported). Therefore, it is important to provision adequate bandwidth to the remote
sites, and appropriately designed QoS for the WAN is critical.
• H.323 or MGCP signaling traffic between the voice gateways and the centralized Cisco
CallManager servers will flow over the WAN. Proper QoS implementation on the WAN is critical,
and signaling delays must be within tolerances listed in the Cisco IP Telephony Solution Reference
Network Design (SRND) guide, available at
http://www.cisco.com/go/srnd
Transfers
Intra-site or inter-site transfers using the VoIP WAN to send the RTP stream from one site to another
will occur basically the same way as a single-site transfer or a transfer in a deployment with centralized
voice gateways.
An alternative to using the VoIP WAN for routing calls between sites is to use a carrier-based PSTN
transfer service. These services allow the IPCC voice gateways to outpulse DTMF tones to instruct the
PSTN to reroute (transfer) the call to another voice gateway location. Each site can be configured within
the ICM as a separate peripheral. The label then indicates whether a transfer is intra-site or inter-site,
using Takeback N Transfer (TNT).
Figure 2-4 Multi-Site Deployment with Distributed Call Processing and Distributed Voice Gateways with IP IVR
As with the previous models, many variations are possible. The number and type of ICM Servers, Cisco
CallManager servers, and IP IVR servers can vary. LAN/WAN infrastructure, voice gateways, PSTN
trunks, redundancy, and so forth are also variable within this deployment model. Central processing and
gateways may be added for self-service, toll-free calls, and support for smaller sites. In addition, the use
of a pre-routing PSTN Network Interface Controller (NIC) is also an option.
Advantages
• Each independent site can scale to support up to 2000 agents per Cisco CallManager cluster, and
the number of sites that can be combined by the ICM Central Controller into a single
enterprise-wide contact center is limited only by the maximum of 80 PGs.
• All or most VoIP traffic can be contained within the LAN of each site, if desired. The QoS WAN
shown in Figure 2-4 would be required for voice calls to be transferred across sites. Use of a PSTN
transfer service (for example, Takeback N Transfer) could eliminate that need. If desired, a small
portion of calls arriving at a particular site can be queued for agent resources at other sites to
improve customer service levels.
• ICM pre-routing can be used to load-balance calls to the best site to reduce WAN usage for VoIP
traffic.
Best Practices
• The PG, Cisco CallManager cluster, and IP IVR must be co-located.
• The communication link from the ICM Central Controller to the PG must be sized properly and
provisioned for bandwidth and QoS. (For details, refer to the chapter on Bandwidth Provisioning
and QoS Considerations, page 8-1.)
• Gatekeeper-based call admission control could be used to reroute calls between sites over the PSTN
when WAN bandwidth is not available. It is best to ensure that adequate WAN bandwidth exists
between sites for the maximum amount of calling that can occur.
• If the communication link between the PG and the ICM Central Controller is lost, then all contact
center routing for calls at that site is also lost. Therefore, it is important to implement a fault-tolerant
WAN. Even when a fault-tolerant WAN is implemented, it is important to identify contingency
plans for call treatment and routing when communication is lost between the ICM Central Controller
and PG. For example, in the event of a lost ICM Central Controller connection, the Cisco
CallManager CTI route points could send the calls to IP IVR ports to provide basic announcement
treatment or to invoke a PSTN transfer to another site. Another alternative is for the Cisco
CallManager cluster to route the call to another Cisco CallManager cluster that may have a PG with
an active connection to the ICM Central Controller.
• Although two inter-cluster call legs for the same call do not cause unnecessary RTP streams, two separate call signaling control paths remain intact between the two clusters, producing a logical hair-pin and reducing the number of available inter-cluster trunks by two.
• Latency between ICM Central Controllers and remote PGs cannot exceed 200 ms one way (400 ms
round-trip).
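The one-way latency limits cited in this chapter can be expressed as a simple check. This is a minimal sketch; the dictionary keys and helper name are our own, not Cisco's, and the 100 ms private-link figure comes from the distributed ICM best practices later in this chapter.

```python
# One-way latency limits cited in this guide (ms); names are ours.
LIMITS_MS = {
    "central_controller_to_remote_pg": 200,  # 400 ms round-trip
    "icm_private_link": 100,                 # 200 ms round-trip
}

def within_limit(path: str, one_way_ms: float) -> bool:
    """Return True if the measured one-way latency meets the cited limit."""
    return one_way_ms <= LIMITS_MS[path]

print(within_limit("central_controller_to_remote_pg", 180))  # True
print(within_limit("central_controller_to_remote_pg", 250))  # False
```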
Transfers
Transfers within a site function just like a single-site transfer. Transfers between Cisco CallManager
clusters use either the VoIP WAN or a PSTN service.
If the VoIP WAN is used, sufficient inter-cluster trunks must be configured. An alternative to using the
VoIP WAN for routing calls between sites is to use a PSTN transfer service. These services allow the
IPCC voice gateways to outpulse DTMF tones to instruct the PSTN to reroute (transfer) the call to
another voice gateway location. Another alternative is to have the Cisco CallManager cluster at Site 1
make an outbound call back to the PSTN. The PSTN would then route the call to Site 2, but the call
would use two voice gateway ports at Site 1 for the remainder of the call.
Figure 2-5 Multi-Site Deployment with Distributed Call Processing and Distributed Voice Gateways with ISN
(Figure shows the ICM and AW/HDS with an ISN and its PG at the central site, plus PG/CTI servers at each agent site; Cisco CallManager clusters and voice gateways at each site are joined by the VoIP WAN, with PSTN ingress and agents at both sites. Legend: Signaling/CTI, IP Voice, TDM Voice.)
As with the previous models, many variations are possible. The number and type of ICM Servers, Cisco
CallManager servers, and ISN servers can vary. LAN/WAN infrastructure, voice gateways, PSTN
trunks, redundancy, and so forth are also variable within this deployment model. Central processing and
gateways may be added for self-service, toll-free calls, and support for smaller sites. In addition, the use
of a pre-routing PSTN Network Interface Controller (NIC) is also an option.
Advantages
• ISN Servers can be located either centrally or remotely. Call treatment and queuing will still be
distributed, executing on the local gateway, regardless of ISN server location. ISN is shown
centrally located in Figure 2-5.
• Each independent site can scale to support up to 2000 agents per Cisco CallManager cluster, and
there is no software limit to the number of sites that can be combined by the ICM Central Controller
to produce a single enterprise-wide contact center.
• All or most VoIP traffic can be contained within the LAN of each site, if desired. The QoS WAN
would be required for voice calls to be transferred across sites. Usage of a PSTN transfer service
(for example, Takeback N Transfer) could eliminate that need. If desired, a small portion of calls
arriving at a particular site can be queued for agent resources at other sites to improve customer
service levels.
• ICM pre-routing can be used to load-balance calls to the best site to reduce WAN usage for VoIP
traffic.
• Failure at any one site has no impact on operations at another site.
• Each site can be sized according to the requirements for that site.
• The ICM Central Controller provides centralized management for configuration of routing for all
calls within the enterprise.
• The ICM Central Controller provides the capability to create a single enterprise-wide queue.
• The ICM Central Controller provides consolidated reporting for all sites.
Best Practices
• The Cisco CallManager PG and Cisco CallManager cluster must be co-located. The ISN PG and ISN
servers must be co-located.
• The communication link from the ICM Central Controller to the PG must be properly sized and
provisioned for bandwidth and QoS. Cisco provides a partner tool called the VRU Peripheral
Gateway to ICM Central Controller Bandwidth Calculator to assist in calculating the VRU
PG-to-ICM bandwidth requirement. This tool is available online at
http://www.cisco.com/partner/WWChannels/technologies/resources/IPCC_resources.html
• If the communication link between the PG and the ICM Central Controller is lost, then all contact
center routing for calls at that site is lost. Therefore, it is important that a fault-tolerant WAN is
implemented. Even when a fault-tolerant WAN is implemented, it is important to identify
contingency plans for call treatment and routing when communication is lost between the ICM
Central Controller and PG.
• Latency between ICM Central Controllers and remote PGs cannot exceed 200 ms one way (400 ms
round-trip).
Transfers
Transfers within a site function just like a single-site transfer. Transfers between Cisco CallManager
clusters use either the VoIP WAN or a PSTN service.
If the VoIP WAN is used, sufficient intercluster trunks must be configured. An alternative to using the
VoIP WAN for routing calls between sites is to use a PSTN transfer service. These services allow the
IPCC voice gateways to outpulse DTMF tones to instruct the PSTN to reroute (transfer) the call to
another voice gateway location. Another alternative is to have the Cisco CallManager cluster at Site 1
make an outbound call back to the PSTN. The PSTN would then route the call to Site 2, but the call
would use two voice gateway ports at Site 1 for the remainder of the call.
(Figure: distributed ICM deployment, with the ICM Central Controller sides joined by a dedicated private link; PG/CTI servers and IVRs at each site, Cisco CallManager clusters and voice gateways joined by the VoIP WAN, PSTN ingress, and agents at both sites. Legend: Dedicated Private Link, Signaling/CTI, IP Voice, TDM Voice.)
Advantages
The primary advantage of the distributed ICM option is the redundancy gained from splitting the ICM
Central Controller between two redundant sites.
Best Practices
• ICM Central Controllers (Routers and Loggers) must have a dedicated link to carry the private
communication between the two redundant sites. In a non-distributed ICM model, the private traffic
usually traverses an Ethernet crossover cable or LAN connected directly between the side A and
side B ICM Central Controller components. In the distributed ICM model, the private
communication between the A and B ICM components must travel across a dedicated link such as
a T1.
• Latency across the private dedicated link cannot exceed 100 ms one way (200 ms round-trip).
• Latency between ICM Central Controllers and remote PGs cannot exceed 200 ms one way (400 ms
round-trip).
• The private link cannot traverse the same path as public traffic. The private link must have path
diversity and must reside on a link that is completely path-independent from ICM public traffic.
• The redundant centralized model is explored in the next section on Clustering over the WAN.
Advantages
• No single point of failure, including loss of an entire central site
• Remote agents require no reconfiguration to remain fully operational in case of site or link outage.
When outages occur, agents and agent devices dynamically switch to the redundant site.
• Central administration for both ICM and Cisco CallManager
• Reduction of servers for distributed deployment
Best Practices
• The highly available (HA) WAN between the central sites must be fully redundant with no single
point of failure. (For information regarding site-to-site redundancy options, refer to the WAN
infrastructure and QoS design guides available at http://cisco.com/go/srnd.) In case of partial failure
of the highly available WAN, the redundant link must be capable of handling the full central-site
load with all QoS parameters. For more information, see the section on Bandwidth and QoS
Requirements for IPCC Clustering Over the WAN, page 2-22.
• A highly available (HA) WAN using point-to-point technology is best implemented across two
separate carriers, but this is not necessary when using a ring technology.
• Latency requirements across the highly available (HA) WAN must meet the current Cisco IP
Telephony requirements for clustering over the WAN. Currently a maximum latency of 20 ms one
way (40 ms round-trip) is allowed. This equates to a transmission distance of approximately
1860 miles (3000 km) under ideal conditions. The transmission distance will be lessened by
network conditions that cause additional latency. For full specifications, refer to the Cisco IP
Telephony Solution Reference Network Design (SRND) guide, available at
http://www.cisco.com/go/srnd
• IPCC latency requirements can be met by conforming to IP Telephony requirements. However, the
bandwidth requirements for Cisco CallManager intra-cluster communications differ between IPCC
and IP Telephony. For more information, see the section on Bandwidth and QoS Requirements for
IPCC Clustering Over the WAN, page 2-22.
• Bandwidth requirements across the highly available (HA) WAN include bandwidth and QoS
provisioning for (see Bandwidth and QoS Requirements for IPCC Clustering Over the WAN, page
2-22):
– Cisco CallManager intra-cluster communications (ICC)
– Communications between Central Controllers
– Communications between Central Controller and PG
– Communications between CTI Object Server (CTI OS) and CTI Server, if using CTI OS
• Separate dedicated link(s) for ICM private communications are required between ICM Central
Controllers Side A and Side B and between PGs Side A and Side B to ensure path diversity. Path
diversity is required due to the architecture of ICM. Without path diversity, the possibility of a dual
(public communication and private communication) failure exists. If a dual failure occurs even for
a moment, ICM instability and data loss may occur, including the corruption of one logger database.
• Dedicated private link(s) may be two separate dedicated links, one for Central Controller private and
one for Cisco CallManager PG private, or one converged dedicated link containing Central
Controller and PG private. See Site-to-Site ICM Private Communications Options, page 2-20, for
more information.
• Separate paths must exist from agent sites to each central site. Both paths must be capable of
handling the full load of signaling, media, and other traffic if one path fails. These paths may reside
on the same physical link from the agent site, with a WAN technology such as Frame Relay using
multiple permanent virtual circuits (PVCs).
• The minimum cluster size using IP IVR as the treatment and queuing platform is 5 nodes (publisher
plus 4 subscribers). This minimum is required to allow IP IVR at each site to have redundant
connections locally to the cluster without traversing the WAN. JTAPI connectivity between Cisco
CallManager and IP IVR is not supported across the WAN. Local gateways also will need local
redundant connections to Cisco CallManager.
• The minimum cluster size using ISN as the treatment and queuing platform is 3 nodes (publisher
plus 2 subscribers). However, Cisco recommends 5 nodes, especially if there are IP Phones (either
contact center or non-contact center) local to the central sites, central gateways, or central media
resources that would require local failover capabilities.
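The latency-to-distance bound cited above can be sanity-checked with a quick calculation. The effective propagation speed of 150 km per ms is our inference from the guide's own figures (3000 km over 20 ms); real deployments lose additional distance to equipment and routing delay.

```python
# Sanity check of the 20 ms one-way / ~3000 km (1860 mi) bound above.
# 150 km/ms is inferred from the guide's numbers, not a Cisco constant.
ONE_WAY_BUDGET_MS = 20
EFFECTIVE_KM_PER_MS = 150

max_km = ONE_WAY_BUDGET_MS * EFFECTIVE_KM_PER_MS
print(max_km)  # 3000
```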
Figure 2-7 Centralized Voice Gateways with Centralized Call Treatment and Queuing Using IP IVR
(Figure shows ICM A at Site 1 and ICM B at Site 2, with PG pairs 1A/1B and 2A/2B, IVR 1 and IVR 2, CTI OS 1A and 1B, voice gateways, and a five-node Cisco CallManager cluster split across the highly available WAN; PSTN ingress at both sites, a remote agent site reached over the WAN, and ICM public and private links.)
Advantages
• Component location and administration are centralized.
• Calls are treated and queued locally, eliminating the need for queuing across a WAN connection.
Best Practices
• WAN connections to agent sites must be provisioned with bandwidth for voice as well as control
and CTI. See Bandwidth and QoS Requirements for IPCC Clustering Over the WAN, page 2-22, for
more information.
• Local voice gateway may be needed at remote sites for local out-calling and 911. For more
information, refer to the Cisco IP Telephony Solution Reference Network Design (SRND) guide,
available at
http://www.cisco.com/go/srnd
• Central site outages would include loss of half of the ingress gateways, assuming a balanced
deployment. Gateways and IVRs must be scaled to handle the full load in both sites if one site fails.
• Carrier call routing must be able to route calls to the alternate site in the case of a site or gateway
loss. Pre-routing may be used to balance the load, but it will not be able to prevent calls from being
routed to a failed central site. Pre-routing is not recommended.
Figure 2-8 Centralized Voice Gateways with Centralized Call Treatment and Queuing Using ISN
(Figure shows ICM A and ICM B with PG pairs 1A/1B and 2A/2B, ISN 1 and ISN 2, Gatekeeper 1 and Gatekeeper 2, CTI OS 1A and 1B, voice gateways, and a five-node Cisco CallManager cluster split across the highly available WAN; PSTN ingress at both sites, a remote agent site, and ICM public and private links.)
Advantages
• Component location and administration are centralized.
• Calls are treated and queued locally, eliminating the need for queuing across a WAN connection.
• There is less load on Cisco CallManager because ISN is the primary routing point. This allows
higher scalability per cluster compared to IP IVR implementations. See Sizing IPCC Components
and Servers, page 5-1, for more information.
Best Practices
• WAN connections to agent sites must be provisioned with bandwidth for voice as well as control
and CTI. See Bandwidth and QoS Requirements for IPCC Clustering Over the WAN, page 2-22, for
more information.
• A local voice gateway might be needed at remote sites for local out-calling and 911.
Figure 2-9 Distributed Voice Gateways with Distributed Call Treatment and Queuing Using ISN
(Figure shows ICM A and ICM B with PG pairs 1A/1B and 2A/2B, ISN 1 and ISN 2, Gatekeeper 1 and Gatekeeper 2, CTI OS 1A and 1B, and a five-node Cisco CallManager cluster split across the highly available WAN; the voice gateway and PSTN ingress are at the remote agent site, with ICM public and private links shown.)
Advantages
• No or minimal voice RTP traffic across WAN links if ingress calls and gateways are provisioned to
support primarily their local agents. Transfers and conferences to other sites would traverse the
WAN.
• Calls are treated and queued at the agent site, eliminating the need for queuing across a WAN
connection.
• Local calls incoming and outgoing, including 911, can share the local VXML gateway.
• There is less load on Cisco CallManager because ISN is the primary routing point. This allows
higher scalability per cluster compared to IP IVR implementations. See Sizing IPCC Components
and Servers, page 5-1, for more information.
Best Practices
• Distributed gateways require minimal additional remote maintenance and administration over
centralized gateways.
• The media server for ISN may be centrally located or located at the agent site. Media may also be
run from gateway flash. Locating the media server at the agent site reduces bandwidth requirements
but adds to the decentralized model.
ICM Central Controller Private and Cisco CallManager PG Private Across Dual Links
Dual links, shown in Figure 2-10, separate ICM Central Controller Private traffic from VRU/CM PG
Private traffic.
Figure 2-10 ICM Central Controller Private and Cisco CallManager PG Private Across Dual Links
(Figure shows ICM A and PG A at Site 1 connected to ICM B and PG B at Site 2 over two separate dedicated private links.)
Advantages
• Failure of one link does not cause both the ICM Central Controller and PG to enter simplex mode,
thus reducing the possibility of an outage due to a double failure.
• The QoS configuration is limited to two classifications across each link, therefore links are simpler
to configure and maintain.
• Resizing or alterations of the deployment model and call flow may affect only one link, thus
reducing the QoS and sizing changes needed to ensure proper functionality.
• Unanticipated changes to the call flow or configuration (including misconfiguration) are less likely
to cause issues across separate private links.
Best Practices
• The links must be across separate dedicated circuits. The links themselves do not have to be redundant, and neither link may serve as a backup path for the other.
• Link sizing and configuration must be examined before any major change to call load, call flow, or deployment configuration.
• The link must be a dedicated circuit and not be tunneled across the highly available (HA) WAN. See
Best Practices, page 2-15, at the beginning of the section on Clustering Over the WAN, page 2-15,
for more information on path diversity.
ICM Central Controller Private and Cisco CallManager PG Private Across Single Link
A single link, shown in Figure 2-11, carries both ICM Central Controller Private traffic and VRU/CM
PG Private traffic. Single-link implementations are more common and less costly than dual-link
implementations.
Figure 2-11 ICM Central Controller Private and Cisco CallManager PG Private Across Single Link
(Figure shows ICM A and PG A at Site 1 connected to ICM B and PG B at Site 2 over a single shared dedicated private link.)
Advantages
• Less costly than the separate-link model
• Fewer links to maintain, although the QoS configuration on the shared link is more complex
Best Practices
• The link does not have to be redundant. If a redundant link is used, however, latency on failover
must not exceed 500 ms.
• Separate QoS classifications and reserved bandwidth are required for Central Controller
high-priority and PG high-priority communications. For details, see Bandwidth Provisioning and
QoS Considerations, page 8-1.
• Link sizing and configuration must be examined before any major change to call load, call flow, or
deployment configuration. This is especially important in the single link model.
• Link must be a dedicated circuit fully isolated from, and not tunneled across, the highly available
(HA) WAN. See Best Practices, page 2-15, at the beginning of the section on Clustering Over the
WAN, page 2-15, for more information on path diversity.
Bandwidth and QoS Requirements for IPCC Clustering Over the WAN
Bandwidth must be provisioned to properly size links and set reservations within those links. The
following sections detail bandwidth requirements for ICM Private, ICM Public, Cisco CallManager, and
CTI traffic.
If one dedicated link is used between sites for private communication, add all link sizes together and use
the Total Link Size at the bottom of Table 2-1. If separate links are used, one for Router/Logger Private
and one for PG Private, use the first row for Router/Logger requirements and the bottom three (out of
four) rows added together for PG Private requirements.
Effective BHCA (effective load) on all similar components that are split across the WAN is defined as
follows:
• Router + Logger
This value is the total BHCA on the call center, including conferences and transfers. For example,
10,000 BHCA ingress with 10% conferences or transfers would be 11,000 Effective BHCA.
• Cisco CallManager PG
This value includes all calls that come through ICM Route Points controlled by Cisco CallManager
and/or that are ultimately transferred to agents. This assumes that each call comes into a route point
and is eventually sent to an agent. For example, 10,000 BHCA ingress calls coming into a route
point and being transferred to agents, with 10% conferences or transfers, would be 11,000 effective
BHCA.
• IP IVR PG
This value is the total BHCA for call treatment and queuing. For example, 10,000 BHCA ingress
calls, with all of them receiving treatment and 40% being queued, would be 14,000 effective BHCA.
• ISN PG
This value is the total BHCA for call treatment and queuing coming through an ISN. 100% treatment
is assumed in the calculation. For example, 10,000 BHCA ingress calls, with all of them receiving
treatment and 40% being queued, would be 14,000 effective BHCA.
• IP IVR or ISN Variables
This value represents the number of Call and ECC variables and the variable lengths associated with
all calls routed through the IP IVR or ISN, whichever technology is used in the implementation.
For the combined dedicated link in this example, the results are as follows:
• Total Link = 2,550,000 bps
• Router/Logger high-priority bandwidth queue of 297,000 bps
• PG high-priority bandwidth queue of 1,888,000 bps
If this example were implemented with two separate links, Router/Logger private and PG private, the
link sizes and queues would be as follows:
• Router/Logger link of 330,000 bps (actual minimum link is 1.5 Mbps, as defined earlier), with
high-priority bandwidth queue of 297,000 bps
• PG link of 2,220,000 bps, with high-priority bandwidth queue of 1,888,000 bps
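The relationship between the two-link and combined-link figures above is a straight sum: the combined dedicated link carries both private streams, while each high-priority queue keeps its own reservation. A minimal sketch using the example values (in bps):

```python
# Example figures from this section, in bps.
router_logger_link = 330_000
router_logger_hi_queue = 297_000
pg_link = 2_220_000
pg_hi_queue = 1_888_000

# One converged dedicated link must carry both private streams.
total_link = router_logger_link + pg_link
print(total_link)  # 2550000, matching the combined-link example
```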
If the HA WAN is lost for any reason, the Cisco CallManager cluster becomes split. The primary result
from this occurrence is that ICM loses contact with half of the agent phones. ICM is in communication
with only half of the cluster and cannot communicate with or see any phones registered on the other half.
This causes ICM to immediately log out all agents with phones that are no longer visible. These agents
cannot log back in until the highly available WAN is restored or their phones are forced to switch cluster
sides.
Advantages
• Remote agent deployment saves money for the contact center enterprise, thereby increasing return on investment (ROI).
• A remote agent can be deployed with standard IPCC agent desktop applications such as Cisco
CTI OS, Cisco Agent Desktop, or customer relationship management (CRM) desktops.
• This model works with ADSL or Cable broadband networks.
• The Broadband Agent Desktop "Always on" connection is a secure extension of the corporate LAN
in the home office.
• At-home agents have access to the same IPCC applications and most IPCC features in their home office as when they are working at the IPCC Enterprise contact center, and they can access those features in exactly the same way.
• This model provides high-quality voice using IP phones, with simultaneous data to the agent
desktop via existing broadband service.
• IPCC home agents and home family users can securely share broadband Cable and DSL
connections, with authentication of IPCC corporate users providing access to the VPN tunnel.
• The home agent solution utilizes the low-cost Cisco 831 Series router.
• This model supports dynamic IP addressing via Dynamic Host Configuration Protocol (DHCP) or
Point-to-Point Protocol over Ethernet (PPPoE).
• The Cisco 831 Series router provides VPN tunnel origination, Quality of Service (QoS) to the edge,
and Firewall (and other security functions), thus reducing the number of devices to be managed.
• The Remote Agent router can be centrally managed by the enterprise using a highly scalable and
flexible management product such as CiscoWorks.
• The remote agent solution is based on Cisco IOS VPN Routers for resiliency, high availability, and
a building-block approach to high scalability that can support thousands of home agents.
• All traffic, including data and voice, is encrypted with the Triple Data Encryption Standard (3DES).
Best Practices
• Follow all applicable V3PN and Business Ready Teleworker design guidelines outlined in the
documentation available at:
http://www.cisco.com/go/teleworker
http://www.cisco.com/go/v3pn
http://www.cisco.com/go/srnd
• Configure remote agent IP phones to use G.729 with minimum bandwidth limits. Higher quality
voice can be achieved with the G.711 codec. The minimum bandwidth to support G.711 is 512 kbps
upload speed.
• Implement fault and performance management tools such as NetFlow, Service Assurance Agent
(SAA), and Internetwork Performance Monitor (IPM).
• Wireless access points are supported; however, their use is determined by the enterprise security policies for remote agents.
• Only one remote agent per household is supported.
• Cisco recommends that you configure the conference bridge on a DSP hardware device. There is no
loss of conference voice quality using a DSP conference bridge. This is the recommended solution
even for pure IP Telephony deployments.
• The Remote Agent over Broadband solution is supported only with centralized IPCC and Cisco
CallManager clusters.
• There might be times when the ADSL or Cable link goes down. When the link is back up, you might
have to reset your ADSL or Cable modem, Cisco 831 Series router, and IP phone. This task will
require remote agent training.
• Only unicast Music on Hold (MoH) streams are supported.
• There must be a Domain Name System (DNS) entry for the remote agent desktop, otherwise the
agent will not be able to connect to a CTI OS server. DNS entries can be dynamically updated or
entered as static updates.
• The remote agent's workstation and IP phone must be set up to use Dynamic Host Configuration
Protocol (DHCP).
• The remote agent’s PC requires Windows XP Pro for the operating system. In addition, XP Remote
Desktop Control must be installed.
• The Cisco 7960 IP Phone requires a power supply. The Cisco 831 Series router does not supply
power to the IP Phone.
• Home agent broadband bandwidth requires a minimum of 256 kbps upload speed and 1.4 Mbps
download speed for ADSL, and 1 Mbps download for Cable. Before actual deployment, make sure
that the bandwidth is correct. If you are deploying Cable, then take into account peak usage times.
If link speeds fall below the specified bandwidth, the home agent can encounter voice quality
problems such as clipping.
• Remote agent round-trip delay to the IPCC campus is not to exceed 180 ms for ADSL or 60 ms for
Cable. Longer delay times can result in voice jitter, conference bridge problems, and delayed agent
desktop screen pops.
• If the Music on Hold server is not set up to stream using a G.729 codec, then a transcoder must be
set up to enable outside callers to receive MoH.
• For Cisco Supervisor Desktop, there are supervisor limitations to silent monitoring, barge-in,
intercept, and voice recording with regard to home agent IP phones. Cisco Agent Desktop
(Enterprise and Express) home and campus supervisors cannot voice-monitor home agents.
Supervisors are capable of sending and receiving only text messages, and they can see which home
agents are online and can log them out.
• Desktop-based monitoring is not supported for IPCC Express with Cisco Agent Desktop.
Desktop-based monitoring is applicable only with IPCC Enterprise edition.
• CTI OS Supervisor home and campus supervisors can silently monitor, barge in, and intercept, but
not record home agents. CTI OS home and campus supervisors can send and receive text messages,
make an agent “ready,” and also log out home agents.
• Connect the agent desktop to the RJ45 port on the back of the IP phone. Otherwise, CTI OS
Supervisor will not be able to voice-monitor the agent phone.
• Only IP phones that are compatible with Cisco IPCC are supported. For compatibility information,
refer to the following documentation:
– Bill of Materials at
http://www.cisco.com/univercd/cc/td/doc/product/icm/ccbubom/60bom.pdf
– Compatibility Matrix at
http://cisco.com/application/pdf/en/us/guest/products/ps1844/c1609/ccmigration_09186a008031a0a7.pdf
– Release Notes for IPCC Express at
http://www.cisco.com/univercd/cc/td/doc/product/voice/sw_ap_to/apps_3_5/english/admn_app/rn35_2.pdf
• You can find a test for the broadband line speed at http://www.Broadbandreports.com. From this
website, you can execute a test that will benchmark the home agent's line speed (both upload and
download) from a test server.
• The email alias for V3PN questions is: ask-ese-vpn@cisco.com.
Remote Agent with IP Phone Deployed via the Business Ready Teleworker
Solution
In this model, the remote agent’s IP phone and workstation are connected via the VPN tunnel to the main
IPCC campus. Customer calls routed to the remote agent are handled in the same manner as campus
agents. (See Figure 2-12.)
Figure 2-12 Remote Agent with IP Phone Deployed via the Business Ready Teleworker Solution
(Figure shows the remote agent's Cisco IP phone and desktop behind a Cisco 831 Series remote router and broadband modem, with voice and CTI data carried over an encrypted VPN tunnel across the broadband Internet to the head-end router on the IPCC corporate network.)
Advantages
• High-speed broadband enables cost-effective office applications
• Site-to-site "always on" VPN connection
• Advanced security functions allow extension of the corporate LAN to the home office
• Supports full range of converged desktop applications, including CTI data and high-quality voice
Best Practices
• Minimum broadband speed supported is 256 kbps upload and 1.0 Mbps download for cable.
• Minimum broadband speed supported is 256 kbps upload and 1.4 Mbps download for ADSL.
• Agent workstation must have a 500-MHz processor and 512 MB of RAM or greater.
• IP phone must be configured to use G.729 at minimum broadband speeds.
• QoS is enabled only at the Cisco 831 Router edge. Currently, service providers are not providing
QoS.
• Enable security features on the Cisco 831 Series router.
• The Cisco 7200 VXR and Catalyst 6500 IPSec VPN Services Module (VPNSM) offer the best
LAN-to-LAN performance for agents.
• The remote agent's home phone must be used for 911 calls.
• Redirect on no answer (RONA) should be used when a remote agent is logged in and ready but is
unavailable to pick up a call.
Sizing Information
• Maximum of 200 agents (any mixture of inbound and outbound agents) on a fully coresident
configuration (Outbound Option Dialer, IPCC PG, CTI Server, and CTI OS on a single server).
• Maximum of 300 agents (any mixture of inbound and outbound agents) on a PG when the CTI OS
on the PG server is configured for no more than 200 agents.
Figure 2-13 illustrates the model for the IPCC Outbound Option with more than 200 agents.
Figure 2-13 IPCC Outbound Option with More Than 200 Agents
(Figure shows redundant ICM Peripheral Gateways 2A and 2B, each with a CallManager PIM, CTI Server, and ICM CTI OS server, joined by private LAN crossover cables; the PSTN voice network enters through a Cisco router/voice gateway; Cisco CallManager cluster nodes host the Dialer, with two 24-port CRS servers and agent PCs on the converged visible Ethernet LAN.)
Advantages
The Cisco Outbound Option Dialer solution allows an agent to participate in outbound campaigns as well
as inbound calls by utilizing a pure software IP-based dialer.
In summary, the main benefits of the IPCC Outbound Option are:
• IPCC Outbound Option has enterprise-wide dialing capability, with IP Dialers placed at multiple
call center sites. The Campaign Manager server is located at the central site.
• This option provides centralized management and configuration via the ICM Admin workstation.
• IPCC Release 6.0 and later provide the Enhanced Call Progress Analysis feature, including
answering machine detection.
• This option provides true call-by-call blending of inbound and outbound calls.
• This option incorporates flexible outbound mode control by using the ICM Script Editor to control the type of outbound mode and the percentage of agents within a skill group to use for outbound activity.
• Transfer to IVR mode (agent-less campaigns) and Direct Preview mode are available in IPCC
Release 6.0 and later.
• This option provides integrated WebView reporting with outbound-specific reporting templates.
Best Practices
Follow these guidelines and best practices when implementing the IPCC Outbound Option:
• A media routing PG is required, and a media routing PIM is required for each Dialer.
• An IP Dialer may be installed on an IPCC PG server for a total blended agent count of 200 (either
inbound, outbound, or blended). Multiple Dialers located at a single peripheral do provide some
fault tolerance but are not a true hot-standby model.
• IP Dialers support only the G.711 audio codec for customer calls. Although outbound agents may
be placed within a region that uses the G.729 codec, the codec switchover can add up to 1.5 seconds
to the transfer time between customer and agent and is therefore not recommended.
• IP Dialers should be located in close proximity to the Cisco CallManager cluster where the Dialers
are registered.
• Using the Cisco Media Termination phones with the outbound option might introduce an additional
0.5 second delay in transferring customer calls to the agent.
• The following gateways have been tested with IPCC Outbound Option Dialers:
– Cisco AS5300, AS5350, and AS5400 Series
– Cisco 6608
• All Outbound Option dialers at a particular peripheral should have the same number of configured
ports.
• Outbound Option dialers perform a large number of call transfers, which increases the performance
load on the Cisco CallManager server. Ensure proper Cisco CallManager server sizing when
installing Outbound Option dialers. Also, proper Dialer call throttling should be enabled to prevent
overloading the Cisco CallManager server. For proper throttling values for your particular Cisco
CallManager server, refer to the Outbound Option Setup and Configuration Guide, available at
http://www.cisco.com/univercd/cc/td/doc/product/icm/icmentpr/icm60doc/icm6out/
For complete information on installing and configuring the Outbound software, see:
• Cisco ICM/IP Contact Center Enterprise Edition Outbound Option Setup and Configuration Guide
• Cisco ICM/IP Contact Center Enterprise Edition Outbound Option User Guide
Both of these documents are available online at
http://www.cisco.com/univercd/cc/td/doc/product/icm/icmentpr/icm60doc/icm6out/
An alternative to pre-routing calls from the PSTN is to have the PSTN deliver calls to just one site or to
split the calls across the two sites according to some set of static rules provisioned in the PSTN. When
the call arrives at either site, either the traditional ACD or the Cisco CallManager will generate a route
request to the ICM to determine which site is best for this call. If the call needs to be delivered to an
agent at the opposite site from where the call was originally routed, then TDM circuits between sites will
be required. Determination of where calls should be routed, and if and when they should be transferred
between sites, will depend upon the enterprise business environment, objectives, and cost components.
Figure 2-18 Traditional IVR Integration Using Cisco CallManager Transfer and IVR Double Trunking
This chapter covers several possible IPCC failover scenarios and explains design considerations for
providing high availability of system functions and features in each of those scenarios. This chapter
contains the following sections:
• Designing for High Availability, page 3-1
• Data Network Design Considerations, page 3-5
• Cisco CallManager and CTI Manager Design Considerations, page 3-7
• IP IVR (CRS) Design Considerations, page 3-11
• Internet Service Node (ISN) Design Considerations, page 3-13
• Multi-Channel Design Considerations (Cisco Email Manager Option and Cisco Collaboration
Server Option), page 3-15
• Cisco Email Manager Option, page 3-17
• Cisco Collaboration Server Option, page 3-18
• Cisco IPCC Outbound Option Design Considerations, page 3-19
• Peripheral Gateway Design Considerations, page 3-20
• Understanding Failure Recovery, page 3-31
• CTI OS Considerations, page 3-38
• Cisco Agent Desktop Considerations, page 3-39
• Other Considerations, page 3-39
Before implementing IPCC, prepare and plan the design carefully to avoid costly upgrades or
maintenance later in the deployment cycle. Always design for the worst possible failure scenario, with
future scalability in mind for all IPCC sites.
In summary, plan ahead and follow all the design guidelines and recommendations presented in this
guide and in the Cisco IP Telephony Solution Reference Network Design (SRND) guide, available at
http://www.cisco.com/go/srnd
For assistance in planning and designing your IPCC solution, consult your Cisco or certified Partner
Systems Engineer (SE).
Figure 3-1 shows a high-level design for a fault-tolerant IPCC single-site deployment.
In Figure 3-1, each component in the IPCC site is duplicated for redundancy and connected to all of its
primary and backup servers, with the exception of the intermediate distribution frame (IDF) switch for
the IPCC agents and their phones. The IDF switches do not interconnect with each other, but only with
the main distribution frame (MDF) switches, because it is better to distribute the agents among different
IDF switches for load balancing and for geographic separation (for example, different building floors or
different cities). If an IDF switch fails, all calls should be routed to other available agents in a separate
IDF switch or to an IP IVR (CRS) queue. Follow the design recommendations for a single-site
deployment as documented in the Cisco IP Telephony Solution Reference Network Design (SRND)
guide, available at
http://www.cisco.com/go/srnd
If designed correctly for high availability and load balancing, any IPCC site can lose half of its systems
and still be operational. With this type of design, no matter what happens in the IPCC site, each call
should be handled in one of the following ways:
• Routed and answered by an available IPCC agent
• Sent to an available IP IVR (CRS) or ISN port
• Answered by the Cisco CallManager AutoAttendant
• Played an IP IVR (CRS) or ISN announcement stating that the call center is currently experiencing
technical difficulties and asking the caller to call back later
The components in Figure 3-1 can be rearranged to form two connected IPCC sites, as illustrated in
Figure 3-2.
Figure 3-2 emphasizes the redundancy of the single site design in Figure 3-1. Side A and Side B are
basically mirror images of each other. In fact, one of the main IPCC features to enhance high availability
is its simple mechanism for converting a site from non-redundant to redundant. To implement IPCC high
availability, all you have to do is duplicate the first side and cross-connect all the corresponding parts.
The following sections use Figure 3-1 as the model design to discuss issues and features that you should
consider when designing IPCC for high availability. These sections use a bottom-up model (from a
network model perspective, starting with the physical layer first) that divides the design into segments
that can be deployed in separate stages.
Cisco recommends using only duplex (redundant) Cisco CallManager, IP-IVR/ISN, and ICM
configurations for all IPCC deployments that require high availability. This chapter assumes that the
IPCC failover feature is a critical requirement for all deployments, and therefore it presents only
deployments that use a redundant (duplex) Cisco CallManager configuration, with each Cisco
CallManager cluster having at least one publisher and one subscriber. Additionally, where possible,
deployments should follow the best practice of having no devices, call processing, or CTI Manager
services running on the Cisco CallManager publisher.
Figure 3-3 High Availability in a Network with Two Voice Gateways and One Cisco CallManager Cluster
Using multiple voice gateways avoids the problem of a single gateway failure causing blockage of all
calls. In a configuration with two voice gateways and one Cisco CallManager cluster, each gateway
should register with a different primary Cisco CallManager to spread the workload across the Cisco
CallManagers in the cluster. Each gateway should use the other Cisco CallManager as a backup in case
its primary Cisco CallManager fails. Refer to the Cisco IP Telephony Solution Reference Network
Design (SRND) guide (available at http://www.cisco.com/go/srnd) for details on setting up Cisco
CallManager redundancy groups for backup.
When calculating the number of trunks from the PSTN, make sure that enough trunks are available to
handle the maximum busy hour call attempts (BHCA) when one or more voice gateways fail. During the design
phase, first decide how many simultaneous voice gateway failures are acceptable for the site. Based upon
this requirement, the number of voice gateways used, and the distribution of trunks across those voice
gateways, you can determine the number of trunks required. The more you distribute the trunks over
multiple voice gateways, the fewer trunks you will need. However, using more voice gateways will
increase the cost of that component of the solution, so you should compare the annual operating cost of
the trunks (paid to the PSTN provider) against the one-time fixed cost of the voice gateways.
For example, assume the call center has a maximum BHCA that results in the need for four T1 lines, and
the company has a requirement for no call blockage in the event of a single component (voice gateway)
failure. If two voice gateways are deployed in this case, then each voice gateway should be provisioned
with four T1 lines (total of eight). If three voice gateways are deployed, then two T1 lines per voice
gateway (total of six) would be enough to achieve the same level of availability. If five voice gateways
are deployed, then one T1 per voice gateway (total of five) would be enough to achieve the same level
of availability. Thus, you can reduce the number of T1 lines required by adding more voice gateways.
The operational cost savings of fewer T1 lines may be greater than the one-time capital cost of additional
voice gateways. In addition to the recurring operational costs of the T1 lines, you should also factor in
the one-time installation cost of the T1 lines to ensure that your design accounts for the most
cost-effective solution. Every installation has different availability requirements and cost metrics, but
using multiple voice gateways is often more cost-effective. Therefore, it is a worthwhile design practice
to perform this cost comparison.
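The gateway-versus-trunk trade-off above reduces to simple arithmetic. The following sketch is illustrative Python only (the function name and structure are invented for this example, not Cisco tooling): it computes how many T1 lines each gateway needs so that the surviving gateways still carry the full busy-hour load after a given number of gateway failures.

```python
import math

def t1_provisioning(required_t1s, gateways, tolerated_failures=1):
    """Return (T1s per gateway, total T1s) such that the surviving gateways
    still carry the full busy-hour load when `tolerated_failures` gateways
    are down."""
    surviving = gateways - tolerated_failures
    if surviving < 1:
        raise ValueError("cannot tolerate that many gateway failures")
    per_gateway = math.ceil(required_t1s / surviving)
    return per_gateway, per_gateway * gateways

# The example from the text: BHCA requires four T1 lines, and the design
# must survive one voice gateway failure with no call blockage.
for g in (2, 3, 5):
    print(g, t1_provisioning(4, g))  # 2 -> (4, 8), 3 -> (2, 6), 5 -> (1, 5)
```

The totals match the worked example in the text: eight T1 lines with two gateways, six with three, and five with five.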
After you have determined the number of trunks needed, the PSTN service provider has to configure
them in such a way that calls can be terminated onto trunks connected to all of the voice gateways (or at
least more than one voice gateway). From the PSTN perspective, if the trunks going to the multiple voice
gateways are configured as a single large trunk group, then all calls will automatically be routed to the
surviving voice gateways when one voice gateway fails. If all of the trunks are not grouped into a single
trunk group within the PSTN, then you must ensure that PSTN re-routing or overflow routing to the other
trunk groups is configured for all dialed numbers.
If a voice gateway with a digital interface (T1 or E1) fails, then the PSTN automatically stops sending
calls to that voice gateway because the carrier level signaling on the digital circuit has dropped. Loss of
carrier level signaling causes the PSTN to busy out all trunks on that digital circuit, thus preventing the
PSTN from routing new calls to the failed voice gateway. When the failed voice gateway comes back
on-line and the circuits are back in operation, the PSTN automatically starts delivering calls to that voice
gateway again.
Because the voice gateways register with a primary Cisco CallManager, an increase in the amount of
traffic on a given voice gateway will result in more traffic being handled by its primary Cisco
CallManager. Therefore, when sizing the Cisco CallManager servers, plan for the possible failure of a
voice gateway and calculate the maximum number of trunks that may be in use on the remaining voice
gateways registered with each CallManager server.
With standalone voice gateways, it is possible that the voice gateway itself is operational but that its
communication paths to the Cisco CallManager servers are severed (for example, a failed Ethernet
connection). If this occurs in the case of a H.323 gateway, you can use the busyout-monitor interface
command to monitor the Ethernet interfaces on a voice gateway. To place a voice port into a busyout
monitor state, use the busyout-monitor interface voice-port configuration command. To remove the
busyout monitor state on the voice port, use the no form of this command.
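As a sketch only (exact command syntax varies by gateway platform and Cisco IOS release; verify against the voice command reference for your image, and note that the port and interface names below are hypothetical), tying a voice port's busyout state to the health of the Ethernet interface might look like:

```
! Hypothetical example: busy out the voice port if the Ethernet interface
! used to reach Cisco CallManager goes down.
voice-port 1/0:23
 busyout-monitor interface FastEthernet0/0
!
! Remove the monitor with the "no" form of the command:
voice-port 1/0:23
 no busyout-monitor interface FastEthernet0/0
```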
When the voice gateway interface to the switch fails, the voice gateway automatically busies out all its
trunks. This prevents new calls from being routed to this voice gateway from the PSTN. Calls in progress
do not survive because the Real-Time Transport Protocol (RTP) stream connection no longer exists.
Parties at both ends of the line receive silence and, after a configurable timeout, calls are cleared. You
can set the Transmission Control Protocol (TCP) timeout parameter in the voice gateway, and you can
also set a default timeout in Cisco CallManager. The calls are cleared by whichever timeout expires first.
When the voice gateway interface to the switch recovers, the trunks are automatically idled and the
PSTN should begin routing calls to this voice gateway again (assuming the PSTN has not permanently
busied out those trunks).
The servers in a Cisco CallManager cluster communicate with each other using the Signal Distribution
Layer (SDL) service. The Cisco CallManager service uses SDL signaling to communicate with the other
Cisco CallManager services and keep state synchronized within the Cisco CallManager cluster.
The CTI Managers in the cluster are completely independent and do not establish a direct connection
with each other. CTI Managers route only the external CTI application requests to the appropriate
devices serviced by the local Cisco CallManager service on this subscriber. If the device is not resident
on its local Cisco CallManager subscriber, then the Cisco CallManager service forwards the application
request to the appropriate Cisco CallManager in the cluster. Figure 3-5 shows the flow of a device
request to another Cisco CallManager in the cluster.
It is important to load-balance devices and CTI applications evenly across all the nodes in the Cisco
CallManager cluster.
The external CTI applications use a JTAPI user account on the CTI Manager to establish a connection
and assume control of the Cisco CallManager devices registered to this JTAPI user. Because the CTI
Managers are independent of each other, any CTI application can connect to any CTI Manager to perform
its requests; however, one CTI Manager cannot pass the CTI application to another CTI Manager upon
failure. If the first CTI Manager
fails, the external CTI application must implement the failover mechanism to connect to another CTI
Manager in the cluster. For example, the Voice Response Unit (VRU) Peripheral Gateway (PG) allows
the administrator to input two CTI Managers, primary and secondary, in its JTAPI subsystem. The Cisco
CallManager PG handles CTI Manager failover by using its two sides, A and B, which both log in with the
same JTAPI user to their respective CTI Managers upon initialization. However, to conserve system
resources in the Cisco CallManager cluster, only one Cisco CallManager PG side registers and monitors
the user devices. The other side of the Cisco CallManager and VRU PG stays
in hot-standby mode, waiting to be activated immediately upon failure of the active side.
The CTI applications can use the same JTAPI user multiple times to log into separate CTI Managers.
This feature allows you to load-balance the CTI application connections across the cluster, and it adds
an extra layer of failover and redundancy at the CTI application level by allowing multiple connections
to separate CTI Managers while using the same JTAPI user to maintain control. However, keep in mind
that every time a JTAPI connection is established with a CTI Manager (JTAPI user logs into a CTI
Manager), the server CPU and memory usage will increase because the CTI application registers and
monitors events on all the devices associated with the JTAPI user. Therefore, make sure to allocate the
CTI application devices so that they are local to the CTI Manager where the application is connected.
(See Figure 3-6.)
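The application-side failover described above can be sketched as follows. This is illustrative pseudologic in Python, not the Cisco JTAPI client API; the class, host names, and connect function are invented for the example. It shows the pattern the VRU PG uses: try the primary CTI Manager, and fall back to the secondary if the connection fails.

```python
class CtiManagerClient:
    """Connects to a primary CTI Manager and falls back to the secondary,
    the way a VRU PG configured with two CTI Managers behaves."""

    def __init__(self, cti_managers, connect):
        self.cti_managers = list(cti_managers)  # ordered: primary first
        self.connect = connect                  # injected connect function

    def login(self, jtapi_user):
        last_error = None
        for host in self.cti_managers:
            try:
                # Success: the application registers and monitors its
                # devices through this one CTI Manager only.
                return self.connect(host, jtapi_user)
            except ConnectionError as err:
                last_error = err  # primary unreachable: try the secondary
        raise last_error


# Simulated connect: the primary is down, so the client lands on the secondary.
def fake_connect(host, user):
    if host == "ctimgr-primary":
        raise ConnectionError("primary CTI Manager unreachable")
    return f"{user}@{host}"

client = CtiManagerClient(["ctimgr-primary", "ctimgr-secondary"], fake_connect)
print(client.login("pguser"))  # pguser@ctimgr-secondary
```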
Figure 3-6 shows two external CTI applications using the CTI Manager, the Cisco CallManager PG, and
the IP IVR (CRS). The Cisco CallManager PG logs into the CTI Manager using the JTAPI account
User 1, while IP IVR (CRS) uses User 2. Each subscriber has two phones to load-balance the calls, and
each server has one JTAPI connection to load-balance the CTI applications.
To avoid overloading the available resources, it is best to load-balance devices (phones, gateways, ports,
CTI Route Points, CTI Ports, and so forth) and CTI applications evenly across all the nodes in the Cisco
CallManager cluster.
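As a simple illustration of even distribution (a hypothetical helper, not a Cisco tool), a round-robin assignment spreads devices across the subscribers in a cluster so that no single node carries a disproportionate share:

```python
from itertools import cycle

def balance_devices(devices, subscribers):
    """Assign devices (phones, CTI ports, CTI route points) to subscribers
    in round-robin order for an even load."""
    assignment = {sub: [] for sub in subscribers}
    for device, sub in zip(devices, cycle(subscribers)):
        assignment[sub].append(device)
    return assignment

phones = [f"SEP{n:04d}" for n in range(5)]
print(balance_devices(phones, ["sub1", "sub2"]))
# {'sub1': ['SEP0000', 'SEP0002', 'SEP0004'], 'sub2': ['SEP0001', 'SEP0003']}
```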
Cisco CallManager and CTI Manager design should be the second design stage, right after the network
design stage, and deployment should occur in this same order. The reason is that the IP telephony
infrastructure must be in place to dial and receive calls using its devices before you can deploy any
telephony applications. Before moving to the next design stage, make sure that a PSTN phone can call
an IP phone and that this same IP phone can dial out to a PSTN phone, with all the call survivability
capabilities considered for treating these calls. Also keep in mind that the Cisco CallManager cluster is
the heart of the IPCC system, and any server failure in a cluster will take down two services (CTI and
Cisco CallManager), thereby adding extra load to the remaining servers in the cluster.
Distribute Cisco CallManager devices (phones, CTI ports, and CTI route points) evenly across all Cisco
CallManagers. Also make sure that every server can handle the load in the worst-case scenario, where it
is the only remaining server in its cluster. For more information on how to load-balance the Cisco
CallManager clusters, refer to the Cisco IP Telephony Solution Reference Network Design (SRND)
guide, available at
http://www.cisco.com/go/srnd
Step 1 Create a Cisco CallManager redundancy group, and add subscribers to the group. (Publishers and TFTP
servers should not be used for call processing, device registration, or CTI Manager use.)
Step 2 Designate two CTI Managers to be used for each side of the duplex Peripheral Gateway (PG).
Step 3 Assign one of the CTI Managers to be the JTAPI service of the Cisco CallManager PG side A. (See
Figure 3-7.)
Step 4 Assign the remaining CTI Manager to be the JTAPI service of the Cisco CallManager PG side B. (See
Figure 3-7.)
Figure 3-8 High Availability with Two IP IVR Servers and One Cisco CallManager Cluster
You can increase IP IVR (CRS) availability by using one of the following optional methods:
• Call-forward-busy and call-forward-on-error features in Cisco CallManager. This method is more
complicated, and Cisco recommends it only for special cases where a few critical CTI route points
and CTI ports absolutely must have high availability down to the call processing level in Cisco
CallManager. For more information on this method, see IP IVR (CRS) High Availability Using
Cisco CallManager, page 3-13.
• ICM script features to check the availability of an IP IVR prior to sending a call to it. For more
information on this method, see IP IVR (CRS) High Availability Using ICM, page 3-13.
Note Do not confuse the IP IVR (CRS) subsystems with services. IP IVR uses only one service, the Cisco
Application Engine service. The IP IVR subsystems are connections to external applications such as the
CTI Manager and ICM.
Note When using the call forwarding features to implement high availability of IP IVR ports, avoid creating
a loop in the event that all the IP IVR servers are unavailable. Basically, do not establish a path back to
the first CTI port that initiated the call forwarding.
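One way to sanity-check a forwarding plan for the loop this note warns against is to walk the chain of forward targets. The sketch below is a hypothetical helper (not CallManager configuration; the port names are invented) that flags a chain which revisits a port:

```python
def has_forward_loop(forward_to, start):
    """forward_to maps a CTI port to its call-forward target.
    Returns True if following the chain from `start` revisits a port."""
    seen = {start}
    current = start
    while current in forward_to:
        current = forward_to[current]
        if current in seen:
            return True
        seen.add(current)
    return False

# ivr1 -> ivr2 -> ivr1 forwards back to the originating port: a loop.
print(has_forward_loop({"ivr1": "ivr2", "ivr2": "ivr1"}, "ivr1"))  # True
print(has_forward_loop({"ivr1": "ivr2", "ivr2": "ivr3"}, "ivr1"))  # False
```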
Note All calls at the IP IVR are dropped if the IP IVR server, IVR-to-CallManager JTAPI link, or the IP IVR
PG fails.
Note Calls in ISN are not dropped if the Application Server or ISN PG fails. As part of the fault-tolerant
design, the TCL scripts provided with the ISN images and running in the voice gateways can redirect
such calls to another ISN Voice Browser on another ISN-controlled gateway.
For more information on these options, review the ISN product documentation at:
http://www.cisco.com/univercd/cc/td/doc/product/icm/isn/isn21/index.htm
• Agent Reporting and Management (ARM) and Task Event Services (TES) Connections
ARM and TES services provide call (ARM) and non-voice (TES) state and event notification from
the IPCC CTI Server to the multi-channel systems. These connections provide agent information to
the email and web environments, and they accept and process task requests from them. The
connection is a TCP/IP socket that connects to the agent's associated CTI Server, which can be
deployed as a redundant or duplex pair on the Agent Peripheral Gateway.
The system can support multiple dialers across the enterprise, all under control of the central Campaign
Manager software. Although dialers do not function as a redundant (duplex) pair the way a Peripheral
Gateway does, when a pair of dialers is under control of the Campaign Manager, the failure of one dialer
is handled automatically and calls continue to be handled by the surviving dialer. Any calls already
connected to agents remain connected and experience no impact from the failure.
For smaller implementations, the Dialer could be co-resident on the IPCC Peripheral Gateway. For
larger systems, the Dialer should be on its own server, or you could possibly use multiple Dialers under
control of the central Campaign Manager.
Recommendations for high availability:
• Deploy the Media Routing Peripheral Gateways in duplex pairs.
• Deploy Dialers on their own servers as standalone devices to eliminate a single point of failure. (If
they were co-resident on a PG, the dialer would go down if the PG server failed.)
• Deploy multiple Dialers and make use of them in the Campaign Manager to allow for automatic fault
recovery to a second Dialer in the event of a failure.
• Include Dialer "phones" (virtual phones in Cisco CallManager) in redundancy groups in Cisco
CallManager to allow them to fail-over to a different subscriber, as would any other phone or device
in the Cisco CallManager cluster.
If a device monitored by the PG is not registered to that specific Cisco CallManager server in the cluster, the CTI Manager
forwards that request via Cisco CallManager SDL links to the other Cisco CallManager servers in the
cluster. There is no need for a PG to connect to multiple Cisco CallManager servers in a cluster.
Duplex Cisco CallManager PG implementations are highly recommended because the PG will have only
one connection to the Cisco CallManager cluster using a single CTI Manager. If that CTI Manager were
to fail, the PG would no longer be able to communicate with the Cisco CallManager cluster. Adding a
redundant or duplex PG allows the ICM to have a second pathway or connection to the Cisco
CallManager cluster using a second CTI Manager process on a different Cisco CallManager server in
the cluster.
The minimum requirement for ICM high-availability support for CTI Manager and IP IVR (CRS) is a
duplex (redundant) Cisco CallManager PG environment with one Cisco CallManager cluster containing
at least two servers. Therefore, the minimum configuration for a Cisco CallManager cluster in this case
is one publisher and one subscriber. (See Figure 3-15.)
Figure 3-15 ICM High Availability with One Cisco CallManager Cluster
Redundant ICM servers can be located at the same physical site or geographically distributed. In both
cases, the ICM Call Router and Logger/Database Server processes are interconnected through a private,
dedicated LAN. If the servers are located at the same site, you can provide the private LAN by inserting
a second NIC in each server (sides A and B) and connecting them with a crossover cable. If the
servers are geographically distributed, you can provide the private LAN by inserting a second NIC
in each server (sides A and B) and connecting them with a dedicated T1 line that meets the specific
network requirements for this connection as documented in the Cisco ICM Software Installation Guide,
available at
http://www.cisco.com/univercd/cc/td/doc/product/icm/icmentpr/icm50doc/coreicm5/plngupin/instl
gd.pdf
Within the ICM PG, two software processes are run to manage the connectivity to the Cisco
CallManager cluster: the JTAPI Gateway and the CallManager PIM. The JTAPI Gateway is started by
the PG automatically and runs as a node-managed process, which means that the PG will monitor this
process and automatically restart it if it should fail for any reason. The JTAPI Gateway handles the
low-level JTAPI socket connection protocol and messaging between the PIM and the Cisco CallManager
CTI Manager, and it is specific to the version of Cisco CallManager.
The ICM PG PIM is also a node-managed process and is monitored for unexpected failures and
automatically restarted. This process manages the higher-level interface between the ICM and the Cisco
CallManager cluster, requesting specific objects to monitor and handling route requests from the Cisco
CallManager cluster.
In a duplex ICM PG environment, both JTAPI services from both Cisco CallManager PG sides log into
the CTI Manager upon initialization. Cisco CallManager PG side A logs into the primary CTI Manager,
while PG side B logs into the secondary CTI Manager. However, only the active side of the Cisco
CallManager PG registers monitors for phones and CTI route points. The duplex ICM PG pair works in
hot standby mode, with only the active PG side PIM communicating with the Cisco CallManager cluster.
The standby side logs into the secondary CTI Manager only to initialize the interface and prime it for a
failover. The registration and initialization services of the Cisco CallManager devices take a significant
amount of time, and having the CTI Manager primed significantly decreases the time for failover.
In duplex PG operation, the PG side that is able to connect to the ICM Call Router Server and request
configuration information first will be the side that goes active. Which side goes active is not
determined by the Side A or Side B designation of the PG device; it depends only upon which side
connects to the Call Router first, which ensures that the PG side with the best connection to the Call
Router is the one that attempts to go active.
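The activation rule can be illustrated with a trivial sketch (hypothetical names; the actual PG election is internal to ICM software): whichever side reaches the Call Router first goes active, regardless of its A/B designation.

```python
def active_pg_side(connect_times):
    """connect_times maps a PG side to the seconds it took to reach the
    ICM Call Router; the fastest successful connection goes active,
    regardless of the A/B designation."""
    return min(connect_times, key=connect_times.get)

# Side B reaches the Call Router first, so side B goes active.
print(active_pg_side({"A": 2.5, "B": 0.8}))  # B
```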
Figure 3-16 Cisco CallManager PGs Cannot Cross-Connect to Backup CTI Managers
• Call processing continues for any devices not registered to Cisco CallManager subscriber A. Call
processing also continues for those devices on subscriber A when they are re-registered with their
backup subscriber.
• Agents on an active call will stay in their connected state until they complete the call; however, the
agent desktop will be disabled to prevent any conference, transfer, or other telephony events during
the failover. After the agent disconnects the active call, that agent's phone will re-register with the
backup subscriber, and agent desktop functionality will be restored to the same state prior to
failover.
• When Cisco CallManager A recovers, phones and gateways re-home to it. This re-homing can be
set up on Cisco CallManager to gracefully return groups of phones and devices over time or to
require manual intervention during a maintenance window to minimize the impact to the call center.
• Call processing continues normally after the phones and devices have returned to their original
subscriber.
Because PG side B is already logged into the secondary (now primary) CTI Manager, the device registration and
initialization time is significantly shorter than if the JTAPI service on PG side B had to log into the CTI
Manager at failover time.
The following conditions apply to this scenario:
• All phones and gateways are registered with Cisco CallManager A.
• All phones and gateways are configured to re-home to Cisco CallManager B (that is, B is the
backup).
• Cisco CallManagers C and D are each running a separate instance of CTI Manager.
• When Cisco CallManager C fails, PG side A detects a failure of the CTI Manager on that server and
induces a failover to PG side B.
• PG side B registers all dialed numbers and phones with Cisco CallManager D, and call processing
continues.
• After an agent disconnects from all calls, that agent's desktop functionality is restored to the same
state prior to failover.
• When Cisco CallManager C recovers, PG side B continues to be active and uses the CTI Manager
on Cisco CallManager D.
• There is no impact to the agents, calls in progress, or calls in queue. The system can continue to
function normally; however, the Call Routers will be in simplex mode until the private network link
is restored.
If the two private network connections were combined into one link, the failures would follow the same
path; however, the system would be running in simplex on both the Call Router and the Peripheral
Gateway. If a second failure were to occur at that point, the system could lose some or all of the call
routing and ACD functionality.
– If the agent desktop (CTI OS or Cisco Agent Desktop) is registered to the CTI OS Server at the
side-B site but the active Peripheral Gateway side is at the side-A site
Under normal operation, the CTI OS desktop (and Cisco Agent Desktop Server) will
load-balance their connections to the CTI OS Server pair. At any given time, half the agent
connections would be on a CTI OS server that has to cross the visible network to connect to the
active Peripheral Gateway CTI Server (CG). When the visible network fails, the CTI OS Server
detects the loss of connection with the remote Peripheral Gateway CTI Server (CG) and
disconnects the active agent desktop clients to force them to re-home to the redundant CTI OS
Server at the remote site. The CTI OS agent desktop is aware of the redundant CTI OS server
and will automatically use this server. During this transition, the agent desktop will be disabled
and will return to operational state as soon as it is connected to the redundant CTI OS server.
(The agent may be logged out or put into the not-ready state, depending upon the /LOAD parameter
defined for the Cisco CallManager Peripheral Gateway in ICM Config Manager.)
• Agents will be impacted as noted above if their IP Phones are registered to the side of the Cisco
CallManager cluster opposite the location of their active Peripheral Gateway and CTI OS Server
connection. Only agents that were active on the surviving side of the Peripheral Gateway with
phones registered locally to that site will not be impacted.
At this point, the Call Router and Cisco CallManager Peripheral Gateway will run in simplex mode, and
the system will accept new calls for IPCC call treatment only from the surviving side. The IP-IVR/ISN
functionality will also be limited to the surviving side.
When the active Cisco CallManager fails, the agent desktops show the agents as being logged out, their
IP phones display a message stating that the phone has gone off-line, and all the IP phone soft keys are
grayed out until the phones fail over to the backup Cisco CallManager. To continue receiving calls, the
agents must wait for their phones to re-register with a backup Cisco CallManager to have their desktop
functionality restored by the CTI server to the state prior to the Cisco CallManager service failure. Upon
recovery of the primary Cisco CallManager, the agent phones re-register with their original service
because all the Cisco CallManager devices are forced to register with their home Cisco CallManager.
In summary, the Cisco CallManager service is separate from the CTI Manager service, which talks to
the Cisco CallManager PG via JTAPI. The Cisco CallManager service is responsible for registering the
IP phones, and its failure does not affect the Cisco CallManager PGs. From a Cisco CallManager
perspective, the PG does not go off-line because the Cisco CallManager server running CTI Manager
remains operational. Therefore, the PG does not need to fail over.
IP IVR (CRS)
When a CTI Manager fails, the IP IVR (CRS) JTAPI subsystem shuts down and restarts, attempting to
connect to the secondary CTI Manager if one is specified. In addition, all voice calls at this IP
IVR are dropped. If a secondary CTI Manager is available, the IP IVR logs into it
and re-registers all the CTI ports associated with the IP IVR JTAPI user. After all the Cisco CallManager
devices are successfully registered with the IP IVR JTAPI user, the server resumes its Voice Response
Unit (VRU) functions and handles new calls. This does not impact the Internet Service Node (ISN)
because it does not depend upon the Cisco CallManager JTAPI service.
ICM
The ICM is a collection of services and processes within those services. The failover and recovery
process for each of these services is unique and requires careful examination to understand the impact
on other parts of the IPCC solution, including other ICM services.
As stated previously, all redundant ICM services discussed in this chapter must be located at the same
site and connected through a private LAN. You can provide the private LAN by installing a second
network interface card (NIC) in each server (sides A and B) and connecting them with a crossover cable.
By doing this, you can eliminate all external network equipment failures.
Note Agents should not push any buttons during desktop failover because these keystrokes can be buffered
and sent to the CTI server when it completes its failover and restores the agent states.
Once the CTI Manager or PG completes its failover, the agents can return to their previous call state
(talking, ready, not ready, and so forth). At this point, the agents should also be able to release, transfer,
or conference calls if they were on a call at the time of the failure. All the call data that had been collected
and stored via a call data update message is retained on the agent desktops, recovered, and matched with
call context information saved on the PG. However, all agents without active calls are reset to the default
Not Ready state. In addition, the Longest Available Agent (LAA) algorithm resets the timers for all the
agents to zero.
Figure 3-21 Redundant ICM VRU PGs with Two IP IVR Servers
When the failed Logger returns to service, it recovers the data that was written to the
redundant Logger while it was off-line. The Loggers maintain a recovery key that tracks the date and
time of each entry recorded in the database, and these keys are used to restore data to the failed
Logger over the private network.
If the Logger was off-line for more than 12 hours, the system will not automatically resynchronize the
databases. In this case, resynchronization has to be done manually using the ICMDBA application.
Manual resynchronization allows the system administrator to decide when to perform this data transfer
on the private network, perhaps scheduling it during a maintenance window when there would be little
call processing activity in the system.
The Logger replication process, which sends data from the Logger database to the HDS Admin
Workstations, also automatically replicates each new row written to the Logger database when the
synchronization takes place.
There is no impact to call processing during a Logger failure; however, the HDS data that is replicated
from that Logger would stop until the Logger can be restored.
Additionally, if the Outbound Option is used, the Campaign Manager software is loaded on only one of
the Logger platforms (must be Logger A). If that platform is out of service, any outbound calling will
stop until the Logger can be restored to operational status.
Administrative Workstation
Real-Time Distributors are clients of the ICM Call Router real-time feed
that provides real-time information about the entire IPCC across the enterprise. Real-Time Distributors
at the same site can be set up as part of an Admin Site that includes a designated primary real-time
distributor and one or more secondary real-time distributors. Another option is to add Client Admin
Workstations which do not have their own local SQL databases and are homed to a Real-Time
Distributor for their SQL database and real-time feed.
The Admin Site reduces the number of real-time feed clients the ICM Call Router has to service at a
particular site. For remote sites, this is important because it can reduce the required bandwidth to support
remote Admin Workstations across a WAN connection.
When using an Admin Site, the primary real-time distributor is the one that will register with the ICM
Call Router for the real-time feed, and the other real-time distributors within that Admin Site register
with the primary real-time distributor for the real-time feed. If the primary real-time distributor is down
or does not accept the registration from the secondary real-time distributors, they will register with the
ICM Call Router for the real-time feed. Client AWs that cannot register with the primary or secondary
real-time distributors will not be able to perform any Admin Workstation tasks until the distributors are
restored.
Alternatively, each real-time distributor could be deployed in its own Admin Site regardless of the
physical site of the device. This will create more overhead for the ICM Call Router to maintain multiple
real-time feed clients; however, it will prevent a failure of the primary real-time distributor from taking
down the secondary distributors at the site.
Additionally, if the Admin Workstation is being used to host the ConAPI interface for the Multi-Channel
Options (Cisco Email Manager and Cisco Content Server), any configuration changes made to the ICM,
Cisco Email Manager, or Cisco Content Server systems will not be passed over the ConAPI interface
until it is restored.
CTI Server
The CTI Server monitors the PIM data traffic for specific CTI messages (such as "call ringing" or "off
hook" events) and makes them available to CTI clients such as the CTI OS Server or Cisco Agent
Desktop Enterprise Server. It also processes third-party call control messages (such as "make call" or
"answer call") from the CTI clients and sends these messages via the PIM interface of the PG to Cisco
CallManager to process the event on behalf of the agent desktop.
CTI Server is redundant on duplex CTI Servers or can be co-resident on the PG servers. (See
Figure 3-23.) It does not, however, maintain agent state in the event of a failure. Upon failure of the CTI
Server, the redundant CTI server becomes active and begins processing call events. CTI OS Server is a
client of the CTI Server and is designed to monitor both CTI Servers in a duplex environment and
maintain the agent state during failover processing. CTI OS agents will see their desktop buttons
gray-out during the failover to prevent them from attempting to perform tasks while the CTI Server is
down. The buttons will be restored as soon as the redundant CTI Server is restored, and the agent does
not have to log on again to the desktop application.
The CTI Server is also critical to the operation of the Multi-Channel Options (Cisco Email Manager and
Cisco Content Server) as well as the Outbound Option. If the CTI Server is down on both sides of the
duplex agent Peripheral Gateway pair, none of the agents for that Agent Peripheral Gateway will be able
to log into these applications.
Figure 3-23 Redundant CTI Servers with No Cisco Agent Desktop Server Installed
CTI OS Considerations
CTI OS acts as client to CTI Server and provides agent and supervisor desktop functionality for IPCC.
It manages agent state and functionality during a failover of CTI Server, and it can be deployed as
redundant CTI OS Servers. The CTI OS Agent Desktop load-balances the agents between the redundant
servers automatically, and agents sitting next to each other may in fact be registered to two different CTI
OS Servers.
The CTI Object Server (CTI OS) consists of two services, the CTI OS service and the CTI driver. If
either of these services fails, the active CTI OS fails over to its peer server. Therefore, it is important
to keep both of these services active at all times.
Other Considerations
An IPCC failover can affect other parts of the solution. Although IPCC may stay up and running, some
data could be lost during its failover, or other products that depend on IPCC to function properly might
not be able to handle an IPCC failover. This section examines what happens to other critical areas in the
IPCC solution during and after failover.
Reporting
The IPCC reporting feature uses real-time, five-minute and half-hour intervals to build its reporting
database. Therefore, at the end of each five-minute and half-hour interval, each Peripheral Gateway will
gather the data it has kept locally and send it to the Call Routers. The Call Routers process the data and
send it to their local Logger and Database Servers for historical data storage. If the deployment has the
Historical Data Server (HDS) option, that data is then replicated to the HDS server from the Logger as
it is written to the Logger database.
The Peripheral Gateways provide buffering (in memory and on disk) of the five-minute and half-hour
data collected by the system to handle network connectivity failures or slow network response as well
as automatic retransmission of data when the network service is restored. However, physical failure of
both Peripheral Gateways in a redundant pair can result in loss of the half-hour or five-minute data that
has not been transmitted to the Central Controller. Cisco recommends the use of redundant Peripheral
Gateways to reduce the chance of losing both physical hardware devices and their associated data during
an outage window.
When agents log out, all their reporting statistics stop. The next time the agents log in, their real-time
statistics start from zero. Typically, ICM failover does not force the agents to log out; however, it does
reset their agent statistics when the failover is complete, although their agent desktop functionality is
restored to its pre-failover state.
For further information, refer to the Cisco IP Contact Center Reporting Guide, available at
http://www.cisco.com/univercd/cc/td/doc/product/icm/icmentpr/icm50doc/icm5rept/index.htm
Central to designing an IP Contact Center (or any call center) is the proper sizing of its resources. This
chapter discusses the tools and methodologies needed to determine the required number of call center
agents (based on customer requirements such as call volume and service level desired), the number of
IP IVR ports required for various call scenarios (such as call treatment, queuing, and self-service
applications), and the number of voice gateway ports required to carry the traffic volume coming from
the PSTN or other TDM source such as PBXs and TDM IVRs.
The methodologies and tools presented in this chapter are based on traffic engineering principles using
the Erlang-B and Erlang-C models applied to the various resources in an IPCC deployment. Examples are
provided for an IPCC deployment to illustrate how resources can be impacted under various call
scenarios such as call treatment in the IP IVR and agent wrap-up time. These tools and methodologies
are intended as building blocks for sizing call center resources and for any telephony applications in
general.
busy hour (the average of the 10 busiest hours in one year). This average is not always applied, however,
when staffing is required to accommodate a marketing campaign or a seasonal busy hour such as an
annual holiday peak. In a call center, staffing for the maximum number of agents is determined using peak
periods, but staffing requirements for the rest of the day are calculated separately for each period
(usually every hour) for proper scheduling of agents to answer calls versus scheduling agents for offline
activities such as training or coaching. For trunks or IVR ports (in most cases), it is not practical to add
or remove trunks or ports daily, so these resources are sized for the peak periods. In some retail
environments, additional trunks could be added during the peak season and disconnected afterwards.
Servers
Servers are resources that handle traffic loads or calls. There are many types of servers in a call center,
such as PSTN trunks and gateway ports, agents, voicemail ports, and IVR ports.
Talk Time
Talk time is the amount of time an agent spends talking to a caller, including the time an agent places a caller
on hold and the time spent during consultative conferences.
Erlang
Erlang is a measurement of traffic load during the busy hour. The Erlang is based on having 3600
seconds (60 minutes, or 1 hour) of calls on the same circuit, trunk, or port. (One circuit is busy for one
hour regardless of the number of calls or how long the average call lasts.) If a contact center receives 30
calls in the busy hour and each call lasts for six minutes, this equates to 180 minutes of traffic in the busy
hour, or 3 Erlangs (180 min/60 min). If the contact center receives 100 calls averaging 36 seconds each
in the busy hour, then total traffic received is 3600 seconds, or 1 Erlang (3600 sec/3600 sec).
Use the following formula to calculate the Erlang value:
Traffic in Erlangs = (Number of calls in the busy hour ∗ AHT in sec) / 3600 sec
The term is named after the Danish telephone engineer A. K. Erlang, the originator of queuing theory
used in traffic engineering.
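As a quick sketch, the formula above can be expressed in a few lines of Python (a hypothetical helper, not part of any Cisco tool), reproducing both examples from the text:

```python
def traffic_erlangs(calls_in_busy_hour, aht_seconds):
    """Traffic in Erlangs = (calls in the busy hour * AHT in seconds) / 3600."""
    return calls_in_busy_hour * aht_seconds / 3600.0

# 30 calls averaging 6 minutes (360 seconds) in the busy hour -> 3 Erlangs
print(traffic_erlangs(30, 360))   # 3.0
# 100 calls averaging 36 seconds in the busy hour -> 1 Erlang
print(traffic_erlangs(100, 36))   # 1.0
```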
Blocked Calls
A blocked call is a call that is not serviced immediately. Callers are considered blocked if they are
rerouted to another route or trunk group, delayed and put in a queue, or if they hear a tone (such as a
busy tone) or announcement. The nature of the blocked call will determine the model used for sizing the
particular resources.
Service Level
This term is a standard in the contact center industry, and it refers to the percentage of the offered call
volume (received from the voice gateway and other sources) that will be answered within x seconds,
where x is a variable. A typical value for a sales call center is 90% of all calls answered in less than
10 seconds (some calls will be delayed in a queue). A support-oriented call center might have a different
service level goal, such as 80% of all calls answered within 30 seconds in the busy hour. Your contact
center’s service level goal drives the number of agents needed, the percentage of calls that will be
queued, the average time calls will spend in queue, and the number of PSTN trunks and IP IVR ports
needed. For an additional definition of service level within IPCC products, refer to the IPCC glossary
available online at
http://www.cisco.com
Queuing
When all agents are busy with other callers or are unavailable (after call wrap-up mode), subsequent
callers must be placed in a queue until an agent becomes available. The percentage of calls queued and
the average time spent in the queue are determined by the service level desired and by agent staffing.
Cisco's IPCC solution uses an IP IVR to place callers in queue and play announcements. An IVR can
also be used to handle all calls initially (call treatment, prompt and collect – such as DTMF input or
account numbers – or any other information gathering) and for self-service applications where the caller
is serviced without needing to talk to an agent (such as obtaining a bank account balance, airline
arrival/departure times, and so forth). Each of these scenarios requires a different number of IP IVR
ports to handle the different applications because each will have a different average handle time and
possibly a different call load. The number of trunks or gateway ports needed for each of these
applications will also differ accordingly. (See the section on Sizing Call Center Agents, IVR Ports, and
Trunks, page 4-11, for examples on how to calculate the number of trunks and gateway ports needed.)
Ring delay time (network ring) should be included if calls are not answered immediately. This delay
could be a few seconds on average, and it should be added to the trunk average handle time.
Before you can answer these basic questions, you must have the following minimum set of information
that is used as input to these calculators:
• The busy hour call attempts (BHCA)
• Average handle time (AHT) for each of the resources
• Service level (percentage of calls that are answered within x seconds)
• Grade of service, or percent blockage, desired for PSTN trunks and IP IVR ports
The remaining sections of this chapter help explain the differences between the Erlang-B and Erlang-C
traffic models in simple terms, and they list which model to use for sizing the specific call center
resource (agents, gateway ports, and IP IVR ports). There are various web sites that provide call center
sizing tools free of charge (some offer feature-rich versions for purchase), but they all use the two basic
traffic models, Erlang-B and Erlang-C. Cisco does not endorse any particular vendor product; it is up to
the customer to choose which tool suits their needs. The input required for any of the tools, and the
methodology used, are the same regardless of the tool itself.
Cisco has chosen to develop its own telephony sizing tool, called Cisco IPC Resource Calculator. The
first version discussed here is designed to size call center resources. Basic examples are included later
in this chapter to show how to use the Cisco IPC Resource Calculator. Additional examples are also
included to show how to use the tool when some, but not all, of the input fields are known or available.
Before discussing the Cisco IPC Resource Calculator, the next two sections present a brief description
of the generic Erlang models and the input/output of such tools (available on the internet) to help the
reader who does not have access to the Cisco IPC Resource Calculator or who chooses to use other
non-Cisco Erlang tools.
Erlang-C
The Erlang-C model is used to size agents in call centers that queue calls before presenting them to
agents. This model assumes:
• Call arrival is random.
• If all agents are busy, new calls will be queued and not blocked.
The input parameters required for this model are:
• The number of calls in the busy hour (BHCA) to be answered by agents
• The average talk time and wrap-up time
• The delay or service level desired, expressed as the percentage of calls answered within a specified
number of seconds
The output of the Erlang-C model lists the number of agents required, the percentage of calls delayed or
queued when no agents are available, and the average queue time for these calls.
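The Erlang-C calculation described above can be sketched in Python. This is an illustrative implementation (not taken from any Cisco tool) that uses the standard Erlang-B recurrence plus the common exponential approximation for the fraction of calls answered within the service-level threshold; the function names are hypothetical:

```python
import math

def erlang_b(traffic, servers):
    """Erlang-B blocking probability, via the numerically stable recurrence."""
    b = 1.0
    for n in range(1, servers + 1):
        b = traffic * b / (n + traffic * b)
    return b

def erlang_c(traffic, agents):
    """Erlang-C probability that an arriving call must wait (all agents busy)."""
    b = erlang_b(traffic, agents)
    return agents * b / (agents - traffic * (1.0 - b))

def agents_for_service_level(bhca, aht_sec, slg, answer_within_sec):
    """Smallest agent count whose predicted service level meets the goal."""
    traffic = bhca * aht_sec / 3600.0     # offered load in Erlangs
    agents = int(traffic) + 1             # agent count must exceed the offered load
    while True:
        p_wait = erlang_c(traffic, agents)
        # exponential approximation for calls answered within the threshold
        sl = 1.0 - p_wait * math.exp(-(agents - traffic) * answer_within_sec / aht_sec)
        if sl >= slg:
            return agents
        agents += 1

# 100 calls/hour at 36 s AHT (1 Erlang of load), 90% answered within 10 s
print(agents_for_service_level(100, 36, 0.90, 10))   # 3
```

The loop simply increments the agent count until the predicted service level reaches the goal, mirroring how commercial Erlang-C calculators search for the minimum staffing.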
Erlang-B
The Erlang-B model is used to size PSTN trunks, gateway ports, or IP IVR ports. It assumes the
following:
• Call arrival is random.
• If all trunks/ports are occupied, new calls are lost or blocked (receive busy tone) and not queued.
The input and output for the Erlang B model consists of the following three factors. You need to know
any two of these factors, and the model will calculate the third.
• Busy Hour Traffic (BHT), or the number of hours of call traffic (in Erlangs) during the busiest hour
of operation. BHT is the product of the number of calls in the busy hour (BHCA) and the average
handle time (AHT).
• Grade of Service, or the percentage of calls that are blocked because not enough ports are available
• Ports (lines), or the number of IP IVR or gateway ports
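The third-factor calculation can be sketched as follows, solving for the number of ports given the other two inputs (an illustrative helper, not a Cisco tool; the 3-Erlang result agrees with standard Erlang-B tables):

```python
def erlang_b(traffic, ports):
    """Erlang-B blocking probability, via the numerically stable recurrence."""
    b = 1.0
    for n in range(1, ports + 1):
        b = traffic * b / (n + traffic * b)
    return b

def ports_for_grade_of_service(bht_erlangs, max_blocking):
    """Smallest number of trunks/ports keeping blocking at or below the target."""
    ports = 1
    while erlang_b(bht_erlangs, ports) > max_blocking:
        ports += 1
    return ports

# 3 Erlangs of busy-hour traffic at a 1% grade of service
print(ports_for_grade_of_service(3.0, 0.01))   # 8
```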
Project Identification
This field is a description to identify the project or customer name and the specific scenario for this
calculation. It helps to distinguish the different scenarios run (exported and saved) for a project or a
customer proposal.
Recommended Agents
The number of seated agents (calculated using Erlang-C) required to staff the call center during the busy
hour or busy interval.
Queued Calls
The percentage of all calls queued in the IVR during the busy hour or interval. This value includes calls
queued and then answered within the Service Level Goal as well as calls queued beyond the SLG. For
example, if the SLG is 90% of calls answered within 30 seconds and queued calls are 25%, then there
are 10% of calls queued beyond 30 seconds, and the remaining 15% of calls are queued and answered
within 30 seconds (the SLG).
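The arithmetic in this example can be sketched as follows (a hypothetical helper that, as in the example above, assumes every call answered beyond the SLG threshold was a queued call):

```python
def queued_call_breakdown(queued_fraction, slg_fraction):
    """Split queued calls into those answered beyond the SLG and those within it."""
    beyond_slg = 1.0 - slg_fraction              # calls answered beyond the SLG threshold
    queued_within_slg = queued_fraction - beyond_slg
    return beyond_slg, queued_within_slg

# SLG of 90% within 30 seconds, with 25% of all calls queued
beyond, within = queued_call_breakdown(0.25, 0.90)
print(beyond, within)   # about 0.10 queued beyond 30 s, 0.15 queued but answered within 30 s
```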
Agents Utilization
The percentage of agent time engaged in handling call traffic versus idle time. After-call work time is
not included in this calculation.
Submit
After entering data in all required input fields, click on the Submit button to compute the output values.
Export
Click on the Export button to save the calculator input and output in a comma-separated values (CSV)
format to a location of your choice on your hard drive. This CSV file could be imported into a Microsoft
Excel spreadsheet and formatted for insertion into bid proposals or for presentation to clients or
customers. Multiple scenarios could be saved by changing one or more of the input fields and combining
all outputs in one Excel spreadsheet by adding appropriate titles to columns reflecting changes in the
input. This format makes it easy to compare and analyze the results of multiple scenarios.
Notice that the output shows 1980 calls completed by the voice gateway, out of the total of 2000 calls
attempted from the PSTN. This is because we have requested a provisioning of 1% blockage from our
PSTN provider, which results in 20 calls (1%) being blocked and receiving busy tone out of the total
2000 calls.
Agents
The result of 90 seated agents is determined by using the Erlang-C function embedded in the IPC
Resource Calculator, and calls will be queued to this resource (agents).
Notice that, with 90 agents, the calculated service level is 93% of calls answered within 30 seconds,
which exceeds the desired 90% requested in the input section. Had there been one less agent (89 instead
of 90), then the 90% SLG would not have been met.
This result also means that 7% of the calls will be answered beyond the 30 second SLG. In addition,
there will be 31.7% of calls queued; some will queue less than 30 seconds and others longer. The average
queue time for queued calls is 20 seconds.
If 31.7% of the calls will queue, then 68.3% of the calls will be answered immediately without delay in
a queue, as shown in the output in Figure 4-3.
These two outputs from the Erlang-C calculation are then used as inputs for the embedded Erlang-B
function in the calculator to compute the number of IVR ports required for queuing and the
corresponding PSTN trunks required. The Erlang-B function is used here because a call would receive
a busy signal (be lost) if no trunks or IVR ports were available to answer or service the call.
The following traffic load impacts the required number of IP IVR ports for queuing derived from the
output of the calculator:
• The traffic load presented by the calls that queue (627 queued) with an average queue time of
20 seconds when no agents are available to answer the call immediately. This load shows that
10 IVR ports are required for queuing.
Note that trunks and IVR ports remained virtually the same, except that there is one additional trunk (113
instead of 112). This slight increase is not due to the wrap-up time itself, but rather is a side effect of the
slight change in the SLG (92% instead of 93%) caused by rounding the required agent count up to 116 when
wrap-up time is included.
Inserting these values into the first column, titled Self Service, in the calculator produces the following
results, as illustrated in Figure 4-6:
• 75 IVR ports for self-service
• 75 trunks
These PSTN trunks and IVR ports are in addition to any that might be needed for priority calls (20%)
and normal calls (60%) that require PSTN trunks and IVR ports for queuing and call treatment before
transferring to an agent. The remaining columns could be used if you had multiple trunk and IVR groups
that were not pooled together (multiple self-service applications) or if the IVR employed had the
capability to queue calls at the edge (remote branch with local PSTN incoming gateway), as is the case
with Cisco ISN.
Note There are many Erlang-B calculators available for free on the Web. (Search on Erlang-B.)
Normal Calls
• 18,000 ∗ 60% = 10,800 calls.
• Average time in IVR for call treatment = 171 seconds.
• Average talk time = 5 minutes (300 seconds).
• SLG = 90% of the calls to be answered within 30 seconds.
Inserting these parameters into the IPC Resource Calculator produces the following results (as illustrated
in Figure 4-8):
• 907 agents are required.
• 1469 trunks are required.
• 44 IVR ports are needed for queuing.
• 563 IVR ports are needed for call treatment.
Now that we have sized all the resources required for the three types of calls in this call center example,
we can add the results to determine the total required resources of each type (agents, PSTN trunks, and
IVR ports):
• Agents for high-priority calls (calls answered by agents, no IVR) = 384
• Agents for normal calls (calls transferred to agents after IVR treatment) = 907
• Total agents = 384 + 907 = 1291
• IVR ports for self-service = 75
• IVR ports for queuing = 6 + 40 = 46
• IVR ports for call treatment = 540
• Total IVR ports = 75 + 46 + 540 = 661
• Total PSTN trunks = 75 + 386 + 1469 = 1930
If IP IVR is used, then you must enter the number of call treatment and queuing ports into the
Configuration and Ordering Tool for proper sizing of server resources. You can access the Configuration
and Ordering Tool at
http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1844/prod_how_to_order.html
If Cisco ISN is the IVR type used, refer to the section on Sizing ISN Components, page 4-20, for
additional details on sizing ISN servers.
Note This section uses the same example as in the preceding Call Center Example with IVR Self-Service
Application, page 4-16, but it reiterates the parameters of that example for clarity and simplicity.
[Figure: IVR/queuing on an IOS gateway under ISN, with VXML/H.323 between the ISN and the gateway and GED-125 to the PG; 689 + 921 = 1610 calls]
Agent Shrinkage
Agent shrinkage is a result of any time for which agents are being paid but are not available to handle
calls, including activities such as breaks, meetings, training, off-phone work, unplanned absence,
non-adherence to schedules, and general unproductive time.
Agents Required
This number is based on Erlang-C results for a specific call load (BHCA) and service level.
Agents Staffed
To calculate this factor, divide the number of agents required from Erlang-C by the productive agent
percentage (or 1 minus the shrinkage percentage). For example, if 100 agents are required from Erlang-C
and the shrinkage is 25%, then 100/.75 yields a staffing requirement of 134 agents.
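The staffing adjustment described above is a one-line calculation; a sketch follows (the function name is illustrative):

```python
import math

def agents_staffed(agents_required: int, shrinkage: float) -> int:
    """Divide the Erlang-C agent requirement by the productive percentage
    (1 minus shrinkage) and round up to whole agents."""
    return math.ceil(agents_required / (1.0 - shrinkage))

# The example from the text: 100 required agents at 25% shrinkage.
staffed = agents_staffed(100, 0.25)  # 100 / 0.75 = 133.3, rounded up to 134
```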
load). However, if 600 of the calls arrive during a 15-minute interval and the balance of the calls
(1400) arrive during the rest of the hour, then 106 agents and 123 trunks would be required instead
to answer all 600 calls within the same service level goal. In a sales call center, the potential to
capture additional sales and revenue could justify the cost of the additional agents, especially if the
marketing campaign commercials are staggered throughout the hour, the day, and the various time
zones.
• Consider agent absenteeism, which can cause service levels to go down, thus requiring additional
trunks and IP IVR queuing ports because more calls will be waiting in queue longer and fewer calls
will be answered immediately.
• Adjust agent staffing based on the agent shrinkage factor (adherence to schedules and staffing
factors, as explained in Agent Staffing Considerations, page 4-25).
• Allow for growth, unforeseen events, and load fluctuations. Increase trunk and IVR capacity to
accommodate the impact of these events (real life) compared to Erlang model assumptions.
(Assumptions might not match reality.) If the required input is not available, make assumptions for
the missing input, run three scenarios (low, medium, and high), and choose the best output result
based on risk tolerance and impact to the business (sales, support, internal help desk, industry,
business environment, and so forth). Some trade industries publish call center metrics and statistics,
such as those shown in Table 4-1, available from web sites such as
http://www.benchmarkportal.com. You can use such industry statistics in the absence of any
specific data about your call center (no existing CDR records, historical reports, and so forth).
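One way to apply the low/medium/high scenario advice is to run the same sizing input set through three assumption levels. The sketch below computes only offered load (BHCA multiplied by average handle time); the scenario values are illustrative assumptions, not figures from this guide.

```python
# Sensitivity sketch: the "medium" scenario matches the 18,000 BHCA example;
# the low/high values are illustrative bracketing assumptions.
def offered_load_erlangs(bhca: int, avg_handle_seconds: int) -> float:
    """Offered traffic in erlangs for a given busy-hour call volume."""
    return bhca * avg_handle_seconds / 3600.0

scenarios = {
    "low":    {"bhca": 15000, "aht_s": 270},
    "medium": {"bhca": 18000, "aht_s": 300},
    "high":   {"bhca": 21000, "aht_s": 330},
}

for name, s in scenarios.items():
    print(f"{name}: {offered_load_erlangs(s['bhca'], s['aht_s']):.0f} erlangs offered")
```

Each scenario's offered load would then be fed into the Erlang calculations, and the final choice made based on risk tolerance and business impact.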
Use the output of the IPC Resource Calculator as input for other Cisco configuration and ordering tools
that may require as input, among other factors, the number of IVR ports, number of agents, number of
trunks, and the associated traffic load (BHCA).
Proper sizing of your Cisco IP Contact Center (IPCC) Enterprise solution is important for optimum
system performance and scalability. Sizing considerations include the number of agents the solution can
support, the maximum busy hour call attempts (BHCA), and other variables that affect the number, type,
and configuration of servers required to support the deployment. Regardless of the deployment model
chosen, IPCC Enterprise is based on a highly distributed architecture, and questions about capacity,
performance, and scalability apply to each element within the solution as well as to the overall solution.
This chapter presents best design practices focusing on scalability and capacity for IPCC Enterprise
deployments. The design considerations, best practices, and capacities presented in this chapter are
derived primarily from testing and, in other cases, extrapolated test data. This information is intended to
enable you to size and provision IPCC solutions appropriately.
The information presented in Figure 5-1, Figure 5-2, and Table 5-1 does not apply equally to all
implementations of IPCC. The data is based on testing in particular scenarios, and it serves only as a
guide, along with the sizing variables information in this chapter. As always, you should be conservative
when sizing and should plan for growth.
Note Sizing considerations are based upon capacity and scalability test data. Major ICM software processes
were run on individual servers to measure their specific CPU and memory usage and other internal
system resources. Reasonable extrapolations were used to derive capacities for co-resident software
processes and multiple CPU servers. This information is meant as a guide for determining when ICM
software processes can be co-resident within a single server and when certain processes need their own
dedicated server. Table 5-1 assumes that the deployment scenario includes two fully redundant servers
that are deployed as a duplexed pair. While a non-redundant deployment might theoretically deliver
higher capacity, no independent testing has been done to validate this theory. Therefore, you can and
should refer to Table 5-1 for sizing information about simplexed as well as duplexed deployments.
Note The Cisco IP Contact Center solution does not provide a quad-processor Media Convergence Server
(MCS) at this time. If extra performance is required beyond the limits described in the table below, it
might be possible to use an off-the-shelf quad-processor server in lieu of the MCS 7845. For server
specifications, refer to the Cisco Intelligent Contact Management Software Bill of Materials (BOM)
documentation available at
http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1001/products_usage_guidelines_list.html
The following notes apply to all figures and tables in this chapter:
• The number of agents indicates the number of logged-in agents.
• Server types:
– APG = Agent Peripheral Gateway
– CAD = Cisco Agent Desktop
– HDS = Historical Data Server
– PRG = Progger
– RGR = Rogger
Figure 5-1 Minimum Servers Required for Typical IPCC Deployment with CTI Desktop
The following notes apply to Figure 5-1:
• Sizing is based upon the Cisco MCS 7845 (3.0 GHz or higher) and 5 skill groups per agent.
• Voice Response Unit (VRU) and Cisco CallManager components are not shown.
• For more than 2,000 agents, refer to Table 5-1.
• The Agent Peripheral Gateway (APG) consists of a Generic PG (Cisco CallManager PIM and VRU
PIM), CTI Server, and CTI OS.
• For more information about APG deployment and configuration options, see Figure 5-3 and
Figure 5-4.
Figure 5-2 Minimum Servers Required for Typical IPCC Deployment with Cisco Agent Desktop
The following notes apply to Figure 5-2:
• Sizing is based upon the Cisco MCS 7845 (3.0 GHz or higher) and 5 skill groups per agent.
• Voice Response Unit (VRU) and Cisco CallManager components are not shown.
• For more than 2,400 agents, refer to Table 5-1.
• The Agent Peripheral Gateway (APG) consists of a Generic PG (Cisco CallManager PIM and VRU
PIM), CTI Server, and CTI OS.
Table 5-1 lists, for each component, the supported server models, number of CPUs, maximum agents, and notes.
Progger (Peripheral Gateway, Router, and Logger):
• MCS-7835I-3.0-CC1 or MCS-7835H-3.0-CC1 (1 CPU): maximum 100 agents and 50 simultaneous queued calls.
• MCS-7845H-3.0-CC1 (2 CPUs): maximum 250 agents and 125 simultaneous queued calls.
• Cannot be co-resident with an Administrative Workstation (AW) or Historical Data Server (HDS).
• Logger database is limited to 14 days.
• Maximum of 100 agents on the MCS-7845 if using a co-resident Cisco Agent Desktop server, 50 agents with a co-resident Dialer, and 25 agents with both.
• Outbound Dialer is not supported on the MCS-7835 in the Progger configuration.
Rogger (Router and Logger):
• MCS-7835I-3.0-CC1 or MCS-7835H-3.0-CC1 (1 CPU): maximum 500 agents.
• MCS-7845H-3.0-CC1 (2 CPUs): maximum 1,500 agents.
Router:
• MCS-7845H-3.0-CC1 (2 CPUs): maximum 5,000 agents. MCS-7835 is not supported.
• Third-party quad-processor server (4 CPUs): maximum 6,000 agents.
Logger:
• MCS-7845H-3.0-CC1 (2 CPUs): maximum 5,000 agents. MCS-7835 is not supported.
• Third-party quad-processor server (4 CPUs): maximum 6,000 agents.
Administrative Workstation (AW) and Historical Data Server (HDS):
• MCS-7835I-3.0-CC1 or MCS-7835H-3.0-CC1 (1 CPU): maximum 500 agents.
• MCS-7845H-3.0-CC1 (2 CPUs): maximum 5,000 agents.
• Third-party quad-processor server (4 CPUs): maximum 6,000 agents.
• AW/HDS cannot be co-resident with a Progger, Rogger, Router, Logger, or PG.
• Maximum of 2 AW/HDS supported with a single Logger; maximum of 4 with duplexed Loggers.
• A WebView server can be co-resident with the HDS for up to 50 simultaneous users.
WebView Reporting Server:
• MCS-7835I-3.0-CC1 or MCS-7835H-3.0-CC1 (1 CPU), or MCS-7845H-3.0-CC1 (2 CPUs): maximum 50 simultaneous WebView clients.
• A total of 4 WebView servers may be deployed to reach 200 simultaneous WebView clients.
• The difference between the MCS-7845 and MCS-7835 is the number of agents supported by the AW/HDS.
Agent PG (inbound only):
• MCS-7835I-3.0-CC1 or MCS-7835H-3.0-CC1 (1 CPU): maximum 250 agents. Up to 150 Cisco Agent Desktop agents are supported on an MCS-7835 server Agent PG.
• MCS-7845H-3.0-CC1 (2 CPUs): maximum 500 agents. Up to 300 Cisco Agent Desktop agents are supported on an MCS-7845 server Agent PG.
• VRU ports should not exceed half of the maximum supported agents listed above. Additional VRU PGs can be deployed to accommodate a greater number of VRU ports.
• Refer to Figure 5-3 and Figure 5-4 for more details about Agent PG configuration options, and see Peripheral Gateway and Server Options, page 5-10, for more information on the various Agent PG deployment options.
Voice Response Unit (VRU) PG (use the number of ports instead of agent count):
• MCS-7835I-3.0-CC1 or MCS-7835H-3.0-CC1 (1 CPU): maximum 600 ports.
• MCS-7845H-3.0-CC1 (2 CPUs): maximum 1200 ports.
• Assumes an average of 5 Run VRU Script Nodes per call.
• Maximum of 8 PIMs per MCS-7845 and 4 PIMs per MCS-7835, not to exceed 2 PIMs per 300 ports on a Generic PG.
Agent PG with Outbound Voice (includes Dialer and Media Routing PG):
• MCS-7835H-3.0-CC1 (1 CPU): (Inbound agents) + (2.5 × Outbound agents) < 250.
• MCS-7845H-3.0-CC1 (2 CPUs): (Inbound agents) + (2.5 × Outbound agents) < 500.
• Moving the Dialer off of the Agent PG has no effect on the total number of Outbound agents supported.
• Each transfer to a VRU port is equivalent to an agent.
Dialer only:
• MCS-7835H-3.0-CC1 (1 CPU): maximum 100 agents.
• MCS-7845H-3.0-CC1 (2 CPUs): maximum 200 agents.
• Each transfer to a VRU port is equivalent to an agent.
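The blended-agent constraint for the Agent PG with Outbound Voice can be expressed as a simple capacity check. The limits below follow the Table 5-1 rows; the function name is illustrative.

```python
# Sketch of the Table 5-1 blended-agent constraint:
# inbound + 2.5 * outbound must stay below the server limit.
def agent_pg_within_capacity(inbound: int, outbound: int,
                             server: str = "MCS-7845H") -> bool:
    """True if the inbound/outbound agent mix fits the Agent PG server."""
    limits = {"MCS-7835H": 250, "MCS-7845H": 500}  # per Table 5-1
    return inbound + 2.5 * outbound < limits[server]
```

For example, 300 inbound and 50 outbound agents (300 + 125 = 425) fit on an MCS-7845H, while 200 inbound and 30 outbound agents (200 + 75 = 275) exceed the MCS-7835H limit of 250.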
Agent PG with Media Blender (Collaboration; includes Media Routing PG):
• MCS-7835H-3.0-CC1 (1 CPU) or MCS-7845H-3.0-CC1 (2 CPUs): up to 500 total sessions.
• Media Routing (MR) PG co-residency requires the MCS-7845. See subsequent entries for capacity numbers.
Media Blender:
• MCS-7845H-3.0-CC1 (2 CPUs): up to 500 total sessions. MCS-7835 is not supported.
• With the MCS-7845:
– Single-session chat: 250 agents and 250 callers
– Blended collaboration: 250 agents and 250 callers
– Multi-session chat: 125 agents and 375 callers
Web Collaboration Server:
• MCS-7845H-3.0-CC1 (2 CPUs): 500 total sessions or 250 one-to-one sessions. MCS-7835 is not supported.
• With the MCS-7845:
– Single-session chat: 250 agents and 250 callers
– Blended collaboration: 250 agents and 250 callers
– Multi-session chat: 125 agents and 375 callers
Dynamic Content Adapter (DCA) for Web Option:
• MCS-7845H-3.0-CC1 (2 CPUs): overall limitation of 100 concurrent DCA sessions. MCS-7835 is not supported.
• DCA co-residency is not supported.
Email Manager Server:
• MCS-7845H-3.0-CC1 (2 CPUs): maximum 1000 agents. MCS-7835 is not supported.
• Fewer than 10 agents: all Cisco Email Manager components and databases co-exist on a single server (MCS-7845).
• Up to 250 agents: 2 servers – Cisco Email Manager AppServer, UI Server, and WebView on the first; database server (Primary, LAMBDA, and CIR databases) on the second.
• Up to 500 agents: 4 servers – Cisco Email Manager AppServer on the first; Cisco Email Manager UI Server (first) and WebView server on the second; Cisco Email Manager UI Server (second) on the third; database server on the fourth. (In this scenario, an MCS-7835 may be used for the second UI Server box.)
• Up to 1000 agents: 7 servers – Cisco Email Manager AppServer on the first (quad processor recommended); Cisco Email Manager UI Server (first) and WebView server on the second; Cisco Email Manager UI Server (second) on the third; Cisco Email Manager UI Server (third) on the fourth; Cisco Email Manager UI Server (fourth, required if more than 750 agents) on the fifth; database server (Primary and LAMBDA) on the sixth; database server (CIR) on the seventh. (In this scenario, an MCS-7835 may be used for the n+1 UI Server boxes.)
• For sizing information, refer to the Cisco Intelligent Contact Management Software Bill of Materials (BOM) documentation available at
http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1001/products_usage_guidelines_list.html
Internet Service Node (ISN) Application Server and Voice Browser:
• For the server specifications for the Internet Service Node (ISN), refer to the Cisco Internet Service Node (ISN) Software Bill of Materials available at
http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1006/prod_technical_reference_list.html
IP IVR Server:
• For the IP IVR server specifications, refer to the Cisco IPCC Express and IP IVR Configuration and Ordering Tool, available at
http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1846/prod_how_to_order.html
Agents
The number of agents is another important metric that impacts the performance of most IPCC server components, including Cisco CallManager clusters. For the impact of agents on the performance of Cisco CallManager components, see Sizing Cisco CallManager Servers for IPCC, page 6-1.
Skill Groups
The number of skill groups per agent has significant effects on the CTI OS Server, the Cisco
CallManager PG, and the ICM Router and Logger. Cisco recommends that you limit the number of skill
groups per agent to 5 or fewer, when possible, and that you periodically remove unused skill groups so
that they do not affect system performance. You can also manage the effects on the CTI OS server by
increasing the value for the frequency of statistical updates.
Queuing
The IP IVR places calls in a queue and plays announcements until an agent answers the call. For sizing
purposes, it is important to know whether the IVR will handle all calls initially (call treatment) and direct
the callers to agents after a short queuing period, or whether the agents will handle calls immediately
and the IVR will queue only unanswered calls when all agents are busy. The answer to this question
determines very different IVR sizing requirements and affects the performance of the ICM
Router/Logger and Voice Response Unit (VRU) PG. Required VRU ports can be determined using the
Cisco IPC Resource Calculator. (See Cisco IPC Resource Calculator, page 4-6, for more information.)
Reporting
Real-time reporting can have a significant effect on Logger, Progger, and Rogger processing due to
database access. A separate server is required for an Administrative Workstation (AW) and/or Historical
Data Server (HDS) to off-load reporting overhead from the Logger, Progger, and Rogger.
CTI OS
The CTI OS is most commonly configured as a co-resident component on the Agent PG (see Figure 5-3
and Figure 5-4), supporting up to 500 agents.
Table 5-3 lists additional sizing factors for CTI OS.
Table 5-4 Maximum Number of Agents Supported by a Logical Call Center (LCC)
Summary
Proper sizing of IPCC components requires analysis beyond the number of agents and busy hour call
attempts. Configurations with multiple skill groups per agent, significant call queuing, and other factors
contribute to the total capacity of any individual component. Careful planning and discovery in the
pre-sales process should uncover critical sizing variables, and these considerations should be applied to
the final design and hardware selection.
Correct sizing and design can ensure stable deployments for large systems up to 6,000 agents and
180,000 BHCA. For smaller deployments, cost savings can be achieved with careful planning and
co-resident ICM components (for example, Progger, Rogger, and Agent PG).
Additionally, designers should pay careful attention to the sizing variables that will impact sizing
capacities such as skill groups per Agent. While it is often difficult to determine these variables in the
pre-sales phase, it is critical to consider them during the initial design, especially when deploying
co-resident PGs and Proggers. While new versions will scale far higher, the Cisco Agent Desktop
Monitor Server is still limited in the number of simultaneous sessions that can be monitored by a single
server when monitoring and recording are required.
This chapter discusses the concepts, provisioning, and configuration of Cisco CallManager clusters
when used in an IPCC Enterprise environment. Cisco CallManager clusters provide a mechanism for
distributing call processing across a converged IP network infrastructure to support IP telephony,
facilitate redundancy, and provide feature transparency and scalability.
This chapter covers only the IPCC Enterprise operation of clusters within both single and multiple
campus environments and proposes reference designs for implementation. Before reading this chapter,
Cisco recommends that you study the details about the operations of Cisco CallManager clusters
presented in the Call Processing chapter of the Cisco IP Telephony Solution Reference Network Design
(SRND) guide, available at
www.cisco.com/go/srnd
The information in this chapter builds upon the concepts presented in the Cisco IP Telephony SRND.
Some duplication is necessary to clarify concepts relating to IPCC as an application supported by the
Cisco CallManager call processing architecture. However, the foundational concepts are not duplicated
here, and you should become familiar with them before continuing with this chapter.
This chapter documents general best practices and scalability considerations for sizing the Cisco
CallManager servers used with your IPCC Enterprise deployments. Within the context of this document,
scalability refers to Cisco CallManager server and/or cluster capacity when used in the IPCC Enterprise
environment.
Note A cluster may contain a mix of server platforms, but all servers in the cluster must run the same Cisco
CallManager software release and service pack. The publisher server should be of equal or higher
capability than the subscriber servers. (See Table 6-2.)
• Devices (including phones, music on hold, route points, gateway ports, CTI ports, JTAPI Users, and
CTI Manager) should never reside or be registered on the publisher. Any administrative work on
Cisco CallManager will impact call processing and CTI Manager activities if there are any devices
registered with the publisher.
• Do not use a publisher as a failover or backup call processing server unless you have fewer than 50
agent phones and the installation is not mission critical or is not a production environment. The
Cisco MCS-7825H-3000 is the minimum server required. Any deviations will require review by
Cisco Bid Assurance on a case-by-case basis.
• Any deployment with more than 50 agent phones requires a minimum of two subscriber servers and
a combined TFTP and publisher.
• If you require more than one primary subscriber to support your configuration, then distribute all
agents equally among the cluster nodes. This assumes BHCA is uniform across all agents (average
BHCA processed is about the same on all nodes).
• Similarly, distribute all gateway ports and IP IVR CTI ports equally among the cluster nodes.
• If you require more than one ICM JTAPI user (CTI Manager) and more than one primary subscriber,
then group and configure all devices monitored by the same ICM JTAPI User (third-party
application provider), such as ICM route points and agent devices, in the same server if possible.
• If you have a mixed cluster with IPCC and general office IP phones, group and configure each type
on a separate server if possible (unless you need only one subscriber server). For example, all IPCC
agents and their associated devices and resources (gateway ports, CTI ports, and so forth) would be
on one or more Cisco CallManager servers, and all general office IP phones and their associated
devices (such as gateway ports) would be on other Cisco CallManager servers, as long as cluster
capacity allows. In this case, the 1:1 redundancy scheme is strongly recommended. (See Call
Processing Redundancy with IPCC, page 6-9, for details.)
• Under normal circumstances, place all servers from the Cisco CallManager cluster within the same
LAN or MAN. Cisco does not recommend placing all members of a cluster on the same VLAN or
switch.
• If the cluster spans an IP WAN, you must follow the specific guidelines for clustering over the IP
WAN as described in both the section on Clustering Over the WAN, page 2-15 in this guide, and
the section on Clustering Over the IP WAN in the Cisco IP Telephony Solution Reference Network
Design (SRND) guide, available at
www.cisco.com/go/srnd
For additional Cisco CallManager clustering guidelines, refer to the Cisco IP Telephony Solution
Reference Network Design (SRND) guide at
www.cisco.com/go/srnd
• The default trace file location for the Cisco CallManager and signal distribution layer (SDL) is on
the primary drive. This trace file should be redirected to the secondary F drive array, and the CTI
default trace file location should be directed to the C drive array. This configuration will have the
least impact on disk I/O resources.
Note If your system does not meet the guidelines in this document, or if you consider the system to be complex
(IP Telephony and IPCC mixed with other applications), contact your Cisco Systems Engineer (SE) for
proper sizing of the Cisco CallManager cluster.
In addition to the device information, the Cisco CallManager Capacity Tool also requires information
regarding the dial plan, such as route patterns and translation patterns.
The IPCC input includes entries for agents (inbound and outbound), Internet Service Node (ISN) or IP IVR ports, gateway ports, and the percentage of total calls that are transferred and/or conferenced.
When all the details have been entered, the Cisco CallManager Capacity Tool calculates how many
servers of the desired server type are required, as well as the number of clusters if the required capacity
exceeds a single cluster.
At this time, the Cisco CallManager Capacity Tool is available to all Cisco employees and partners at
http://www.cisco.com/cgi-bin/CT/CCMCT/ct.cgi
The maximum number of IPCC Enterprise agents that a single Cisco CallManager server can support
depends on the server platform, as indicated in Table 6-3.
Table 6-3 Maximum Number of IPCC Enterprise Agents per Cisco CallManager (Release 3.3 or Later) Server Platform
Each entry lists the Cisco CallManager MCS server platform and equivalent server characteristics (footnote 1), the maximum IPCC agents per server (footnote 2), whether the platform is high-availability (footnote 3), and whether it is a high-performance server:
• Cisco MCS-7845H-3000 (Dual Prestonia Xeon 3.06 GHz or higher, 4 GB RAM; BBWC installed, see footnote 4), or HP DL380-G3 3.06 GHz 2-CPU: 500 agents; high-availability; high-performance.
• Cisco MCS-7845H-2400 (Dual Prestonia Xeon 2400 MHz, 4 GB RAM, with the addition of battery-backed write cache, BBWC, installed separately; see footnote 5), or HP DL380-G3 2400 MHz 2-CPU: 500 agents; high-availability; high-performance.
• Cisco MCS-7845H-2400 (Dual Prestonia Xeon 2400 MHz, 4 GB RAM, without BBWC), or HP DL380-G3 2400 MHz 2-CPU: 250 agents; high-availability; high-performance.
• Cisco MCS-7835H-3000 (Prestonia Xeon 3.06 GHz, 2 GB RAM, with the addition of BBWC installed separately), or HP DL380-G3 3.06 GHz 1-CPU: 250 agents; high-availability; not high-performance.
• Cisco MCS-7835H-3000 (Prestonia Xeon 3.06 GHz, 2 GB RAM, without BBWC), or HP DL380-G3 3.06 GHz 1-CPU: 125 agents; high-availability; not high-performance.
• Cisco MCS-7825H-3000 (Pentium 4, 3.06 GHz, 2 GB RAM), or HP DL320-G2 3.06 GHz (see footnote 6): 100 agents; not high-availability; not high-performance.
1. For the latest information on server memory requirements, refer to Product Bulletin No. 2864, Physical Memory Recommendations for Cisco
CallManager Version 4.0 and Later, available at http://www.cisco.com/en/US/products/sw/voicesw/ps556/prod_bulletin0900aecd80284099.html.
2. Agent capacities are based on a maximum of 30 BHCA per agent in the busy hour and failover scenario.
3. A high-availability server supports redundancy for both the power supplies and the hard disks.
4. This server has the battery-backed write cache kit (BBWC) installed.
5. This server does not have the battery-backed write cache kit (BBWC) installed. Without this kit, the capacity would be half the stated limit. The kit must
be ordered and installed separately to achieve the maximum stated agent capacity.
6. The maximum number of IPCC agents supported on a single non-high-availability platform (such as the MCS-7825H) is 50 agents in a mission-critical
call center. With a redundant configuration, this limit does not apply.
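Applying the per-server limits in Table 6-3 together with a 1:1 primary/backup redundancy scheme reduces to a ceiling division. A sketch follows (the function name is illustrative; the 500-agent figure is the Table 6-3 value for an MCS-7845H-3000):

```python
import math

def subscribers_needed(agents: int, agents_per_server: int) -> int:
    """Primary subscribers for the agent load, doubled for 1:1 redundancy.

    agents_per_server comes from Table 6-3 (e.g. 500 on an MCS-7845H-3000)."""
    primaries = math.ceil(agents / agents_per_server)
    return 2 * primaries  # one backup per primary

# 1250 agents on MCS-7845H-3000 servers: ceil(1250 / 500) = 3 primaries,
# so 6 subscribers in total (plus the publisher/TFTP server).
cluster_size = subscribers_needed(1250, 500)
```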
coresident applications. All of these functions can consume additional memory resources within the
Cisco CallManager server. To improve performance, you can install additional certified memory in the
server, up to the maximum supported for the particular platform.
A Cisco CallManager cluster with a very large dial plan containing many gateways, route patterns,
translation patterns, and partitions can take an extended amount of time to initialize when the Cisco
CallManager Service is first started. If the system does not initialize within the default time, there are
service parameters that can be increased to allow additional time for the configuration to initialize. For
details on the service parameters, refer to the online help for Service Parameters in Cisco CallManager
Administration.
With this upgrade method, there is no period (except for the failover period) when devices are registered
to subscriber servers that are running different versions of the Cisco CallManager software. This factor
can be important because the Intra-Cluster Communication Signaling (ICCS) protocol that
communicates between subscribers can detect a different software version and shut down
communications to that subscriber. This action could potentially partition a cluster for call processing,
but SQL and LDAP replication would not be affected.
The 2:1 redundancy scheme allows for fewer servers in a cluster, but it can potentially result in an outage
during upgrades. This is not a recommended scheme for IPCC, although it is supported if it is a customer
requirement and possible outage of call processing is not of concern to the customer.
The 2:1 redundancy scheme enables you to upgrade the cluster using the following method. If the Cisco
CallManager service does not run on the publisher database server, upgrade the servers in the following
order:
Note Cisco does not recommend that you oversubscribe the backup server(s) during the upgrade.
Cisco strongly recommends that you have no more than the maximum of 500 IPCC agents
registered to the backup server during the upgrade. Cisco strongly recommends that you perform
the upgrade during off-peak hours when low call volume occurs.
Step 5 Upgrade each primary server that has the Cisco CallManager service running on it. Remember to
upgrade one server at a time. During the upgrade of the second primary subscriber, there will be some
outage for users and agents subscribed on that server, until the server is upgraded. Similarly, when you
upgrade the fourth primary subscriber, there will be some outage for users and agents subscribed on that
server, until the server is upgraded.
[Figure: 2:1 redundancy cluster options (not recommended for IPCC): a publisher and TFTP server(s) plus subscriber groups sharing backup servers, in five options supporting a maximum of 50, 500, 1000, 1500, and 2000 agents respectively]
[Figure: 1:1 redundancy cluster options: a publisher and TFTP server(s) plus dedicated primary/backup subscriber pairs, in five options supporting a maximum of 50, 500, 1000, 1500, and 2000 agents respectively]
Figure 6-4 1:1 IPCC Enterprise Redundancy with Cisco CallManager Release 3.3 or Later, with 50/50
Load Balancing (High-Performance Server with BBWC Installed)
[Figure body: primary/backup subscriber pairs serving agents 1001 to 1250, 1250 to 1500, 1501 to 1750, and 1750 to 2000]
Note MCS-7845H-2.4 Advanced server does not come with BBWC installed; BBWC must be ordered
separately.
Figure 6-5 1:1 Redundancy for Mixed Office and IPCC Phones with Cisco CallManager Release 3.3 or Later on
MCS-7845H-3000 High-Performance Server with 50/50 Load Balancing
[Figure body: cluster configurations for 250 IPCC agents and 3750 phones, 500 IPCC agents and 7500 phones, and 1000 IPCC agents and 15000 phones]
An agent desktop is a required component of an IPCC deployment. From the agent desktop, the agent
performs agent state control (login, logout, ready, not ready, and wrap-up) and call control (answer,
release, hold, retrieve, make call, transfer, and conference).
Within the Cisco Intelligent Contact Management (ICM) configuration, an IPCC agent desktop is not
statically associated with any specific agent or IP Phone extension. Agents and IP Phone extensions
(device targets) must be configured within the ICM configuration, and both are associated with a specific
Cisco CallManager cluster. When logging in from an agent desktop, the agent is presented with a dialog
box that prompts for agent ID, password (optional, depending upon agent configuration in the ICM), and
the IPCC phone extension to be used for this login session. It is at login time that the agent ID, IP Phone
extension (device target), and agent desktop IP address are all dynamically associated. The association
is released upon agent logout. This mechanism enables an agent to hot-desk from one agent desktop to
another. It also provides for laptop roaming so that an agent can take their laptop to any IP Phone and
log in from that IP Phone (assuming the IP Phone has been configured in the ICM and in Cisco
CallManager to be used in an IPCC deployment). Agents can also log in to other IP Phones using the
extension mobility feature.
All communication from the agent desktop passes through the CTI OS Server (see Figure 7-1). The CTI
OS Server can run on the same Peripheral Gateway (PG) server as the Cisco CallManager PG process
(typical scenario) or on a separate server. If the CTI OS Server runs on its own platform, then that server
is sometimes called a CTI gateway (CG) as opposed to a Peripheral Gateway (PG). The hardware and
third-party software requirements for a CG and PG are the same. Server sizing is discussed in the chapter
on Sizing IPCC Components and Servers, page 5-1.
[Figure 7-1: IP IVR 2, PSTN gateway, SCI link, and IVR 2 PIM, showing IP voice, TDM voice, and CTI/call control data paths]
For each Cisco CallManager PG (and Cisco CallManager cluster), there is one CTI OS Server. The CTI
OS Server and the Cisco CallManager PG communicate with each other via the Open Peripheral
Controller (OPC) process. All communications from the CTI OS Server are passed to the CTI Server,
then via OPC to the Cisco CallManager PG process, then typically to either the ICM Central Controller
or the Cisco CallManager.
There may be one or more CTI OS Servers connecting to the CTI Server. The CTI OS Server interfaces
with the CTI OS desktop and toolkit as well as Cisco Agent Desktop (Release 6.0 and later). All agent
state change requests flow from the agent desktop through CTI OS to the CTI Server to the Cisco
CallManager PG to the ICM Central Controller. The ICM Central Controller monitors the agent state so
that it knows when it can and cannot route calls to that agent and can report on that agent's activities.
Call control (answer, release, hold, retrieve, make call, and so on) flows from the agent desktop through
the CTI OS Server to the CTI Server to the Cisco CallManager PG and then to the Cisco CallManager.
The Cisco CallManager then performs the requested call or device control. It is the role of the Cisco
CallManager PG to keep the IPCC agent desktop and the IP Phone in sync with one another.
• Prepackaged CRM integrations — These integrations are available through Cisco CRM Technology
Partners. These integrations are based on the CTI OS toolkit and are not discussed individually in
this document.
In addition to an agent desktop, a supervisor desktop is available with the Cisco Agent Desktop and
CTI OS options.
Cisco Agent Desktop is a packaged agent and supervisor desktop application. It has a system
administration interface that allows configuration of the desktop and workflow automation. Desktop
configuration includes: defining what buttons are visible; specifying call, voice, and data processing
functions for buttons; and specifying what telephony data will appear on the desktop. The workflow
automation enables data processing actions to be scheduled based on telephony events (for example,
popping data into third-party applications on answer and sending email on dropped events). The
workflow automation interfaces with applications written for Microsoft Windows browsers and terminal
emulators. Some customizations can be as simple as using keystroke macros for screen pops.
While CTI OS is a toolkit, it does provide a pre-built, operational agent and supervisor desktop
executable. Source code for these executables is provided. Source code is also provided with a number
of sample applications, which are included with the toolkit to allow for easy customization. The CTI OS
Toolkit does provide the most flexibility. It allows a custom agent or supervisor desktop to be developed,
as well as offering advanced tools for integrating the desktop to a database, CRM, or other applications.
Aside from the differences between configured versus customized applications, one major distinction
between the two desktop solutions is that Cisco Agent Desktop offers all of the following features:
• Ad-hoc recording (CTI OS users must rely on a third-party recording solution.)
• IP Phone Agents (IPPA) — This is an XML application that allows agents on Cisco 7940 and 7960
IP Phones to log in and perform basic ACD functions from their phones.
• SPAN port silent monitoring — A server-based and switch-based silent monitor solution that works
with IPPA as well as Cisco Agent Desktop agents. CTI OS does offer endpoint monitoring, but it
requires a PC to be running at the agent's location, which is not the case with IPPA.
Prepackaged CRM integrations are provided by the major CRM manufacturers. These packages are
based upon either CTI or CTI OS tools.
Cisco Agent Desktop (with Cisco Supervisor Desktop) and CTI OS desktops cannot co-exist on the same
Cisco CallManager PG; the configuration of agents and supervisors must be kept separate. Cisco
Supervisor Desktop cannot be used to monitor a CTI OS agent desktop, nor can a CTI OS supervisor
monitor a Cisco Agent Desktop agent.
The following sections cover these two desktop options separately. Both rely upon communication with
the CTI Server, as described in the previous section.
One of the base Cisco Agent Desktop services, the Enterprise server, is a monitor-only CTI application
that provides value-added services to the agent desktop. Similarly, the other Cisco Agent Desktop
services provide value-added features such as recording and chatting. Prior to Release 6.0, Cisco Agent
Desktop received its CTI input from the CTI server; with Release 6.0 and later (except for IPCC
Express), Cisco Agent Desktop receives its CTI input from the CTI OS Server.
The Cisco Agent Desktop servers may be co-resident on the Peripheral Gateway (PG). As the number
of agents increases, the Cisco Agent Desktop servers might require a dedicated server. For more
information on server requirements, refer to the chapter on Sizing IPCC Components and Servers, page
5-1.
Figure 7-2 illustrates the system components.
[Figure 7-2: Cisco Agent Desktop system components. The PSTN connects through a Cisco voice gateway to a Cisco Catalyst switch (with an optional SPAN port for silent monitoring) on the Ethernet LAN. The LAN hosts agent IP phones; agent PCs running Cisco Agent Desktop (with optional media termination); a supervisor PC running Cisco Supervisor Desktop and Cisco Agent Desktop; an administrator PC with the Administrative Workstation; the ICM, IP IVR, and Cisco CallManager servers; and a PG with CTI OS hosting the Cisco Desktop Base server, Cisco Desktop VoIP Monitor server, and Cisco Desktop Recording server, with an optional second PG and CTI OS for redundancy.]
Cisco Agent Desktop Release 6.0(1) includes the following new features:
• CTI OS-based implementation
• No longer dependent on shares; configuration data is now stored in Directory Services
• Desktops are automatically updated when new versions are detected at startup
• System redundancy
Cisco Desktop Administrator includes the following new features:
• Configuration settings are set up and maintained through the Cisco Agent Desktop Configuration
Setup utility, which can be accessed through the Desktop Administrator (or as a standalone
program). These configuration settings are no longer set up during the installation process.
• HTTP Post/Get action enables interaction between Agent Desktop (Premium version only) and
web-based applications.
Architecturally, CTI OS Server is positioned between the CTI OS agent desktop and the CTI Server.
CTI OS Server provides a mechanism to maintain agent and call state information so that the agent
desktop can be stateless. This architecture provides the necessary support to develop a browser-based
agent desktop if desired.
The CTI OS system consists of three major components (see Figure 7-3):
• CTI OS Server
• CTI OS Agent Desktop
• CTI OS Supervisor Desktop (only on Cisco IPCC for now)
[Figure 7-3: CTI OS components. On the PG/CTI platform, the Cisco CallManager connects via JTAPI to the PIM and CTI driver, which communicate with the CTI server; the CTI OS server node connects to the CTI server and communicates over TCP/IP with the desktop computer on the agent workstation.]
This chapter presents an overview of the IPCC Enterprise network architecture, deployment
characteristics of the network, and provisioning requirements of the IPCC network. Essential network
architecture concepts are introduced, including network segments, keep-alive (heartbeat) traffic, flow
categorization, IP-based prioritization and segmentation, and bandwidth and latency requirements.
Provisioning guidelines are presented for network traffic flows between remote components over the
WAN, including recommendations on how to apply proper Quality of Service (QoS) to WAN traffic
flows. For a more detailed description of the IPCC architecture and various component internetworking,
see Architecture Overview, page 1-1.
Cisco IP Contact Center (IPCC) has traditionally been deployed using private, point-to-point leased-line
network connections for both its private (duplexed controller, side-to-side) as well as public (Peripheral
Gateway to Central Controller) WAN network structure. Optimal network performance characteristics
(and route diversity for the fault tolerant failover mechanisms) are provided to the IPCC application only
through dedicated private facilities, redundant IP routers, and appropriate priority queuing.
Enterprises deploying networks that share multiple traffic classes, of course, prefer to maintain their
existing infrastructure rather than revert to an incremental, dedicated network. Convergent networks
offer both cost and operational efficiency, and such support is a key aspect of Cisco Powered Networks.
Beginning with IPCC Enterprise Release 5.0, application layer Quality of Service (QoS) packet marking
on the IPCC public path is supported from within the IPCC application, thus simplifying WAN
deployment in a converged network environment when that network is enabled for QoS. QoS
deployment on the public network enables remote Peripheral Gateways (PGs) to share a converged
network and, at the same time, guarantees the stringent ICM/IPCC traffic latency, bandwidth, and
traffic-related prioritization requirements inherent in the real-time requirements of the product. This
chapter presents recommendations for configuring QoS for the traffic flows over the WAN. The public
network that connects the remote PGs to the Central Controller is the main focus.
Historically, two QoS models have been used: Integrated Services (IntServ) and Differentiated Services
(DiffServ). The IntServ model relies on the Resource Reservation Protocol (RSVP) to signal and reserve
the desired QoS for each flow in the network. Scalability becomes an issue with IntServ because state
information of thousands of reservations has to be maintained at every router along the path. DiffServ,
in contrast, categorizes traffic into different classes, and specific forwarding treatments are then applied
to the traffic class at each network node. As a coarse-grained, scalable, and end-to-end QoS solution,
DiffServ is more widely used and accepted. IPCC applications are not aware of RSVP and, therefore,
IPCC does not support IntServ. The QoS considerations in this chapter are based on DiffServ.
Adequate bandwidth provisioning is a critical component in the success of IPCC deployments.
Bandwidth guidelines and examples are provided in this chapter to help with provisioning the required
bandwidth.
Network Segments
The fault-tolerant architecture employed by IPCC requires two independent communication networks.
The private network (or dedicated path) carries traffic necessary to maintain and restore synchronization
between the systems and to allow clients of the Message Delivery Subsystem (MDS) to communicate.
The public network carries traffic between each side of the synchronized system and foreign systems.
The public network is also used as an alternate network by the fault-tolerance software to distinguish
between node failures and network failures.
Note The terms public network and visible network are used interchangeably throughout this document.
A third network, the signaling access network, may be deployed in ICM systems that also interface
directly with the carrier network (PSTN) and that deploy the Hosted ICM/IPCC architecture. The
signaling access network is not addressed in this chapter.
Figure 8-1 illustrates the fundamental network segments for an IPCC Enterprise system with two PGs
(with sides A and B co-located) and two geographically separated CallRouter servers.
Figure 8-1 Example of Public and Private Network Segments for an IPCC Enterprise System
[Figure 8-1: CallRouter A and CallRouter B are linked by the CC/PG private network; PG 1A/1B and PG 2A/2B connect to both CallRouter sides over the public network.]
• Remote call centers connect to each Central Controller side via the public network. Each WAN link
to a call center must have adequate bandwidth to support the PGs and AWs at the call center. The
IP routers in the public network use IP-based priority queuing or QoS to ensure that ICM traffic
classes are processed within acceptable tolerances for both latency and jitter.
• Call centers (PGs and AWs) local to one side of the Central Controller connect to the local Central
Controller side via the public Ethernet, and to the remote Central Controller side over public WAN
links. This arrangement requires that the public WAN network must provide connectivity between
side A and side B. Bridges may optionally be deployed to isolate PGs from the AW LAN segment
to enhance protection against LAN outages.
• To achieve the required fault tolerance, the private WAN link must be fully independent from the
public WAN links (separate IP routers, network segments or paths, and so forth). Independent WAN
links ensure that any single point of failure is truly isolated between public and private networks.
Additionally, public network WAN segments traversing a routed network must be deployed so that
PG-to-CallRouter route diversity is maintained throughout the network. Be sure to avoid routes that
result in common path selection (and, thus, a common point of failure) for the multiple
PG-to-CallRouter sessions (see Figure 8-1).
down if a keep-alive response from the other side is not heard. Microsoft Windows 2000 allows you to
specify keep-alive parameters on a per-connection basis. For ICM public connections, the keep-alive
timeout is set to 5∗400 ms, meaning that a failure can be detected after 2 seconds, as was the case with
the UDP heartbeat prior to Release 5.0.
The reasons for moving to TCP keep-alive are as follows:
• The use of UDP heartbeats creates deployment complexities in a firewall environment. The dynamic
port allocation for heartbeat communications makes it necessary to open a large range of port
numbers, thus defeating the original purpose of the firewall device.
• In a converged network, the algorithms that routers use to handle congestion affect TCP and UDP
differently. As a result, the delays and drops experienced by UDP heartbeat traffic may, in some
cases, bear little correspondence to what the TCP connections themselves experience.
Traffic Flow
This section briefly describes the traffic flows for the public and private networks.
• Medium-priority and low-priority traffic — For the Central Controller, this traffic includes shared
data sourced from routing clients as well as (non-route control) call router messages, including call
router state transfer (independent session). For the OPC (PG), this traffic includes shared non-route
control peripheral and reporting traffic. This class of traffic is sent in TCP sessions designated as
medium-priority and low-priority, respectively, with the private non-high-priority IP address.
• State transfer traffic — State synchronization messages for the Router, OPC, and other synchronized
processes. It is sent in TCP with a private non-high-priority IP address.
Network Provisioning
This section covers:
• Quality of Service, page 8-8
• Bandwidth Sizing, page 8-10
• Bandwidth Requirements for CTI OS Agent Desktop, page 8-11
• Bandwidth Requirements for Cisco Agent Desktop Release 6.0, page 8-13
Quality of Service
This section covers:
• QoS Planning, page 8-8
• Public Network Marking Requirements, page 8-8
• Configuring QoS on IP Devices, page 8-9
• Performance Monitoring, page 8-10
QoS Planning
In planning QoS, a question often arises about whether to mark traffic in the application or at the network
edge. Marking traffic in the application saves the access lists for classifying traffic in IP routers and
switches, and it might be the only option if traffic flows cannot be differentiated by IP address, port
and/or other TCP/IP header fields. As mentioned earlier, ICM currently supports DSCP markings on the
public network connection between the Central Controller and the PG. Additionally, when deployed
with Microsoft Windows Packet Scheduler, ICM offers shaping and 802.1p. The shaping functionality
mitigates the bursty nature of ICM transmissions by smoothing transmission peaks over a given time
period, thereby smoothing network usage. The 802.1p capability, a LAN QoS handling mechanism,
allows high-priority packets to enter the network ahead of low-priority packets in a congested Layer-2
network segment.
Traffic can be marked or remarked on edge routers and switches if it is not marked at its source or if the
QoS trust is disabled in an attempt to prevent non-priority users in the network from falsely setting the
DSCP or 802.1p values of their packets to inflated levels so that they receive priority service. For
classification criteria definitions on edge routers and switches, see Table 8-1.
Note Cisco has begun to change the marking of voice control protocols from DSCP 26 (PHB AF31) to
DSCP 24 (PHB CS3). However, many products still mark signaling traffic as DSCP 26 (PHB AF31).
Therefore, in the interim, Cisco recommends that you reserve both AF31 and CS3 for call signaling.
Table 8-1 Public Network Traffic Markings (Default) and Latency Requirements
Priority   IP Address and Port                Latency       DSCP / 802.1p Using   DSCP / 802.1p Bypassing
                                              Requirement   Packet Scheduler      Packet Scheduler
High       High-priority public IP address    200 ms        AF31 / 3              AF31 / 3
           and high-priority connection port
Medium     High-priority public IP address    1,000 ms      AF31 / 3              AF21 / 2
           and medium-priority connection
           port
Low        Non-high-priority public IP        5 seconds     AF11 / 1              AF11 / 1
           address and low-priority
           connection port
Performance Monitoring
Once the QoS-enabled processes are up and running, the Microsoft Windows Performance Monitor
(PerfMon) can be used to track the performance counters associated with the underlying links. For
details on using PerfMon for this purpose, refer to the Cisco ICM Enterprise Edition Administration
Guide, available at
http://www.cisco.com/en/US/products/sw/custcosw/ps1001/products_administration_guides_list.html
Bandwidth Sizing
This section briefly describes bandwidth sizing for the public (visible) and private networks.
Table 8-2 CTI OS Bandwidth Requirements as a Function of Agent Skill Group Membership (at 20,000 BHCA)
Best Practices and Options for CTI OS Server and CTI OS Agent Desktop
To mitigate the bandwidth demands, use any combination of the following options:
In the case where remote agents have their skill group statistics turned off but the supervisor would like
to see the agent skill group statistics, the supervisor could use a different connection profile with
statistics turned on. In this case, the volume of traffic sent to the supervisor would be considerably less.
For each skill group and agent (or supervisor), the packet size for a skill-group statistics message is
fixed. So an agent in two skill groups would get two packets, and a supervisor observing five skill groups
would get five packets. If we assume 10 agents at the remote site and one supervisor, all with the same
two skill groups configured (in IPCC, the supervisor sees all the statistics for the skill groups to which
any agent in his agent team belongs), then this approach would reduce skill-group statistics traffic by
90% if only the supervisor has statistics turned on to observe the two skill groups but agents have
statistics turned off.
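The arithmetic behind that reduction can be sketched as follows. This is a minimal model, assuming (as described above) one fixed-size statistics packet per receiver per skill group:

```python
def skill_stat_packets(agents, supervisors, skill_groups, agents_enabled):
    """Packets per refresh: one fixed-size skill-group statistics
    packet per receiver per observed skill group."""
    receivers = (agents if agents_enabled else 0) + supervisors
    return receivers * skill_groups

all_on   = skill_stat_packets(10, 1, 2, agents_enabled=True)   # 22 packets
sup_only = skill_stat_packets(10, 1, 2, agents_enabled=False)  # 2 packets
print(f"reduction: {1 - sup_only / all_on:.0%}")  # reduction: 91%
```

The 91% result rounds to the roughly 90% reduction cited above.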
Also, at the main location, if agents want to have their skill-group statistics turned on, they could do so
without impacting the traffic to the remote location if the supervisor uses a different connection profile.
Again, in this case no additional CTI OS servers would be required.
In the case where there are multiple remote locations, assuming only supervisors need to see the
statistics, it would be sufficient to have only one connection profile for all remote supervisors.
Table 8-3 shows the bandwidth usage between Cisco Agent Desktop and the CTI OS and Cisco Agent
Desktop servers for heartbeats and skill statistics. This type of data is passed to and from logged-in
agents at set intervals, regardless of what the agent is doing. The refresh interval for these skill group
statistics was the default setting of 10 seconds. This refresh interval can be configured in CTI OS. Skill
group statistics were also configured in CTI OS as described in the Cisco Agent Desktop Installation
Guide, available at
http://www.cisco.com
Table 8-3 Bandwidth Usage for Heartbeats and Skill Statistics (Bytes Per Second)
Example
If there are 25 remote agents with 10 skills per agent, the number of bytes per second (Bps) sent from
the CTI OS server to those desktops across the WAN can be calculated as follows:
25 agents ∗ (2.1 Bps + (10 skills ∗ 46.4 Bps)) = 11,653 Bps
11,653 Bps ∗ 8 bits per byte = 93,220 average bits per second = 93.22 kilobits per second (kbps)
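The same calculation, parameterized as a sketch; the 2.1 Bps heartbeat rate and 46.4 Bps per-skill statistics rate are the Table 8-3 figures used in the example above:

```python
def ctios_wan_load(agents, skills_per_agent,
                   heartbeat_bps=2.1, skill_stat_bps=46.4):
    """Average CTI OS server-to-desktop WAN load as (bytes/s, bits/s)."""
    bytes_per_sec = agents * (heartbeat_bps + skills_per_agent * skill_stat_bps)
    return bytes_per_sec, bytes_per_sec * 8

bps, bits = ctios_wan_load(25, 10)
print(f"{bps:,.1f} Bps = {bits / 1000:.2f} kbps")  # 11,652.5 Bps = 93.22 kbps
```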
Table 8-4 lists the total bytes of data sent when an agent changes state from Ready to Not Ready and
enters a reason code.
Example
If there are 25 remote agents with 5 skills per agent, each of whom changes agent state one time, the
total number of bytes sent from the CTI OS server to Cisco Agent Desktop is:
25 ∗ 6883 = 172,075 bytes
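As a sketch, the Table 8-4 per-change total (6,883 bytes for an agent with 5 skills) scales linearly across the agent pool:

```python
def state_change_traffic(agents, changes_per_agent=1, bytes_per_change=6883):
    """Total bytes sent from the CTI OS server to Cisco Agent Desktop
    for the given number of agent state changes."""
    return agents * changes_per_agent * bytes_per_change

print(state_change_traffic(25))  # 172075
```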
Table 8-5 lists the total bytes of data required for a typical call scenario. For this call scenario, Cisco
Agent Desktop is used to perform the following functions:
• Transition an agent to the ready state.
• Answer an incoming ACD call using the softphone controls.
• Put the agent in a work ready state.
• Hang up the call using the softphone controls.
• Select wrap-up data.
This scenario includes presenting Expanded Call Context (ECC) variables to the agent. Each ECC
variable is 20 bytes in length, assuming a worst-case scenario.
Example
Assume there are 25 remote agents with 5 skills and 5 ECC variables, who each answer 20 calls in the
busy hour. Also assume a full-duplex network, and use the larger of the To/From bandwidth numbers,
which is 37,205 bytes in this case.
37,205 bytes per call ∗ 25 agents ∗ 20 calls per hour = 18,602,500 bytes per hour.
(18,602,500 bytes per hour) / (3600 seconds per hour) = 5,167 bytes per second (Bps)
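The busy-hour arithmetic above can be expressed as a reusable sketch; 37,205 bytes per call is the larger of the To/From totals from Table 8-5 for this 5-skill, 5-ECC-variable scenario:

```python
def busy_hour_load_bps(bytes_per_call, agents, calls_per_agent_per_hour):
    """Average bytes per second across the busy hour."""
    return bytes_per_call * agents * calls_per_agent_per_hour / 3600

print(round(busy_hour_load_bps(37205, 25, 20)))  # 5167
```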
Note Access to LDAP is not included in the calculation because both Cisco Agent Desktop and Cisco
Supervisor Desktop read their profiles only once, at startup, and then cache it. The numbers in this
example are not based on calls in progress, but on calls attempted or completed. The amount of
bandwidth used is per call, and does not depend on the length of the call (a 1-minute call and a 10-minute
call typically use the same amount of bandwidth, excluding voice traffic). The example does not take
into account the additional traffic generated if calls are transferred, held, or conferenced.
Be sure to mark RTP packets for monitoring, recording, and playback, in addition to other required RTP
and signaling marking. For details on traffic marking, refer to the Cisco IP Telephony Solution
Reference Network Design (SRND) guide, available at
www.cisco.com/go/srnd
Figure 8-2 shows a main office and a remote office. The main office contains the various Cisco Desktop
services and the switch shared with the remote office. Both the main office and the remote office have
Cisco Agent Desktop agents and supervisors. In this diagram, all agents and supervisors belong to the
same logical contact center (LCC) and are on the same team.
[Figure 8-2: The main office and remote office are connected by routers over the WAN. The main office Ethernet hosts the CallManager, ICM, and IP IVR services behind a switch, along with Supervisor A and Agent A, each with an IP phone; the remote office Ethernet hosts Supervisor B and Agent B.]
In the main office, agents and supervisors use IP phones. In the remote office, agents and supervisors
use media termination softphones.
The amount of traffic between the monitor services and the monitoring supervisor is equal to the
bandwidth of one IP phone call (two RTP streams of data). (Monitor services refers to both the VoIP
Monitor service and the Desktop Monitor service.)
When calculating bandwidth, you must use the size of the RTP packet plus the additional overhead of
the networking protocols used to transport the RTP data through the network.
G.711 packets carrying 20 ms of speech data require 64 kbps of network bandwidth. (See Table 8-6.)
These packets are encapsulated by four layers of networking protocols (RTP, UDP, IP, and Ethernet).
Each of these protocols adds its own header information to the G.711 data. As a result, the G.711 data,
once packed into an Ethernet frame, requires 87.2 kbps of bandwidth per data stream as it travels over
the network. An IP phone call consists of two streams, one from A to B and one from B to A. For an IP
phone call using the G.711 codec, both streams require 87.2 kbps.
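The 87.2 kbps figure can be reconstructed from the per-packet overhead. The breakdown below is one common accounting that reproduces the number in the text (RTP 12 + UDP 8 + IP 20 + Ethernet 18 bytes per packet, counting the Ethernet CRC but not the preamble or interframe gap):

```python
G711_PAYLOAD_BPS = 64000            # codec rate
SPEECH_MS_PER_PACKET = 20
PAYLOAD_BYTES = G711_PAYLOAD_BPS // 8 * SPEECH_MS_PER_PACKET // 1000  # 160
OVERHEAD_BYTES = 12 + 8 + 20 + 18   # RTP + UDP + IP + Ethernet (incl. CRC)
PACKETS_PER_SEC = 1000 // SPEECH_MS_PER_PACKET                        # 50

stream_kbps = (PAYLOAD_BYTES + OVERHEAD_BYTES) * PACKETS_PER_SEC * 8 / 1000
print(stream_kbps)      # 87.2 kbps per RTP stream
print(stream_kbps * 2)  # 174.4 kbps for a two-stream monitor session
```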
For full-duplex connections, the bandwidth speed applies to both incoming and outgoing traffic. (For
instance, for a 100-Mbps connection, there is 100 Mbps of upload bandwidth and 100 Mbps of download
bandwidth.) Therefore, an IP phone call consumes the bandwidth equivalent of a single stream of data.
In this scenario, a G.711 IP phone call with no silence suppression requires 87.2 kbps of the available
bandwidth.
Monitor services send out two streams for each monitored call, both going from the service to the
requestor. This means that, for each monitor session, the bandwidth requirement is for two streams
(174.4 kbps with the G.711 codec).
If a VoIP Monitor service is used to monitor an agent's extension, this bandwidth is required between
the VoIP Monitor service and the supervisor's computer. In Figure 8-2, if supervisor A monitors
agent A, this bandwidth is required on the main office LAN. If supervisor A monitors agent B at the
remote office, another VoIP Monitor service is needed in the remote office (not shown in Figure 8-2).
The bandwidth requirement also applies to the WAN link.
If desktop monitoring is used, the bandwidth requirements are between the agent's desktop and the
supervisor's desktop. If supervisor A monitors agent A, this bandwidth is required on the main office
LAN. If supervisor A monitors agent B in the remote office, the bandwidth requirement also applies to
the WAN link.
The Recording and Statistics service is used to record agent conversations. See Table 8-6 for the
bandwidth requirements between the Recording and Statistics service and the monitor service.
Cisco Agent Desktop Release 6.0 introduced a new Recording Service. This service sends RTP streams
to supervisors for recording playback. The bandwidth used for the RTP streams is identical to silent
monitoring. See Table 8-6 for details.
If a VoIP Monitor service is used to monitor or record a call, the bandwidth requirement on the service's
network connection is two streams of voice data.
If a Desktop Monitor service is used, the additional load of the IP phone call is added to the bandwidth
requirement because the IP phone call comes to the same agent where the Desktop Monitor service is
located.
In either case, the bandwidth requirement is the bandwidth between the monitor service and the
requestor:
• VoIP Monitor service to supervisor
• Agent desktop to supervisor
• VoIP service to Recording and Statistics service
• Agent desktop to Recording and Statistics service
Table 8-7 and Table 8-8 display the percentage of total bandwidth available that is required for
simultaneous monitoring sessions handled by a single Desktop Monitor service.
The following notes also apply to the bandwidth requirements for the Desktop Monitor service shown
in Table 8-7 and Table 8-8:
• The bandwidth values are calculated based on the best speed of the indicated connections. A
connection's true speed can differ from the maximum stated due to various other factors.
• The bandwidth requirements are based on upload speed. Download speed affects only the incoming
stream for the IP phone call.
• The data represents the codecs without silence suppression. With silence suppression, the amount
of bandwidth used may be lower.
• The data shown does not address the quality of the speech of the monitored call. If the bandwidth
requirements approach the total bandwidth available and other applications must share access to the
network, latency (packet delay) of the voice packets can affect the quality of the monitored speech.
However, latency does not affect the quality of recorded speech.
• The data represents only the bandwidth required for monitoring and recording. It does not include
the bandwidth requirements for other Cisco Agent Desktop modules as outlined in other sections of
this document.
Table 8-7 Percentage of Available Upload Bandwidth Required for Simultaneous Monitoring Sessions with G.711 Codec
and No Silence Suppression
Table 8-8 Percentage of Available Upload Bandwidth Required for Simultaneous Monitoring Sessions with G.729 Codec
and No Silence Suppression
The following notes apply to the bandwidth requirements for the VoIP Monitor service, as listed in
Table 8-9 and Table 8-10:
• Because the VoIP Monitor service is designed to handle a larger load, the number of monitoring
sessions is higher than for the Desktop Monitor service.
• The bandwidth requirements are based on upload speed. Download speed affects only the incoming
stream for the IP phone call.
• Some of the slower connection speeds are not shown in Table 8-9 and Table 8-10 because they are
not supported for a VoIP Monitor service.
• The values in Table 8-9 and Table 8-10 are calculated based on the best speed of the indicated
connections. A connection's true speed can differ from the maximum stated due to various other
factors.
• The data represents the codecs without silence suppression. With silence suppression, the amount
of bandwidth used may be lower.
• The data shown does not address the quality of the speech of the monitored call. If the bandwidth
requirements approach the total bandwidth available and other applications must share access to the
network, latency (packet delay) of the voice packets can affect the quality of the monitored speech.
However, latency does not affect the quality of recorded speech.
• The data represents only the bandwidth required for monitoring and recording. It does not include
the bandwidth requirements for other Cisco Agent Desktop modules as outlined in other sections of
this document.
Table 8-9 Percentage of Available Upload Bandwidth Required for Simultaneous Monitoring
Sessions with G.711 Codec and No Silence Suppression
Table 8-10 Percentage of Available Upload Bandwidth Required for Simultaneous Monitoring
Sessions with G.729 Codec and No Silence Suppression
Bandwidth Requirements for Cisco Supervisor Desktop to Cisco Desktop Base Services
In addition to the bandwidth requirements discussed in the preceding sections, there is traffic from Cisco
Supervisor Desktop to the Cisco Agent Desktop Base Services.
For each agent on the supervisor's team, approximately 2 kilobytes (kB) of data per call are sent
between Cisco Supervisor Desktop and the Chat service, as shown in Table 8-11.
Table 8-11 Cisco Supervisor Desktop Bandwidth for a Typical Agent Call
The same typical call scenario was used to capture bandwidth measurements for both Cisco Agent
Desktop and Cisco Supervisor Desktop. See Typical Call Scenario, page 8-15, for more details.
If there are 10 agents on the supervisor's team and each agent takes 20 calls an hour, the traffic is:
10 agents ∗ 20 calls per hour = 200 calls per hour
200 calls ∗ 1650 bytes per call = 330,000 bytes per hour
(330,000 bytes per hour) / (3600 seconds per hour) = 92 bytes per second (Bps)
92 Bps ∗ 8 bits per byte = 733 bits per second (bps).
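The same arithmetic as a sketch, using the 1,650 bytes-per-call figure from Table 8-11:

```python
def chat_traffic_bps(agents, calls_per_agent_per_hour, bytes_per_call=1650):
    """Average Supervisor Desktop-to-Chat-service load in bytes/second."""
    return agents * calls_per_agent_per_hour * bytes_per_call / 3600

bps = chat_traffic_bps(10, 20)
print(f"{bps:.0f} Bps = {bps * 8:.0f} bps")  # 92 Bps = 733 bps
```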
There is additional traffic sent if the supervisor is viewing reports or if a silent monitor session is started
or stopped.
Table 8-12 Bandwidth Usage for Agent Detail Report (Average Bytes per Report)
Table 8-13 Bandwidth Usage for Team Agent Statistics Report (Average Bytes per Report)
Table 8-14 Bandwidth Usage for Team Skill Statistics Report (Average Bytes per Report)
Bandwidth for a supervisor viewing the Team Skill Statistics Report with 10 skills in the team is:
250 bytes per skill ∗ 10 skills ∗ 2 requests per minute = 5000 bytes per minute
(5000 bytes per minute) / (60 seconds per minute) = 83 bytes per second (Bps)
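As a sketch of the report-bandwidth calculation; 250 bytes per skill and two requests per minute are the figures used above:

```python
def report_load_bps(bytes_per_skill, skills, requests_per_minute):
    """Average bytes/second for a periodically refreshed report."""
    return bytes_per_skill * skills * requests_per_minute / 60

print(round(report_load_bps(250, 10, 2)))  # 83
```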
Table 8-15 Bandwidth Usage to Start or Stop Silent Monitoring (Average Bytes per Request)
For multiple remote locations, each remote location must have a VoIP Monitor service. Multiple VoIP
Monitor services are supported in a single logical contact center. The Recording and Statistics service
can be moved to the central location if the WAN connections are able to handle the traffic. If not, each
site should have its own logical contact center and installation of the Cisco Desktop software.
The traffic from Cisco Agent Desktop to and from the Chat service (agent information, call status) is
less critical and should be classified as AF21 or AF11.
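As an illustration, the interim recommendation to reserve both markings for call signaling can be
expressed with the Cisco IOS Modular QoS CLI. This is a hedged sketch, not a tested configuration; the
class and policy names and the bandwidth percentage are hypothetical placeholders:

```
! Match signaling marked either AF31 (legacy) or CS3 (new) during the transition
class-map match-any CALL-SIGNALING
 match ip dscp af31
 match ip dscp cs3
!
policy-map WAN-EDGE
 class CALL-SIGNALING
  ! Example allocation only; size this for your own network
  bandwidth percent 5
```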
Integrating Cisco Agent Desktop Release 6.0 into a Citrix Thin-Client Environment
For guidance on installing Cisco Agent Desktop Release 6.0 applications in a Citrix thin-client
environment, refer to the documentation for Integrating CAD 6.0 into a Citrix Thin-Client Environment,
available at
http://www.cisco.com/application/pdf/en/us/partner/products/ps427/c1244/cdccont_0900aecd800e9db4.pdf
This chapter describes the importance of securing the IPCC solution and points to the various security
resources available. It includes the following sections:
• Introduction to Security, page 9-1
• Security Best Practices, page 9-2
• Patch Management, page 9-3
• Antivirus, page 9-4
• Cisco Security Agent, page 9-5
• Firewalls and IPSec, page 9-6
• Security Features in Cisco CallManager Release 4.0, page 9-8
Introduction to Security
Achieving IPCC system security requires an effective security policy that accurately defines access,
connection requirements, and systems management within your contact center. Once you have a good
security policy, you can use many state-of-the-art Cisco technologies and products to protect your data
center resources from internal and external threats and to ensure data privacy, integrity, and system
availability.
Cisco has developed a set of documents with detailed design and implementation guidance for various
Cisco networking solutions in order to assist enterprise customers in building an efficient, secure,
reliable, and scalable network. These Solution Reference Network Design (SRND) guides, which can be
found at http://www.cisco.com/go/srnd, provide proven best practices to build out a network
infrastructure based on the Cisco Architecture for Voice, Video, and Integrated Data (AVVID). Among
them are the following documents on security and IP telephony, which should be used to deploy an IPCC
network successfully. Updates and additions are posted periodically, so visit the site frequently.
• IP Telephony SRND for Cisco CallManager 3.3
• IP Telephony SRND for Cisco CallManager 4.0
• Data Center Networking: Securing Server Farms SRND
• Data Center Networking: Integrating Security, Load Balancing, and SSL Services
An adequately secure IPCC configuration requires a multi-layered approach to protecting systems from
targeted attacks and the propagation of viruses. A first approach is to ensure that the servers hosting the
Cisco contact center applications are physically secure. They must be located in data centers to which
only authorized personnel have access. The next level of protection is to ensure the servers are running
antivirus applications with the latest virus definition files and are kept up-to-date with Microsoft and
other third-party security patches. The servers may be hardened according to the guidelines provided in
the security best-practices guides applicable to your release of the application.
Another level of security is the network segmentation of the servers. None of the IPCC servers is meant
to be deployed as an Internet-facing system or bastion host (the only exception being the Web
Collaboration option, if deployed). While desktop-based applications such as the CTI OS, Cisco Agent
Desktop, or Cisco Supervisor Desktop tend to be deployed in open corporate VLANs, servers making
up the IPCC solution should be placed in the data center behind a secure network. In cases where the
servers are geographically distributed, proper care should be taken to ensure the network links are
secure.
Patch Management
A document providing information for tracking Cisco-supported operating system files, SQL Server, and
security files is available at
http://www.cisco.com/univercd/cc/td/doc/product/voice/c_callmg/osbios.htm
This document also provides Cisco recommendations for applying software updates (Cisco
CallManager, IP IVR, and ISN only).
The Security Patch and Hotfix Policy for Cisco CallManager specifies that any applicable patch deemed
Severity 1 or Critical must be tested and posted to http://www.cisco.com within 24 hours as Hotfixes.
All applicable patches are consolidated and posted once per month as incremental Service Releases.
A notification tool (email service) for providing automatic notification of new fixes, OS updates, and
patches for Cisco CallManager and associated products is available at
http://www.cisco.com/cgi-bin/Software/Newsbuilder/Builder/VOICE.cgi
Antivirus
Applications Supported
A number of third-party antivirus applications are supported for the IPCC system. For a list of
applications and versions supported on your particular release of the IPCC software, refer to the ICM
platform hardware specifications and related software compatibility data listed in the Cisco Intelligent
Contact Management (ICM) Bill of Materials and the Cisco CallManager product documentation
(available at http://www.cisco.com).
Note Deploy only the applications supported for your environment; otherwise, a software conflict might
arise, especially when an application such as the Cisco Security Agent is installed on the IPCC systems.
(See
Cisco Security Agent, page 9-5.)
Configuration Guidelines
Antivirus applications have numerous configuration options that allow very granular control of what and
how data should be scanned on a server.
With any antivirus product, configuration is a balance of scanning versus the performance of the server.
The more you choose to scan, the greater the potential performance overhead. The role of the system
administrator is to determine what the optimal configuration requirements will be for installing an
antivirus application within a particular environment. Refer to the security best-practices guide and your
particular antivirus product documentation for more detailed configuration information on an ICM
environment.
The following list highlights some general best practices:
• Upgrade to the latest supported version of the third-party antivirus application. Newer versions
improve scanning speed over previous versions, resulting in lower overhead on servers.
• Avoid scanning of any files accessed from remote drives (such as network mappings or UNC
connections). Where possible, each of these remote machines should have its own antivirus software
installed, thus keeping all scanning local. With a multi-tiered antivirus strategy, scanning across the
network and adding to the network load should not be required.
• Due to the higher scanning overhead of heuristics scanning over traditional antivirus scanning, use
this advanced scanning option only at key points of data entry from untrusted networks (such as
email and Internet gateways).
• Real-time or on-access scanning can be enabled, but only on incoming files (when writing to disk).
This is the default setting for most antivirus applications. Implementing on-access scanning on file
reads will yield a higher impact on system resources than necessary in a high-performance
application environment.
• While on-demand and real-time scanning of all files gives optimum protection, this configuration
does have the overhead of scanning those files that cannot support malicious code (for example,
ASCII text files). Cisco recommends excluding files or directories of files, in all scanning modes,
that are known to present no risk to the system. Also, follow the recommendations for which specific
ICM files to exclude in an ICM or IPCC implementation, as provided in the Security Best Practices
for Cisco Intelligent Contact Management Software available at
http://www.cisco.com/en/US/partner/products/sw/custcosw/ps1001/prod_technical_reference_list.html
• Schedule regular disk scans only during periods of low usage, when application activity is lowest.
To determine when application purge activity is scheduled, refer to the Security Best Practices
guides listed in the previous item.
Guidelines for configuring antivirus applications for Cisco CallManager are available at the following
locations:
• http://cisco.com/en/US/partner/products/sw/voicesw/ps556/products_implementation_design_guides_list.html
• http://cisco.com/en/US/partner/products/sw/voicesw/ps556/products_user_guide_list.html
• Managed mode — An XML export file, specific to the agent and compatible with each voice
application in the deployed solution, can be downloaded from the same location and imported into
an existing CiscoWorks Management Center for Cisco Security Agents, part of the CiscoWorks
VPN/Security Management Solution (VMS) bundle.
The advanced CiscoWorks Management Center for Cisco Security Agents incorporates all management
functions for agents in core management software that provides a centralized means of defining and
distributing policies, providing software updates, and maintaining communications to the agents. Its
role-based, web browser "manage from anywhere" access makes it easy for administrators to control
thousands of agents per MC. The agent software is available at the following locations:
• Cisco ICM, IPCC Enterprise, and Internet Service Node (ISN) Agents, available at
http://www.cisco.com/kobayashi/sw-center/contact_center/csa/
• Other agents, available at
http://www.cisco.com/kobayashi/sw-center/sw-voice.shtml
Firewalls
Deploying the application in an environment where firewalls are in place requires the network
administrator to be knowledgeable of which TCP/UDP IP ports are used. For an inventory of all the ports
used across the contact center suite of applications for the most widely deployed versions of Cisco
products, refer to the Cisco Contact Center Product Port Utilization Guides available at
http://www.cisco.com/univercd/cc/td/doc/product/icm/port_uti/
Note Outbound Option Dialers and Cisco CallManager servers must not be segmented through a PIX firewall.
For details, refer to the Release Notes for the Cisco Secure PIX Firewall, available at
http://www.cisco.com/en/US/products/sw/secursw/ps2120/prod_release_notes_list.html.
IPSec
A multi-site IPCC network implementation implies a distributed model with the WAN connection secured via IPSec
tunnels. The testing undertaken in this release was limited to configuration of Cisco IOS™ IPSec in
Tunnel Mode, which means that only the Cisco IP Routers (IPSec peers) between the two sites were part
of the secure channel establishment. All data traffic is encrypted across the WAN link but unencrypted
on the local area networks. In tunnel mode, traffic flow confidentiality is ensured between IPSec peers
which, in this case, are the IOS Routers connecting a central site to a remote site.
The qualified specifications for the IPSec configuration are as follows:
• HMAC-SHA1 Authentication (ESP-SHA-HMAC)
• 3DES Encryption (ESP-3DES)
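For illustration, the qualified transform set corresponds to a Cisco IOS site-to-site configuration
along these lines. This is a minimal sketch only; the peer address, pre-shared key placeholder, map and
transform-set names, ACL number, and interface are all hypothetical:

```
crypto isakmp policy 10
 encryption 3des
 hash sha
 authentication pre-share
crypto isakmp key <pre-shared-key> address 192.0.2.2
!
! ESP-3DES with ESP-SHA-HMAC, as qualified, in tunnel mode between the IOS router peers
crypto ipsec transform-set IPCC-SET esp-3des esp-sha-hmac
 mode tunnel
!
crypto map IPCC-MAP 10 ipsec-isakmp
 set peer 192.0.2.2
 set transform-set IPCC-SET
 ! ACL 110 would select the site-to-site IPCC traffic to be encrypted
 match address 110
!
interface Serial0/0
 crypto map IPCC-MAP
```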
Cisco recommends that you use hardware encryption to avoid a significant increase in IP Router CPU
overhead and throughput impact. There are also some latency implications, so it is important to size the
network infrastructure (network hardware and physical links) accordingly. There are also considerations
that must be taken into account for QoS networks. The common recommendation is to classify and apply
QoS features based on packet header information before traffic is tunnel encapsulated and/or encrypted.
More detailed information on Cisco IOS IPSec functionality is available at
http://www.cisco.com/go/ipsec
Note The IPSec NAT Transparency feature introduces support for IP Security (IPSec) traffic to travel through
Network Address Translation (NAT) or Port Address Translation (PAT) points in the network by
addressing many known incompatibilities between NAT and IPSec. NAT Traversal is a feature that is
auto-detected by VPN devices. There are no configuration steps for a router running Cisco IOS Software
Release 12.2(13)T and above. If both VPN devices are NAT-T capable, NAT Traversal is auto-detected
and auto-negotiated.
Media Encryption
Media encryption is currently supported only on the Cisco 7970 IP Phones, which are not supported in
an IPCC environment. If Cisco 7970 IP Phones are deployed as part of your IPCC solution with Cisco's
permission, features such as silent monitoring and recording will not be available for any agents who are
equipped with this model of IP Phone.
Phone Settings
The Cisco IP Phone device configuration in Cisco CallManager provides the ability to disable the
phone's PC port as well as to restrict a PC's access to the voice VLAN. Changing these default settings
to disable PC access will also disable the monitoring feature of the IPCC solution. The settings are
defined as follows:
• PC Port
– Indicates whether the PC port on the phone is enabled or disabled. The port labeled “10/100 PC”
on the back of the phone connects a PC or workstation to the phone so that they can share a
single network connection.
– This is a required field.
– Default: Enabled.
• PC Voice VLAN Access
– Indicates whether the phone will allow a device attached to the PC port to access the Voice
VLAN. Disabling Voice VLAN Access will prevent the attached PC from sending and receiving
data on the Voice VLAN. It will also prevent the PC from receiving data sent and received by
the phone. Set this setting to Enabled if an application is being run on the PC that requires
monitoring of the phone’s traffic. This could include monitoring and recording applications and
use of network monitoring software for analysis purposes.
– This is a required field.
– Default: Enabled.
Numerics
3DES Triple Data Encryption Standard
A
ACD Automatic call distribution
AW Administrative Workstation
B
BBWC Battery-backed write cache
C
CAD Cisco Agent Desktop
CC Central Controller
CG CTI gateway
D
DCA Dynamic Content Adapter
DN Directory number
E
ECC Extended Call Context
H
HA WAN Highly available WAN
I
ICC Intra-cluster communications
IP Internet Protocol
J
JTAPI Java Telephony Application Programming Interface
K
kb Kilobits
kB Kilobytes
L
LAMBDA Load Adaptive Message-Base Data Archive
M
MAC Media Access Control
MC Management center
MR Media Routing
N
NAT Network Address Translation
O
OPC Open Peripheral Controller
OS Object Server
P
PAT Port Address Translation
PG Peripheral Gateway
Q
QoS Quality of Service
R
RAID Redundant array of inexpensive disks
S
S1, S2, S3, and S4 Severity levels for service requests
T
TAC Cisco Technical Assistance Center
TTS Text-to-speech
U
UDP User Datagram Protocol
UI User interface
V
V3PN Cisco Voice and Video Enabled Virtual Private Network
W
WAN Wide area network
X
XML Extensible Markup Language
Numerics
802.1q 8-9

A
abandoned calls 4-10
abbreviations GL-1
ACD integration 2-34
acronyms GL-1
active time 4-2
additional information xi, xiv
Administrative Workstation (AW) 1-7, 5-5
Admin Workstation ConAPI Interface 3-15
admission control 1-21
after-call work time 4-2, 4-8
Agent Desktop
  bandwidth requirements 8-13
  Base Services 5-13
  Cisco Agent Desktop 7-3
  CTI OS Toolkit 7-6
  described 1-6, 7-1
  details 7-3
  Recording and Playback Service 5-13
  redundancy 3-39
  required servers 5-3, 5-4
  settings 1-11
  sizing 5-12
  types 7-2
  VoIP Monitor Service 5-13
Agent Detail Report 8-22
Agent PG (APG) 5-6, 5-10, 5-11
Agent Reporting and Management (ARM) 3-16
agents
  average call time 4-2
  general 1-12
  login 1-12
  manually entering the number of 4-8
  number of 5-9
  recommended number 4-8
  settings 1-11
  shrinkage 4-25
  sizing 4-5, 4-12
  staffing requirements 4-25
  state change 8-14
  talk time 4-2
  transfers between 1-20
  utilization 4-9
  wrap-up time 4-2, 4-14
agent-to-agent transfers 1-20
AHT 4-2
alternate between calls 1-19
answered calls 4-9
antivirus applications 9-4
APG 5-6, 5-10, 5-11
AQT 4-9
architecture overview 1-1, 8-2
ARM 3-16
ASA 4-9
assistance, obtaining xiii
automatic call distribution (ACD) 2-34
availability of functions and features 3-1
average
  after-call work time 4-8
  call duration 4-9
  call talk time 4-8

B
balancing server loads 6-13
bandwidth
  call scenario 8-15
  for call control 8-13
  for Cisco Agent Desktop 8-13
  for clustering over the WAN 2-22
  for CTI OS Agent Desktop 8-11
  for Desktop Monitor 8-18
  for monitoring services 8-17
  for private network 8-10
  for public (visible) network 8-11
  for silent monitoring 8-16
  for Supervisor Desktop 8-21
  for VoIP Monitor 8-20
  latency requirements 8-7
  provisioning 8-1
  sizing 8-10
Base Services for Cisco Agent Desktop 5-13
best practices
  CTI OS bandwidth 8-12
  security 9-2
  sizing call center resources 4-25
BHCA 4-2, 4-7, 5-9
BHCC 4-9
BHT 4-3
blended agent option 2-31
blind transfer 1-17
blockage of calls 4-3, 4-8
broadband 2-27
bugs, reporting xiii
Business Ready Teleworker 2-30

C
calculators
  for call center resources 4-6
  for Erlang values 4-4
call admission control 1-21
call center terminology 4-1
call control 8-13
CallManager (see Cisco CallManager)
call processing
  centralized 2-4
  distributed 2-9, 2-14
  redundancy 6-9
  with IPCC 6-1
calls
  abandoned 4-10
  alternate 1-19
  answered 4-9
  blockage 4-3, 4-8
  completed 4-9
  duration 4-9
  flow 1-4
  high-priority 4-17, 4-21
  normal 4-21
  per interval 4-7
  queuing 1-13, 1-15, 4-9
  queuing on IP IVR 2-3, 2-6, 2-9, 2-10, 2-17
  queuing on ISN 2-3, 2-6, 2-9, 2-12, 2-18, 2-19
  routing 1-12
  self-service 4-20, 4-21
  timeline 4-4
  transferring 1-16
  treatment 2-10, 2-12, 2-17, 2-18, 2-19, 4-10, 4-13
  treatment time 4-8
  wrap-up time 4-14
capacity
  of server platforms 6-4, 6-5
  planning tool 6-5
CCMCT 6-5
centralized call processing 2-4
CIPT OS 9-3
Cisco.com xii
Cisco Agent Desktop (see Agent Desktop)
Cisco CallManager
  Capacity Tool (CCMCT) 6-5
  described 1-1
  failover 3-22, 3-31
  high availability 3-7
  recovery 3-31
  redundancy 3-7
  releases 6-3, 6-4
  security 9-8
  server capacity 6-4, 6-5
  sizing servers 6-1
  supported server platforms 6-7
  with IP IVR 3-13
Cisco Product Identification Tool xiii
Cisco Resource Manager 5-10
Cisco Security Agent 9-5
Cisco Supervisor Desktop (see Supervisor Desktop)
Cisco Technical Assistance Center (TAC) xiii
Citrix thin-client environment 8-25
classifying traffic 8-10
clients for routing 1-10, 1-11
clustering over the WAN
  described 2-15
  failover scenarios 2-26, 3-28
clusters
  guidelines 6-2
  redundancy 6-10
Collaboration Server 3-15, 3-18
combination transfers 1-20
combining IPCC and IP Telephony 1-15
Combo Box 4-20
components of IPCC 1-6, 5-1
computer telephony integration (see CTI)
conferences, transfers of 1-21
Configuration Manager 1-7
consultative transfer 1-18
Content Server Switches (CSS) 1-2
CSS 1-2
CTI
  Manager 3-7, 3-10, 3-32
  Object Server (see CTI OS)
  Server 1-5, 3-37
CTI OS
  Agent Desktop 8-11
  architecture 7-6
  failover 3-38
  server sizing 5-11
  Toolkit 7-6
customer support xiii

D
databases 5-10
data network 3-5
DCA 5-7
demilitarized zone (DMZ) 3-18
deployment models
  clustering over the WAN 2-15
  described 2-1
  multi-site with centralized call processing 2-4
  multi-site with distributed call processing 2-9
  single site 2-2
design tools 4-4
Desktop Monitor 8-17, 8-18
devices
  authentication 9-8
  targets 1-10

E
ECC 5-10
Email Manager 3-15, 3-17, 5-7
encryption 9-8
Erlang
  calculations 4-4, 4-5
  defined 4-2
export 4-10
Extended Call Context (ECC) 5-10
extensions for IPCC and IP Telephony on same phone 1-15

F
factors to consider for sizing 5-9
failover
  Cisco CallManager 3-22
  clustering over the WAN 2-26, 3-28

H
H.323 3-15
hardware configurations 5-8
HA WAN 2-15, 2-22, 2-26
HDS 3-39, 5-5
heartbeat 8-4, 8-13
high availability 3-1
highly available (HA) WAN 2-15, 2-22, 2-26
high-priority calls 4-17, 4-21
Historical Data Server (HDS) 3-39, 5-5
history of revisions xi
hybrid IP Telephony and IPCC system 1-15

I
ICC 2-15, 2-23
ICM
  Central Controller 1-5
  components 1-5
  described 1-3
  distributed 2-14
  failover recovery 3-32
  failover scenarios 3-23
  IP IVR redundancy 3-13
  private communications 2-20
  redundancy 3-10
  routing clients 1-10
  software modules 1-5
infrastructure of the network 3-5, 8-2
installation of Windows 2000 9-2, 9-3
integration
  with ACD 2-34
  with IVR 2-35
Intelligent Contact Management (see ICM)
Interactive Voice Response (see IVR)
interfaces, SCI 1-3
Internet Service Node (see ISN)
intra-cluster communications (ICC) 2-15, 2-23
IN transfer 1-2
IPCC
  agent desktop 7-2
  architecture 1-1
  call flows 1-4
  clusters 6-2
  combined with IP Telephony 1-15
  components 1-6, 5-1
  component sizing 5-5
  current release xi
  extensions 1-15
  message flows 1-4
  minimum hardware configurations 5-8
  Outbound Option 3-19
  overview 1-1
  security 9-1
  supervisor desktop 7-2
IP Communications (IPC) Resource Calculator 4-6
IP IVR
  described 1-3
  failover recovery 3-32
  high availability 3-11
  ports 4-12
  redundancy 3-11
  with Cisco CallManager 3-13
  with ICM 3-13
IPPA 7-6
IP Phone Agent (IPPA) 7-6
IPSec 9-6
IP Security (IPSec) 9-6
IP switching 1-2
IP Telephony
  combined with IPCC 1-15
  extensions 1-15
ISN
  Application Server 3-14, 4-23
  call treatment 2-12, 2-18, 2-19
  Combo Box 4-20
  described 1-2
  design considerations 3-13
  licenses 4-23
  Media Server 3-15
  queuing of calls 2-12, 2-18, 2-19
  server capacities 4-20
  simplified sizing method 4-23
  sizing call center resources 4-15
  sizing components 4-20
  sizing servers 5-8
  Voice Browser 3-14, 4-23
IVR
  call treatment 2-10, 2-17
  integration 2-35
  ports 4-10

K

L
login 1-12

M
manually entering the number of agents 4-8

N

O
OPC 1-5
Open Peripheral Controller (OPC) 1-5

S
SCCP 2-6

T
traffic
  classification 8-10
  flow 8-6, 8-11, 8-16
  in a busy hour 4-3
  marking 8-8, 8-24
  prioritization 8-5
  types of 8-2
transfer connect 1-21
transfers
  agent-to-agent 1-20
  alternate 1-19
  blind 1-17
  conferenced calls 1-21
  consultative 1-18
  described 1-16
  IVR to agent 1-20
  multiple 1-20
  multi-site deployments with centralized call processing 2-6, 2-9
  non-ICM 1-19
  reconnect 1-19
  reporting 1-20
  single-site deployments 2-3
  single step 1-17
  using Cisco CallManager 2-38
  using PBX 2-35
  using PSTN 1-21, 2-36
Translation Route to VRU 1-13
translation routing 1-13
Transmission Control Protocol (TCP) 8-4
treatment of calls 2-10, 2-12, 2-17, 2-18, 2-19, 4-13
treatment time 4-8
trunks
  double trunking 2-37, 2-38
  required number 4-10
  sizing 4-5, 4-13
trust 8-9
tunneling 9-6
types of dial plans 1-16

U
UDP 8-4
User Datagram Protocol (UDP) 8-4

V
V3PN 2-27
versions of software xi, 6-3, 6-4
visible network 8-3, 8-6, 8-8, 8-11
VLAN 9-8
Voice and Video Enabled IPSec VPN (V3PN) 2-27
Voice Browser 4-23, 5-8
voice gateways
  centralized 2-4, 2-17, 2-18
  distributed 2-7, 2-10, 2-12, 2-19
  functions 3-14
  ports 4-13
Voice Response Unit (VRU) 1-13, 3-33, 5-6, 5-8, 5-9
voice VLAN 9-8
Voice XML (VXML) 4-24
VoIP Monitor 8-17, 8-20
VoIP Monitor Service for Cisco Agent Desktop 5-13
VRU 1-13, 3-33, 5-6, 5-8, 5-9
VXML 4-24

W
wait before abandon 4-8
WAN
  clustering 2-15
  highly available 2-15, 2-22, 2-26
  private 2-23
Web Collaboration Server 5-7
WebView Reporting Server 5-5
Windows 2000 Server installation 9-2, 9-3
wrap-up time 4-2, 4-14