
Guide

Getting Started with Cisco


Application Centric Infrastructure
(ACI) in the Small-to-Midsize
Commercial Data Center
Cisco Validated Design
January 2015

2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information.

Page 1 of 49

Contents
Introduction .............................................................................................................................................................. 4
What You Will Learn ............................................................................................................................................. 4
Prerequisites ......................................................................................................................................................... 4
Audience ............................................................................................................................................................... 4
Disclaimer ............................................................................................................................................................. 4
Why Implement ACI ................................................................................................................................................. 4
What Problems Are We Solving? .......................................................................................................................... 4
ACI for the Commercial Data Center .................................................................................................................... 4
Converting Cisco Nexus 9000 NX-OS Mode to ACI Mode ................................................................................... 6
Data Center Design Evolution ................................................................................................................................ 6
Traditional Data Center Design ............................................................................................................................. 6
Commercial Collapsed Designs ....................................................................................................................... 7
Layer 2 Versus Layer 3 Implications ................................................................................................................ 8
The Cisco Layer 2 Design Evolution ..................................................................................................................... 9
Virtual Port Channels ....................................................................................................................................... 9
Server Network Interface Card (NIC) Teaming Design and Configuration ..................................................... 10
Virtual Overlays .............................................................................................................................................. 11
Spine-Leaf Data Center Design .......................................................................................................................... 12
Overlay Design ................................................................................................................................................... 13
ACI Fabric Overlay .............................................................................................................................................. 14
Sample Commercial Topologies .......................................................................................................................... 15
Cisco Nexus 9500 Product Line .......................................................................................................................... 15
Cisco Nexus 9300 Product Line .......................................................................................................................... 16
Design A: Two Spines and Two Leaves ............................................................................................................. 17
Design B: Two Spines and Four Access Leaves ................................................................................................ 19
Design C: Four Aggregation and Four Access Switches - Spine-Leaf ................................................................ 19
Integration into Existing Networks ....................................................................................................................... 20
Fabric Extender Support ..................................................................................................................................... 20
Storage Design ................................................................................................................................................... 22
Layer 4 - 7 Integration ......................................................................................................................................... 24
End-State Topology ............................................................................................................................................ 24
Example ACI Design and Configuration .............................................................................................................. 24
Validated ACI Physical Topology ........................................................................................................................ 25
Validated ACI Logical Topology .......................................................................................................................... 25
ACI Tenant Tab Object Review........................................................................................................................... 26
Tenants .......................................................................................................................................................... 26
Private Networks ............................................................................................................................................ 26
Bridge Domains .............................................................................................................................................. 27
Application Profiles ......................................................................................................................................... 27
Endpoint Groups (EPGs) ................................................................................................................................ 27
Contracts ........................................................................................................................................................ 27
Domains ......................................................................................................................................................... 27
Tenant Tab End-State Configuration Snapshot .................................................................................................. 28
SharePoint Application Profile Policy View ..................................................................................................... 28
EPG Domain Association ............................................................................................................................... 30
Verifying Discovered Endpoints in an EPG .................................................................................................... 31
Tenant Networking ......................................................................................................................................... 32
Policy Enforcement through Contracts ........................................................................................................... 34
External Routed Networks .............................................................................................................................. 35
End-State Tenant Tab Configuration .............................................................................................................. 39
ACI Fabric Tab - Fabric Policies ......................................................................................................................... 39
Enabling External Routing in the Fabric ......................................................................................................... 39

ACI Fabric Tab - Access Policies Object Review ................................................................................................ 41


Physical and External Domains ...................................................................................................................... 41
Pools .............................................................................................................................................................. 41
Global Policies................................................................................................................................................ 41
Interface Policies ............................................................................................................................................ 41
Switch Policies ............................................................................................................................................ 42
Fabric Tab End-State Configuration Snapshot.................................................................................................... 42
Switch Profile Creation ................................................................................................................................... 42
Domain and VLAN Pool Creation and Association ......................................................................................... 44
AEP to Domain Association ........................................................................................................................... 45
Interface Policies ............................................................................................................................................ 46
Sample Configuration Output .............................................................................................................................. 49
Conclusion ............................................................................................................................................................. 49


Introduction
What You Will Learn
Cisco Application Centric Infrastructure (ACI) can easily be deployed and managed in any size commercial data
center, even with an IT staff of one. You will learn the value of ACI for the commercial data center, and understand
how a multi-tier application is built using the ACI policy model. This white paper will show you a real, Cisco-validated ACI topology and walk you through the components of a complete, multi-tier application deployment.
Several ACI features will also be highlighted.

Prerequisites
You should have a basic understanding of ACI and the policy model. However, brief concept reviews and links to
other resources are included in this document if you are unfamiliar with a concept.

Audience
This white paper is intended for sales engineers, field consultants, professional services, IT managers, partner
engineers, and customers who want to take advantage of an infrastructure built to deliver IT efficiency and support
rapid application deployment. It is intended to benefit small-to-midsize commercial data centers.

Disclaimer
Always refer to the Cisco ACI website for the most recent information on software versions, supported
configuration maximums, and device specifications, as information may evolve from the time of publication of this
paper.

Why Implement ACI


What Problems Are We Solving?
Application deployment is a slow, arduous process often hindered by difficulty translating application and business
requirements to infrastructure configuration. Application teams and network teams speak different languages. The
network needs to understand the needs of the application, but often there is no one to translate.
Network teams are pressed to quickly address these needs, configuring multiple devices through multiple interfaces whenever a new application is deployed or modified. Modern applications also place demands on the network that it was not originally designed to meet.

ACI for the Commercial Data Center


The future of networking with Cisco ACI is about providing a network that is deployed, monitored, and managed in
a fashion that supports rapid application change. ACI does so through the reduction of complexity and a common
policy framework that can automate provisioning and managing of resources.
Cisco ACI works to solve the business problem of slow application deployment caused by a primary focus on technical network provisioning and change-management processes. It does so by helping to enable rapid deployment of applications to meet changing business demands. ACI provides an integrated approach with application-centric, end-to-end visibility from the software overlay down to the physical switching infrastructure. It also accelerates and optimizes Layer 4 - 7 service insertion to build a system that brings the language of applications to the network.
ACI delivers automation, programmability, and centralized provisioning by allowing the network to be automated
and configured based on business-level application requirements.


ACI provides accelerated, cohesive deployment of applications across network and Layer 4 - 7 infrastructure, and
can enable visibility and management at the application level. Advanced telemetry for visibility into network health
and simplified day-two operations also open up troubleshooting to the application itself. ACI's diverse and open
ecosystem is designed to plug into any upper-level management or orchestration system and attract a broad
community of developers. Integration and automation of both Cisco and third-party Layer 4 - 7 virtual and physical
service devices offers a single tool to manage the entire application environment.
With ACI mode customers can deploy the network based on application requirements in the form of policies,
removing the need to translate to the complexity of current network constraints. In tandem, ACI helps ensure
security and performance while maintaining complete visibility into application health on both virtual and physical
resources.
Figure 1 highlights how network communication might be defined for a three-tier application in the ACI GUI. The network is defined in terms of the needs of the application by mapping out who is allowed to talk to
whom, and what they are allowed to talk about. It does this by defining a set of policies, known as contracts, inside
an application profile, instead of configuring lines and lines of command-line interface (CLI) code on multiple
switches, routers, and appliances.
This policy model is configured centrally from a cluster of controllers called Cisco Application Policy Infrastructure
Controllers (APICs) and is pushed out to all Cisco Nexus 9000 Series Switches in the ACI fabric. All configuration
is performed through the APIC API (through the GUI, scripting, etc.). No switch is configured by the end user,
allowing rapid application deployment.
Figure 1. Sample Three-Tier Application Policy
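Because every configuration action goes through the APIC API, the same kind of policy shown in Figure 1 can be scripted. The sketch below builds the JSON payload a script might POST to create an endpoint group under an application profile; the tenant, profile, and EPG names, and the APIC address, are purely illustrative, and the HTTP request itself is shown commented out rather than executed.

```python
import json

APIC_URL = "https://apic.example.com"  # hypothetical APIC address


def build_epg_policy(tenant: str, app_profile: str, epg: str) -> dict:
    """Build the JSON body that a script might POST to the APIC REST API
    to create an endpoint group (EPG) under an application profile."""
    return {
        "fvAEPg": {
            "attributes": {
                "name": epg,
                # The distinguished name encodes the tenant -> application
                # profile -> EPG object hierarchy.
                "dn": f"uni/tn-{tenant}/ap-{app_profile}/epg-{epg}",
            }
        }
    }


policy = build_epg_policy("Commercial", "SharePoint", "Web")
print(json.dumps(policy, indent=2))

# An authenticated session would then push the payload, for example:
# requests.post(f"{APIC_URL}/api/mo/uni/tn-Commercial/ap-SharePoint/epg-Web.json",
#               json=policy, cookies=session_cookies, verify=False)
```

The same pattern applies to every object discussed later in this paper (tenants, bridge domains, contracts): each is a JSON or XML object pushed to the APIC, which then programs every switch in the fabric.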


Converting Cisco Nexus 9000 NX-OS Mode to ACI Mode


This white paper will feature Cisco Nexus 9000 Series Switches in ACI mode. Cisco Nexus 9300 switches and
many Cisco Nexus 9500 line cards can be converted to ACI mode.
Cisco Nexus 9000 Series Switches are the foundation of the ACI architecture, and provide the network fabric. A
new operating system is used by Cisco Nexus 9000 switches running in ACI mode. The switches are then coupled
with a centralized controller, the APIC, and its open API. The APIC is the unifying point of automation, telemetry,
and management for the ACI fabric, helping to enable an application policy model approach to the data center.
Conversion from standalone Cisco NX-OS mode to ACI mode on Cisco Nexus 9000 Series Switches is outside of
the scope of this white paper.

Data Center Design Evolution


Traditional Data Center Design
Traditional data centers are built on a three-tier architecture with core, aggregation, and access layers (Figure 2),
or a two-tier collapsed core with the aggregation and core layers combined into one layer. Smaller data centers
may even take advantage of a single pair of switches. This architecture accommodates a north-south traffic pattern
where client data comes in from the WAN or Internet to be processed by a server in the data center, and is then
pushed back out of the data center. This is common for applications like web services, where most communication
is between an external client and an internal server. The north-south traffic pattern permits hardware
oversubscription, since most traffic is funneled in and out through the lower-bandwidth WAN or Internet bottleneck.
Figure 2. Traditional Three-Tier Design
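The oversubscription mentioned above can be put in concrete terms with a back-of-the-envelope calculation. The port counts below describe a hypothetical access switch and are not tied to a specific model:

```python
# Illustrative oversubscription calculation for a hypothetical access switch:
# 48 x 10 GE server-facing ports uplinked through 4 x 40 GE ports.
downlink_gbps = 48 * 10   # 480 Gbps of server-facing capacity
uplink_gbps = 4 * 40      # 160 Gbps toward the aggregation layer

ratio = downlink_gbps / uplink_gbps
print(f"Oversubscription ratio: {ratio:.0f}:1")  # 3:1
```

A 3:1 ratio is acceptable when most traffic exits through a slower WAN link anyway, but, as the next section shows, it becomes a bottleneck once traffic turns east-west.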


Commercial Collapsed Designs


Figures 3 through 5 depict collapsed and two-tier designs with Layer 3 at the aggregation layer, and Layer 2 at the
access and up to the aggregation layer, seen frequently in commercial data centers. Some customers may not
require a full two-tier design, and often deploy a pair of switches as the collapsed data center architecture.
Figure 3. One-Tier Collapsed Design

Figure 4. Two-Tier Small Access/Aggregation Design

Figure 5. Traditional Two-Tier Medium Access/Aggregation Design


Layer 2 Versus Layer 3 Implications


Data center traffic patterns are changing. Today, more traffic moves east-west from server to server through the
access layer, as servers need to talk to each other and consume services within the data center. This shift is
primarily driven by the evolution of application design. Many modern applications need to talk to each other within
the data center. Applications driving this shift include big data's distributed processing designs (for example, Hadoop), live virtual machine or workload migration (for example, VMware vMotion), server clustering (for example, Microsoft Cluster Services), and multi-tier applications. Oversubscribed hardware is no longer sufficient for east-west 10 Gb-to-10 Gb communication. Additionally, east-west traffic is often forced up through the core or aggregation layer, taking a suboptimal path.
Spanning Tree is another hindrance in the traditional three-tier data center design. Spanning Tree is required to
block loops in flooding Ethernet networks so that frames are not forwarded endlessly. Blocking loops means
blocking links, leaving only one active path (per virtual LAN, or VLAN). Blocked links severely reduce available bandwidth and worsen effective oversubscription. They can also force traffic to take a suboptimal path, as Spanning Tree may block
a more desirable path (Figure 6).
Figure 6. Suboptimal Path Between Servers in Different Pods Due to Spanning Tree-Blocked Links

Addressing these issues could include upgrading hardware to support 40 or 100 Gb interfaces, bundling links into
port channels to appear as one logical link to Spanning Tree, or moving the Layer 2/Layer 3 boundary down to the
access layer to limit the reach of Spanning Tree. Using a dynamic routing protocol between the two layers allows
all links to be active (Figure 7), and allows for fast reconvergence and equal cost multipathing (ECMP).
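The sketch below illustrates the idea behind ECMP path selection: packets are steered by hashing the flow's 5-tuple, so a given flow stays on one link (avoiding packet reordering) while many flows spread across all equal-cost links. Real switches use hardware hash functions; this Python model is only conceptual.

```python
import hashlib


def ecmp_path(src_ip, dst_ip, src_port, dst_port, proto, num_paths):
    """Pick one of num_paths equal-cost uplinks by hashing the flow 5-tuple.
    Every packet of a flow hashes to the same value, so a flow never
    switches paths mid-stream, while distinct flows spread out."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_paths


# The same flow always takes the same uplink (no reordering) ...
assert ecmp_path("10.0.1.5", "10.0.2.9", 49152, 443, 6, 4) == \
       ecmp_path("10.0.1.5", "10.0.2.9", 49152, 443, 6, 4)

# ... while many flows (here, varying source ports) can use different uplinks.
paths = {ecmp_path("10.0.1.5", "10.0.2.9", p, 443, 6, 4)
         for p in range(49152, 49200)}
print(f"Flows spread across {len(paths)} of 4 uplinks")
```

This is why ECMP over a routed fabric recovers the bandwidth that Spanning Tree leaves idle: instead of one active path per VLAN, every link carries a share of the flows.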


Figure 7. Two-Tier Routed Access Layer Design

The tradeoff in moving Layer 3 routing to the access layer in a traditional Ethernet network is that it limits Layer 2
reachability (Figure 8). Applications like virtual machine workload mobility and some clustering software require
Layer 2 adjacency between source and destination servers. By routing at the access layer, only servers connected
to the same access switch with the same VLANs trunked down would be Layer 2-adjacent. However, the
alternative of spanning a VLAN across the entire data center for reachability is problematic due to Ethernet's broadcast nature and Spanning Tree reconvergence events.
Figure 8. Routed Access Layer Limits Layer 2 Reachability

The Cisco Layer 2 Design Evolution


Virtual Port Channels
Over the past few years many customers have sought ways to move past the limitations of Spanning Tree. The
first step on the path to Cisco Nexus-based modern data centers came in 2008 with the advent of Cisco virtual port
channels (vPC). A vPC allows a device to connect to two different physical Cisco Nexus switches using a single
logical port-channel interface (Figure 9).
Prior to vPC, port channels generally had to terminate on a single physical switch. vPC gives the device active-active forwarding paths. Because of the special peering relationship between the two Cisco Nexus switches, Spanning Tree does not see any loops, leaving all links active.


To the connected device, the connection appears as a normal port-channel interface, requiring no special configuration. The industry-standard term is Multi-Chassis EtherChannel; the Cisco Nexus-specific implementation is called vPC.
Figure 9. vPC Physical Versus Logical Topology

vPC deployed on a Spanning Tree Ethernet network is a very powerful way to curb the number of blocked links,
thereby increasing available bandwidth. vPC on the Cisco Nexus 9000 is a great solution for commercial
customers, and those satisfied with current bandwidth, oversubscription, and Layer 2 reachability requirements.
Two of the sample small-to-midsized traditional commercial topologies are depicted using vPCs in Figures 10 and
11. These designs leave the Layer 2/Layer 3 boundary at the aggregation to permit broader Layer 2 reachability,
but all links are active as Spanning Tree does not see any loops to block. Details about the special vPC peering
relationship between Cisco Nexus switches will be discussed later in this document.
Server Network Interface Card (NIC) Teaming Design and Configuration
Modern applications and increasing virtual-machine density due to advances in CPU and memory footprints drive
server bandwidth demand, with many servers requiring 10 Gbps connections. Ideally, every server is dual-homed
to two different physical switches. Ordinarily, one of these connections would actively forward traffic, while the
other connection would stand by. While this design provides redundancy in case of a switch failure, a standby or
blocked connection wastes potential bandwidth.
Cisco Virtual Port Channel (vPC) allows connections from a device in a port channel to terminate on a pair of two
different Cisco Nexus switches set up in a special peering relationship. vPC provides Layer 2 multipathing,
increasing bandwidth while maintaining redundancy. Any device that supports port channels can be set up in a
vPC connected to a pair of switches in a vPC domain, and is unaware it is configured as a special type of port
channel.
Cisco vPC provides the following benefits. It:

- Allows a single device to use a port channel connected to two upstream Cisco Nexus switches
- Eliminates Spanning Tree-blocked ports
- Provides a loop-free topology
- Uses all available uplink bandwidth
- Provides fast convergence if either a link or a switch fails
- Provides link-level resiliency through standard port channel mechanisms
- Helps ensure high availability by connecting to two different physical switches

In the sample topology highlighted in the designs outlined in Figures 10 and 11, the two leaf (access) switches are
set up in a vPC domain. Some servers are dual-homed, and some servers are single-homed. As a best practice, all
devices would be connected to both switches through vPC, but this is not a requirement.
Figure 10. Traditional One-Tier Collapsed Design with vPC

Figure 11. Traditional Two-Tier Design with vPC

Virtual Overlays
Customers with a two- or three-tier design that wish to route to the access layer, yet still maintain Layer 2
reachability between servers, can take the next step in the data center evolution by implementing a virtual overlay
fabric. In a Cisco Nexus 9000 fabric design, dynamic routing is configured between switches down to the access
layer so that all links are active. This eliminates the need for Spanning Tree on the fabric, and can enable equal
cost multipathing (ECMP) using the dynamic routing protocol.


A virtual overlay fabric called virtual extensible LAN (VXLAN) is used to provide Layer 2 adjacencies over the Layer
3 fabric for servers and other devices that require Layer 2 reachability in the Cisco Nexus 9000 design. Combining
VXLAN and a dynamic routing protocol offers the benefits of an intelligent Layer 3 routing protocol, yet can also
provide Layer 2 reachability across all access switches for applications like virtual-machine workload mobility and
clustering (Figure 12).
Figure 12. VXLAN Enables Layer 2 Communication over an Underlying IP Transport Network
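The encapsulation that makes this possible adds an outer 8-byte VXLAN header carrying a 24-bit VXLAN network identifier (VNI), as defined in RFC 7348. The sketch below packs that header; the VNI value chosen is an arbitrary example.

```python
import struct


def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header defined in RFC 7348: an 8-bit flags
    field (0x08 = valid VNI), 24 reserved bits, the 24-bit VNI, and a
    final reserved byte. The inner Ethernet frame follows this header."""
    assert 0 <= vni < 2**24, "VNI is a 24-bit value"
    flags_and_reserved = 0x08 << 24   # I flag set, reserved bits zero
    vni_and_reserved = vni << 8       # VNI occupies the upper 24 bits
    return struct.pack("!II", flags_and_reserved, vni_and_reserved)


hdr = vxlan_header(10010)  # arbitrary example VNI for a tenant segment
print(hdr.hex())
```

Because the VNI is 24 bits wide, a VXLAN fabric can address roughly 16 million segments, compared with the 4096 VLAN IDs available in classic Ethernet.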

The limitations of Spanning Tree in three-tier designs and the needs of modern applications are driving a shift in
network design toward a spine-leaf or access-aggregation architecture. Two- and three-tier designs are still a valid
and prevalent architecture; spine-leaf simply provides another easily integrated option.

Spine-Leaf Data Center Design


Spine-leaf topologies are based on the Clos network architecture. The term originates from Charles Clos of Bell Laboratories, who published a paper in 1953 describing a mathematical theory of multipathing, non-blocking, multistage topologies for switching telephone calls.
Today, Clos' original thoughts on design are applied to the modern spine-leaf topology. Spine-leaf is typically deployed as two layers: spines (like an aggregation layer) and leaves (like an access layer). Spine-leaf topologies provide high-bandwidth, low-latency, non-blocking server-to-server connectivity.
Leaf switches provide devices access to the fabric (the network of spine and leaf switches) and are
typically deployed at the top of the rack. All devices connect to the leaf switches. Devices may include servers,
Layer 4 - 7 services (firewalls and load balancers), and WAN or Internet routers. Leaf switches do not connect to
other leaf switches. However, every leaf should connect to every spine in a full mesh. Some ports on the leaf will
be used for end devices (typically at 1 Gb or 10 Gb), and some ports will be used for the spine connections
(at 40 Gb).


Spine switches are used to connect to all leaf switches, and are typically deployed at the end or middle of the row.
Spine switches do not connect to other spine switches. Spines serve as backbone interconnects for leaf switches.
Spines only connect to leaves.
All devices connected to the fabric are an equal number of hops away from one another. This delivers predictable
latency and high bandwidth between servers. The diagram in Figure 13 depicts a sample spine-leaf design.
Figure 13. Spine-Leaf Topology
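The equal-hop property is easy to verify on a small model of the fabric. The sketch below wires a hypothetical two-spine, four-leaf fabric as a graph and checks by breadth-first search that every leaf pair is exactly two hops apart:

```python
from collections import deque
from itertools import combinations


def hops(graph, start, end):
    """Shortest path length (in hops) between two switches, via BFS."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == end:
            return dist
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return None


# Model a small fabric: 2 spines, 4 leaves, every leaf wired to every spine.
spines = ["spine1", "spine2"]
leaves = ["leaf1", "leaf2", "leaf3", "leaf4"]
fabric = {s: list(leaves) for s in spines}
fabric.update({l: list(spines) for l in leaves})

# Every leaf is exactly two hops (leaf -> spine -> leaf) from every other leaf.
assert all(hops(fabric, a, b) == 2 for a, b in combinations(leaves, 2))
print("All leaf pairs are 2 hops apart")
```

Adding a fifth leaf or a third spine to the model does not change the result, which is exactly the pay-as-you-grow property described below: capacity scales without altering the latency profile.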

Another way to think about the spine-leaf architecture is by thinking of the spines as a central backbone with all
leaves branching off the spine like a star. Figure 14 depicts this logical representation, which uses identical
components laid out in an alternate visual mapping.
Figure 14. Logical Representation of a Spine-Leaf Topology

Cisco Nexus 9000 Series Switches allow small-to-midsize commercial customers to start with a few switches and
implement a pay-as-you-grow model. When more access ports are needed, more leaves can be added. When
more bandwidth is needed, more spines can be added.

Overlay Design
Virtual network overlays partition a physical network infrastructure into multiple, logically isolated networks that can be individually programmed and managed to deliver the network characteristics each workload requires.
Small-to-midsize commercial customers may require mobility between data centers, within different pods in a
single data center, and across Layer 3 network boundaries. Virtual network overlays make mobility and Layer 2
reachability possible.
Cisco has had a role in developing multiple overlay networks, each solving a different problem by providing an
innovative solution. For example, Cisco Overlay Transport Virtualization (OTV) provides cross-data center mobility
over Layer 3 data center interconnect (DCI) networks using MAC-in-IP encapsulation.


Figure 15 shows two overlays in use: OTV on a Cisco ASR 1000 for data center-to-data center connectivity, and
VXLAN within the data center. Both provide Layer 2 reachability and extension. OTV is also available on the Cisco
Nexus 7000 Series Switch.
Figure 15. Virtual Overlay Networks Provide Dynamic Reachability for Applications

For more information on overlays, read the Data Center Overlay Technologies white paper. For more information
on OTV, visit the Cisco OTV website.

ACI Fabric Overlay


The ACI fabric leaf and spine switches appear as a single switch to the outside world, capable of bridging and
routing. They also act as a hardware-based VXLAN, VLAN, and Network Virtualization Using Generic Routing
Encapsulation (NVGRE) gateway. Traffic is encapsulated and policy is enforced at the leaf layer, equal cost
multipathed through a spine switch, and de-encapsulated as traffic exits the fabric (Figure 16). The ACI fabric
encapsulation is based on the standard VXLAN frame format, and takes full advantage of several reserved fields in
the VXLAN header for ACI-specific features and policy enforcement.


Figure 16. Leaf Switches Encapsulate and Decapsulate Traffic and Enforce Policy

VXLAN is one of several protocols used on the ACI fabric to route traffic between nodes and enforce policy. These
protocols do not need to be configured, but they are visible from the APIC controller. All configuration and
management is done through the APIC controller.

Sample Commercial Topologies


A spine-leaf architecture taking advantage of 10 and 40 GE is designed to handle the shift to east-west traffic patterns in the data center, limit the scope of Spanning Tree by routing between the layers, provide a loop-free, all-links-active topology, and perform ECMP.
The Cisco Nexus 9000 Series Switch portfolio provides a wide array of switches to choose from. Design
considerations include desired throughput, oversubscription ratio, estimated growth, server traffic patterns,
application requirements, and more.
This section will cover the Cisco Nexus 9000 portfolio, and outline several designs suitable for the small-to-midsize
commercial data center. This section will show simple network topologies. A later section will demonstrate how
these design options could be integrated into an existing network. Then, a Cisco validated design will be featured
that demonstrates how the ACI policy model could be applied to a three-tier application (SharePoint).

Cisco Nexus 9500 Product Line


The Cisco Nexus 9500 Series modular switches are typically deployed as spines (aggregation, or core switches) in
commercial data centers. Figure 17 lists the line cards available for the Cisco Nexus 9500 chassis (Cisco NX-OS
mode-only line cards have been removed).


Figure 17. Cisco Nexus 9500 ACI Mode Line Card Offerings

Note: Please check the Cisco Nexus 9500 data sheets for the latest product information. Line card availability may have changed since the time of writing. Refer to the software release notes to determine current chassis and line card support for ACI mode.

Cisco Nexus 9300 Product Line


The Cisco Nexus 9300 Series fixed-configuration switches are typically deployed as leaves (access switches).
Some Cisco Nexus 9300 switches are semi-modular by providing an uplink module slot for additional ports. Figure
18 lists Cisco Nexus 9300 chassis that support ACI mode.
Figure 18. Cisco Nexus 9300 Chassis Offerings

Note: Please check the Cisco Nexus 9300 data sheets for the latest product information. Switch availability may have changed since the time of writing.


If the 40 GE interfaces are not required, the 40 GE ports can be converted to 10 GE interfaces using the Cisco
Quad Small Form Factor Pluggable (QSFP) to SFP or SFP+ Adapter (QSA) module. The QSA module converts a
40 GE QSFP port into a 1 GE SFP or 10 GE SFP+ port. Customers have the flexibility to use any SFP+ or SFP
module or cable to connect to a lower-speed port. This flexibility allows a cost-effective transition to 40 GE over
time. The QSA module supports all SFP+ optics and cable reaches and several 1 GE SFP modules.
Figure 19 shows two QSA module setups.


Figure 19. QSA Module Shown Empty on the Left, with SFP/SFP+ Inserted on the Right

For more information on QSA modules, read the Cisco QSA Data Sheet.
For savings on 10 GE interfaces, Cisco SFP+ copper Twinax direct-attach cables are available for distances up to
10 meters (Figure 20). Twinax cables provide significant savings over traditional 10 GE fiber optic transceivers and
cabling.
Note: Active Twinax must be used on ACI mode switches.

Figure 20. Direct-Attach Twinax Copper Cable Assembly with SFP+ Connectors

For more information on all 10 GE SFP+ cable options, read the Cisco 10GBase SFP Modules data sheet.
Always check the Cisco Transceiver Modules Compatibility Information webpage to stay up to date on chassis
support.

Design A: Two Spines and Two Leaves


A simple two-spine (aggregation) and two-leaf (access) design (Figure 21) provides a palatable entry point for
small-to-midsize customers requiring more bandwidth and room for future expansion.
Figure 21. Two-Spine, Two-Leaf Small-to-Medium Commercial Design


In the leaf or access layer, the topology depicts a pair of fixed 1-rack-unit (RU) Cisco Nexus 9372PX Switches. Each
9372PX provides 48 ports of 1/10 GE with SFP+ transceivers and six QSFP+ 40-Gbps ports. A 10GBase-T version
of the switch is also available if twisted-pair RJ-45 connectors are desired over SFP+ for 10 GE.
In the spine or aggregation layer the topology depicts two fixed Cisco Nexus 9336PQ Switches. Each 9336PQ
provides 36 ports of 40 GE with QSFP transceivers. If more ports are needed or to plan for future growth, the
chassis-based Cisco Nexus 9500 Series Switches could be used instead of the fixed 9336PQ.
The 9372 switches can operate in standalone Cisco NX-OS mode today, and are alternately capable of operating
in ACI mode, with software support anticipated during the first half of calendar year 2015. The 9336 switches
operate in ACI mode only.
For the latest comparison between Cisco Nexus 9500 Series Switch line cards, check out the Cisco Nexus 9000
Series Switches Compare Models tool.
The QSA 40-to-10 GE modules could also be used in this design as you transition from 10 GE to 40 GE.
Alternately, Cisco provides a low-cost 40-Gigabit transceiver called a bidirectional (BiDi) optic that eases the move from
10 Gbps to 40 Gbps. Existing short-reach (SR) 40-Gigabit transceivers use connectors that require 12 strands of
fiber through a multiple-fiber push-on (MPO) connector (Figure 22). Unfortunately, existing 10-Gigabit fiber
deployments and patch panels use LC-to-LC multimode fiber. Upgrading from 10-Gigabit to 40-Gigabit
fiber can be an expensive endeavor if all transceivers, cabling, and patch panels have to be replaced.
Figure 22. Existing 40-Gigabit, 12-Strand Fiber Cable and Connectors

As an alternative, the Cisco 40-Gigabit QSFP BiDi transceiver (Figure 23) addresses the challenges of the fiber
infrastructure by providing the ability to transmit full-duplex 40 Gbps over a standard OM3 or OM4 multimode fiber
with LC connectors (Figure 24). The BiDi transceiver can reuse existing 10-Gigabit fiber cabling, instead of
deploying a new fiber infrastructure. The BiDi optic provides an affordable, simple upgrade path to 40 GE at almost
the same cost as 10-GE fiber today.
Figure 23. Cisco QSFP BiDi Transceiver


Figure 24. BiDi Fiber Connectivity and Speed

For more information on BiDi optics, read the Cisco QSFP BiDi Technology white paper.
This design will be showcased later in this white paper, and configuration examples will be provided.

Design B: Two Spines and Four Access Leaves


As your data center grows and more servers are added, Cisco Nexus 9000 deployment can be scaled out by
simply adding more leaves (access switches) to increase available server and device access ports (Figure 25).
Figure 25. Two-Aggregation Four-Access Commercial Growth Design

In the leaf or access layer the topology depicts four fixed 1RU Cisco Nexus 9372PX Switches. Each 9372PX
provides 48 ports of 1/10 GE with SFP+ transceivers and six QSFP+ 40 GE ports. There is also a 10GBase-T
version of the switch available if twisted pair RJ-45 connectors are desired over SFP+ for 10 GE.
In the spine or aggregation layer, the topology depicts two fixed Cisco Nexus 9336PQ Switches. Each 9336PQ
provides 36 ports of 40 GE with QSFP transceivers. If more ports are needed or to plan for future growth, the
chassis-based Cisco Nexus 9500 Series Switches could be used instead of the fixed 9336PQ.

Design C: Four Aggregation and Four Access Switches - Spine-Leaf


A design with at least four spines (Figure 26) enters true spine-leaf territory, where the function of the spine
becomes pure backbone connectivity. In this design, the only devices connected to the spine are the leaves.
Failure of a single spine has little impact, simply leaving one less available path. All devices connect to the leaves.
Adding spines provides more cross-sectional bandwidth in the fabric, as well as additional paths to load-balance
traffic across. Like the previous design shown in Figure 25, as more access ports are required, more leaves can be
added to the topology. More spines can also be added as bandwidth requirements grow. The spine-leaf topology
allows for massive scale-out.


Figure 26. Four-Spine, Four-Leaf Scalable Bandwidth Design

Integration into Existing Networks


Adding ACI Nexus 9000 Series Switches to your data center does not mean existing devices must be removed or
reconfigured. Cisco Nexus 9000 switches can integrate seamlessly into a commercial data center while still
providing all the benefits the platform has to offer.
The three common scenarios for integrating ACI into existing environments include: inserting a new ACI fabric as
an additional data center pod; inserting a new ACI fabric as a data center policy enforcement engine; and inserting
an extended ACI fabric to non-directly attached virtual and physical leaf switches.
The first and most common integration into an existing network is shown in Figure 27.
Figure 27. Cisco ACI as a New Data Center Pod

For detailed information, refer to the Integrate Cisco Application Centric Infrastructure with Existing Networks white
paper.

Fabric Extender Support


To take full advantage of existing hardware, increase access port density, and increase 1-GE port availability,
Cisco Nexus 2000 Series Fabric Extenders (Figure 28) can be attached to Cisco Nexus 9300 Series Switches in a
single-homed configuration.
Table 1 outlines the Cisco fabric extenders currently supported.

Table 1. Supported Fabric Extenders

Cisco Nexus 9300 parent switches: N9K-C9396PX, N9K-C9372PX, N9K-C9332PQ

Supported FEX models: N2K-C2248TP/TP-E, N2K-C2232PP, N2K-C2232TM-E, N2K-C2248PQ

Figure 28. Cisco Nexus 2000 Series Fabric Extenders

Refer to the Cisco Nexus 9000 Software Release Notes for the most up-to-date feature support.
Fabric Extender Transceivers (FETs) are also supported to provide a cost-effective connectivity solution (FET-10G)
between Cisco Nexus 2000 Series Fabric Extenders and their parent Cisco Nexus 9300 switches.
For more information on FET-10G transceivers, refer to the Cisco Nexus 2000 Series Fabric Extenders data sheet.
Supported Cisco Nexus 9000-to-Nexus 2000 Fabric Extender (FEX) topologies are pictured in Figure 29. As with
other Cisco Nexus platforms, think of the FEX like a logical remote line card of the parent Cisco Nexus 9000
switch. Each FEX connects to one parent switch. Servers should be dual-homed to two different FEXs.
Figure 29. Supported Cisco Nexus 9300 Plus FEX Design


For detailed information, check out the Cisco Nexus 2000 Series NX-OS Fabric Extender Configuration Guide for
Cisco Nexus 9000 Series Switches, Release 6.x.

Storage Design
Existing IP-based storage, such as network-attached storage (NAS) or Internet Small Computer System Interface
(iSCSI), can be integrated into a Cisco Nexus 9000 fabric. Currently, Fibre Channel and Fibre Channel over
Ethernet (FCoE) are not supported on the Cisco Nexus 9000, but this section will show how they could be
designed alongside the Cisco Nexus 9000 and evolve with added features. Refer to the Cisco website for updates
on future support for FCoE N_Port Virtualization (NPV).
A converged storage fabric design is possible with Cisco Nexus 9000 switches when taking advantage of IP-based
storage like iSCSI or NAS, reducing the cabling, switch ports, switches, and adapters required, and delivering
significant power savings. All servers connect to Cisco Nexus 9300 leaves (access switches), carrying
both LAN and IP-based storage traffic. The storage devices could also connect to the leaves, or could be left in
place, connected to existing infrastructure. Figures 30 and 31 highlight both options.
Figure 30 illustrates a design where both servers and IP-based storage are directly connected to the leaf access
switches, reducing the number of hops and devices.
Figure 30. Converged Access for Servers and IP-Based Storage Device on Cisco Nexus 9300


Figure 31. Converged Access for Servers Using IP-Based Storage on the Cisco Nexus 9300

Cisco Nexus 9372 leaf access switches split out the LAN traffic to send to the spine aggregation switches, and
send the IP-based storage traffic to switches dedicated to storage traffic. While many switches could be used,
Cisco Nexus 5672UP Switches are featured in Figure 32.
Cisco Nexus 5600 Series Switches are the third generation of the leading data center server access Cisco Nexus
5000 Series of switches. The Cisco Nexus 5600 is the successor to the industry's most widely adopted Cisco
Nexus 5500 Series Switches, and maintains all the existing Nexus 5500 features, including LAN and SAN
convergence (unified ports, FCoE), fabric extenders (FEX), and FabricPath. In addition, the 5600 brings integrated
line-rate Layer 2 and Layer 3 with true 40-GE support, Cisco's Dynamic Fabric Automation (DFA) innovation, NVGRE,
VXLAN bridging and routing capability, network programmability and visibility, deep buffers, and significantly higher
scale and performance for highly virtualized, automated, and cloud environments.
Figure 32. Cisco Nexus 5672UP Switch

For more information, read the Cisco Nexus 5600 Platform Switches data sheet.
Customers who prefer to have separate, dedicated storage connections from each server could cable the storage
connections to their physical storage network, and cable the production IP LAN connections to Cisco Nexus 9000
switches.
The dedicated, physical storage network could take advantage of IP-based storage like iSCSI or NAS, or could be
comprised of a Fibre Channel or FCoE network. Regardless of the protocol, this design would not require any
change in existing storage cabling.


Layer 4 - 7 Integration
Layer 4 - 7 services like firewalls and load balancers can be inserted and controlled through the ACI fabric using an
object called a service graph. The Layer 4 - 7 service appliances can be physical or virtual, and can be physically
located anywhere in the fabric.
ACI provides a single point of provisioning for services with the added ability to automate and script service
deployment. Reusable service templates can be created and replicated for new application rollouts.
For more information, refer to the Service Insertion with Cisco Application Centric Infrastructure guide.

End-State Topology
The simplified diagram in Figure 33 provides an example of an end-state design integrated into an existing data
center. The design features two Cisco Nexus 9372 Switches, two Nexus 9336 Switches, IP-based storage, an ASA
firewall appliance, connections to the campus Cisco Catalyst LAN environment, and connectivity to the WAN
router. Note that all devices connect to Cisco Nexus 9300 leaf access switches.
Figure 33. Sample End-State WAN, Data Center, and Campus Network

Example ACI Design and Configuration


This section details a sample ACI design and configuration using real equipment, including two Cisco Nexus
9336PQ fixed spine switches, two Nexus 9396PX leaf switches, VMware ESXi servers, and a Nexus 6000 Series
Switch to serve as the outside Layer 3 connection. This topology has been tested in a Cisco lab to validate and
demonstrate the ACI solution in a repeatable format.
Microsoft SharePoint is used in this design to demonstrate the ACI policy model and how to map out a multi-tier
application.
Your exact configuration may vary - the intent of this section is to give a sample ACI design and configuration,
including the features discussed earlier in this white paper. Always refer to the Cisco configuration guides for the
latest information.


Validated ACI Physical Topology


Figure 34 illustrates the physical topology used in the design.
Figure 34. ACI Physical Topology

As shown, some servers are single-homed, and some servers are dual-homed. Some servers are connected to the
leaves through 1 GE, while others are 10 GE. Ideally, all devices would be dual-homed to a pair of leaves.
SharePoint consists of three major tiers: web, application, and database. Most of these reside as virtual machines
on VMware ESXi 5.5 servers, while some database servers are bare metal, not running in a virtualized
environment. This is to demonstrate a mixed data center, as most customers are not 100 percent virtualized, and
are moving toward a combination of bare metal and multiple hypervisors.
External users from the WAN and Internet are connected through a Layer 3 switch to one of the leaf switches. The
bare-metal database servers are also connected off of the Layer 3 switch.

Validated ACI Logical Topology


Instead of configuring network policy on a switch-by-switch and device-by-device basis, we will configure an
application policy through the centralized APIC controller to determine what the pieces of the application are, who
they can talk to, and what information they are allowed to share.


This is referred to as the policy model of ACI. The policy is configured centrally through the APIC and pushed out
to hardware to be enforced on the Cisco Nexus 9000 Series Switch fabric.
ACI objects will be reviewed briefly in this section to lay out the design, first in the tenant space, and then in the
fabric space of the GUI. For detailed design and definition of all ACI objects, refer to the Cisco Application Centric
Infrastructure Design Guide.
Figure 35 illustrates the logical topology used in the design to represent the policy model for a sample SharePoint
application profile.
Figure 35. ACI Logical Topology

ACI Tenant Tab Object Review


This section will review basic ACI constructs relative to their usage in this validated design. There are two major
tabs in the GUI covered in this design: the Tenant tab and the Fabric tab. The Tenant tab is where the policy model
is configured. The Fabric tab is how the fabric infrastructure and ports are configured, and tied to the policy model.
This section highlights the Tenant tab.
Tenants
Tenants provide separation between groups or customers running on the same physical ACI fabric. By default,
tenants cannot communicate. Different tenants can be created for different business functions, such as production,
test and development, marketing, etc., or could be used for multiple customers on the same infrastructure. This
design features a single tenant.
Private Networks
Within a tenant, one or more private networks are created. A private network is equivalent to a VRF in the legacy
networking world, and provides Layer 3 separation and isolation within a tenant. This design features two private
networks, one for internal communications and one for external.
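As a simplified illustration, the tenant and its private networks map directly to objects in the APIC REST API. The sketch below uses the ACI object-model class names (fvTenant, fvCtx); the tenant name "Production" is an assumption, while the VRF names match those used in this design.

```python
import json

# Sketch: a tenant containing two private networks (VRFs), as an APIC JSON
# payload. The tenant name "Production" is an assumption; the VRF names
# mirror this design's Internal_VRF and External_VRF.
tenant = {
    "fvTenant": {
        "attributes": {"name": "Production"},
        "children": [
            {"fvCtx": {"attributes": {"name": "Internal_VRF"}}},
            {"fvCtx": {"attributes": {"name": "External_VRF"}}},
        ],
    }
}

# Creating the whole tenant is a single POST of this body to
# https://<apic>/api/mo/uni.json
print(json.dumps(tenant, indent=2))
```

Because the entire hierarchy is one object tree, a tenant and everything beneath it can be created, versioned, or cloned as a single document.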


Bridge Domains
Within a private network, one or more bridge domains are created. A bridge domain is essentially a container for
subnets that will be used by components of the applications. Adding an IP address and mask to a bridge domain
creates the distributed default gateway across the leaves so all endpoints always have a local default gateway,
even if the endpoint moves. Bridge domains also provide the ability to change forwarding behavior. By default, the
fabric will not flood traffic like Address Resolution Protocol (ARP) requests and unknown unicast. However, if an
application requires this type of flooding, that application's subnets can be placed in a separate bridge domain
where flooding can be enabled just for that application.
The relationships and hierarchy between tenants, private networks, and bridge domains are depicted in Figure 36.
Figure 36. ACI Logical Model Hierarchy

Application Profiles
An application profile defines the pieces or tiers of an application and the relationship between them. An application
profile for SharePoint is featured in this design. A different application would have a different application profile.
Endpoint Groups (EPGs)
Endpoint groups (EPGs) group servers or services with similar policy requirements. For example, SharePoint has
three tiers that require different behavior on the network: web, application, and database. All SharePoint database
servers belong to the same database EPG. Each device inside of an EPG is an individual endpoint. There are
several ways to group endpoints to EPGs, which include identifiers like VLAN, VXLAN, and NVGRE tags; physical
ports or leaves; and virtual ports using VMware integration. Each EPG is associated to one bridge domain, which
should contain the default gateways required by all endpoints in the group.
Contracts
ACI implements a whitelist model: no traffic is permitted on the fabric until policy is put in place. Policy is created
through contracts. Contracts are consumed, provided, or both consumed and provided between EPGs. A
contract dictates who can talk to whom, and what they are allowed to talk about (that is, which ports and protocols). An
application profile contains a collection of EPGs and the contracts defining the policies between EPGs. A contract
contains one or more subjects, which define what communication is allowed between consumer and provider
EPGs.
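The contract hierarchy described above maps to a small object tree in the APIC API. The following sketch mirrors the App_to_Web contract and App_Web_Subject used later in this design; the "default" filter reference shown here is an assumption, standing in for a permit-all filter at initial bring-up.

```python
# Sketch: a contract (vzBrCP) containing one subject (vzSubj), which in turn
# references a filter defining the allowed traffic. Contract and subject names
# mirror this design; the "default" (permit-all) filter is an assumption.
contract = {
    "vzBrCP": {
        "attributes": {"name": "App_to_Web"},
        "children": [
            {"vzSubj": {
                "attributes": {"name": "App_Web_Subject"},
                "children": [
                    # Relation from the subject to a named filter
                    {"vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": "default"}}},
                ],
            }},
        ],
    }
}
```

Swapping the filter reference is how a permit-all contract is later tightened to specific ports without touching the EPGs that provide or consume it.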
Domains
Domains define how endpoints are connected to the fabric. There are four types of domains: VMware Virtual
Machine Manager (VMM), physical, external Layer 2, and external Layer 3.


An EPG can be associated to multiple domains, depending on how its endpoints are connected. For example, in
the sample topology all of the web servers reside on VMware servers, and therefore the web EPG will only be tied
to a single VMware VMM domain. However, the database servers are found both as virtual machines and as
bare-metal servers, so the database EPG will be associated to two domains: the VMware VMM domain and a physical
domain that binds a port and VLAN encapsulation to the EPG.
By tying multiple domains to a single EPG, we do not have to create different EPGs for servers that we want treated
exactly the same: a single database EPG could have endpoints that reside on VMware, Hyper-V, Xen, and
bare-metal servers. ACI is agnostic to how and where the endpoints are connected; you simply tell ACI how to group
endpoints into EPGs.
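As a sketch of this idea, the Database EPG from this design could be expressed as an APIC payload with one bridge domain relation and two domain associations. The VMM and physical domain names (DVS1, BareMetal_Dom) are placeholders for the domains created in the Fabric and VM Networking tabs.

```python
# Sketch: the Database EPG tied to one bridge domain and two domains
# (VMM plus physical). Domain names DVS1 and BareMetal_Dom are placeholders.
database_epg = {
    "fvAEPg": {
        "attributes": {"name": "Database"},
        "children": [
            # Every EPG is tied to exactly one bridge domain
            {"fvRsBd": {"attributes": {"tnFvBDName": "InternalBD"}}},
            # VMM domain association for the virtual machine endpoints
            {"fvRsDomAtt": {"attributes": {"tDn": "uni/vmmp-VMware/dom-DVS1"}}},
            # Physical domain association for the bare-metal endpoints
            {"fvRsDomAtt": {"attributes": {"tDn": "uni/phys-BareMetal_Dom"}}},
        ],
    }
}
```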
Domains are generally configured in the Fabric tab, and will be covered in more detail in the Fabric tab section later
in this document. Domains are associated to EPGs in the Tenant tab, featured in this section.

Tenant Tab End-State Configuration Snapshot


This section will highlight the policy created under the Tenant tab to enable the SharePoint application on the ACI
fabric. For step-by-step configuration, refer to the APIC configuration guides.
SharePoint Application Profile Policy View
Figures 37 and 38 illustrate the end-state relationship between all SharePoint EPGs, and the contracts between
EPGs to permit communication.
Figure 37. Endpoint Relationship Between SharePoint EPGs and Contracts


Figure 38. Endpoint Relationship Between SharePoint EPGs and Contracts

The SharePoint application profile includes the EPGs listed in Figure 39.
Figure 39. EPGs in the SharePoint Application Profile


EPG Domain Association


Most of the endpoints are virtual machines on VMware servers, and are therefore tied to a VMware VMM domain.
The VMM domain tied to the EPGs was created in the VM Networking tab, featured later in this document. In
summary, VMM integration pushes a distributed virtual switch into vCenter, where the VMware admins can connect
hosts to the switch, and in turn, connect VMs to port groups. Each port group maps to an EPG. Then, when a
VMware admin connects a new web server VM, for example, to the web port group in vCenter, that VM will be
categorized as an endpoint in the Web EPG in the APIC.
For example, the Web, App, External, and vMotion EPGs only have endpoints on the VMware servers. These
EPGs are tied to a single domain, shown in Figure 40.
Figure 40. EPGs Under One Domain

Some EPGs may have endpoints on different types of hypervisors or servers. The database tier has both virtual
machine endpoints, and bare-metal server endpoints, and is therefore tied to two domains, illustrated in Figure 41.
Figure 41. EPGs Tied to Two Domains

The physical domain is for the bare-metal server connected to the switch through port Eth1/6 on Leaf 102. The
bare-metal database server resides on VLAN 200. In addition to mapping the physical domain to the database
EPG, a static binding must also be configured to tell ACI on which port and VLAN tag to look for the bare-metal
physical database server, depicted in Figure 42.


Figure 42. Static Binding Configuration

This static binding tells ACI that any traffic entering Leaf 102 on port Eth1/6 tagged with VLAN 200 belongs in the
Database EPG bucket.
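That static binding corresponds to a single path-attachment object under the Database EPG. A sketch of the payload follows; the tDn path syntax is the standard APIC form for addressing a leaf port.

```python
# Sketch: the static binding from this design as a path attachment on the
# Database EPG, pointing at Leaf 102 port Eth1/6 with VLAN tag 200.
static_binding = {
    "fvRsPathAtt": {
        "attributes": {
            "tDn": "topology/pod-1/paths-102/pathep-[eth1/6]",  # Leaf 102, Eth1/6
            "encap": "vlan-200",  # VLAN carrying the bare-metal database server
        }
    }
}
```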
Note: A vMotion EPG has also been created to permit live virtual machine migration across the ACI fabric.

vMotion is a feature specific to VMware. Notice that the vMotion EPG does not provide or consume any contracts.
This is because by default, all endpoints in the same EPG can talk. Each host has a vMotion port in the vMotion
subnet, VLAN, and EPG. vMotion ports only need to talk to other vMotion ports; they do not need to talk to any
other endpoints.
vMotion could also be configured to move virtual machines in and out of the fabric. The ability to do this would
depend on the vSwitch design inside the VMware hypervisor, in addition to the policy design in ACI.
Verifying Discovered Endpoints in an EPG
To view discovered endpoints and verify they are being classified into the correct EPG, view the Operational >
Client End Points tab of an EPG. Note in Figure 43 that there are database endpoints on a VMware server
attached to Leaf 101 port 1/1, and on a bare-metal server talking through Leaf 102 port 1/6.
Figure 43. Database Endpoints

Each EPG must be tied to a bridge domain, which should contain the default gateways needed for endpoint
networking. An EPG can only be tied to one bridge domain at a time.


Tenant Networking
Each tenant also has one or more private networks. Each private network has one or more bridge domains
underneath. These objects are configured under the Networking folder, shown in Figure 44.
Figure 44. How to Configure Private Networks

There are two private networks: External_VRF and Internal_VRF. Each private network has one bridge domain.
The ExternalBD belongs to the External private network, and the InternalBD belongs to the Internal private
network.
The Web, App, Database, and vMotion EPGs are part of the Internal network, and the external EPG and the
external routed subnets (configuration shown later) are part of the External network.
The web servers belong to the 10.1.1.0/24 subnet, the app servers belong to the 10.2.2.0/24 subnet, the database
servers belong to the 10.3.3.0/24 subnet, and the vMotion network uses the 10.99.99.0/24 subnet, as depicted by
the internal bridge domain configuration in Figure 45. All networks use a .254 default gateway, which is pushed and
active on each leaf where the EPGs are present in the fabric.


Figure 45. How to Configure the Internal Bridge Domain

Note in the InternalBD configuration screenshot in Figure 45, the bridge domain is tied to the Internal_VRF private
network. Also take note of the flooding options that can be modified on a bridge domain-by-bridge domain basis.
The options shown are the default forwarding behaviors.
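The InternalBD settings shown in Figure 45 can also be expressed as a REST payload. The sketch below uses ACI object-model class names (fvBD, fvRsCtx, fvSubnet); the attribute values mirror the default forwarding behaviors just discussed and the web-tier gateway from this design.

```python
# Sketch: the InternalBD bridge domain, tied to the Internal_VRF private
# network, with a distributed default gateway and the default (non-flooding)
# forwarding behaviors shown in Figure 45.
internal_bd = {
    "fvBD": {
        "attributes": {
            "name": "InternalBD",
            "arpFlood": "no",           # default: ARP is handled by the fabric, not flooded
            "unkMacUcastAct": "proxy",  # default: unknown unicast goes to the spine proxy
        },
        "children": [
            # Tie the bridge domain to its private network (VRF)
            {"fvRsCtx": {"attributes": {"tnFvCtxName": "Internal_VRF"}}},
            # Distributed default gateway for the web tier subnet
            {"fvSubnet": {"attributes": {"ip": "10.1.1.254/24"}}},
        ],
    }
}
```

Flipping arpFlood to "yes" on a dedicated bridge domain is how flooding can be enabled for just one application, as described earlier.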
Tying an EPG to a bridge domain is configured in the EPG under the application profile, shown in Figure 46.
Figure 46. How to Tie an EPG to a Bridge Domain

Note: Just because subnets belong to the same bridge domain does not mean endpoints in those subnets can communicate. Policy is enforced using contracts.


Policy Enforcement through Contracts


To permit the Web, App, and Database EPGs to talk to one another, contracts must be created and applied. A
contract contains one or more subjects, which consist of an action, a filter, and an optional label.
When setting up a new ACI fabric, it is easiest to create a contract that permits all traffic between two EPGs to confirm
basic connectivity. After confirming connectivity with a permit-all contract, you can then limit the contract to permit
only certain types of traffic. For example, between users and web servers, one may only want to permit ports 80 and
443 for HTTP and HTTPS traffic. All other traffic on all other ports would be denied. Contracts are created under the
Security Policies folder (Figure 47).
Figure 47. Contracts Created Under the Security Policies Folder

The screenshot in Figure 47 highlights the App_to_Web contract, which contains a single subject called
App_Web_Subject, which uses a single filter to permit all traffic.
After the contracts are created, they must be applied between EPGs. An EPG can consume, provide, or both
consume and provide a contract. Figure 48 depicts the app EPG providing the App_to_Web contract. The Web
EPG is configured to consume the App_to_Web contract.


Figure 48. App EPG Provides the App_to_Web Contract

In Figure 48 you can also see that the App EPG consumes another contract, for the database resources provided by the
Database EPG. Once all contracts have been configured under the Contracts folder of each EPG, traffic should
flow as permitted by the contracts.
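Expressed against the API, providing and consuming are simply relation objects under each EPG. In the sketch below, the App_to_Web contract name comes from this design, while "App_to_DB" is an assumed name for the database contract the App EPG consumes.

```python
# Sketch: provider/consumer relations as EPG children. The App EPG provides
# App_to_Web and consumes the database contract; the Web EPG consumes
# App_to_Web. "App_to_DB" is an assumed contract name.
app_epg_contracts = [
    {"fvRsProv": {"attributes": {"tnVzBrCPName": "App_to_Web"}}},  # App provides
    {"fvRsCons": {"attributes": {"tnVzBrCPName": "App_to_DB"}}},   # App consumes
]
web_epg_contracts = [
    {"fvRsCons": {"attributes": {"tnVzBrCPName": "App_to_Web"}}},  # Web consumes
]
```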
External Routed Networks
The last piece to configure in the Tenant tab is the external routed Layer 3 connection to the Cisco Nexus 6000
Series Switch, where external users reach the SharePoint application.
There are several pieces of an external routed domain to configure, each highlighted in the screenshot sequence
(Figures 49 - 54). In this design, Leaf 102 serves as the border leaf and runs Open Shortest Path First version 2
(OSPFv2) to the Cisco Nexus 6000, connected to port Eth1/6. OSPFv2 NSSA area 1 has already been configured
on the 6000 switch.
First, the external routed network is configured by specifying the basic routing protocol settings (Figure 49).
Figure 49. Configuration of the External Routed Network


Note a single external routed network (External_Users) is tied to a single private network (External_VRF). It is also
tied to a domain (External_User_Domain), which is configured and covered later in the Fabric tab.
Next, the fabric needs to know which leaf and which port(s) connect to the routed device. This is achieved by
configuring the logical node and logical interface profiles. The logical node profile specifies the border leaf, and its
router ID (Figure 50).
Figure 50. Configuration of the Logical Node and Interface Profiles

The logical interface profile specifies the interface to which the routed device is connected, and the type of interface on
which the neighbor relationship should be established. Options include a physical interface, subinterface, or
switched virtual interface (SVI). This configuration uses SVI routing so that other routing relationships could be
established for different private networks (VRFs) or tenants on the same physical interface (Leaf 102, port 1/6).
Figure 51 shows this configuration.


Figure 51. Configuration of Relationships

Lastly, an OSPF interface protocol policy is set up to specify the network type and timers (the timers have not been
modified from their defaults), as illustrated in Figure 52.
Figure 52. How to Set Up an OSPF Interface Protocol Policy

This OSPF interface protocol policy is then associated to the interface profile of Leaf 102 (Figure 53).


Figure 53. How to Associate the Interface Protocol Policy to the Interface Profile

Under the Networks folder of the external routed network, individual subnets can be configured as EPGs. In this
design, there are four sample user sites: North, East, South, and West (Figure 54).
Figure 54. Configuration of Individual Subnets as EPGs

After the subnets are configured as EPGs, contracts must be created between these EPGs and the application
profile EPGs to permit the external subnets to communicate with the Web EPG, for example.
For more information, refer to Connecting Application Centric Infrastructure (ACI) to Outside Layer 2 and 3
Networks.


End-State Tenant Tab Configuration


To review, the final application profile with all EPGs and contracts configured is represented in Figure 55.
Figure 55. Configuration Summary of the Final Application Profile

This concludes the end-state configuration example of the Tenant tab. New applications could be added as new
application profiles to the same tenant, or a new tenant could be created if more separation is required.

ACI Fabric Tab - Fabric Policies


This section reviews basic ACI constructs in the Fabric tab relative to their usage in this validated design. The
Fabric tab is where the fabric infrastructure and ports are configured and tied to the policy model. Domains are
created in the Fabric tab and act as the glue between the Fabric and the Tenant space, since EPGs are associated
to one or more domains.
There are three areas under the Fabric tab: Inventory, Fabric Policies, and Access Policies. Inventory is where
fabric discovery is performed, and where individual leaf and spine nodes can be managed and monitored. Fabric
Policies are global settings, monitoring, and troubleshooting policies inside the fabric. Access Policies dictate
the behavior of leaf access ports, which face devices external to the fabric.
External routing, used for the external routed domain, is enabled in the Fabric Policies area, covered in
the next section.
Enabling External Routing in the Fabric
Before external routed networks can connect to the fabric, you must configure an MP-BGP policy that sets an
autonomous system (AS) number for the fabric and designates at least one spine as a Border Gateway Protocol (BGP)
route reflector.
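This MP-BGP policy corresponds to a small object tree in the APIC management information model, which can be sketched as APIC-style JSON. The AS number and spine node IDs below are assumptions for illustration.

```python
import json

def bgp_rr_policy(asn, spine_node_ids):
    """Build the fabric BGP policy: AS number plus spine route reflectors.

    In the ACI object model this is the default bgpInstPol under
    uni/fabric, with a bgpAsP child (the AS number) and a bgpRRP child
    holding one bgpRRNodePEp per route-reflector spine.
    """
    return {
        "bgpInstPol": {
            "attributes": {"dn": "uni/fabric/bgpInstP-default", "name": "default"},
            "children": [
                {"bgpAsP": {"attributes": {"asn": str(asn)}}},
                {"bgpRRP": {"attributes": {}, "children": [
                    {"bgpRRNodePEp": {"attributes": {"id": str(n)}}}
                    for n in spine_node_ids
                ]}},
            ],
        }
    }

# AS 65001 and spine node IDs 201/202 are placeholders.
print(json.dumps(bgp_rr_policy(65001, [201, 202]), indent=2))
```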
First, a route reflector policy is created to designate the spine nodes as BGP route reflectors (Figure 56).


Figure 56. Creating a Route Reflector Policy

Next, a pod policy group is configured, referencing the route reflector default policy configured in Figure 56 to set
routing protocol policies for the pod (ACI fabric). This is shown in Figure 57.
Figure 57. Configuration of a Pod Policy Group

Then, the policy group must be applied to the pod (Figure 58).
Figure 58. Applying the Policy Group to the Pod


Enabling external routing on the fabric is complete. For more detailed information on external bridged and external
routed networks, refer to the Connecting Application Centric Infrastructure (ACI) to Outside Layer 2 and 3 Networks
white paper.

ACI Fabric Tab - Access Policies Object Review


Access Policies dictate the behavior of leaf access ports, which face devices external to the fabric. Most of the
configuration in this document is performed in the Access Policies area. Objects configured here dictate options
like link speeds, link-level protocols, where domains are attached to the fabric, and vPC domains.
Most objects in the Access Policies area are set up once on initial fabric installation and configuration, and rarely
need to be modified. First, basic objects will be reviewed, followed by screenshots of an end-state policy
configuration of the Fabric tab.
There are several types of Access Policies utilized in this document. Definitions of important objects are laid out
for each type of Access Policy. Some Access Policies are outside the scope of this paper.
Physical and External Domains

Domains (review)

Domains define how endpoints are connected to the fabric. There are four types of domains: VMware
Virtual Machine Manager (VMM), physical, external Layer 2, and external Layer 3.

Domains are generally configured in the Fabric tab and are associated to EPGs in the Tenant tab, acting
as the glue between the Fabric and Tenant space. In essence, a domain specifies how devices connect
to the fabric.
Pools

VLAN and VXLAN Pools

Every domain is associated to a VLAN pool. The VLAN pool must include any VLANs used by servers in
the domain. However, if using a VMM domain, the pool can be a range of any unused VLANs, which the APIC
assigns to the port groups it pushes to the distributed switch on the vCenter server. As of this writing,
VLANs cannot overlap on a single leaf switch.

For example, the domain used by the bare-metal database server in the previous section must be tied to
a VLAN pool that includes VLAN 99, which was used by the database server.
Global Policies

Attachable Access Entity Profiles (AEPs)

AEPs are what tie domains to ports in the fabric. Most likely, not all of your domains will exist on every
single port in the fabric. An AEP has one or more domains associated to it. An AEP is in turn tied to an
interface policy group, covered later. In essence, an AEP is where domains connect to the fabric.

A single AEP should group domains that require similar treatment on fabric interfaces. The sample
configuration will use two AEPs: one for VMware servers, and one for external devices.
Interface Policies

Policies: Create policies for link behavior like speed, duplex, Link Aggregation Control Protocol (LACP),
Link Layer Discovery Protocol (LLDP), and Cisco Discovery Protocol.

Policy Groups: Group various interface policies (listed in the previous Policies bullet) together, associated
to an AEP.


Profiles: Profiles select specific interfaces and tie an interface policy group to the selected interface(s) to
dictate port behavior.

Switch Policies

Use these to create switch profiles, set up vPC pairs, and tie interface profiles to specific switches to
dictate port behavior on specific nodes.

Policies: Create policies for switches like Multiple Spanning Tree (MST) region mappings and vPC domain
peer switches.

Policy Groups: Group various switch policies (listed in the previous Policies bullet) together.

Profiles: A best practice is to create a profile for each individual leaf switch, and a profile for each vPC pair
of leaves. Interface profiles are associated to switch profiles to dictate on which leaves the configured port
behavior should be applied.
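The pool-to-domain-to-AEP chain reviewed above can be sketched as the kind of JSON the APIC exports. VLAN 99 matches the bare-metal database example earlier; the pool, domain, and AEP names below are placeholders.

```python
import json

def access_policy_chain(pool, vlan, domain, aep):
    """Sketch the pool -> physical domain -> AEP object chain as
    APIC-style JSON: a static VLAN pool (fvnsVlanInstP) with one
    encap block, a physical domain (physDomP) referencing the pool,
    and an AEP (infraAttEntityP) referencing the domain."""
    return [
        {"fvnsVlanInstP": {
            "attributes": {"name": pool, "allocMode": "static"},
            "children": [{"fvnsEncapBlk": {"attributes": {
                "from": f"vlan-{vlan}", "to": f"vlan-{vlan}"}}}],
        }},
        {"physDomP": {
            "attributes": {"name": domain},
            "children": [{"infraRsVlanNs": {"attributes": {
                "tDn": f"uni/infra/vlanns-[{pool}]-static"}}}],
        }},
        {"infraAttEntityP": {
            "attributes": {"name": aep},
            "children": [{"infraRsDomP": {"attributes": {
                "tDn": f"uni/phys-{domain}"}}}],
        }},
    ]

# VLAN 99 is the database server VLAN from this design; the names are assumptions.
for obj in access_policy_chain("DB_Pool", 99, "DB_PhysDom", "External_AEP"):
    print(json.dumps(obj, indent=2))
```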

Fabric Tab End-State Configuration Snapshot


This section will highlight the policy created under the Fabric tab to enable the SharePoint application on the ACI
fabric. For step-by-step configuration, refer to the APIC configuration guides.
The application policy has already been configured under the Tenant tab. The Fabric tab must now be configured
to specify where the devices are connected and how the ports should behave for the different types of servers and
domains connected.
These objects can all be created and associated using a wizard, or individually under each folder. Again, these
objects are generally created once when the fabric is initially installed, and rarely need to be modified.
Switch Profile Creation
First, as a best practice, switch profiles should be created for each leaf switch, and for each pair of leaf switches
that will be part of the same vPC domain. Before creating the switch profiles, create a default switch policy group to
select default Spanning Tree and monitoring policy behavior (Figure 59).
Figure 59. Creating a Default Switch Policy Group


Next, create switch profiles for each leaf switch, and for each pair of leaf switches that will be put into a vPC
domain. Sample configurations of switch profiles are shown for Leaf 101 (Figure 60) and for the Leaf 101 and Leaf
102 pair (Figure 61).
Figure 60. Switch Profile Configuration for Leaf 101

Figure 61. Switch Profile Configuration for the Leaf 101 and Leaf 102 Pair

Later, interface profiles will be added to the switch profiles to dictate the behavior of ports and presence of VLANs.
This will be shown in the last step.
To place a pair of leaf switches into a vPC domain, add them under the switch policies folder (Figure 62).


Figure 62. How to Add Leaf Switches into a vPC Domain

Note: The same vPC rules apply as for vPC on other Cisco Nexus platforms. A vPC domain can only contain two
switches, and a single switch can only be a member of one vPC domain at a time.
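The vPC pairing itself maps to an explicit protection group in the APIC object model, sketched below as APIC-style JSON. The group name and vPC domain ID are assumptions; the leaf node IDs 101 and 102 come from this design, and the two-member signature reflects the two-switch rule noted above.

```python
import json

def vpc_pair(name, domain_id, leaf_a, leaf_b):
    """Sketch the explicit vPC protection group pairing two leaves:
    a fabricExplicitGEp (carrying the vPC domain ID) under
    fabricProtPol, containing one fabricNodePEp per member switch."""
    return {
        "fabricExplicitGEp": {
            "attributes": {"dn": f"uni/fabric/protpol/expgep-{name}",
                           "name": name, "id": str(domain_id)},
            "children": [
                {"fabricNodePEp": {"attributes": {"id": str(leaf_a)}}},
                {"fabricNodePEp": {"attributes": {"id": str(leaf_b)}}},
            ],
        }
    }

print(json.dumps(vpc_pair("Leaf101-102_vPC", 10, 101, 102), indent=2))
```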
Domain and VLAN Pool Creation and Association
Next, VLAN pools are created for each of the three domains used in the configuration: one pool for VMware
server integration, one for the external routed domain, and one for the external physical database servers
(Figure 63).
Figure 63. Creating VLAN Pools for Domains

Next, the VLAN pools must be associated to their respective domains (Figures 64 and 65). The domains have
already been created in this tab, with the exception of the VMware VMM domain, which is always created under
the VM Networking tab. The AEP association will be highlighted in the next section.


Figure 64. Associating VLAN Pools to Domains

Figure 65. Associating VLAN Pools to Domains

AEP to Domain Association


Next, domains must be associated to AEPs. One AEP is used for VMware servers, and another AEP is used for
external devices (external routed users and bare-metal database servers). The screenshots in Figures 66 and 67
illustrate how to do this.


Figure 66. Associating Domains to AEPs

Figure 67. Associating Domains to AEPs

Interface Policies
Several interface policies have been created to accommodate the different types of servers connected to the fabric.
Servers have different connectivity requirements, for example, 1 GE versus 10 GE, individual link versus port
channel versus vPC, CDP/LLDP/LACP on or off, and so on. These policies must be configured and tied to the ports
to which the servers are connected.
First, reusable interface policies are created for various link-level behaviors and protocols. As a best practice,
create on and off policies for each of the protocols so they may be reused across different ports. All policies have
been expanded in the following screenshot (Figure 68); the link-level policies are highlighted. Both 1 GE
and 10 GE policies have been created to accommodate the different server NIC speeds connected to the leaf
switches.


Figure 68. Expanded Policies with Link-Level Policies Highlighted

Next, the interface policies can be placed into policy groups, which bundle policies for devices with similar fabric
connectivity requirements. Each interface policy group also ties to an AEP. Policy groups should be created for
each AEP.
In the sample configuration, Server 1 is running VMware ESXi and is part of the VMM domain, which is tied to the
VMware AEP. Server 1 has a single connection to Leaf 101 at 10 GE. The following screenshot (Figure 69) shows
the 10 GE and other link-level policies that will be used for Server 1. Later, this policy group will be tied to an
interface using an interface profile.
Figure 69. Policies to Be Used for Server 1


This policy group could be used across any VMware 10 GE-attached servers, but in the sample topology, this
policy group is only used by Server 1. Server 2 has a 1-GE NIC and will require a different policy group.
Next, an interface profile binds the policy group to an interface, or collection of interfaces. Notice this interface is
generic (for example, 1/1), and does not refer to a specific leaf node. The final step will tie the interface profile to a
switch (leaf) node. The following screenshot (Figure 70) shows the interface profile for Server 1.
Figure 70. Interface Profile for Server 1

As mentioned earlier, the final step is tying the interface profile to an actual leaf node where the device is
connected. For example, Server 3 is connected to Leaf 101 on port 1/3. The following screenshot (Figure 71)
depicts the association.
Figure 71. Connecting Server 3 to Leaf 101

Now, every time a new server or device is attached to a leaf, all that needs to be done is to add a new interface
profile to the appropriate leaf switch.
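The interface profile pattern described above can be sketched as APIC-style JSON. The profile and policy group names below are placeholders; the single-port selector mirrors the one-server-per-port examples in this design.

```python
import json

def interface_profile(name, port, policy_group):
    """Sketch an interface profile (infraAccPortP): a port selector
    (infraHPortS) covering a single port on slot 1, tied to an access
    port policy group via infraRsAccBaseGrp."""
    return {
        "infraAccPortP": {
            "attributes": {"name": name},
            "children": [{"infraHPortS": {
                "attributes": {"name": f"{name}_sel", "type": "range"},
                "children": [
                    {"infraPortBlk": {"attributes": {
                        "name": "blk1", "fromCard": "1", "toCard": "1",
                        "fromPort": str(port), "toPort": str(port)}}},
                    {"infraRsAccBaseGrp": {"attributes": {
                        "tDn": f"uni/infra/funcprof/accportgrp-{policy_group}"}}},
                ],
            }}],
        }
    }

# Port 1/1 toward Server 1; the profile and policy-group names are assumptions.
print(json.dumps(interface_profile("Server1_IntProf", 1, "VMware_10GE_PolGrp"), indent=2))
```

Associating this profile with a switch profile (as in Figure 71) is what finally binds the port behavior to a specific leaf.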


Sample Configuration Output


All configurations can be exported and imported to and from the APIC. The data are formatted as either XML or
JSON. This gives you the power to export a configuration from a fabric, optionally make changes, and import the
entire configuration - including the Tenant tab application profiles, EPGs, contracts, and the fabric configuration - to
a fabric without clicking through the GUI.
Many configuration samples are already available today on GitHub. GitHub is a website for sharing code between
people and organizations. GitHub makes change control easy, and allows people to contribute code and
documents publicly.
Cisco maintains an ACI repository on GitHub, where people can work on, download, and collaborate on projects. Any
user can fork a repository, which creates a new, unique project from an existing GitHub project. Users can also
contribute to the Cisco ACI repository.
The entire configuration featured in this design guide can be found on GitHub, downloaded, and imported into an
existing ACI fabric. Access the configuration here.
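As a sketch of how such an export might be retrieved programmatically, the snippet below builds the standard APIC login body and a tenant export URL using the REST API's query parameters. The APIC hostname and tenant name are assumptions for illustration.

```python
import json

APIC = "https://apic.example.com"  # hypothetical APIC address

def login_body(user, pwd):
    """Body for POST {APIC}/api/aaaLogin.json, the standard APIC login call."""
    return {"aaaUser": {"attributes": {"name": user, "pwd": pwd}}}

def export_url(tenant):
    """URL that returns a tenant's full subtree (application profiles,
    EPGs, contracts) as JSON suitable for re-import."""
    return (f"{APIC}/api/mo/uni/tn-{tenant}.json"
            "?rsp-subtree=full&rsp-prop-include=config-only")

# An HTTP client (e.g., requests) would POST the login body, keep the
# session cookie, then GET the export URL.
print(json.dumps(login_body("admin", "password"), indent=2))
print(export_url("SharePoint_Tenant"))
```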

Conclusion
Cisco Application Centric Infrastructure (ACI) can be deployed easily by businesses small and large to rapidly roll out
new applications and bring the language of the business to the network.

Printed in USA

C07-733638-00

01/15