Virtualization in
Data Center
Unified Fabric with Nexus
Maciej Bocian
mbocian@cisco.com
Architecture Sales Manager
Data Center and Virtualization, Central Europe
CCIE#7785
Traditional Data Center Network Topology
[Diagram: three-tier topology with an L3 core, L2/L3 aggregation and L2 access; Modules 1 and 2 each carry VLANs A-E]
Hierarchical Design
Triangle and Square Topologies
Multiple Access Models: Modular, Blade Switches and ToR
Multiple Oversubscription Targets (Per Application Characteristics)
2,000-10,000 servers
10,000 to 50,000 ports
New Data Center Architecture
Topology Layers:
Core Layer: Support high density L3 10GE aggregation
Aggregation Layer: Support high density L2/L3 10GE aggregation
Access Layer: Support EoR/MoR, ToR, & Blade for 1GE, 10GE, DCE & FCoE attached servers
Topology Service:
Services through service switches attached at L2/L3 boundary
Topology Flexibility:
Pod-wide VLANs, Aggregation-wide VLANs or DC-wide VLANs
Trade-off between flexibility and fault-domain size
[Diagram: pod-wide, aggregation-wide and DC-wide VLAN scopes at the L2/L3 boundary, alongside a dual SAN fabric (Fabric A / Fabric B)]
Physical facilities,
cabling and
standards
Data Center Strategy in Action
Physical Facilities:
DC: 4-6 zones per DC; 6-15 MW per DC
Zone: 60,000-80,000 sq ft per zone; 1-3 MW per zone; 200-400 racks/cabinets per zone
Pod: cooling and power provisioned per pod (per pair of rack rows)
Servers: 8-48 servers per rack/cabinet; 1-1.5 kW per cabinet; 2-11 interfaces per server
Network: 2,500-30,000 servers per DC; 4,000-120,000 ports per DC
Storage: it all depends on server types and the network access-layer model
[Diagram: zone floor plan with alternating COLD AISLE / HOT AISLE rows of pods]
Reference Physical Topology
Network Equipment and Zones
[Diagram: DC floor plan showing server, network and storage racks arranged in pods and zones (Modules 1-N), with alternating cold and hot aisles]
Pod Concept
Network Zones and Pods
Pod/Module Sizing:
Typically mapped to the access topology
Size determined by distance and density:
Cabling distance from server racks to network racks: 100m copper, 200-500m fiber
Cable density: number of servers multiplied by I/Os per server (worked example below)
Racks
Server: 6-30 Servers per rack
Network (based on access model)
Storage: special cabinets
DC Sizing
DC: a group of zones (or clusters, or areas)
Zone: Typically mapped to aggregation pair
Not all use hot-cold aisle design
Predetermined cable/power/cooling capacity
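A quick worked sizing check (the counts below are illustrative assumptions, not figures from this deck): a pod of 12 server racks with 24 servers per rack and 4 I/Os per server implies 12 x 24 x 4 = 1,152 server-facing cable runs to the pod's network racks, which is the kind of number that decides whether those racks fit within the 100m copper radius.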
[Diagram: a DC floor divided into pods with alternating cold and hot aisles]
Network Equipment Distribution
End of Row and Middle of Row
[Diagram: End of Row and Middle of Row layouts; rows of server racks cabled through patch panels and cross-connects to redundant network access points (A-B, C-D); copper within the row, fiber toward aggregation]
End of Row
Traditionally used
Copper from servers to access switches
Poses challenges in highly dense server farms:
Distance from the farthest rack to the access point
Row length may not lend itself well to switch port density
Middle of Row
Use is starting to increase given EoR challenges
Copper from servers to access switches
Addresses aggregation requirements for ToR access environments
Fiber may be used to aggregate ToR
Common Characteristics
Typically used for modular access
Cabling is done at DC build-out
Model evolving from EoR to MoR:
Lower cabling distances (lower cost)
Allows denser access (better flexibility)
6-12 multi-RU servers per rack
4-6 kW per server rack; 10-20 kW per network rack
Subnets and VLANs: one or many per switch; subnets tend to be medium and large (/24, /23)
Network Equipment Distribution
Top of Rack
ToR
Used in conjunction with dense access racks (1U servers)
Typically one access switch per rack; some customers are considering two plus clustering
Use of either side of the rack is gaining traction
Cabling:
Within rack: copper from server to access switch
Outside rack (uplink):
Copper (GE): needs a MoR model for fiber aggregation
Fiber (GE or 10GE): more flexible, but also requires an aggregation model (MoR)
Subnets and VLANs: one or many subnets per access switch; subnets tend to be small (/24, /25, /26)
[Diagram: ToR access with per-rack switches uplinked through patch panels and cross-connects to redundant network aggregation points (A-B, C-D)]
Network Equipment Distribution
Blade Chassis
Switch to Switch
Potentially higher oversubscription
Scales well for blade server racks (~3 blade chassis per rack)
Most current uplinks are copper, but newer switches offer fiber
Migration from GE to 10GE uplinks is taking place
Pass-through
Scales well for pass-through blade racks
Copper from servers to access switches
ToR
Have not seen it used in conjunction with blade switches
May be a viable option in pass-through environments if the access port count is right
Efficient when used with Blade Virtual Switch environments
[Diagram: racks of blade chassis, with integrated switches (sw1/sw2) or pass-through modules, cabled via patch panels and cross-connects to network aggregation points A-D; a ToR option is shown for pass-through racks]
10 Gigabit Ethernet Server Connectivity
10G options:

Transceiver (media)                Cable                     Distance          Power (each side)         Latency (link)            Standard
SFP+ CU* (copper)                  Twinax                    <10m              ~0.1W                     ~0.1 µs                   SFF-8431**
SFP+ USR (MMF, ultra short reach)  MM OM2 / MM OM3           10m / 100m        ~1W                       --                        none
SFP+ SR (MMF, short reach)         MM OM2 / MM OM3           82m / 300m        ~1W                       --                        IEEE 802.3ae
X2 CX4 (copper)                    Twinax                    15m               4W                        ~0.1 µs                   IEEE 802.3ak
10GBASE-T, RJ45 (copper)           Cat6 / Cat6a/7 / Cat6a/7  55m / 100m / 30m  ~6W*** / ~6W*** / ~4W***  2.5 µs / 2.5 µs / 1.5 µs  IEEE 802.3an

* Terminated cable   ** Draft 3.0, not final   *** As of 2008; expected to decrease over time (~50% power savings with EEE)
Twinax suits in-rack runs; fiber and 10GBASE-T reach across racks.

Ethernet media timeline: 10Mb (mid-1980s): UTP Cat 3; 100Mb (mid-1990s): UTP Cat 5; 1Gb (early 2000s): UTP Cat 5, MMF/SMF; 10Gb (late 2000s): UTP Cat 6a, MMF/SMF, TwinAx/CX4.
Cost Effective 10GE Connectivity
Today:
SFP+ USR (ultra short reach):
100m on OM3 fiber, 30m on OM2 fiber
Supported on all Cisco Catalyst and Nexus switches
Low cost: $995 NTE
SFP+ Direct Attach:
1, 3, 5 and 7m Twinax
~0.1W power
Supported across all Nexus switches
Low cost: $150-$260
Nexus 7000
Nexus 7010 10-Slot Chassis
First chassis in the Nexus 7000 product family
Optimized for data center environments
High density:
256 10G interfaces per system
384 1G interfaces per system
High performance:
64 non-blocking 10G ports
1.2Tbps system bandwidth at initial release
80Gbps per slot
60Mpps per slot
Future-proof:
Initial fabric provides up to 4.1Tbps
Product family scalable to 15+Tbps
40/100G and Unified Fabric ready
Dimensions: 21RU; 36.5in (92.7cm) high, 17.3in (43.9cm) wide, 33.1-38in (84-96.5cm) deep
Nexus 7010 Chassis
[Chassis views, front and rear, with callouts]
Optional front doors
System status LEDs
Integrated cable management with cover
Supervisor slots (5-6); payload slots (1-4, 7-10)
Air intake with optional filter; air exhaust at rear
Crossbar fabric modules
System fan trays and fabric fan trays
Power supplies
ID LEDs on all FRUs
Front-to-back airflow
Locking ejector levers
Common equipment removes from rear
21RU; two chassis per 7ft rack
Nexus 7018 18-Slot Chassis
Second chassis in the Nexus 7000 product family
Ultra-high density:
512 10G interfaces per system
768 1G interfaces per system
High performance:
128 non-blocking 10G ports
2.5Tbps system bandwidth at initial release
80Gbps per slot
60Mpps per slot
Future-proof:
Initial fabric provides up to 7.8Tbps
Chassis scalable to 17.6Tbps
40/100G and Unified Fabric ready
Shared equipment:
Supervisors, I/O modules and power supplies common between chassis
Fabrics and fan trays are chassis-specific
Dimensions: 25RU; 43.5in (110.5cm) high, 17.3in (43.9cm) wide, 33.1-38in (84-96.5cm) deep
Nexus 7018 Chassis
[Chassis views, front and rear, with callouts]
Optional front doors
System status LEDs
Integrated cable management
Supervisor slots (9-10); payload slots (1-8, 11-18)
Power supply air intake
Crossbar fabric modules
Power supplies
System fan trays
ID LEDs on all FRUs
Side-to-side airflow
Locking ejector levers
Common equipment removes from rear
25RU
Nexus 7000 Supervisor Engine
Supervisor Engine
Dual-core 1.66GHz Intel Xeon processor with 4GB DRAM
Connectivity Management Processor (CMP) for lights-out management
2MB NVRAM, 2GB internal bootdisk, 2 external compact flash slots
10/100/1000 management port with 802.1AE LinkSec
Console & auxiliary serial ports
USB ports for file transfer
Blue beacon LED for easy identification
[Faceplate callouts: beacon LED, console port, AUX port, management Ethernet, USB ports, CMP Ethernet, reset button, status LEDs, compact flash slots and cover]
Connectivity Management Processor (CMP)
Standalone, always-on microprocessor on the supervisor engine
Provides lights-out remote management and disaster recovery via its own 10/100/1000 interface
Removes the need for terminal servers
Monitor the supervisor and modules, access log files, power-cycle the supervisor, etc.
Runs a lightweight Linux kernel and network stack
Completely independent of DC-OS on the main CPU (a small CLI sketch follows)
[Diagram: the CMPs on each supervisor attach to a dedicated out-of-band management network, independent of the data network]
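A minimal sketch of reaching the CMP from the switch itself (hedged: the attach command below is standard on Nexus 7000 supervisors, and the CMP also presents its own login directly over its dedicated Ethernet port):

switch# attach cmp

This drops the session into the CMP's own CLI; exit returns to the NX-OS CLI on the main CPU.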
[Diagram: terminal servers providing out-of-band console connectivity via console cables, the model the CMP removes]
Nexus 7000 I/O Modules
32-Port 10GE I/O Module
32 10GE ports with SFP+ transceivers
80G full-duplex fabric connectivity
Integrated 60Mpps forwarding engine for fully distributed forwarding
4:1 oversubscription at the front panel (see the arithmetic note after this list)
Virtual output queueing (VOQ) ensures fair access to fabric bandwidth
802.1AE LinkSec on every port
Buffering:
Dedicated mode: 100MB ingress, 80MB egress
Shared mode: 1MB + 100MB ingress, 80MB egress
Queues: 8q2t ingress, 1p7q4t egress
Blue beacon LED for easy identification
SFP+ transceivers: SR at initial release (300m over MMF); LR post-release (10km over SMF)
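The 4:1 oversubscription figure follows directly from the numbers above: 32 ports x 10G = 320G of front-panel capacity against 80G of full-duplex fabric connectivity, and 320 / 80 = 4:1.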
32-Port 10GE I/O Module Architecture
[Block diagram: 32 front-panel ports, in odd/even pairs, feed MAC/PHY devices and "CTS and 4:1 mux" blocks into eight Port ASICs on a mezzanine card; replication engines (with MET tables) sit alongside; the forwarding-engine daughter card (Layer 2 Engine and Layer 3 Engine) connects the Port ASICs to the Fabric Interface and VOQ blocks, which attach through the onboard Fabric ASIC to the chassis fabrics; the LC CPU connects via EOBC and inband paths, and the VOQ logic talks to the central arbiter]
48-Port 1GE I/O Module
48 1GE 10/100/1000 RJ-45 ports
40G full-duplex fabric connectivity
Integrated 60Mpps forwarding engine for fully distributed forwarding
Virtual output queueing (VOQ) ensures fair access to fabric bandwidth
802.1AE LinkSec on every port
Buffer: 7.5MB ingress, 6.2MB egress
Queues: 2q4t ingress, 1p3q4t egress
Blue beacon LED for easy identification
48-Port 1GE I/O Module Architecture
[Block diagram: six octal PHYs (ports 1-8 through 41-48) feed six Port ASICs with CTS blocks; replication engines (with MET tables) and the forwarding-engine daughter card (Layer 2 Engine and Layer 3 Engine) connect the Port ASICs to the Fabric Interface and VOQ block and the onboard Fabric ASIC toward the chassis fabrics; the LC CPU attaches via EOBC and inband paths, and the VOQ logic talks to the central arbiter]
Nexus 7000 Forwarding Engine
Forwarding Engine Hardware
Advanced hardware forwarding engine integrated on every I/O module
60Mpps Layer 2 bridging with hardware MAC learning
60Mpps IPv4 and 30Mpps IPv6 unicast
IPv4 and IPv6 multicast (SM, SSM, bidir)
IPv4 and IPv6 security ACLs
Cisco TrustSec security group tag support
Unicast RPF check and IP source guard
QoS remarking and policing policies
Ingress and egress NetFlow (full and sampled)
GRE tunnels
Forwarding Engine Details
The forwarding engine chipset consists of two ASICs:
Layer 2 Engine
Performs ingress and egress SMAC/DMAC lookups
Hardware MAC learning
True IP-based Layer 2 multicast constraint
Performs lookups on the ingress I/O module, and on the egress I/O module for bridged packets
Layer 3 Engine
60Mpps IPv4 and 30Mpps IPv6 Layer 3/Layer 4 lookups
Performs all FIB, ACL, QoS and NetFlow processing
Linear, pipelined architecture: every packet is processed in the ingress and egress pipes
Performs lookups on the ingress I/O module, and on the egress I/O module for multicast replicated packets
Nexus 7000 Fabric and Bandwidth
Fabric Module
Provides 46Gbps per I/O module slot
Also provides 23G per supervisor slot
Up to 230Gbps per slot with 5 fabric modules
Initially shipping I/O modules do not leverage the full fabric bandwidth
Load-sharing across all fabric modules in the chassis
Multilevel redundancy with graceful performance degradation
Non-disruptive OIR
Blue beacon LED for easy identification
Fabric Capacity and Redundancy
Per-slot bandwidth capacity increases with each fabric module: 46Gbps, 92Gbps, 138Gbps, 184Gbps, 230Gbps (worked example below)
A 1G module (40G/slot) requires 2 fabrics for N+1 redundancy
A 10G module (80G/slot) requires 3 fabrics for N+1 redundancy
The 4th and 5th fabric modules provide an additional level of redundancy
Future modules will leverage the additional fabric bandwidth
A fabric failure results in a reduction of overall system bandwidth
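Worked from the figures above: each fabric module contributes 46Gbps per slot, so a 10G module needing 80G is covered by two fabrics (2 x 46 = 92Gbps) with a third for N+1; a 1G module needing 40G is covered by a single fabric (46Gbps) with a second for N+1.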
Access to Fabric Bandwidth
Supervisor engine controls access to fabric bandwidth
using central arbitration
Fabric bandwidth represented by Virtual Output Queues
(VOQs)
What Are VOQs?
Virtual Output Queues (VOQs) on ingress modules represent bandwidth capacity on egress modules
Guaranteed delivery to the egress module for arbitrated packets entering the fabric
If a VOQ is available on ingress, capacity exists on egress
A VOQ is NOT equivalent to ingress or egress port buffers or queues
It relates ONLY to the ASICs at ingress and egress to the fabric
A VOQ is "virtual" because it represents EGRESS capacity but resides on the INGRESS module
It is a PHYSICAL buffer where packets are stored
What Is VOQ?
[Diagram: an ingress module (Module 1) holds per-destination VOQ sets, four priority levels (0-3) each, for every destination port on egress Modules 2 (1G), 3 (10G) and 4 (10G); the VOQ buffers on ingress correspond to egress capacity, i.e. each module's ability to receive traffic from the fabric, and traffic is sent into the fabric based on destination]
Centralized Fabric Arbitration
Access to fabric bandwidth on the ingress module is controlled by the central arbiter on the supervisor
In other words, access to the VOQ for the destination across the fabric
Arbitration works on a credit request/grant basis:
Modules communicate egress fabric buffer availability to the central arbiter
Modules request credits from the supervisor to place packets in the VOQ for transmission to a destination over the fabric
The supervisor grants credits based on egress fabric buffer availability for that destination
The arbiter discriminates among four classes of service
Priority traffic takes precedence over best-effort traffic across the fabric
VOQ Operation
[Diagram, step 1: each egress module signals "capacity available!" for its destinations; the central arbiter on the supervisor tracks buffer credits against per-destination VOQs (for e1/1,3,5,7; e2/1,3,5,7; e3/1,3,5,7), with four priority levels (0-3) per destination]
VOQ Operation
[Diagram, step 2: VOQs on the ingress module correspond to capacity on the egress modules; e.g. the VOQs for e2/1 and e3/1 on Module 1 mirror the egress destination capacity on Modules 2 and 3]
VOQ Operation
[Diagram, step 3: a packet destined to e3/1 at priority level 1 arrives; the ingress module requests to transmit to e3/1 at priority 1; the central arbiter grants the request and deducts a credit from the priority-1 VOQ; when the egress buffer frees up, the module signals "buffer for VOQ priority 1 now available!" and the credit is returned]
Benefits of Central Arbitration and
VOQ
Ensures fair access to bandwidth for multiple ingress
ports transmitting to one egress port
Prevents congested egress ports from blocking ingress
traffic destined to other ports
Priority traffic takes precedence over best-effort traffic
across fabric
Engineered to support Unified I/O
Can provide no-drop service across fabric for future FCoE
interfaces
Nexus 7000 Packet Flow
[Diagram: ingress port e1/1 on Module 1 and egress port e2/7 on Module 2, each module with MAC/PHY, CTS and 4:1 mux, Port ASIC, replication engine, forwarding engine (Layer 2 and Layer 3 engines), Fabric Interface and VOQ, and Fabric ASIC; the supervisor engine hosts the central arbiter; Fabric Modules 1-3 interconnect the modules]
Ingress path: receive the packet from the wire; CTS LinkSec decryption and verification; ingress queueing and scheduling (in shared mode); submit the packet for lookup; Layer 2 and IGMP snooping lookups; Layer 3 and Layer 4 lookups; ingress multicast replication; queueing and VOQ arbitration request; credit grant for fabric access from the central arbiter; transmit to the fabric.
Egress path: receive from the fabric; return buffer credits; queue and schedule toward egress; submit the packet for lookup; Layer 2 and IGMP snooping lookups; Layer 3 and Layer 4 lookups; egress multicast replication; egress queueing and scheduling; CTS LinkSec encryption; transmit the packet on the wire.
Virtual Device Contexts
Virtual Device Contexts (VDCs)
VDC: Virtual Device Context
Flexible separation/distribution of software components
Flexible separation/distribution of hardware resources
Securely delineated administrative contexts
[Diagram: a single kernel and infrastructure layer hosting VDC A, VDC B, ... VDC n; each VDC runs its own protocol stack (IPv4/IPv6/L2) with per-VDC Layer 2 protocols (VLAN mgr, STP, UDLD, CDP, 802.1X, IGMP snooping, LACP, CTS), Layer 3 protocols (OSPF, BGP, EIGRP, GLBP, HSRP, VRRP, PIM, SNMP) and RIBs]
What VDCs are not: the ability to run different OS levels on the same box at the same time based on a hypervisor model; there is a single infrastructure layer that handles hardware programming.
Virtual Device Contexts
An Introduction to the VDC Architecture
Virtual Device Contexts provide virtualization at the device level, allowing multiple instances of the device to operate on the same physical switch at the same time.
[Diagram: a Linux 2.6 kernel and shared infrastructure layer on the physical switch, with VDC 1 through VDC n each running its own protocol stack (IPv4/IPv6/L2), L2 protocols, L3 protocols and RIBs]
Virtual Device Contexts
The Default VDC
When the system is activated for the first time, it has a default VDC enabled; this VDC is present at all times during the operation of the switch.
[Diagram: the default VDC (VDC 1) running the full protocol stack on the Linux 2.6 kernel and shared infrastructure]
VDC #1 is the default VDC; by default, all ports in the physical chassis are assigned to the default VDC until assigned to another VDC
Users cannot create or delete the default VDC (a CLI sketch of creating a non-default VDC follows)
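A minimal CLI sketch of carving a new VDC out of the default VDC and moving ports into it (the VDC name and interface range are illustrative; note that on some modules, such as the 32-port 10G card, interfaces must be allocated in their port groups):

switch# configure terminal
switch(config)# vdc Agg1
switch(config-vdc)# allocate interface ethernet 2/1-8
switch(config-vdc)# exit
switch# switchto vdc Agg1

The switchback command returns the session to the default VDC.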
Virtual Device Contexts
VDC Fault Domain
A VDC builds a fault domain around all running processes within that VDC; should a fault occur in a running process, it is truly isolated from the other running processes and they will not be impacted.
[Diagram: VDC A and VDC B on the same physical switch, each with its own protocol stack and its own set of processes (ABC, DEF, XYZ, ...)]
Process DEF in VDC B crashes
Processes in VDC A are not affected and continue to run unimpeded
This is a function of the process modularity of the OS and a VDC-specific IPC context
Nexus 7000 Roadmap
Lowering Power and Cost While Increasing Scale (2008-2011)
15+ terabit infrastructure
I/O modules along the roadmap:
M1 Series: 32-port 10G (80G/slot); 48-port 1G (46G/slot)
M1-XL Series: 8-port 10G-XL (80G/slot); 48-port 1G-XL (46G/slot)
M1 Series: 16-port 10G (160G/slot)
D1 Series: 32-port 10G DCB SFP+ (230G/slot); 32-port 10G DCB 10GBASE-T (230G/slot)
D2 Series: 48-port 10G DCB SFP+ w/L3 (480G/slot); 48-port 10G DCB 10GBASE-T w/L3 (480G/slot)
40G/100G modules
Features along the roadmap: VDC, CTS, ISSU, vPC, 1GbE, 18-slot chassis, OTV, MPLS, N2K support, DCB, L2MP, FCoE, LISP, CCN, service modules
Nexus 5000
Nexus 2000
Cisco Nexus 5000 Series
Nexus 5020: 56-port L2 switch; 40 ports 10GE/FCoE/DCE fixed; 2 expansion module slots
Nexus 5010: 28-port L2 switch; 20 ports 10GE/FCoE/DCE fixed; 1 expansion module slot
Expansion modules:
Ethernet: 6 ports 10GE/FCoE/DCE
FC + Ethernet: 4 ports 10GE/FCoE/DCE + 4 ports 1/2/4G FC
Fibre Channel: 8 ports 1/2/4G FC
OS: Cisco NX-OS (DC-OS); management: DC-NM and Cisco Fabric Manager
All 10GE switch/module ports are FCoE/Data Center Ethernet capable
Nexus 2000 Fabric Extender
1GE Connectivity
48 x 1 GE interfaces
4 x 10 GE interfaces
Beacon and status LEDs
Redundant, hot-swappable
power supplies
Hot-swappable fan tray
Nexus 2000 Fabric Extender
Virtual Chassis
The Nexus 2000 Fabric Extender (FEX) acts as a remote linecard for the Nexus 5000, retaining all centralized management and configuration on the Nexus 5000 and transforming it into a virtualized chassis.
[Diagram: Nexus 5000 + Nexus 2000 Fabric Extenders = one virtualized chassis]
Data Center Access Architecture
Virtualized Access Switch
Nexus 5010/5020
The Nexus 5000/2148T virtualized access switch provides a number of design options to address evolving data center requirements
The Fabric Extender provides flexibility in the design of the physical topologies
Aids in safely building larger Layer 2 designs
Support for the latest spanning-tree enhancements
Single virtual access switch (simplifies the Layer 2 design)
Support for 16-way 10GE EtherChannel combined with vPC provides increased network capacity
[Diagram: Nexus 2148T Fabric Extenders (48 GE ports each) uplinked with 4 x 10GE fabric links per Fabric Extender (CX-1 copper) to the Nexus 5010/5020]
Data Center Architecture
N5K/N2K - Logical Topology
[Diagram: Nexus 7000 pair at the distribution layer with Nexus 5000/2000 virtualized access switch pods below]
A Cisco Nexus 2148T Fabric Extender (N2K) plus Nexus 5000 (N5K) pod represents the networking access layer
Nexus 7000 at the distribution layer
Each virtualized access switch pod can be configured to support up to 576 1GE server ports at FCS (see the note below)
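The 576-port figure is consistent with FEX scaling at FCS: assuming a limit of 12 fabric extenders per Nexus 5000 (an assumption drawn from Nexus 5000 documentation of the period, not stated on this slide), 12 x 48 1GE ports = 576 server ports per pod.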
Data Center Access Architecture
Optimizing Layer 1 and Layer 2 Designs
1GE-attached servers: maintain the existing Cat5e server wiring infrastructure with an EoR topology
[Diagram: Nexus 5000/2000 EoR layout]
The Cisco Nexus 2148T Fabric Extender and Nexus 5000 provide a flexible access solution:
De-coupling of the Layer 1 and Layer 2 topologies
Optimization of both Layer 1 (cabling) and Layer 2 (spanning tree) designs
Simultaneous support of EoR, MoR and ToR
Data Center Access Architecture
N5K/N2K Advantages: Flexible Cabling
Combination of EoR and ToR cabling
[Diagram: Nexus 5000/2000 mixed ToR & EoR layout]
The Cisco Nexus Fabric Extender (FEX) and Nexus 5000 provide a flexible access solution:
Migration to ToR for 10GE servers, or selective 1GE server racks if required (mix of ToR and EoR)
Mixed cabling environment (optimized as required)
Flexible support for future requirements
Fabric Extender
Fabric Modes
The Fabric Extender associates ("pins") a server-side (1GE) port with an uplink (10GE) port
Server ports are either individually pinned to specific uplinks (static pinning) or all interfaces are pinned to a single logical port channel
Behavior on a FEX uplink failure depends on the configuration:
Static pinning: server ports pinned to the failed uplink are brought down with it (the server interface goes down)
Port channel: server traffic is shifted to the remaining uplinks based on the port-channel hash (the server interface stays active)
Nexus 2148 Fabric Extender
Configuring the Fabric Extender
Two-step process:
1. Define the Fabric Extender (100-199) and the number of fabric uplinks to be used by that FEX (valid range: 1-4):

switch# configure terminal
switch(config)# fex 100
switch(config-fex)# pinning max-links 4

2. Configure the Nexus 5000 ports as fabric ports and associate the desired FEX:

switch# configure terminal
switch(config)# interface ethernet 1/1
switch(config-if)# switchport mode fex-fabric
switch(config-if)# fex associate 100
...
<repeat for all 4 interfaces used by this FEX>
Nexus 2148 Fabric Extender
Fabric Extender ports are Nexus 5000 ports:

Nexus5000# show run interface 1/3
interface Ethernet1/3
  switchport mode fabric
  remote-chassis 100

Nexus5000# show interface brief
Interface        Status      IP Address  Speed  MTU   Port Channel
------------------------------------------------------------------
Ethernet100/1/1  up          --          --     1500  --
Ethernet100/1/2  notConnect  --          --     1500  --
Ethernet100/1/3  notConnect  --          --     1500  --
Ethernet100/1/4  notConnect  --          --     1500  --
Ethernet100/1/5  notConnect  --          --     1500  --
Ethernet100/1/6  notConnect  --          --     1500  --
Ethernet100/1/7  notConnect  --          --     1500  --
Ethernet100/1/8  up          --          --     1500  --
Ethernet100/1/9  up          --          --     1500  --
Data Center Access Architecture
vPC Redundancy Models: Dual Chassis
vPC provides two redundancy designs for the virtualized access switch
Option 1: MCEC connectivity from the server to the access switch
Two virtualized access switches bundled into a vPC pair (a configuration sketch follows the list)
Full redundancy for supervisor, line card, cable or NIC failure
Logically a similar HA model to that currently provided by VSS
[Diagram: vPC peers; two virtualized access switches, each with a single supervisor; MCEC from the server to the access switch]
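A hedged sketch of bundling the two access switches into a vPC pair (domain number, keepalive addresses and port-channel numbers are illustrative; the same configuration is mirrored on the peer switch):

switch(config)# feature vpc
switch(config)# vpc domain 10
switch(config-vpc-domain)# peer-keepalive destination 10.0.0.2 source 10.0.0.1
switch(config)# interface port-channel 1
switch(config-if)# switchport mode trunk
switch(config-if)# vpc peer-link
switch(config)# interface port-channel 20
switch(config-if)# vpc 20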
Data Center Access Architecture
vPC Redundancy Models: Dual Supervisor
vPC Option 2: Fabric Extender connected to two Nexus 5000s, with active/standby NIC teaming on the server
From the server's perspective, a single access switch with each line card supported by redundant supervisors
Full redundancy: supervisor and fabric failures are covered via vPC; cable or NIC failures via active/standby NIC redundancy
Logically a similar HA model to that provided by a dual-supervisor modular switch
[Diagram: Fabric Extender dual-homed to redundant Nexus 5000s]
Nexus 5000 & 2000 Roadmap
N5K:
Nexus 5020: 56-port 2RU switch (Q2CY08)
Nexus 5010: 28-port 1RU switch (Q4CY08)
Next-generation Nexus 5000: 48-port & 96-port switches (Q1/Q2CY10)
N2K:
FEX-1GE: N2148T-1GE, 48x1GE downlinks + 4x10GE uplinks (Q1CY09)
FEX-100M/1GT: N2248T, 48-port 100M/1GT downlinks, 4x10GE uplinks (Q1/Q2CY10)
FEX-10GE: N2232, 32-port 10GE SFP+ downlinks, 8x10GE SFP+ uplinks (2HCY10)
Nexus 1000V
The Story Today: Networking with VI3.5
Separation of network and server provisioning and management systems:
Virtual Center manages and provisions the ESX hosts and vSwitches
The physical network is managed and provisioned separately
Network visibility ends at the physical switch port
Different interfaces and tools:
IOS or IOS-like CLI for the physical network
VC GUI and esxcfg CLI for the vSwitches
[Diagram: network management reaching only the physical switches; Virtual Center managing the vSwitches on each host]
vNetwork Distributed Switch & Cisco Nexus 1000V
[Diagram: current per-host vSwitches evolving into a vNetwork Distributed Switch (VDS) spanning hosts, e.g. the Cisco Nexus 1000V]
Enterprise networking vendors can provide proprietary networking interfaces to monitor, control and manage virtual networks
First offering: Cisco Nexus 1000V
Virtual machines retain their policies and QoS as they move around the datacenter
Cisco Nexus 1000V Components
[Diagram: three hosts, each running a Cisco VEM switching its VMs, managed together with vCenter Server by redundant Cisco VSMs]
Virtual Ethernet Module (VEM):
Replaces VMware's virtual switch
Enables advanced switching capability on the hypervisor
Provides each VM with dedicated switch ports
Virtual Supervisor Module (VSM):
CLI interface into the Nexus 1000V
Leverages NX-OS 4.0(4a)
Controls multiple VEMs as a single network device
Cisco Nexus 1000V Virtual Chassis
[Diagram: two VEM hosts and redundant Cisco VSMs forming one virtual chassis]

pod5-vsm# show module
Mod  Ports  Module-Type                Model       Status
---  -----  -------------------------  ----------  ----------
1    0      Virtual Supervisor Module  Nexus1000V  active *
2    0      Virtual Supervisor Module  Nexus1000V  ha-standby
3    248    Virtual Ethernet Module    NA          ok
Nexus 1000V: Faster VM Deployment
[Diagram: Nexus 1000V VSM and vCenter pushing defined policies (WEB Apps, HR, DB, DMZ) to Nexus 1000V VEMs on vSphere hosts]
VM connection policy:
Defined in the network (as sketched below)
Applied in Virtual Center
Linked to the VM UUID
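A hedged sketch of such a policy as a Nexus 1000V port profile (the profile name and VLAN are illustrative; vmware port-group is what exposes the profile to Virtual Center as a port group, and on early software the type vethernet keyword may be implicit):

n1000v(config)# port-profile type vethernet WebServers
n1000v(config-port-prof)# switchport mode access
n1000v(config-port-prof)# switchport access vlan 10
n1000v(config-port-prof)# vmware port-group
n1000v(config-port-prof)# no shutdown
n1000v(config-port-prof)# state enabled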
Cisco VN-Link (Virtual Network Link): policy-based VM connectivity; mobility of network & security properties; non-disruptive operational model
Nexus 1000V: Richer Network Services
[Diagram: Nexus 1000V VSM and vCenter, with VEMs on two vSphere hosts; a VM's network and security properties follow it as it moves between hosts]
VN-Link property mobility:
VMotion for the network
Ensures VM security
Maintains connection state
VMs need to move for:
VMotion
DRS
Software upgrades/patches
Hardware failures
Cisco VN-Link (Virtual Network Link): policy-based VM connectivity; mobility of network & security properties; non-disruptive operational model
Nexus 1000V: Increased Operational Efficiency
[Diagram: Nexus 1000V VSM, vCenter and VEMs on vSphere hosts]
Network admin benefits:
Unifies network management and operations
Improves operational security
Enhances VM network features
Ensures policy persistence
Enables VM-level visibility
VI admin benefits:
Maintains existing VM management
Reduces deployment time
Improves scalability
Reduces operational workload
Enables VM-level visibility
Cisco VN-Link (Virtual Network Link): policy-based VM connectivity; mobility of network & security properties; non-disruptive operational model
Key Features of the Nexus 1000V
Switching: L2 switching, 802.1Q tagging, VLAN segmentation, rate limiting (TX), IGMP snooping, QoS marking (CoS & DSCP)
Security: policy mobility, private VLANs with local PVLAN enforcement, access control lists (L2-L4 with redirect), port security
Provisioning: automated vSwitch configuration, port profiles, Virtual Center integration, optimized NIC teaming with virtual port channel host mode
Visibility: VMotion tracking, ERSPAN, NetFlow v9 with NDE, CDP v2, VM-level interface statistics
Management: Virtual Center VM provisioning, Cisco network provisioning, CiscoWorks, Cisco CLI, RADIUS, TACACS+, syslog, SNMP (v1, v2, v3)
Cisco Nexus 1000V
Three new features that make a difference:
Private VLANs (PVLANs):
Great for mixed-use ESX clusters
Segment VMs without burning IP addresses
Supports isolated, community and promiscuous trunk ports
Follows your VM with VMotion or DRS
NetFlow v9 with Data Export:
View flow-based stats for individual VMs
Captures multi-tiered app traffic inside a single ESX host
Exports aggregate stats to a dedicated collector for a DC-wide VM view
Follows your VM with VMotion or DRS
Encapsulated Remote SPAN (ERSPAN), sketched below:
Mirror VM interface traffic to a remote sniffer
Identify the root cause of connectivity issues
No host-based sniffer virtual appliance to maintain
Follows your VM with VMotion or DRS
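A hedged ERSPAN source-session sketch on the Nexus 1000V (the session number, monitored vEthernet interface, destination IP and ERSPAN ID are all illustrative):

n1000v(config)# monitor session 1 type erspan-source
n1000v(config-erspan-src)# source interface vethernet 3 both
n1000v(config-erspan-src)# destination ip 10.1.1.100
n1000v(config-erspan-src)# erspan-id 100
n1000v(config-erspan-src)# no shut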
Different Aspects of Virtualization in the Data Center
Virtualization in the Data Center
Common benefits:
Increase resource usability: lower CAPEX
Loosely couple reusable functions: flexibility
Dynamic allocation of virtual instances: automation
Centralized policy management: lower TCO
Distributed capabilities: broad use of functions
Servers: the capability of decoupling CPU, memory and I/O functions from the physical devices that provide them, to increase their effective utilization, to enhance the flexibility in how they are utilized, and to allow the dynamic management of logical instances
Storage: the capability of abstracting the physical location of data storage by presenting logical storage to the user, thus achieving location independence
Network-based services: the capability of manipulating service instances independent of the service device, thus providing flexibility in their usage
Network infrastructure: the capability of partitioning groups of networked resources, providing logical isolation and common policy
Virtual Ethernet Switching: Improving Management and Pathing
Virtual switches: logical instances of physical switches
Many to one: grouping of multiple physical switches
Reduces management overhead (a single switch) and simplifies configuration (a single switch config)
One to many: partitioning of physical switches
Isolates control planes and control-plane protocols
Virtual PortChannels: EtherChannel across multiple chassis
Simplifies L2 pathing by supporting non-blocking, cross-chassis concurrent L2 paths
Lessens reliance on STP (loop-free L2 paths are not established by STP)
Virtual switching implementations:
Virtual Switching System (VSS): Catalyst 6500
Virtual Blade Switches (VBS): 10GE-based blade switches
Virtual Device Contexts (VDC): Nexus 7000
Virtual Port Channel (vPC): Nexus family
Virtual Switching: Many to One (VSS)
Many switches look like one:
Two switches are physical
One switch is virtual
Virtual switch:
i. All ports appear to be on the same physical switch
ii. Single point of management
iii. Single configuration
iv. Single IP/MAC
v. Single control-plane protocol instance
Benefits:
i. Simplified infrastructure management
ii. One switch to manage
[Diagram: two physical switches (A1, A2), each running its own STP/HSRP/OSPF/SNMP, merged into one virtual switch A with a single STP/HSRP/OSPF/IGMP instance]
Virtual Blade Switching: Many to One (VBS)
Many switches look like one:
Up to eight switches are physical
One switch is virtual
Virtual switch:
i. All ports appear to be on the same physical switch
ii. Single point of management
iii. Single configuration
iv. Single IP/MAC
Benefits:
i. Simplified infrastructure management
ii. One switch to manage
[Diagram: eight physical blade switches (A1-A8) forming one virtual switch A]
Virtual Switching: One to Many (VDC)
One switch looks like many:
One switch is physical
Many switches are logical
Virtual switch:
i. Switch ports only exist on a single logical instance
ii. Per-virtual-switch point of management
iii. Per-virtual-switch configuration
iv. Per-virtual-switch IP/MAC
v. Per-virtual-switch control-plane protocol instance
Benefits:
i. Control-plane isolation
ii. Control-protocol isolation
[Diagram: one physical switch A partitioned into virtual switches A1-A4, each with its own STP/HSRP/OSPF/IGMP instance]
Virtual Switching: One to Many (VDC Topology)
One switch looks like many:
Devices connect to a single virtual switch
Virtual switching topologies are isolated from one another
Virtual topology:
i. Distinct physical ports form virtual topologies
ii. Each topology has independent and isolated control protocols
Benefits:
i. Supports isolated but parallel topologies
ii. Supports smaller logical environments
[Diagram: aggregation switches AG1/AG2 partitioned into VDCs (AG11/AG12, AG21/AG22) serving separate access pairs (AC1/AC2, AC11/AC12, AC21/AC22), with VLANs X and Y in isolated topologies]
Virtual PortChannels: Topology (VSS and vPC, through MCEC)
Two to one:
Two physical switches act as a single logical one
Devices connect to a single logical switch
Connections are treated as a port channel
Virtual PortChannel:
i. Ports toward the virtual switch can form a cross-chassis port channel
ii. A virtual port channel behaves like a regular EtherChannel
Benefits:
i. Provides non-blocking L2 paths
ii. Lessens reliance on STP
[Diagram: hosts and access switches (AC1/AC2) dual-attached via cross-chassis port channels to an aggregation pair (AG1/AG2) acting as one logical switch]
New Topology Using Virtual Switching
[Diagram: physical view of core1/core2, agg1-agg4 and access switches versus the logical view, where each pair collapses into a virtual switch joined by virtual port channels]
Key characteristics:
Optimized L2 pathing through Virtual PortChannels:
i. Increases bandwidth usage on server-to-switch and switch-to-switch links
ii. Loop-free forwarding paths
Less reliance on STP:
i. Loop-free topology established through the virtual switch mechanism
ii. STP is strictly used as a fail-safe mechanism
Isolation of L2 domains:
i. Separate logical topologies
ii. Distinct STP topologies
New standards:
Fibre Channel over
Ethernet
What Is FCoE?
Fibre Channel over Ethernet
From a Fibre Channel standpoint, it's FC connectivity over a new type of cable called an Ethernet cloud
From an Ethernet standpoint, it's yet another ULP (Upper Layer Protocol) to be transported, but a challenging one!
And technically: FCoE is an extension of Fibre Channel onto a lossless Ethernet fabric
FC over Ethernet (FCoE)
Encapsulates Fibre Channel frames onto lossless Ethernet
FCoE frame format (bit 0 to bit 31, byte 0 to byte 2179):
Destination MAC Address
Source MAC Address
IEEE 802.1Q Tag
EtherType = FCoE; Ver; Reserved fields
SOF
Encapsulated FC frame (including FC-CRC), i.e. the full FC header and FC payload
EOF; Reserved
FCS
[Diagram: Ethernet header + FCoE header + FC header + FC payload + CRC + EOF + FCS]
Fibre Channel over Ethernet
A Brief Look at the Technology
A method for direct mapping of FC frames over Ethernet
Seamlessly connects to FC networks
Extends FC in the datacenter over Ethernet
FCoE appears as FC to the host and the SAN
Preserves the current FC infrastructure and management
The FC frame is unchanged
Can operate over standard switches (with jumbo frames)
Priority Flow Control guarantees no drops:
Mimics the FC buffer-credit system, avoids TCP
Does not require expensive offloads
[Diagram: FCoE carries Fibre Channel traffic alongside other Ethernet traffic on the same wire]
Unified I/O Use Case
Today: parallel LAN/SAN infrastructure
[Diagram: each server carries redundant FC HBAs for FC traffic and redundant NICs for Ethernet traffic]
Inefficient use of the network infrastructure:
5+ connections per server: higher adapter and cabling costs
Adds downstream port costs, cap-ex and op-ex
Each connection adds additional points of failure in the fabric
Power and cooling
Longer lead times for server provisioning
Multiple fault domains: complex diagnostics
Management complexity: firmware, driver patching, versioning
Unified I/O Use Case
Today:
[Diagram: separate management planes; a LAN (Ethernet aggregation/core switches) beside SAN A and SAN B (FC); access top-of-rack switches; servers with separate FC HBAs and NICs]
Unified I/O Use Case: Unified I/O Phase 1
[Diagram: FCoE switches at the top of rack carry LAN (Ethernet) and SAN A/B (FC) traffic from the servers over FCoE]
Unified I/O:
Reduction of server adapters
Fewer cables
Simplification of the access layer & cabling
Gateway-free implementation: fits into the installed base of existing LANs and SANs
L2 multipathing, access to distribution
Lower TCO
Investment protection (LANs and SANs)
Consistent operational model
One set of ToR switches (a configuration sketch follows)
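A minimal sketch of enabling FCoE on a Nexus 5000 ToR switch (hedged: the VLAN/VSAN numbers and the bound Ethernet interface are illustrative):

switch(config)# feature fcoe
switch(config)# vlan 100
switch(config-vlan)# fcoe vsan 100
switch(config)# interface vfc 1
switch(config-if)# bind interface ethernet 1/5
switch(config-if)# no shutdown
switch(config)# vsan database
switch(config-vsan-db)# vsan 100 interface vfc 1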
How the design
has changed
Discrete Network Fabrics
Typical Ethernet and Storage Topology
[Diagram: LAN core/aggregation/access (Ethernet, VLANs A-C) alongside a dual SAN fabric (Fabric A / Fabric B, VSANs 2 and 3); servers A-F attach to both]
Single Ethernet network fabric:
Typically 3 tiers
Access switches are dual-homed
Servers are single- or multi-homed
Dual storage fabrics:
Typically 2 tiers
Edge switches are dual-homed
Servers are dual-homed to different fabrics
Unified Fabric: DCE
FCoE Server Access
[Diagram: core/aggregation/access LAN (VLANs A-D) and SAN Fabric A/B converge at the access layer; servers attach with CNAs over DCE, carrying Ethernet and FCoE on the same links]
CNA = Converged Network Adapter
New Topology: Enhanced L2 Design
[Diagram: core1/core2, agg1/agg2 per module, access switches acc1-accN+1; VLANs A-E span Modules 1 and 2]
Enhanced L2 topology:
3-tier L2 topology
Nexus at the core and aggregation layers
6500 at the aggregation and services layers
Topology highlights:
DC-wide VLANs
Higher stability of the STP environment: new STP features
Lower oversubscription, if needed
Higher density 10GE at the core and aggregation layers
Enhanced L2 Topology: End-to-End Virtual Switching
[Diagram: the same 3-tier design with virtual switching end to end: core1, virtual aggregation pairs (agg1/aggx) and virtual access pairs (acc1/acc2 through accX/accX+1, accY, accN), with servers 1-N below; VLANs A-E span Modules 1 and 2]
Enhanced L2 topology:
3-tier L2 topology
Nexus at the core and aggregation layers
6500 at the aggregation and services layers
Topology highlights:
DC-wide VLANs
Higher stability of the STP environment: new STP features
Lower oversubscription, if needed
Higher density 10GE at the core and aggregation layers
New Topology: Isolating Collapsed L2 Domains
Virtual Device Contexts at the aggregation layer:
Pods are isolated at the aggregation layer
Each pod runs its own STP instance (instance per VDC)
Multiple pods can exist in a single VDC
VLANs are contained within the aggregation module, per VDC
[Diagram: agg1/agg2 carved into VDC1 and VDC2, each carrying its own VLAN C toward separate access pairs]
Pods are logically isolated into two topologies:
Each pod can belong to multiple VDCs
Each VDC topology requires dedicated ports
VLANs are contained within the aggregation module, per VDC
Higher 10GE port density allows multiple aggregation pairs to be collapsed:
A collapsed aggregation pair can still be L2-isolated (different STP instances)
VLAN IDs can be replicated in different VDCs on the shared infrastructure
Brief summary
Q&A
IaaS Cloud Services: Solution Architecture
[Diagram: end-to-end IaaS stack. Peering toward the Internet and partners over the IP-NGN backbone (7600); core/aggregation on Nexus 7000 with a Catalyst 6500 VSS as services chassis (ACE, FW); access on Nexus 5000 to rack servers and UCS; virtual access via Nexus 1000V vSwitches on VMware vSphere 4 / ESX 4; MDS SAN switches to consolidated storage arrays; Cisco and third-party applications on top. Link types called out: 10G Ethernet, 10G FCoE, 4G FC, 1G Ethernet, plus VM-to-vSwitch, vSwitch-to-hardware and app-to-hardware/VM boundaries. Subscribers A, B and C each run Applications 1 and 2 as App/OS virtual machines spread across the shared compute pool]
IaaS Service = Compute + Storage + Network

Tier    Workload  CPU  RAM  SAN
Bronze  Web1      0.5  4    500G
Bronze  DB1       0.5  8    500G
Bronze  App1      0.5  16   500G
Silver  Web2      1    4    Nx500G
Silver  DB2       1    8    Nx500G
Silver  App2      1    16   Nx500G
Gold    Web3      2    4    Nx500G
Gold    DB3       2    8    Nx500G
Gold    App3      2    16   Nx500G
Cisco Unified Data Center of 2010
[Diagram: core and aggregation on Nexus 7000 (vPC, L2MP, FCoE), with Catalyst 6500 service modules alongside; access designs on Nexus 5000 (10GE, FCoE, ToR) and N2K/N5K or N2K/N7K (1GE, end of row); virtual access with Nexus 1000V; MDS for storage; Unified Compute hosting dense VM deployments]
Core: L3 boundary to the DC network; functional point for route summarization, the injection of default routes, and termination of segmented virtual transport networks
Aggregation: typical L3/L2 boundary; DC aggregation point for uplinks, storage and DC services, offering key features: vPC, VDC, 10GE density, and the first point of migration to 40GE and 100GE
Access: 1Gb to 10Gb to Unified Fabric; price/performance; next-gen N5K (48-96 universal ports); FCoE on N7K; 100M/1Gb FEX; 10G FCoE + 10GBASE-T
Virtual access (Nexus 1000V, virtual adapter, expanded memory): VM visibility, VM security, optimized VMotion, fast VM provisioning, high-density VM deployment