Data Center Architecture Strategy Update
BRKDCT-2866
© 2008 Cisco Systems, Inc. All rights reserved. Cisco Public

Depth vs. Breadth

Quick level set: this session covers breadth; these sessions go deeper:

- DCT-2703 Implementing DC Services
- DCT-2840 DC L2 Interconnect
- DCT-2867 DC Facilities
- DCT-2868 DC Virtualization
- DCT-2825 Nexus 5000 Architecture
- RST-3470 Nexus 7000 Architecture
- RST-3471 Nexus Software Architecture
- SAN-2701 SAN Design
- ...many, many more

(Chart: depth vs. breadth over time; this session emphasizes breadth.)
The Data Center Dilemma

EFFICIENCY: Increased Utilization, Consolidation, 'Green'
AGILITY: Demand Capacity, Globalization, Availability

How do I align my Data Center strategy?
How can Cisco help me accomplish this?


Agenda

- Trends
- Architecture Strategy
- Architecture Evolution

Agenda

- Trends
  - Consolidation
  - Network Technology
  - Software Services
- Architecture Strategy
- Architecture Evolution

Trends: Consolidation

WHAT IT MEANS: CONSOLIDATION BY DEFAULT WILL BEGET ORGANIC IT'S FRUITION

"As IT consolidation solidifies into standard procedure for infrastructure management, we will see operational benefits and technical innovations arise that will deliver fundamentally better efficiency than can be achieved today. Forrester expects that by 2010, nearly all Intel/AMD-based servers will ship with a pre-installed hypervisor and that the default allocation of any service will be the partition. This will allow the use of new management and HA tools that act at the hypervisor layer, allowing true Organic IT: dynamic, policy-driven reallocation of running production workloads to drive greater power efficiency, accelerate business change, and drive down operational costs. Through these tools will come abstraction between the infrastructure, the application, and even the data center itself. Such a change will give IT professionals new degrees of freedom, allowing services to be deployed where, when, and however needed to best meet the businesses' objectives."

The IT Consolidation Imperative: Out Of Space, Out Of Power, Out Of Money (© 2007, Forrester Research, Inc.)

Data Center Consolidation
Reducing operational costs & improving manageability

Reduce:
- Number of distributed server farms
- Operational costs

Increase:
- Flexibility on application rollouts
- Uptime

Standardize:
- Physical Requirements
- Operational best practices
- Server platform

Establish:
- Future DC architecture
- Initial phase network design
- Technology Adoption Strategy
- Migration Strategy

Network Implications:
- Higher server farm density
- Higher average traffic loads
- Higher number of network-based services
- Larger & flatter networks
- At least N+1 redundancy

Facilities Implications:
- Higher power demands
- Higher cooling demands
- Higher square footage

Requirements:
- Future DC architecture
- Initial phase network design
- Technology Adoption Strategy
- Migration Strategy

Server Consolidation
Reducing capital costs & improving efficiency

Reduce:
- Number of OSs
- Server idle time
- Costs per RU

Increase:
- Application performance
- Application uptime
- Server density
- I/O, MEM and CPU capacity per RU

Standardize:
- SW architecture
- NG HW platforms (bound to tiers)
- I/O (capacity, cabling)
- Closer integration with DC architecture

Establish:
- Server architecture direction
- Facilities support strategy
- Migration Strategy

Network Implications:
- Higher uplink capacity
- Increased throughput per server
- Larger & flatter networks
- At least N+1 redundancy
- Availability beyond a single DC

Facilities Implications:
- Higher power demands
- Higher cooling demands
- Higher square footage

Requirements:
- Scalability of DC architecture
- Initial phase network design
- Technology Adoption Strategy
- Migration Strategy

Server & Infrastructure Virtualization
Improving utilization and agility

Reduce:
- Idle CPU cycles
- Server proliferation
- Power and cooling demands

Increase:
- Workload mobility
- Server rollout flexibility
- Average server CPU utilization
- I/O, MEM and CPU capacity per server

Standardize:
- Virtual server SW infrastructure
- NG HW platforms (bound to tiers)
- I/O and MEM capacity

Establish:
- Server architecture direction
- Server Support Strategy
- Migration Strategy
- Provisioning/management strategy

Network Implications:
- Higher # of uplinks
- Increased throughput per server
- L2 adjacency (larger & flatter)
- Availability beyond a single DC
- Server trunking
- More VLANs & IP subnets
- 10GE in the access

Facilities Implications:
- Higher power/cooling draw per server
- Lower power/cooling overall (fewer servers)
- Cabling to match access requirements

Requirements:
- Scalability of DC architecture
- Broad L2 adjacency
- Well-defined Access Layer Strategy
- Migration Strategy

Green-Field Data Centers
Addressing growth and consolidation requirements

Reduce:
- Wasted rack space
- After-the-fact cabling
- Power or cooling retrofitting

Increase:
- Per-rack server density
- Data Center longevity
- DC space utilization

Standardize:
- High and low density areas
- Power to server and network racks
- Cabling

Establish:
- Server farm growth potential
- Environmentals Control Strategy
- Usability Strategy
- Provisioning/management strategy

Network Implications:
- Predictable scalability increase
  - Physical: ports, slots, boxes
  - Logical: table sizes
- Well-identified access model
- Specific oversubscription targets (server & network)

Facilities Implications (per server, per rack and per pod):
- Power requirements
- Cooling capacity
- Cabling selection

Requirements:
- 4-5 year Architecture Strategy
- Migration Strategy to the new architecture
- Good handle on growth: servers and storage, I/O interfaces and capacity

Trends: Network Technology

Ethernet Standards
Applicable to Data Center Environments

802.3 10GE:
- 10GBase-T (IEEE 802.3an, ratified)
- 10GBase-CX4 (IEEE 802.3ak): defines copper categories
- 10GBase-*X (IEEE 802.3ae): defines MM and SM fiber categories

802.3 HSSG (Higher Speed Study Group), 40-100 GE (Project Authorization Request has been agreed):
- Support full duplex operation only
- Preserve 802.3 Ethernet frame format
- Preserve minimum and maximum frame size
- Support BER equal to or better than 10^-12
- Support Optical Transport Networks
- 40G: at least 100 m over OM3 MMF, 10 m over copper
- 100G: at least 40 km over SMF, 10 km over SMF, 100 m over OM3 MMF, 10 m over copper
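To put the 10^-12 BER objective in perspective, a quick back-of-the-envelope sketch (illustrative only; expected errors scale linearly with line rate):

```python
# Worst-case expected bit errors implied by the 802.3 BER objective of 1e-12.
def errors_per_second(line_rate_bps: float, ber: float = 1e-12) -> float:
    """Expected bit errors per second at a given line rate and BER."""
    return line_rate_bps * ber

for rate, name in [(10e9, "10GE"), (40e9, "40GE"), (100e9, "100GE")]:
    eps = errors_per_second(rate)
    # At the BER floor, faster links see errors proportionally more often.
    print(f"{name}: up to {eps:.2f} errors/s, i.e. one error every {1/eps:.0f} s")
```

At 100 Gb/s the same BER that was comfortable at 10GE allows an error every ~10 seconds, which is one reason the higher-speed PMDs keep the "equal or better than 10^-12" objective rather than relaxing it.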

Demand for 40GE & 100GE in the DC

- 100GE expected in 2010+ for switch interconnects
- Switch platforms need to be architected to deliver capacity in excess of 200 Gbps per slot
- DC facilities environmental specifications need to accommodate the higher-speed technology requirements:
  - Class 1: hazard level does not warrant special precautions
  - 40/100 GE MMF may not meet Class 1
  - Current proposal to IEEE: relax to Class 1M, suitable for restricted locations, which include DC facilities
- More information at http://www.ieee802.org/3/ba/public/mar08/petrilla_02_0308.pdf

Ethernet Interface Evolution: 40G and 100G

|                              | 40G Muxed          | 40G Native               | 100G Native                                                 |
|------------------------------|--------------------|--------------------------|-------------------------------------------------------------|
| IEEE Standard                | None               | None                     | Call for interest: July 2006. Expect ratification 2010-2011 |
| Increased bandwidth vs. 10GE | No, 4 x 10GE muxed | Yes, true 40G/interface  | Yes, true 100G/interface                                    |
| EtherChannel                 | 2 links            | 8 links                  | 8 links                                                     |
| Fiber savings                | Yes                | Yes                      | Yes                                                         |
| Approximate availability     | 2008               | 2009                     | 2010-11                                                     |
| Estimated FCS cost           | 2-3 x 10GE         | 10 x 10GE                | At least 10 x 10GE                                          |

Emerging Standards
All applicable to Data Center Environments

L2 Multipathing:
- IETF TRILL WG: proposal to solve L2 STP forwarding limitations
- IEEE 802.1aq: enhancement to 802.1Q to provide Shortest Path Bridging (Optimal Bridging) in L2 Ethernet topologies

Data Center Bridging:
- IEEE 802.1Qbb, Priority-based Flow Control: protocols, procedures and managed objects that support flow control per traffic class, as identified by the VLAN tag encoded priority code point
- IEEE 802.1Qaz, Enhanced Transmission Selection: enhancement of transmission selection to support allocation of bandwidth amongst traffic classes
- DCBX, Discovery and Capability Exchange Protocol: identify the DCB cloud nodes and their capabilities
- IEEE 802.1Qau, Congestion Notification (Congestion Management): signal congestion information to end stations to avoid frame loss; .1Q tag encoded priority values to segregate flows; support higher-layer protocols that are loss sensitive

Unified I/O:
- T11 FCoE (FC-BB-5)

Data Center Ethernet Features
Enhanced Ethernet Standards

| Feature                                           | Benefit                                                                                                      |
|---------------------------------------------------|--------------------------------------------------------------------------------------------------------------|
| Priority-based Flow Control (PFC)                 | Provides class-of-service flow control; ability to support storage traffic                                   |
| CoS-based BW Management                           | Grouping classes of traffic into "Service Lanes" (IEEE 802.1Qaz, CoS-based Enhanced Transmission Selection)  |
| Congestion Notification (BCN/QCN)                 | End-to-end congestion management for the L2 network                                                          |
| Data Center Bridging Capability Exchange Protocol | Auto-negotiation for Enhanced Ethernet capabilities, DCBCXP (switch to NIC)                                  |
| L2 Multi-path for Unicast & Multicast             | Eliminates Spanning Tree for L2 topologies; utilizes full bi-sectional bandwidth with ECMP                   |
| Lossless Service                                  | Provides ability to transport various traffic types (e.g. storage, RDMA)                                     |

Evolution of Ethernet
Physical layer enabling these technologies

- Mid 1980's: 10Mb, UTP Cat 3
- Mid 1990's: 100Mb, UTP Cat 5
- Early 2000's: 1Gb, UTP Cat 5 / SFP fiber
- Late 2000's: 10Gb, X2 / SFP+ Cu / SFP+ fiber / Cat 6/7(?)

| Technology                   | Cable     | Distance | Power (each side)   | Transceiver latency (link) |
|------------------------------|-----------|----------|---------------------|----------------------------|
| SFP+ CU (copper)             | Twinax    | 10m      | 0W (normalized)     | ~0.1µs                     |
| SFP+ USR (ultra short reach) | MM OM2    | 10m      | 1W                  | ~0                         |
|                              | MM OM3    | 100m     |                     |                            |
| SFP+ SR (short reach)        | MM 62.5µm | 82m      | 1W                  | ~0                         |
|                              | MM 50µm   | 300m     |                     |                            |
| 10GBASE-T                    | Cat6      | 55m      | ~8W                 | 2.5µs                      |
|                              | Cat6a/7   | 100m     | ~8W                 | 2.5µs                      |
|                              | Cat6a/7   | 30m      | ~4W                 | 1.5µs                      |
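The per-side power figures in the table above add up quickly at access-layer port counts. A small sketch of the arithmetic (the 48-link switch is a hypothetical example; the per-side wattages come from the table):

```python
# Per-link PHY power (both ends) for the media options in the table above.
phy_w_per_side = {"SFP+ CU": 0.0, "SFP+ SR": 1.0, "10GBASE-T (100m)": 8.0}

links = 48  # hypothetical: one fully populated 48-port access switch
for name, w in phy_w_per_side.items():
    total = links * 2 * w  # two PHYs per link, one at each end
    print(f"{name}: {total:.0f} W for {links} links")
```

At 2008-era 10GBASE-T power levels, a single 48-port switch's links can burn hundreds of watts in PHYs alone, which is part of why SFP+ Twinax became the default for in-rack 10GE.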

Trends: Software Services

SaaS

SaaS (Software as a Service): an alternative application/application suite built entirely on Web Services

- Hosted and supported by the software vendor
- Per seat / monthly $
- Available via the Internet
- Multi-tenant structure
- APIs available for integration with other business applications
- Known for scalability and availability
- Broad portfolios, application categories that meet the needs of the smallest business to the largest (www.saas-showplace.com)

SaaS Growth Predictions

- AMR found that 40% of all companies are currently using hosted applications, and 49% will use them within the next 12 months
- Gartner forecasts large companies will fulfill 25% of their application demands with hosted software by 2010
- IDC predicts the SaaS market will grow at a 21% compound annual growth rate (CAGR) during the next four years, reaching $10.7B worldwide in 2009
- Forrester Research predicts the market for traditional on-premise enterprise applications will only grow 4% through 2008
- The new estimate calls for an average annual growth rate of 22.1 percent, with the estimate for 2007 to come in at around 21 percent, ultimately becoming an $11.5 billion market by 2011 (http://www.formtek.com/blog/?p=380)

http://thinkstrategies.icentera.com/portals/file_getfile.asp?method=1&uid=11753&docid=5045&filetype=pdf

Salesforce.com Published Growth: http://www.salesforce.com/company/
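The compound-growth claims above follow the standard CAGR formula, value_n = value_0 x (1 + rate)^n. A quick illustrative check (the 21% rate and $10.7B/2009 endpoint come from the IDC prediction; treating "the next four years" as the 2005-2009 window is our assumption):

```python
# CAGR projection: value after n years of compounding at a fixed rate.
def project(value0: float, rate: float, years: int) -> float:
    return value0 * (1 + rate) ** years

# Working the IDC figure backwards: $10.7B in 2009 at 21% CAGR over
# four years implies a market of roughly $5.0B at the start of the window.
base = 10.7 / (1.21 ** 4)
print(f"Implied starting market size: ${base:.1f}B")
```

The same formula applied forward from ~$5B reproduces the $10.7B figure, and at 22.1% the trajectory is consistent in rough terms with the $11.5B-by-2011 estimate quoted above.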

What Is Cloud Computing?

- Grid Computing?
- New Development Platform?
- Parallel Computing?
- SaaS?
- XaaS?
- Cluster Computing?
- Utility Computing?
- Stateless Computing?
- Can all applications be Cloud enabled?

Cloud Computing
No real simple answer...

Users: the cloud appears as a single application or "service"
- Transparent on geography
- Transparent to a specific server: it could be one or many

Cloud manager:
- Applications are provisioned dynamically in server clusters
- Clusters can be clustered or geo-diverse for availability purposes
- Goal is to provide a simpler scalable solution for large applications (allow server upgrade and refresh, simpler provisioning, reduce patch management)

(Diagram: many servers for the cloud, dynamically provisioned across a communications cloud network.)

The Latest Evolution Of Hosting

March 2008, "Is Cloud Computing Ready For The Enterprise?"

A map of the players in the Cloud Computing, SaaS and PaaS markets

Source: http://dev2dev.bea.com/blog/plaird/archive/2008/05/understanding_t.html

Data Center Trends Summary

- Consolidation: Data Centers, Servers, & Infrastructure
- Virtualization: Servers, Storage, & Networks
- Understand the evolution of Ethernet technologies: 10 Gig, 40 Gig, 100 Gig, DCE, & FCoE
- Plan for a heterogeneous application environment: Internal/External hosting, SaaS/XaaS, & Cloud

Agenda

- Trends
- Data Center Strategy
  - Deployment Strategy
  - Technology Strategy
- Architecture Evolution

Data Center Strategy: Deployment Strategy

Topics in the Minds of Data Center Architects

- Application Deployment: Server Farm, VM, XaaS, Cloud
- 'Green': Power, Cooling
- Deployment Agility: Automated Provisioning, Lights-out Management
- Security: Role-based access
- Service Management Integration
- Virtualization
- I/O Consolidation: Ethernet, FC, FCoE; 1/10/40/100 Gbps
- Facilities: Consolidation, Greenfield
- Access models: End-of-Row, Top-of-Rack, Blade Switch

Data Center Strategy

(Diagram: the interdependent areas of a Data Center strategy.)

- Applications (virtualized)
- Management: provisioning, operations
- Compute: internal hosting, external compute
- Storage Resources
- External Service
- Network Infrastructure
- Data Center Facilities

Data Center Strategy
Utility Deployment Strategy

- All areas are interdependent; complex evaluation
- Do applications dictate facilities? Do facilities dictate hosting alternatives?
- Consider consistent user experience (SLAs)
- Budgetary and costing model considerations
- Management and operational aspects
- Requires broad cross-functional collaboration

Data Center Strategy
Application Architecture

Key Considerations:
- Application Architecture
  - Monolithic
  - N-Tier
  - Web 2.0 / Mash-up
- Core Business
  - Off the shelf
  - Custom Application
- Security Considerations
- Data Warehousing
- Business Economics
  - SaaS/XaaS
  - Internally/Externally Hosted
  - Cloud
- Application Redundancy
  - At server level (backup server)
  - Single DC, Multiple DC

Determining 'Care-Abouts':
- Utility Environment
- Demand Capacity
- Application RPO/RTO
- Projected longevity
- Service level requirements
- Anticipated annual growth

RPO: Recovery Point Objective. RTO: Recovery Time Objective.

Data Center Strategy
Compute Infrastructure

Key Considerations:
- % of Traffic Patterns
  - Client to server
  - Server to server
  - Server to storage
  - Storage to storage
- Server Capacity
  - Server bus capacity, mem/cpu
  - # of Ethernet I/O interfaces
  - # of FC I/O interfaces
  - Expected outbound load
- Server Redundancy
  - NIC teaming
  - Clustering

Determining 'Care-Abouts':
- Utility Server Infrastructure
  - Virtualization
  - Provisioning
- # of servers per application
- Size of subnet/VLAN
- % of server annual growth
- % of virtual server annual growth

Data Center Strategy
Storage Resources

Key Considerations:
- Storage Capacity
  - Internal/External Resources
  - Application requirements
- Host access model
  - Fibre Channel
  - FC over Ethernet (FCoE)
  - iSCSI
  - Oversubscription
- Storage Virtualization
  - N-Port Virtualization (NPV)
  - N-Port ID Virtualization (NPIV)
  - Volume Virtualization
- Number of storage racks
- SAN Topology
  - Number of SAN devices to manage
  - Number of physical SAN interfaces

Determining 'Care-Abouts':
- Sync/Async Replication
- SAN Interconnect
- Data RPO/RTO
- Data Security
- Data Growth & Migration

Data Center Strategy
Network Infrastructure

Key Considerations:
- Type of access model
  - Modular, ToR, Blade Switches
  - Number of servers per rack
  - Number of racks
- Topology
  - # of access switches & uplinks
  - L2 Adjacency Boundaries
  - Number of network devices to manage
  - Number of physical interfaces per server
  - Consolidated I/O
- Oversubscription
  - Server
  - Access to aggregation
  - Aggregation to core
- L2 Adjacency
  - Subnets/VLANs scope

Determining 'Care-Abouts':
- Fault isolation & recovery
- Services insertion
- Data Center Interconnect
- L3 Features
- L2 Features
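Oversubscription at the server, access-to-aggregation, and aggregation-to-core points compounds multiplicatively. A small sketch of the arithmetic (port counts and speeds here are hypothetical, chosen only to illustrate):

```python
# End-to-end oversubscription is the product of the per-tier ratios.
def oversub(down_ports: int, down_gbps: float, up_ports: int, up_gbps: float) -> float:
    """Ratio of downstream-facing to upstream-facing capacity for one tier."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# Hypothetical: 48 x 1GE servers per access switch with 4 x 10GE uplinks,
# and an aggregation switch with 8 x 10GE down, 2 x 10GE toward the core.
access = oversub(48, 1, 4, 10)   # 48G over 40G = 1.2:1
agg = oversub(8, 10, 2, 10)      # 80G over 20G = 4:1
print(f"access {access}:1, agg {agg}:1, end-to-end {access * agg}:1")
```

This is why the slide calls out specific oversubscription targets per tier: a modest ratio at each layer can still leave servers sharing far less core bandwidth than their NIC speeds suggest.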

Data Center Strategy
Data Center Facilities

Key Considerations:
- Total DC power capacity
- Total DC space
- Per rack
  - Power capacity
  - Servers
  - Cabling
- Number of racks per pod
  - Power
  - Cooling
- Racks of network equipment
  - Power
  - Cooling
  - Cabling
- Number of pods per area
- Number of areas per DC

Determining 'Care-Abouts':
- DC Tier target
- Disaster recovery
- 'Green' efficiency
- Airflow
- Cable routes
- Power routes

Data Center Strategy
Management, Provisioning, Operations

Key Considerations:
- Management
  - Monitoring
  - Measuring
  - Tracking
- Provisioning
  - Internal Compute
  - External Compute
  - Service Insertion
  - Network
- Operations
  - Power & Cooling
  - Servers (Internal & External)
  - Cabling

Determining 'Care-Abouts':
- Performance Criteria
- RPO/RTO
- Monitoring (NetFlow)
- Fault isolation & recovery
- Testing

Data Center Strategy: Technology Strategy

Data Center Strategy in Action Physical Facilities HOT AISLE COLD AISLE DC Zone Pod Network

Data Center Strategy in Action

Physical Facilities

HOT AISLE COLD AISLE

HOT AISLE

COLD AISLE

HOT AISLE COLD AISLE
HOT AISLE COLD AISLE
HOT AISLE COLD AISLE
HOT AISLE COLD AISLE
HOT AISLE COLD AISLE
HOT AISLE COLD AISLE
HOT AISLE COLD AISLE
HOT AISLE COLD AISLE
HOT AISLE COLD AISLE
HOT AISLE COLD AISLE
HOT AISLE COLD AISLE
HOT AISLE COLD AISLE
HOT AISLE COLD AISLE
DC

DC

Zone

Zone

Pod

Pod

Network

Network

Servers

Servers

Storage

Storage

- 4–6 zones per DC; 6–15 MW per DC
- 60,000–80,000 sq ft per zone; 1–3 MW per zone
- 200–400 racks/cabinets per zone
- Cooling and power provisioned per pod (per pair of rack rows)
- 8–48 servers per rack/cabinet; 1–1.5 kW per cabinet
- 2–11 interfaces per server
- 2,500–30,000 servers per DC
- 4,000–120,000 ports per DC
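These planning ranges multiply out quickly. A minimal sketch of the arithmetic, where the zone, rack, server, and interface counts are illustrative picks from the ranges above, not recommendations:

```python
# Back-of-the-envelope DC capacity from the planning ranges above.
# Inputs are illustrative values from the slide's ranges, not measured data.
def dc_capacity(zones, racks_per_zone, servers_per_rack, ifaces_per_server):
    racks = zones * racks_per_zone
    servers = racks * servers_per_rack
    ports = servers * ifaces_per_server
    return servers, ports

# 5 zones x 300 racks x 20 servers x 4 interfaces lands at the
# top of the slide's ranges:
servers, ports = dc_capacity(5, 300, 20, 4)
print(servers, ports)  # 30000 120000
```

Note how quickly port count, not server count, becomes the constraint once multi-homing is factored in.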


It all depends on server types and the network access layer model.


Reference Physical Topology

Network Equipment and Zones

[Figure: zone layout – pods of network, server, and storage racks arranged in hot/cold aisles, grouped into Module 1 … Module N]


Pod Concept

Network Zones and Pods

DC Sizing
- DC: a group of zones (or clusters, or areas)
- Zone: typically mapped to an aggregation pair
- Not all designs use hot/cold aisles
- Predetermined cable, power, and cooling capacity

Pod/Module Sizing
- Typically mapped to the access topology
- Size determined by distance and density
- Cabling distance from server racks to network racks:
  - 100 m copper
  - 200–500 m fiber
- Cable density: number of servers × I/Os per server
- Racks:
  - Server: 6–30 servers per rack
  - Network: based on the access model
  - Storage: special cabinets
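The cable-density bullet is just multiplication; a small sketch, where the rack count and I/O count are hypothetical examples rather than slide figures:

```python
# Server-facing cable count for one pod:
# racks x servers per rack x I/Os per server.
def pod_cables(racks, servers_per_rack, ios_per_server):
    return racks * servers_per_rack * ios_per_server

# A hypothetical 20-rack pod at the slide's 30-servers/rack high end,
# with 4 I/Os per server:
print(pod_cables(20, 30, 4))  # 2400 cable runs to land within the 100 m copper reach
```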


Network Equipment Distribution

End of Row and Middle of Row


End of Row
- Traditionally uses copper from servers to access switches
- Poses challenges in highly dense server farms:
  - Distance from the farthest rack to the access point
  - Row length may not lend itself well to switch port density

Common Characteristics
- Typically used for modular access
- Cabling is done at DC build-out
- Model evolving from EoR to MoR
- Lower cabling distances (lower cost)
- Allows denser access (better flexibility)
- 6–12 multi-RU servers per rack
- 4–6 kW per server rack; 10–20 kW per network rack
- Subnets and VLANs: one or many per switch; subnets tend to be medium to large (/24, /23)

Middle of Row
- Use is starting to increase given EoR challenges
- Copper from servers to access switches
- Addresses aggregation requirements for ToR access environments
- Fiber may be used to aggregate ToR switches

[Figure: end-of-row and middle-of-row cabling – server racks patch copper to cross-connects feeding network access points A–B and C–D; fiber between access points]


Network Equipment Distribution

Top of Rack


ToR
- Used in conjunction with dense access racks (1RU servers)
- Typically one access switch per rack; some customers are considering two plus clustering
- Use of either side of the rack is gaining traction
- Cabling:
  - Within the rack: copper from server to access switch
  - Outside the rack (uplinks):
    - Copper (GE): needs a MoR model for fiber aggregation
    - Fiber (GE or 10GE): more flexible, but also requires an aggregation model (MoR)
- Subnets and VLANs:
  - One or many subnets per access switch
  - Subnets tend to be small: /24, /25, /26

[Figure: top-of-rack cabling – a ToR switch and patch panel in each server rack, uplinked through cross-connects to network aggregation points A–B and C–D]


Network Equipment Distribution

Blade Chassis


Switch to Switch
- Potentially higher oversubscription
- Scales well for blade server racks (~3 blade chassis per rack)
- Most current uplinks are copper, but newer switches offer fiber
- Migration from GE to 10GE uplinks is taking place

Pass-through
- Scales well for pass-through blade racks
- Copper from servers to access switches

ToR
- Not commonly used in conjunction with blade switches
- May be a viable option in pass-through environments if the access port count is right
- Efficient when used with blade virtual switch environments

[Figure: blade-chassis connectivity – racks of blade chassis with integrated switches (sw1/sw2) or pass-through modules, patched through cross-connects to network aggregation points A–B–C–D]


Network Equipment Distribution

End of Row, Top of Rack & Blade Switches

End of Row vs. ToR vs. Blade Switches

Network component & location:
- End of Row: modular switch at the end of a row of server racks
- ToR: low-RU, lower-port-density switch in each server rack
- Blade: switches integrated into the blade enclosures

Cabling:
- End of Row: typically copper from server to access switch; fiber from access to aggregation switches
- ToR: copper from server to ToR switch; fiber from ToR to aggregation switches
- Blade: servers connect to the internal switches within the chassis; the switches use copper (and fiber) to aggregation switches

Port density:
- End of Row: 240–336 ports
- ToR: 40–48 ports
- Blade: 14–16 servers (dual-homed)

Server density:
- End of Row: 6–12 multi-RU servers per rack
- ToR: 8–30 1RU servers per rack
- Blade: 3–4 blade enclosures per rack

VLANs & subnets:
- End of Row: one or more subnets/VLANs per access switch
- ToR: one smaller VLAN/subnet per access switch
- Blade: a subnet/VLAN is shared across multiple access switches


Reference Network Topology Hierarchical Architecture

[Figure: hierarchical topology – L3 core; L3/L2 aggregation; L2 access carrying VLANs A–E across Modules 1 and 2]

Hierarchical Design
- Triangle and square topologies
- Multiple access models: modular, blade switches, and ToR
- Multiple oversubscription targets
- Highly scalable


Data Center Topology

Scalable Server Architecture

[Figure: core, aggregation, and access layers at four scales]
- Up to 192 servers
- 192 – 1,500 servers (small to medium)
- 1,500 – 4,000 servers (medium to large)
- 4,000 – 10,000 servers (large to very large)


Server Oversubscription

What is the right number?

1. Single-homed servers: GE or 10GE

GE NIC (capacity 1 Gbps):
- 100 Mbps sustained: 10:1
- 200 Mbps: 5:1
- 500 Mbps: 2:1

10GE NIC (capacity 10 Gbps):
- 500 Mbps sustained: 20:1
- 1 Gbps: 10:1
- 2 Gbps: 5:1
- 4 Gbps: 2.5:1
- 5 Gbps: 2:1

2. Multi-homed & virtual servers (active-standby):
- 4 × GE: 4 times per system
- 2 × 10GE: 2 times per system
- Virtualized: 1.x times per system

So what is the right number? It depends…
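Every ratio above comes from one division, NIC capacity over the load the server is actually expected to drive; a sketch:

```python
# Per-server oversubscription ratio: NIC capacity divided by the
# sustained load the server is expected to drive (values from the
# ranges above; the "right" ratio depends on the application).
def oversub(nic_gbps, expected_gbps):
    return nic_gbps / expected_gbps

print(oversub(10, 0.5))  # 20.0 -> a 10GE NIC at 500 Mbps is 20:1
print(oversub(10, 4))    # 2.5  -> at 4 Gbps it is 2.5:1
```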


Server Oversubscription

What to do…

1st – Understand applications and their traffic patterns:
- Client to server: low bandwidth
- Server to server: high bandwidth
- Server to storage: bulk
- Storage to storage: bulk

2nd – Consider peak-time behavior:
- Maximum server peak: single-server maximum capacity
- Average server peak: likely to be seen across the server farm
- Aggregate server peak

3rd – Plan network oversubscription based on peak loads:
- Consider the server bus
- Consider server growth
- Consider steady vs. failover states

4th – Network oversubscription:
- Factor in I/O module oversubscription
- Consider the network layers: access, aggregation, and core
- Ranges: 1:1 – 1:20, increasing over time
- Factor in server oversubscription
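A hedged sketch of how the peak-behavior and oversubscription steps combine when sizing aggregation uplinks; the server count, peak load, target ratio, and 10GE uplink speed here are all assumed inputs, not figures from the session:

```python
import math

# Size uplinks from aggregate peak load and a target oversubscription.
def uplinks_needed(servers, avg_peak_gbps, target_oversub, uplink_gbps=10):
    aggregate_peak = servers * avg_peak_gbps       # aggregate server peak
    provisioned = aggregate_peak / target_oversub  # capacity after oversubscription
    return math.ceil(provisioned / uplink_gbps)

# 480 servers averaging 0.5 Gbps at peak, 4:1 target oversubscription:
print(uplinks_needed(480, 0.5, 4))  # 6  (i.e. 6 x 10GE uplinks)
```

The failover case matters too: rerun the calculation with one aggregation switch's worth of uplinks removed before settling on a count.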


Server Virtualization Design Considerations


- A VLAN maps to a subnet… what size of subnet?
- VM mobility is bounded by L2 boundaries
- Is the VM cluster limited to:
  - a VLAN on a single switch?
  - a VLAN across multiple switches?
  - a VLAN across all access switches in a single module?
- How many clusters?

Hypothetical Example
- 1,000 servers, each using a single IP/MAC pair
- Virtualized with 20 VMs per server: (1,000 × 20) + 1,000 = 21,000 IP/MAC pairs, i.e. 20,000 new ones
- 20,000 / 250 hosts (a /24 subnet) = 80 new subnets/VLANs
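The hypothetical example as plain arithmetic (250 hosts per subnet is the slide's round number for a /24; the exact usable count is 254):

```python
# 1,000 physical servers, 20 VMs each, one IP/MAC pair per host.
servers = 1000
vms_per_server = 20
hosts_per_subnet = 250          # round number for a /24

total_pairs = servers * vms_per_server + servers   # VMs plus the hosts themselves
new_pairs = total_pairs - servers                  # pairs added by virtualization
new_subnets = new_pairs // hosts_per_subnet        # new /24s (and VLANs) required

print(total_pairs, new_pairs, new_subnets)  # 21000 20000 80
```

Eighty new VLANs from one consolidation project is the point: virtualization multiplies the L2 state the aggregation layer must carry.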


Data Center Strategy Summary


- Complex interdependencies
- Focus on applications ↔ user experience
- Identify key objectives for each aspect of the infrastructure
- Map physical and logical topologies
- Consider I/O options and requirements
- Evaluate the network impact of virtualization


Agenda

Trends Architecture Strategy Architecture Evolution


Architecture Evolution


Data Center Architecture Mapping Initiatives to Architecture

IT Initiatives
- Application flexibility: SaaS, internal/external compute, virtualized images
- Server consolidation: faster CPUs, multi-core, more memory, higher I/O capacity
- Server virtualization: application availability and scalability, server utilization
- Application availability: lower RPO/RTO, better stability
- Workload management: faster application rollout, dynamic server movement
- Automated provisioning: template-driven configuration & dynamic provisioning

Architectural Goals
- Improved efficiency
- Scalable bandwidth
- Simplified I/O
- Improved robustness
- Integrated services

Together these drive a Common Systems Architecture.


Network Architecture

Mapping Architecture to Technology

Architectural Goals
- Improved efficiency
- Scalable bandwidth
- Simplified I/O
- Improved robustness
- Integrated services

Technology Requirements
- Scalable 10G infrastructure
- Efficient L2 pathing
- Increased STP stability
- Virtual switch partitioning and isolation
- Scalable DC services

Cisco Technology Alignment
- 10G Ethernet
- I/O consolidation
- FCoE
- Virtual switching

Together these form a common technology architecture.


Dense 10GE Network Topology

High Density 10GE Aggregation

[Figure: high-density 10GE aggregation – core1/core2 (L3) with 10GE uplinks to aggregation pairs agg1…aggx+1 in Modules 1 and 2; L2 access switches acc1…accN+1 carry VLANs A–E]
Common Topology – Starting Point
- Nexus at the core and aggregation layers
- Two-tier L2 topology
- VLANs contained within the aggregation module

Topology Highlights
- Lower oversubscription
- Higher-density 10GE at the core and aggregation layers


10GE Server Farms

10GE Access and Aggregation

[Figure: two 10GE access options off agg1/agg2 (64 10GE ports each) – ToR switches with 40–44 ports (52 10GE agg-facing ports = 8–12 ToR switches) vs. modular switches with 192 ports (52 10GE agg-facing ports = 4–12 modular switches); VLANs A–C]

10GE in the Access
- Positioned for I/O consolidation
- Using ToR: lower oversubscription (3.3:1)
- Using modular: higher oversubscription (12:1)
- ToR uses Twinax cable; modular uses fiber
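Access-layer oversubscription is downlink capacity over uplink capacity. A sketch with hypothetical port counts — the 3.3:1 and 12:1 figures above depend on the exact server-port and uplink counts deployed:

```python
# Access-switch oversubscription: total downlink bandwidth divided by
# total uplink bandwidth (all ports 10GE here; counts are examples).
def access_oversub(down_ports, up_ports, port_gbps=10):
    return (down_ports * port_gbps) / (up_ports * port_gbps)

print(access_oversub(40, 4))    # 10.0 -> a 40-port ToR with 4 uplinks is 10:1
print(access_oversub(192, 16))  # 12.0 -> 192 ports over 16 uplinks is 12:1
```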


10GE Server Access

10 Gig Ethernet End-host Mode


End Host Mode

Switch perspective:
- MAC-based uplink selection
- Active-active uplinks using different MACs
- No STP on the access device; BPDUs are not processed, they are dropped
- Separate loop-avoidance mechanisms

Host perspective:
- Active-standby only

Network environment:
- STP is not fully removed: some switches run it, some do not
- Looped conditions have to be considered without STP
- The path to service devices is challenging
- Virtual port channels solve most of these issues

[Figure: access switches A and B on VLAN C, split between an STP cloud (Ethernet) and a no-STP cloud (DCE)]


I/O Consolidation in the Network

[Figure: a server's separate I/O interfaces for storage, IPC, and LAN consolidated into a single I/O subsystem beside the processor and memory]


I/O Consolidation in the Host


- Fewer CNAs (Converged Network Adapters) instead of NICs, HBAs, and HCAs
- Works within the limited number of interfaces available on blade servers

[Figure: host adapters before and after – separate FC HBAs, NICs, and HCAs carrying FC, LAN, management, and IPC traffic, replaced by a pair of CNAs; all traffic goes over 10GE]


What Is FCoE?

Fibre Channel over Ethernet


- From a Fibre Channel standpoint, it is FC connectivity over a new type of cable called… an Ethernet cloud
- From an Ethernet standpoint, it is yet another ULP (Upper Layer Protocol) to be transported, but… a challenging one!
- And technically, FCoE is an extension of Fibre Channel onto a lossless Ethernet fabric


Fibre Channel over Ethernet

Brief look at the Technology


- A method for directly mapping FC frames over Ethernet
- Seamlessly connects to FC networks
- Extends FC in the data center over Ethernet
- FCoE appears as FC to both the host and the SAN
- Preserves current FC infrastructure and management; the FC frame is unchanged
- Can operate over standard switches (with jumbo frames)
- Priority Flow Control guarantees no drops, mimicking the FC buffer-credit system and avoiding TCP
- Does not require expensive offloads


Discrete Network Fabrics

Typical Ethernet and Storage Topology

[Figure: a three-tier Ethernet network (L3 core, L3/L2 aggregation, L2 access; VLANs A–C) alongside dual SAN fabrics A and B (VSANs 2 and 3) serving hosts A–F]

Single Ethernet Network Fabric
- Typically three tiers
- Access switches are dual-homed
- Servers are single- or multi-homed

Dual Storage Fabrics
- Typically two tiers
- Edge switches are dual-homed
- Servers are dual-homed to different fabrics


Unified Fabric: Phase I – DCE FCoE Server Access

[Figure: unified fabric phase I – servers with CNAs attach over DCE to access switches, which forward LAN traffic to the Ethernet aggregation/core (VLANs A–D) and storage traffic to SAN fabrics A and B; CNA = Converged Network Adapter]


Unified Network Fabric

Benefits to Customers

[Figure: FCoE at the server access – FCoE links from server CNAs to an FCoE switch, which splits Ethernet traffic to the LAN and FC traffic to SAN A and SAN B; the FCoE adapter is displayed and managed like a native FC adapter]

Benefits:
- Fewer interfaces and cables
- Same SAN management as native FC
- No gateway
- Less power and cooling


N-Port Virtualization (NPV)

Solves Domain-ID Explosion


NPV-Core Switches

[Figure: an NPV device (e.g. a Nexus 5000) connects through NP ports to F ports on NPV-core switches (VSAN 15 and VSAN 10), fronting initiators and an FC target; it can have multiple uplinks on different VSANs]

- The NPV device uses the same domain(s) as the NPV-core switch(es)


Virtual Ethernet Switching
Improving Management and Pathing

Virtual Switches: logical instances of physical switches
- Many to one: grouping of multiple physical switches
  - Reduces management overhead (a single switch) and simplifies configuration (a single switch config)
- One to many: partitioning of a physical switch
  - Isolates the control plane and control-plane protocols

Virtual PortChannels: EtherChannel across multiple chassis
- Simplify L2 pathing by supporting non-blocking, cross-chassis, concurrent L2 paths
- Lessen reliance on STP (loop-free L2 paths are not established by STP)

Virtual Switching Implementations
- Virtual Switching System (VSS): Catalyst 6500
- Virtual Blade Switch (VBS): 10GE-based blade switches
- Virtual Device Context (VDC): Nexus 7000
- Virtual Port Channel (vPC): Catalyst 6500, Nexus family


Virtual Switch – VSS
Two to One

[Figure: two physical switches, each with its own OSPF/SNMP/STP/HSRP instances (A1, A2), merged into one virtual switch with a single set of protocol instances (A)]

Two Physical Switches into One Virtual
- Two switches look like one: two physical switches, one virtual switch
- Virtual switch:
  - All ports appear to be on the same physical switch
  - Single point of management, single configuration
  - Single IP/MAC
  - Single control-plane protocol instance
- Benefits:
  - Simplified infrastructure management
  - L2 DC interconnect
  - High availability


Virtual Blade Switch – VBS
Many to One

[Figure: up to eight physical blade switches (A1–A8) joined into one virtual switch (A)]

Many to One
- Many switches look like one: up to eight physical switches, one virtual switch
- Virtual switch:
  - All ports appear to be on the same physical switch
  - Single point of management, single configuration
  - Single IP/MAC
- Benefits:
  - Simplified infrastructure management
  - A single switch to manage


Virtual Switching – VDC
One to Many

[Figure: one physical switch running OSPF/IGMP/STP/HSRP (A) partitioned into virtual device contexts A1–A4, each with its own protocol instances]

One to Many
- One switch looks like many: one physical switch, many logical switches
- Virtual switch:
  - Switch ports exist in only a single logical instance
  - Per-virtual-switch point of management and configuration
  - Per-virtual-switch IP/MAC
  - Per-virtual-switch control-plane protocol instance
- Benefits:
  - Control-plane isolation
  - Control-protocol isolation


Isolating Collapsed L2 Domains
Through Virtual Device Contexts

VDCs at the Aggregation Layer
- One STP topology per VDC environment
- Each access switch homes to only one VDC
- VLAN instances per VDC, per access switch
- One STP process per access switch

VDCs at the Aggregation and Access Layers
- One STP topology per VDC environment
- Access switches support VDCs as well
- VLAN instances per VDC
- Two STP processes per access switch

[Figure: left – aggregation switches agg1–agg4 carved into VDCs, with access switches acc1–accN+1 split between VLAN C in VDC1 and VDC2; right – the same design with VDCs extended into the access layer]


Virtual PortChannels – vPC
L2 Topology

[Figure: a device bundles four links into one port channel split across aggregation switches AG1 and AG2, which appear as a single logical switch (A)]

Two to One
- Two physical switches present a single logical switch
- Devices connect to the single "logical" switch; their connections are treated as a port channel
- Virtual PortChannel:
  - Ports to the virtual switch can form a cross-chassis port channel
  - A virtual port channel behaves like a regular EtherChannel
- Benefits:
  - Provides non-blocking L2 paths
  - Lessens reliance on STP

Session_ID

73

Presentation_ID

© 2008 Cisco Systems, Inc. All rights reserved.

Cisco Public


Simplifying the topology

Through Virtual PortChannels

Virtual Switching

[Diagram: core1/core2 and agg1/agg2 interconnected with vPCs; access switches acc1, acc2, accX, accY, accN dual-homed to the aggregation pair through port channels, carrying VLANs A–E]


Simplify network topology

Build loop-free topologies without STP
Take advantage of all available L2 paths
Use all available network bandwidth capacity
STP is still used as a fail-safe mechanism

Simplify Server to Network Connectivity

Servers are also able to use more than one interface concurrently
NIC teaming is no longer necessary
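The server-facing side can be sketched the same way. This is a hedged illustration assuming vPC-capable access switches; the VLAN, interface, and channel numbers are hypothetical:

```
! Hypothetical sketch, configured on both access-switch peers: a dual-homed
! server sees a single LACP port channel rather than a NIC-teaming pair
interface port-channel 30
  switchport access vlan 100
  vpc 30

interface Ethernet1/10
  switchport access vlan 100
  channel-group 30 mode active   ! LACP toward one server NIC
```

With both NICs active in one bundle, the server uses both links concurrently, which is what removes the need for active/standby NIC teaming.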



Overlaying Stateful Services

Leveraging Virtual PortChannels

Virtual Switching

Service Appliances or Service Switches

Leverage Virtual Port Channels
Non-blocking path to STP root/HSRP primary

Service Integration

Services switches housing service devices
Service appliances
Most services support 10GE connections

[Diagram: core1/core2 and agg1/agg2 with service switches svcs1/svcs2 attached at the aggregation layer via vPCs; each services switch houses appliances and service modules; access switches acc1, acc2, accX, accY, accN below carry VLANs A–E]



Architecture Evolution: Summary



Data Center Architecture Summary

[Diagram: data center topology with SAN Fabric A and Fabric B alongside the LAN; L3/L2 boundaries at core and aggregation, with Pod-wide, Agg-wide, and DC-wide VLAN scopes]

Topology Layers:

Core Layer: Support high-density L3 10GE aggregation
Aggregation Layer: Support high-density L2/L3 10GE aggregation
Access Layer: Support EoR/MoR, ToR, & blade for 1GE, 10GE, DCE & FCoE attached servers

Topology Service:

Services through service switches attached at L2/L3 boundary

Topology Flexibility:

Pod-wide VLANs, aggregation-wide VLANs, or DC-wide VLANs
Trade-off between flexibility and fault-domain size



Architecture Evolution Summary

10 Gig Core, Aggregation, & Access
DCE – Ethernet Enhancements

I/O Consolidation

Unified Fabric - FCoE

Virtualization

N-Port Virtualization – NPV

Virtual Switch - VSS

Virtual Blade - VBS

Virtual Device - VDC

Virtual Portchannel - vPC

Cisco Technology Alignment

10G Ethernet
I/O Consolidation
FCoE
Virtual Switching



Additional Resources

URLs

VSS Independent Testing

http://www.networkworld.com/reviews/2008/010308-cisco-virtual-switching-test.html

6500 Cabinet Information: http://wwwin.cisco.com/dss/isbu/6500/enviro/index.shtml

Panduit http://www.panduit.com/default.asp

Chatsworth Cabinets http://www.chatsworth.com/common/n-series

TIA – Telecommunications Industry Association http://www.tiaonline.org/

ASHRAE – American Society of Heating, Refrigerating and Air-Conditioning Engineers http://www.ashrae.org/

Uptime Institute http://uptimeinstitute.org/

Government work on server and DC Energy Efficiency:

http://www.energystar.gov/index.cfm?c=prod_development.server_efficiency



Useful Standards Efforts Resources

http://www.ietf.org/html.charters/trill-charter.html
http://www.ietf.org/internet-drafts/draft-ietf-trill-prob-01.txt
http://www.ietf.org/internet-drafts/draft-ietf-trill-rbridge-protocol-02.txt
--- o ---
http://www.ieee802.org/1/files/public/docs2005/aq-nfinn-shortest-path-0905.pdf
http://www.ieee802.org/1/files/public/docs2006/aq-nfinn-shortest-path-2-0106.pdf
http://www.ieee802.org/1/pages/802.1au.html
http://www.ieee802.org/3/ar/public/0503/wadekar_1_0503.pdf
http://www.ieee802.org/1/files/public/docs2007/au-bergamasco-ecm-v0.1.pdf
--- o ---
http://grouper.ieee.org/groups/802/3/hssg/
--- o ---
http://www.t11.org/index.html



Q and A



Recommended Reading for BRKDCT-2866

Data Center Fundamentals

Storage Networking Protocol Fundamentals

Storage Networking Fundamentals: An Introduction to Storage Devices, Subsystems, Applications, Management, and File Systems