
Cisco Server Fabric Switching:

Enabling the High Performance Server Interconnect

April 26, 2006


Jason Walker
jlwalker@cisco.com
© 2005 Cisco Systems, Inc. All rights reserved. Cisco Confidential
Agenda

• InfiniBand Hardware Overview
• InfiniBand System Overview
• RDMA and Upper Layer Protocols
• High Performance Computing Architectures
• Virtualization

InfiniBand Hardware Overview

What is InfiniBand?

• InfiniBand is a high-speed, low-latency technology used to interconnect servers, storage, and networks within the data center
• Standards based – InfiniBand Trade Association
http://www.infinibandta.org
• Scalable interconnect:
1X = 2.5Gb/s
4X = 10Gb/s
12X = 30Gb/s

InfiniBand Physics

• Copper and fiber interfaces are specified
• Copper
Up to 15m* for 4x connections
Up to 10m for 12x connections
• Optical
Initial availability via dongle solution
Up to 300m with current silicon
Long haul possible, but not with current silicon

* 20m in certain circumstances

InfiniBand Physics

• A link is a bond of 2.5Gbps (1x) lanes
Fiber is a ribbon cable
Copper is a multi-conductor cable
• Each lane is 8b/10b encoded
A 4x link is four 2.5Gbps physical connections
Each connection carries 2Gbps of data
SAR provides a single 8Gbps data connection (4x) or 24Gbps (12x)
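The arithmetic above can be sketched in a few lines (a minimal illustration of the 8b/10b math, not IB-specific code):

```python
# Effective InfiniBand data rate: each lane signals at 2.5 Gbps, and
# 8b/10b encoding carries 8 data bits per 10 line bits (80% efficiency).
def ib_data_rate_gbps(lanes: int, signal_gbps: float = 2.5) -> float:
    return lanes * signal_gbps * 8 / 10

print(ib_data_rate_gbps(1))   # 1x -> 2.0
print(ib_data_rate_gbps(4))   # 4x -> 8.0
print(ib_data_rate_gbps(12))  # 12x -> 24.0
```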

Terms and Components

• InfiniBand technology terms:
– Host Channel Adapter: A device that terminates an IB link, bridges it to another medium (e.g. PCI-X), and exposes its functionality through a verbs software layer.
– Multicast: InfiniBand’s ability to deliver a single packet to multiple ports simultaneously. Implemented through a multicast group subscription model.
– Node: Device, usually a server, which can access the IB fabric.
– Partition: A group of IB ports that are allowed to communicate with one another. Ports can be members of multiple partitions.
– Path (Route): The set of links, switch ports, and host ports a packet traverses from source to destination.

Terms and Components (continued)

• InfiniBand technology terms:
– Reliable Connection (RC): A transport service where packets are reliably delivered from one host to another (also called a “channel”). Unreceived packets are automatically resent.
– Subnet: Set of switches and channel adapters interconnected by links and managed by a common Subnet Manager.
– Subnet Manager: Software which runs on the switch and manages and configures routes within the IB subnet fabric.
– Switch: A device that routes packets from one link to another on the same subnet at line speed using a linear forwarding table.
– Unreliable Connection (UC): A transport service where packets are sent “best effort,” and unreceived packets are not automatically resent.
– Verbs: A software layer which exposes the functionality of an InfiniBand channel adapter to the application/OS.

Pluggable Optics Module
Transforms Powered Copper Ports to Optical Ports

• Converts a copper port to an optical port on a port-by-port basis (Topspin Optical Module)
• Extends port-to-port reach to 150m–300m with fiber ribbon cables

How does InfiniBand compare?

• IB is the highest-bandwidth I/O fabric available today
• Very cost effective, commoditizing rapidly

[Chart: bandwidth vs. cost; 1G EN and 2G FC near 1Gb/s, 10G EN, 10G ATM/SONET, and 4X-IB at 10Gb/s, 12X-IB above, with IB lowest in cost]
InfiniBand Nomenclature

[Diagram: servers (CPU, memory controller, system memory) attach to the IB fabric through an HCA; a switch running the SM connects hosts over IB links to TCAs that bridge to Ethernet and FC links]

HCA – Host Channel Adapter
TCA – Target Channel Adapter
SM – Subnet Manager

InfiniBand Switch Hardware

• The hardware switch device is a cut-through memory switch
• Full-duplex, non-blocking 24-port tag-forwarding switch
• Tags are the system Local IDs, provided to all network endpoints by the Master Subnet Manager at system startup
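The LID-based tag forwarding described above amounts to a table lookup; a toy sketch with hypothetical LID-to-port entries (the real switch does this in hardware):

```python
# Toy linear forwarding table: index = destination LID, value = egress port.
# The Master Subnet Manager populates this on every switch at startup.
lft = [None] * 16                   # LIDs 0..15 (real tables cover the full LID space)
lft[1], lft[2], lft[5] = 3, 7, 3    # hypothetical routes set by the SM

def forward(dest_lid: int) -> int:
    port = lft[dest_lid]
    if port is None:
        raise LookupError(f"no route programmed for LID {dest_lid}")
    return port

print(forward(2))  # packet for LID 2 leaves on port 7
```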

InfiniBand Host Channel Adapter

• Network interface for IB attached Servers


• Provides hardware Virtual/Physical memory
mapping, Direct Memory Access (DMA), and
memory protection
• Provides RDMA (Remote DMA) data transfer engine
and reliable packet forwarding capabilities

InfiniBand Gateway

• Technically a Target Channel Adapter
• Similar to an HCA attached to an embedded device
• Usually doesn’t require virtual memory manipulation and mapping
• Simplified HCA on a specialized device
Examples: Ethernet-to-InfiniBand or Fibre Channel-to-InfiniBand packet forwarding engines

InfiniBand System Overview

InfiniBand System Architecture

• Connection Oriented Architecture


Central connection routing management (SM)
All communications based on send/receive queue pairs
• Two primary connection types
Reliable Connection
Unreliable Datagram
• Unused connection types
Unreliable Connection
Reliable Datagram
Raw Datagram

InfiniBand Connections

• Reliable Connection
Host Channel Adapter based guaranteed delivery
Uses HCA onboard memory (or system memory with PCI-E) for
packet buffering
Primarily used for RDMA communications
Can use end-to-end flow control based on credits related to
available receive buffers
• Unreliable Datagram
Best effort forwarding
Used for IP over IB communications

Subnet Manager

• IB is a connection oriented fabric


• Connections are managed by the Subnet Manager
(SM)
• SM works in conjunction with the Subnet
Administration Agent (SA) on the host to provide
end to end context information
• SM has total network view available at all times
• SM sets routes for all connections through the IB
network.

SM Functionality

• Concept of primary and standby SMs
Higher priority wins
Ties are broken by election of the lower GUID
Non-pre-emptive election process
• The IB spec does not specify how routing is set up in the fabric; every vendor has its own implementation
• On startup, the SM sweeps the network. In Cisco’s SFS SM, the sweep is parallelized to provide rapid network convergence
• The SM receives asynchronous notification of port or link failures, and re-establishes connections and routes when possible
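The election rule above (higher priority wins; ties go to the lower GUID) can be expressed compactly (the GUID and priority values below are made up for illustration):

```python
# Master SM election: highest priority wins; ties broken by lowest GUID.
def elect_master(candidates):
    """candidates: iterable of (priority, guid) tuples."""
    return max(candidates, key=lambda c: (c[0], -c[1]))

sms = [
    (10, 0x0005AD0000010203),
    (10, 0x0005AD0000010201),   # same priority, lower GUID -> wins
    (5,  0x0005AD0000010100),
]
print(hex(elect_master(sms)[1]))  # -> 0x5ad0000010201
```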

Clusters 2.0 Subnet Manager:
Fabric Sweep Performance

Number of Hosts | Time
32     | < 1 sec
64     | < 1 sec
128    | 2 sec
256    | 4 sec
512    | 22 sec
1,024* | 35-40 sec
2,048* | 1-1:30 min**
4,096* | 5-7 min**

* Requires HPC Subnet Manager for this performance
** Estimated based on simulation

• Assumes InfiniSwitch-III based two-tier topology
• Embedded SM can handle up to 1,024 nodes
IB Addressing

• 3 addresses: GUID, GID, LID
• GUID
Global Unique ID, 64 bits in length
Used to uniquely identify a port or port group
The HCA and each of its ports has a GUID
(e.g. 00:05:ad:00:00:01:02:03)
• GID
GUID plus subnet prefix
Used for host lookup on a subnet
Used for inter-subnet IB routing (future)
(e.g. fe:80:00:00:00:00:00:00:00:05:ad:00:00:01:02:03)
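The GID construction above (64-bit subnet prefix concatenated with the 64-bit GUID) can be reproduced directly; this sketch formats the slide’s example GUID under the default link-local prefix fe80::/64:

```python
# A GID is 128 bits: 64-bit subnet prefix followed by the 64-bit port GUID.
def make_gid(subnet_prefix: int, guid: int) -> str:
    raw = ((subnet_prefix << 64) | guid).to_bytes(16, "big")
    return ":".join(f"{b:02x}" for b in raw)

gid = make_gid(0xFE80_0000_0000_0000, 0x0005_AD00_0001_0203)
print(gid)  # -> fe:80:00:00:00:00:00:00:00:05:ad:00:00:01:02:03
```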
IB Addressing

• LID
Local ID
Assigned by the SM to define a switchable endpoint in the network
Subnet-local address
• Queue Pair
In conjunction with the LID, defines send/receive queues for end-to-end context
Similar to a socket on an IP port
Process address within the host

Server Switch Applications

• Server Clustering
– High Performance Computing (HPC)
– “Enterprise-Class” HPC
– Database Scalability
• I/O Virtualization
– I/O Consolidation
– I/O Aggregation
– Server Consolidation
• Utility Computing
– Application Provisioning
– Server Re-purposing
– Server Migration

I/O Gateways for Network and Storage
Eliminating Technology Islands
InfiniBand switches for cluster interconnect
• Twelve 10Gbps InfiniBand ports per switch card
• Up to 72 total ports with optional modules

Single InfiniBand link per server for:
- Storage
- Network
• Single fat pipe to each server for all network traffic

Cisco SFS 3012, Cisco MDS 9000 Series, Cisco Catalyst 6500 Series

Fibre Channel to InfiniBand gateway for storage access
• Two 2-Gbps Fibre Channel ports per gateway
• Creates a 10-Gbps virtual storage pipe to each server

Ethernet to InfiniBand gateway for LAN access
• Six Gigabit Ethernet ports per gateway
• Creates a virtual GigE pipe to each server

Ethernet Gateway Architecture

• Gateway ports
The gateway ports are the two internal 10Gbps InfiniBand ports that connect the gateway to the InfiniBand network; they are often referred to as internal ports.

Packet Flow with Ethernet Gateway

[Diagram: an industry-standard server connects over a 10Gbps IB link to an SFS 3012 with Ethernet gateway, which connects over a GigE link through an Ethernet switch to a desktop. A single TCP session runs end to end: on the server, TCP/IP runs over IP-over-IB (in software above the verbs API); the gateway bridges the IB transport, network, link, and physical layers to Ethernet in hardware; the desktop terminates the session over standard Ethernet]

TCP/IP layers in software; InfiniBand and Ethernet layers in hardware

Ethernet Gateway Port Aggregation

• Support for Ethernet trunk groups (link aggregation)
• Maximum of 6 ports allowed in one trunk
• A trunk cannot span gateways
• Only static trunk configuration; PAgP and LACP are not supported

Fibre Channel Gateway Topology

How the FC gateway works
The FC gateway dynamically allocates world-wide node names (WWNNs) and world-wide port names (WWPNs) to InfiniBand hosts to emulate Fibre Channel-attached hosts.

The FC gateway logs into the fabric using FC-AL; no other mode (such as E_Port functionality) is supported yet.

To the SAN, the Fibre Channel gateway and InfiniBand hosts appear as groups of hosts on a Fibre Channel hub, but with the dedicated-bandwidth advantages of a switch-based architecture.

The FC gateway translates between the Fibre Channel Protocol (FCP) of the SAN and the SCSI RDMA Protocol (SRP) of the IB hosts. In this way, SANs and InfiniBand-attached hosts communicate seamlessly. SAN management tools recognize IB and FC devices alike in Fibre Channel terms, which permits all management paradigms and security infrastructures to operate normally. After the FC gateway assigns WWNNs and WWPNs to initiators, you must configure access control policies to associate FC-attached LUNs with initiators.

RDMA and Upper Layer Protocols

Current NIC Architecture

[Diagram: data arriving at the NIC crosses the host interconnect into an OS buffer, then is copied to the application buffer in system memory]

• Data traverses the bus 3 times
• Multiple context switches rob CPU cycles from actual work
• Memory bandwidth and per-packet interrupts limit max throughput
• The OS manages the end-to-end communications path
With RDMA and OS Bypass

[Diagram: the HCA DMAs data across the host interconnect directly into the application buffer, bypassing the OS buffer]

• Data traverses the bus once, saving CPU and memory cycles
• Secure memory: memory transfers with no CPU overhead
• PCI-X/PCI-E becomes the bottleneck for network data transmission
• The HCA manages remote data transmission

Kernel Bypass

Traditional model: Application → sockets layer → TCP/IP transport → driver → hardware (crossing the user/kernel boundary at the sockets layer)

Kernel bypass model: Application → RDMA ULP → hardware; the sockets layer and TCP/IP transport remain available, but the data path bypasses them

Upper Layer Protocols

• A variety of software protocols handle high-speed communication over RDMA
• Protocols include
IP-over-InfiniBand – IETF http://www.ietf.org/internet-drafts/draft-ietf-ipoib-ip-over-infiniband-09.txt
SDP – InfiniBand Trade Association http://infinibandta.org
SRP – ANSI T10 http://www.t10.org/ftp/t10/drafts/srp/srp-r16a.pdf
DAPL – DAT Collaborative http://www.datcollaborative.org
MPI – MPI Forum http://www.mpi-forum.org

IP over InfiniBand

• IETF draft specification


• Leverages InfiniBand Multicast for broadcast
requirements (ARP)
• Supports TCP, UDP, IP Multicast

Sockets Direct Protocol

• STREAM sockets over InfiniBand Reliable Connections
• TCP offload function for IB-attached devices
• Can be used by a TCP application without rebuilding the application
• An asynchronous I/O model is also available with true RDMA forwarding; it requires an application rewrite
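Because SDP preserves STREAM-socket semantics, ordinary socket code needs no changes; the sketch below is plain TCP over loopback, but these are exactly the calls SDP would transparently carry over an IB reliable connection:

```python
import socket
import threading

# Ordinary STREAM-socket client/server. SDP's value is that code like this
# can run over InfiniBand reliable connections without a rewrite; here it
# simply runs over loopback TCP for illustration.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def echo_once():
    conn, _ = srv.accept()
    conn.sendall(conn.recv(64))     # echo back whatever arrives
    conn.close()

t = threading.Thread(target=echo_once)
t.start()

cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"unchanged sockets code")
reply = cli.recv(64)
print(reply)                        # -> b'unchanged sockets code'
cli.close(); t.join(); srv.close()
```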

SCSI RDMA Protocol

• SCSI Semantics over RDMA fabric


• Not IB specific
• Host drivers tie into standard SCSI/Disk interfaces
in kernel/OS
• Can be used for end-to-end IB storage
(implemented today!)

Direct Access Provider Library

• Two variants: User DAPL (uDAPL) and Kernel DAPL (kDAPL)
• RDMA-semantics API
• Provides a low-level interface for application-direct or kernel-direct RDMA functions (memory pinning, key exchange, etc.)

Message Passing Interface

• MPI is the de facto standard API for parallel computing applications
• RDMA capabilities were added via a set of patches to the base MPI code (MPICH, one of many available MPI libraries), initially developed at Ohio State University
http://nowlab.cis.ohio-state.edu/projects/mpi-iba/

APIs and Performance
[Diagram: API stacks over 1GE and 10G IB: BSD sockets over TCP/IP (1GE), IPoIB, SDP (with async I/O extension), SRP, uDAPL (Direct Access / TS-API), MPI]

Throughput: 0.8Gb/s | 1.4Gb/s | 3.6Gb/s | 6.2Gb/s | 6.4Gb/s | 6.4Gb/s
Latency: > 40 usec | 30 usec | 18 usec | 18 usec | 8 usec | 5 usec (optimised)

InfiniBand Protocol Summary

IPoIB (IP over InfiniBand)
Summary: Allows TCP/IP applications to run over the InfiniBand transport. Provides server-to-server and in-band management traffic from the management station to switches and HCAs.
Application example: Standard IP-based applications. When used in conjunction with the Ethernet gateway, allows connectivity between the IB network and the LAN.

uDAPL (Direct Access Programming Library)
Summary: Allows applications to take maximum advantage of RDMA benefits through a flexible programming API. Requires custom development.
Application example: Used for IPC communication between cluster nodes for Oracle RAC.

SDP (Sockets Direct Protocol)
Summary: Adds RDMA benefits transparently to sockets-based applications. Can be configured for all sockets applications or on a per-port or per-application basis.
Application example: Communication between database nodes and application nodes, as well as between database instances.

SRP (SCSI RDMA Protocol)
Summary: Allows InfiniBand-attached servers to utilize block storage devices.
Application example: When used in conjunction with the Fibre Channel gateway, allows connectivity between the IB network and the SAN.

MPI (Message Passing Interface)
Summary: Low-latency protocol used widely in HPC environments.
Application example: HPC applications.

IB Glossary

• IB – InfiniBand Architecture (not InfinityBand)
• HCA – Host Channel Adapter (NIC)
• RDMA – Remote Direct Memory Access
• SM – Subnet Manager (management process)
• SRP – SCSI RDMA Protocol
• SDP – Sockets Direct Protocol
• TCA – Target Channel Adapter (gateway)

High Performance Computing
Architectures

High Performance Computing Applications

• Parallel processing applications
Closely coupled:
Finite Element Analysis (crash simulation)
Fluid Dynamics (injection molding)
Loosely coupled:
Dataset searches (terabyte-to-petabyte datasets)
Monte-Carlo simulation (10,000s of repetitions)
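The loosely coupled pattern above works because repetitions are independent; a toy Monte-Carlo estimate of π shows the scatter/reduce shape (a single-process sketch; on a cluster each worker would run on its own node, typically under MPI):

```python
import random

def worker(n_trials: int, seed: int) -> int:
    # One node's share of independent repetitions: count random points
    # that fall inside the unit quarter-circle.
    rng = random.Random(seed)
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(n_trials))

# "Scatter" the repetitions across workers, then "gather" and reduce.
parts = [worker(25_000, seed) for seed in range(4)]
pi_estimate = 4 * sum(parts) / 100_000
print(pi_estimate)  # close to 3.14159
```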

High Performance Computing Networks

• Two Standards Based Technologies


Gigabit Ethernet/10 Gigabit Ethernet
InfiniBand
• Multiple Uses
HPC interconnect
Storage traffic
Load/Unload data movement
Application/Systems management

Case Study: University:
468-Node HPC Cluster

InfiniBand fully non-blocking research cluster

Research cluster running parallel MPI applications, using OSCAR for cluster management
Dell blade servers, PCI-E
RH AS 3.1 Linux, 2.4 kernel

Case Study: Large Wall Street Bank

• Application:
Build scalable “on-demand” compute
grid for financial applications
• Environment
512 Intel Servers per slice (goal: 10,000
nodes)
RedHat Linux AS2.1
SFS Server Switch with Ethernet and
Fibre Channel Gateways
Hitachi RAID Storage
Brocade SAN Switches
Cisco Ethernet Switches

• Benefits:
20X Price/Performance Improvement over four years
30-50% Application Performance Improvement
Standards-based solution for on-demand computing
Environment that scales with multiple 500 node building blocks

Provisioning the Virtual Data Center

What is Virtualization?
Cisco’s Definition

Virtualization is provisioning
unique combinations of data center resources
(e.g. compute, storage, I/O, network, peripherals, etc.)
independent of their physical architecture
to deliver business and application services
on-demand
(e.g. security, performance, availability, accounting, etc.)

What is VFrame™?

• Cisco’s data center-wide virtualization software suite


• Delivers the end-to-end manageability, control, and
virtualization benefits of the mainframe on top of
today’s commodity components and the Cisco IIN
• Provides virtualization, orchestration, and provisioning
for the data center resources that sit between the “OS”
and the “wire”

Data Center Evolution
The 70’s: Mainframe

Management

Compute I/O Storage

Data Center Evolution

The 80’s: Distributed Computing

[Diagram: four independent systems, each with its own compute, storage, and I/O, plus separate system and network management]

Data Center Evolution

The 90’s: Commoditization


[Diagram: multiple stacks of compute, I/O, and storage, each with its own management, security, and network]

Data Center Evolution

Today: Horizontal Data Center


Policies and provisioning tools (e.g. Tivoli) sit above per-layer management silos (App/OS, servers, security, storage, Network L4-7, Network L2-3), each managed separately

Piecemeal, repetitive, inefficient

Data Center Evolution
VFrame = Vertical Data Center Provisioning
VFrame™: Virtualization, Orchestration, Provisioning
Policies and APIs (e.g. Tivoli) drive VFrame, which manages each layer: App/OS, Compute, Security, Storage, Network (L4-7), Network (L2-3)

Application-centric, service-oriented, end-to-end

Compute Networking and Virtualization —
How Does It Work?

• The server is “taken apart” into its basic components: I/O, applications, compute power, and storage
• The fabric re-assembles pools on demand to create “virtual servers” out of these components
• Unified over an Intelligent Interconnect Fabric

[Diagram: a resource pool of applications, server processing, I/O, and storage, plus a stand-by pool, assembled by the fabric into virtual servers]
VFrame 3.0
Available Today

Features
• Unified fabric via InfiniBand
• Diskless servers with Linux and Windows support
• Scalable to 128 nodes per Director
• Full high availability
• Robust policy management

Policy 1: Failover
• A physical server fails
• VFrame detects the fault
• VFrame programs the SFS 3012 to map a standby physical server to the LUN and virtual I/O
• The new server restarts in the new group

[Diagram: VFrame policy engine drives an SFS 3012 connecting server groups 1 and 2 plus a standby pool over IB to the SAN and campus/WAN/VPN in a data center grid]

VFrame 3.0
Available Today

Features
• Unified fabric via InfiniBand
• Diskless servers with Linux and Windows support
• Scalable to 128 nodes per Director
• Full high availability
• Robust policy management

Policy 2: Quickly Redeploy Server(s)
• VFrame programs the SFS 3012 to remap physical servers at 5pm
• Servers restart in the new group with new SAN mappings

VFrame 3.0
Available Today

Features
• Unified fabric via InfiniBand
• Diskless servers with Linux and Windows support
• Scalable to 128 nodes per Director
• Full high availability
• Robust policy management

Policy 3: Add Capacity On-Demand
• Application monitors issue triggers to VFrame
• VFrame programs the SFS 3012 to add new servers from the standby pool
• Standby servers restart in the new group with new mappings

VFrame Vision
Cisco Virtual Data Center

Policy attributes: Application (SAP), Image, Performance, Security, Availability, Accounting

• The data center administrator defines application services and passes policy to VFrame
• VFrame translates policies into actions and passes them to the infrastructure
• VFrame identifies the right App/OS image from storage
• VFrame picks a server with the right criteria to run the application and boots the server
• VFrame gives the new server the right VLAN and LUN info so it can find, and be found by, the right clients and storage
• VFrame provisions security policies to the FWSM firewall
• VFrame provisions the CSM to add the new server to the load-balancing pool
• Application service provisioned!

[Diagram: VFrame, via the SFS 3012, orchestrates the server grid (IB), a Catalyst 6500 with CSM load balancer and FWSM firewall (GigE), an MDS 9500 SAN (FC), and the campus/WAN/VPN]

VFrame Benefits

• Manage the data center from a service-oriented, application-centric perspective
• Reduce the number of layers/devices that must be touched to provision or modify
• Treat the entire data center infrastructure (from the “OS” to the “wire”) as one manageable entity of shared virtualized resources (a virtual mainframe)
• Expose a single orchestration and provisioning interface for all data center infrastructure
• Dramatically reduce TCO

Cisco SFS Solution Summary

• The scale and performance of high-end servers with the ease and economics of commodity Linux/Intel building blocks
• Significant server and/or I/O consolidation
• Tremendously enhanced performance between servers and storage at lower cost
• Dramatically improved server utilization
• Provides a path to Grid/Utility/On-Demand Computing

