
Prasad Kularatne

Objectives
Understanding the key technologies behind storage networking, along with implementation details and administration and management aspects

We will start by understanding the need for networked access to storage and typical storage networking architectures and protocols

We will also discuss the common applications of the different storage networking technologies

We will conclude with typical implementation, administration and management aspects in a networked storage environment

Traditional data storage landscape


Traditionally, storage access was considered the sole responsibility of the server operating system

Direct Attached Storage (DAS) access model

Dedicated server-storage connectivity was considered the only way to ensure reliability, performance and integrity in data transfer

Parallel SCSI / ESCON, with limited device access, was the storage interconnect

The result was a build-up of islands of storage across the enterprise as demand for new applications grew

Multiple management and administration mechanisms

Significant under-utilization of storage resources due to non-sharing of storage resources

Need for networking storage

Source: IBM SAN Training: Introduction to Storage Area Networks 2004/11

Emergence of storage networking


The robustness, reliability and performance of the parallel SCSI/ESCON world were good, but flexibility, resource sharing and dynamism were the need of the day

Need for a protocol that marries the best of both worlds

Emergence of Fibre Channel: a protocol to carry SCSI commands and data over a robust, reliable and high-performance networking transport

Fibre Channel gave birth to today's Storage Area Networks (SANs)

SAN: a managed, high-speed network that enables any-to-any interconnection of heterogeneous servers and storage systems, allowing organizations to exploit the value of their business information through universal access and sharing of resources

Fibre Channel: Best of both worlds

Source: Brocade Product Training: Why Fibre Channel Revision 0.1_FC101_2003

What is Fibre Channel (FC)?


A standard: an ISO/IEC standard for flexible, serial data transport over long distances

High performance and speed: hardware-based transport for high performance (4 Gbps / 8 Gbps)

Low latency: port-to-port delay of a Fibre Channel Storage Area Network is less than 2 µs

Long distance: maximum native distance of 10 km, and up to 300 km with distance extenders

Robust data integrity: the 8b/10b encoding scheme provides robust data integrity, and the FC bit error rate (BER) is 10^-12

Large connectivity: support for a theoretical maximum of 16M devices per SAN fabric; support for heterogeneity, multiple media types and topologies
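
Two of these figures are easy to sanity-check with a back-of-the-envelope calculation. The sketch below is an illustration only, assuming an 8 Gbps line rate; it shows roughly one bit error every couple of minutes at a BER of 10^-12, and the 16M device figure falling out of the 24-bit fabric address space discussed later.

# Back-of-the-envelope check of two figures quoted above (illustrative only).
line_rate_bps = 8e9          # assumed 8 Gbps link
ber = 1e-12                  # quoted Fibre Channel bit error rate
errors_per_second = line_rate_bps * ber
print(f"Mean time between bit errors: {1 / errors_per_second:.0f} s")   # ~125 s

fabric_addresses = 2 ** 24   # 24-bit Port ID address space
print(f"Theoretical fabric addresses: {fabric_addresses:,}")            # 16,777,216 (~16M)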

Fibre Channel Layered protocol stack

Source: IBM SAN Training: Introduction to Storage Area Networks 2004/11

Fibre Channel Layers (FC-0 -> FC-2)


The FC-0 and FC-1 layers specify the physical and data link functions needed to physically send data from one port to another

FC-0 defines media connections and cables (speeds and feeds)

FC-1 defines 8b/10b encoding, link control and ordered sets

FC-2 specifies the content and structure of information, along with how information delivery is controlled and managed

FC-2 defines exchange and sequence management, frame structure, Classes of Service and flow control mechanisms

The FC-2 level concerns itself with the definition of functions within a single port (functions spanning multiple ports belong to FC-3)

FC-0: Media types

Source: SNIA Tutorial: Fibre Channel Technologies, 2005

FC-0: Fibre Channel Adapters


Often called FC Host Bus Adapters (HBAs)

Support FC speeds of 2, 4 and 8 Gbps

Support all FC topologies

Perform the entire Fibre Channel processing (FC-0 through FC-4)

Equipped with an on-board RISC processor

Support heterogeneous operating systems

Mostly 64-bit PCI-X or PCI Express versions

Should be installed on a high-speed bus, with multiple adapters preferably on separate busses

Industry best practice: tape and disk traffic should not be on the same HBA
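
As a practical aside, here is a minimal sketch of how installed HBA ports can be listed on a Linux host through the fc_host sysfs class. It assumes a Linux host with an FC HBA driver loaded; the attributes read are the standard fc_host sysfs entries.

# Minimal sketch: enumerate FC HBA ports on a Linux host via the fc_host sysfs class.
# Assumes an FC HBA driver is loaded so /sys/class/fc_host is populated.
import glob, os

def read_attr(host_dir, name):
    try:
        with open(os.path.join(host_dir, name)) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

for host in sorted(glob.glob("/sys/class/fc_host/host*")):
    print(os.path.basename(host),
          "WWPN:", read_attr(host, "port_name"),
          "speed:", read_attr(host, "speed"),
          "state:", read_attr(host, "port_state"))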

Fibre Channel Addressing


Each Fibre Channel adapter port has a burnt-in, world-wide unique 64-bit address

World Wide Port Name (WWPN)

Not used for communication between nodes

Used for uniquely identifying the FC port in the fabric

Used when assigning storage resources

All communications use a 3-byte (24-bit) Port Identifier (Port ID)

Assigned by the fabric when an initiator or target logs in

Consists of Domain Address : Area Address : Port Address
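
To make the Domain : Area : Port structure concrete, here is a minimal sketch of splitting a 24-bit Port ID into its three bytes; the example value is made up for illustration.

# Minimal sketch: split a 24-bit FC Port ID into its Domain/Area/Port bytes.
def decode_port_id(port_id: int) -> dict:
    return {
        "domain": (port_id >> 16) & 0xFF,   # Domain Address byte
        "area":   (port_id >> 8) & 0xFF,    # Area Address byte
        "port":   port_id & 0xFF,           # Port Address byte
    }

print(decode_port_id(0x010400))   # -> {'domain': 1, 'area': 4, 'port': 0}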

Fibre Channel topologies


Switched Fabric: 24-bit address space; multiple high-speed concurrent connections with cut-through forwarding; the basis of an FC Storage Area Network

Arbitrated Loop: up to 127 devices sharing a loop; its use is diminishing; typical use case: back-end adapters to Fibre Channel disk drives

Point-to-Point: dedicated connection between two nodes; used for Direct Attached Storage (DAS)
Source: SNIA Tutorial: Fibre Channel Technologies current and future, 2009

FC-0: Fibre Channel Fabric


1, 2, 4 or 8 Gbps full-duplex ports

Under 1 µs latency; latency is often measured in ns

Implements all the fabric services required for an FC SAN to function:

Fabric Controller service (controls the switching function)

Name Service

Zone and Alias service

Login service

Fibre Channel SANs are almost fully auto-configurable

Available as switches of up to 64 ports

Beyond 64 ports, FC directors are used, due to the increased port density and the need for very high availability

FC-2: Classes of Service (CoS)


Class 1: a path is reserved from source to destination; a dedicated link for the data flow

Class 2: connectionless communication with end-to-end acknowledgements

Class 3: connectionless communication with no end-to-end acknowledgements

Class 4: similar to Class 1, except that only a fraction of the bandwidth is reserved

Class 6: multicasting service

Class F: connectionless class used for control traffic between the switches of the fabric

FC-2: Common CoS - Class 2

Note the presence of end-to-end acknowledgements


Source: SNIA Tutorial: Fibre Channel Technologies, 2005

FC-2: Common CoS - Class 3

Note the lack of end-to-end acknowledgements


Source: SNIA Tutorial: Fibre Channel Technologies, 2005

FC-2: Flow control

Source: SNIA Tutorial: Fibre Channel Technologies, 2005

FC-2: Flow Control


Fibre Channel employs a credit-based flow control mechanism

When two devices establish a connection, each device gives the other a number of credits indicating the number of Fibre Channel buffers available for that connection

The sender decrements its credit count by one each time it transmits a frame

When the receiver has received and processed a frame, it sends a Receiver_Ready (R_RDY) signal

The sender increments its credit count by one each time it receives an R_RDY from the receiver

This flow control mechanism operates between a node and the fabric (buffer-to-buffer flow control) and between the end points (end-to-end flow control)
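
A toy sketch of the credit mechanism described above (not real FC code; the credit count and frame names are made up for illustration):

# Toy model of credit-based flow control: a frame may only be sent while
# credits remain; each R_RDY from the receiver restores one credit.
class Sender:
    def __init__(self, credits: int):
        self.credits = credits            # advertised by the receiver at login

    def send(self, frame, receiver) -> bool:
        if self.credits == 0:
            return False                  # must wait for an R_RDY before sending
        self.credits -= 1                 # one credit consumed per transmitted frame
        receiver.receive(frame, self)
        return True

    def on_r_rdy(self):
        self.credits += 1                 # receiver freed a buffer

class Receiver:
    def receive(self, frame, sender):
        print("processed", frame)         # frame handled, buffer freed
        sender.on_r_rdy()                 # signal Receiver_Ready back to the sender

sender, receiver = Sender(credits=2), Receiver()
for frame in ["frame-1", "frame-2", "frame-3"]:
    sender.send(frame, receiver)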

Fibre Channel Layers (FC-3 & FC-4)


The FC-3 level deals with functions that span multiple ports:

Striping FC data across multiple FC ports

Multicasting

FC-4 provides the mapping of Fibre Channel capabilities to pre-existing protocols, such as IP, SCSI or ATM

The mapping of SCSI onto Fibre Channel is the native mapping: FCP

FC-4: Fibre Channel mapping to SCSI


SNIA definition of Fibre Channel: a serial I/O interconnect capable of supporting multiple protocols, including access to open systems storage (FCP), access to mainframe storage (FICON), and networking (TCP/IP)

We will discuss Fibre Channel (FC) only as a SCSI I/O interconnect technology

Fibre Channel Protocol (FCP) is the mapping of the serial SCSI command protocol onto the Fibre Channel interconnect

SCSI Protocol Service Model

Application Layer: the SCSI application protocol used by a client to issue a request for a SCSI I/O operation (command) and by the server to respond

Transport Layer: the communication protocol through which client and server communicate with each other

Interconnect Layer: the signalling, framing and flow control subsystem needed for the physical transfer of data from sender to receiver
Source: SNIA Tutorial: SCSI The protocol for all storage architectures, 2004

FC and FCP in SCSI protocol model

Fibre Channel (FC) is the interconnect

Fibre Channel Protocol (FCP) is the communication protocol that maps the SCSI command set onto the Fibre Channel interconnect standard


Source: SNIA Tutorial: SCSI The protocol for all storage architectures, 2004

Fibre Channel Access Control

Zoning types:

Port zoning (hard zoning)

WWN zoning (soft zoning)

Mixed zoning
Source: SNIA Tutorial: Fibre Channel Technologies, 2005
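
A toy sketch of WWN (soft) zoning semantics: two ports may communicate only if they share at least one zone in the active zone set. The zone names and WWPNs below are made up for illustration.

# Toy model of WWN-based zoning: membership in a common zone is what permits traffic.
active_zoneset = {
    "zone_db_server":   {"10:00:00:00:c9:aa:bb:01", "50:06:01:60:3b:00:00:01"},
    "zone_mail_server": {"10:00:00:00:c9:aa:bb:02", "50:06:01:60:3b:00:00:01"},
}

def can_communicate(wwpn_a: str, wwpn_b: str) -> bool:
    return any(wwpn_a in members and wwpn_b in members
               for members in active_zoneset.values())

print(can_communicate("10:00:00:00:c9:aa:bb:01", "50:06:01:60:3b:00:00:01"))  # True
print(can_communicate("10:00:00:00:c9:aa:bb:01", "10:00:00:00:c9:aa:bb:02"))  # False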

High Availability in FC SANs

Multi-pathing and path fail-over (Active/Passive, Active/Active)

Full redundancy


Source: SNIA Tutorial: Fibre Channel Technologies current and future, 2009
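
A toy sketch of active/passive path fail-over for a multipathed LUN. The path names are made up; in practice a multipath driver such as Linux DM-Multipath performs this transparently.

# Toy model of active/passive path selection with fail-over to a standby path.
paths = [{"name": "hba0 -> controller A", "state": "active"},
         {"name": "hba1 -> controller B", "state": "standby"}]

def pick_path() -> str:
    for p in paths:
        if p["state"] == "active":
            return p["name"]
    raise RuntimeError("no usable path to the LUN")

def fail_path(name: str) -> None:
    for p in paths:
        if p["name"] == name:
            p["state"] = "failed"
        elif p["state"] == "standby":
            p["state"] = "active"      # promote the standby path

print(pick_path())                     # hba0 -> controller A
fail_path("hba0 -> controller A")
print(pick_path())                     # hba1 -> controller B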

Why IP SANs?
Leverage the many benefits of field-proven TCP/IP-based networks to transport storage traffic:

Ubiquitous technology

Universal connectivity

Proven and common skill-set

Significant and continuing R&D efforts

10 Gbps Ethernet has emerged, with a roadmap extending to 40 Gbps

Challenge: Ethernet and TCP/IP were not developed for bulk traffic types such as storage traffic

IP SAN Protocols
iSCSI
Maps the SCSI command set to TCP/IP

Establishes and manages connections between hosts and IP-based storage devices

FCIP

Tunneling protocol to carry Fibre Channel frames over wide-area distances

Use case: connecting two geographically distributed FC SANs

iFCP

TCP/IP-based protocol for interconnecting Fibre Channel devices or Fibre Channel SANs using IP routing and switching elements in place of Fibre Channel fabrics

iSCSI in SCSI protocol model

TCP/IP over Ethernet is the interconnect

iSCSI is the communication protocol that maps the SCSI command set onto TCP/IP over Ethernet


Source: SNIA Tutorial: SCSI The protocol for all storage architectures, 2004
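
To illustrate the layering, here is a conceptual sketch only (not the real iSCSI wire format; the command, addresses and tag values are made up): a SCSI command is carried inside an iSCSI PDU, which in turn rides on TCP/IP over Ethernet.

# Conceptual nesting of the layers: SCSI command -> iSCSI PDU -> TCP -> IP.
scsi_command = {"opcode": "READ(10)", "lba": 0x1000, "blocks": 8}     # made-up request

iscsi_pdu   = {"initiator_task_tag": 1, "lun": 0, "cdb": scsi_command}
tcp_segment = {"dst_port": 3260, "payload": iscsi_pdu}                # 3260: iSCSI default port
ip_packet   = {"dst_ip": "192.0.2.10", "payload": tcp_segment}        # example target address

print(ip_packet)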

Comparison of IP SAN protocols

Source: SNIA Tutorial: IP Storage Part II, 2001

IP Storage protocol stacks

Source: SNIA Tutorial: Clearing the confusion primer on Internet Protocol Storage Part II, 2006

Pooling of storage resources


Diagram: application, mail and file servers each see their own LUN over the FC SAN under OS-specific device names (e.g. hdisk2, c6t4d1, /dev/sda, \\Physical Disk 1). Inside the SAN storage array, dual controllers (A and B) present RAID 1+0 (mirror + striping) groups sliced into per-host LUNs (Solaris_LUN, Windows_LUN, AIX_LUN, Linux_LUN), each with a mirror copy.

LAN-Free Backups
Diagram: application, mail and file servers remain connected to the LAN, but their backup traffic flows directly over the FC SAN from the SAN storage array to the tape library; a centralized backup/recovery server handles tape library control and backup management, keeping bulk backup data off the LAN.

Storage based IT Disaster Recovery


Diagram: at the production site, application, mail, file & print and backup servers (System x3650 / x3800 class hosts) attach to an FC SAN with a DS4800-based SAN storage array (TotalStorage EXP710 expansion enclosures). Storage-array-based data replication runs over an IP WAN through FC-to-IP converters to a disaster recovery (DR) site, which mirrors the server roles (Application, Mail, File & Print and Backup servers [DR]) and hosts a second FC SAN and SAN storage array.

File I/O vs. Block I/O

Source: IBM SAN Training: Introduction to Storage Area Networks 2004/11

Fibre Channel SAN: Block I/O


Diagram: PC clients and heterogeneous servers and clients sit on an Ethernet TCP/IP network; the database and mail servers access shared storage resources over the FC SAN using block I/O.

Block I/O: a request for a set of blocks, given a starting block address and the number of blocks to be accessed; the host directly addresses the storage device.

The FC SAN shares storage resources.
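
A minimal sketch of what block I/O looks like from the host side: read a number of blocks starting at a given block address directly from a block device. The device path and block size below are assumptions; a SAN LUN typically appears to the OS as an ordinary block device.

# Minimal sketch: block I/O addresses the device by block number, not by file name.
import os

BLOCK_SIZE = 512                                   # assumed bytes per block

def read_blocks(device: str, start_block: int, count: int) -> bytes:
    fd = os.open(device, os.O_RDONLY)
    try:
        return os.pread(fd, count * BLOCK_SIZE, start_block * BLOCK_SIZE)
    finally:
        os.close(fd)

# Example (needs sufficient privileges and an existing device, e.g. a SAN LUN):
# data = read_blocks("/dev/sdb", start_block=2048, count=8)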

File Server: File I/O


Diagram: PC clients and heterogeneous servers and clients on the Ethernet TCP/IP network access a file server (NAS) using common network protocols such as CIFS or NFS; the NAS translates the file I/O into block I/O against its own storage.

File I/O: a request for a specific file to be accessed; the client does not directly address the storage device, that is done by the NAS.

The NAS shares files.
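
A minimal sketch of file I/O from the client's point of view: the client names a file on a NAS share and reads it, while the NAS performs the underlying block I/O. The mount point and file name below are assumptions (e.g. an NFS or CIFS mount).

# Minimal sketch: file I/O names a file, not device blocks; the NAS does the block I/O.
from pathlib import Path

share = Path("/mnt/nas_share")              # assumed NFS/CIFS mount point

def read_file(name: str) -> bytes:
    return (share / name).read_bytes()      # request is "give me this file"

# Example (requires the share to be mounted):
# content = read_file("quarterly_report.txt")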

What is a NAS?
A task-optimized, high-performance storage appliance directly attached to IP networks, providing file serving to clients and servers in heterogeneous environments (SNIA definition)

Task-optimized appliance: usually runs a special-purpose operating system optimized for file-level access protocols such as CIFS (Windows) and NFS (Unix)

Fault tolerance and high-availability features are inherent in the appliance design

Scalable to multi-terabyte capacities

Supports multiple Gigabit Ethernet or 10 GigE interfaces

Supports backing up its data to a directly attached tape library/drive using the industry-standard NDMP protocol
