
Learning Objectives

Basic Storage Technology
SCSI & Disk Technology
Concept of RAID and different RAID levels
FC SAN
IP Storage & NAS
Virtualization & Storage Management

Today's Storage Technologies

Magnetic storage
Optical storage
Solid state storage

Magnetic Storage: Cassette Tape

Slow write and retrieval speeds, but inexpensive and portable.
Digital Audio Tape (DAT) is able to store from 2 to 24 GB of data on a tape about the size of a credit card.
Digital Linear Tape (DLT) is generally faster and able to hold 200 GB or more per cassette.
Primary use is data backup or archival, especially for servers.

Magnetic Storage: Hard Drives

In 1957, IBM developed the first hard drive as part of their RAMAC system. It required 50 24-inch disks to store 5 megabytes of data and cost ~$35,000/year to lease.
Rapid growth in the late '80s and early '90s helped fuel demand for cheap storage devices.
Hard drives are the most common storage device in use today.

IBM Hard Drive Evolution


Solid State Devices


Known as flash memory.
Reliable in portable environments and silent: no moving parts, no spin-up needed.
Faster start-up.
Extremely low read latency and deterministic read performance: performance does not depend on the location of the data.

SCSI

Small Computer System Interface (SCSI) is a standard that defines:

A command set
A protocol for transactions
A physical interface

SCSI is block-oriented, i.e. the host's operating system sees the storage devices as contiguous sets of fixed-size data blocks. For example, a READ command addresses data by logical block address (LBA) and block count rather than by file name.
SCSI Configuration: Single Initiator, Single Target

[Diagram: an initiator connected over the SCSI bus to a target that presents LUN 0, LUN 1, and LUN 2]

SCSI Block Diagram


[Diagram: the SCSI bus controller sits on the system (I/O) bus and drives a SCSI port; I/O devices are daisy-chained along the SCSI bus, which ends in a terminator]

Magnetic Disk

Purpose:
Long term, nonvolatile storage
Lowest level in the memory hierarchy: slow, large, inexpensive

General structure:
A rotating platter coated with a magnetic surface
A moveable read/write head to access the information on the disk
Each surface is divided into tracks and each track into sectors; a cylinder is all the tracks under the heads at a given arm position on all surfaces

Typical numbers:
1 to 4 platters (2 surfaces each) per disk, 1 to 3.5 inches in diameter
Rotational speeds of 5,400 to 15,000 RPM
10,000 to 50,000 tracks per surface
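As a rough illustration of how these geometry numbers translate into capacity, here is a minimal Python sketch; the platter count, track count, sectors per track, and sector size below are hypothetical values, not figures from the slides:

```python
def disk_capacity_bytes(platters, tracks_per_surface, sectors_per_track, bytes_per_sector=512):
    """Approximate raw capacity from disk geometry (assumes a constant number of
    sectors per track, which real zoned disks do not have)."""
    surfaces = platters * 2                      # each platter has two recording surfaces
    return surfaces * tracks_per_surface * sectors_per_track * bytes_per_sector

# Hypothetical example: 2 platters, 50,000 tracks per surface, 1,000 sectors per track
print(disk_capacity_bytes(2, 50_000, 1_000) / 1e9, "GB")   # ~102.4 GB
```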

Disk Platters

The storage resides on multiple platters.

Each platter has two recording surfaces.
Shown here is a rather old disk with 3 platters (six surfaces).

Disk Heads

Each platter has 2 read/write heads (one per surface).
They move over the surfaces (in and out). This is called seeking. The time it takes is the seek time.


Tracks and Sectors

Each surface is partitioned into a set of tracks, and each track is partitioned into a set of sectors.

Unlike in this picture, the number of sectors per track is not constant on most disks: outer tracks hold more sectors (a technique called zone bit recording).

Disk I/O Overview


To access data:
seek: position the head over the proper track (3 to 14 ms average)
rotational latency: wait for the desired sector to rotate under the head (on average half a rotation, i.e. 0.5/(RPM/60) seconds)
transfer: read the data (one or more sectors) at 30 to 80 MB/sec

Calculating Access Time

Disk access time for your (sector of) data is:
Wait time (for the disk to free up from previous requests) +
Seek time (to move the head to the right track); usually an average seek time is specified +
Rotational delay (to spin to the right sector); determined by how fast the platters spin +
Transfer time (to read the bytes off the sector); interestingly, this is not constant, since some tracks have more sectors than others +
Controller overhead

Typical Disk Access Time

The average time to read or write a 512 B sector for a disk rotating at 10,000 RPM with an average seek time of 6 ms, a 50 MB/sec transfer rate, and a 0.2 ms controller overhead:

Avg disk read/write = 6.0 ms + 0.5/(10,000 RPM/(60 sec/minute)) + 0.5 KB/(50 MB/sec) + 0.2 ms
                    = 6.0 + 3.0 + 0.01 + 0.2 = 9.21 ms
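The same calculation can be captured in a few lines of Python; this is a minimal sketch of the formula above (the function name and argument order are my own, not from the slides):

```python
def disk_access_time_ms(seek_ms, rpm, sector_bytes, transfer_bytes_per_s, controller_ms, wait_ms=0.0):
    """Average time to read/write one sector: wait + seek + half a rotation + transfer + controller overhead."""
    rotational_ms = 0.5 / (rpm / 60.0) * 1000.0            # half a rotation, in milliseconds
    transfer_ms = sector_bytes / transfer_bytes_per_s * 1000.0
    return wait_ms + seek_ms + rotational_ms + transfer_ms + controller_ms

# Slide example: 10,000 RPM, 6 ms seek, 50 MB/s, 0.2 ms controller overhead
print(disk_access_time_ms(6.0, 10_000, 512, 50e6, 0.2))    # ~9.21 ms
```

Plugging in the slower-disk numbers used a couple of slides below (12 ms seek, 5,400 RPM, 5 MB/s, 2 ms overhead) gives roughly 19.7 ms, matching that worked example.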

If the measured average seek time is 25% of the advertised average seek time, then
Avg disk read/write = 1.5 + 3.0 + 0.01 + 0.2 = 4.71ms

The rotational latency is usually the largest component of the access time


Disk Access Time for a Slower Disk

What is the average time to read a 512-byte sector for a typical disk rotating at 5,400 RPM? The advertised average seek time is 12 ms, the transfer rate is 5 MB/sec, and the controller overhead is 2 ms. Assume that the disk is idle so that there is no wait time.

Avg disk read = 12 ms + 0.5 rotation/(5,400 RPM) + 0.5 KB/(5 MB/sec) + 2 ms
              = 12 ms + 5.6 ms + 0.1 ms + 2 ms = 19.7 ms

Disks: Other Considerations


Disk performance improves about 10%/year.
Capacity increases about 40-60%/year.

RAID Configurations


Dependability, Reliability, Availability

Reliability is measured by the mean time to failure (MTTF). Service interruption is measured by the mean time to repair (MTTR).

Availability is a measure of service accomplishment:

Availability = MTTF/(MTTF + MTTR)

For example, with a hypothetical MTTF of 500,000 hours and an MTTR of 24 hours, availability = 500,000/(500,000 + 24) ≈ 99.995%.

To increase MTTF, either improve the quality of the components or design the system to continue operating in the presence of faulty components:

1. Fault avoidance: preventing fault occurrence by construction
2. Fault tolerance: using redundancy to correct or bypass faulty components (hardware)

Related distinctions: fault detection versus fault correction, and permanent faults versus transient faults.

RAIDs: Redundant Arrays of Inexpensive Disks

Arrays of small and inexpensive disks:

Increase potential throughput by having many disk drives:
Data is spread over multiple disks
Multiple accesses are made to several disks at a time

Reliability of the array is lower than that of a single disk (an array of N disks fails roughly N times as often), but availability can be improved by adding redundant disks (RAID):
Lost information can be reconstructed from redundant information
MTTR: mean time to repair is on the order of hours
MTTF: mean time to failure of disks is tens of years

RAID: Level 0 (No Redundancy; Striping)


S0 S1 S2 S3   (stripe number; stripe = sequence of blocks)

Multiple smaller disks as opposed to one big disk.

Spreading the data over multiple disks (striping) forces accesses to go to several disks in parallel, increasing performance:
Four times the throughput for a 4-disk system
Same cost as one big disk, assuming 4 small disks cost the same as one big disk

No redundancy, so what if one disk fails?
Failure of one or more disks is more likely as the number of disks in the system increases.
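To make the striping idea concrete, here is a minimal Python sketch (my own illustration, not from the slides) mapping a logical block number to a (disk, offset) pair for an N-disk RAID 0 set:

```python
def raid0_map(logical_block, num_disks, blocks_per_stripe_unit=1):
    """Map a logical block to (disk index, block offset on that disk) for simple striping."""
    stripe_unit = logical_block // blocks_per_stripe_unit    # which stripe unit the block falls in
    disk = stripe_unit % num_disks                           # stripe units rotate round-robin across disks
    offset_on_disk = (stripe_unit // num_disks) * blocks_per_stripe_unit + logical_block % blocks_per_stripe_unit
    return disk, offset_on_disk

# With 4 disks, consecutive blocks 0..7 land on disks 0,1,2,3,0,1,2,3
print([raid0_map(b, 4) for b in range(8)])
```

Because consecutive blocks land on different disks, a sequential transfer keeps all four drives busy at once, which is where the "four times the throughput" claim comes from.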

RAID: Level 1 (Redundancy via Mirroring)


Uses twice as many disks, so there are always two copies of the data: the user data (S0 S1 .. Sn) and the redundant (check) copy (S0 S1 .. Sn).

# redundant disks = # of data disks, so twice the cost of one big disk.

Writes have to be made to both sets of disks, so write performance is only 1/2 that of RAID 0.

What if one disk fails? If a disk fails, the system just goes to the mirror for the data.

RAID 01 (mirrored stripes) and RAID 10 (striped mirrors) combine mirroring with striping (s0 s1 s2 s3 mirrored as S0 S1 S2 S3).

RAID 01 or 10?

RAID: Level 3 (Bit-Interleaved Parity)

[Diagram: user data S0 is spread bit-wise across the data disks (S0 bit 0, bit 1, bit 2, bit 3); a separate parity disk holds the redundant (check) data and allows recovery when a data disk fails]

In a bit-interleaved parity disk array, data is conceptually interleaved bit-wise over the data disks, and a single parity disk is added to tolerate any single disk failure. Each read request accesses all data disks, and each write request accesses all data disks and the parity disk:
Thus, only one request can be serviced at a time.

Can tolerate a limited disk failure, since the data can be reconstructed:
After a failure, reads require reading all the operational data disks as well as the parity disk to calculate the missing data that was stored on the failed disk.

Cost of higher availability is reduced to 1/N, where N is the number of disks in a protection group.
# redundant disks = 1 × # of protection groups

Where would you use this? Bit-interleaved arrays are simpler to implement than RAID Levels 4, 5, and 6.

RAID: Level 4 (Block-Interleaved Parity)

Cost of higher availability is still only 1/N, but the parity is stored as blocks associated with a set of data blocks (on a dedicated parity disk).

Still four times the throughput.
# redundant disks = 1 × # of protection groups
Supports small reads and small writes (reads and writes that go to just one, or a few, data disks in a protection group):
By watching which bits change when writing new information, only the corresponding bits on the parity disk need to be changed.
The parity disk must be updated on every write, however, so it is a bottleneck for back-to-back writes.

Can tolerate a limited disk failure, since the data can be reconstructed.

RAID 4 Small Writes

Simple approach: to write new data into D0, read the other data blocks (D1, D2, D3), recompute the parity P from scratch, then write the new D0 and the new P. This costs 3 reads and 2 writes, involving all the disks.

Optimized approach: read only the old D0 and the old P, compute the new parity as P_new = P_old XOR D0_old XOR D0_new, then write the new D0 and the new P. This costs 2 reads and 2 writes, involving just two disks.
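The optimized small-write parity update is just XOR arithmetic; here is a minimal Python sketch of it (illustrative only, operating on byte strings rather than real disk blocks):

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def raid4_small_write_parity(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    """New parity after overwriting one data block: P_new = P_old XOR D_old XOR D_new."""
    return xor_blocks(old_parity, xor_blocks(old_data, new_data))

# Sanity check: the stripe parity stays consistent after the optimized update
d = [bytes([i] * 4) for i in range(4)]              # four 4-byte data blocks
parity = d[0]
for blk in d[1:]:
    parity = xor_blocks(parity, blk)                # parity of the original stripe
new_d0 = b"\xff" * 4
new_parity = raid4_small_write_parity(d[0], new_d0, parity)
recomputed = new_d0
for blk in d[1:]:
    recomputed = xor_blocks(recomputed, blk)        # recompute parity from all blocks
assert new_parity == recomputed                     # same result, but only two disks were read
```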

RAID: Level 5 (Distributed Block-Interleaved Parity)

Cost of higher availability is still only 1/N, but the parity is spread throughout all the disks, so there is no single bottleneck for writes.

Still four times the throughput.
# redundant disks = 1 × # of protection groups
Supports small reads and small writes (reads and writes that go to just one, or a few, data disks in a protection group).
Allows multiple simultaneous writes as long as the accompanying parity blocks are not located on the same disk.

Can tolerate a limited disk failure, since the data can be reconstructed.

Distributing Parity Blocks: RAID 4 vs. RAID 5

RAID 4 (dedicated parity disk):        RAID 5 (parity rotated across disks):
 0   1   2   3   P0                     0   1   2   3   P0
 4   5   6   7   P1                     4   5   6   P1  7
 8   9  10  11   P2                     8   9   P2  10  11
12  13  14  15   P3                    12   P3  13  14  15

By distributing parity blocks to all disks, some small writes can be performed in parallel
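A sketch of the rotation shown above, in Python (a left-rotating layout of my own choosing; real arrays may rotate the parity differently, so this is illustrative only):

```python
def raid5_parity_disk(stripe: int, num_disks: int) -> int:
    """Disk holding the parity block for a given stripe, rotating as in the layout above."""
    return (num_disks - 1) - (stripe % num_disks)

# With 5 disks: stripe 0 -> disk 4, stripe 1 -> disk 3, stripe 2 -> disk 2, stripe 3 -> disk 1
print([raid5_parity_disk(s, 5) for s in range(5)])   # [4, 3, 2, 1, 0]
```

Because consecutive stripes keep their parity on different disks, two small writes to different stripes can usually update their parity blocks in parallel.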


RAID-6 Definition

Data remains available with up to 2 disk failures.
Optimal solutions require a capacity of N+2 disks; user data occupies a capacity equivalent to N disks.
Overhead: e.g. for 4+2 it is 33%, for 14+2 it is 12.5%.

Update of a single block requires computation of 2 independent parity blocks. Two methods of parity independence are considered:
An orthogonal function to calculate the 2nd parity element (e.g. P+Q)
2-dimensional redundancy (two-way XOR)

Like RAID-5, cache can be used to reduce the write penalty.

Market Need

Three trends impacting the requirement:

Wider RAID groups (e.g. reference data, tiered storage): higher chance of a secondary failure
Access density: high disk capacity = long rebuild time
Use of SATA drives with lower MTBF

All three degrade MTDL (Mean Time to Data Loss), e.g. a 2nd disk failure during a RAID-5 rebuild.

RAID-6 greatly improves MTDL:
Data is still available after a 2nd failure
Minuscule chance of a 3rd failure during the rebuild window

Disk Failure with Latent Defects

[Diagram: a RAID array with data blocks D0,0 .. D3,3 plus a spare disk; one block holds a defective sector that goes unnoticed until another disk fails]

RAID-2, 4, and 5 fail to handle the accumulation of latent defects:
Defects accumulate undetected (even with disk auto-scrub)
Another disk in the array eventually fails (suddenly)
The subsequent rebuild to the spare disk fails (the stripe is incomplete)
User data is lost!

This is the key motivation for RAID-6.

Dual-XOR RAID-6

A RAID-6 array is comprised of N+2 disks, where N = # of data disks. Example of a dual-XOR RAID-6 algorithm with N=4:

[Diagram: six disks (Disk 0 to Disk 5) holding stripes of data blocks; each data block, e.g. D1,1, is protected by both a horizontal parity block and a diagonal parity block, so any two disk failures can be survived]
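To illustrate the two-way XOR idea, here is a toy Python sketch that computes a horizontal parity per row and a diagonal parity per anti-diagonal of a small stripe of data blocks. It only demonstrates computing two independent parities; production schemes (P+Q Reed-Solomon or row-diagonal parity) choose the layout carefully so that any two disk failures are actually recoverable.

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def dual_parity(stripe):
    """stripe: list of rows, each row a list of equal-length data blocks (bytes).
    Returns (row_parities, diagonal_parities): one XOR parity per row and one per anti-diagonal."""
    rows, cols = len(stripe), len(stripe[0])
    row_p = [reduce(xor, stripe[r]) for r in range(rows)]
    diag_p = []
    for d in range(rows + cols - 1):
        blocks = [stripe[r][c] for r in range(rows) for c in range(cols) if r + c == d]
        diag_p.append(reduce(xor, blocks))
    return row_p, diag_p

# 2 rows x 3 data disks of 4-byte blocks (hypothetical contents)
stripe = [[bytes([r * 3 + c] * 4) for c in range(3)] for r in range(2)]
row_p, diag_p = dual_parity(stripe)
print(len(row_p), "row parities,", len(diag_p), "diagonal parities")
```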

RAID Summary

Direct Attached Storage (DAS)


Upon completion of this module, you will be able to:

Discuss DAS-based storage
Describe the elements of DAS
Describe the connectivity options for DAS
Discuss DAS limitations

What is DAS?

Internal
External direct connect

DAS Architecture

DAS uses an I/O channel architecture, which resides between a computer (the initiator) and the device (the target) used to store its data.

The storage device is only accessible by the attached host computer.

Physical Elements of DAS

CPU:
Motherboard
Clustered group of processors
Processor cards
Complete system

Connectivity:
Internal
External

Storage:
Hard disk(s)
CD-ROM drive
Optical drive
Removable media
Tape devices/tape library
RAID/intelligent array(s)
Portable media drives

DAS Connectivity

Block-level access protocols:

ATA (IDE) and SATA: primarily for the internal bus

SCSI:
Parallel (primarily for the internal bus)
Serial (external bus)

DAS Connectivity: Internal


Internal DAS Connectivity Examples


Parallel connectivity cables:
50-wire SCSI-2 cable
80-wire IDE cable
34-wire floppy cable

Serial connectivity cable:
Serial ATA cable

DAS Connectivity: External

[Diagram: a host connected to an external DAS storage device over ports A and B; an example of an external connectivity cable is shown]

DAS limitations
[Diagram: clients on a LAN reach application servers (Win2k, Linux, Unix); each server has its own direct attached storage over SCSI or FC, plus a tape device]

Resource administration must be done separately per server
No optimization of scalability or performance
Limited maximum distance between devices
Inaccessibility of data during maintenance
Difficult backup management

Introduction of SAN

A set of equipment and technologies used to move storage out onto a network: a network resource used exclusively for storage.

SAN Features

Evolution of Fibre Channel SAN


[Diagram: three stages of evolution, each connecting servers to storage:
SAN Islands - FC Arbitrated Loop (servers and storage on a hub)
Interconnected SANs - FC Switched Fabric (switches)
Enterprise SANs - FC Switched Fabric (switches connecting servers to storage arrays)]

Benefits of a SAN

High bandwidth: Fibre Channel
Block I/O: SCSI extension
Centralized storage and management: resource consolidation
Scalability: up to 16 million devices

IP Storage Area Networks


Upon completion of this module, you will be able to:

Describe the benefits of IP SAN
Describe IP convergence in the SAN and its implications
Describe and discuss the basic architecture of FCIP, iFCP, and iSCSI

In this Module
This module contains the following lessons:

IP SAN overview
IP SAN protocols
Applications of IP SAN

Lesson: IP SAN Overview


Upon completion of this lesson, you will be able to:

Describe the benefits of IP SAN
Describe the IP convergence in the SAN and its implications
List the three common IP SAN approaches
List the three deployment models (topologies) for IP SAN

Introduction

Traditional SAN technology is built around Fibre Channel.

[Diagram: servers connected to storage through switches, with IP and FC links distinguished]

IP technology is emerging as an alternative or supplemental transport.

Block Storage over IP: Protocol Options

iSCSI: SCSI over IP
IP encapsulation done on the host / HBA (host bus adapter)
Hardware-based gateway to Fibre Channel storage

FCIP: Fibre Channel-to-IP bridge / tunnel (point to point)
Fibre Channel end points

iFCP: IP as the inter-switch fabric

IP Storage Approaches
[Diagram: three approaches connecting servers to storage over an IP network:
FCIP - FC-attached servers connect through an FCIP router, across the IP network, to another FCIP router and FC storage
iFCP - FC-attached servers connect through iFCP switches across the IP network
iSCSI - servers attach directly to the IP network and reach FC storage through an iSCSI/FC gateway]

IP Storage Deployment Models

Native: all Ethernet (no Fibre Channel); iSCSI protocol; Ethernet switches and routers

Bridging: servers Ethernet-attached, storage FC-attached (SAN or DAS); iSCSI protocol

Extension: servers and storage SAN-attached; FCIP or iFCP protocol (e.g. SRDF replication traffic)

Benefits of IP SAN

Cost effective
Extends the reach of a SAN
Most organizations already have IP networks and familiarity with traditional network management
Leverages existing Fibre Channel applications

Extend the Reach of Your SAN


Standard Fibre Channel distances are limited; IP extends Fibre Channel applications over regional/global distances.
At higher link speeds, IP can handle synchronous applications.

Lesson: IP SAN Protocols


Upon completion of this lesson, you will be able to:

Describe and discuss the basic architecture of

FCIP
iFCP
iSCSI

Fibre Channel over IP - FCIP


Encapsulates FC frames in IP packets
Creates virtual FC links that connect devices and fabric elements
Includes security, data integrity, congestion, and performance specifications

FCIP encapsulation:
Fibre Channel frame: SOF | FC Header | SCSI Data | CRC | EOF
IP datagram: IP Header | TCP Header | FCIP Header | IP Payload (the FC frame)

FCIP Benefits

FCIP combines the best of both technologies:

IP: widely available, accepted technology, trained user base, affordable, mature standards

Fibre Channel: low latency, high reliability, off-the-shelf solutions, mature standards

Internet Fibre Channel Protocol (iFCP)

Gateway-to-gateway protocol:
IP switches and routers replace FC switches
Transparent to FC drivers
Point-to-multipoint networking possible
FC transport uses TCP connections

iFCP address translation and encapsulation:
Fibre Channel frame: SOF | FC Header | SCSI Data | CRC | EOF
IP datagram: IP Header | TCP Header | iFCP Header | IP Payload

iFCP Benefits

Works with a wide range of devices
Flexible
Less potential bottlenecking vs. FCIP

iFCP Maps FCP to an IP Fabric

[Diagram: an FC server, an FC tape library, FC loop disks, and an FC switch each attach to the IP network through an iFCP gateway; sessions run device-to-device across the IP network]

iFCP

[Diagram: SAN A and SAN B, each with servers, a switch, and storage, are connected through iFCP gateways; the two fabrics remain separate]

iSCSI

A method to transfer blocks of data over the TCP/IP network
Serialized service delivery subsystem
SCSI protocol over IP

iSCSI Model Layers


SCSI encapsulation:
SCSI data is wrapped into an IP datagram: IP Header | TCP Header | iSCSI Header | IP Payload

iSCSI Storage Models

Native: servers reach iSCSI storage directly through an IP switch
Bridging: servers reach FC storage through a gateway

Controlling IP SANs: iSNS

[Diagram: initiators A, B, and C on the IP network discover target Z through an iSNS server, which records which devices of target Z each initiator may access]

Internet Storage Name Server Overview


iSNS is a client/server model
The iSNS server is passive
iSNS clients register and manipulate the objects in the iSNS server
An iSNS server can be hosted on a target, an initiator, or a stand-alone server with a specified IP address

iSCSI Nodes

An iSCSI node is a single initiator or target
Names are assigned to all nodes and are independent of address; for example, a node name in IQN format looks like iqn.2001-04.com.example:storage.disk1

Lesson: IP SAN Applications


Upon completion of this lesson, you will be able to:

Describe common applications of IP SAN technology such as:

Remote backup and restore
Remote data replication
Storage consolidation

Remote Backup and Restore


[Diagram: servers and disks at a local site back up across the IP network to storage at a remote site]

Remote Data Replication


[Diagram: disks attached to servers at one site are replicated across the IP network to storage at a remote site]

Module Summary
Topics in this module included:

The benefits of IP SAN
The IP convergence in the SAN and its implications
The basic architecture of FCIP
The basic architecture of iFCP
The basic architecture of iSCSI
Applications of IP SAN technology

NAS: Network Attached Storage

After completing this module, you will be able to:

Discuss the benefits of a NAS-based storage strategy
Describe the elements of NAS
Discuss connectivity options for NAS
Discuss NAS management considerations
Identify the best environments for NAS

In this Module
This module contains the following lessons:

What is NAS
Managing a NAS environment
NAS application examples

Lesson: What is NAS?


Upon completion of this lesson, you will be able to:

Define NAS and describe its key attributes
List the benefits of NAS
Describe NAS connectivity

NAS Evolution
From the stand-alone PC with portable media for file sharing, to networked PCs and networked file sharing, to Network Attached Storage (NAS).

What is NAS
NAS is shared storage on a network infrastructure.

[Diagram: clients, an application server, and a print server share a NAS device (a NAS head plus storage) over the network]

General Purpose Servers vs. NAS Devices


General purpose server (NT or Unix server): applications, print drivers, file system, I/O, operating system, network
Single function device (NAS server): file system, operating system, network

Why NAS

Supports global information access
Improves efficiency
Provides flexibility
Centralizes storage
Simplifies management
Scalability
High availability through native clustering

Customer Demands for NAS Have Changed


The past: outside the data center; islands of information; tools and scripts
Today: critical business applications (databases); integrated infrastructure; enterprise management

NAS Device Components


[Diagram: a NAS device consists of a network interface to the IP network, NFS and CIFS file services, the NAS device OS, and a storage interface (SCSI, FC, or ATA) to the storage]

NAS File Services Protocols: NFS and CIFS

[Diagram: Unix clients access the NAS device over NFS and Windows clients over CIFS, both across the IP network; the NAS device connects to its storage over SCSI, FC, or ATA]

Network File System (NFS)


Client/server application
Uses RPC mechanisms over the TCP protocol
Mount points grant access to remote hierarchical file structures from within the local file system structure
Access to the mount can be controlled by permissions

Common Internet File System (CIFS)

Public version of the Server Message Block (SMB) protocol
Client applications access files on a computer running server applications that accept the SMB protocol
Better control of files than FTP
Potentially better access than Web browsers and HTTP

NAS Connectivity: A Closer Look


OSI Seven Layer Model vs. Internet Protocol Suite:

Application / Presentation / Session: FTP, Telnet, SMTP, SNMP; NFS, XDR, RPC
Transport: TCP, UDP
Network: IP, ARP/RARP
Data Link / Physical: not defined by the Internet Protocol Suite

I/O Example

[Diagram: on the client, an application's file request passes through the operating system, the I/O redirector, NFS/CIFS, and the TCP/IP stack to the network interface; on the NAS device, the request passes up through the network interface, TCP/IP stack, NFS/CIFS, and the NAS operating system, which issues block I/O to the storage device through its storage interface and storage protocol]

UNIX and Windows Information Sharing


[Diagram: NFS, FTP, and CIFS traffic enter through the protocol layer; a Common File System (CFS) multiprotocol support layer sits above the I/O layer, so UNIX and Windows clients share the same files]

NAS Physical Elements


Data movers/filers

Management interface:
Configure network interfaces
Create, mount, or export file systems
Install, configure, and manage all data movers/filers
Can be accessed locally or remotely

Connectivity: NAS head to storage

Integrated vs. Gateway NAS


[Diagram: an integrated NAS attaches its NAS head directly to its own storage and serves clients over the IP network; a NAS gateway's NAS head serves clients over the IP network but reaches shared storage through an FC fabric]

Integrated NAS System


[Diagram: in an integrated NAS system the NAS head is direct-attached to its storage and serves clients over the IP network]

Gateway NAS System


[Diagram: clients reach the NAS gateway over the IP network; the gateway and the application servers both reach shared storage through an FC switch]

Lesson: NAS Examples


Upon completion of this lesson, you will be able to:

Discuss environments that would benefit from a NAS solution including:

NAS consolidation
A NAS solution using a gateway NAS system

NAS Server Consolidation Scenario


Current environment:

[Diagram: separate UNIX, NT, and W2K servers, each a general-purpose OS serving files via FTP, CIFS, NFS, HTTP, etc., to UNIX and Windows clients over the Internet/intranet]

NAS Server Consolidation Example


Solution:

[Diagram: the general-purpose servers are replaced by a single NAS file server, which serves files via FTP, CIFS, NFS, HTTP, etc., to UNIX and Windows clients over the Internet/intranet]

Lesson Summary
Key points covered in this lesson:

HTTP example
Consolidation example

NAS Challenges

Speed:
Network latency and congestion
Protocol stack inefficiency
Application response requirements

Reliability
Connectivity
Scalability

Module Summary
Key points covered in this module:

A NAS server is a specialized appliance optimized for file-serving functions
Overview of the physical and logical elements of NAS
Connectivity options for NAS
NAS connectivity devices
Best environments for NAS solutions

Virtualization Technologies
Upon completion of this module, you will be able to:

Identify different virtualization technologies
Describe block-level virtualization technologies and processes
Describe file-level virtualization technologies and processes

Lesson 1: Virtualization, An Overview


Upon completion of this lesson, you will be able to:

Identify and discuss the various options for virtualization technologies


Defining Virtualization

Virtualization provides logical views of physical resources while preserving the usage interfaces for those resources.
Virtualization removes physical resource limits and improves resource utilization.

What Makes Virtualization Interesting

Potential benefits:
Higher rates of usage
Simplified management
Platform independence
More flexibility
Lower total cost of ownership
Better availability

Virtualization Comes in Many Forms


Virtual memory: each application sees its own logical memory, independent of physical memory
Virtual networks: each application sees its own logical network, independent of the physical network
Virtual servers: each application sees its own logical server, independent of physical servers
Virtual storage: each application sees its own logical storage, independent of physical storage

Virtualization Comes in Many Forms


Virtual memory: each application sees its own logical memory, independent of physical memory.

[Diagram: several applications map their logical memory onto physical memory and swap space]

Benefits of virtual memory:
Remove physical-memory limits
Run multiple applications at once

Virtualization Comes in Many Forms

Virtual networks: each application sees its own logical network, independent of the physical network.

[Diagram: VLAN A, VLAN B, and VLAN C share two switches connected by a VLAN trunk]

Benefits of virtual networks:
Common network links with the access-control properties of separate links
Manage logical networks instead of physical networks
Virtual SANs provide similar benefits for storage-area networks

Server-Virtualization Basics
Before server virtualization:
Single operating system image per machine
Software and hardware tightly coupled
Running multiple applications on the same machine often creates conflicts
Underutilized resources

After server virtualization:
Virtual Machines (VMs) break the dependencies between operating system and hardware
Manage the operating system and applications as a single unit by encapsulating them into VMs
Strong fault and security isolation
Hardware-independent: VMs can be provisioned anywhere

[Diagram: before, each machine runs one operating system with its applications; after, a virtualization layer on the hardware hosts several operating systems, each with its own applications]

Lesson 2: Storage Virtualization


Upon completion of this lesson, you will be able to:

Identify and discuss the various options for virtualization technologies


Storage Functionality Today


Intelligence lives primarily on servers and storage arrays:

Server: path management, volume management, replication
Storage network: connectivity
Storage array: access control, replication, RAID, cache protection, volume management, LUNs

Storage Virtualization Requires a Multi-Level Approach

Intelligence should be placed closest to what it controls:
Server: application functions, data access functions
Storage network and storage: data preservation functions

Distributed intelligence, centralized management.

Current Storage Virtualization Examples

Block: the problem is I/O path performance and availability; the solution is multipathing software.
File: the problem is consolidating file-based storage; the solution is NAS heads / a NAS gateway on the LAN in front of the storage network, managed from a management station.

Four Challenges of Storage Virtualization

Scale: virtualization technology aggregates multiple devices, so it must scale in performance to support the combined environment.
Functionality: virtualization technology masks existing storage functionality, so it must provide the required functions or enable the existing ones.
Management: virtualization technology introduces a new layer of management, so it must be integrated with existing storage-management tools.
Support: virtualization technology adds new complexity to the storage network, which requires vendors to perform additional interoperability tests.

The Scaling Challenge


Before (standard environment):
Performance requirements (application performance, replication performance) are distributed across multiple storage arrays.
Each array delivers its own units of performance, e.g. 20,000 IOPS (or SPEC-SFS operations, or MB/s) per array.

The Scaling Challenge

After (virtualized environment):
The same performance requirements are now aggregated: 20,000 + 20,000 + 20,000 + 20,000 + 20,000 = 100,000+ units of performance (e.g. IOPS, SPEC-SFS, MB/s).
The storage network capabilities, and the virtualization device, must support the aggregated environment and the aggregate application performance.

The Functionality Challenge


Before (standard environment):
Applications have access to rich array functionality: advanced local replication, advanced remote replication, array-level optimization
Advanced array functionality: mirrors, clones, and snapshots; protected and instant restores; synchronous and asynchronous replication; consistency groups

The Functionality Challenge

After (virtualized environment):
Applications still expect rich functionality: advanced local and remote replication, array-level optimization, mirrors, clones and snapshots, protected and instant restores, synchronous and asynchronous replication, consistency groups.
The virtualization device must either provide the required functionality itself (network functionality, depending on the implementation) or enable the existing array functionality to be used.

The Management Challenge


Before (standard environment):
Management tools provide an integrated view of the application-to-physical-storage mapping:
End-to-end management
Monitoring and reporting
Planning and provisioning

The Management Challenge

After (virtualized environment):
The mapping now has two halves: server to virtualization device, and virtualization device to physical storage.
Management tools must still provide an integrated view of the application-to-physical-storage mapping (monitoring and reporting, planning and provisioning), so the storage network requires modification of the management tools to support a virtualized environment.

The Support Challenge


Before (standard environment):
Interoperability already spans server types, OS versions, network elements, and storage-software products.
The storage vendor must support the complexity of multi-vendor network environments: servers and software, networks and software, arrays and software.

The Support Challenge

After (virtualized environment):
Considerations: new hardware qualification requirements, service and support ownership, problem escalation and resolution.
The storage vendor must still support the complexity of multi-vendor network environments (servers, networks, and arrays, plus their software), and the added storage-virtualization layer means more complexity and additional interoperability investments.
Virtual Storage

Storage virtualization operates at block level (in the storage network) and at file level (in the IP network). Each application sees its own logical storage, independent of physical storage.

Benefits of virtual storage:
Nondisruptive data migrations
Access files while migrating
Increased storage utilization

Comparison of Virtualization Architectures


Out-of-band: no state / no cache; I/O at wire speed; full-fabric bandwidth; high availability; high scalability; value-add functionality

In-band: state / cache; added I/O latency; limited fabric ports; more suited to static environments or environments with less growth; value-replace functionality

Lesson 3: Block-Level Virtualization


Upon completion of this lesson, you will be able to:

Describe Block-Level Virtualization technologies and functionality


Block-Level Storage Virtualization Basics

Ties together multiple independent, multi-vendor storage arrays:
Presented to the host as a single storage device
A mapping is used to redirect I/O on this device to the underlying physical arrays

Deployed in a SAN (storage-area network) environment
Nondisruptive data mobility and data migration
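A toy Python sketch of the mapping idea (the extent table, array names, and LUN numbers are invented for illustration; real virtualization appliances keep far richer metadata):

```python
# Each entry maps a range of virtual blocks to (array, LUN, starting block on that LUN).
# Hypothetical extent table for one virtual volume.
EXTENT_TABLE = [
    # (virtual_start, length, array, lun, physical_start)
    (0,       100_000, "array-A", 12, 0),
    (100_000, 100_000, "array-B",  3, 50_000),
]

def resolve(virtual_block: int):
    """Redirect a virtual block address to its physical location."""
    for v_start, length, array, lun, p_start in EXTENT_TABLE:
        if v_start <= virtual_block < v_start + length:
            return array, lun, p_start + (virtual_block - v_start)
    raise ValueError("block not mapped")

print(resolve(42))        # ('array-A', 12, 42)
print(resolve(150_000))   # ('array-B', 3, 100000)
```

Because only this table changes when data moves, an extent can be migrated to another array and remapped without the host noticing, which is what makes nondisruptive data mobility possible.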

Usage Scenarios for Block-Level Storage Virtualization

Next generation data center operations: heterogeneous storage consolidation, application growth, storage utilization, nondisruptive data mobility, extending volumes online, scalability, business continuity.

It optimizes resources and improves flexibility.

Block-Level Storage Virtualization


Before: all applications have direct knowledge of the storage location on the multi-vendor storage arrays behind the SAN.

After: a virtualization layer sits in the SAN between the applications and the multi-vendor storage arrays:
Simplified volume access
Nondisruptive mobility
Optimized resources

Lesson 4: File-Level Virtualization


Upon completion of this lesson, you will be able to:

Describe File Level Virtualization technologies and functionality


File-Level Virtualization Basics


Before file-level virtualization (clients and NAS devices/platforms on the IP network):
Every NAS device is an independent entity, physically and logically
Underutilized storage resources
Downtime caused by data migrations

After file-level virtualization:
Dependencies between end-user access and data location are broken
Storage utilization is optimized
Nondisruptive migrations
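The mechanism behind this is a global namespace that maps the logical path a user sees to whichever NAS device currently holds the file. A minimal Python sketch (the paths and server names are made up for illustration):

```python
# Logical path prefix -> physical NAS export that currently holds the data.
NAMESPACE = {
    "/global/engineering": "nas1:/export/eng",
    "/global/finance":     "nas2:/export/fin",
}

def resolve(logical_path: str) -> str:
    """Translate a user-visible path into its current physical location."""
    for prefix, physical in NAMESPACE.items():
        if logical_path.startswith(prefix):
            return logical_path.replace(prefix, physical, 1)
    raise FileNotFoundError(logical_path)

print(resolve("/global/finance/q3.xls"))   # nas2:/export/fin/q3.xls

# Migrating the finance data to another filer only updates the table;
# users keep using the same /global/finance path.
NAMESPACE["/global/finance"] = "nas3:/export/fin"
print(resolve("/global/finance/q3.xls"))   # nas3:/export/fin/q3.xls
```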

File-Level Storage Virtualization


File abstraction that optimizes resources and improves flexibility.

Before: all users have direct knowledge of the file locations on the file systems of the multi-vendor NAS systems.

After: a file virtualization layer sits in the IP network in front of the multi-vendor NAS systems:
Move data while existing data is still being written and accessed
Update the global namespace

Moving Files Online: A File Virtualization Example


[Diagram: a file virtualization appliance and a global namespace manager sit between clients and the filers; namespace services such as DFS, AD, automount, NIS, LDAP, and NFSv4 root are updated, and an event log records the file-data migration]

File virtualization is inserted into the I/O path
Clients are redirected
The global namespace is updated

Moving Files Online: A File Virtualization Example (continued)


[Diagram: the same environment after the file-data migration]

File virtualization is inserted into the I/O path
Clients are redirected
The global namespace is updated
The migration completes without downtime

Usage Scenarios for File-Level Storage Virtualization

Next generation data center operations: consolidation, capacity management, business continuity, performance management, tiered storage management, global namespace management.

Accelerated Consolidation

Move files nondisruptively with continuous access to data.

Before:
Too many file servers
Buying more file servers for additional storage
Average utilization is low
Complex migrations

After (file virtualization on the IP network, in front of servers 1-4):
Eliminate file servers via migration to underutilized servers
Increased utilization
Maintain full read/write access during migration
Transparent to clients


Global Namespace Management

Situation:
Billions of files with thousands to hundreds of thousands of clients
Update the namespace and retain access to files while migrating
Update 1,000 client namespaces over the weekend

Scenario:
Done by hand: 95% successful, 50 typos or glitches, 50 calls from 50 very angry employees; there goes Monday... and Tuesday
With a global namespace: zero mistypes, 100% access during migration

Simplified Namespace Management: Access to Files and Folders

Before:
Complex file-server environments (SHARE1 on a Windows server mapped as T:\svr1\, SHARE2 on a NetApp filer as S:\svr2\, SHARE3 on a Celerra as H:\svr3\, SHARE4 on a UNIX server as G:\svr4\)
Namespace changes are time-consuming
Multiple shares or mounts per client

After:
Multiple file systems appear as a single virtual file system via a standard namespace
Simplified management and continuous access to files and folders
Updates standard-namespace entries (UNIX, Linux, Windows)

Nondisruptively Grow the System

Rapidly and seamlessly deploy new storage and/or applications:
No downtime required
Transparent to clients and applications
Namespace unchanged

Example: projects A (A1, A2, A3), B (B1, B2), and C (C1, C2, C3) are rebalanced across newly added storage for better performance; the data moves, the namespace does not.

Module Summary
Key points covered in this module:

Virtualization technologies
Block-level virtualization technologies and processes
File-level virtualization technologies and processes

Storage Management Initiative (SMI)

Created by the Storage Networking Industry Association (SNIA):
Integration of diverse multi-vendor storage networks
Development of more powerful management applications
A common interface for vendors to develop products that incorporate the management interface technology

Key components:
Management application integration infrastructure (SMI-S: CIM/WBEM interface technology; platform-independent, distributed, object-oriented; automated discovery, security, locking)
Object model mapping of vendor-unique features: a standard object model (MOF) per device (tape library, switch, array, and many others), plus vendor-unique functions
Interoperability testing
Education and collaboration with industry and customers

Storage Management Initiative Specification (SMI-S)

Based on:
Web Based Enterprise Management (WBEM) architecture
Common Information Model (CIM)

Managed objects:
Physical components: removable media, tape drive, disk drive, robot, enclosure, host bus adapter, switch
Logical components: volume, clone, snapshot, media set, zone, and others

Users of the interface include graphical user interfaces and management tools for storage resource management (performance, capacity planning, removable media), data management (file system, database manager, backup and HSM), container management, volume management, and media management.

Features:
A common, interoperable, and extensible management transport
A complete, unified, and rigidly specified object model that provides for the control of a SAN
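Because SMI-S builds on CIM/WBEM, an SMI-S provider can in principle be queried with any WBEM client. The sketch below uses the open-source pywbem library; the host name, credentials, and namespace are placeholders, and whether a given array exposes CIM_StorageVolume this way depends on the vendor's SMI-S provider, so treat it purely as an illustration:

```python
import pywbem

# Connect to a (hypothetical) SMI-S provider's CIMOM.
conn = pywbem.WBEMConnection(
    "https://smi-provider.example.com:5989",
    creds=("admin", "password"),
    default_namespace="root/cimv2",       # the actual namespace is vendor-specific
)

# Enumerate storage volumes through the standard CIM object model.
# Which properties are populated depends on the provider.
for vol in conn.EnumerateInstances("CIM_StorageVolume"):
    print(vol["ElementName"], vol["BlockSize"], vol["NumberOfBlocks"])
```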

Common Information Model (CIM)


Describes the management of data
Details requirements within a domain
An information model with a required syntax

Web Based Enterprise Management (WBEM)


Managing in the Data Center


After completing this module, you will be able to:

Describe the individual component tasks that must be performed to achieve overall data center management objectives
Explain the concept of Information Lifecycle Management

Managing Key Data Center Components


[Diagram: clients on the IP network reach clustered hosts/servers running applications; the servers connect through HBAs and switch ports into the SAN and on to the storage arrays, with keep-alive monitoring along the path]

The aspects to manage across clients, the network, hosts/servers with applications, the SAN, and the storage arrays are:
Availability
Capacity
Performance
Security
Reporting

Data Center Management

Capacity Management:
Allocation of adequate resources

Availability Management (Business Continuity):
Eliminate single points of failure
Backup and restore
Local and remote replication

Data Center Management, continued

Security Management:
Prevent unauthorized activities or access

Performance Management:
Configure/design for optimal operational efficiency
Performance analysis
Identify bottlenecks
Recommend changes to improve performance

Data Center Management, continued

Reporting:
Encompasses all data center components and is used to provide information for capacity, availability, security, and performance management

Examples:
Capacity planning: storage utilization, file system/database tablespace utilization, port usage
Configuration/asset management

Scenario 1: Storage Allocation to a New Server

Storage allocation tasks span the array, the SAN, and the host:

Array: configure new volumes; assign volumes to front-end ports
SAN: zoning (allocate volumes to hosts)
Host: volume management; file system / database management

[Diagram: volume states progress from unconfigured, to configured, to mapped, to reserved, to allocated to a host and volume group, to used by the host and the file system/database]

Array Management: Allocation Tasks

Configure new volumes (LUNs):
Choose the RAID type (e.g. RAID 0, RAID 1, RAID 5), size, and number of volumes
The physical disks must have the required space available
This is automatic on some arrays, while on others this step must be performed explicitly

Assign the volumes to the array's front-end ports.

[Diagram: an intelligent storage system with host connectivity at the front end, cache, a back end, and physical disks holding the LUNs]

Server Management: HBA Configuration

The server must have HBA hardware installed and configured:
Install the HBA hardware and the software (device driver), then configure it

[Diagram: a new server with two HBAs, the HBA driver, and multipathing software]

SAN Management: Allocation Tasks

Perform zoning:
Zone the HBAs of the new server to the designated array front-end ports via redundant fabrics (e.g. switches SW1 and SW2)
Are there enough free ports on the switches?
Did you check the array port utilization?

Server Management: Allocation Tasks

Reconfigure the server to see the new devices
Perform volume management tasks (volume group, logical volume, file system)
Perform database/application tasks