- Basic Storage Technology
- SCSI & Disk Technology
- Concept of RAID and Different RAID Levels
- FC SAN
- IP Storage & NAS
- Virtualization & Storage Management
5/11/11
Slow write and retrieval speeds, but inexpensive and portable. Digital Audio Tape (DAT) can store 2 to 24 GB of data on a tape about the size of a credit card. Digital Linear Tape (DLT) is generally faster and can hold 200 GB or more per cassette. The primary use is data backup or archival, especially for servers.
In 1957, IBM developed the first hard drive as part of their RAMAC system. It required 50 24-inch disks to store 5 megabytes of data and cost ~$35,000/year to lease. Rapid growth in the late 80s/early 90s helped fuel demand for cheap storage devices. It remains the most common storage device in use today.
Faster start up
SCSI is block-oriented: the host's operating system sees the storage devices as contiguous sets of fixed-size data blocks.
[Diagram: an initiator on the SCSI bus addressing a target that exposes LUN 0, LUN 1, and LUN 2]
[Diagram: a SCSI port shared by several I/O devices]
Magnetic Disk
General structure:
- A rotating platter coated with a magnetic surface
- A moveable read/write head to access the information on the disk
Typical numbers:
- 1 to 4 platters (2 surfaces each) per disk, 1 to 3.5 in. in diameter
- Rotational speeds of 5,400 to 15,000 RPM
- 10,000 to 50,000 tracks per surface
[Diagram labels: sector, track]
Cylinder: all the tracks under the heads at a given arm position, across all surfaces
Disk Platters
Shown here is a rather old disk with 3 platters (six surfaces).
Disk Heads
Each surface is partitioned into a set of tracks, and each track into a set of sectors. Unlike in this picture, the number of sectors per track is not constant on most disks: outer tracks have more sectors (a technique called zone bit recording).
Track
To access data:
- Seek: position the head over the proper track (3 to 14 ms average)
- Rotational latency: wait for the desired sector (0.5 rotation / RPM on average)
- Transfer: read the data (one or more sectors) at 30 to 80 MB/sec
Calculating the access time to get your (sector of) data:
Disk access time =
Wait time (for the disk to free up from previous requests)
+ Seek time (to move the head to the right track); usually an average seek time is specified
+ Rotational delay (to spin to the right sector); determined by how fast the platters spin
+ Transfer time (to read the bytes off the sector); interestingly, this is not constant, since some tracks have more sectors than others
Example: the average time to read or write a 512 B sector on a disk rotating at 10,000 RPM, with a 6 ms average seek time, a 50 MB/sec transfer rate, and 0.2 ms of controller overhead:
Avg disk read/write = 6.0 ms + 0.5/(10,000 RPM/(60 sec/min)) + 0.5 KB/(50 MB/sec) + 0.2 ms = 6.0 + 3.0 + 0.01 + 0.2 = 9.21 ms
If the measured average seek time is 25% of the advertised average seek time, then
Avg disk read/write = 1.5 + 3.0 + 0.01 + 0.2 = 4.71ms
The rotational latency is usually the largest component of the access time
Disk Access Time for a Slower Disk
What is the average time to read a 512-byte sector on a typical disk rotating at 5400 RPM? The advertised average seek time is 12 ms, the transfer rate is 5 MB/sec, and the controller overhead is 2 ms. Assume the disk is idle, so there is no wait time.
12 ms + 0.5 rotation/(5400 RPM) + 0.5 KB/(5 MB/sec) + 2 ms = 12 ms + 5.6 ms + 0.1 ms + 2 ms = 19.7 ms
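Both worked examples apply the same four-term formula (seek + rotational latency + transfer + controller overhead); a quick Python sketch reproduces them (the function name is ours, not from the slides):

```python
def disk_access_ms(seek_ms, rpm, sector_kb, transfer_mb_s, controller_ms, wait_ms=0.0):
    """Average disk access time in milliseconds."""
    rotational_ms = 0.5 / (rpm / 60) * 1000           # half a rotation, on average
    transfer_ms = sector_kb / (transfer_mb_s * 1000) * 1000
    return wait_ms + seek_ms + rotational_ms + transfer_ms + controller_ms

print(round(disk_access_ms(6.0, 10_000, 0.5, 50, 0.2), 2))  # 9.21
print(round(disk_access_ms(12.0, 5_400, 0.5, 5, 2.0), 2))   # 19.66 (the slide rounds 5.56 ms to 5.6, giving 19.7)
```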
RAID Configurations
Dependability, Reliability, Availability
Reliability is measured by the mean time to failure (MTTF); service interruption is measured by the mean time to repair (MTTR).
Availability is a measure of service accomplishment:
Availability = MTTF/(MTTF + MTTR)
To increase MTTF, either improve the quality of the components or design the system to continue operating in the presence of faulty components
1. Fault avoidance: preventing fault occurrence by construction
2. Fault tolerance: using redundancy to correct or bypass faulty components (hardware)
   - Fault detection versus fault correction
   - Permanent faults versus transient faults
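The availability formula is easy to sanity-check numerically; a minimal sketch (the example numbers are illustrative, not from the slides):

```python
def availability(mttf_hours, mttr_hours):
    """Availability = MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

# A disk with an MTTF of 20 years and an MTTR of 4 hours:
a = availability(20 * 365 * 24, 4)
print(f"{a:.6f}")  # 0.999977
```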
Data is spread over multiple disks, and multiple accesses are made to several disks at a time.
Reliability is lower than for a single disk, but availability can be improved by adding redundant disks (RAID): lost information can be reconstructed from redundant information.
MTTR: the mean time to repair is on the order of hours.
MTTF: the mean time to failure of disks is tens of years.
Spreading the data over multiple disks (striping) forces accesses to several disks in parallel, increasing performance.
Same cost as one big disk, assuming 4 small disks cost the same as one big disk.
Failure of one or more disks is more likely as the number of disks in the system increases.
[Diagram: stripes S0, S1, …, Sn written across one disk set and mirrored on a second set]
RAID 1 (mirroring):
- # redundant disks = # of data disks, so twice the cost of one big disk
- Writes have to be made to both sets of disks, so writes would be only 1/2 the performance of RAID 0
- If a disk fails, the system just goes to the mirror for the data
RAID 01 or 10?
[Diagram: a bit-interleaved array, with bit 0 of stripe S0 on each data disk plus a parity disk; one data disk fails]
In a bit-interleaved parity disk array, data is conceptually interleaved bit-wise over the data disks, and a single parity disk is added to tolerate any single disk failure. Each read request accesses all data disks, and each write request accesses all data disks and the parity disk. After a failure, reads require reading all the operational data disks as well as the parity disk to calculate the missing data that was stored on the failed disk.
Cost of higher availability is reduced to 1/N where N is the number of disks in a protection group
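A single parity disk suffices because parity is just the XOR of the data disks: XOR-ing the surviving disks with the parity disk regenerates the lost one. A small sketch (the disk contents are made up):

```python
from functools import reduce
from operator import xor

def xor_blocks(blocks):
    """XOR corresponding bytes across a list of equal-sized blocks."""
    return bytes(reduce(xor, bs) for bs in zip(*blocks))

data_disks = [b"\x0f\x12", b"\xf0\x34", b"\xaa\x56"]
parity = xor_blocks(data_disks)          # written to the parity disk

# Disk 1 fails: rebuild it from the surviving data disks plus parity
rebuilt = xor_blocks([data_disks[0], data_disks[2], parity])
assert rebuilt == data_disks[1]
```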
RAID 4 (block-interleaved parity):
- The cost of higher availability is still only 1/N, but the parity is stored as blocks associated with a set of data blocks
- Still four times the throughput; # redundant disks = 1 per protection group
- Supports small reads and small writes (reads and writes that go to just one, or a few, data disks in a protection group)
- By watching which bits change when writing new information, only the corresponding bits on the parity disk need to change
- The parity disk must be updated on every write, so it is a bottleneck for back-to-back writes
- Can tolerate limited disk failure, since the data can be reconstructed
Small Writes
[Diagram: a small write to D3 reads the old D3 and parity P, then writes the new D3 and the updated P]
RAID 5 (distributed block-interleaved parity):
- The cost of higher availability is still only 1/N, but the parity is spread throughout all the disks, so there is no single parity-disk bottleneck for writes
- Still four times the throughput; # redundant disks = 1 per protection group
- Supports small reads and small writes (reads and writes that go to just one, or a few, data disks in a protection group)
- Allows multiple simultaneous writes as long as the accompanying parity blocks are not located on the same disk
By distributing parity blocks to all disks, some small writes can be performed in parallel
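The small-write shortcut is plain XOR arithmetic: the new parity equals the old parity XOR the old data XOR the new data, so only two disks are touched. A sketch with made-up block contents:

```python
from functools import reduce
from operator import xor

def xor_blocks(*blocks):
    """XOR corresponding bytes across equal-sized blocks."""
    return bytes(reduce(xor, bs) for bs in zip(*blocks))

d0, d1, d2 = b"\x11", b"\x22", b"\x44"
parity = xor_blocks(d0, d1, d2)                  # full-stripe parity

new_d1 = b"\x99"
# Small write: read old D1 and P, XOR in the change, write new D1 and new P
new_parity = xor_blocks(parity, d1, new_d1)

assert new_parity == xor_blocks(d0, new_d1, d2)  # matches a full recompute
```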
RAID-6 Definition
- Data remains available with up to 2 disk failures
- Optimal solutions require a capacity of N+2 disks; user data occupies a capacity equivalent to N disks
- Overhead: e.g. for 4+2 it is 33%, for 14+2 it is 12.5%
- An update of a single block requires computation of 2 independent parity blocks
- Two methods of parity independence are considered:
  - An orthogonal function to calculate the 2nd parity element (e.g. P+Q)
  - 2-dimensional redundancy (two-way XOR)
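The overhead figures follow directly from the N+2 layout: 2 parity disks out of N+2 total.

```python
def raid6_overhead(n_data_disks):
    """Fraction of raw capacity used by the two parity disks in an N+2 array."""
    return 2 / (n_data_disks + 2)

print(f"4+2:  {raid6_overhead(4):.0%}")   # 33%
print(f"14+2: {raid6_overhead(14):.1%}")  # 12.5%
```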
Market Need
Access Density
MTDL: Mean Time to Data Loss, e.g. a 2nd disk failure during a RAID-5 rebuild
With RAID-6, data is still available after a 2nd failure, and there is only a minuscule chance of a 3rd failure during the rebuild window
[Diagram: a striped array of blocks D0,0 through D3,3 plus a spare disk; data block (1,1) is protected by parity block (2,1)]
RAID 2, 4, and 5 fail to handle the accumulation of latent defects:
- Defects accumulate undetected (even with disk auto-scrub)
- Another disk in the array eventually fails (sudden)
- The subsequent rebuild to a spare disk then fails (stripe incomplete)
RAID-6: an array comprised of N+2 disks, where N = the number of data disks. An example of a dual-XOR RAID-6 algorithm with N=4:
[Diagram: dual-XOR RAID-6 stripes 0 through 5 laid out across Disks 0 through 5; data block (1,1) is protected by a horizontal parity block (2,2) and a diagonal parity block (2,5)]
RAID Summary
- Discuss DAS-based storage
- Describe the elements of DAS
- Describe the connectivity options for DAS
- Discuss DAS limitations
Direct Attached
What is DAS?
Internal
DAS Architecture
DAS uses an I/O channel architecture, which resides between a computer (initiator) and the device (target) used to store its data.
The storage device is accessible only by the attached host computer.
Connectivity and Storage
- Hard disk(s)
- CD-ROM drive
- Optical drive
- Removable media
- Tape devices/tape library
- RAID/intelligent array(s)
- Portable media drives
DAS Connectivity
SCSI:
- Parallel (primarily for internal bus)
- Serial (external bus)
External DAS
[Diagram: a host with ports A and B attached to an external storage device]
DAS limitations
[Diagram: clients on a LAN reach application servers, each with its own SCSI- or FC-attached storage and an FC tape device]
- Resource administration must be done host by host
- No optimization
- Limited scalability and performance
- Limited maximum distance between devices
- Inaccessibility of data during maintenance
- Difficult backup management
Introduction of SAN
A set of equipment and technologies that moves storage onto a network: a network resource used exclusively for storage.
SAN Features
[Diagram: servers attached through the SAN to shared storage arrays]
Benefits of a SAN
- High bandwidth (Fibre Channel)
- Block I/O (SCSI extension)
- Centralized storage and management
- Up to 16 million devices
- Resource consolidation
- Scalability
- Describe the benefits of IP SAN
- Describe IP convergence in the SAN and its implications
- Describe and discuss the basic architectures of FCIP and iFCP
In this Module
This module contains the following lessons:
Internet Protocol
- Describe the benefits of IP SAN
- Describe IP convergence in the SAN and its implications
- List the three common IP SAN approaches
- List the three deployment models (topologies) for IP SAN
Introduction
[Diagram: servers connected through switches to storage; legend distinguishes IP links from FC links]
- IP encapsulation is done on the host / HBA (host bus adapter)
- A hardware-based gateway connects to Fibre Channel storage
[Diagram: FCIP and iFCP protocol stacks, with FC frames carried over IP between FC/IP gateways]
IP Storage Approaches
[Diagram: the three IP storage approaches: FCIP (servers to FC storage through FCIP routers and an IP network), iFCP (through iFCP switches), and iSCSI (through an iSCSI/FC gateway)]
- Native: all Ethernet (no Fibre Channel), using the iSCSI protocol over Ethernet switches and routers
- Bridging: IP hosts bridged to FC storage
- Extension: IP links extending FC SANs
Benefits of IP SAN
- Cost effective
- Extends the reach of a SAN
- Most organizations already have IP networks and familiarity with traditional network management
- Leverages existing Fibre Channel applications
- Standard Fibre Channel distances; IP extends Fibre Channel applications over regional/global distances
- At higher link speeds, IP can handle synchronous applications
- Encapsulates FC frames in IP packets
- Creates virtual FC links that connect devices and fabric elements
- Includes security, data-integrity, congestion, and performance specifications
[Diagram: FCIP encapsulation: a Fibre Channel frame (SOF, FC header, SCSI data, CRC, EOF) carried as the payload of an IP datagram behind IP, TCP, and FCIP headers]
FCIP Benefits
- Low latency
- High reliability (Fibre Channel)
- Off-the-shelf solutions
- Mature standards
iFCP Benefits
- Works with a wide range of devices
- Flexible
- Less potential for bottlenecking than FCIP
[Diagram: an FC tape library and FC loop disks linked through iFCP gateways across an IP network]
iFCP
[Diagram: SAN A and SAN B, each with servers and a switch, connected by iFCP gateways]
iSCSI
- A method to transfer blocks of data using the TCP/IP network
- A serialized service-delivery subsystem
- SCSI protocol over IP
[Diagram: iSCSI encapsulation: SCSI data wrapped with iSCSI, TCP, and IP headers to form an IP datagram]
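The layering can be sketched as plain byte concatenation (purely illustrative: real iSCSI PDUs, TCP segments, and IP datagrams have binary header formats, not these placeholder strings):

```python
def iscsi_encapsulate(scsi_data: bytes) -> bytes:
    """Wrap SCSI data in iSCSI, TCP, and IP headers, innermost first."""
    iscsi_pdu = b"[iSCSI hdr]" + scsi_data     # iSCSI header + SCSI payload
    tcp_segment = b"[TCP hdr]" + iscsi_pdu     # TCP header + iSCSI PDU
    ip_datagram = b"[IP hdr]" + tcp_segment    # IP header + TCP segment
    return ip_datagram

packet = iscsi_encapsulate(b"scsi-command-and-data")
# On the wire the headers appear outermost-first: IP, then TCP, then iSCSI
assert packet.startswith(b"[IP hdr][TCP hdr][iSCSI hdr]")
```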
[Diagram: native and bridging iSCSI topologies: servers on an IP switch reach storage directly or through a gateway]
Controlling IP SANs
iSNS
[Diagram: initiators A, B, and C on the IP network discover target Z's devices A, B, and C through an iSNS server]
iSNS is a client/server model. The iSNS server is passive; iSNS clients register and manipulate the objects in the iSNS server. An iSNS server can be hosted on a target, an initiator, or a stand-alone server with a specified IP address.
iSCSI Nodes
- A single initiator or target
- Names are assigned to all nodes, independent of address
[Diagrams: native iSCSI topologies: servers accessing storage disks across an IP network]
Module Summary
Topics in this module included:
- The benefits of IP SAN
- IP convergence in the SAN and its implications
- The basic architecture of FCIP
- The basic architecture of iFCP
- The basic architecture of iSCSI
- Application of IP SAN technology
- Discuss the benefits of a NAS-based storage strategy
- Describe the elements of NAS
- Discuss connectivity options for NAS
- Discuss NAS management considerations
- Identify the best environments for NAS
In this Module
This module contains the following lessons:
Network Attached
- Define NAS and describe its key attributes
- List the benefits of NAS
- Describe NAS connectivity
NAS Evolution
Stand-alone PC → portable media for file sharing → networked PCs → networked file sharing
What is NAS?
NAS is shared storage on a network infrastructure.
[Diagram: clients, an application server, and a print server on the network, with a NAS head in front of storage and a self-contained NAS device]
Why NAS
- Supports global information access
- Improves efficiency
- Provides flexibility
- Centralizes storage
- Simplifies management
- Scalability
- High availability through native clustering
Today, critical business applications (databases) sit on islands of information.
NFS and CIFS (Windows)
NFS:
- Client/server application
- Uses RPC mechanisms over the TCP protocol
- Mount points grant access to remote hierarchical file structures from the local file system
- Access to the mount can be controlled by permissions
CIFS:
- Public version of the Server Message Block (SMB) protocol
- Client applications access files on a computer running server applications that accept the SMB protocol
- Better control of files than FTP
- Potentially better access than Web browsers and HTTP
I/O Example
[Diagram: the NAS device software stack: network interface, TCP/IP stack, NFS/CIFS, NAS operating system, storage protocol, storage interface]
I/O layer
- Configure network interfaces
- Create, mount, or export file systems
- Install, configure, and manage all data movers/filers
- Can be accessed locally or remotely

NAS Head-to-Storage Connectivity
[Diagram: NAS heads and a NAS gateway on the IP network, attached to storage through an FC fabric]
[Diagram: a NAS head on the IP network with direct-attached storage]
[Diagram: a NAS gateway in front of storage]
[Diagram: Windows clients accessing NAS over the Internet/intranet]
Lesson Summary
Key points covered in this lesson:
NAS Challenges
Speed:
- Network latency and congestion
- Protocol stack inefficiency
- Application response requirements
Module Summary
Key points covered in this module:
- A NAS server is a specialized appliance optimized for file-serving functions
- Overview of the physical and logical elements of NAS
- Connectivity options for NAS
- NAS connectivity devices
- Best environments for NAS solutions
Virtualization Technologies
Upon completion of this module, you will be able to:
- Identify different virtualization technologies
- Describe block-level virtualization technologies and processes
- Describe file-level virtualization technologies and processes
Defining Virtualization
Virtualization provides logical views of physical resources while preserving the usage interfaces for those resources Virtualization removes physical resource limits and improves resource utilization
- Higher rates of usage
- Simplified management
- Platform independence
- More flexibility
- Lower total cost of ownership
- Better availability
[Diagram: server, storage network, storage]
Virtual Memory
[Diagram: applications sharing physical memory, backed by swap space]
Benefits of virtual memory:
- Removes physical-memory limits
- Run multiple applications at once
Virtual Networks
Each application sees its own logical network, independent of the physical network.
[Diagram: VLAN A, VLAN B, and VLAN C carried across switches over a VLAN trunk]
Benefits of virtual networks:
- Common network links with the access-control properties of separate links
- Manage logical networks instead of physical networks
- Virtual SANs provide similar benefits for storage-area networks
Server-Virtualization Basics
Before server virtualization:
- Single operating system image per machine
- Software and hardware tightly coupled
- Running multiple applications on the same machine often creates conflicts
- Underutilized resources
After server virtualization:
- Virtual machines (VMs) break the dependencies between operating system and hardware
- Manage operating system and application as a single unit by encapsulating them into a VM
- Strong fault and security isolation
- Hardware independence: VMs can be provisioned anywhere
[Diagram: the storage I/O path: server, connectivity, volume management, and storage (LUNs) across the storage network]
Storage Virtualization Requires a Multi-Level Approach
Intelligence should be placed closest to what it controls:
- Server: application functions, data-access functions
[Diagram: virtualization points in the environment: file-level (NAS heads, NAS gateway) on the LAN and block-level (multipathing software, storage network), overseen from a management station]
Scale: virtualization technology aggregates multiple devices; it must scale in performance to support the combined environment.
Functionality: virtualization technology masks existing storage functionality; it must provide the required functions, or enable the existing ones.
Management: virtualization technology introduces a new layer of management; it must be integrated with existing storage-management tools.
Support: virtualization technology adds new complexity to the storage network; it requires vendors to perform additional interoperability tests.
[Diagram: before, five standard environments of 20,000 each; after, a single virtualized environment of 100,000+]
Before and after virtualization the same replication features are available (mirrors, clones, and snapshots; protected and instant restores; synchronous and asynchronous replication; consistency groups), but afterward the virtualization device provides end-to-end management.

Interoperability spans server types, OS versions, network elements, and storage-software products, and brings new hardware-qualification requirements, service and support ownership questions, and problem escalation and resolution.
Storage Virtualization: Block and File Level
[Diagram: virtual storage presented across the storage network and the IP network]
Benefits of virtual storage:
- Nondisruptive data migrations
- Access files while migrating
- Increased storage utilization
Out-of-band:
- No state / no cache
- I/O at wire speed
- Full fabric bandwidth
- High availability and scalability
- Value-add functionality
In-band:
- State / cache
- I/O latency
- Limited fabric ports
- More suited to static environments or environments with less growth
- Value-replace functionality
- Presented to the host as a single storage device
- A mapping is used to redirect I/O on this device to the underlying physical arrays
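A minimal sketch of that mapping idea (the extent table, array names, and function are hypothetical, for illustration only):

```python
# Each entry: (virtual start LBA, length, backing array, physical start LBA)
extent_map = [
    (0,    1000, "array-A", 50_000),
    (1000, 1000, "array-B", 0),
]

def redirect(virtual_lba):
    """Translate an I/O on the virtual device to (array, physical LBA)."""
    for start, length, array, phys_start in extent_map:
        if start <= virtual_lba < start + length:
            return array, phys_start + (virtual_lba - start)
    raise ValueError("LBA outside the virtual device")

assert redirect(0) == ("array-A", 50_000)
assert redirect(1500) == ("array-B", 500)
```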
[Diagram: a virtualization layer between two SANs]
NAS devices/platforms
Before:
- Every NAS device is an independent entity, physically and logically
- Underutilized storage resources
- Downtime caused by data migrations
After:
- Dependencies between end-user access and data location are broken
- Storage utilization is optimized
- Migrations are nondisruptive
[Diagram: file virtualization over IP in front of multivendor NAS systems and their file systems]
- Move data while existing data is still being written and accessed
- Update the global namespace
File-data migration:
- Event log
- File virtualization inserted into the I/O path
- Client redirection
- Global namespace updated
- Migration completes without downtime
Accelerated Consolidation
Before: buying more file servers for additional storage; complex migrations; low average utilization.
After: eliminate servers via migration to underutilized servers; increased utilization; full read/write access is maintained during migration, transparent to users.
[Diagram: servers 1 through 4 consolidated behind file virtualization on the IP network]
Situation:
- Billions of files with thousands to hundreds of thousands of clients
- Update the namespace and retain access to files while migrating
- Update 1,000 client namespaces over the weekend
Scenario:
- Without file virtualization: 95% successful, with 50 typos or glitches, 50 calls during the migration, and 50 very angry employees
- With file virtualization: zero mistypes, 100% access
Before:
- Complex file-server environments
- Namespace changes are time-consuming
- Multiple shares or mounts per client, e.g. T:\svr1\SHARE1 (Windows), S:\svr2\SHARE2 (NetApp), H:\svr3\SHARE3 (Celerra), G:\svr4\SHARE4 (UNIX)
After:
- Multiple file systems appear as a single virtual file system via a standard namespace
- Simplified management and continuous access to files and folders
- Standard-namespace entries are updated (UNIX, Linux, Windows)
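The after picture amounts to one level of indirection: clients resolve paths through a namespace table, so a migration only rewrites the table. A sketch reusing the server/share names above (the table and `resolve` helper are hypothetical):

```python
# Virtual namespace prefix -> share that currently holds the data
namespace = {
    "/corp/finance": "//svr1/SHARE1",
    "/corp/eng":     "//svr2/SHARE2",
}

def resolve(path):
    """Map a virtual path to its current physical location."""
    for prefix, share in namespace.items():
        if path.startswith(prefix):
            return share + path[len(prefix):]
    raise KeyError(path)

# Migrating finance data to svr3 only updates the table; client paths are unchanged
namespace["/corp/finance"] = "//svr3/SHARE3"
assert resolve("/corp/finance/q1.xls") == "//svr3/SHARE3/q1.xls"
```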
[Diagram: files A1 through A3, B1 through B2, and C1 through C3 distributed across the servers]
Module Summary
Key points covered in this module:
- Virtualization technologies
- Block-level virtualization technologies and processes
- File-level virtualization technologies and processes
SNIA's SMI-S
[Diagram: SMI-S attributes: platform independent, distributed, automated discovery, security, technology locking, object oriented, with a CIM/WBEM interface; MOF descriptions for a tape library, switch, and array; management tools covering container, volume, and media management; data management covering file system, database manager, backup, and HSM]
Based on:
- The Web-Based Enterprise Management (WBEM) architecture
- The Common Information Model (CIM)
Features:
- A common, interoperable, and extensible management transport
- A complete, unified, and rigidly specified object model that provides for the control of a SAN
- Describes the management of data
- Details requirements within a domain
- An information model with required syntax
- Describe the individual component tasks that must be performed to achieve overall data-center management objectives
- Explain the concept of Information Lifecycle Management
Managing the Data
[Diagram: monitoring the data center: host/server clusters with HBAs and redundant ports into the SAN and IP network, with keep-alives feeding capacity, performance, and security reporting]
Capacity Management
Availability Management:
- Eliminate single points of failure
- Backup and restore
- Local and remote replication
Security Management:
- Prevent unauthorized activities or access
Performance Management:
- Configure/design for optimal operational efficiency
- Performance analysis
- Identify bottlenecks
- Recommend changes to improve performance
Reporting
Encompasses all data-center components; used to provide information for capacity, availability, security, and performance management. Example: capacity planning.
[Diagram: storage-provisioning steps: configure new volumes on the array, assign ports, zone the SAN, and manage volumes on the host (used versus reserved)]
Configure the array:
- Choose the RAID type, size, and number of volumes; the physical disks must have the required space available
- Assign the volumes to array front-end ports; this is automatic on some arrays, while on others the step must be performed explicitly
[Diagram: an intelligent storage system: front end, cache, and back end in front of physical disks, with LUNs configured as RAID 0, RAID 1, and RAID 5]
Install the HBA hardware and software (device driver) on the new server and configure it.
[Diagram: a new server with redundant HBAs, HBA drivers, and multipathing software]
Zone the HBAs of the new server to the designated array front-end ports via redundant fabrics.
- Are there enough free ports on the switches (SW1, SW2)?
- Check the array port utilization.
[Diagram: the new server zoned through switches SW1 and SW2 to ports on the storage array]