Introduction to Storage

Dell Storage
Appendix A - Objectives
• On completion of this module, you will be able to:
– Identify the different storage types
– Differentiate between a SAN and NAS storage network
– Explain Storage Area Network (SAN) terms, components and technologies
– Explain the benefits of a SAN
– Describe the differences between Fibre Channel and iSCSI SANs

2
Storage Technologies
Direct Attached Storage | Network Attached Storage | Storage Area Network

 Direct Attached Storage (disk storage attached to each server)
  High cost of ownership
  Inflexible

 Network Attached Storage (disk storage on the LAN)
  Optimized for file transactions
  File storage traffic travels over the Ethernet network

 Storage Area Network (disk and tape storage on the SAN)
  Optimized for block I/O data movement
  Separate storage network and data network
3
Direct Attached Storage (DAS)
 DAS – Direct Attached Storage
 Only the attached server can access the
storage
 DAS characteristics
 Difficult to manage, especially in scale
 Limited functionality
 Poor asset utilization
 Trapped or captive storage
 Limited scalability
 A large percentage of storage is still DAS
 A large installed base – easy pickings for iSCSI
4
Network Attached Storage – NAS
Consolidation – With Scaling and Compatibility Issues

 Dedicated file service using IP-based
protocols such as CIFS and NFS
 NAS storage operations are performed
at the file level; utilized for general-purpose
file sharing
 Limited application use; only used for
file services and HTTP, with limited
acceptance for databases
 All traffic runs across the same
Ethernet network:
  Data (client/server traffic)
  Metadata
  Storage traffic
5
Storage Area Network (SAN)
Consolidate All Storage

 Storage operations performed at the
block level
 Ideal for data-intensive application
servers
 Delivers speedy data transfer, backup,
and recovery
 Any server can access any/all storage
 Ability to separate data traffic (network
traffic only) from storage traffic (SAN
traffic)
6
NAS vs. SAN or NAS and SAN
 SAN and NAS have often been misunderstood and incorrectly
pitted against one another. In truth, these technologies are
complementary rather than competing.
 In concept, both NAS and SAN provide storage resources to a
computing environment but they differ greatly in:
 Protocols used to communicate
 Configuration
 Network delivery methods
 I/O characteristics
 Each has unique applications to which it is suited. In
practice, they can be used together in powerful solutions that
leverage features from both technologies.

7
The Migration to Storage Networks
[Diagram: servers with direct-attached SCSI storage migrating to a
switched Gigabit Ethernet network carrying SCSI over TCP/IP]

 Storage is migrating from direct-attached to networked.
 Strategic value drives the case for networking storage:
  Accommodates storage growth
  Dramatically reduces overall ownership costs
  DAS isolates disks and limits overall utilization
  Improves ease of management, speed, and distance
  Dramatically improves the difficult, time-consuming process of data
backup
  Leverages the benefits of cluster computing
8
Two Major Types of SANs
Fibre Channel SAN | iSCSI SAN

[Diagram: the Fibre Channel SAN connects servers through redundant FC
switches (Fabric A and Fabric B) to a disk array and tape library; the
iSCSI SAN connects servers through a GbE switch to the same kinds of
devices]
9
Fibre Channel vs. iSCSI – Cost Comparison

 Fibre Channel
  HBAs: ~$1,000 each; typically two are required for redundancy –
~$2,000
  FC switch: ~$25,000 for a 24-port switch, ~$900 per port; you
typically require two fabrics (so additional FC switches), times 2 –
~$1,800
  Cost: ~$3,800 per server

 iSCSI
  NICs: free if using the on-board NIC, with a free software initiator
  iSCSI HBAs: ~$400 each; typically two are required for redundancy –
~$800
  GbE switch: ~$4,500 for a 24-port switch, ~$190 per port; times 2 –
~$380
  Best cost: ~$380 per server; worst cost: ~$1,180 per server
10
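The per-server arithmetic above can be sketched as a quick calculation. The dollar figures are the deck's illustrative list prices from the era of the slide, not current pricing:

```python
# Per-server SAN connection cost, using the slide's example prices.
FC_HBA = 1000        # per FC HBA; two needed for redundancy
FC_PORT = 900        # per FC switch port; two fabrics = two ports
ISCSI_HBA = 400      # per iSCSI HBA (worst case; software initiator is free)
GBE_PORT = 190       # per Gigabit Ethernet switch port

fc_cost = 2 * FC_HBA + 2 * FC_PORT           # redundant HBAs + one port per fabric
iscsi_best = 2 * GBE_PORT                    # on-board NICs, free software initiator
iscsi_worst = 2 * ISCSI_HBA + 2 * GBE_PORT   # dedicated iSCSI HBAs instead

print(fc_cost, iscsi_best, iscsi_worst)  # 3800 380 1180
```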
The Promise of SANs
[Diagram: clustered servers and a back-up server sharing storage
through the SAN]

 Allows
  Massively extended scalability
  Greatly enhanced device connectivity

 Enables
  Storage consolidation
  Server clustering
  More efficient backup
  Better utilization of storage than DAS

 Provides
  Heterogeneous data sharing
  Disaster recovery – remote mirroring

11
Scalability and Performance
 Storage expansion
 No impact on servers

 Server expansion
 No impact on storage

 Load balancing
 Active parallel paths, MPIO
 Data spreading

 Bandwidth
 Bandwidth on demand with robust
topology
 Larger frames (9000 bytes) with
jumbo frames
 High speed connections, Gigabit
Ethernet
12
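The bandwidth benefit of jumbo frames can be quantified with a rough efficiency calculation. This is a sketch assuming standard 20-byte IP and 20-byte TCP headers plus 38 bytes of Ethernet per-frame overhead (preamble, header, FCS, inter-frame gap); real traffic with TCP options differs slightly:

```python
# Rough fraction of wire bandwidth carrying TCP payload, per frame size.
ETH_OVERHEAD = 38     # preamble + Ethernet header + FCS + inter-frame gap
IP_TCP_HEADERS = 40   # 20-byte IP header + 20-byte TCP header

def payload_efficiency(mtu: int) -> float:
    payload = mtu - IP_TCP_HEADERS
    return payload / (mtu + ETH_OVERHEAD)

print(f"{payload_efficiency(1500):.3f}")  # ~0.949 with standard frames
print(f"{payload_efficiency(9000):.3f}")  # ~0.991 with jumbo frames
```

Fewer, larger frames also mean fewer interrupts and less per-packet CPU work on the host, which matters as much as the raw header overhead.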
SANs Provide High Availability
 Multiple levels of redundancy are
configurable throughout the data path

 Multiple access paths allow for failover
capabilities
  Multiple switched networks
  MPIO
  Multi-homed servers
  Multi-homed storage

 De-coupling of storage from the application
service allows it to be managed
independently

 Data vaulting and disaster recovery

13
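The failover idea behind MPIO can be illustrated with a minimal round-robin path selector that skips failed paths. This is a toy sketch of the concept, not any vendor's MPIO implementation; the path names are hypothetical:

```python
import itertools

class PathSelector:
    """Round-robin over configured paths; fail over by skipping dead ones."""

    def __init__(self, paths):
        self.paths = list(paths)
        self.failed = set()
        self._rr = itertools.cycle(self.paths)

    def mark_failed(self, path):
        self.failed.add(path)

    def mark_restored(self, path):
        self.failed.discard(path)

    def next_path(self):
        # Try each configured path at most once per selection.
        for _ in range(len(self.paths)):
            candidate = next(self._rr)
            if candidate not in self.failed:
                return candidate
        raise RuntimeError("no healthy paths remain")

sel = PathSelector(["fabric-A", "fabric-B"])
sel.mark_failed("fabric-A")              # simulate a switch or HBA failure
print(sel.next_path(), sel.next_path())  # fabric-B fabric-B
```

With both paths healthy, I/O alternates between fabrics (load balancing); when one fails, all traffic transparently shifts to the survivor.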
iSCSI SAN Anatomy
 Hosts are attached to switches using
HBAs or NICs.

 A “network” may consist of one or more
switches (in this case Gigabit switches).

 Storage is consolidated and partitioned for
different applications and purposes, and
may consist of different types of disk and
tape.

 Storage network management should
provide visibility and control.
14
SAN Components
 Servers/initiators issue read/write commands to
the storage.

 Storage/targets receive the read/write commands.

 SAN network:
  The specific infrastructure of switches
that connects servers and storage together

 HBAs/disk controllers:
  Provide the access point into the SAN
  Run various protocols such as
Fibre Channel

 Traffic between servers and storage:
  Typically block I/O services rather than file access
services
  SCSI commands pass between initiator and target over
iSCSI and TCP/IP
  Standards-based

15
SAN Components
 Servers with HBAs or NICs

 Storage systems:
 RAID disk arrays
 JBOD disks
 Tape
 Optical

 Gigabit switches

 SAN management software

16
HBAs, NICs, and Cables
 Network cards
  HBAs
  NICs – server-class 64-bit PCI

 Features
  Copper or fiber (LC connector) support
  Jumbo frame support
  SNMP and MIB compliance
  Flow control support

 Cables
  Fiber optic (LC connector, SFP module)
  Copper, Cat5e
17
SAN Storage – RAID
Redundant Array of Inexpensive (Independent) Disks

 Fault-tolerant grouping of disks that
the server sees as a single disk
volume

 A set of methods and algorithms for
combining multiple disk drives as a
group in which the attributes of the
multiple drives are better than those of
the individual disk drives

 Reduces the risk of data loss
(the risk of losing data due to a defective
or failing disk drive), cost, or
performance

 Different available implementations
offer trade-offs between these three
factors

[Diagram: a disk controller exposing three ports, shown with a
1-disk-to-1-LUN mapping (Disk 0/LUN 0, Disk 1/LUN 1, Disk 2/LUN 2)
and a multiple-disks-to-1-LUN mapping (LUNs 0–2 striped across
Disks 0–2)]
18
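The "multiple disks to one LUN" mapping amounts to a simple address translation performed by the controller. The following sketch shows that translation for plain striping (RAID 0); real controllers layer caching and redundancy on top of this:

```python
def locate_block(lba: int, stripe_blocks: int, num_disks: int):
    """Map a LUN logical block address to (disk index, block offset on
    that disk) for a simple stripe across num_disks drives."""
    stripe_number = lba // stripe_blocks
    disk = stripe_number % num_disks                 # stripes rotate across disks
    offset = (stripe_number // num_disks) * stripe_blocks + lba % stripe_blocks
    return disk, offset

# With 3 disks and 4-block stripes, consecutive stripes land on
# successive disks, then wrap to the next "row":
print(locate_block(0, 4, 3))   # (0, 0)  -> Disk 0
print(locate_block(4, 4, 3))   # (1, 0)  -> Disk 1
print(locate_block(12, 4, 3))  # (0, 4)  -> back to Disk 0, next row
```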
Storage RAID levels 0, 1, 5 & 6
 RAID 0 – striping without parity
  Not a fault-tolerant RAID solution.
If one drive fails, all data within the entire array
is lost. It is used where raw speed is the major
objective. [Diagram: four-disk RAID 0 set]

 RAID 1 – mirroring
  Provides complete protection and is used in
applications containing mission-critical data. It
uses mirroring of paired disks, where one
physical disk is partnered with a second
physical disk. [Diagram: two two-disk mirrored sets]

 RAID 5 – striping with parity
  Parity information is interspersed across the
drive array. RAID 5 requires a minimum of 3
drives. [Diagram: four-disk RAID 5 set with
distributed parity blocks]

 RAID 6 – striping with dual parity
  An extension of RAID 5; dual parity information is
interspersed across the drive array. RAID 6
requires a minimum of 4 drives.
19
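The "parity" that RAID 5 and RAID 6 stripe across the drives is, at its core, a byte-wise XOR of the data blocks, which is what lets the array rebuild a failed drive. A minimal demonstration:

```python
def xor_parity(blocks):
    """Compute a RAID 5-style parity block: byte-wise XOR of all blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data blocks in one stripe
parity = xor_parity(data)

# If the drive holding data[1] fails, XOR-ing the survivors with the
# parity block rebuilds the lost data:
rebuilt = xor_parity([data[0], data[2], parity])
print(rebuilt == data[1])  # True
```

RAID 6 adds a second, independently computed parity block (using Reed-Solomon-style coding rather than a second XOR), which is why it survives two simultaneous drive failures.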
PS Series Storage Array Raid levels
 RAID 10 (performance)
 RAID 10. The group’s disks are automatically organized into multiple RAID 1
(mirrored) RAIDsets, and data is striped across the RAIDsets. One or more disks are
reserved as spares. For example:
 A 7-drive array will configure three RAID 1 RAIDsets with two disks in each RAIDset and
one spare disk.
 A 14-drive array will configure six RAID 1 RAIDsets with two disks in each RAIDset and
two spare disks.

 RAID 50 (capacity)
 RAID 50. The group’s disks are automatically organized into two RAID 5
(distributed-parity) RAIDsets, and data is striped across the RAIDsets. One or more
disks are reserved as spares. For example:
 A 7-drive array will configure two RAID 5 RAIDsets with three disks in each RAIDset and
one spare disk.
 A 14-drive array will configure two RAID 5 RAIDsets with six disks in each RAIDset and
two spare disks.

 RAID 5 (capacity)
  RAID 5. The group’s disks are automatically organized into one RAID 5 (parity)
RAIDset, and data is striped across the RAIDset. One disk is reserved as a spare. For
example:
  A 7-drive array will configure one RAID 5 RAIDset with 6 disks in the RAIDset and one
spare disk.
  A 14-drive array will configure one RAID 5 RAIDset and one spare disk.
20
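The layouts above follow a pattern that can be sketched numerically. This is an illustration fitted to the slide's 7- and 14-drive examples only; the actual PS Series firmware decides the real layout, and the slide's 14-drive RAID 5 case (one spare) is not captured by this simple spare rule:

```python
def ps_layout(drives: int, raid: str):
    """Return (raidsets, disks_per_raidset, spares) reproducing the
    slide's 7- and 14-drive examples. Illustrative only."""
    spares = 1 if drives <= 7 else 2
    usable = drives - spares
    if raid == "RAID 10":
        return usable // 2, 2, spares    # RAID 1 mirrored pairs, striped
    if raid == "RAID 50":
        return 2, usable // 2, spares    # two RAID 5 sets, striped
    if raid == "RAID 5":
        return 1, usable, spares         # one parity RAIDset
    raise ValueError(f"unknown RAID policy: {raid}")

print(ps_layout(7, "RAID 10"))   # (3, 2, 1): three mirrors + one spare
print(ps_layout(14, "RAID 50"))  # (2, 6, 2): two 6-disk sets + two spares
```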
Virtualization – two views
 Host or server
 Allows decoupling of the physical hardware from the operating system to deliver
greater IT resource utilization and flexibility.
 Virtualization allows multiple virtual machines with heterogeneous operating systems
to run in isolation, side by side on the same physical machine. Each virtual machine has
its own set of virtual hardware (e.g., RAM, CPU, NIC, etc.) upon which an operating
system and applications are loaded. The operating system sees a consistent, normalized
set of hardware regardless of the actual physical hardware components.

 Storage
 The pooling of physical storage from multiple network storage devices into what
appears to be a single storage device that is managed from a central console. Storage
virtualization is commonly used in a storage area network (SAN). Users can implement
virtualization with software applications or by using hardware and software hybrid
appliances. The technology can be placed on different levels of a storage area network.
 Adding capacity
 Extending a volume

21
Virtualization
 Benefits of virtualization
 Reduces or defers capital expenditures
 Raises asset utilization ratio
 Lowers maintenance costs
 Reduces TCO
 Raises service levels
 Easier management results in fewer human errors
 High performance
 Ease of management and administration
 Reduces administrative time and activities
 Helps the storage administrator perform the tasks of backup, archiving, and
recovery more easily and in less time

22