Jinesh.shah@in.ibm.com
XIV was founded in 2002 and acquired by IBM on December 31, 2007
Disruptive, next generation grid technology providing one general purpose, fully
virtualized storage platform
Full global IBM integration, development, support and services
[Chart: cumulative usable petabytes shipped, growing 1, 1, 3, 12, 18, 31, 46, 77, 95, 128, 160, 209, 226, 256, 289, 348 PB across successive software releases (1.0 through 3.1), January 2008 to March 2012]

Milestones:
- Jan 2008: IBM acquisition
- Dec 2008: XIV Gen2, 6-module configuration
- Jul 2009: XIV Gen2 V10.1 software: faster CPU, Concurrent Code Load, Capacity on Demand, 2812-A14 for warranty
- Apr 2010: XIV Gen2 V10.2.1 software: 2TB drives, 240GB cache, 161TB capacity, LDAP, GUI enhancements
- Jul 2011: XIV Gen3: InfiniBand, new modules, 360GB cache, 8Gb FC
- Mar 2012: SSDs for Gen3, R11.1 software: Gen2-to-Gen3 mirroring, GUI enhancements, SRM5
Space utilization, responsiveness, and cost to business units
The solution
Revolutionary best-in-class performance, reliability, scalability, manageability and TCO
The results:
Grid based block storage
In service at the largest, most demanding customer sites in the world
Steady increase in captured market share during first year within IBM portfolio
Strong, balanced cross-industry adoption
Outstanding customer satisfaction and reference-ability
Use cases spanning a wide variety of mission-critical applications and workload profiles
XIV has disrupted the storage market by doing one thing extraordinarily well: eliminating
complexity rather than merely masking it
[Diagram: XIV hardware layout: host interface, interconnect, and modules. Full rack: 15 modules with 3 UPS units. Partial rack: 6 or 9-14 modules.]
[Diagram: XIV vs. other solutions, I/O path]
- XIV: aggressive, parallel pre-fetching; massive parallelism in I/O processing
- XIV: high cache-to-disk bandwidth, simpler cache management, higher cache hit-ratios
- XIV: each module services only its own disks
- Other solutions: RAID protection with long RAID rebuilds
[Diagram: XIV vs. other solutions, performance behavior]
- Other solutions: require LUN/disk layout planning and performance tuning; suffer disk hot-spots
- XIV: consistent performance through workload changes (peak or average)
- XIV: active/active parallel I/O access; optional 6TB Read Cache Acceleration (SSD)
[Diagram: XIV vs. other solutions, resilience and balance]
- Other solutions: islands of storage, manual intervention, continual data movement, high impact after any failure
- XIV: minimal operational impact upon disk failure (<1%) or module failure (<7%)
- XIV: utilization always balanced, regardless of LUN add/delete/resize
Linear scalability on capacity and performance
[Diagram: Exchange, Oracle, VMware, and ERP workloads on the grid before and after a hardware upgrade. New data and volumes are laid out across the whole new, larger grid; new and existing volumes take advantage of the larger grid. The result: IOPS grows linearly with the number of disks.]
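The balanced, pseudo-random layout behind this behavior can be illustrated with a toy model (hash-based placement is a stand-in of my own; XIV's actual distribution algorithm is proprietary and differs):

```python
import hashlib

def module_for_chunk(chunk_id: int, n_modules: int) -> int:
    """Toy placement: hash the chunk id and map it to a module.
    (Illustrative only; not XIV's real distribution algorithm.)"""
    h = hashlib.sha256(str(chunk_id).encode()).hexdigest()
    return int(h, 16) % n_modules

def distribution(n_chunks: int, n_modules: int) -> list[int]:
    """Count how many chunks land on each module."""
    counts = [0] * n_modules
    for c in range(n_chunks):
        counts[module_for_chunk(c, n_modules)] += 1
    return counts

# 100,000 chunks over a 6-module partial rack: counts stay close
# to 100000/6, i.e. utilization is balanced with no tuning.
before = distribution(100_000, 6)
# After a hardware upgrade to a 15-module full rack, the same chunks
# spread over the larger grid, so per-module load drops proportionally.
after = distribution(100_000, 15)
print(before, after)
```

The point of the sketch: balance falls out of the placement function itself, which is why no manual layout or rebalancing step appears anywhere in the XIV workflow.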
IBM System Storage™, 2012 IBM Corporation
XIV Data Redistribution (Failure Scenario)
Intelligently reacts to HW faults
Equilibrium is always maintained
Single disk rebuild (idle system)
Eliminates parity limitations
Superior data integrity

Rebuild completion time at 100% capacity used:
| Drives | Gen2 1TB | Gen3 2TB | Gen3 3TB |
| Completion time (min) | 30 | 48 | 76 |
[Diagram: snapshots as pointer maps referencing shared data chunks on Data Module 1 and Data Module 2, connected over InfiniBand]
Soft space (logical capacity) vs. hard space (actual capacity)
- XIV is data aware: it works on written data only (volumes, snapshots, replication, rebuilds)
- A storage pool is a logical construct, used for administrative purposes
- Physical partitions are spread across all drives; no performance impact
- Storage pool types: Regular (soft = hard) and Thin (soft > hard)
- Pools can be converted between regular and thin

[Diagram: Volumes 1-3 allocated within a pool's soft and hard space]
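A minimal sketch of the soft/hard space relationship (class and method names are mine, purely illustrative; they are not XIV API objects):

```python
class StoragePool:
    """Toy model of an XIV storage pool (illustrative only).

    soft = logical capacity presented to hosts
    hard = physical capacity actually reserved
    A 'regular' pool has soft == hard; a 'thin' pool has soft > hard.
    """
    def __init__(self, soft_tb: float, hard_tb: float):
        assert soft_tb >= hard_tb, "soft capacity must be >= hard capacity"
        self.soft_tb = soft_tb
        self.hard_tb = hard_tb
        self.written_tb = 0.0  # only written data consumes hard space

    @property
    def is_thin(self) -> bool:
        return self.soft_tb > self.hard_tb

    def write(self, tb: float) -> None:
        """Writing new data consumes hard space; fail when it runs out."""
        if self.written_tb + tb > self.hard_tb:
            raise RuntimeError("hard space exhausted: expand the pool")
        self.written_tb += tb

# A thin pool: 100 TB presented to hosts, 40 TB physically backed.
pool = StoragePool(soft_tb=100, hard_tb=40)
pool.write(25)   # fine: only the 25 TB actually written consumes hard space
print(pool.is_thin, pool.written_tb)   # True 25.0
```

Converting a regular pool to thin, in this model, is just raising `soft_tb` above `hard_tb`, which matches the bullet above that conversion is a pool-level administrative action.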
3. Define a Host
Add WWPN/IQN
Storage-based mirroring
Application-independent
Operating system-independent
No server cycles usage
Volume-based mirroring
Synchronous / Asynchronous mirroring
One-to-one relationship between a source volume and a target volume (a pair)
Up to 8 source/target system pairings
Multiple volume pairs may be handled as a single unit (a consistency group)
Only actual data is replicated
Resizing and thin provisioning support
Failover / failback
No extra charge
Uses of XIV Replication
Single location
Protection against hardware failure
High or continuous availability
Clustering
Metro region
Protection against local disaster
Out-of-region
Protection against regional disaster
[Diagram: replication topologies: application servers and master (M) / slave (S) volumes replicating over an IP network to a remote target, with numbered I/O sequence steps. One local system replicating to one remote system is the most common configuration.]
Host-Based Migration
AIX LVM Mirroring
VMware VMotion (not for RDM)
UNIX dd command
Storage Virtualization
SVC / V7000
Volume Migration
Volume Mirror / Split
Individual hosts can optionally be added to one of four client created QoS Performance
Classes
Each class can set max rates for IOPS and/or bandwidth (BW)
Each Interface Node enforces the specified max rate for all hosts associated with the
corresponding QoS Performance Class
A particular host can appear in at most one QoS Performance Class
A host that is not part of a QoS Performance Class is not subject to rate limitations
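The enforcement described above can be sketched as a per-class cap (a toy model; the class name, the one-second window, and the admission logic are my assumptions, not XIV internals):

```python
class QoSPerformanceClass:
    """Toy per-class IOPS cap, checked over a one-second window."""
    def __init__(self, name: str, max_iops: int):
        self.name = name
        self.max_iops = max_iops
        self.window_ios = 0

    def admit(self, n_ios: int) -> bool:
        """Admit n_ios if the class is still under its cap this window."""
        if self.window_ios + n_ios > self.max_iops:
            return False          # throttle: retry in the next window
        self.window_ios += n_ios
        return True

    def next_window(self) -> None:
        """Reset the counter when a new one-second window starts."""
        self.window_ios = 0

# All hosts placed in the 'backup' class share a 10,000 IOPS cap;
# a host outside any class is simply never throttled.
backup = QoSPerformanceClass("backup", max_iops=10_000)
print(backup.admit(8_000))   # True
print(backup.admit(5_000))   # False: would exceed the cap
backup.next_window()
print(backup.admit(5_000))   # True
```

Because each interface node enforces the cap for all hosts in the class, the cap is shared across those hosts, not per host, which is the key design point of the feature.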
Legacy Architectures
Layered stack: RAID groups built from disk volumes; LUNs and MetaLUNs carved from the RAID groups; host volumes (e.g. /oracle) mapped from the LUNs

[Diagram: hosts -> host volumes -> MetaLUN/LUNs -> disk volumes -> RAID groups]
Hosts
Defined to manage volume access (mapping)
Volumes can be mapped to 1 or more hosts
Volumes can be added and removed dynamically
3 host types
Default
HP-UX
z/VM
Host Ports
Ports are defined for hosts
Can be FC or iSCSI
Defined by host WWPN or iSCSI Name
Server Cluster
Group of hosts
Assign volumes to clusters for shared access
When possible, use the same zoning template/schema for all hosts
No need to manually balance the load via zoning
Balancing will happen naturally via combination of Host multipathing and XIV Grid structure
Always keep single initiator per zone
Multiple targets per zone can improve manageability
RoundRobin or similar algorithms offer good performance and low host overhead
Keep It Simple!
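The natural balancing described above can be illustrated with a toy round-robin selector (path names are hypothetical; in practice the host multipath driver, e.g. AIX MPIO or Linux device-mapper multipath, does this):

```python
from itertools import cycle

class RoundRobinSelector:
    """Toy round-robin over host-to-XIV paths (one per interface module)."""
    def __init__(self, paths):
        self._cycle = cycle(paths)

    def next_path(self):
        """Return the path for the next I/O, rotating through all paths."""
        return next(self._cycle)

# Hypothetical paths: two HBAs, each zoned to two interface modules.
paths = ["fc0->module4", "fc0->module5", "fc1->module6", "fc1->module7"]
sel = RoundRobinSelector(paths)
# Successive I/Os fan out evenly across all interface modules,
# so no manual balancing via zoning is needed.
issued = [sel.next_path() for _ in range(8)]
print(issued)
```

With the grid spreading data across every module anyway, even this trivial policy keeps all interface modules busy, which is why "Keep It Simple!" works here.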
Host queue depth essentially controls how much data is allowed to be in flight onto the SAN from the host
HBAs
XIV algorithms are more efficient when I/O requests are coming in parallel
Queue depth becomes an important factor in maximizing XIV performance
Large queue depth settings are recommended
64 is a reasonable starting point for most typical scenarios
QoS can be controlled via XIV QoS Classes
HBA queue depth can also assist in controlling unruly servers
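Why a larger queue depth matters follows from Little's law: sustained IOPS is bounded by outstanding I/Os divided by per-I/O latency. A quick sketch with an assumed 0.5 ms service time (the latency figure is illustrative, not a measured XIV number):

```python
def max_iops(queue_depth: int, latency_s: float) -> float:
    """Little's law upper bound: IOPS = outstanding I/Os / latency."""
    return queue_depth / latency_s

# At an assumed 0.5 ms per I/O, one host can sustain at most:
print(max_iops(1, 0.0005))    # 2000.0  : serial I/O starves the grid
print(max_iops(64, 0.0005))   # 128000.0: QD 64 keeps the modules busy
```

This is why 64 is a reasonable starting point: at single-digit queue depths the host, not the array, becomes the bottleneck.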
Generally, with XIV there is no need to create a large number of small LUNs
That is a legacy limitation based on # disks, disk types, RAID groups, etc.
The goal is to obtain the best performance and optimal disk layout for each LUN
XIV will spread the data on all the drives regardless of the size of the LUN
In special configurations, host applications or operating systems might require more LUNs to maximize parallelism and increase data I/O queues (e.g., VIOS limits queue depth to 32)
As a rule of thumb, if the application needs to use multiple LUNs in order to allocate or
create multiple threads to handle the I/O, then use multiple LUNs
If the application is sophisticated enough to define multiple threads independent of the
number of LUNs, or the number of LUNs has no effect on application threads, then there is
no compelling reason to have multiple LUNs
Oracle practices
Maximize multi-threading tasks
No need to configure many LUNs for data files
Always separate data LUNs from log LUNs into separate volume groups
Create and mount redo-log filesystems with a 512-byte block size; 4KB is fine for the others
Use the backup/recovery strategy to identify tablespaces that can share same LUNs
Make sure the database is set to support AIO, CIO, or buffered JFS/NTFS I/O
Take advantage of database Parallel Execution features:
Parallel Query, Parallel DML, Parallel DDL, Parallel Recovery, Parallel Replication
Apply the above parallel features in a sensible manner, e.g., when a full table scan is known to outperform direct index access. (Imagine an OLTP environment performing full table scans!) These features are best suited to DWH environments.
As much as possible, use large database buffers to enable pre-fetching
For DWH, large R/W sequential IOs (128K-1MB) are optimal
Asynchronous I/O is recommended for an Oracle database
Oracle Configuration Best Practices ASM and XIV
[Diagram: ASM disk groups (ASM_vol_1 through ASM_vol_6) striped across XIV volumes, with allocation unit / stripe sizes ranging from 8K and 1M up to 4M and 8M]
Moving away from many small random I/Os to fewer, larger sequential I/Os
Competitive comparison (several column headings were lost in extraction):

| System | IOPS / bandwidth | (?) | (?) | Drives | Utilization | (?) | Relative cost |
| XIV 1TB | 40,000 / 1GB | 1 | 8 | 360 SATA 1TB 7.2K | 63% | 111 | $$ |
| XIV 2TB | 40,000 / 3.5GB | 0.18 | 10 | 360 SATA 2TB 7.2K | 88% | 70 | $$ |
| XIV Gen3 2TB | 120,000 / 1GB | 0.13 | 24 | 360 SAS 2TB 7.2K | 88% | 43.3 | $$ |
| EMC Clariion | 60,000 / 1GB | 0.18 | 36?! | 480 SAS 600GB 10K | 66% | 22.5 | $$$$ |
| EMC VMax | 100,000 / 2GB | 0.14 | 20 | 880 SATA 2TB 7.2K | 45% | 32 | $$$$$ |
| HDS | 68,000 / 1GB | 0.12 | 32 | 480 SAS 450GB 15K | 44% | 25.5 | $$$$ |
| HP EVA | 9,000 / 1.5GB | 0.3 | 6 | 160 SAS 450GB 15K | 66% | 25.3 | $$ |
See the XIV VMware Toolbox for white papers, clips, and case studies covering XIV storage and VMware
Breakthrough GUI
Exceptional ease of use
Powerful management capabilities
Easy, rapid provisioning
Minimal administration
Minimal training required
GUI actions translated into XCLI commands and sent to XIV system via SSL
XCLI commands also logged where GUI is running for easy script creation
Anyone can download GUI and run in Demo Mode
http://www-933.ibm.com/support/fixcentral/swg/selectFixes?parent=ibm~Storage_Disk&product=ibm/Storage_Disk/XIV+Storage+System+%282810,+2812%29&release=3.0&platform=All&function=all
http://www-933.ibm.com/support/fixcentral/swg/selectFixes?parent=ibm~Storage_Disk&product=ibm/Storage_Disk/XIV+Storage+System+%282810,+2812%29&release=All&platform=All&function=all
Events
Logging
SNMP alerts and E-mail notifications
Escalation capabilities and filtering
Performance
IOPS, latency, bandwidth
Link utilization
System, pool, volume level, host/HBA
Export to CSV file
Capacity
Trending for used and allocated storage
By volume or pool
Export to CSV file
Login
System view
Download
http://itunes.apple.com/us/app/ibm-xiv-mobile-dashboard-for/id503500546?mt=8
Basic mode
Login (IP address and userid/pw) required for each command
xcli -u userid -p password -m ip_address vol_delete vol=p3_01 -y
Additional output formatting options
-r option identifies a file containing XCLI commands (enables use with shell scripts)
Interactive mode
Launched from
Desktop Icon
XIV GUI All Systems Panel
XIV System Panel
Login (IP address, userid/pw, XIV management IP address) provided once per session
>> prompt for XIV command
Command and argument completion (via tab key)
Reference documentation
XCLI Utility User Manual: utility commands executed on the client (help, formatting, defining XCLI configuration/login)
XCLI Reference Guide: commands passed to the XIV system for execution
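Because basic mode logs in per command, it scripts easily. A minimal sketch that builds the command line (flag spellings taken from the basic-mode example above; verify them against the XCLI Utility User Manual before use):

```python
import subprocess

def xcli_cmd(user: str, password: str, mgmt_ip: str, *args: str) -> list[str]:
    """Build a basic-mode XCLI command line (one login per command)."""
    return ["xcli", "-u", user, "-p", password, "-m", mgmt_ip, *args]

# Example: delete a volume without an interactive confirmation prompt.
cmd = xcli_cmd("admin", "secret", "10.0.0.10", "vol_delete", "vol=p3_01", "-y")
print(cmd)
# To actually run it (requires the XCLI utility installed on the client):
# subprocess.run(cmd, check=True)
```

Building the argument list rather than a shell string avoids quoting problems when volume names or passwords contain special characters.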
SSD ready
Optional cache upgrade of up to 6.0TB

| | Gen2 | Gen3 |
| Interconnect | Ethernet | InfiniBand (6TB SSD option) |
| Capacity | 24-161TB | 54-243TB |
| Max memory | 240GB | 360GB |

Host ports as extracted: 24 4Gb/8Gb FC ports with up to 22 iSCSI ports, or 8 4Gb/8Gb FC ports with 0-22 iSCSI ports; see the rack configuration table for per-configuration counts.

[Diagram: rack layout with switches, data modules, and UPS units]
Rack Configuration

| Total number of modules | 6 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
| Configuration type | partial | partial | partial | partial | partial | partial | partial | full |
| Number of data modules | 4 | 5 | 6 | 6 | 7 | 7 | 8 | 9 |
| Module 6 state | Disabled | Disabled | Disabled | Disabled | Disabled | Enabled | Enabled | Enabled |
| Module 5 state | Enabled | Enabled | Enabled | Enabled | Enabled | Enabled | Enabled | Enabled |
| Module 4 state | Enabled | Enabled | Enabled | Enabled | Enabled | Enabled | Enabled | Enabled |
| Net capacity, 2TB drives (rounded down, TB) | 55 | 87 | 102 | 111 | 125 | 134 | 149 | 161 |
| Net capacity, 3TB drives (rounded down, TB) | 84 | 132 | 154 | 168 | 190 | 203 | 225 | 243 |
| FC ports | 8 | 16 | 16 | 20 | 20 | 24 | 24 | 24 |
| iSCSI ports (A14/Gen3) | 0/6 | 4/14 | 4/14 | 6/18 | 6/18 | 6/22 | 6/22 | 6/22 |
| Memory, A14 (8/16 GB per module) | 48/96 | 72/144 | 80/160 | 88/176 | 96/192 | 104/208 | 112/224 | 120/240 |
| Memory, Gen3 (24 GB per module) | 144 | 216 | 240 | 264 | 288 | 312 | 336 | 360 |
Both deliver
60% lower TCO than competition
Broad workload affinity
Radical Simplicity
Fully autonomic data placement, self-healing
Advanced features out-of-the-box
Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of
multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to
the performance ratios stated here.
Clients MUST agree to enable and support the IBM XIV call-home feature
Daily email heartbeat that includes consumed capacity data
Warranty for all physically installed modules at code-20 date
Remember that these modules are in fact being used
Adding more CoD modules will:
Effectively create a rolling CoD term
Start the warranty period for each of these modules
Reduced complexity
http://w3.ibm.com/connections/wikis/home?lang=en#/wiki/W069af4782acc_42c5_bf26_8a6ba4387190/page/XIV%20Gen3%20Tech%20Portal
http://w3-01.ibm.com/sales/ssi/apilite?appname=crmd&mostrecentsort=yes&crv=no&additional=summary&alldocs=TRUE&infotype=CR&others=RFCS RFVI&contents=XIV
http://www-03.ibm.com/systems/services/training/storage/index.html
Power and weight by module count (the row labels identifying the four configurations were lost in extraction):

| Modules | 6 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
| kVA typical/max | 2.9/3.4 | 4.2/5.0 | 4.7/5.5 | 5.2/6.1 | 5.7/6.7 | 6.2/7.2 | 6.7/7.8 | 7.2/8.4 |
| Weight (kg) | 629 | 713 | 740 | 767 | 794 | 821 | 848 | 884 |
| kVA typical/max | 2.8/3.1 | 4.5/4.5 | 4.4/4.9 | 4.7/5.4 | 5.1/6.2 | 5.5/6.2 | 5.9/6.6 | 6.2/7.1 |
| Weight (kg) | 629 | 713 | 740 | 767 | 794 | 821 | 848 | 884 |
| kVA typical/max | 2.5/2.8 | 3.7/4.2 | 4.1/4.6 | 4.5/5.0 | 5.0/5.4 | 5.4/5.8 | 5.8/6.3 | 6.2/6.7 |
| Weight (kg) | 795 | 879 | 907 | 935 | 964 | 992 | 1020 | 1048 |
| kVA typical/max | 2.5/2.8 | 3.7/4.2 | 4.1/4.6 | 4.6/5.1 | 5.1/5.6 | 5.5/6.0 | 5.9/6.5 | 6.3/7.0 |
| Weight (kg) | 795 | 879 | 907 | 935 | 964 | 992 | 1020 | 1048 |

| Maximum capacity, 2TB drives (GB) | 55700 | 88000 | 102600 | 111500 | 125900 | 134900 | 149300 | 161300 |
| Capacity per CoD activation (GB) | 9283 | 9778 | 10260 | 10136 | 10492 | 10377 | 10664 | 10753 |
| Maximum capacity, 3TB drives (GB) | 84100 | 132800 | 154900 | 168300 | 190000 | 203600 | 225300 | 243300 |
| Capacity per CoD activation (GB) | 14017 | 14756 | 15490 | 15300 | 15833 | 15662 | 16093 | 16220 |