
V7.6 Technical Update
Byron Grossnickle
Consulting IT Specialist
byrongro@us.ibm.com

Bill Wiegand
Consulting IT Specialist
wiegandb@us.ibm.com

Denis Frank
Spectrum Virtualize Performance Architect

Copyright IBM Corporation 2015

IBM Spectrum Virtualize Software Version 7.6


Functionality delivered in IBM SVC, Storwize family, FlashSystem V9000, VersaStack
Improve data security and reduce capital and operating expense
Single point of control for encrypting data on heterogeneous storage simplifies management
Eliminates need to buy new storage systems to enable encryption

Improve data protection and performance with lower cost storage


Distributed RAID improves drive rebuild time 5-10x: enables use of large drives with more confidence
All drives are active, which improves performance especially with flash drives

Reduce cost and complexity for high availability configurations


New GUI makes HyperSwap easier to configure and use
IP quorum support eliminates need for extra storage and fibre channel networking to third site

Reduce cost of storage: store up to 5x as much data in same space


Quicker, easier view of potential compression benefits with integrated Comprestimator

Simplify storage management in virtualized environments


vVol support delivers tighter integration with VMware vSphere
Provided in conjunction with IBM Spectrum Control Base Edition

Agenda

Software based encryption


IP based quorum
Integrated CLI based Comprestimator
VMware vVols
Distributed RAID
HyperSwap updates
Miscellaneous enhancements


V7.6 Technical Update


Byron Grossnickle
Consulting IT Specialist
byrongro@us.ibm.com


Software Encryption for Data at Rest


Adds the ability to encrypt externally virtualized storage (MDisks):
Encryption performed by software in the node/canister

SVC DH8, Storwize V7000 Gen2 and FlashSystem V9000


For external encryption, all I/O groups must be external-encryption capable
Uses the AES-NI CPU instruction set and engines
8 cores on the first CPU are used for encryption
Each core is capable of 1 GB/s (8 GB/s per node, 16 GB/s per I/O group)
AES-256 XTS encryption, which is a FIPS 140-2 approved algorithm
The system itself is not FIPS 140-2 certified or compliant; any other statement is a misnomer

Encryption enabled at the storage pool level


A pool is therefore either encrypting or not
All volumes created in an encrypted pool are automatically encrypted
MDisks now have an 'encrypted' attribute
Can mix external and internal encryption in the same pool

If an MDisk is self-encrypting (and identified as such), then per-pool encryption will not encrypt data sent to that MDisk

Child pools can also have their own keys, different from the parent pool's
USB key management support
External key manager support being planned for 1H16
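The per-pool rules above can be summarized in a small decision sketch. This is a hypothetical helper, not the actual firmware logic; the function name and flags are assumptions for illustration only:

```python
def software_encrypts(pool_encrypted: bool, mdisk_self_encrypting: bool) -> bool:
    """Hypothetical sketch: does Spectrum Virtualize software encryption
    encrypt data written to this MDisk, per the pool rules above?"""
    if not pool_encrypted:
        return False          # unencrypted pools never encrypt
    if mdisk_self_encrypting:
        return False          # back end already encrypts; no double encryption
    return True               # pool key used for plain external MDisks

# Volumes in an encrypted pool always land encrypted, one way or the other:
assert software_encrypts(True, False) is True    # software encryption applies
assert software_encrypts(True, True) is False    # back end handles it
assert software_encrypts(False, True) is False
```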

When is Data Encrypted/Decrypted


Data is encrypted/decrypted when it is written to/read from external storage
Encryption/decryption performed in software using Intel AES-NI instructions
Data is stored encrypted in storage systems
Data is encrypted when transferred across SAN between IBM Spectrum Virtualize
system and external storage (back end)
Data is not encrypted when transferred on SAN interfaces in other circumstances (front end / remote system / inter-node):
Intra-system communication for clustered systems
Remote mirror
Server connections
If appropriate, consider alternative encryption for data in flight


Implementing Encryption
Two methods:
Create a new encrypted pool
Move volumes from the existing pool to the new pool

Create an encrypted child pool in the parent pool
Migrate or volume-mirror the appropriate volumes, expanding the child pool as required, and continue moving existing data
Downside to this method: you cannot create more child pools if one child pool consumes all the space

There is no convert-in-place function to encrypt existing pools
May require additional capacity
(Diagram: volumes migrating from an unencrypted pool to an encrypted pool)

Mixed Encryption in a Pool

Data in this example is encrypted with 3 different keys:

MDisk created as an internal encrypted RAID array: the SAS chip encrypts (on Storwize, or the DH8 SAS card in the 24F enclosure)
MDisk is external with the -encrypt option set: the back-end storage array encrypts; its security characteristics could be different
MDisk is external without the -encrypt option set: external (software) encryption is used to encrypt with the pool key

Encryption Key Management


IBM Spectrum Virtualize has built-in key management
Two types of keys
Master key (one per system/cluster)
Data encryption key (one per encrypted pool)

Master key is created when encryption enabled


Stored on USB devices
Required to use a system with encryption enabled
Required at boot or during a re-key process; held in volatile memory on the system
May be changed

Data encryption key is used to encrypt data and is created automatically when an encrypted pool
is created
Stored encrypted with the master key
No way to view data encryption key
Cannot be changed
Discarded when an array is deleted (secure erase)

CLI Changes for External Encryption


View changes:
New encryption attributes added to lsmdiskgrp
encryption attribute used in lsmdisk to mark self-encrypting external MDisks
encryption attribute added to lsvdisk
New command options:
mkmdiskgrp adds an -encrypt parameter
chmdisk adds an -encrypt parameter to mark an MDisk as self-encrypting
Additional policing of migrate commands
Additional policing of image mode volumes
Additional policing of addnode/addcontrolenclosure


Resources Designated for External Encryption


DH8/V9000: 8 CPU cores on the first CPU
Each core capable of ~1 GB/s
Encrypting everything will decrease I/O group throughput by ~25% (13 GB/s read unencrypted vs 10 GB/s read encrypted, with a mixed block size)
No delta if compression is enabled

V7000:
Compression not enabled: 7 of 8 cores
Compression enabled: 4 of 8 cores


Ordering Encryption
FlashSystem V9000:
Order feature AF14 on Model AE2 (flash enclosure); includes USB devices and enablement

Storwize V7000 Gen2: order for each control enclosure
Feature ACE1: Encryption Enablement
Feature ACEA: Encryption USB Flash Drives (Four Pack)
Ordering encryption includes internal AND external encryption

SVC: order for each DH8 engine
Feature ACE2: Encryption Enablement
Feature ACEB: Encryption USB Flash Drives (Pair)
One license of IBM Spectrum Virtualize Software for SVC Encryption: 5641-B01 (AAS) or 5725-Y35 (PPA)
This is the first hard license feature on SVC
All nodes in the cluster must be DH8

Features are available for plant or field installation



Encryption Recommendations
If you can encrypt on the back-end storage with no performance penalty, or encrypt with data in place, take that option
For example, an XIV can encrypt its data without the need to move it
DS8K, XIV and V7000 internal encryption can be done with no performance penalty

If you need more granular key management (i.e. a key per child pool) or a single methodology, use external encryption
Single methodology for the entire environment means encryption is done the same way for everything

Be careful when mixing types of encryption in the same pool, as different forms of encryption may have different security characteristics


IP Quorum Devices
The user uses either the CLI or the GUI to generate a Java-based quorum application, which is run on a host located at a third site.
The user must regenerate and redeploy this application when certain aspects of the cluster configuration change, e.g. a node is added.
The need for regeneration is indicated via a RAS event.
The maximum number of applications that can be deployed is five (5).
The IP topology must allow the quorum application to connect to the service IP addresses of all nodes.
The IP network and host must fulfill certain constraints:

Requirement          Constraint     Comment
Round-trip latency   < 80 ms        No exceptions
Port forwarding      Port 1260      Owned by IBM Storage (IANA); must be open on all hosts and nodes
Suggested JRE        IBM Java 7     Others should work
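The host constraints above amount to a simple pre-flight check. The sketch below is a hypothetical validation helper (the function name and parameters are assumptions, not part of the product):

```python
def quorum_host_ok(round_trip_ms: float, port_1260_open: bool) -> bool:
    """Hypothetical pre-flight check for an IP quorum host, applying
    the constraints from the table above."""
    return round_trip_ms < 80.0 and port_1260_open

assert quorum_host_ok(25.0, True)
assert not quorum_host_ok(120.0, True)   # the latency limit has no exceptions
assert not quorum_host_ok(25.0, False)   # port 1260 must be reachable
```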


Creating a Quorum Application


Use the mkquorumapp command to generate the quorum application
Make sure the cluster is in its final configuration before doing this
Download and extract the quorum app from the /dumps directory onto the hosting device
Start the quorum application with Java


Creating a Quorum Application - Continued

Start the application (see the Java JAR basics tutorial):
https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html


Comprestimator Integration
With R7.6, Comprestimator is deployed as part of SVC/Storwize via CLI commands rather than as a separate host-installable tool
Does not require a compression license
Does not start RACE (the RtC compression engine)
Same algorithm as the host-based tool, so the same results are expected
Schedule a volume, or all volumes in the system, for estimation
Volumes are estimated in VDisk-id order
One volume per node at a given time within a given I/O group
Each I/O group processes its own volumes
Starts immediately (unless otherwise engaged)
Display estimation results, including thin-provisioning, compression and overall results
All-volumes output
Single-volume output
Fast and accurate results
Rule of thumb: <1 minute per volume (divided across nodes) with <5% error margin (same as the host-based CLI tool)
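The rule of thumb above gives a rough wall-clock bound for a whole-system analysis. This is an assumed simple model (one volume in flight per node, a minute per volume), not a documented formula:

```python
import math

def estimate_minutes(volumes: int, nodes: int) -> float:
    """Hypothetical upper bound: under a minute per volume, with one
    volume per node being analyzed at a time."""
    return math.ceil(volumes / nodes) * 1.0

assert estimate_minutes(8, 2) == 4.0   # 8 volumes on a 2-node system
assert estimate_minutes(5, 2) == 3.0   # odd counts round up
```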


CLI Commands for Comprestimator


analyzevdisk - analyzes a specific volume

lsvdiskanalysisprogress - shows the progress of the analysis; helpful when running the analysis on the entire system


CLI Commands for Comprestimator (contd)


analyzevdiskbysystem - analyzes all volumes on the system
lsvdiskanalysis - lists the output; can be listed for a single volume or, if no volume is identified, lists output for all volumes


Examples - lsvdiskanalysis

(Screenshots of lsvdiskanalysis output)

lsvdiskanalysis States

Estimated - compression ratio has been determined; gives date and time of last run
Active - volume is currently being estimated
Scheduled - volume is waiting to be estimated
Idle - volume has never been estimated and is not scheduled to be estimated

VMware / IBM Storage Integration Capabilities


vCenter management of IBM Storage
Provisioning, mapping and monitoring IBM storage in vSphere
vStorage APIs for Array Integration (VAAI)
Host server offload capability
vStorage APIs for Storage Awareness (VASA)
IBM storage overview
Profile-driven storage direction
Site Recovery Manager (SRM) integration
Replication simplification
vRealize Suite (vCloud Suite)
Includes VMware vCenter Orchestrator (vCO), VMware vCenter Operations Manager (vCOPS) and VMware vCloud Automation Center (vCAC)
IBM Spectrum Control Base
Central storage control plane for cloud
vSphere Virtual Volumes (VVOLs)
XIV storage abstraction delivers easy automated provisioning with tenant domains, policy-compliant service, snapshot and cloning offloading, and instant space reclamation
Technology demo available at https://www.youtube.com/watch?v=HZtf5CaJx_Y&feature=youtu.be
vStorage APIs for Data Protection (VADP)
Spectrum Protect Snapshot and Spectrum Protect for Virtual Environments

Overview of IBM Storage VMware Integration

(Diagram) Spectrum Control Base connects VMware to IBM storage (Spectrum Virtualize, Storwize, V9000, XIV, Spectrum Accelerate, DS8000):
- Disaster recovery: SRM, with SRAs for XIV, DS8000 and Storwize
- Cloud operation (vRealize Suite for vSphere 6): vRA/vCAC and vRO/vCO for automation and self-service; vROPS/vCOPS for operations management
- Server virtualization: vCenter with VASA and the vSphere Web Client (VWC) for discovery, provisioning and optimization
- Backup and snapshot management: Spectrum Protect via VADP
- VAAI support (data path integration)

Spectrum Control Base Edition


A centralized server system that consolidates a range of IBM storage provisioning, automation, and monitoring solutions
through a unified server platform
Supported by VMware High Availability Groups for VASA provider redundancy (active/passive mode)
IBM Spectrum Control Base Edition

(Diagram) Common services (authentication, high availability, configuration storage, etc.) sit beneath interfaces to the target environment (VASA, Web Client, vROPs, vRO, future plugins) and management modules for the storage arrays (XIV, DS8000, SVC, FlashSystem, future 3rd-party).
http://www-01.ibm.com/support/knowledgecenter/STWMS9/landing/IBM_Spectrum_Control_Base_Edition_welcome_page.html

VMware / Spectrum Control Base Edition Integration


New VM Virtual Volumes - compare the paradigms

(Diagram)
- Current: VDisks grouped into a VMFS datastore on a volume in the storage array; VMs share volumes
- VM Volumes: each VM volume maps to its own volume in the storage array

XIV/SVC support for VMware vSphere Virtual Volumes (VVOL):
- Easy automated provisioning, including multi-tenant
- Policy-compliant service levels
- Snapshot/cloning offloading, and instant space reclamation
- Hotspot-free predictability and ultra-efficient utilization

IBM was a Virtual Volumes design partner: 3+ years working together. Delivers an excellent level of storage abstraction through VVOL.

vSphere 6.0 Support - vVols


Requirements:
vSphere 6.0 installation, including vCenter and ESXi servers
VASA Provider for SVC/Storwize: requires IBM Spectrum Control Base v2.2
SVC/Storwize running 7.6.x software (will not be ported to the 7.5 code base)

Key benefits:
1-to-1 mapping of VM drives to SVC volumes
No shared storage = no I/O contention
Granularity: more granular visibility, isolation and adjustment of VM storage
Profile Based Storage Management to aid storage provisioning

Considerations:
Spectrum Control Base uses IP connectivity to the SVC config node
HyperSwap, Remote Copy, MSCS, DRS not currently supported


VMware admin view of Child Pools

A child pool can play the same role as a volume providing a VMFS datastore:
- Capacity is dedicated to the VMware admin
- Taken from a parent storage pool of a specific defined class

The VM admin sees a datastore that maps to the Storwize child pool the storage admin has given to VMware

SVC Graphical User Interface changes


Enable VVOL functionality in SVC:
- Utility Volume is created
- 2TB Space Efficient
- Mirrored for redundancy

Change host type to VVOL:
- Uses Protocol Endpoints (PEs) for I/O
- Allows automated map/unmap of SVC volumes
- Existing host maps still work
New User Role:
- VASA runs with special privileges
- Superuser cannot modify VVOLs


V7.6 Technical Update

Bill Wiegand
Consulting IT Specialist
wiegandb@us.ibm.com


Traditional RAID 6
Double parity improves data availability by protecting against single or double drive failure in
an array
However:
Spare drives are idle and cannot contribute to performance; particularly an issue with flash drives
Rebuild is limited by the throughput of a single drive
Longer rebuild time with larger drives potentially exposes data to the risk of a dual failure


Traditional RAID 6

(Diagram: stripes of data strips D1, D2, D3 and parity strips P, Q laid across the active drives, with idle spares alongside; a rebuild reads from all drives but writes to one spare drive)

Each stripe is made up of data strips (represented by D1, D2 and D3 in this example) and two parity strips (P and Q)
A strip is either 128K or 256K, with 256K being the default
Two parity strips mean the ability to cope with two simultaneous drive failures
Extent size is irrelevant

Problems with Traditional RAID


With Traditional RAID (TRAID), a rebuild reads from one or more surviving drives but writes to a single spare drive, so rebuild time is extended by that spare drive's performance. In addition, the spares sit idle when not in use, wasting resources.


Distributed RAID
Improved RAID implementation
Faster drive rebuild improves availability and enables use of lower cost larger drives with confidence
All drives are active, which improves performance especially with flash drives

Spare capacity, not spare drives


Rotating spare capacity position distributes
rebuild load across all drives
More drives participate in rebuild
Bottleneck of one drive is removed

More drives means faster rebuild


5-10x faster than traditional RAID
Especially important when using large drives

No idle spare drives


All drives contribute to performance
Especially important when using flash drives

Distributed RAID 6
Distribute 3+P+Q over 10 drives with 2 distributed spares

(Diagram: rows of D1/D2/D3 data strips and P/Q parity strips rotated across all 10 drives, with the spare space interleaved)

In this instance these 5 rows make up a pack
The spare space is allocated depending on the pack number
The number of rows in a pack depends on the number of strips in a stripe, which means the pack size is constant for an array
Extent size is irrelevant

How Distributed RAID Solves These Problems

With Distributed RAID (DRAID), more drives participate in the rebuild and the bottleneck of a single drive is removed: more drives means a faster rebuild. There are no idle drives, because all drives contribute to performance.


DRAID Performance Goals


A 4TB drive can be rebuilt within 90 minutes for an array width of 128 drives with no host I/O
With host I/O, if drives are being utilized at 50%, the rebuild runs at roughly half speed
Approximately 3 hours, but still much faster than the TRAID time of 24 hours for a 4TB drive

Main goal of DRAID is to significantly lower the probability of a second drive failing during
the rebuild process compared to traditional RAID
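The rebuild-time figures above can be checked with a small model. This is an assumed simplification (host I/O steals a matching share of rebuild bandwidth), not an IBM-published formula:

```python
def rebuild_hours(base_minutes: float = 90.0, host_utilisation: float = 0.0) -> float:
    """Hypothetical model: host I/O at utilisation u leaves (1 - u) of
    the drives' bandwidth for the rebuild, stretching it accordingly."""
    return base_minutes / (1.0 - host_utilisation) / 60.0

assert rebuild_hours() == 1.5            # no host I/O: the 90-minute figure
assert rebuild_hours(90.0, 0.5) == 3.0   # 50% utilisation: the ~3-hour figure
```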


Distributed RAID
V7.6 supports Distributed RAID 5 & 6
Distributed RAID 10 is a 2016 roadmap item

Up to 10 arrays/MDisks in an I/O Group and a maximum of 32 arrays in a system


Array/MDisk can only contain drives from the same or a superior drive class
E.g. 400GB SSDs available to build array, so only superior drives are SSDs > 400GB
E.g. 450GB 10K SAS available to build array, so only superior drives are 10/15K/SSDs >450GB
Recommendation is to use same drive class for array/MDisk

Traditional RAID is still supported


New arrays/MDisks will inherit the properties of the existing pool you are adding them to
New array width default for RAID5 is 8+P
o If existing MDisks in pool are RAID5 7+P and/or 6+P then GUI will propose 6+P to match lowest width in pool

Conversion from traditional to distributed RAID is not supported


Ability to expand an array/MDisk is a 2016 roadmap item

Distributed RAID
Minimum Drive Count in one array/MDisk
Distributed RAID5: 4 (2+P+S)
Distributed RAID6: 6 (3+P+S)

Maximum Drive Count in one array/MDisk is 128


If there are 128 disks of the same drive class, the system will recommend two 64-drive or three 42-drive arrays/MDisks (exact behavior to be confirmed once beta code is available to test with)
Goal is 48-60 per array/MDisk
1 to 4 spares worth of rebuild capacity allowed per array/MDisk no matter how many drives in the array
o Default rebuild areas:

Up to 36 drives: 1 spare
37-72 drives: 2 spares
73 to 100 drives: 3 spares
101 to 128 drives: 4 spares
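The default rebuild-area table above maps directly to a small lookup. A sketch of that mapping (the function name is an assumption for illustration):

```python
def default_rebuild_areas(drives: int) -> int:
    """Default number of distributed-spare rebuild areas for a DRAID
    array, per the drive-count bands listed above (4-128 drives)."""
    if not 4 <= drives <= 128:
        raise ValueError("a DRAID array holds 4 to 128 drives")
    if drives <= 36:
        return 1
    if drives <= 72:
        return 2
    if drives <= 100:
        return 3
    return 4

assert default_rebuild_areas(36) == 1    # top of the first band
assert default_rebuild_areas(37) == 2
assert default_rebuild_areas(128) == 4
```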

Recommended stripe width


RAID5: 9 (8+P)
o Note that this is now the default width for TRAID in the GUI

RAID6: 12 (10+P+Q)

(GUI screenshots: creating distributed arrays, annotated with)
- Drive classes available
- Amount of usable storage that will be added to the pool
- How many drives selected out of total candidates
- New capacity of the pool
- The actual arrays that will be created
- RAID type, number of spares, array width

What's new in HyperSwap V7.6


Read I/O optimization
New Volume CLIs
GUI support for HyperSwap


Read I/O in V7.5

(Diagram: HyperSwap volume with primary copy Vol_1P on I/O Group 1 at Site 1 and secondary copy Vol_1S on I/O Group 2 at Site 2; a storage pool at each site and the quorum at Site 3)

Reads are always forwarded to the primary volume, even if the secondary is up to date.

Read I/O in V7.6 - Optimized

(Diagram: the same HyperSwap configuration)

Reads are performed locally, as long as the local copy is up to date.

What makes up a HyperSwap volume?


New Volume Commands

Creating a HyperSwap volume in V7.5 (eight commands):
1) mkvdisk master_vdisk
2) mkvdisk aux_vdisk
3) mkvdisk master_change_volume
4) mkvdisk aux_change_volume
5) mkrcrelationship -activeactive
6) chrcrelationship -masterchange
7) chrcrelationship -auxchange
8) addvdiskaccess

Creating a HyperSwap volume in V7.6 (one command):
1) mkvolume my_volume

New Volume Commands


5 new CLI commands for administering Volumes:

mkvolume
mkimagevolume
addvolumecopy
rmvolumecopy
rmvolume

Also:
lsvdisk now includes volume_id, volume_name and function fields to easily
identify the individual volumes that make up a HyperSwap volume


New Volume CLIs


mkvolume
Create a new empty volume using storage from existing storage pools
Volume is always formatted (zeroed)
Can be used to create:
o Basic volume - any topology
o Mirrored volume - standard topology
o Stretched volume - stretched topology
o HyperSwap volume - hyperswap topology
The type of volume created is determined by the system topology and the number of storage pools specified
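That selection rule can be sketched as a small dispatch table. This is assumed logic inferred from the bullet list, not the documented mkvolume implementation:

```python
def volume_type(topology: str, pools: int) -> str:
    """Hypothetical sketch of how mkvolume picks the volume type from
    the system topology and the number of storage pools specified."""
    if pools == 1:
        return "basic"                      # one pool: basic, any topology
    if pools == 2:
        return {"standard": "mirrored",
                "stretched": "stretched",
                "hyperswap": "hyperswap"}[topology]
    raise ValueError("mkvolume takes one or two storage pools")

assert volume_type("hyperswap", 1) == "basic"
assert volume_type("standard", 2) == "mirrored"
assert volume_type("hyperswap", 2) == "hyperswap"
```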

mkimagevolume
Create a new image mode volume
Can be used to import a volume, preserving existing data
Implemented as a separate command to provide greater differentiation between the action
of creating a new empty volume and creating a volume by importing data on an existing
MDisk

New Volume CLIs


addvolumecopy
Add a new copy to an existing volume
The new copy will always be synchronized from the existing copy
For stretched and hyperswap topology systems this creates a highly available volume
Can be used to create:
o Mirrored volume - standard topology
o Stretched volume - stretched topology
o HyperSwap volume - hyperswap topology

rmvolumecopy
Remove a copy of a volume but leave the actual volume intact
Converts a Mirrored, Stretched or HyperSwap volume into a basic volume
For a HyperSwap volume this includes deleting the active-active relationship and the
change volumes
Allows a copy to be identified simply by its site
The -force parameter from rmvdiskcopy is replaced by individual override parameters,
making it clearer to the user exactly what protection they are bypassing

New Volume CLIs


rmvolume
Remove a volume
For a HyperSwap volume this includes deleting the active-active relationship and the
change volumes
The -force parameter from rmvdisk is replaced by individual override parameters, making
it clearer to the user exactly what protection they are bypassing


GUI Support for HyperSwap Configuring system topology


Add Nodes
----------------------------------------Rename System
Rename Sites
Modify System Topology
Turn Off All Identify LEDs
Flip Layout
Update>
----------------------------------------Power Off
----------------------------------------Properties


GUI Support for HyperSwap Configuring system topology


Configure Multi-site wizard:

Site 1: London
Site 2: Hursley
Site 3 (quorum): Manchester

[Back] [Next] [Cancel]

GUI Support for HyperSwap Configuring system topology

(Screenshots: topology views)

GUI Support for HyperSwap Creating a HyperSwap volume

Create Volumes dialog (Quick Volume Creation | Advanced), with Basic, HyperSwap and Custom options. Example HyperSwap settings:
- Quantity: 1; Capacity: 24 GiB; Capacity savings: Compressed; Name: My_hs_volume
- Consistency group: None
- London: Pool: Pool1; I/O group: Auto select
- Hursley: Pool: Pool2; I/O group: Auto select

Summary: 1 volume, 1 copy in Hursley, 1 copy in London, 1 active-active relationship, 2 change volumes
[Create] [Create and Map to Host] [Cancel]

GUI Support for HyperSwap Viewing volumes


GUI Support for HyperSwap Creating a HyperSwap volume


GUI Support for HyperSwap Viewing volumes


Volume copy status roll-up - change volumes are hidden


SVC HyperSwap Cluster Layer Configuration

- Site 1: SVC (iogrp0), layer = replication, over V7K / V7K-U / V5K external storage (layer = storage)
- Site 2: SVC (iogrp1), layer = replication, over V7K / V7K-U / V5K external storage (layer = storage)
- Site 3: V3700 (quorum), layer = storage

Storwize HyperSwap Cluster Layer Configuration

- Site 1: V7K/V5K (iogrp0), layer = replication, over V7K / V7K-U / V5K external storage (layer = storage)
- Site 2: V7K/V5K (iogrp1), layer = replication, over V7K / V7K-U / V5K external storage (layer = storage)
- Site 3: V3700 (quorum), layer = storage

Miscellaneous changes

16Gb 4-port adapter


Increase max number of iSCSI host attach sessions
Remove 8G4 and 8A4 support
V7.6 with V3500/V3700
User configurable max single IO time for RC
Email setting allows '+'
Enhance ETv3 DPA log
Customizable login banner
SSL certificates


16Gb 4-port Adapter


This is a new 16Gb FC card
Supported on SVC DH8 nodes and V7000 Gen2
Can't activate the two unused ports on the existing 2-port 16Gb HBA; an MES is available to swap the 2-port card for this 4-port card

Supports up to four 4-port 16Gb or 8Gb FC adapters per DH8 node
Only one 10GbE adapter per node supported

PCIe slot   Cards supported by V7.6
1           Empty, or Compression Accelerator
2           Fibre Channel 4x8, 2x16, 4x16; or 10Gbps Ethernet
3           Fibre Channel 4x8, 2x16, 4x16; or 10Gbps Ethernet
4           Fibre Channel 4x8, 2x16, 4x16; or 12Gbps SAS
5           Empty, or Compression Accelerator
6           Fibre Channel 4x8, 2x16, 4x16

Using any of slots 4-6 requires the second CPU and 32GB cache upgrade

16Gb 4-port Adapter


Storwize V7000 Gen2 (2076-524)

PCIe slot   Cards supported by V7.6
1           Compression Accelerator
2           Fibre Channel 4x8, 2x16, 4x16; or 10Gbps Ethernet
3           Fibre Channel 4x8, 2x16, 4x16; or 10Gbps Ethernet

Each of the 4-port 16Gb ports has 2 LEDs (amber and green); LED behaviour is exactly the same as on the 2-port 16Gb FC card

Supports a maximum cable length of up to 5km with single-mode fibre and LW SFPs
For certain use cases, cable lengths of more than 5km can be used with DWDM/CWDM technology

Card supports 16G and 8G FC switches (minimum functional speed auto-negotiated)
When connected in direct-attach mode, host HBAs running at 16G and 8G port speeds are supported
Only one 10GbE adapter per node canister supported

Increase the max number of iSCSI host sessions per node


Hosts will continue to have a maximum of 4 iSCSI sessions per SVC node
Maximum of 8 paths to an I/O group

Administrators can now configure up to a maximum of 512 iSCSI Host IQNs per I/O group
Maximum of 2048 iSCSI Host IQNs for a 4 I/O group system
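The per-host and per-system limits above are simple products of the stated constants; a quick sanity check:

```python
SESSIONS_PER_NODE = 4    # unchanged per-host limit
NODES_PER_IOGRP = 2
IQNS_PER_IOGRP = 512     # raised in V7.6
MAX_IOGRPS = 4

# A host still sees at most 8 paths into an I/O group:
assert SESSIONS_PER_NODE * NODES_PER_IOGRP == 8
# A fully populated 4-I/O-group system handles 2048 host IQNs:
assert IQNS_PER_IOGRP * MAX_IOGRPS == 2048
```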


Withdraw 8G4 and 8A4 support


SVC node models 2145-8G4 and 2145-8A4 will not support V7.6
Customers will need to upgrade hardware to upgrade beyond V7.5

Running the upgrade test utility (aka the CCU checker) will call out the non-support of these nodes, and any attempt to upgrade to V7.6 will fail


Support of V7.6 with Storwize V3x00


Storwize V3x00 systems with 8GB of memory per node canister support installation of V7.6
If the node canisters only have 4GB of memory per canister the upgrade will fail

Solution is to order MES upgrade to 8GB of memory per node canister


User configurable max single I/O time for Remote Copy


Problem:
The current 1920 mechanism doesn't give the user fine enough granularity to prevent problems at the secondary site from causing I/O delays at the primary site
Applies to MM and GM

Solution:
Instead of penalizing the entire link, allow the customer to set a system-wide timeout value
If a particular I/O takes more than that specified amount of time to complete, the system looks at the stream the volume in question belongs to and performs a version of a 1920 on it

CLI additions:
chsystem -maxreplicationdelay <value>
o Sets a maximum replication delay in seconds: allows 0-360 (increments of 1)
o If set to 0, the feature is disabled
The setting is visible in lssystem and lspartnership
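The trigger condition amounts to a single comparison plus the disable-at-zero rule. A hypothetical sketch (the function name and the simplification to one I/O are assumptions, not the firmware's actual logic):

```python
def should_trigger_1920(io_seconds: float, maxreplicationdelay: int) -> bool:
    """Hypothetical guard: with the feature enabled, any single
    replicated I/O exceeding the limit 1920s the affected stream
    rather than the whole link."""
    if maxreplicationdelay == 0:
        return False                       # 0 disables the feature
    return io_seconds > maxreplicationdelay

assert should_trigger_1920(400.0, 0) is False   # disabled
assert should_trigger_1920(10.0, 30) is False
assert should_trigger_1920(45.0, 30) is True
```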



Email setting won't accept '+'

Fixes the problem whereby the email settings field did not support '+', and therefore did not meet the Internet Message Format RFC (RFC 2822)


Enhance the DPA log


With V7.6, Easy Tier DPA log is enabled by default
Customers will see log files named like dpa_log*.xml.gz under the /dumps/easytier folder
Easy Tier heat files are also moved into this new folder


Customizable Login Message-of-the-Day


With V7.6 a user-configurable message can be displayed on CLI and GUI login panels
Configurable in GUI and CLI

CLI commands:
chbanner -file /tmp/loginmessage
chbanner -enable | -disable
chbanner -clear

SSL Certificates

Currently, all SVC/Storwize systems use self-signed certificates
This causes security warnings in browsers and breaks security guidelines

The system can now generate SSL certificate requests which the customer can have signed and re-upload to the system
The strength of generated SSL certificates has also been increased, both self-signed and customer-signed

One certificate is used for all uses of SSL on the system:
Cluster GUI
Service GUI
Key server
Future enhancements


New CLI Certificate Commands

svcinfo lssystemcert
Displays information about the installed certificate

svctask chsystemcert
mkselfsigned: generates a self-signed certificate, similar to current behavior
mkrequest: generates an unsigned certificate request, to be signed by an external authority
install: installs a certificate signed by an external authority
export: exports the currently installed certificate in a format suitable for external use, e.g. a web browser

Self-signed certificates have optional parameters with defaults provided by the system
Unsigned certificate requests must have all parameters included in the CLI command


New Error Messages


New errors will be logged if the certificate is about to expire or has expired
There will be a new DMP for these events, which will create a new certificate
The same error/DMP will be used for self-signed and customer-signed certificates

Certificate storage

Certificates will be stored encrypted on nodes


The key used to encrypt the certificate will be sealed by the TPM if one exists; otherwise it is stored in plain text on the node HDD
The key used to encrypt the certificate will not be in cluster data
The key used to encrypt the certificate will be transferred between nodes by secure key manager transfer
When a node is removed from the cluster, it will delete the key
A T3 recovery may not be able to restore the existing certificate, depending on circumstances
A T4 recovery will not try to restore the existing certificate; it will generate a new self-signed certificate


Worldwide Support Change


Improvement in worldwide support processes, 24 hours a day, seven days a week:
Enhanced support adds 24-hour support response to severity 2 calls, seven days a week, for SVC, V7000, V7K
Unified, and V9000 customers, effective November 27, 2015

For additional details, consult your IBM Software Support Handbook:


http://www.ibm.com/support/handbook

Announcement letters:
IBM FlashSystem 900 and V9000 deliver enhanced worldwide support
IBM Spectrum Virtualize Software V7.6 delivers a flexible, responsive, available, scalable, and efficient
storage environment


V7.6 Technical Update

Denis Frank
Spectrum Virtualize Performance Architect


Performance relevant features in V7.6


Distributed RAID (DRAID)
Software Encryption
4-port 16Gb Fibre Channel adapter


DRAID - Overview
Traditional RAID (TRAID) has a very rigid layout:
RAID-10: N data drives + N mirrors, spares
RAID-5/6: N data drives + P parity drive (RAID-6: P+Q), spares

DRAID distributes blocks over any number of drives such that:


Spares are virtualized and distributed across drives
Host IO is balanced across drives
Rebuilding a failed drive balances IO across all other drives

DRAID keeps the same failure protection guarantees


(N data blocks plus P (+Q) parity) balanced over more physical drives

DRAID recommendation is 40-80 physical drives (up to 128)


8+P (RAID-5) and 10+P+Q (RAID-6) striping
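A DRAID array in the recommended width range might be created with a command along these lines. The command name and flags reflect our reading of the V7.6 CLI (mkdistributedarray), and the pool name, drive class, and counts are illustrative placeholders, so verify against your release documentation:

```shell
# Build the command string; on a real system it would be run on the cluster,
# e.g. ssh superuser@svc-cluster "$CMD". Pool0, driveclass 0 and the drive
# counts are placeholders for this sketch. Stripe width 12 corresponds to
# the 10+P+Q RAID-6 striping noted above.
CMD="svctask mkdistributedarray -level raid6 -driveclass 0 \
-drivecount 60 -stripewidth 12 -rebuildareas 2 Pool0"
echo "$CMD"
```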


DRAID Performance targets


On recommended configurations, performance is similar to the same number of drives configured as
traditional arrays
CPU-limited workloads (e.g. SSDs, short I/O, few arrays):
1 DRAID similar to 1 TRAID
Why: 1 CPU core is used per array; the V7000 Gen2 CPU has 8 cores
Future (2016): better load balancing per stride

Drive-limited workloads (e.g. Nearline):

1 DRAID (N drives) similar to M TRAIDs with N/M drives each
Example: 1 DRAID (10 drives) similar to 2 TRAIDs (2 x 5 drives), neglecting spares


DRAID Rebuild performance targets


DRAID rebuild up to 10 times faster than TRAID
Without host I/O: a 4TB drive can be rebuilt within 90 minutes for an array width
of 128 drives
With host I/O: if drives are up to 50% utilized, the rebuild takes about 50% longer
Benefit: lower probability of a second drive failing during rebuild on
recommended configurations, compared to traditional RAID
Planned (2016): rebuild only the actual used capacity on a drive


Software Encryption - Overview


Encrypt data on external storage controllers with no encryption capability
Will not double encrypt; uses existing external/hardware encryption on:

Virtualized FlashSystems
Storage controllers reporting encrypted disks in SCSI Inquiry page C2
SAS hardware encryption on internal storage (drives) on V7000 Gen2 and DH8
External MDisks manually defined as encrypted

Industry-standard XTS-AES-256 using the AES-NI CPU instruction set

Per-pool encryption
Software performance impact: 10 to 20% worst case on systems under maximum load
Note: DRAID encryption delayed to a V7.6 PTF release
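Per-pool encryption might be configured along these lines. The flags reflect our reading of the V7.6 CLI (`-encrypt yes` on mkmdiskgrp, and chmdisk to declare an external MDisk as already encrypted so it is not double-encrypted), and the object names are illustrative placeholders:

```shell
# Build the command strings; on a real system they would run on the cluster
# via ssh. EncPool0 and mdisk7 are placeholder names for this sketch.
CREATE_POOL="svctask mkmdiskgrp -name EncPool0 -ext 1024 -encrypt yes"
MARK_MDISK="svctask chmdisk -encrypt yes mdisk7"
printf '%s\n' "$CREATE_POOL" "$MARK_MDISK"
```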

Software Encryption Performance measured

SVC DH8 over FlashSystem (SW encryption), 1 I/O group, 8Gb FC, cache miss:

                             encrypted   unencrypted   % performance
4k random read (IOPs)           520k        600k            86%
4k random write (IOPs)          168k        185k            90%
256k random read (MB/s)        10700       13000            82%
256k random write (MB/s)        2900        3100            93%

Storwize V7000 Gen2 over 50% FlashSystem (SW encryption) / 50% SSD RAID5 (HW encryption), 1 I/O group, 8Gb FC, cache miss:

                             encrypted   unencrypted   % performance
4k random read (IOPs)           270k        316k            85%
4k random write (IOPs)           74k         83k            89%
256k random read (MB/s)         7200        9200            78%
256k random write (MB/s)        2600        3100            83%

16Gb Fibre Channel Adapter


Support for 4-port 16Gb/s Emulex G6 Lancer cards
SVC DH8: up to 4 cards per node, i.e. 8 cards (32 ports) per I/O group
V7000 Gen2: up to 4 cards (16 ports) per control enclosure
Performance: early measurements with 32 ports (DH8) / 16 ports (V7000 Gen2),
comparing 16Gb FC to a similar 8Gb FC configuration (cache-hit maximum throughput):

Short 4k read/write (IOPs)      similar to existing 8Gb FC
Bandwidth 256k read (MB/s)      75% improvement
Bandwidth 256k write (MB/s)     similar to existing 8Gb FC


Questions?


Legal Notices
Copyright 2015 by International Business Machines Corporation. All rights reserved.
No part of this document may be reproduced or transmitted in any form without written permission from IBM Corporation.
Product data has been reviewed for accuracy as of the date of initial publication. Product data is subject to change without notice. This document could include technical
inaccuracies or typographical errors. IBM may make improvements and/or changes in the product(s) and/or program(s) described herein at any time without notice. Any
statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. References in this document
to IBM products, programs, or services does not imply that IBM intends to make such products, programs or services available in all countries in which IBM operates or does
business. Any reference to an IBM Program Product in this document is not intended to state or imply that only that program product may be used. Any functionally equivalent
program, that does not infringe IBM's intellectually property rights, may be used instead.
THE INFORMATION PROVIDED IN THIS DOCUMENT IS DISTRIBUTED "AS IS" WITHOUT ANY WARRANTY, EITHER EXPRESS OR IMPLIED. IBM EXPRESSLY DISCLAIMS ANY
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NONINFRINGEMENT. IBM shall have no responsibility to update this information. IBM
products are warranted, if at all, according to the terms and conditions of the agreements (e.g., IBM Customer Agreement, Statement of Limited Warranty, International Program
License Agreement, etc.) under which they are provided. Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products in connection with this publication and cannot confirm the accuracy of performance,
compatibility or any other claims related to non-IBM products. IBM makes no representations or warranties, expressed or implied, regarding non-IBM products and services.
The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents or copyrights. Inquiries regarding patent or
copyright licenses should be made, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.


Information and Trademarks


IBM, the IBM logo, ibm.com, IBM System Storage, IBM Spectrum Storage, IBM Spectrum Control, IBM Spectrum Protect, IBM Spectrum Archive, IBM Spectrum Virtualize, IBM Spectrum Scale, IBM Spectrum
Accelerate, Softlayer, and XIV are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. A current list of IBM trademarks is available on the Web at "Copyright and
trademark information" at http://www.ibm.com/legal/copytrade.shtml
The following are trademarks or registered trademarks of other companies.
Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries.
IT Infrastructure Library is a Registered Trade Mark of AXELOS Limited.
Linear Tape-Open, LTO, the LTO Logo, Ultrium, and the Ultrium logo are trademarks of HP, IBM Corp. and Quantum in the U.S. and other countries.
Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its
subsidiaries in the United States and other countries.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.
Cell Broadband Engine is a trademark of Sony Computer Entertainment, Inc. in the United States, other countries, or both and is used under license therefrom.
ITIL is a Registered Trade Mark of AXELOS Limited.
UNIX is a registered trademark of The Open Group in the United States and other countries.
* All other products may be trademarks or registered trademarks of their respective companies.
Notes:
Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will
vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be
given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here.
All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results they may have achieved. Actual
environmental costs and performance characteristics will vary depending on individual customer configurations and conditions.
This publication was produced in the United States. IBM may not offer the products, services or features discussed in this document in other countries, and the information may be subject to change without notice.
Consult your local IBM business contact for information on the product or services available in your area.
All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance, compatibility,
or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.
Prices subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography.
This presentation and the claims outlined in it were reviewed for compliance with US law. Adaptations of these claims for use in other geographies must be reviewed
by the local country counsel for compliance with local laws.


Special notices
This document was developed for IBM offerings in the United States as of the date of publication. IBM may not make these offerings available in other countries, and the information is
subject to change without notice. Consult your local IBM business contact for information on the IBM offerings available in your area.
Information in this document concerning non-IBM products was obtained from the suppliers of these products or other public sources. Questions on the capabilities of non-IBM products
should be addressed to the suppliers of those products.
IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give you any license to these patents. Send
license inquiries, in writing, to IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 USA.
All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
The information contained in this document has not been submitted to any formal IBM test and is provided "AS IS" with no warranties or guarantees either expressed or implied.
All examples cited or described in this document are presented as illustrations of the manner in which some IBM products can be used and the results that may be achieved. Actual
environmental costs and performance characteristics will vary depending on individual client configurations and conditions.
IBM Global Financing offerings are provided through IBM Credit Corporation in the United States and other IBM subsidiaries and divisions worldwide to qualified commercial and
government clients. Rates are based on a client's credit rating, financing terms, offering type, equipment type and options, and may vary by country. Other restrictions may apply. Rates
and offerings are subject to change, extension or withdrawal without notice.
IBM is not responsible for printing errors in this document that result in pricing or information inaccuracies.
All prices shown are IBM's United States suggested list prices and are subject to change without notice; reseller prices may vary.
IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.
Any performance data contained in this document was determined in a controlled environment. Actual results may vary significantly and are dependent on many factors including system
hardware configuration and software design and configuration. Some measurements quoted in this document may have been made on development-level systems. There is no
guarantee these measurements will be the same on generally-available systems. Some measurements quoted in this document may have been estimated through extrapolation. Users
of this document should verify the applicable data for their specific environment.
