Feedback
Hitachi Data Systems welcomes your feedback. Please share your thoughts by sending an email message to SolutionLab@hds.com. Be sure to include the title of this white paper in your email message.
Table of Contents
Product Features
  Hitachi Unified Storage
  Hitachi Dynamic Provisioning Software
  Hitachi Compute Blade 2000
  VMware vSphere 5
Test Environment
Test Methodology
  Deploy a Virtual Machine from a Template
  Perform Storage vMotion Operations on LUs Created with Hitachi Dynamic Provisioning
  Convert a Virtual Disk From Lazyzeroedthick to Eagerzeroedthick
  Expand VMFS Datastore
  Warm-up New Virtual Disks
  Simulate Workload on Standard LUs
  Simulate Workload on Dynamically Provisioned LUs
Analysis
  Deploying a Virtual Machine From a Template
  Perform Storage vMotion Operations on LUs Created with Hitachi Dynamic Provisioning
  Convert a Virtual Disk From Lazyzeroedthick to Eagerzeroedthick
  Expand VMFS Datastore
  Warm-up New Virtual Disks
  Simulate Workload on Standard LUs
  Simulate Workload on Dynamically Provisioned LUs
Test Results
  Deploying a VM From Template
  Perform Storage vMotion Operations on LUs Created with Hitachi Dynamic Provisioning
  Convert a Virtual Disk From Lazyzeroedthick to Eagerzeroedthick
  Expand VMFS Datastore
  Warm-up New Virtual Disks
  Simulate Workload on Standard LUs
  Simulate Workload on Dynamically Provisioned LUs
Using VMware vSphere 5 with Hitachi Dynamic Provisioning on Hitachi Unified Storage
Lab Validation Report
This paper describes testing Hitachi Data Systems conducted to validate practices for making storage provisioning and virtual disk format decisions in Hitachi Unified Storage environments using VMware vSphere 5. It describes how Hitachi Dynamic Provisioning works and the different virtual disk format choices available. This testing supports the following recommendations:
When requiring maximum performance, use the following:
- Eagerzeroedthick virtual disk format, to eliminate performance anomalies
- Hitachi Dynamic Provisioning logical units (LUs), for wide striping benefits
When requiring maximum cost savings and low administrative overhead, use the following:
This lab validation report shows how you can leverage Hitachi Dynamic Provisioning running on Hitachi Unified Storage to maximize benefits and to minimize operational complexities in VMware vSphere 5 environments. At a high level, you can create LUs on Hitachi Unified Storage as standard RAID devices, or you can provision LUs as dynamic provisioning volumes using Hitachi Dynamic Provisioning. This paper is intended for storage and datacenter administrators implementing storage within vSphere 5 environments.
Note Testing was done in a lab environment. Many things affect production environments beyond prediction or duplication in a lab environment. Follow recommended practice by conducting proof-of-concept testing in a non-production, isolated test environment that otherwise matches your production environment before implementing this solution in production.
Product Features
These are the features of some of the products used in testing.
Hitachi Dynamic Provisioning Software

Deploying Hitachi Dynamic Provisioning avoids the routine issue of hot spots that occur on logical devices (LDEVs). These occur within individual RAID groups when the host workload exceeds the IOPS or throughput capacity of that RAID group. Dynamic provisioning distributes the host workload across many RAID groups, which provides a smoothing effect that dramatically reduces hot spots.

When used with Hitachi Unified Storage, Hitachi Dynamic Provisioning has the benefit of thin provisioning. Physical space is assigned from the pool to the DP-VOL as needed, in 1 GB chunks, up to the logical size specified for each DP-VOL. Pool capacity can be expanded or reduced dynamically without disruption or downtime. You can rebalance an expanded pool across the current and newly added RAID groups for even striping of the data and the workload.

For more information, see the Hitachi Dynamic Provisioning datasheet and Hitachi Dynamic Provisioning on the Hitachi Data Systems website.
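The chunk-on-demand behavior described above can be sketched in a few lines. This is an illustrative model only: the 1 GB chunk size comes from the text, while the round-robin placement across RAID groups is a simplification of the array's wide striping.

```python
# Sketch of thin allocation from a dynamic provisioning pool.
# Assumptions: 1 GB chunks (per the text) and simple round-robin
# placement across RAID groups to mimic the wide-striping effect.

CHUNK_GB = 1

class DPPool:
    def __init__(self, raid_groups):
        self.raid_groups = raid_groups                    # backing RAID groups
        self.chunks_per_group = {rg: 0 for rg in raid_groups}
        self.next_rg = 0

    def write(self, dp_vol, gb_written):
        """Allocate pool chunks to a DP-VOL only as data is written."""
        needed = -(-gb_written // CHUNK_GB) - dp_vol["chunks"]  # ceil minus existing
        for _ in range(max(0, needed)):
            rg = self.raid_groups[self.next_rg % len(self.raid_groups)]
            self.chunks_per_group[rg] += 1
            dp_vol["chunks"] += 1
            self.next_rg += 1

pool = DPPool(["RG-000", "RG-001", "RG-002"])
vol = {"logical_gb": 50, "chunks": 0}   # 50 GB DP-VOL, nothing allocated yet

pool.write(vol, 9)                      # guest writes 9 GB
print(vol["chunks"])                    # 9 chunks (9 GB) consumed, not 50
print(pool.chunks_per_group)            # spread evenly across all three RAID groups
```

Note how pool consumption tracks written data, not the DP-VOL's logical size, and how every backing RAID group shares the load.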
Hitachi Compute Blade 2000

These are the features of Hitachi Compute Blade 2000:
- A balanced system architecture that eliminates bottlenecks in performance and throughput
- Configuration flexibility
- Eco-friendly power-saving capabilities
- Fast server failure recovery using an N+1 cold standby design that allows replacing failed servers within minutes
For more information, see Hitachi Compute Blade Family on the Hitachi Data Systems website.
VMware vSphere 5
VMware vSphere 5 is a virtualization platform that provides a datacenter infrastructure. It features vSphere Distributed Resource Scheduler (DRS), high availability, and fault tolerance. VMware vSphere 5 has the following components:
- ESXi 5.0: This is a hypervisor that loads directly on a physical server. It partitions one physical machine into many virtual machines that share hardware resources.
- vCenter Server: This allows management of the vSphere environment through a single user interface. With vCenter, features such as vMotion, Storage vMotion, Storage Distributed Resource Scheduler, High Availability, and Fault Tolerance are available.
Test Environment
This describes the test environment used in the Hitachi Data Systems lab. Figure 1 shows the physical layout of the test environment.
Figure 1
Figure 2 shows the virtual machines used with the VMware vSphere 5 cluster hosted on Hitachi Compute Blade 2000.
Figure 2
Figure 3 shows the configuration for Hitachi Unified Storage 150 used in testing.
Figure 3
Table 1 lists the Hitachi Compute Blade 2000 components used in this lab validation report.
Table 1. Hitachi Compute Blade 2000 Configuration
Detail | Description
Chassis | Version A0195-C-6443; 8-blade chassis; 4 1/10 Gb/sec network switch modules; 2 management modules; 8 cooling fan modules; 4 power supply modules
Server blade | Version 03-57; 8 8 Gb/sec Fibre Channel PCIe HBAs; 2 2-core Intel Xeon E5503 processors at 2 GHz; 144 GB RAM per blade
Table 2 lists the VMware vSphere 5 software used in this lab validation report.
Table 2. VMware vSphere 5 Software
Version
5.0.0 Build 455964
5.0.0 Build 455964
5.0.0 Build 441354

All virtual machines used in this lab validation report used Microsoft Windows Server 2008 R2 Enterprise Edition 64-bit for the operating system.
Table 3 lists the VMware vSphere 5 virtual disk formats tested in this lab validation report.
Table 3. VMware vSphere 5 Virtual Disk Formats
Format | Description
Thin | The virtual disk is allocated only the storage capacity required by the guest OS. As write operations occur, additional space is allocated and zeroed. The virtual disk grows to the maximum allotted size.
Lazyzeroedthick | The virtual disk storage capacity is pre-allocated at creation. The virtual disk does not grow in size. However, the allocated space is not pre-zeroed. As the guest OS writes to the virtual disk, the space is zeroed as needed.
Eagerzeroedthick | The virtual disk storage capacity is pre-allocated at creation. The virtual disk does not grow in size. The virtual disk is pre-zeroed, so that as the guest OS writes to the virtual disk, the space does not need to be zeroed at that time.
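One way to see why the formats behave differently under load is to model what must happen on a guest's first write to a block. This is a conceptual sketch based on the descriptions in Table 3, not VMware internals.

```python
# Sketch of first-write cost per virtual disk format (illustrative only;
# the operation lists model the behavior described in Table 3).

def first_write_ops(fmt):
    """Operations needed the first time a guest writes to a block."""
    if fmt == "thin":
        return ["allocate", "zero", "write"]   # grow the disk, zero, then write
    if fmt == "lazyzeroedthick":
        return ["zero", "write"]               # pre-allocated, zeroed on demand
    if fmt == "eagerzeroedthick":
        return ["write"]                       # pre-allocated and pre-zeroed
    raise ValueError(fmt)

for f in ("thin", "lazyzeroedthick", "eagerzeroedthick"):
    print(f, first_write_ops(f))
```

The extra steps for thin and lazyzeroedthick disks are the source of the warm-up overhead examined in the test results.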
Test Methodology
All Hitachi Unified Storage management functions were performed using Hitachi Storage Navigator Modular 2 software. All VMware vSphere 5 operations were performed using vCenter Server and vSphere Client. All tests were conducted with the following VAAI primitives enabled:
- Full copy: This primitive enables the storage system to make full copies of data within the storage system without having the ESX host read and write the data.
- Block zeroing: This primitive enables storage systems to zero out a large number of blocks to speed provisioning of virtual machines.
- Hardware-assisted locking: This primitive provides an alternative means to protect the metadata for VMFS cluster file systems, thereby improving the scalability of large ESX host farms sharing a datastore.
- Thin provisioning stun API: This primitive enables the storage system to notify the ESX host when thin provisioned volumes reach a certain capacity utilization threshold. When enabled, this allows the ESX host to take preventive measures to maintain virtual machine integrity.
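For a rough sense of scale for the block zeroing primitive, the number of zeroing commands needed for a new virtual disk can be estimated. The 1 MB-per-command transfer size here is our assumption for illustration; actual sizes depend on host and array settings.

```python
# Back-of-the-envelope count of zeroing commands for a new virtual disk.
# Assumption (ours, not from the paper): each block-zero (WRITE SAME)
# command covers 1 MB. Real transfer sizes vary by configuration.

def zero_commands(disk_gb, mb_per_command=1):
    """Commands needed to zero an entire disk at a given transfer size."""
    return disk_gb * 1024 // mb_per_command

# Zeroing a 40 GB virtual disk requires the same total command count
# whether it happens at provisioning time (eagerzeroedthick) or at
# first-write time (thin and lazyzeroedthick); only the timing differs.
print(zero_commands(40))      # 40960 commands at 1 MB each
print(zero_commands(40, 4))   # 10240 commands at 4 MB each
```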
These tests were performed:
- Deploy a Virtual Machine from a Template
- Perform Storage vMotion Operations on LUs Created with Hitachi Dynamic Provisioning
- Convert a Virtual Disk From Lazyzeroedthick to Eagerzeroedthick
- Expand VMFS Datastore
- Warm-up New Virtual Disks
- Simulate Workload on Standard LUs
- Simulate Workload on Dynamically Provisioned LUs
Note A thin-friendly operation allows cost savings through overprovisioning of the storage. The file system and application use the space efficiently and do not claim blocks of storage that are not currently needed.
Deploying a Virtual Machine from a Template Using a Lazyzeroedthick Format Virtual Disk
This is the test procedure to deploy a virtual machine from a template using a lazyzeroedthick format virtual disk:
1. Create a RAID-5 (3D+1P) group using 600 GB SAS disks.
2. Add a 1.5 TB LU to the RAID group to create a VMFS datastore called RAID5-source.
3. Deploy a virtual machine using a lazyzeroedthick virtual disk format on RAID5-source and install Microsoft Windows Server 2008 R2.
4. Shut down the Windows virtual machine and convert it to a template.
5. Create a dynamic provisioning pool using Hitachi Dynamic Provisioning that contains three RAID-6 (6D+2P) groups with 600 GB SAS disks.
6. Create a dynamic provisioning LU with Hitachi Dynamic Provisioning of 50 GB and create a VMFS datastore called RAID-6-target.
7. Deploy a virtual machine from the template using a lazyzeroedthick format virtual disk on RAID-6-target using 40 GB virtual disks.
Deploying a Virtual Machine from a Template Using an Eagerzeroedthick Format Virtual Disk
This is the test procedure to deploy a virtual machine from a template using an eagerzeroedthick format virtual disk:
1. Using the template created in Deploying a Virtual Machine from a Template Using a Lazyzeroedthick Format Virtual Disk, deploy a virtual machine on RAID-6-target using 40 GB virtual disks.
2. Enable Fault Tolerance on the newly created virtual machine. This operation converts the virtual disk from lazyzeroedthick to eagerzeroedthick format.
Perform Storage vMotion Operations on LUs Created with Hitachi Dynamic Provisioning
The goal of this test was to determine how much dynamic provisioning pool space is consumed when a lazyzeroedthick format virtual disk is moved from one dynamic provisioning pool to another. Source and destination dynamic provisioning pools were created using Hitachi Dynamic Provisioning. A virtual machine with a lazyzeroedthick virtual disk format was created and migrated with VMware Storage vMotion.

This is the test procedure:
1. Create two dynamic provisioning pools with Hitachi Dynamic Provisioning on Hitachi Unified Storage 150, DP Pool-000 and DP Pool-001.
2. Create two 50 GB LUs with Hitachi Dynamic Provisioning, LU-000 in DP Pool-000 and LU-001 in DP Pool-001.
3. Configure two VMFS datastores named RAID-6-vMotion-1 on LU-000, and RAID-6-vMotion-2 on LU-001.
4. Create a 40 GB virtual disk in lazyzeroedthick format on the VMFS datastore RAID-6-vMotion-1 and assign it to the virtual machine created in deployment tests.
5. On the Windows virtual machine, run VDbench with a 100 percent write, 4 KB block I/O workload to Physical Drive 2 until the dynamic provisioning LU shows 50 percent (20 GB) utilization.
6. Using the Storage vMotion GUI, migrate the virtual disk from the VMFS datastore RAID-6-vMotion-1 to the VMFS datastore RAID-6-vMotion-2, selecting the Same format as source option.
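The expected thin-friendly outcome of this procedure can be sketched as follows. This is illustrative only; it assumes the destination pool allocates 1 GB chunks only for data actually copied.

```python
# Sketch of why Storage vMotion is thin-friendly here: only data the guest
# has written (rounded up to 1 GB pool chunks) lands in the destination
# pool, not the full 40 GB logical size of the virtual disk.

CHUNK_GB = 1

def chunks_consumed(written_gb):
    """Pool chunks needed to hold the written data (ceiling division)."""
    return -(-written_gb // CHUNK_GB)

logical_gb = 40
written_gb = 20                          # 50% utilization, as in step 5

dest_pool_gb = chunks_consumed(written_gb) * CHUNK_GB
print(dest_pool_gb)   # 20 GB consumed in the destination pool, not 40 GB
```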
These metrics were captured:
- Guest OS storage metrics, through VDbench output
- ESX metrics, through ESXtop batch mode output
- Hitachi Unified Storage metrics, through Hitachi Storage Navigator Modular 2

Close attention was paid to sequential versus random I/O.
In these lab tests, moving workloads between VMFS datastores was done by creating new eagerzeroedthick virtual disks.
Table 4 shows the I/O workloads used during the workload simulation tests.
Table 4. Simulated I/O Workloads
Test | Workload | IOPS target
OLTP | 100% random, 75% read / 25% write, 8 KB block | Maximum (priority workload)
Email | 100% random, 50% read / 50% write, 8 KB block | 230 IOPS
Streaming media | 100% sequential, 100% read, 256 KB block | Not specified
Unstructured data (file serving, SharePoint applications where unstructured data is shared) | 100% random, 90% read / 10% write, 8 KB block | 400 IOPS
Backup to disk | 100% sequential, 100% write, 256 KB block | Not specified
This test did the following:
- Established a baseline
- Determined the degree of contention created by running multiple workloads on a single RAID-5 (3D+1P) group
The OLTP workload was the priority workload and was run at maximum I/O throughput. For the other workloads, IOPS were limited by VDbench parameters. Figure 4 shows the test configuration with one standard LU for four workloads.
Figure 4
Figure 5
Figure 6
Figure 7
Figure 8
This test determined the following:
- Whether the storage system can detect sequential I/O
- Whether the virtual host bus adapter breaks up the sequential workload, making it appear as random to the storage system
This test used two dynamic provisioning pools, DP Pool-000 and DP Pool-001. Each pool contained a single LU. DP Pool-000 contained two RAID groups and was used for all random workloads. DP Pool-001 contained a single RAID group and was used for the sequential reads on the streaming media workload. Figure 9 shows the test configuration using two dynamic provisioning LUs.
Figure 9
Using the workloads from Hitachi Dynamic Provisioning Test Configuration 2, the number of workloads and RAID groups was doubled for this test. However, a single dynamic provisioning pool was used instead of isolating the streaming media workload. For these tests, DP Pool-000 was extended to six RAID groups, as shown in Figure 10.
Figure 10
Analysis
Deploying a Virtual Machine From a Template
From the storage system perspective, a lazyzeroedthick format virtual disk only uses the actual space needed by the guest operating system, data, and applications. Deploying virtual machines from templates that have the guest operating system installed on a lazyzeroedthick format virtual disk is a Hitachi Dynamic Provisioning thin-friendly operation. Hitachi Data Systems recommends using lazyzeroedthick format virtual disks for virtual machine templates.

An eagerzeroedthick format virtual disk uses the virtual disk's total assigned space. Deploying virtual machines from templates in eagerzeroedthick format to any virtual disk format is not a storage system thin-friendly operation.
Perform Storage vMotion Operations on LUs Created with Hitachi Dynamic Provisioning
Storage vMotion is a Hitachi Dynamic Provisioning thin-friendly operation.
Note Hitachi Dynamic Provisioning on Hitachi Unified Storage has the ability to run a zero page reclaim function on a dynamically provisioned volume. With zero page reclaim, any zeroed pages can be reallocated as needed.
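Conceptually, zero page reclaim works as sketched below. This is a simplified model; the actual page scan is internal to the array.

```python
# Sketch of zero page reclaim (conceptual; Hitachi Dynamic Provisioning
# performs the real scan inside the array): pages that contain only
# zeros are returned to the pool's free space.

def zero_page_reclaim(dp_vol_pages):
    """Return (kept pages, reclaimed count) after dropping all-zero pages."""
    kept = [p for p in dp_vol_pages if any(p)]   # any() is False for all-zero bytes
    return kept, len(dp_vol_pages) - len(kept)

pages = [bytes(16), b"some data\x00\x00\x00\x00\x00\x00\x00", bytes(16)]
kept, reclaimed = zero_page_reclaim(pages)
print(reclaimed)   # 2 zeroed pages reclaimed back to the pool
```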
- Sequential versus random workload: Sequential workloads, when mixed with sequential or random workloads on standard LUs, can become random.
- Total IOPS available from the RAID group: Additional physical disk reads and writes occur for every application write when using mirrors or XOR parity.
- Application requirements: Total IOPS and response time are critical factors when designing RAID group architecture.
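The parity overhead mentioned above can be made concrete with the classic RAID write penalty arithmetic. This is a sketch: the write penalties are standard rules of thumb, and the per-disk IOPS figure is our assumption, not a measured value from this testing.

```python
# Back-end IOPS for a front-end workload, using the classic write
# penalties (RAID-1/10: 2, RAID-5: 4, RAID-6: 6). The 75/25 read/write
# mix matches the OLTP profile in Table 4.

def backend_iops(frontend_iops, read_pct, write_penalty):
    """Physical disk IOPS generated by a host workload on parity RAID."""
    reads = frontend_iops * read_pct
    writes = frontend_iops * (1 - read_pct)
    return reads + writes * write_penalty

# OLTP mix on RAID-5 (3D+1P): each host write costs 4 disk operations.
need = backend_iops(1000, 0.75, 4)
print(need)                 # 1750.0 back-end IOPS for 1000 front-end IOPS

# Four 600 GB SAS disks at an assumed ~180 IOPS each:
print(need <= 4 * 180)      # False: one RAID-5 group cannot sustain it
```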
Test Results
These are the results from testing in the Hitachi Data Systems lab.
Deploying a Virtual Machine from a Template Using a Lazyzeroedthick Format Virtual Disk
These are the observations:
1. Microsoft Windows Server used approximately 10 GB of disk space when deploying a virtual machine using the lazyzeroedthick virtual disk format.
2. The datastore used 1 GB of space from the dynamic provisioning pool when creating a VMFS datastore.
3. The datastore used 9 GB of space from the dynamic provisioning pool when deploying a virtual machine from the template using a lazyzeroedthick format virtual disk.
Deploying a Virtual Machine from a Template Using an Eagerzeroedthick Format Virtual Disk
This is the observation:
The datastore used 41 GB of space from the dynamic provisioning pool when deploying a virtual machine using a thick format virtual disk.
Perform Storage vMotion Operations on LUs Created with Hitachi Dynamic Provisioning
The results from the procedure in Perform Storage vMotion Operations on LUs Created with Hitachi Dynamic Provisioning support the conclusions in the analysis of that test. These are the observations:
Figure 11
Figure 11 shows that eagerzeroedthick virtual disk IOPS is the same on Hitachi Unified Storage 150 and the ESX hosts. No warm-up overhead was observed with the eagerzeroedthick virtual disk.
Figure 12 shows the lazyzeroedthick virtual disk IOPS during the warm-up routine.
Figure 12
Figure 12 shows the lazyzeroedthick virtual disk with an approximate 75% IOPS overhead during the warm-up. During the first 40 minutes, lazyzeroedthick IOPS is significantly lower than the IOPS of the Hitachi Unified Storage 150. After 160 minutes, the IOPS matches that of the eagerzeroedthick virtual disk.
Figure 13 shows the thin virtual disk IOPS during the warm-up routine.
Figure 13
Figure 13 shows the thin virtual disk with an approximate 75% IOPS overhead during warm-up. This is the same overhead percentage as the lazyzeroedthick virtual disk. However, the thin virtual disk generates significantly higher initial IOPS than the lazyzeroedthick virtual disk. After 25 minutes, the IOPS of the thin virtual disk matched the IOPS of the eagerzeroedthick virtual disk.
Figure 14 shows the captured virtual disk latency for all three virtual disk types.
Figure 14
When comparing latency, the eagerzeroedthick virtual disk showed latency of less than 1 msec. The thin virtual disk showed higher latency until 20 minutes. The lazyzeroedthick virtual disk showed much higher latency until 50 minutes into the warm-up routine, but did not match the latency of the eagerzeroedthick virtual disk until 160 minutes.
VAAI primitives were enabled during the warm-up routines. VAAI-specific metrics were captured from ESXtop. Figure 15 shows the block zeroing-specific metric, MBZero per second.
Figure 15
The eagerzeroedthick virtual disk did not show any MBZero per second because it was zeroed during the provisioning of the virtual disk. The thin virtual disk showed a higher rate of MBZero per second over time when compared to lazyzeroedthick. However, the thin virtual disk required block zeroing for a significantly shorter period.
Figure 16 shows the total block zero commands issued during the warm-up routine. The block zero commands issued counter shows the total number of commands issued during the uptime of an ESX host. This counter resets during reboot.
Figure 16
The eagerzeroedthick virtual disk shows no zeroing during the warm-up routine. The thin virtual disk showed a total of 80,000 block zero commands issued by 20 minutes. The lazyzeroedthick virtual disk showed a total of 94,000 block zero commands issued by 55 minutes. Figure 16 shows that the thin virtual disk completed zeroing of the blocks at 20 minutes and lazyzeroedthick completed at 55 minutes.
Figure 17
When comparing these graphs to the application requirements in Table 4, Simulated I/O Workloads, only one of the four workloads achieved the required IOPS. The disk percent busy rate for all four disks in the RAID group reached 100 percent. This means that more disks are needed to provide the required IOPS for the OLTP, streaming media, and unstructured data workloads.
Figure 18
Creating the second LU and moving some of the workload reduced the contention for the disk resources and allowed the OLTP and streaming media workloads access to RG-000. However, the streaming media workload still did not meet its IOPS requirement.
Figure 19 shows the busy percentage for each RAID group.
Figure 19
The busy percentage was at 97% for RG-000 and 94% for RG-001. The disks reached their maximum performance.
Figure 20
The streaming media and email workloads met their IOPS requirements; the other workloads did not. The OLTP and unstructured data workloads have contention for disk resources. Figure 21 shows the busy percentage for each RAID group.
Figure 21
The busy percentage for each RAID group showed a significant imbalance. RG-001 is significantly underutilized while RG-000 is at maximum utilization.
Figure 22
Figure 23 shows the busy percentage of each RAID group.
Figure 23
Although each workload met its IOPS requirements, RG-002 is significantly underutilized. With RG-000 and RG-001 reaching their maximum load, neither RAID group can sustain a disk failure without significant performance degradation.
Figure 24
Figure 24 shows the OLTP and email workloads met their IOPS requirements.
Figure 25 shows the busy percentage of each RAID group used in the dynamic provisioning pool.
Figure 25
All the disks in the dynamic provisioning pool were used to provide IOPS.
Figure 26
The OLTP, email, and streaming media workloads met their IOPS requirements. However, the unstructured data workload has disk resource contention.
Figure 27 shows the disk busy percentage for each RAID group used in the dynamic provisioning pool.
Figure 27
Because RG-190 and RG-191 are in the same dynamic provisioning pool, both RAID groups are actively used because of the wide striping effect. RG-192 is in a separate dynamic provisioning pool, so it sees a different utilization percentage based on the sequential streaming media workload.
Figure 28
Every workload achieved its IOPS requirements. Because OLTP was the priority workload, it was set to run at the maximum I/O rate, and it surpassed its IOPS requirement. Unlike the standard LU tests, no manual administrative effort was needed to balance the workload across the disks. The sequential workloads mixed well with random workloads.
Figure 29 shows the I/O latency for each workload.
Figure 29
The latency for each workload is under 20 msec. The average latency across all workloads was 14 msec.
Figure 30 shows the disk busy percentage of each RAID group used in the dynamic provisioning pool.
Figure 30
The RAID groups have a fairly even distribution of utilization due to the wide striping effect. On average, there is about 80% utilization for each RAID group. There is additional performance still available in this test. Each workload ran with 16 threads. Increasing the threads would increase disk utilization, which would increase the workload IOPS.
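The relationship between thread count, latency, and IOPS noted above follows Little's Law, which can be checked against the reported numbers. The 32-thread projection assumes latency stays flat, which higher disk utilization would likely prevent.

```python
# Sketch relating threads, latency, and IOPS via Little's Law
# (IOPS ~ outstanding I/Os / latency). Uses the 16 threads and 14 ms
# average latency reported above; the 32-thread figure is a ceiling,
# not a prediction, since latency rises as disks approach saturation.

def iops(threads, latency_s):
    """Little's Law: throughput given outstanding I/Os and service time."""
    return threads / latency_s

print(round(iops(16, 0.014)))   # ~1143 IOPS per workload at 16 threads
print(round(iops(32, 0.014)))   # ~2286 if latency stayed flat (optimistic)
```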
Corporate Headquarters: 750 Central Expressway, Santa Clara, California 95050-2627 USA, www.HDS.com
Regional Contact Information
Americas: +1 408 970 1000 or info@HDS.com
Europe, Middle East and Africa: +44 (0) 1753 618000 or info.emea@HDS.com
Asia-Pacific: +852 3189 7900 or hds.marketing.apac@HDS.com
Hitachi is a registered trademark of Hitachi, Ltd., in the United States and other countries. Hitachi Data Systems is a registered trademark and service mark of Hitachi, Ltd., in the United States and other countries. All other trademarks, service marks, and company names in this document or website are properties of their respective owners.
Notice: This document is for informational purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems Corporation.

© Hitachi Data Systems Corporation 2012. All Rights Reserved. AS-139-00, April 2012