
Using VMware vSphere 5 with Hitachi Dynamic Provisioning on Hitachi Unified Storage

Lab Validation Report


By Henry Chu

April 19, 2012

Feedback
Hitachi Data Systems welcomes your feedback. Please share your thoughts by sending an email message to SolutionLab@hds.com. Be sure to include the title of this white paper in your email message.

Table of Contents
Product Features .......... 2
    Hitachi Unified Storage .......... 2
    Hitachi Dynamic Provisioning Software .......... 2
    Hitachi Compute Blade 2000 .......... 3
    VMware vSphere 5 .......... 3
Test Environment .......... 4
Test Methodology .......... 9
    Deploy a Virtual Machine from a Template .......... 10
    Perform Storage vMotion Operations on LUs Created with Hitachi Dynamic Provisioning .......... 11
    Convert a Virtual Disk From Lazyzeroedthick to Eagerzeroedthick .......... 12
    Expand VMFS Datastore .......... 12
    Warm-up New Virtual Disks .......... 13
    Simulate Workload on Standard LUs .......... 13
    Simulate Workload on Dynamically Provisioned LUs .......... 19
Analysis .......... 22
    Deploying a Virtual Machine From a Template .......... 22
    Perform Storage vMotion Operations on LUs Created with Hitachi Dynamic Provisioning .......... 22
    Convert a Virtual Disk From Lazyzeroedthick to Eagerzeroedthick .......... 22
    Expand VMFS Datastore .......... 22
    Warm-up New Virtual Disks .......... 23
    Simulate Workload on Standard LUs .......... 23
    Simulate Workload on Dynamically Provisioned LUs .......... 23
Test Results .......... 24
    Deploying a VM From Template .......... 24
    Perform Storage vMotion Operations on LUs Created with Hitachi Dynamic Provisioning .......... 24
    Convert a Virtual Disk From Lazyzeroedthick to Eagerzeroedthick .......... 25
    Expand VMFS Datastore .......... 25
    Warm-up New Virtual Disks .......... 26
    Simulate Workload on Standard LUs .......... 32
    Simulate Workload on Dynamically Provisioned LUs .......... 39


Using VMware vSphere 5 with Hitachi Dynamic Provisioning on Hitachi Unified Storage
Lab Validation Report
This paper describes testing Hitachi Data Systems conducted to validate practices for making storage provisioning and virtual disk format decisions in Hitachi Unified Storage environments using VMware vSphere 5. It describes how Hitachi Dynamic Provisioning works and the different virtual disk format choices available. This testing supports the following recommendations:

When requiring maximum initial performance, use the following:

- The eagerzeroedthick virtual disk format to eliminate performance anomalies
- Hitachi Dynamic Provisioning logical units (LUs) for wide striping benefits

When requiring maximum cost savings and low administrative overhead, use the following:

- Lazyzeroedthick or thin virtual disks
- Hitachi Dynamic Provisioning LUs

This lab validation report shows how you can leverage Hitachi Dynamic Provisioning running on Hitachi Unified Storage to maximize benefits and minimize operational complexities in VMware vSphere 5 environments. At a high level, you can create LUs on Hitachi Unified Storage either as standard RAID devices or as dynamic provisioning volumes using Hitachi Dynamic Provisioning.

This paper is intended for you if you are a storage or datacenter administrator implementing storage within vSphere 5 environments.

Note Testing was done in a lab environment. Many things affect production environments beyond prediction or duplication in a lab environment. Follow recommended practice by conducting proof-of-concept testing for acceptable results before solution implementation in your production environment. This means to test applications in a non-production, isolated test environment that otherwise matches your production environment.


Product Features
These are the features of some of the products used in testing.

Hitachi Unified Storage


Hitachi Unified Storage is a midrange storage platform for all data. It helps businesses meet their service level agreements for availability, performance, and data protection. The performance provided by Hitachi Unified Storage is reliable, scalable, and available for block and file data. Unified Storage is simple to manage, optimized for critical business applications, and efficient. Using Unified Storage requires a lower investment in storage.

Deploy this storage, which grows to meet expanding requirements and service level agreements, for critical business applications. Simplify your operations with integrated set-up and management for a quicker time to value. Unified Storage enables extensive cost savings through file and block consolidation. Build a cloud infrastructure at your own pace to deliver your services.

Hitachi Unified Storage 150 provides reliable, flexible, scalable, and cost-effective modular storage. Its symmetric active-active controllers provide input-output load balancing that is integrated, automated, and hardware-based. Both controllers in Unified Storage 150 dynamically and automatically assign the access paths from the controller to a logical unit (LU). All LUs are accessible, regardless of the physical port or the server that requests access.

Although this lab validation report uses Hitachi Unified Storage 150, this information is relevant for other Hitachi Unified Storage family members, allowing for changes to account for capacity and performance differences.

Hitachi Dynamic Provisioning Software


On Hitachi storage systems, Hitachi Dynamic Provisioning provides wide striping and thin provisioning functionality. Using Hitachi Dynamic Provisioning is like using a host-based logical volume manager (LVM), but without incurring host processing overhead. It provides one or more wide-striping pools across many RAID groups. Each pool can have one or more dynamic provisioning virtual volumes (DP-VOLs) created against it, with a logical size you specify of up to 60 TB, without allocating any physical space initially.

Deploying Hitachi Dynamic Provisioning avoids the routine issue of hot spots that occur on logical devices (LDEVs). These occur within individual RAID groups when the host workload exceeds the IOPS or throughput capacity of that RAID group. Dynamic provisioning distributes the host workload across many RAID groups, which provides a smoothing effect that dramatically reduces hot spots.

When used with Hitachi Unified Storage, Hitachi Dynamic Provisioning has the benefit of thin provisioning. Physical space is assigned from the pool to the DP-VOL as needed, in 1 GB chunks, up to the logical size specified for each DP-VOL. Pool capacity can be expanded or reduced dynamically without disruption or downtime. You can rebalance an expanded pool across the current and newly added RAID groups for an even striping of the data and the workload.

For more information, see the Hitachi Dynamic Provisioning datasheet and Hitachi Dynamic Provisioning on the Hitachi Data Systems website.
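Because pool space is assigned to a DP-VOL in 1 GB chunks, the pool consumption of a thin-friendly operation can be estimated by rounding the written capacity up to the chunk boundary. The following is a minimal Python sketch of that estimate, assuming a fixed 1 GB allocation unit as described above and ignoring VMFS and array metadata overhead.

```python
import math

CHUNK_GB = 1  # allocation unit described in this paper: pool space is assigned in 1 GB chunks

def pool_gb_consumed(written_gb: float, dp_vol_size_gb: int) -> int:
    """Estimate the pool capacity (GB) backing a DP-VOL after writing `written_gb`
    of unique data, rounding up to whole chunks and capping at the DP-VOL size."""
    if written_gb <= 0:
        return 0
    return min(math.ceil(written_gb / CHUNK_GB) * CHUNK_GB, dp_vol_size_gb)

# Example figures consistent with the tests in this paper: a 50 GB DP-VOL with
# 20 GB of unique writes consumes about 20 GB of pool space, while a freshly
# formatted VMFS datastore with only metadata written consumes about 1 GB.
print(pool_gb_consumed(written_gb=20, dp_vol_size_gb=50))   # -> 20
print(pool_gb_consumed(written_gb=0.5, dp_vol_size_gb=50))  # -> 1
```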

Hitachi Compute Blade 2000


Hitachi Compute Blade 2000 is an enterprise-class blade server platform. It features the following:

- A balanced system architecture that eliminates bottlenecks in performance and throughput
- Configuration flexibility
- Eco-friendly power-saving capabilities
- Fast server failure recovery using an N+1 cold standby design that allows replacing failed servers within minutes

For more information, see Hitachi Compute Blade Family on the Hitachi Data Systems website.

VMware vSphere 5
VMware vSphere 5 is a virtualization platform that provides a datacenter infrastructure. It features vSphere Distributed Resource Scheduler (DRS), high availability, and fault tolerance. VMware vSphere 5 has the following components:

- ESXi 5.0: This is a hypervisor that loads directly on a physical server. It partitions one physical machine into many virtual machines that share hardware resources.
- vCenter Server: This allows management of the vSphere environment through a single user interface. With vCenter, there are features available such as vMotion, Storage vMotion, Storage Distributed Resource Scheduler, High Availability, and Fault Tolerance.

For more information, see the VMware vSphere website.


Test Environment
This describes the test environment used in the Hitachi Data Systems lab. Figure 1 shows the physical layout of the test environment.

Figure 1

Figure 2 shows the virtual machines used with the VMware vSphere 5 cluster hosted on Hitachi Compute Blade 2000.

Figure 2

Figure 3 shows the configuration for Hitachi Unified Storage 150 used in testing.

Figure 3

Table 1 lists the Hitachi Compute Blade 2000 components used in this lab validation report.
Table 1. Hitachi Compute Blade 2000 Configuration

- Hitachi Compute Blade 2000 chassis (version A0195-C-6443): 8-blade chassis, 4 1/10 Gb/sec network switch modules, 2 management modules, 8 cooling fan modules, 4 power supply modules, 8 8 Gb/sec Fibre Channel PCI-e HBAs
- Hitachi E55A2 server blades (version 03-57): 2 2-core Intel Xeon E5503 processors @ 2 GHz, 144 GB RAM per blade

Table 2 lists the VMware vSphere 5 software used in this lab validation report.
Table 2. VMware vSphere 5 Software

- VMware vCenter Server: version 5.0.0 Build 455964
- VMware vSphere Client: version 5.0.0 Build 455964
- VMware ESXi: version 5.0.0 Build 441354

All virtual machines used in this lab validation report used Microsoft Windows Server 2008 R2 Enterprise Edition 64-bit for the operating system.

Table 3 describes the VMware vSphere 5 virtual disk formats tested in this lab validation report.
Table 3. VMware vSphere 5 Virtual Disk Formats

- Thin: The virtual disk is allocated only the storage capacity required by the guest OS. As write operations occur, additional space is allocated and zeroed. The virtual disk grows to the maximum allotted size.
- Lazyzeroedthick: The virtual disk storage capacity is pre-allocated at creation. The virtual disk does not grow in size. However, the space allocated is not pre-zeroed. As the guest OS writes to the virtual disk, the space is zeroed as needed.
- Eagerzeroedthick: The virtual disk storage capacity is pre-allocated at creation. The virtual disk does not grow in size. The virtual disk is pre-zeroed, so that as the guest OS writes to the virtual disk, the space does not need to be zeroed at that time.
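When virtual disks are created programmatically rather than through the vSphere Client, these three formats map to two flags on the disk's backing object in the vSphere API. The following is a minimal pyVmomi sketch of that mapping; the flag assignments reflect standard vSphere API behavior and are not taken from the test procedures in this paper.

```python
from pyVmomi import vim

def disk_backing(disk_format: str) -> vim.vm.device.VirtualDisk.FlatVer2BackingInfo:
    """Build a flat virtual disk backing whose flags correspond to Table 3:
    thin -> thinProvisioned=True, lazyzeroedthick -> both flags False,
    eagerzeroedthick -> eagerlyScrub=True."""
    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    backing.diskMode = "persistent"
    backing.thinProvisioned = (disk_format == "thin")
    backing.eagerlyScrub = (disk_format == "eagerzeroedthick")
    return backing

# Example: backing for an eagerzeroedthick disk, ready to attach in a
# vim.vm.device.VirtualDeviceSpec when reconfiguring a virtual machine.
ezt_backing = disk_backing("eagerzeroedthick")
```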


Test Methodology
All Hitachi Unified Storage management functions were performed using Hitachi Storage Navigator Modular 2 software. All VMware vSphere 5 operations were performed using vCenter and vSphere client. All tests were conducted with the following VAAI primitives enabled:

- Full copy: This primitive enables the storage system to make full copies of data within the storage system without having the ESX host read and write the data.
- Block zeroing: This primitive enables storage systems to zero out a large number of blocks to speed provisioning of virtual machines.
- Hardware-assisted locking: This primitive provides an alternative means to protect the metadata for VMFS cluster file systems, thereby improving the scalability of large ESX host farms sharing a datastore.
- Thin provisioning stun API: This primitive enables the storage system to notify the ESX host when thin provisioned volumes reach a certain capacity utilization threshold. When enabled, this allows the ESX host to take preventive measures to maintain virtual machine integrity.
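The first three primitives correspond to host-side advanced options on an ESXi 5.x host (the thin provisioning primitive is negotiated with the array and has no equivalent toggle). The paper does not describe how the settings were verified; the following minimal pyVmomi sketch reports their state, assuming the standard option keys.

```python
from pyVmomi import vim

# Standard ESXi advanced-option keys for three of the VAAI primitives.
VAAI_OPTIONS = {
    "DataMover.HardwareAcceleratedMove": "full copy",
    "DataMover.HardwareAcceleratedInit": "block zeroing",
    "VMFS3.HardwareAcceleratedLocking": "hardware-assisted locking",
}

def report_vaai(host: vim.HostSystem) -> None:
    """Print whether each VAAI-related advanced option is enabled (value 1) on the host."""
    adv = host.configManager.advancedOption
    for key, primitive in VAAI_OPTIONS.items():
        value = adv.QueryOptions(name=key)[0].value
        state = "enabled" if int(value) == 1 else "disabled"
        print(f"{host.name}: {primitive} ({key}) is {state}")
```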

These are the test cases used to validate the recommendations:

- Deploy a Virtual Machine from a Template on page 8
- Perform Storage vMotion Operations on LUs Created with Hitachi Dynamic Provisioning on page 9
- Convert a Virtual Disk From Lazyzeroedthick to Eagerzeroedthick on page 10
- Expand VMFS Datastore on page 10
- Warm-up New Virtual Disks on page 11
- Simulate Workload on Standard LUs on page 11
- Simulate Workload on Dynamically Provisioned LUs on page 13

Note A thin-friendly operation allows cost savings through over-provisioning of the storage. The file system and application make use of the space efficiently and do not claim blocks of storage that are not currently needed.


Deploy a Virtual Machine from a Template


The goal of this testing was to show which VMware vSphere operations are thin-friendly operations when using Hitachi Dynamic Provisioning. A template was created from which virtual machines were deployed using lazyzeroedthick and eagerzeroedthick format virtual disks. The capacity utilization was captured from the VMFS datastore and the Hitachi Dynamic Provisioning pool.

Deploying Virtual Machine from a Template Using a Lazyzeroedthick Format Virtual Disk
This is the test procedure to deploy a virtual machine from a template using a lazyzeroedthick format virtual disk:

1. Create a RAID-5 (3D+1P) group using 600 GB SAS disks.
2. Add a 1.5 TB LU to the RAID group to create a VMFS datastore called RAID5-source.
3. Deploy a virtual machine using a lazyzeroedthick virtual disk format on RAID5-source and install Microsoft Windows Server 2008 R2.
4. Shut down the Windows virtual machine and convert it to a template.
5. Create a dynamic provisioning pool using Hitachi Dynamic Provisioning that contains three RAID-6 (6D+2P) groups with 600 GB SAS disks.
6. Create a 50 GB dynamic provisioning LU with Hitachi Dynamic Provisioning and create a VMFS datastore called RAID-6-target.
7. Deploy a virtual machine from the template using a lazyzeroedthick format virtual disk on RAID-6-target using 40 GB virtual disks.

Deploying a Virtual Machine from a Template Using an Eagerzeroedthick Format Virtual Disk
This is the test procedure to deploy a virtual machine from a template using an eagerzeroedthick format virtual disk:

1. Using the template created in Deploying Virtual Machine from a Template Using a Lazyzeroedthick Format Virtual Disk, deploy a virtual machine on RAID-6-target using 40 GB virtual disks.
2. Enable Fault Tolerance on the newly created virtual machine. This operation converts the virtual disk from lazyzeroedthick to eagerzeroedthick format.
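The deployments above were performed through the vSphere Client. For reference, the same clone-from-template step can be scripted against vCenter with pyVmomi; the sketch below is illustrative only and is not part of the documented test procedure. Leaving the disk transform unset keeps the template's lazyzeroedthick format.

```python
from pyVmomi import vim

def deploy_from_template(template: vim.VirtualMachine,
                         folder: vim.Folder,
                         pool: vim.ResourcePool,
                         datastore: vim.Datastore,
                         name: str) -> vim.Task:
    """Clone a template onto the target datastore, keeping the template's disk format."""
    relocate = vim.vm.RelocateSpec(datastore=datastore, pool=pool)
    spec = vim.vm.CloneSpec(location=relocate, powerOn=False, template=False)
    return template.Clone(folder=folder, name=name, spec=spec)

# Example (objects looked up elsewhere via pyVim.connect.SmartConnect and a container
# view; the datastore and virtual machine names here are hypothetical):
# task = deploy_from_template(win2008r2_template, dc.vmFolder, cluster.resourcePool,
#                             raid6_target_ds, "w2k8r2-lzt-01")
```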


Perform Storage vMotion Operations on LUs Created with Hitachi Dynamic Provisioning
The goal of this test was to determine how much dynamic provisioning pool space is consumed when a lazyzeroedthick format virtual disk is moved from one dynamic provisioning pool to another. Source and destination dynamic provisioning pools were created using Hitachi Dynamic Provisioning. A virtual machine with a lazyzeroedthick format virtual disk was created and migrated with VMware Storage vMotion.

This is the test procedure:

1. Create two dynamic provisioning pools with Hitachi Dynamic Provisioning on Hitachi Unified Storage 150, DP Pool-000 and DP Pool-001.
2. Create two 50 GB LUs with Hitachi Dynamic Provisioning, LU-000 in DP Pool-000 and LU-001 in DP Pool-001.
3. Configure two VMFS datastores named RAID-6-vMotion-1 on LU-000 and RAID-6-vMotion-2 on LU-001.
4. Create a 40 GB virtual disk in lazyzeroedthick format on the VMFS datastore RAID-6-vMotion-1 and assign it to the virtual machine created in the deployment tests.
5. On the Windows virtual machine, run VDbench with a 100 percent write, 4 KB block I/O workload to Physical Drive 2 until the dynamic provisioning LU shows 50 percent (20 GB) utilization.
6. Using the Storage vMotion GUI, migrate the virtual disk from the VMFS datastore RAID-6-vMotion-1 to the VMFS datastore RAID-6-vMotion-2, selecting the Same format as source option.
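Step 6 was driven through the Storage vMotion GUI. A scripted equivalent, shown below as an illustrative pyVmomi sketch rather than part of the procedure, relocates the virtual machine's storage and keeps the same format as the source by leaving the disk transform unset in the relocate specification.

```python
from pyVmomi import vim

def storage_vmotion(vm: vim.VirtualMachine, target_ds: vim.Datastore) -> vim.Task:
    """Migrate a virtual machine's storage to another datastore, keeping the
    source virtual disk format (equivalent to 'Same format as source' in the GUI)."""
    spec = vim.vm.RelocateSpec(datastore=target_ds)
    return vm.Relocate(spec=spec)

# Example with hypothetical object names:
# task = storage_vmotion(test_vm, raid6_vmotion2_ds)
```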


Convert a Virtual Disk From Lazyzeroedthick to Eagerzeroedthick


The goal of this test was to determine the dynamic provisioning pool space utilization when a lazyzeroedthick format virtual disk is converted to an eagerzeroedthick format virtual disk. The Fault Tolerance wizard was used for the conversion.

This is the test procedure:

1. Create a 100 GB LU using Hitachi Dynamic Provisioning and assign it to the VMware vSphere host.
2. Create a VMFS datastore on the LU.
3. Create a 50 GB virtual disk on a virtual machine.
4. Convert the .vmdk file to eagerzeroedthick format using the Fault Tolerance wizard.
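Enabling Fault Tolerance is one way to force the conversion; the vSphere API also exposes an eager-zero operation on the virtual disk manager that achieves the same result. The following pyVmomi sketch is an assumed alternative, not the method used in this test, and the datastore path shown is hypothetical.

```python
from pyVmomi import vim

def eager_zero_vmdk(content: vim.ServiceInstanceContent,
                    datacenter: vim.Datacenter,
                    vmdk_path: str) -> vim.Task:
    """Zero the unallocated blocks of an existing flat .vmdk, converting it to
    eagerzeroedthick (an alternative to the Fault Tolerance wizard used above)."""
    vdm = content.virtualDiskManager
    return vdm.EagerZeroVirtualDisk_Task(name=vmdk_path, datacenter=datacenter)

# Example with a hypothetical datastore path:
# task = eager_zero_vmdk(si.RetrieveContent(), dc, "[RAID-6-target] vm01/vm01.vmdk")
```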

Expand VMFS Datastore


The goal of this test was to evaluate the effect of the VMFS datastore expansion feature on Hitachi Unified Storage. The VMFS datastore was expanded and capacity utilization was measured.

This is the test procedure:

1. Using the dynamic provisioning pool and the LU created using Hitachi Dynamic Provisioning that is associated with the RAID-6-source VMFS datastore in previous tests, add a RAID group to the dynamic provisioning pool.
2. Increase the size of the LU from 50 GB to 100 GB.
3. After increasing the LU size, rescan the ESX host to find the new capacity.
4. Expand the VMFS datastore.
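Steps 3 and 4 were performed in the vSphere Client. For reference, the rescan and expansion can also be scripted through the host's storage and datastore managers; the pyVmomi sketch below assumes the standard QueryVmfsDatastoreExpandOptions and ExpandVmfsDatastore calls and is not part of the documented procedure.

```python
from pyVmomi import vim

def expand_vmfs_datastore(host: vim.HostSystem, datastore: vim.Datastore) -> vim.Datastore:
    """Rescan the host, then grow a VMFS datastore into the free space exposed
    after the backing LU was expanded on the storage system."""
    storage = host.configManager.storageSystem
    storage.RescanAllHba()   # pick up the new LU size
    storage.RescanVmfs()
    dss = host.configManager.datastoreSystem
    options = dss.QueryVmfsDatastoreExpandOptions(datastore=datastore)
    if not options:
        raise RuntimeError("No expandable extent found for this datastore")
    return dss.ExpandVmfsDatastore(datastore=datastore, spec=options[0].spec)
```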


Warm-up New Virtual Disks


The goal of this test was to examine warm-up time and compare performance on newly created thin, lazyzeroedthick, and eagerzeroedthick format virtual disks. These tests quantified warm-up anomalies to assist in deciding which virtual disk format best meets your requirements.

The lazyzeroedthick and thin format virtual disks were warmed up and compared to the eagerzeroedthick format virtual disk. This warm-up occurs one time, when writing to a block on the virtual disk for the first time.

These tests used a dynamic provisioning pool on two RAID-6 (6D+2P) groups. The dynamic provisioning pool had a 2 TB LU created with Hitachi Dynamic Provisioning and a 100 GB virtual disk. The warm-up routine ran a 100 percent write, 100 percent random, 16 thread, 8k block workload. Hitachi Data Systems collected performance metrics from the Hitachi Unified Storage 150 and statistics from VDbench.
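The paper does not publish its VDbench parameter files. The following Python snippet writes a minimal parameter file that reproduces the warm-up profile described above (100 percent write, 100 percent random, 16 threads, 8k blocks); the target device, run length, and reporting interval are assumptions for illustration.

```python
# Minimal VDbench parameter file matching the warm-up workload described above.
# The physical drive number, elapsed time, and interval are hypothetical.
WARMUP_PARMFILE = r"""
sd=sd1,lun=\\.\PhysicalDrive2
wd=warmup,sd=sd1,xfersize=8k,rdpct=0,seekpct=100
rd=run1,wd=warmup,iorate=max,threads=16,elapsed=14400,interval=30
""".strip()

with open("warmup.parm", "w") as f:
    f.write(WARMUP_PARMFILE + "\n")

# Run with:  vdbench -f warmup.parm -o warmup_output
```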

Simulate Workload on Standard LUs


The goal of these tests was to determine how multiple workloads on a VMFS datastore perform using simulated production workloads in a lab environment. The following performance metrics were observed:

- Guest OS storage metrics through VDbench output
- ESX metrics through ESXtop batch mode output
- Hitachi Unified Storage metrics through Hitachi Storage Navigator Modular 2

Close attention was paid to sequential versus random I/O.

In these lab tests, moving workloads between VMFS datastores was done by creating new eagerzeroedthick virtual disks.

Table 4 shows the I/O workloads used during the workload simulation tests.
Table 4. Simulated I/O Workloads

- OLTP: I/O workload of a SQL/Oracle database application. Workload: 100% random, 75% read, 25% write, 8k block. Performance requirement: 400 IOPS.
- Email: I/O workload of an Exchange or other email application. Workload: 100% random, 50% read, 50% write, 8k block. Performance requirement: 230 IOPS.
- Streaming media: I/O workload of a streaming media application such as IPTV. Workload: 100% sequential, 100% read, 256k block. Performance requirement: 100 MB/sec (400 IOPS).
- Unstructured data serving: I/O workload of file serving and SharePoint applications where unstructured data is shared. Workload: 100% random, 90% read, 10% write, 8k block. Performance requirement: 400 IOPS.
- Backup to disk: I/O workload of backing up to disk. Workload: 100% sequential, 100% write, 256k block. Performance requirement: maximum throughput.
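As a sanity check on the streaming media row, the listed 100 MB/sec requirement and the 400 IOPS figure are two views of the same number at a 256 KB transfer size, as this short calculation shows.

```python
# Streaming media requirement from Table 4: 100 MB/sec of 256 KB transfers.
BLOCK_KB = 256
TARGET_MB_PER_SEC = 100

iops = (TARGET_MB_PER_SEC * 1024) / BLOCK_KB
print(f"{TARGET_MB_PER_SEC} MB/sec at {BLOCK_KB} KB per I/O = {iops:.0f} IOPS")  # -> 400 IOPS
```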


One Standard LU Test Configuration


This test did the following:

- Established a baseline
- Determined the degree of contention created by running multiple workloads on a single RAID-5 (3D+1P) group

The OLTP workload was the priority workload and was run at maximum I/O throughput. For the other workloads, IOPS were limited by VDbench parameters. Figure 4 shows the test configuration with one standard LU for four workloads.

Figure 4


Two Standard LUs Test Configuration 1


To relieve any IOPS constraints, an additional RAID-5 (3D+1P) group, RG-001, was configured. The email and unstructured data workloads were moved to RG-001. This reduced the contention for the disk resources and allowed the OLTP and streaming media workloads access to RG-000. Figure 5 shows the Two Standard LUs Test Configuration 1 environment.

Figure 5


Two Standard LUs Test Configuration 2


To improve performance, the OLTP, email, and unstructured data workloads were moved to RG-000. This left the sequential streaming media to its own dedicated RAID group, RG-001. This test was conducted to see the effects of separating random workloads and sequential workloads. Figure 6 shows the Two Standard LU Test Configuration 2 environment.

Figure 6


Three Standard LUs Test Configuration


OLTP was given its own dedicated RG-000, with email and unstructured data on RG-001. Streaming media was given a dedicated RAID group, RG-002. Figure 7 shows the addition of a third RAID group and redistribution of the workload across 12 disks.

Figure 7


Simulate Workload on Dynamically Provisioned LUs


The goal of these tests was to demonstrate that the wide-striping ability of Hitachi Dynamic Provisioning dramatically reduces the administrative effort required to balance multiple workloads. The same simulated workloads from Table 4 on page 13 were used.

Hitachi Dynamic Provisioning Test Configuration 1


This test used a dynamic provisioning pool containing two groups with a RAID-5 (3D+1P) configuration. A single LU was created and a VMFS datastore was built on the LU. Each virtual machine contained a single virtual disk of 100 GB. Figure 8 shows Hitachi Dynamic Provisioning Test Configuration 1.

Figure 8


Hitachi Dynamic Provisioning Test Configuration 2


This test determined which of the following happens when the sequential workload is separated from the random workloads on different dynamic provisioning pools:

- Hitachi Unified Storage detects the sequential I/O
- The virtual host bus adapter breaks up the sequential workload, making it appear as random to the storage system

This test used two dynamic provisioning pools, DP Pool-000 and DP Pool-001. Each pool contained a single LU. DP Pool-000 contained two RAID groups and was used for all random workloads. DP Pool-001 contained a single RAID group and was used for the sequential reads on the streaming media workload. Figure 9 shows the test configuration using two dynamic provisioning LUs.

Figure 9


Hitachi Dynamic Provisioning Scaling Tests


These tests determined whether scaling to a larger dynamic provisioning pool does the following:

- Assists with balancing workloads
- Reduces manual administrative effort

Using the workloads from Hitachi Dynamic Provisioning Test Configuration 2, the number of workloads and RAID groups was doubled for this test. However, a single dynamic provisioning pool was used instead of isolating the streaming media workload. For these tests, DP Pool-000 was extended to six RAID groups, as shown in Figure 10.

Figure 10


Analysis
Deploying a Virtual Machine From a Template
From the storage system perspective, a lazyzeroedthick format virtual disk uses only the actual space needed by the guest operating system, data, and applications. Deploying virtual machines from templates that have the guest operating system installed on a lazyzeroedthick format virtual disk is a Hitachi Dynamic Provisioning thin-friendly operation. Hitachi Data Systems recommends using lazyzeroedthick format virtual disks for virtual machine templates.

An eagerzeroedthick format virtual disk uses the virtual disk's total assigned space. Deploying virtual machines from templates in eagerzeroedthick format to any virtual disk format is not a storage system thin-friendly operation.

Perform Storage vMotion Operations on LUs Created with Hitachi Dynamic Provisioning
Storage vMotion is a Hitachi Dynamic Provisioning thin-friendly operation.

Convert a Virtual Disk From Lazyzeroedthick to Eagerzeroedthick


Converting a lazyzeroedthick virtual disk to an eagerzeroedthick virtual disk is not an immediate Hitachi Dynamic Provisioning thin-friendly operation. A further operation with the zero page reclaim utility will thin provision eagerzeroedthick virtual disks.

Note Hitachi Dynamic Provisioning on Hitachi Unified Storage has the ability to run a zero page reclaim function on a dynamically provisioned volume. With zero page reclaim, any zeroed pages can be reallocated as needed.

Expand VMFS Datastore


VMFS datastore expansion of VMware vSphere 5 on Hitachi Unified Storage is thin-friendly. Any further operations, such as expanding or adding a .vmdk file, are dependent on the virtual disk format used.


Warm-up New Virtual Disks


Eagerzeroedthick virtual disks have no warm-up overhead because blocks are zeroed when the virtual disk is provisioned. Both thin and lazyzeroedthick virtual disks have warm-up overhead at the initial write. Thin virtual disks have higher warm-up overhead compared to lazyzeroedthick virtual disks. However, thin virtual disks complete zeroing of the blocks more than 2.5 times faster than lazyzeroedthick virtual disks.

The zeroing of blocks is offloaded to Hitachi Unified Storage through the VAAI block zeroing primitive. Once the blocks for a virtual disk are zeroed, there is no performance penalty. Over time, thin, lazyzeroedthick, and eagerzeroedthick virtual disks have equal performance characteristics.

Simulate Workload on Standard LUs


When using standard LUs, careful consideration of the workloads is needed. A significant amount of administrative overhead is required to balance the workloads to meet the IOPS requirements. Consider the following:

- Sequential versus random workload: Sequential workloads, when mixed with sequential or random workloads on standard LUs, can become random.
- Total IOPS available from the RAID group: Additional physical disk reads and writes occur for every application write when using mirrors or XOR parity.
- Application requirements: Total IOPS and response time are critical factors when designing RAID group architecture.

Reaching a balanced utilization of the disk resources will be difficult.
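The mirror and parity overhead mentioned above can be rolled into a quick back-of-the-envelope sizing check. The sketch below uses the commonly cited RAID write penalties (2 for RAID-1, 4 for RAID-5, 6 for RAID-6), which are rules of thumb rather than figures from this paper, to translate a host I/O profile into back-end disk IOPS.

```python
# Rule-of-thumb RAID write penalties (not taken from this paper).
WRITE_PENALTY = {"RAID-1": 2, "RAID-5": 4, "RAID-6": 6}

def backend_iops(host_iops: float, read_pct: float, raid_level: str) -> float:
    """Physical disk IOPS needed to service host_iops at the given read percentage,
    ignoring any benefit from controller cache."""
    reads = host_iops * read_pct
    writes = host_iops * (1.0 - read_pct)
    return reads + writes * WRITE_PENALTY[raid_level]

# Example: the OLTP workload in Table 4 (400 IOPS, 75% read) on a RAID-5 (3D+1P)
# group needs roughly 300 + 100 * 4 = 700 back-end IOPS.
print(backend_iops(400, 0.75, "RAID-5"))  # -> 700.0
```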

Simulate Workload on Dynamically Provisioned LUs


LUs created using Hitachi Dynamic Provisioning require significantly less management overhead to size for performance requirements than standard LUs. Due to its wide striping capability, Hitachi Dynamic Provisioning can balance the I/O load across the RAID groups in a pool. This eliminates underutilized or overutilized RAID groups. The result is a more efficient use of disk resources.

Mixing random and sequential I/O in a single dynamic provisioning pool can achieve certain levels of performance. However, separating random and sequential workloads into different dynamic provisioning pools is more efficient.


Test Results
These are the results from testing in the Hitachi Data Systems lab.

Deploying a VM From Template


The results created using the procedures in Deploy a Virtual Machine from a Template on page 9 support the conclusions in Deploying a Virtual Machine From a Template on page 21.

Deploying Virtual Machine from a Template Using a Lazyzeroedthick Format Virtual Disk
These are the observations:

1. Microsoft Windows Server used approximately 10 GB of disk space when deploying a virtual machine using lazyzeroedthick virtual disk format.
2. The datastore used 1 GB of space from the dynamic provisioning pool when creating a VMFS datastore.
3. The datastore used 9 GB of space from the dynamic provisioning pool when deploying a virtual machine from the template using a lazyzeroedthick format virtual disk.

Deploying a Virtual Machine from a Template Using an Eagerzeroedthick Format Virtual Disk
This is the observation:

The datastore used 41 GB of space from the dynamic provisioning pool when deploying a virtual machine using a thick format virtual disk.

Perform Storage vMotion Operations on LUs Created with Hitachi Dynamic Provisioning
The results created using the procedures in Perform Storage vMotion Operations on LUs Created with Hitachi Dynamic Provisioning on page 10 support the conclusions in Perform Storage vMotion Operations on LUs Created with Hitachi Dynamic Provisioning on page 21. These are the observations:

The datastore used 20 GB of space from DP Pool-001.


Convert a Virtual Disk From Lazyzeroedthick to Eagerzeroedthick


The results created using the procedures in Convert a Virtual Disk From Lazyzeroedthick to Eagerzeroedthick on page 11 support the conclusions in Convert a Virtual Disk From Lazyzeroedthick to Eagerzeroedthick on page 21. These are the observations:

1. The datastore uses 1 GB of space from the dynamic provisioning pool when creating a 50 GB virtual disk.
2. The datastore uses 51 GB of space from the dynamic provisioning pool when using the Fault Tolerance wizard.

Expand VMFS Datastore


The results created using the procedures in Expand VMFS Datastore on page 11 support the conclusions in Expand VMFS Datastore on page 21. These are the observations:

1. Dynamic provisioning pool usage does not increase when increasing the size of the LU from 50 GB to 100 GB.
2. Dynamic provisioning pool usage does not increase when expanding the VMFS datastore.


Warm-up New Virtual Disks


Eagerzeroedthick virtual disk performance was captured and used as a baseline, as shown in Figure 11.

Figure 11

Figure 11 shows that the eagerzeroedthick virtual disk IOPS are the same on the Hitachi Unified Storage 150 and the ESX hosts. No warm-up overhead was observed with the eagerzeroedthick virtual disk.

Figure 12 shows the lazyzeroedthick virtual disk IOPS during the warm-up routine.

Figure 12

Figure 12 shows the lazyzeroedthick virtual disk with an approximate 75% IOPS overhead during the warm-up. During the first 40 minutes, the lazyzeroedthick IOPS are significantly lower than the IOPS of the Hitachi Unified Storage 150. After 160 minutes, the IOPS match those of the eagerzeroedthick virtual disk.

Figure 13 shows the thin virtual disk IOPS during the warm-up routine.

Figure 13

Figure 13 shows the thin virtual disk with an approximate 75% IOPS overhead during warm-up. This is the same overhead percentage as the lazyzeroedthick virtual disk. However, significantly higher initial IOPS were generated than with the lazyzeroedthick virtual disk. After 25 minutes, the IOPS of the thin virtual disk matched the IOPS of the eagerzeroedthick virtual disk.

Figure 14 shows the captured virtual disk latency for all three virtual disk types.

Figure 14

When comparing latency, the eagerzeroedthick virtual disk showed latency of less than 1 msec. The thin virtual disk showed higher latency until 20 minutes. The lazyzeroedthick virtual disk showed much higher latency until 50 minutes into the warm-up routine, but did not match the latency of the eagerzeroedthick virtual disk until 160 minutes.

VAAI primitives were enabled during the warm-up routines, and VAAI-specific metrics were captured from ESXtop. Figure 15 shows the block zeroing-specific metric, MBZero per second.

Figure 15

The eagerzeroedthick virtual disk did not show any MBZero per second because it was zeroed during provisioning of the virtual disk. The thin virtual disk showed a higher rate of MBZero per second when compared to the lazyzeroedthick virtual disk. However, the thin virtual disk required block zeroing for a significantly shorter period.
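ESXtop in batch mode writes its counters to a CSV file, and the block zeroing statistics appear as per-device VAAI columns. The exact column names vary with the esxtop version, so the following sketch simply pulls out any column whose header mentions MBZERO; treat the matching string as an assumption to adjust against your own capture.

```python
import csv

def mbzero_columns(batch_csv_path: str):
    """Yield (timestamp, column_name, value) for every esxtop batch-mode column
    whose header mentions MBZERO. Adjust the match string to your header row."""
    with open(batch_csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        zero_cols = [i for i, name in enumerate(header) if "MBZERO" in name.upper()]
        for row in reader:
            for i in zero_cols:
                yield row[0], header[i], row[i]

# Example with a hypothetical capture file:
# for timestamp, column, value in mbzero_columns("esxtop_warmup.csv"):
#     print(timestamp, column, value)
```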

Figure 16 shows the total block zero commands issued during the warm-up routine. The block zero commands issued counter shows the total number of commands issued during the uptime of an ESX host. This counter resets during a reboot.

Figure 16

The eagerzeroedthick virtual disk showed no zeroing during the warm-up routine. The thin virtual disk showed a total of 80,000 block zero commands issued by 20 minutes. The lazyzeroedthick virtual disk showed a total of 94,000 block zero commands issued by 55 minutes. Figure 16 shows that the thin virtual disk completed zeroing of the blocks at 20 minutes and the lazyzeroedthick virtual disk completed at 55 minutes.


Simulate Workload on Standard LUs


These test results come from a workload simulation when using standard LUs.

One Standard LU Test Configuration


Figure 17 shows the IOPS generated by the OLTP workload when various workloads were added.

Figure 17

When comparing these graphs to the requirements for the applications in Table 4 on page 13, Simulated I/O Workloads, only one of the four workloads achieved the required IOPS. The disk busy rate for all four disks in the RAID group reached 100 percent. This means that more disks are needed to provide the required IOPS for the OLTP, streaming media, and unstructured data workloads.


Two Standard LUs Test Configuration 1


Figure 18 shows the resulting IOPS after configuring an additional RAID-5 group, RG-001, and moving the email and unstructured data workloads to RG-001. The streaming media workload is a sequential workload. All other workloads are random workloads.

Figure 18

Creating the second LU and moving some of the workloads reduced the contention for the disk resources and allowed the OLTP and streaming media workloads access to RG-000. However, the streaming media workload still did not meet its IOPS requirement.

Figure 19 shows the busy percentage for each RAID group.

Figure 19

The busy percentage was at 97% for RG-000 and 94% for RG-001. The disks reached their maximum performance.


Two Standard LUs Test Configuration 2


Figure 20 shows the resulting IOPS after moving the OLTP, email, and unstructured data workloads to RG-000. This left the sequential streaming media workload on its own dedicated RAID group, RG-001.

Figure 20

The streaming media and email workloads met their IOPS requirements. However, the other workloads did not. The OLTP and unstructured data workloads had contention for disk resources. Figure 21 shows the busy percentage for each RAID group.


Figure 21

The busy percentage for each RAID group showed a significant imbalance. RG-001 is significantly underutilized while RG-000 is at maximum utilization.


Three Standard LUs Test Configuration


OLTP was given its own dedicated RAID group, RG-000. The email and unstructured data workloads shared RAID group RG-001. Streaming media had a dedicated RAID group, RG-002. Figure 22 shows the resulting IOPS after adding a third RAID group and redistributing the workload across 12 disks.

Figure 22

All workloads met their IOPS requirements.

Figure 23 shows the busy percentage of each RAID group.

Figure 23

Although each workload met its IOPS requirements, RG-002 is significantly underutilized. With RG-000 and RG-001 reaching their maximum load, neither RAID group can sustain a disk failure without significant performance degradation.


Simulate Workload on Dynamically Provisioned LUs


These test results come from a workload simulation when using LUs created by Hitachi Dynamic Provisioning.

Hitachi Dynamic Provisioning Test Configuration 1


This test used a dynamic provisioning pool containing two RAID-5 (3D+1P) groups. A single LU was created using Hitachi Dynamic Provisioning and a VMFS datastore was built on the LU. Each virtual machine contained a single virtual disk of 100 GB. Figure 24 shows the IOPS results for the first dynamic provisioning LU test.

Figure 24

Figure 24 shows the OLTP and email workloads met their IOPS requirements.

Figure 25 shows the busy percentage of each RAID group used in the dynamic provisioning pool.

Figure 25

All the disks in the dynamic provisioning pool were used to provide IOPS.


Hitachi Dynamic Provisioning Test Configuration 2


This test used two dynamic provisioning pools, DP Pool-000 and DP Pool-001. Each pool contained a single LU. DP Pool-000 contained two RAID groups and was used for all random workloads. DP Pool-001 contained a single RAID group and was used for the sequential reads of the streaming media workload. Figure 26 shows the second test configuration using two dynamic provisioning LUs.

Figure 26

The OLTP, email, and streaming media workloads met their IOPS requirements. However, the unstructured data workload had disk resource contention.

Figure 27 shows the disk busy percentage for each RAID group used in the dynamic provisioning pools.

Figure 27

Because RG-190 and RG-191 are in the same dynamic provisioning pool, both RAID groups are actively used because of the wide striping effect. RG-192 is in a separate dynamic provisioning pool, so it sees a different utilization percentage based on the sequential streaming media workload.


Hitachi Dynamic Provisioning Scaling Tests


Each of the eight virtual machines was assigned a 100 GB virtual disk without regard to data placement. All data was wide-striped across the 24 disks in the dynamic provisioning pool. Figure 28 shows the resulting IOPS.

Figure 28

Every workload achieved its IOPS requirements. Since OLTP was the priority workload, it was set to run at the maximum I/O rate, and it surpassed its IOPS requirement. Unlike the standard LU tests, no manual administrative effort was needed to balance the workloads across the disks. The sequential workloads mixed well with the random workloads.

Figure 29 shows the I/O latency for each workload.

Figure 29

The latency for each workload is under 20 msec. The average latency across all workloads was 14 msec.

Figure 30 shows the disk busy percentage of each RAID group used in the dynamic provisioning pool.

Figure 30

The RAID groups have a fairly even distribution of utilization due to the wide striping effect. On average, there is about 80% utilization for each RAID group. There is additional performance still available in this test. Each workload ran with 16 threads. Increasing the threads would increase disk utilization, which would increase the workload IOPS.

For More Information


Hitachi Data Systems Global Services offers experienced storage consultants, proven methodologies and a comprehensive services portfolio to assist you in implementing Hitachi products and solutions in your environment. For more information, see the Hitachi Data Systems Global Services website.

Live and recorded product demonstrations are available for many Hitachi products. To schedule a live demonstration, contact a sales representative. To view a recorded demonstration, see the Hitachi Data Systems Corporate Resources website. Click the Product Demos tab for a list of available recorded demonstrations.

Hitachi Data Systems Academy provides best-in-class training on Hitachi products, technology, solutions and certifications. Hitachi Data Systems Academy delivers on-demand web-based training (WBT), classroom-based instructor-led training (ILT) and virtual instructor-led training (vILT) courses. For more information, see the Hitachi Data Systems Services Education website.

For more information about Hitachi products and services, contact your sales representative or channel partner or visit the Hitachi Data Systems website.

Corporate Headquarters 750 Central Expressway, Santa Clara, California 95050-2627 USA www.HDS.com Regional Contact Information Americas: +1 408 970 1000 or info@HDS.com Europe, Middle East and Africa: +44 (0) 1753 618000 or info.emea@HDS.com Asia-Pacific: +852 3189 7900 or hds.marketing.apac@HDS.com
Hitachi is a registered trademark of Hitachi, Ltd., in the United States and other countries. Hitachi Data Systems is a registered trademark and service mark of Hitachi, Ltd., in the United States and other countries. All other trademarks, service marks, and company names in this document or website are properties of their respective owners.

Notice: This document is for informational purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems Corporation. Hitachi Data Systems Corporation 2012. All Rights Reserved. AS-139-00 April 2012
