
Microsoft Hyper-V Cloud Fast Track Reference Architecture on Hitachi Virtual Storage Platform
Reference Architecture
Rick Andersen, August 2011


Feedback
Hitachi Data Systems welcomes your feedback. Please share your thoughts by sending an email message to SolutionLab@hds.com. Be sure to include the title of this white paper in your email message.


Table of Contents
Solution Overview
Key Solution Components
    Hardware Components
    Software Components
Solution Design
    High-level Architecture
    Hitachi Compute Blade 2000 Chassis Configuration
    Hitachi Compute Blade 2000 Server Architecture
    Storage Architecture
    Clustered Shared Volumes
    RAID Configuration
    Pool and LUN Configuration
    Storage Area Network Architecture
    Path Configuration
    Network Architecture
    Management Architecture
Engineering Validation
Conclusion


Microsoft Hyper-V Cloud Fast Track Reference Architecture on Hitachi Virtual Storage Platform
Reference Architecture Guide
The Microsoft Hyper-V Cloud Fast Track solution from Hitachi is a reference architecture for implementing a private cloud for your organization's unique requirements with confidence. The Hitachi Compute Blade 2000 combined with the Hitachi Virtual Storage Platform provides a highly available and highly scalable platform on which to build a private cloud infrastructure. This solution offers faster deployment, reduced risk, greater predictability, and a lower cost of ownership:

Faster Deployment
- Speed deployment of an initial cloud by installing a pre-validated reference architecture.
- Adapt the infrastructure rapidly to market pressures by leveraging the scalable design of the architecture.
- Simplify infrastructure and virtual machine deployment with integrated management.
- Rapidly deploy and provision resources using a self-service portal.

Reduced Risk
- Deploy a solution on a common cloud architecture with tested, end-to-end interoperability of compute, storage, and network.
- Adapt to failures by utilizing automation that detects and reacts to events.
- Increase virtual machine density while maintaining performance through monitoring of systems that ensures bottlenecks are detected and corrected.

Predictability
- Use an underlying infrastructure that assures a consistent experience for the hosted workloads.
- Reap the benefits of standardization of underlying physical servers, network devices, and storage systems.

Lower Cost of Ownership
- Save with a cost-optimized platform and software-independent solution for rack system integration.
- Enjoy high performance and scalability with this Hitachi Virtual Storage Platform solution that uses Microsoft Windows Server 2008 R2 with Hyper-V technology.

One of the primary objectives of this private cloud solution is to enable rapid provisioning and deprovisioning of virtual machines. Doing so on a large scale requires tight integration with the storage architecture and robust automation. Provisioning a new virtual machine on an existing LUN is a simple operation. However, provisioning a new LUN to support additional virtual machines and adding it to a host cluster are complicated tasks that greatly benefit from automation.

Storage architecture is a critical design consideration for Microsoft Hyper-V Cloud solutions. The topic is challenging and rapidly evolving in terms of new standards, protocols, and implementations. Storage and the supporting storage networking are critical to the overall performance of the environment, and they affect the overall cost. The Hitachi Virtual Storage Platform is ideal for private clouds because of its ability to scale when hosting additional workloads in the cloud and to provide high availability to these workloads.

A private cloud is more than an infrastructure providing computing resources to applications. A fundamental shift of cloud computing is that IT moves from server operator to service provider. This requires a set of services to accompany the infrastructure, such as reporting, usage metering, and self-service provisioning. If these services are unavailable, then the cloud service layer is unavailable, and IT is little more than a traditional data center. For this reason, provide high availability for the management systems as well.

This reference architecture guide is intended for IT administrators involved in data center planning and design, specifically those focused on the planning and design of a Microsoft Hyper-V private cloud infrastructure. You need some familiarity with the Hitachi Virtual Storage Platform, Hitachi Storage Navigator, Microsoft Windows Server 2008 R2, and Microsoft Hyper-V failover clustering when reading this guide.

Solution Overview
The reference architecture described in this paper is built on the latest generation of hardware and software virtualization platforms from Hitachi and Microsoft. The Hitachi Compute Blade 2000 configuration includes the following:

- Two distinct Microsoft Hyper-V failover clusters:
  - A fourteen-node tenant cluster that hosts the production or tenant virtual machines
  - A two-node management cluster

The management cluster contains the following:

- The Hitachi and Microsoft software for deploying virtual machines to the tenant cluster
- The products and tools to manage the Microsoft Hyper-V private cloud infrastructure components
The server blades used for this reference architecture can typically host an average of 32 virtual machines per blade, which means the fourteen-node tenant cluster can typically host a total of 448 virtual machines.

This reference architecture provides the following capabilities:

- Virtual machine high availability: With the Hitachi Compute Blade 2000 running Microsoft Hyper-V failover clustering, the virtual machines deployed in the failover cluster are made highly available. If one of the blades in the cluster fails, the virtual machines residing on that blade automatically fail over to another blade in the cluster.
- Virtual machine live migration: The administrator can live migrate a virtual machine from one blade in the cluster to another. This can be used to balance workloads or to move virtual machines before performing server maintenance.
- Template-based virtual machine provisioning: Virtual machine templates allow administrators to deploy virtual machines rapidly.
- Self-service virtual machine provisioning: Administrators can delegate authority to other users or a group of business owners. This allows them to create virtual machines using a web interface, based on a set of predetermined templates.
- Integration with System Center Operations Manager: Hitachi provides monitoring packs for the Hitachi Compute Blade 2000. This enables the administrator to be notified of any alerts that require attention.

This reference architecture is sized based on the following goals:

- Tolerate the failure of a single Hyper-V host in the tenant cluster and continue to run all of the virtual machines from that failed node by restarting them on other nodes in the failover cluster.
- Reserve the appropriate amount of memory for the Hyper-V management partition.
- Provide adequate storage capacity and performance on the Hitachi Virtual Storage Platform to support the virtual machines.

Figure 1 shows the design of this reference architecture.

Figure 1

To support this reference architecture, the following components must be in place or deployed:

- Active Directory and Domain Name System (DNS) services to support the Microsoft Hyper-V failover clusters and the management products, such as Microsoft System Center Virtual Machine Manager 2008 R2 and System Center Operations Manager 2007 R2
- A LAN to support out-of-band hardware management for the Hitachi Compute Blade 2000 and the Hitachi Virtual Storage Platform

Key Solution Components


These are the key hardware and software components used to deploy this solution.

Hardware Components
Table 1 has the key hardware components found in this solution.
Table 1. Hardware Components

Hitachi Virtual Storage Platform (version 0897/A-Y, quantity 1)
- 16 Fibre Channel ports
- 2 pairs of front-end directors
- 2 pairs of back-end directors
- 280 SAS 600GB 10K RPM drives
- 68 SAS 146GB 15K RPM drives
- 64 SATA 2TB 7.2K RPM drives
- 256GB cache

Hitachi Compute Blade 2000 chassis (version A0154-E-5234, quantity 2)
- 8-blade chassis
- 2 8Gb/sec Fibre Channel switch modules
- 4 1/10Gb/sec network switch modules
- 2 management modules
- 8 cooling fan modules
- 4 power supply modules

Hitachi Compute Blade 2000 server blade (version 58.22, quantity 16)
- E55A2 server blade
- 2 4-core Intel Xeon X5640 2.66GHz processors
- 72GB memory per blade

Hitachi Virtual Storage Platform


The Hitachi Virtual Storage Platform is a 3D scaling storage platform. With the ability to scale up, scale out, and scale deep at the same time in a single storage system, the Virtual Storage Platform flexibly adapts for performance, capacity, connectivity, and virtualization.

- Scale up: Increase performance, capacity, and connectivity by adding cache, processors, connections, and disks to the base system.
- Scale out: Combine multiple chassis into a single logical system with shared resources.
- Scale deep: Extend the advanced functions of the Virtual Storage Platform to external multivendor storage.

This reference architecture offers improved virtual machine scalability and performance through the Hitachi Virtual Storage Platform when hosting a large number of virtual machines. Hitachi Dynamic Provisioning on the Virtual Storage Platform allows the creation of storage pools in the private cloud. This capacity can be used on demand to improve virtual machine performance and capacity utilization. Using Hitachi Dynamic Tiering on the Virtual Storage Platform simplifies storage administration and offers predictable virtual machine performance.

For more information, see Hitachi Virtual Storage Platform on the Hitachi Data Systems website.

Hitachi Compute Blade 2000


The Hitachi Compute Blade 2000 is an enterprise-class blade server platform. It features the following:

- A balanced system architecture that eliminates bottlenecks in performance and throughput
- Configuration flexibility
- Eco-friendly power-saving capabilities
- Fast server failure recovery using an N+1 cold standby design that allows replacing failed servers within minutes

The host server architecture is a critical component of the virtualized infrastructure. The ability of the host servers to handle the workload of a large number of consolidation candidates increases the consolidation ratio and helps provide the desired cost benefits. The Hitachi Compute Blade 2000 was chosen for this reference architecture because it can support large numbers of virtual machines per blade and meets the requirements set forth by the Microsoft Hyper-V Cloud Fast Track program for processor, RAM, and network capability.

Software Components
Table 2 has the key software components found in this solution.
Table 2. Software Components

Hitachi Storage Navigator: Microcode dependent
Hitachi Dynamic Provisioning: Microcode dependent
Hitachi Dynamic Link Manager: 6.6.0-00
Microsoft Windows Server: 2008 R2 SP1, Datacenter edition (Hyper-V server); 2008 R2 SP1, Enterprise edition (all virtual machines)
Microsoft SQL Server: 2008 R2 SP1, Enterprise edition
Microsoft Windows Server Hyper-V: 2008 R2
Microsoft Virtual Machine Manager: 2008 R2 SP1
Microsoft System Center Operations Manager: 2007 R2
Microsoft System Center Configuration Manager: 2007 R2
Microsoft Virtual Machine Manager Self-Service Portal: 2.0
Microsoft Deployment Toolkit: 2010 Update 1
Microsoft Windows Deployment Services: 2008 R2

Hitachi Dynamic Provisioning


On the Hitachi Virtual Storage Platform, Hitachi Dynamic Provisioning provides wide striping and thin provisioning functionality. Using Hitachi Dynamic Provisioning is similar to using a host-based logical volume manager (LVM), but without incurring host processing overhead. It uses one or more wide-striping pools across many RAID groups within a Hitachi Virtual Storage Platform. Each pool has one or more dynamic provisioning virtual volumes (DP-VOLs) of a user-specified logical size of up to 60TB created against it, with no initial physical space allocated.

Deploying Hitachi Dynamic Provisioning avoids the routine issue of hot spots that occur on logical devices (LDEVs) from individual RAID groups when the host workload exceeds the IOPS or throughput capacity of that RAID group. Dynamic Provisioning distributes the host workload across many RAID groups, which provides a smoothing effect that dramatically reduces hot spots.

Hitachi Dynamic Provisioning also provides the benefit of thin provisioning. Physical space is assigned from the pool to the DP-VOL as needed, in 42MB pages, up to the logical size specified for each DP-VOL. Pool capacity can be expanded or reduced dynamically without disruption or downtime. An expanded pool can be rebalanced across the current and newly added RAID groups for even striping of the data and the workload.

For more information, see the Hitachi Dynamic Provisioning datasheet and Hitachi Dynamic Provisioning on the Hitachi Data Systems website.
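As a rough illustration of the page-granular allocation described above, the following Python sketch models how physical pool consumption tracks written data rather than the DP-VOL's logical size. The function and figures are illustrative only, not part of any Hitachi API.

```python
import math

PAGE_MB = 42  # Hitachi Dynamic Provisioning assigns physical space in 42MB pages

def pages_needed(written_mb: int) -> int:
    """Pool pages consumed by the data actually written to a DP-VOL."""
    return math.ceil(written_mb / PAGE_MB)

# A DP-VOL created with a 5000GB logical size consumes no pool space up front.
logical_gb = 5000
written_gb = 100  # data written by the guest so far (illustrative)

pages = pages_needed(written_gb * 1024)
print(pages)                          # 2439 pages allocated
print(round(pages * PAGE_MB / 1024))  # ~100GB of physical pool space consumed
```

This is why a 19-CSV tenant configuration can expose far more logical capacity than the pools physically contain at any moment.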

Hitachi Dynamic Link Manager


Hitachi Dynamic Link Manager, used for SAN multipathing, is configured using its round-robin multipathing policy. This policy selects a path by rotating through all available paths. This balances the load across all available paths, optimizing IOPS and response time. For more information, see Hitachi Dynamic Link Manager on the Hitachi Data Systems website.
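A simplified model of the round-robin policy's behavior can be sketched as follows. This is an illustration of the rotation concept, not Hitachi Dynamic Link Manager's implementation, and it ignores path failure handling; the port names follow the Virtual Storage Platform's CL1-A naming style.

```python
from itertools import cycle

class RoundRobinSelector:
    """Dispatch each new I/O on the next path in rotation, balancing
    load evenly across all available paths."""
    def __init__(self, paths):
        self._rotation = cycle(paths)

    def next_path(self) -> str:
        return next(self._rotation)

# Four paths from a host to the Virtual Storage Platform (illustrative ports)
selector = RoundRobinSelector(["CL1-A", "CL2-A", "CL1-E", "CL2-E"])
dispatched = [selector.next_path() for _ in range(8)]
print(dispatched)  # each of the four paths carries exactly two of the eight I/Os
```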

Microsoft Windows Server 2008 R2


Microsoft Windows Server 2008 R2 builds on its Windows Server predecessors while delivering new functionality and improvements to the base operating system. New Web tools, virtualization technologies, security enhancements, and management utilities help save time, reduce costs, and provide a solid foundation for your information technology (IT) infrastructure. For more information, see Product Information on the Microsoft website.

Microsoft SQL Server 2008 R2


Microsoft SQL Server 2008 R2 provides a data platform that enables you to run mission-critical applications, reduce the time and cost of developing and managing applications, and deliver actionable insight to your entire organization. Microsoft Virtual Machine Manager and System Center Operations Manager require a SQL Server instance for all data requirements. To satisfy the data needs of a large enterprise deployment, use Microsoft SQL Server 2008 R2 Enterprise. For more information about the features of SQL Server 2008 R2, see the What's New page of SQL Server 2008 R2 Books Online or Product Information on the Microsoft website.

Microsoft Windows Server 2008 R2 Hyper-V


Microsoft Windows Server 2008 R2 Hyper-V is available as an integral feature of Windows Server 2008 R2. Hyper-V allows you to make the best use of your server hardware investments by consolidating multiple server roles as separate virtual machines (VMs) running on a single physical machine. For more information, see Virtualization with Hyper-V: Features on the Microsoft website.

Microsoft Virtual Machine Manager 2008 R2 (SCVMM)


Microsoft Virtual Machine Manager 2008 R2 helps enable centralized management of physical and virtual IT infrastructure, increased server utilization, and dynamic resource optimization across multiple virtualization platforms. It includes end-to-end capabilities such as planning, deploying, managing, and optimizing the virtual infrastructure. For this solution, SCVMM is used to manage only Hyper-V Cloud Fast Track hosts and guests in a single datacenter. For more information, see Virtual Machine Manager on the Microsoft website.

Microsoft System Center Virtual Machine Manager Self-Service Portal 2.0


Microsoft System Center Virtual Machine Manager Self-Service Portal 2.0 allows you to dynamically pool, allocate, and manage data center resources. Using the Self-Service Portal, you can reduce IT costs while increasing agility for your organization. Key benefits of using this in a private cloud architecture are:

- Allocation of data center resources: The portal collects data center infrastructure resources, such as storage, networks, and virtual machine templates, to make them available to business units to meet their infrastructure needs.
- Simplified application onboarding: The portal can simplify bringing a new business unit application online by providing methods for a business unit owner to request resources from the infrastructure pool to host its IT services.
- Self-service provisioning: The portal provides an end-user self-service capability for virtual machine provisioning to streamline the end-user experience of managing virtual machines.

For more information, see System Center Virtual Machine Manager Self-Service Portal 2.0 on the Microsoft website.

Microsoft System Center Operations Manager 2007 R2


Microsoft System Center Operations Manager 2007 R2 enables customers to reduce the cost of data center management across server operating systems and hypervisors through a single, familiar interface. Through numerous views that show state, health, and performance information, operators can gain rapid insight into the state of the IT environment, and the IT services running across different systems and workloads. The components of Microsoft System Center are database-driven applications. Microsoft SQL Server 2008 R2 provides a highly available and well-performing database platform that is critical to overall environment management.

Operations Manager agents are deployed to the fabric management hosts and virtual machines. These in-guest agents provide performance and health monitoring of the operating system only. The Operations Manager instance is used for Hyper-V Cloud infrastructure monitoring only. For more information, see System Center Operations Manager on the Microsoft website.

Microsoft System Center Configuration Manager 2007 R2


Microsoft System Center Configuration Manager 2007 R2 comprehensively assesses, deploys, and updates servers, client computers, and devices across physical, virtual, distributed, and mobile environments. For more information, see Configuration Manager on the Microsoft website.

Microsoft Deployment Toolkit 2010 Update 1


Microsoft Deployment Toolkit 2010 provides a common console with the comprehensive tools and guidance needed to efficiently manage deployment of Windows 7 and Windows Server 2008 R2. This is the recommended process and toolset from Microsoft to automate desktop and server deployment. The software required for installation (operating system, drivers and updates) ships in deployment packages. The toolkit server deploys these packages over the network. For more information, see Microsoft Deployment Toolkit on Microsoft TechNet.

Solution Design
This section provides detailed information on the Microsoft Hyper-V Cloud Fast Track design used for this reference architecture, including the software and hardware design information required to build the basic infrastructure for the environment.

High-level Architecture
This solution uses the Hitachi Compute Blade 2000 and the Hitachi Virtual Storage Platform as pooled resources in support of the private cloud architecture. The Hitachi Compute Blade 2000 provides the resources to host a large number of virtual machines. The Hitachi Virtual Storage Platform supports Hitachi Dynamic Provisioning to ease management and provide rapid deployment of virtual machines.

This reference architecture deploys two Hitachi Compute Blade 2000 chassis with eight blades per chassis. All of the blades run Microsoft Windows Server 2008 R2 SP1 with Hyper-V and failover clustering.

A two-node Hyper-V failover cluster is configured to support the management infrastructure and provide high availability. This management cluster supports the deployment and management of virtual machines through Microsoft Virtual Machine Manager 2008 R2 SP1 and the Microsoft Virtual Machine Manager Self-Service Portal. In addition, this high availability cluster hosts the tools and utilities to monitor and collect performance statistics for the private cloud infrastructure.

A fourteen-node Microsoft Hyper-V failover cluster is configured to host the tenant virtual machines deployed from the management cluster. This provides the ability to move virtual machines quickly between nodes in the cluster, enabling high availability.

In order to support the storage requirement for capacity, performance, and rapid provisioning of a private cloud infrastructure, the Hitachi Virtual Storage Platform was deployed and configured to utilize pools created with Hitachi Dynamic Provisioning. The storage configuration for the cloud fast track architecture consists of five dynamic provisioning pools configured on the Hitachi Virtual Storage Platform.

- One pool was allocated to host the virtual hard disks for the virtual machine operating systems.
- Two pools were allocated to host the application data.
- One pool was allocated for backup data.
- One pool was deployed for those application virtual hard disks that require high performance.

In the OS and application data dynamic provisioning pools, cluster shared volume LUNs were allocated. The LUNs mapped from the backup pool were standard LUNs. For this reference architecture, the backup pool of 177TB has sufficient capacity to use Microsoft Data Protection Manager 2010 or a customer's existing backup strategy to ensure proper protection of the data stored in this environment.

Microsoft System Center Operations Manager and Microsoft System Center Configuration Manager, hosted in the two-node management cluster, were used to allocate the virtual machines and their associated application data LUNs across the dynamic provisioning pools. Use the round-robin method for allocating application LUNs to minimize the management of virtual machine application data in the pools used for application data.

Figure 2 shows the physical layout of the Microsoft Hyper-V Cloud Fast Track reference architecture built by Hitachi Data Systems for this solution.


Figure 2

Hitachi Compute Blade 2000 Chassis Configuration


This reference architecture uses two Hitachi Compute Blade 2000 chassis. Each chassis contains the following:

- Eight X55A2 server blades
- Four 1/10Gb/sec LAN switch modules
- Two Fibre Channel switch modules

Each server blade has the following:

- Two on-board NICs
- Four additional NICs provided by a mezzanine card

Each of these NICs is connected to a LAN switch module.


Each blade has a 2-port Fibre Channel mezzanine card installed that is connected to the Fibre Channel switch modules. Figure 3 shows the front and back view of Hitachi Compute Blade 2000s used in this solution.

Figure 3

Hitachi Compute Blade 2000 Server Architecture


Table 3 and Table 4 list the server blade configuration for this two-chassis configuration. Each blade runs Microsoft Windows Server 2008 R2 SP1, Datacenter Edition, with two 4-core Xeon X5640 2.66GHz processors and 72GB of RAM.
Table 3. Server Blade Configuration, Chassis One

Blade 0: Chassis One-CFT-Node0 (Hyper-V host: management)
Blade 1: Chassis One-CFT-Node1 (Hyper-V host: tenant VMs)
Blade 2: Chassis One-CFT-Node2 (Hyper-V host: tenant VMs)
Blade 3: Chassis One-CFT-Node3 (Hyper-V host: tenant VMs)
Blade 4: Chassis One-CFT-Node4 (Hyper-V host: tenant VMs)
Blade 5: Chassis One-CFT-Node5 (Hyper-V host: tenant VMs)
Blade 6: Chassis One-CFT-Node6 (Hyper-V host: tenant VMs)
Blade 7: Chassis One-CFT-Node7 (Hyper-V host: tenant VMs)


Table 4. Server Blade Configuration, Chassis Two

Blade 0: Chassis Two-CFT-Node0 (Hyper-V host: management)
Blade 1: Chassis Two-CFT-Node1 (Hyper-V host: tenant VMs)
Blade 2: Chassis Two-CFT-Node2 (Hyper-V host: tenant VMs)
Blade 3: Chassis Two-CFT-Node3 (Hyper-V host: tenant VMs)
Blade 4: Chassis Two-CFT-Node4 (Hyper-V host: tenant VMs)
Blade 5: Chassis Two-CFT-Node5 (Hyper-V host: tenant VMs)
Blade 6: Chassis Two-CFT-Node6 (Hyper-V host: tenant VMs)
Blade 7: Chassis Two-CFT-Node7 (Hyper-V host: tenant VMs)

This Hitachi Compute Blade 2000 configuration can host a total of 32 virtual machines per server blade. This means there can be a total of 448 virtual machines in the fourteen-node Hyper-V failover cluster described in this reference architecture. Each server blade's Microsoft Hyper-V host OS and paging files are located on two local 146GB SAS drives, configured as RAID-1 for high performance and availability.
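The sizing arithmetic behind these figures can be sketched as follows. The 4GB management-partition reserve is an illustrative assumption; the sizing goals call only for reserving "the appropriate amount" of memory for the Hyper-V management partition.

```python
# Sizing arithmetic behind the 448-VM figure for the tenant cluster.
BLADE_MEMORY_GB = 72
VMS_PER_BLADE = 32
TENANT_NODES = 14
MGMT_RESERVE_GB = 4  # assumed reserve for the management partition, not from the source

total_vms = VMS_PER_BLADE * TENANT_NODES
per_vm_gb = (BLADE_MEMORY_GB - MGMT_RESERVE_GB) / VMS_PER_BLADE

print(total_vms)   # 448 virtual machines across the tenant cluster
print(per_vm_gb)   # 2.125GB of memory available per virtual machine
```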

Storage Architecture
The Hyper-V Cloud Fast Track architecture uses a Hitachi Virtual Storage Platform. For this architecture, pools created with Hitachi Dynamic Provisioning host the virtual machine OS virtual hard drives and application virtual hard drives. All LUNs in the environment were presented to the Microsoft Hyper-V hosts as cluster shared volumes (CSVs), except the LUNs used for backup data. Four dynamic provisioning pools were allocated to host the virtual machines' virtual hard drives:

- Pool 0 hosts the guest virtual machine operating systems.
- Pools 1, 2, and 3 host the data LUNs for the guest virtual machines.
Figure 4 gives an overview of the Virtual Storage Platform storage configuration.


Figure 4

Clustered Shared Volumes


This reference architecture implements cluster shared volumes (CSVs) to host the virtual machine operating system and application data. Exclusively for use with Microsoft Hyper-V failover clustering, CSVs enable all nodes in the cluster to access the same cluster storage volumes at the same time. This eliminates the one-virtual-machine-per-LUN requirement, allowing multiple virtual machines to be placed on a single CSV, and simplifies the management of the storage infrastructure in a private cloud environment. Because all cluster nodes can access all CSVs at the same time, standard LUN allocation methodologies, based on the performance and capacity requirements of the expected workloads, can be used.

Microsoft recommends isolating the virtual machine operating system I/O from the application data I/O. This is the reason for creating multiple pools with Hitachi Dynamic Provisioning to host the CSVs:

- One pool contains the virtual machine OS virtual hard drives.
- Three pools support application-specific virtual hard drives.

CSV architecture differs from other clustered file systems, which frees it of scalability limitations such as one virtual machine per LUN and drive letter constraints. There is no special guidance for scaling the number of Hyper-V nodes or virtual machines on a CSV volume.


All of the virtual machines' virtual disks running on a particular CSV contend for storage I/O. Understand the I/O workload characteristics of the virtual machines hosted on CSVs located in dynamic provisioning pools. Consider the IOPS requirements and I/O profiles of the deployed virtual machines, such as random read and write operations versus sequential write operations.
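One way to reason about that contention is a back-of-the-envelope IOPS budget per pool. The per-drive IOPS figures and RAID-6 write penalty below are generic rules of thumb, not Hitachi specifications or measurements from this architecture.

```python
# Rule-of-thumb random IOPS per drive (assumptions, not vendor figures)
DRIVE_IOPS = {"15K SAS": 180, "10K SAS": 140, "7.2K SATA": 80}

def backend_iops(drive_count: int, drive_type: str) -> int:
    """Aggregate back-end IOPS a pool's spindles can roughly sustain."""
    return drive_count * DRIVE_IOPS[drive_type]

def host_iops(backend: float, read_fraction: float, write_penalty: int = 6) -> float:
    """Host-visible IOPS after the RAID-6 write penalty (~6 back-end
    operations per random host write)."""
    write_fraction = 1 - read_fraction
    return backend / (read_fraction + write_fraction * write_penalty)

# A pool of 120 x 600GB 10K RPM SAS drives under a 70/30 random read/write mix:
pool_backend = backend_iops(120, "10K SAS")
print(pool_backend)                         # 16800 back-end IOPS
print(round(host_iops(pool_backend, 0.7)))  # ~6720 host-visible IOPS
```

Comparing an estimate like this against the summed IOPS demand of the virtual machines placed on a pool's CSVs helps avoid overcommitting a pool.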

RAID Configuration
To satisfy the requirement to support up to 448 virtual machines, this reference architecture uses the following:

- RAID-6 (6D+2P) groups of 600GB 10K RPM SAS drives and 146GB 15K RPM SAS drives to host the CSVs
- A RAID-6 (6D+2P) pool consisting of 2TB 7.2K RPM SATA drives to support backup of the CSV volumes

The five pools created with Hitachi Dynamic Provisioning for this solution were created from 59 RAID-6 (6D+2P) groups on the Hitachi Virtual Storage Platform. Table 5 has the configuration of each dynamic provisioning pool used in the Hitachi Data Systems lab.
Table 5. Dynamic Provisioning Configuration

Pool 0: 7 RAID groups, 56 drives, 600GB 10K RPM SAS, 22.5TB usable
Pool 1: 8 RAID groups, 64 drives, 146GB 15K RPM SAS, 6.4TB usable
Pool 2: 15 RAID groups, 120 drives, 600GB 10K RPM SAS, 48.3TB usable
Pool 3: 13 RAID groups, 104 drives, 600GB 10K RPM SAS, 41.8TB usable
Pool 4: 16 RAID groups, 128 drives, 2TB 7.2K RPM SATA, 177.6TB usable
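The relationship between the drive counts and usable capacities in Table 5 follows from the RAID-6 (6D+2P) layout, in which six of every eight drives in a group hold data. The sketch below computes the nominal data capacity, which sits somewhat above the usable figures because drives format to less than their nominal size.

```python
DATA_DRIVES_PER_GROUP = 6  # RAID-6 (6D+2P): 6 data + 2 parity drives per group

def nominal_data_tb(raid_groups: int, drive_tb: float) -> float:
    """Nominal (pre-formatting) data capacity of a dynamic provisioning pool."""
    return raid_groups * DATA_DRIVES_PER_GROUP * drive_tb

print(nominal_data_tb(16, 2.0))  # 192.0TB nominal for 16 groups of 2TB drives
print(nominal_data_tb(15, 0.6))  # 54.0TB nominal for 15 groups of 600GB drives
```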

Pool and LUN Configuration


Pool 0 contains a CSV LUN for the management cluster and CSV LUNs for the operating system virtual hard drives of the guest virtual machines deployed in the tenant cluster. Pools 1, 2, and 3 contain CSVs for allocation to the guest virtual machines' application databases and files. Table 6 shows the LUN allocation for each of the dynamic provisioning pools used in the Hitachi Data Systems lab. Pool 4, used for backup LUNs, is not shown in Table 6. This reference architecture follows the normal practice of allocating LUNs used for backup of virtual machines to a backup server outside the cluster configuration.


Table 6. LUN Allocation

Pool 0 (management cluster; storage ports 1A, 2A):
- LUN 00: Cluster Shared Volume 1, 5000GB
- LUN 01: Management cluster quorum disk, 1GB

Pool 1 (tenant cluster; storage ports 1B, 2B, 1C, 2C, 1D, 2D, 1E, 2E, 1F, 2F, 1G, 2G, 1H, 2H):
- LUN 00: Cluster Shared Volume 1, 5000GB

Pool 2 (tenant cluster; same storage ports as Pool 1):
- LUNs 01 through 0A: Cluster Shared Volumes 2 through 11, 5000GB each

Pool 3 (tenant cluster; same storage ports as Pool 1):
- LUNs 0B through 12: Cluster Shared Volumes 12 through 19, 5000GB each
- LUN 13: Tenant cluster quorum disk, 1GB

Storage Area Network Architecture


The storage area network architecture consists of two Fibre Channel switch modules within each of the Hitachi Compute Blade 2000 chassis. The Microsoft Hyper-V management cluster has four paths to the Hitachi Virtual Storage Platform using ports CL1A/CL2A and CL1E/CL2E. The Hyper-V tenant cluster has twelve paths to the Hitachi Virtual Storage Platform using these ports:

- CL1B/CL2B
- CL1C/CL2C
- CL1D/CL2D
- CL1F/CL2F
- CL1G/CL2G
- CL1H/CL2H


The configuration shown in Figure 5 supports high availability by providing multiple paths from the hosts within each Hitachi Compute Blade 2000 to multiple ports on the Hitachi Virtual Storage Platform.

Figure 5


Path Configuration
All zoning is defined on the internal Fibre Channel switch modules. The 2-port mezzanine cards internal to each blade provide two HBA interfaces to the internal switches. Host groups on the Hitachi Virtual Storage Platform ensure that each server blade can access only the LUNs allocated to that server blade. Table 6 lists the connections between the Microsoft Hyper-V failover clusters and the storage system ports for Chassis One. Table 7 lists the connections for Chassis Two.
Table 6. Path Configuration for Chassis One

Blade HBA and Port Number | Switch | Zone Name | Storage System Port | Storage System Host Group
Blade 0 HBA 1 port 1 | FCSW-0 | CH1_Blade_0_HBA1_1_SW0_VSP_1A | 1A | CH1_Blade0_HBA1_1
Blade 0 HBA 1 port 2 | FCSW-1 | CH1_Blade_0_HBA1_2_SW1_VSP_2A | 2A | CH1_Blade0_HBA1_2
Blade 1 HBA 1 port 1 | FCSW-0 | CH1_Blade_1_HBA1_1_SW0_VSP_1B | 1B | CH1_Blade1_HBA1_1
Blade 1 HBA 1 port 2 | FCSW-1 | CH1_Blade_1_HBA1_2_SW1_VSP_2B | 2B | CH1_Blade1_HBA1_2
Blade 2 HBA 1 port 1 | FCSW-0 | CH1_Blade_2_HBA1_1_SW0_VSP_1C | 1C | CH1_Blade2_HBA1_1
Blade 2 HBA 1 port 2 | FCSW-1 | CH1_Blade_2_HBA1_2_SW1_VSP_2C | 2C | CH1_Blade2_HBA1_2
Blade 3 HBA 1 port 1 | FCSW-0 | CH1_Blade_3_HBA1_1_SW0_VSP_1D | 1D | CH1_Blade3_HBA1_1
Blade 3 HBA 1 port 2 | FCSW-1 | CH1_Blade_3_HBA1_2_SW1_VSP_2D | 2D | CH1_Blade3_HBA1_2
Blade 4 HBA 1 port 1 | FCSW-0 | CH1_Blade_4_HBA1_1_SW0_VSP_1B | 1B | CH1_Blade4_HBA1_1
Blade 4 HBA 1 port 2 | FCSW-1 | CH1_Blade_4_HBA1_2_SW1_VSP_2B | 2B | CH1_Blade4_HBA1_2
Blade 5 HBA 1 port 1 | FCSW-0 | CH1_Blade_5_HBA1_1_SW0_VSP_1C | 1C | CH1_Blade5_HBA1_1
Blade 5 HBA 1 port 2 | FCSW-1 | CH1_Blade_5_HBA1_2_SW1_VSP_2C | 2C | CH1_Blade5_HBA1_2
Blade 6 HBA 1 port 1 | FCSW-0 | CH1_Blade_6_HBA1_1_SW0_VSP_1D | 1D | CH1_Blade6_HBA1_1
Blade 6 HBA 1 port 2 | FCSW-1 | CH1_Blade_6_HBA1_2_SW1_VSP_2D | 2D | CH1_Blade6_HBA1_2
Blade 7 HBA 1 port 1 | FCSW-0 | CH1_Blade_7_HBA1_1_SW0_VSP_1B | 1B | CH1_Blade7_HBA1_1
Blade 7 HBA 1 port 2 | FCSW-1 | CH1_Blade_7_HBA1_2_SW1_VSP_2B | 2B | CH1_Blade7_HBA1_2

Table 7. Path Configuration for Chassis Two

Blade HBA and Port Number | Switch | Zone Name | Storage System Port | Storage System Host Group
Blade 0 HBA 1 port 1 | FCSW-0 | CH2_Blade_0_HBA1_1_SW0_VSP_1E | 1E | CH2_Blade0_HBA1_1
Blade 0 HBA 1 port 2 | FCSW-1 | CH2_Blade_0_HBA1_2_SW1_VSP_2E | 2E | CH2_Blade0_HBA1_2
Blade 1 HBA 1 port 1 | FCSW-0 | CH2_Blade_1_HBA1_1_SW0_VSP_1F | 1F | CH2_Blade1_HBA1_1
Blade 1 HBA 1 port 2 | FCSW-1 | CH2_Blade_1_HBA1_2_SW1_VSP_2F | 2F | CH2_Blade1_HBA1_2
Blade 2 HBA 1 port 1 | FCSW-0 | CH2_Blade_2_HBA1_1_SW0_VSP_1G | 1G | CH2_Blade2_HBA1_1
Blade 2 HBA 1 port 2 | FCSW-1 | CH2_Blade_2_HBA1_2_SW1_VSP_2G | 2G | CH2_Blade2_HBA1_2
Blade 3 HBA 1 port 1 | FCSW-0 | CH2_Blade_3_HBA1_1_SW0_VSP_1H | 1H | CH2_Blade3_HBA1_1
Blade 3 HBA 1 port 2 | FCSW-1 | CH2_Blade_3_HBA1_2_SW1_VSP_2H | 2H | CH2_Blade3_HBA1_2
Blade 4 HBA 1 port 1 | FCSW-0 | CH2_Blade_4_HBA1_1_SW0_VSP_1F | 1F | CH2_Blade4_HBA1_1
Blade 4 HBA 1 port 2 | FCSW-1 | CH2_Blade_4_HBA1_2_SW1_VSP_2F | 2F | CH2_Blade4_HBA1_2
Blade 5 HBA 1 port 1 | FCSW-0 | CH2_Blade_5_HBA1_1_SW0_VSP_1G | 1G | CH2_Blade5_HBA1_1
Blade 5 HBA 1 port 2 | FCSW-1 | CH2_Blade_5_HBA1_2_SW1_VSP_2G | 2G | CH2_Blade5_HBA1_2
Blade 6 HBA 1 port 1 | FCSW-0 | CH2_Blade_6_HBA1_1_SW0_VSP_1H | 1H | CH2_Blade6_HBA1_1
Blade 6 HBA 1 port 2 | FCSW-1 | CH2_Blade_6_HBA1_2_SW1_VSP_2H | 2H | CH2_Blade6_HBA1_2
Blade 7 HBA 1 port 1 | FCSW-0 | CH2_Blade_7_HBA1_1_SW0_VSP_1F | 1F | CH2_Blade7_HBA1_1
Blade 7 HBA 1 port 2 | FCSW-1 | CH2_Blade_7_HBA1_2_SW1_VSP_2F | 2F | CH2_Blade7_HBA1_2
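The zone names in Tables 6 and 7 follow a strict convention: CH<chassis>_Blade_<blade>_HBA1_<port>_SW<switch>_VSP_<storage port>. A small helper like the sketch below could generate them consistently when scripting the zoning; the function itself is a hypothetical illustration, not a tool from this solution.

```python
# Generate a zone name following the naming convention used in
# Tables 6 and 7. Illustrative only; assumes one dual-port HBA (HBA1)
# per blade, with port 1 on FCSW-0 and port 2 on FCSW-1.

def zone_name(chassis, blade, hba_port, storage_port):
    """Build CH<c>_Blade_<b>_HBA1_<p>_SW<s>_VSP_<storage port>."""
    switch = hba_port - 1  # HBA port 1 -> switch 0, port 2 -> switch 1
    return f"CH{chassis}_Blade_{blade}_HBA1_{hba_port}_SW{switch}_VSP_{storage_port}"
```

For example, Chassis One, Blade 0, HBA 1 port 1 zoned to storage port 1A yields the first zone name in Table 6.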

Network Architecture
For private cloud solutions to provide both high network performance and high availability, the network must isolate traffic in a virtualized environment. Microsoft recommends separating network traffic into distinct networks for private cloud implementations. Assign each network type to a different subnet; using subnets breaks the configuration into smaller, more efficient networks. Achieve further isolation between network types by using virtual LAN (VLAN) segmentation and dedicated network switches.

For VLAN-based network segmentation or isolation, configure the host servers, host clusters, VMM, and the network switches to enable rapid provisioning and network segmentation. With Microsoft Hyper-V and host clusters, define identical virtual networks on all nodes so that a virtual machine can fail over to any node and maintain its connection to the network.

Table 8 lists the networks required in this reference architecture environment.


Table 8. Required Networks

Cluster Public/Management
This provides the management interface to the cluster. External management applications such as SCVMM and SCOM communicate with the cluster using this network. Use this network to manage the Microsoft Hyper-V hosts to do the following:
 Avoid competition with the virtual machine guest traffic
 Provide a degree of separation for security
 Manage the system more easily
Typically, you dedicate one network adapter per host and one port per network device to the management network.

Cluster Private
This is the primary network interface for communication between the nodes in the cluster. Use this for the cluster heartbeat network and cluster node communication. This must be separate from the cluster management network.

Cluster Shared Volumes
This is for redirection of I/O when storage connectivity to CSVs is lost because of a failure in the Fibre Channel network. CSV traffic uses the same network specified for the cluster private network.

Virtual Machine
This is the network for virtual machines to communicate with clients. Use this for virtual machine traffic. This reference architecture dedicates two networks to virtual machine traffic; other architectures could have one or more networks for this. A Hyper-V virtual switch is required to support virtual machine network traffic.

Live Migration
This network is for high-speed transfer of virtual machines between nodes in the Hyper-V failover cluster. By default, live migration traffic prefers private networks over public networks. In a configuration with more than one private network, live migration flows over the private network that is not being used by the cluster private or CSV network. Set the priority of which network to use in the cluster management interface.

To support these network traffic requirements, the internal network switches on each Hitachi Compute Blade 2000 are configured as shown in Table 9. This configuration creates separate VLANs on the physical network interfaces.

Table 9. LAN Switch Module Configuration

Switch Module Number | VLAN Number | Network Traffic Type | IP Address Range
0 | 2500 | Management | 10.64.1.x
1,2 | 2501 | Live Migration | 10.64.2.x
1,2 | 2502 | CSV/Cluster | 10.64.3.x
3 | 2503 | Virtual Machines | 10.64.4.x
0 | 2504 | Virtual Machines | 10.64.5.x
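One way to sanity-check a VLAN plan like Table 9 is to express it as data and verify that the subnets do not overlap. The sketch below does this with the Python standard library; the /24 prefix length is an assumption for illustration, since the table only gives the 10.64.n.x ranges.

```python
# Express the Table 9 VLAN plan as data and check for subnet overlap.
# The /24 prefix length is an illustrative assumption.
import ipaddress

VLANS = {
    2500: ("Management",       "10.64.1.0/24"),
    2501: ("Live Migration",   "10.64.2.0/24"),
    2502: ("CSV/Cluster",      "10.64.3.0/24"),
    2503: ("Virtual Machines", "10.64.4.0/24"),
    2504: ("Virtual Machines", "10.64.5.0/24"),
}

nets = [ipaddress.ip_network(cidr) for _, cidr in VLANS.values()]
# Pairwise overlap check across all subnets in the plan.
overlapping = any(a.overlaps(b) for i, a in enumerate(nets) for b in nets[i + 1:])
```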

Figure 6 shows the internal network switch module connections, which use two 10 GbE cross connects for network communication between the server blades in the two chassis, as well as the network configuration used in this reference architecture for each Hitachi Compute Blade 2000 chassis.


Figure 6


Network Uplink Connectivity

The 1 Gbps or 10 Gbps uplink capabilities of the network switch modules provide off-rack connectivity. When connected to the customer's core network switching infrastructure, these switches provide extensive connectivity into the environment.

Management Architecture
The management systems and tools described in this section are used to deploy and operate the Microsoft Hyper-V Cloud Fast Track reference architecture. They consist of the toolsets that manage the hardware and software. The Hitachi Command Suite and the Microsoft System Center management suite provide the capabilities you need to manage the private cloud end-to-end. To provide high availability for the management system, a two-node Hyper-V failover cluster hosts the management software. Table 10 summarizes the virtual machines deployed in the management cluster. All of these virtual machines run as highly available virtual machines.
Table 10. Management Cluster Virtual Machines

Virtual Machine | Role | Hyper-V Host | Operating System, Processor, and Memory
VSP-MSFT-SNM2 | Storage Management | Chassis One-Node 0 | Microsoft Windows Server 2008 R2 Enterprise Edition, 1 vCPU, 4 GB RAM
VSP-MSFT-SQL1 | Host for Microsoft SQL Server 2008 R2 | Chassis One-Node 0 | Microsoft Windows Server 2008 R2 Enterprise Edition, 4 vCPUs, 8 GB RAM
VSP-MSFT-SQL2 | Host for Microsoft SQL Server 2008 R2 | Chassis One-Node 0 | Microsoft Windows Server 2008 R2 Enterprise Edition, 4 vCPUs, 8 GB RAM
VSP-MSFT-SCVMM | SCVMM | Chassis Two-Node 0 | Microsoft Windows Server 2008 R2 Enterprise Edition, 2 vCPUs, 4 GB RAM
VSP-MSFT-OpsMgr | SCOM/OpsMgr | Chassis Two-Node 0 | Microsoft Windows Server 2008 R2 Enterprise Edition, 2 vCPUs, 4 GB RAM

Microsoft SQL Server

This reference architecture deploys two highly available Microsoft SQL Server 2008 R2 virtual machines to support the management infrastructure. Each SQL Server virtual machine is configured with 8 GB of memory and four virtual CPUs, and runs Microsoft Windows Server 2008 R2 Enterprise Edition. Table 11 shows the specifications for each SQL Server virtual machine.
Table 11. SQL Server Virtual Machine Specifications

LU | Purpose | Size
LUN1, CSV Volume | Operating System | 30 GB
LUN2, CSV Volume | Database LU | 200 GB
LUN3, CSV Volume | Log LU | 50 GB


Table 12 shows the database configurations for each SQL virtual machine.
Table 12. SQL Server Database Configuration

Database Client | Instance Name | Database Name | Virtual Machine
VMM SSP | SQL1Instance | SCVMMSSP | SQL1
VMM | SQL1Instance | VMM_DB | SQL1
Ops Mgr | SQL1Instance | Ops_Mgr_DB | SQL1
Ops Mgr | SQL2Instance | Ops_Mgr_DW_DB | SQL2

Microsoft System Center Virtual Machine Manager R2 (SCVMM)

Microsoft System Center Virtual Machine Manager is deployed for this reference architecture to manage the Microsoft Hyper-V hosts and guests in a single datacenter. The System Center Virtual Machine Manager instance that manages this solution should manage no virtualization infrastructure outside of this solution; it is designed to operate only within the scope of this reference architecture.

SCVMM is deployed on a virtual machine running Microsoft Windows Server 2008 R2 Enterprise Edition with two virtual CPUs and 4 GB of memory. One 30 GB OS virtual hard drive is allocated in a CSV, and one 500 GB virtual hard drive is provisioned to the SCVMM guest machine from a CSV. This 500 GB LUN is the Library share for VMM, which contains virtual machine templates, hardware profiles, and the system-prepared virtual hard drives used for deployment by Microsoft Virtual Machine Manager and the Microsoft Virtual Machine Manager Self-Service Portal.

This environment has these roles enabled within SCVMM:

 SCVMM Administrator
 Administrator Console
 Command Shell
 SCVMM Library
 SQL Server Database (remote)
Table 13 shows the standardized set of VMM templates this configuration uses to deploy virtual machines. These templates are based on the Microsoft Hyper-V Cloud Fast Track Reference Architecture Guide and can be customized to fit your deployment requirements.


Table 13. Virtual Machine Templates

Template | Specs | Network | OS
Template 1 (Small) | 1 vCPU, 2 GB memory, 50 GB disk | VLAN 2503, VLAN 2504 | Microsoft Windows Server 2008 R2 SP1 Enterprise
Template 2 (Medium) | 2 vCPU, 4 GB memory, 100 GB disk | VLAN 2503, VLAN 2504 | Microsoft Windows Server 2008 R2 SP1 Enterprise
Template 3 (Large) | 4 vCPU, 8 GB memory, 200 GB disk | VLAN 2503, VLAN 2504 | Microsoft Windows Server 2008 R2 SP1 Enterprise
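A standardized template set like Table 13 lends itself to simple request-to-template mapping during self-service provisioning. The sketch below is a hypothetical helper, not part of the Fast Track program or VMMSSP: it picks the smallest Table 13 template that satisfies a resource request.

```python
# Hypothetical helper: pick the smallest Table 13 VMM template that
# satisfies a requested vCPU/memory/disk footprint.

TEMPLATES = [
    # (name, vCPUs, memory_gb, disk_gb) -- ordered smallest to largest
    ("Template 1 (Small)",  1, 2, 50),
    ("Template 2 (Medium)", 2, 4, 100),
    ("Template 3 (Large)",  4, 8, 200),
]

def choose_template(vcpus, memory_gb, disk_gb):
    """Return the first (smallest) template meeting all requirements,
    or None if even the large template is too small."""
    for name, c, m, d in TEMPLATES:
        if c >= vcpus and m >= memory_gb and d >= disk_gb:
            return name
    return None
```

A request for 2 vCPUs, 4 GB of memory, and 80 GB of disk, for example, maps to the medium template.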

Microsoft Virtual Machine Manager Integration with Microsoft System Center Operations Manager
Microsoft Virtual Machine Manager is configured to integrate with Microsoft System Center Operations Manager. SCVMM uses System Center Operations Manager to monitor the health and availability of the Hyper-V hosts and the virtual machines that SCVMM manages. SCVMM also uses System Center Operations Manager to monitor the health and availability of the System Center Virtual Machine Manager server, databases, library servers, and self-service web servers. System Center Operations Manager provides views of the virtualized environment in the SCVMM administrator console.

SCVMM integration with SCOM permits Performance and Resource Optimization (PRO) packs to enable dynamic management of a virtualized infrastructure. The host-level PRO actions in the System Center Virtual Machine Manager 2008 Management Pack recommend migrating the virtual machine with the highest resource usage on a host whenever the CPU or memory usage on that host exceeds the threshold defined by a PRO monitor. The virtual machine migrates to another host in the host group or host cluster that runs the same virtualization software.

If an IT organization has a workload that is unsuitable for migration when running in a virtual machine, that virtual machine can be excluded from host-level PRO actions. Such a virtual machine is not migrated, even if it has the highest usage of the elevated resource on the host.

Virtual Machine Manager Self-Service Portal 2.0

For this reference architecture, VMMSSP is set up and configured to create virtual machines, based on the templates previously defined in this paper, that provision the appropriate hardware, OS settings, and storage.

Microsoft System Center Operations Manager 2007 R2

System Center Operations Manager agents are deployed to the Hyper-V hosts and virtual machines. The in-guest agents report the performance and health of the operating system within each virtual machine. This System Center Operations Manager instance monitors only the Hyper-V cloud infrastructure; it does not monitor at the application level.


The following roles are enabled for this instance of System Center Operations Manager:

 Root Management Server
 Reporting Server (database resides on SQL Server)
 Data Warehouse (database resides on SQL Server)
 Operator Console
 Command Shell

The following management packs are installed to provide monitoring of the cloud infrastructure:

 Hitachi Compute Blade 2000 Management Pack
 Microsoft Virtual Machine Manager 2008 R2
 Microsoft Windows Server Base Operating System
 Microsoft Windows Server Failover Clustering
 Microsoft Windows Server 2008 Hyper-V
 Microsoft SQL Server Management Pack
 Microsoft Windows Server Internet Information Services (IIS) 2000/2003/2008
 Microsoft System Center Management Packs
The Hitachi Compute Blade 2000 Management Pack integrates with the Microsoft System Center Operations Manager server to report on and monitor Hitachi Compute Blade 2000 chassis and server blades. This management pack displays Alert, Diagram, and State views inside the Monitoring area for the Hitachi Compute Blade on the Operations Manager console.

The Alert View provides the following information:

 Chassis Alerts
 Windows Server Alerts

The Diagram View provides the following information:

 Hitachi Compute Blade Comprehensive Diagram
 Windows Server Blade Diagram

The State View provides the following information:

 Blade State View
 Chassis State View
 Fan State View
 Management Module State View
 Partition State View
 PCI State View
 Power Supply View
 Internal Switch State View


Engineering Validation
This Hyper-V Cloud Fast Track reference architecture was designed to provide a robust infrastructure on which a customer can quickly deploy a private cloud. Because each customer's environment is unique, specific functionality testing was performed only to ensure that the underlying infrastructure met the requirements of the Hyper-V Cloud Fast Track program from Microsoft. The specific functionality tests performed were:

 Ensure that the network design was properly configured for both performance and availability
 Ensure that the management structure was configured properly for monitoring and reporting on the health of the infrastructure
 Validate that the VMM Self-Service Portal correctly deployed virtual machines into the tenant node cluster based on the pre-defined VMM templates

The solution described in this white paper passed all of Microsoft's validation testing for the Hyper-V Cloud Fast Track program.

Conclusion
Using this Hitachi reference architecture built for Microsoft Hyper-V Cloud Fast Track, you can quickly deploy private cloud infrastructures with predictable results. This solution provides a validated reference architecture that combines the Hitachi Compute Blade 2000 and Hitachi Virtual Storage Platform with Microsoft Windows Server 2008 R2 using Hyper-V and System Center.

Hitachi Data Systems Global Services offers experienced storage consultants, proven methodologies, and a comprehensive services portfolio to assist you in implementing Hitachi products and solutions in your environment. For more information, see the Hitachi Data Systems Global Services web site.

Live and recorded product demonstrations are available for many Hitachi products. To schedule a live demonstration, contact a sales representative. To view a recorded demonstration, see the Hitachi Data Systems Corporate Resources web site. Click the Product Demos tab for a list of available recorded demonstrations.

Hitachi Data Systems Academy provides training on Hitachi products, technology, solutions, and certifications. Hitachi Data Systems Academy delivers on-demand web-based training (WBT), classroom-based instructor-led training (ILT), and virtual instructor-led training (vILT) courses. For more information, see the Hitachi Data Systems Academy web site.

For more information about Hitachi products and services, contact your sales representative or channel partner, or visit the Hitachi Data Systems web site.


Hitachi is a registered trademark of Hitachi, Ltd., in the United States and other countries. Hitachi Data Systems is a registered trademark and service mark of Hitachi, Ltd., in the United States and other countries. All other trademarks, service marks, and company names mentioned in this document are properties of their respective owners. Notice: This document is for informational purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems Corporation.

Hitachi Data Systems Corporation 2011. All Rights Reserved. AS-098-00 August 2011 Corporate Headquarters 750 Central Expressway, Santa Clara, California 95050-2627 USA www.hds.com Regional Contact Information Americas: +1 408 970 1000 or info@hds.com Europe, Middle East and Africa: +44 (0) 1753 618000 or info.emea@hds.com Asia Pacific: +852 3189 7900 or hds.marketing.apac@hds.com
