
Hyper-V Management: Addressing the Top Nine Challenges

A discussion on ways to plan, troubleshoot and optimize environments based on best practices discussed by Microsoft MVPs.

Introduction
Hyper-V Server Troubleshooting is Complex + Collaborative
With virtualization comes standardization. More companies than ever before are virtualizing major workloads into private clouds built atop Hyper-V Server. Gone are the days when mission-critical applications were reserved for physical hardware. Hundreds of virtualized guests are orchestrated dynamically based on capacity requirements and performance variations.

A Hyper-V-dependent fabric is complex, and keeping it running at its best takes specialized teams working together. Manually diagnosing, troubleshooting and resolving Hyper-V Server performance problems is not only time-consuming, but also requires the storage, server, and network IT teams to work flawlessly together. Without a common, real-time view into the health of Hyper-V hosts, storage sub-systems and underlying networks, IT Pros are left frustrated when trying to resolve Hyper-V problems.

Virtualized infrastructure outages are costly because of complexity. Consider these facts:

• It takes two (2) hours per incident to recover virtualized OS-related services, and longer for hosted applications.
• 80 percent of IT organizations will have deployed a virtualization solution by the end of 2011. This means there will be more VMs deployed in 2011 than in 2001 through 2009 combined.
• 100 percent of IT organizations say that their private cloud is supported by a cross-functional team of three or
more specialized members.

Without the right visibility into and control of the private cloud, teams will spend endless hours troubleshooting problems in the context of only their own domain. A cross-domain dashboard – a single pane of glass that provides the same view into the cloud fabric – is required for seamless communication. This communication speeds up troubleshooting while reducing costly outages caused by cluster failovers that stem from poor capacity planning.

This white paper discusses the “Top Nine Problem Areas for Managing Hyper-V.” In each section, we discuss ways
to plan, troubleshoot and optimize environments based on best practices discussed by Microsoft MVPs.


Dozens of Tools, Dozens of Screens

The effective management of Hyper-V is not solely a matter of tools. There are dozens of
them, and each tool provides a different view into the many components of a virtual
infrastructure. Each tool is “domain” specific: storage administrators have either the
OnCommand or Symmetrix Management Console; network administrators have Smarts or
Brocade Switch Admin, while server administrators have Performance Monitor, System
Center Virtual Machine Manager, and System Center Operations Manager.

Unfortunately, this means the typical cross-functional team of virtual infrastructure administrators each looks at an incomplete picture. What’s more, they must together reference and cross-correlate dozens of screens and hundreds of data points in order to solve a single incident.

For example, the typical Hyper-V administrator has the following tools open:

• System Center Virtual Machine Manager
• Performance Monitor or System Center Operations Manager
• RDP Sessions
• System Logs
• System Center Reports
• PowerShell
• Native Management Systems

It’s a matter of the right tool: a single tool that acts as the always-on, go-to dashboard for all infrastructure administrators, no matter their specialization. For the administrators, it must be transparent where the components reside – in the on-premises data center, in the private cloud or in the public cloud. Everything must be accessible through one tool.

Best Practices: Managing + Troubleshooting Microsoft Hyper-V

This whitepaper is neither a tutorial nor a step-by-step handbook for common problems. Rather, it distills the wisdom of some of the smartest, most cutting-edge Hyper-V administrators. It focuses on planning, architecting and troubleshooting specific types of problems, including high availability, cluster shared volumes, VHD storage configurations, guest configurations, and snapshot usage.


1. Hyper-V Hardware Configurations
Planning for and configuring your Hyper-V infrastructure correctly is the most important part. Improper planning will surely result in failure and headaches. It is therefore very important that you do a thorough assessment of your infrastructure before you start virtualizing it. The assessment needs to tell you how much memory, CPU, storage and network bandwidth you need in your virtual infrastructure, and thus how many hosts you need. There are intelligent and easy-to-use tools, such as the Microsoft Assessment and Planning Toolkit, which will help you gather and assimilate this data to arrive at the hardware configuration required for your new virtual infrastructure.

When you decide to virtualize part or all of your IT infrastructure, the virtual infrastructure must be very reliable, because all the applications needed by end users depend on it. As such, the components of the virtual infrastructure must be highly available.

With Hyper-V you have several options to build a highly available virtual infrastructure. First of all, you need to
implement high availability at your hardware level: at least two storage controllers (HBA or iSCSI), redundant
network connections and redundancy at your storage level in the Hyper-V host and the connected storage system.

Additionally, you can implement high availability at the hypervisor (software) level. Windows Failover Clustering
makes it possible to create a cluster of Hyper-V nodes. In Windows Server 2008 R2 you can build a cluster with a
maximum of 16 nodes; for Windows Server 8 this will be 63 nodes. Virtual machines reside as cluster resources in this cluster and can be hosted by any node in the cluster. When a node fails, the virtual machine will automatically be transferred to another node.

In a clustered environment it’s also possible to Live Migrate (without any downtime) a virtual machine from one
node to another. By using Windows Failover Clustering you can make your virtual machines highly available and
thus make the application and services provided by the virtual machines highly available.
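As a minimal sketch of how such a move can be scripted, the FailoverClusters PowerShell module that ships with Windows Server 2008 R2 exposes a live migration cmdlet; the VM and node names below are hypothetical examples, not values from this paper:

# Minimal sketch: live-migrate a clustered VM to another node without downtime.
# "SQL-VM01" and "HV-NODE2" are hypothetical names used only for illustration.
Import-Module FailoverClusters
Move-ClusterVirtualMachineRole -Name "SQL-VM01" -Node "HV-NODE2" -MigrationType Live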

When you decide to build a highly available Hyper-V environment, there are a couple of components which need
some special attention: (1) storage; (2) network; (3) server hardware – processors; and (4) server memory.


Storage IO + Striping

In a virtual environment, virtual machines are represented by a number of files. These files
must be saved on a storage system. This can be local storage, but must be shared storage
when you build a highly available environment.

As you probably understand, access to these files must be very fast. The speed of your
storage system determines the speed of your virtual machines. The storage design that is
utilized for the host server architecture has a major impact on host and guest performance.

Storage performance is a complicated mix of drive, interface, controller, cache, protocol, SAN, HBA, driver, and operating system considerations. The overall performance of the storage architecture is typically measured in terms of maximum throughput, maximum IO operations per second (IOPS), and latency or response time. While each of these three factors is important, IOPS and latency are the most relevant to server virtualization.

The type of hard drive utilized in the host server or the storage array will have the most
significant impact on the overall storage architecture performance. The critical performance
factors for hard disks are the interface architecture (for example, U320 SCSI, SAS, SATA),
the rotational speed of the drive (7200, 10k, 15k RPM), and the average latency in
milliseconds. Additional factors, such as the cache on the drive, and support for advanced
features, such as Native Command Queuing (NCQ), can improve performance.

As with the storage connectivity, high IOPS and low latency are more critical than
maximum sustained throughput when it comes to host server sizing and guest
performance. When selecting drives, this translates into selecting those with the highest
rotational speed and lowest latency possible. Utilizing 15k RPM drives over 10k RPM
drives can result in up to 35% more IOPS per drive. Our recommendation is that you make
use of at least 10,000 RPM disks in your storage system.

When you place the disks in a RAID configuration (like RAID 1+0), you can achieve even
higher reliability and faster access to these files. This RAID level is also known as
“mirroring with striping.” RAID 1+0 uses a striped array of disks that are then mirrored to
another identical set of striped disks. For example, a striped array can be created by using
five disks. The striped array of disks is then mirrored using another set of five striped disks.

RAID 1+0 provides the performance benefits of disk striping with the redundancy of mirroring. It delivers the highest read-and-write performance of any of the common RAID levels, but at the expense of using twice as many disks. Consult your storage vendor about which RAID level to use on your specific storage system to achieve fast read/write access, because this can differ per storage system. To access your shared storage system you should use Fibre Channel or iSCSI connections. Anything less will result in I/O latency, failover problems, and in some cases, lost data.


Striping: Select your stripe unit size carefully; stripe units for Windows volume manager
are fixed at 64 KB. However, most hardware solutions range from 4 KB to 1 MB or more.
You want to select a unit which maximizes the disk activity without unnecessarily breaking
up requests by requiring multiple disks to service a single write request by the system.

Disk Speed: Modern enterprise disks access their media at 50 to 150 MB/s depending on
the rotation speed and sectors per track, which varies based on the type of disk and block
configuration.

Earlier, we addressed some of the best practices with disk and striping configurations.
However, volume layout and storage priority parameters are key in making sure that your
most important VMs have priority. Look carefully at the kind of workloads your virtual
machines run, and place virtual machines with similar workloads together on a volume.
Decide per volume which RAID level you will use for the volume.

Internal Priorities: In Windows Server 2008 R2, you can specify an internal priority on
individual IOs. You are able to lower the priority of background IO tasks to give priority for
response-sensitive IO tasks. For example, if you have a streaming video server and an
Online Analytical Processing (OLAP) database on the same node, you will want to ensure
that the streaming video server’s responses take precedence over reads on the OLAP
database. You can change the IO configuration of applications by setting this registry key:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\DeviceClasses
\{Device GUID}\DeviceParameters\Classpnp\IdlePrioritySupported
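As a hedged sketch of working with that key from PowerShell, the fragment below only reads the value and leaves the write commented out; the device GUID is a placeholder you must replace, and the correct value for your device should be confirmed against Microsoft and vendor documentation before changing production systems:

# Hedged sketch: inspect (and optionally set) IdlePrioritySupported for a device class.
# The GUID below is a placeholder for the {Device GUID} in the path above.
$deviceGuid = '{PLACEHOLDER-DEVICE-GUID}'
$key = "HKLM:\System\CurrentControlSet\Control\DeviceClasses\$deviceGuid\DeviceParameters\Classpnp"

# Read the current value, if the key exists.
Get-ItemProperty -Path $key -Name IdlePrioritySupported -ErrorAction SilentlyContinue

# Example write (commented out): validate the desired DWORD value for your device first.
# New-ItemProperty -Path $key -Name IdlePrioritySupported -Value 1 -PropertyType DWord -Force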

Network Interfaces

Network configuration of a Hyper-V host is not the out-of-the-box configuration you get on any other Windows Server 2008 R2 host. Within Hyper-V we have host networks and virtual machine networks. Because of the importance of these components, network connections must be made highly available; therefore a fairly large number of NICs per host server may be required. In some instances, this is the one factor that can weigh against blade servers.

When you build a highly available Hyper-V environment your host must have at least eight
network adapters:

• Two adapters for host management;
• Two adapters for the cluster network;
• Two adapters dedicated for Live Migration;
• Two adapters for the virtual machine network.


These adapters can be teamed through teaming software. However, NIC teaming is not a built-in feature of Hyper-V, so we depend on the support of the hardware vendor; always check whether your vendor supports teaming functionality with Hyper-V. This will change in Windows Server 8, where teaming is a built-in feature of the OS. Within the several networks you define in Hyper-V, it is possible to make use of VLANs to isolate network traffic. If you need to isolate virtual machines in different VLANs, this can be done by advertising these VLANs on your physical network, enabling VLAN transparency or promiscuous mode on your virtual machine network, and specifying the VLAN ID in the properties of your VM.
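As an illustration only, on Windows Server 2012 and later (where the Hyper-V PowerShell module ships in-box) the per-VM VLAN ID can be set from PowerShell; on Windows Server 2008 R2 the same setting lives in the network adapter properties in Hyper-V Manager. The VM name and VLAN ID below are hypothetical:

# Hedged sketch (Hyper-V module, Windows Server 2012+): place a VM's network
# adapter in access mode on a hypothetical VLAN ID 20, then verify the result.
Set-VMNetworkAdapterVlan -VMName "Web01" -Access -VlanId 20
Get-VMNetworkAdapterVlan -VMName "Web01"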

When you make use of an iSCSI SAN system, you need to isolate iSCSI traffic in a separate network or VLAN. You also need to use dedicated network adapters for iSCSI traffic; these adapters must not be shared with the other host or virtual machine networks. It is also recommended to exclude the iSCSI network from cluster communications.

Processors

Hyper-V requires x64 processor architecture from Intel or AMD, as well as support for
hardware-execute-disable and hardware virtualization, such as Intel VT or AMD-V. The
number of processor cores is a key performance characteristic. Windows Server 2008 R2
with Hyper-V makes excellent use of multi-core processors, so the more cores the better.

Another important characteristic is the processor clock speed, which is the speed at which all cores in the processor operate. It is important because it will be the clock speed of all of the guest virtual machines. This is a key variable in the consolidation ratio because it impacts the number of candidates that the host server can handle and the speed at which those guests will operate. For example, choosing a 2 GHz processor rather than a 3 GHz processor on a server that will host 20 guests means that all of those guests will run at only 2 GHz.

Hyper-V has some built-in processor-optimizing capabilities, including: (1) guest idle
states; (2) timer coalescing, and (3) core parking in Windows Server 2008 R2. This saves
you power and money.

Memory

Dynamic memory is a powerful feature that – when used correctly – can dramatically optimize the use of memory. Optimizing memory usage is critical in order to achieve a high virtual machine density while maintaining the desired system and application performance. However, before you configure your memory pool, you must first ensure that you have adequate memory for your root and child partitions. Remember that Hyper-V will always allocate memory for the child partitions before the root. For information on how to specify a memory reserve for the parent partition, refer to the Troubleshooting section in the Hyper-V Dynamic Memory Configuration Guide found here. We recommend configuring a host reserve of 400 MB + (0.03 * total memory size of the host).
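As a quick sanity check of that rule of thumb, the sketch below computes the recommended reserve for the local host using the Win32_ComputerSystem WMI class; it only prints a number and changes nothing:

# Recommended parent-partition reserve = 400 MB + 3% of total physical memory.
$totalMB   = (Get-WmiObject Win32_ComputerSystem).TotalPhysicalMemory / 1MB
$reserveMB = 400 + (0.03 * $totalMB)
"Recommended host memory reserve: {0:N0} MB" -f $reserveMB
# Example: a host with 64 GB of RAM yields roughly 400 + 1966 = 2366 MB.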


2. Capacity Management + Performance Utilization
An entire paper could be written on capacity management planning and utilization for Hyper-V. For now, we will
focus on the aspects that matter most: processor, storage, memory, and network.

Generally, you want to allocate the same level of resources to your VM as would be required in a physical server. For example, if a SQL server requires 4 GB of RAM and 4 CPUs, then you should allocate the same level in a virtualized setting. The benefit comes, of course, when those resources are under-utilized and can be used by other VMs.


Processors

Hypervisors perform complex calculations and queuing to time-slice jobs efficiently across multiple processors. Inherently, there is overhead introduced by the hypervisor to emulate such operations. Beyond configuring integration services correctly, there are several things to remember:

• Four Processors Per VM: Hyper-V R2 supports a maximum of 4 virtual processors per VM. VMs with lower loads should be configured to use one or two virtual processors. Balancing and sizing your VMs for proper processor allocation is key to realizing virtualized savings. Only allocate multiple virtual processors to a virtual machine if the applications that run in that virtual machine are SMP-aware.

• Expected Virtual Processors: If you have a quad-core processor with hyper-threading, you will see 8 instances of this object (one for each logical processor).

Reporting on processor utilization requires that you collect data on the Hyper-V hypervisor processor performance counters.
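The exact counter list depends on your monitoring tool, but as a hedged sketch the in-box Get-Counter cmdlet can sample the commonly used hypervisor processor counters shown below; verify the counter names on your own hosts (for example with Get-Counter -ListSet 'Hyper-V Hypervisor*') because they can vary by OS version:

# Hedged sketch: sample host (logical processor) and guest (virtual processor)
# run time four times at 15-second intervals.
Get-Counter -Counter @(
    '\Hyper-V Hypervisor Logical Processor(*)\% Total Run Time',
    '\Hyper-V Hypervisor Virtual Processor(*)\% Total Run Time'
) -SampleInterval 15 -MaxSamples 4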


Storage

Overall storage capacity is something that you can estimate by looking at the applications
and data retention requirements. Generally, this involves estimating a storage driver (file
size, data per user, transactions per day) and multiplying it by growth and retention rates.

However, IO utilization is far more important than overall capacity. You can always buy
more disks; however, if the storage system is slow, then you have a serious problem on
your hands.

Synthetic Devices: Hyper-V supports synthetic and emulated storage devices. Avoid
emulated devices. Synthetic devices offer far superior throughput and response time, while
using less CPU. The only time you should consider emulated devices is when you have a
filter driver that loads and reroutes IOs to a Synthetic Storage device.

Note: VM integration services must be installed to use synthetic devices on the VM.

Virtual Disks: Hyper-V supports three types of virtual hard disks: (1) Dynamically
Expanding VHD; (2) Fixed-Size VHD and (3) Differencing VHD. Dynamically expanding
virtual hard disks provide storage capacity as needed to store data. The size of the .vhd file
is small when the disk is created and grows as data is added to the disk. The size of
the .vhd file does not shrink automatically when data is deleted from the virtual hard disk.
However, you can compact the disk to decrease the file size after data is deleted by using
the Edit Virtual Hard Disk Wizard.

Dynamically expanding VHDs are known to increase the number of disk IOs required for each write and to increase CPU overhead. Fixed-size VHDs allocate the entire size of the VHD at creation time; this has the lowest fragmentation likelihood and the lowest CPU overhead. Differencing virtual hard disks provide storage that enables you to make changes to a parent virtual hard disk without altering that disk. The size of the .vhd file for a differencing disk grows as changes are stored to the disk. This type of disk consumes the most CPU cycles and incurs the largest amount of IO.
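As an illustration of the fixed-versus-dynamic choice, the sketch below uses the New-VHD cmdlet from the Hyper-V PowerShell module that ships with Windows Server 2012 and later (on Windows Server 2008 R2 the same choice is made in the New Virtual Hard Disk wizard); the paths and sizes are hypothetical:

# Fixed-size VHD: allocated up front -- lowest fragmentation and CPU overhead,
# preferred for production workloads.
New-VHD -Path 'D:\VHDs\sql-data.vhd' -SizeBytes 100GB -Fixed

# Dynamically expanding VHD: small on creation, grows as data is written --
# convenient for test labs, but adds IO and CPU overhead on each write.
New-VHD -Path 'D:\VHDs\test-lab.vhd' -SizeBytes 100GB -Dynamic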

When a snapshot is taken, a differencing VHD is created. Snapshots essentially create a chain of differencing disks that point back to a single parent VHD: the more snapshots in a chain, the slower and more resource-intensive the VM will be. There is a logical limit of 50 snapshots; however, we recommend avoiding snapshots in production environments, and keeping no more than 5 snapshots in a chain for high-performing servers.

Pass-through Disks: Pass-through disks are fast. Essentially, a pass-through disk is mapped directly to a physical disk or storage LUN. Pass-through disks avoid the congestion imposed by the file system in the root partition. Pass-through disks are great for large data files or servers that require quick failover or migration.


Reporting on I/O performance and utilization requires that you collect data on the following
performance counters:

• \PhysicalDisk(*)\Current Disk Queue Length
• \PhysicalDisk(*)\Disk Bytes/sec
• \PhysicalDisk(*)\Disk Transfers/sec
• \Hyper-V Virtual Storage Device(*)\*
• \Hyper-V Virtual IDE Controller(*)\*

Memory

Hyper-V virtualizes the guest physical memory to isolate VMs from each other and to provide a contiguous, zero-based memory space for each guest operating system. If you’re using non-SLAT hardware, constant modifications to the virtual address space in the guest will consume IO and memory.

For most virtual machines, the use of dynamic memory is recommended. However, some
applications are not well-suited for the use of dynamic memory. Microsoft Exchange is an
example of an application that is not well-suited for dynamic memory.

All VMs should have an appropriate amount of memory assigned as startup memory when
using dynamic memory. Refer to the Appendix: Memory Settings in the Hyper-V Dynamic
Memory Configuration Guide found here for more detailed information on startup memory
recommendations.

Monitoring and reporting on dynamic memory usage is one of the most critical aspects. You should ensure that you are collecting the Hyper-V Dynamic Memory performance counters.
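Again as a hedged sketch with the in-box Get-Counter cmdlet, the per-VM dynamic memory counters (available on hosts running Windows Server 2008 R2 SP1 or later) and the host's free memory can be sampled like this; confirm the exact counter names on your hosts, for example with Get-Counter -ListSet 'Hyper-V Dynamic Memory*':

# Hedged sketch: sample dynamic memory counters and host free memory.
Get-Counter -Counter @(
    '\Hyper-V Dynamic Memory VM(*)\Guest Visible Physical Memory',
    '\Hyper-V Dynamic Memory Balancer(*)\Available Memory',
    '\Memory\Available MBytes'
) -SampleInterval 15 -MaxSamples 4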


When you make use of Dynamic Memory, it’s important to keep an eye on the memory demand of a virtual machine. By “Memory Demand,” we mean the amount of memory that a virtual machine “needs” (called “Total Committed Memory” in Windows Server 2008 R2 SP1).

The Memory Demand of a virtual machine is automatically calculated by Hyper-V, based on guest memory usage information. Besides this, there is a Memory Buffer. The memory buffer is used for immediate memory needs of a virtual machine, and it is also used as file cache by Windows.

The memory buffer is configurable per virtual machine. When you configure 1024 MB as Startup RAM and set a buffer of 20 percent, roughly 1228 MB of memory will be reserved for your virtual machine. Knowing this, we can conclude:

Ideal Memory = Memory Demand + Memory Buffer

Network

Measuring throughput and capacity on your NICs is critical in order to maintain a high-
performing virtual platform. In general, you must focus on the synthetic switch devices.

Make sure that your synthetic network adapters support VLAN tagging; VLAN tagging
provides significantly better network performance if the physical adapter supports
NDIS_ENCAPSULATION_IEEE_802_3_P_AND_Q_IN_OOB encapsulation for both large
send and checksum offload.

To report on Hyper-V capacity and monitor performance, the following performance counters should be collected:

• \Network Interface(*)\*
• \Hyper-V Virtual Switch(*)\*
• \Hyper-V Legacy Network Adapter(*)\*
• \Hyper-V Virtual Network Adapter(*)\*


3. Enlightened OS Drivers
Enlightened VMs are virtual machines that have the Hyper-V integration services installed. Integration services provide the necessary drivers to ensure that emulation is avoided by the VM’s operating system. A telltale sign that integration services are not installed is when you log in to a VM and the NIC does not work, or the mouse does not move outside the VM window. Failing to choose operating system versions that support integration services will result in significant performance problems, because everything is emulated.

Virtual Machines without the Integration Components installed don’t have a VSC, and thus can’t communicate over
the VMBus with the VSP for accessing hardware. These virtual machines make use of emulated devices. Emulated
devices for virtual machines are executed in the VM Worker Process. The VM Worker process runs in Ring 3 of the
Hypervisor (User Mode). For accessing the hardware, we need to talk to the Device Drivers of the parent partition,
and these drivers are running in Ring 0 of the Hypervisor (Kernel Mode) so a context switch is necessary. A context
switch always needs time and consumes CPU time, so avoid the use of emulated devices.


4. Live Migrations
Migrating existing workloads to Hyper-V is very common and useful, and once virtualized, a server can be shifted from one node to the next in a second or less. Doing it correctly takes a little finesse. Microsoft supports several migration scenarios, a few of which are discussed below.


Configurations Not Working

After migrating a VM from one cluster to another, you may experience on of the following
errors: (1) Network adapter for the VM shows, "Configuration Error" after it is moved. Live
migration and quick migration fails on these; (2) The "Enable Virtual LAN identification" is
unchecked and the wrong "VLAN ID" is inserted. (3) A totally different network is selected
for the network.

To prevent this from occurring, make sure that you always make changes to the VM via Hyper-V Manager. To repair an affected VM, select it in Failover Cluster Manager and choose “More Actions” > “Refresh Virtual Machine Configuration.” This resets the configuration on the new node.
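The same refresh can be scripted with the FailoverClusters PowerShell module; a minimal sketch, where the VM name is a hypothetical example:

# Hedged sketch: refresh the cluster's copy of a VM's configuration after a move
# so that network and VLAN settings match the new node.
Import-Module FailoverClusters
Update-ClusterVirtualMachineConfiguration -Name "Web01"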

CPU Compatibility

Ideally, all hardware in your virtual environment is the same. In reality, though, it’s not. Live
Migration is not supported between AMD and Intel Hyper-V hosts. Microsoft has published
a paper on the matrix of compatible CPUs here.

Physical to Virtual Migrations

Microsoft has made VMM a workhorse in making the conversion process straightforward,
not only for physical servers, but also for VMs hosted on VMware. Online migrations of
these servers are not recommended, even though they are supported. Make sure that the
server has not been in operation very long and that it is stable.

Back-ups

Always take a back-up of the server before you migrate. These backups will greatly
simplify your recovery process if something fails.


5. Backup and Restore
Backing up and restoring is not simple with Hyper-V. There are myriad options, including host-based backups, agent-based backups and SAN snapshots. SAN snapshots are not to be confused with VHD snapshots.

VHD snapshots should not be considered backups, as they essentially modify or chain differentially from a single parent VHD. Consider them instead as point-in-time checkpoints for rolling back small changes. For example, if a patch creates a new issue, rolling back to a pre-patch snapshot is a good use of them. Always make sure to add informative text to snapshots so that you and your team understand the state that was saved.

Here’s how you back up a VM using Windows Server Backup (a command-line sketch follows the list):

• When you perform a backup of the virtual machines, you must back up all volumes that host files for the
virtual machine, including the InitialStore.xml file (in C:\ProgramData\Microsoft\Windows\Hyper-V, by default)
and the volume(s) containing the VHD(s) and configuration XML files. For example, if the virtual machine
configuration files are stored on the D: volume, and the virtual machine virtual hard disk (VHD) files are stored
on the E: volume, and InitialStore.xml file is stored on the C: volume, you must back up the C:, D: and E:
volumes.
• Virtual machines which do not have Integration Services installed will be put in a saved state while the VSS
snapshot is created.
• Virtual machines which are running operating systems that do not support VSS (such as Microsoft Windows
2000 or Windows XP), will be put in a saved state while the VSS snapshot is created.
• Virtual machines which contain dynamic disks must be backed up offline.
• Note: Windows Server Backup does not support backing up Hyper-V virtual machines on Cluster Shared
Volumes (CSV volumes).
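A hedged command-line sketch of such a backup with the built-in wbadmin tool follows; the drive letters mirror the example above and the backup target F: is hypothetical:

# Hedged sketch (elevated PowerShell prompt): back up the volumes holding
# InitialStore.xml (C:), the VM configuration files (D:) and the VHDs (E:)
# to a dedicated backup volume F:. Drive letters are hypothetical.
wbadmin start backup -backupTarget:F: "-include:C:,D:,E:" -quiet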

A great guide on all of the considerations for backing up and restoring your VM environment is located here.


6. High Availability
High availability is achieved through clustering locally and geographically. Hyper-V should always be clustered and
have a failover scenario that is exercisable.

Windows Server 2008 R2 includes the first version of Windows Failover Clustering to offer a distributed file access solution. Cluster Shared Volumes (CSV) in R2 is exclusively for use with the Hyper-V role, and enables all nodes in the cluster to access the same cluster storage volumes at the same time. This enhancement eliminates the one-VM-per-LUN requirement of previous Hyper-V versions. CSV uses standard NTFS and has no special hardware requirements: if the storage is suitable for Failover Clustering, it is suitable for CSV.

Because all cluster nodes can access all CSV volumes simultaneously, we can now use standard LUN allocation
methodologies based on performance and capacity requirements of the workloads running within the VMs
themselves. Generally speaking, isolating the VM Operating System I/O from the application data I/O is a good rule
of thumb, in addition to application-specific I/O considerations, such as segregating databases and transaction logs
and creating SAN volumes that factor in the I/O profile itself (i.e., random read-and-write operations vs. sequential
write operations).

CSV provides not only shared access to the disk, but also disk I/O fault tolerance. In the event the storage path on one node becomes unavailable, the I/O for that node will be rerouted via Server Message Block (SMB) through another node. There is a performance impact while running in this state; it is designed as a temporary failover path while the primary dedicated storage path is brought back online. This feature can use the Live Migration network, which further increases the need for a dedicated, gigabit or faster NIC for CSV and Live Migration.

CSV maintains metadata information about the volume access, and requires that some I/O operations take place
over the cluster communications network. One node in the cluster is designated as the coordinator node and is
responsible for these disk operations. Virtual Machines, however, have direct I/O access to the volumes and only
use the dedicated storage paths for disk I/O, unless a failure scenario occurs as described above.

Microsoft recommends that you not expose the cluster shared volumes to servers that are not in the cluster. One volume will function as the witness disk, and one volume will contain the files that are being shared between the cluster nodes. This volume serves as the shared storage on which you will create the virtual machines and the virtual hard disks.
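As a minimal sketch of enabling CSV on an existing clustered disk with the FailoverClusters PowerShell module (the disk resource name is a hypothetical example):

# Hedged sketch: turn an existing clustered disk into a Cluster Shared Volume;
# it then appears on every node under C:\ClusterStorage\.
Import-Module FailoverClusters
Add-ClusterSharedVolume -Name "Cluster Disk 2"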

Thomas Olzark provides a great rundown of his use case and best practices here.

7. Microsoft Mission-Critical Applications
Two of the most mission-critical Microsoft Applications are: (1) Microsoft Exchange; (2) Microsoft SQL Server.
Virtualizing these two applications has largely been off-limits for IT teams. However, advancements in Windows
Server 2008 R2 have made virtualization for these roles a reality.


Virtualized Exchange Servers

Several years ago, most administrators laughed at the notion of virtualizing Exchange.
Today, 90% of enterprises are virtualizing a portion or all of their Exchange servers. As of
Exchange 2010 SP1, combining host level clustering with Exchange clustering is officially
supported.

There are several factors to consider when virtualizing Exchange, including memory allocation, storage configuration, and performance management. In this section, we discuss using Hyper-V as the virtualization platform; however, other platforms such as VMware may be chosen as well. When you use a platform other than Hyper-V, always check whether the hypervisor is listed in the Server Virtualization Validation Program (SVVP). Virtualization of Exchange is only supported if the hypervisor is part of the SVVP.

A good rule of thumb is: Size your virtual Exchange servers just like physical Exchange
servers.

Deployment Architecture

Set up a Hyper-V cluster in a high-availability configuration. At minimum, make sure to configure two VMs with their required Exchange 2010 role topology. That is:

• Client Access Server (CAS)
• Hub Transport (HUB)
• Mailbox Server (MBX)

Combine the three roles on both servers. Make the MBX role highly available through a Database Availability Group (DAG), and make the CAS and HUB roles highly available through a (virtual) load balancer. A DAG can be an active-active configuration for your mailbox databases, so balancing the load to this role is also possible. Due to the huge improvements in the database architecture and structure, the databases can be placed on normal VHDs.

Differencing and dynamically expanding VHDs are not supported for Exchange. This configuration makes the virtual machine and the application inside it (Exchange) highly available. For more information about virtualizing Exchange 2010, you can read this article.


Virtualized SQL Servers

SQL Server is best virtualized on Hyper-V. The reason Hyper-V is a good fit for SQL Server is SQL Server’s reliance on Windows Server Cluster Technology (WSCT) in highly available configurations. WSCT is arguably the best technology for managing database mirroring for SQL Server. Moreover, SQL Server is able to take advantage of many of Hyper-V’s native, advanced features in a failover scenario.

When SQL Server is deployed in an HA configuration on Hyper-V, the following are still
possible:

• Dynamic Memory: SQL Server will receive the right level of memory based on its
demand.
• Live Migration: Administrators are able to move SQL Server VMs around without
failure or data loss.
• N-Port ID Virtualization: NPIV is supported via VMM. NPIV allows multiple Fibre Channel node port IDs to share a single physical port on the SAN.

Overall, there are a number of other performance and configuration reasons to use Hyper-
V as the virtualization platform for SQL Server. Microsoft explains the finer details here.


8. Proactive Monitoring + Communication

Monitoring Hyper-V and its guests is critical. There are many different ways to monitor the availability, performance, configuration and security of your Hyper-V environment. Solutions like System Center Operations Manager, Veeam, or IBM Tivoli all offer basic monitoring capabilities. However, you should pay special attention to these systems’ capabilities when moving into a clustered virtual environment: many do not take into account the dynamic nature of these environments and the need to connect performance counters from the original host to the new host.

Regardless of how basic or sophisticated your monitoring framework is, you should always ensure that you have
the basics covered:

I. Windows Server 2008 R2 – Hyper-V Role
a. Processor
b. Memory
c. Disk IO
d. Network
II. Guest Performance
a. Processor
b. Memory
c. Disk IO
d. Network
III. Mission Critical Apps
a. Monitor Application-Specific Aspects


9. Tuning Performance Baselines

Many monitoring tools come with preset thresholds. Most thresholds are neither designed nor set for your specific environment. Bad thresholds mean a ton of alerts, alerts which you will forever ignore.

Thresholds need to be monitored and tuned. This can take time, and often requires constant evaluation and updates. For example, the CPU utilization threshold on a larger, multi-processor server may need to be set lower than the default 85 percent, based on its actual usage. Windows Server publishes over 800 different performance counters, and picking the right ones to tune is difficult. Luckily, there are many products out there that can help; focus on the counters and thresholds that matter most for your environment.


Conclusion:
Vital Signs: One Screen. One Team. Complexity Dismissed.
Work Smarter - Manage Your Private Cloud Complexity

It takes nearly two hours to troubleshoot basic virtualized fabric problems, according to Gartner. Hyper-V relies on a complex fabric to function, which requires highly specialized skills to manage successfully. When you do not have enough hours in the day, training or manpower, let Vital Signs for Hyper-V point you in the right direction immediately.

Work Faster - Fix Problems in Minutes

Well over 80 percent of the time spent troubleshooting private cloud problems goes into isolating the underlying cause to a piece of the underlying infrastructure. Vital Signs cuts troubleshooting time by providing you with a single screen that visually correlates all events and points you toward a resolution in minutes. Additionally, Vital Signs provides administrators with detailed information on each performance graph, ensuring that administrators can immediately understand and begin to resolve the problem. The end result is problem resolution in minutes, rather than hours, days, or weeks.

Work Proactively – Improve Service Levels

The private cloud can and will likely run every application, including messaging, collaboration services, and databases. When highly visible applications run atop Hyper-V, any problem within the fabric affects everyone in the organization in multiple ways. With Vital Signs, cross-functional administrators can detect and pinpoint the root cause of a problem immediately, well before multiple applications fail.

Work Efficiently - Reduce Operational Costs

Vital Signs enables enterprises to spend less time and money on manual troubleshooting and on trying to resolve
private cloud problems across a dozen tools. Vital Signs helps reduce the operational costs of maintaining your
cloud infrastructure.


Savision :: Vital Signs for Hyper-V - More Information

For more information regarding Savision and Vital Signs, please contact:
sales@savision.com or visit: www.savision.com/Hyper-V

Author:
Peter Noorderijk
Senior Consultant at Imara ICT (The Netherlands).
peter.noorderijk@imara-ict.nl
http://www.imara-ict.nl
