
Best practices with Citrix XenServer on HP ProLiant servers

Technical white paper

Table of contents
Executive summary
Defining best practices
    Processor considerations
    Memory
    Storage
    Networking
The HP advantage
    Virtual Connect Flex-10
    Storage
    HP ProLiant BL4xx server blades
Hints and tips
    XenApp on XenServer
Summary
For more information

Executive summary
Businesses are looking to virtualize more and more frequently, and for a multitude of reasons. One is consolidating old hardware running old applications and operating systems onto newer, more efficient hardware. Upgrading the application, operating system, and hardware simultaneously can be quite daunting; migrating the existing application and operating system to a virtual machine on new hardware lets the server administrator handle the upgrade in manageable steps and keeps disruption to the production environment to a minimum.

The ability to create a redundant and highly available environment is also appealing. If the hardware fails, the VM can be restarted on another server; if the hardware requires maintenance, the VM can be migrated to a different server while the current server is upgraded and updated.

Cost reduction is another major reason for virtualizing: running multiple virtual machines on a single server, or on pools of servers, can greatly reduce the data center footprint and operating costs.

This document examines approaches and ideas to help you define what is important in a virtualized solution.

Target audience: Technical planners and decision makers looking to create a virtualized environment using Citrix XenServer and HP ProLiant servers.

Defining best practices


When implementing best practices for Citrix XenServer on HP ProLiant servers, you need to understand the performance implications for CPU utilization, storage IOPS, and network bandwidth. Sizing and characterization, however, tend to focus on a single server. Building a virtualization solution is more like building a data center: more than a single server must be considered, as there may be multiple servers as well as storage and networking to plan for. A reference architecture to support 1000 users in a Citrix XenDesktop VDI configuration on BL460c G6 servers requires 16 servers with 12 HP StorageWorks P4500 SAN nodes.¹ As with a data center, storage and networking must be part of the planning process for a virtualized configuration; in a pooled environment, they become the potential bottlenecks.

When looking at storage, you must consider the type of storage to be used and the storage features desired. In larger deployments, a SAN (either iSCSI or fibre channel) may be used to create server pools for live migration, workload balancing, and the implementation of high availability. Once the type of storage is determined, its impact on the networking configuration and the desired storage features need to be addressed.

Understanding best practices requires looking at the CPU utilization on each server, the number of servers required, the storage to be implemented, and the networking required to support the infrastructure.

Processor considerations
Following the sizing guidelines defined in Performance and characterization of Citrix XenServer on HP BladeSystem, and depending on the application load, 4 VMs can be supported on a single core, so a fully configured ProLiant BL460c G6 with two quad-core processors can support up to 28 virtual machines.² In the event of a server failure, all 28 VMs fail. A better alternative is to deploy two BL460c G6 servers, each with 14 VMs; in that scenario, losing a server impacts only 14 VMs. The worst case is the need to import the downed VMs to the still-running server and restart them. If the servers are in a pooled configuration with shared storage utilizing the XenServer HA feature, the failed VMs are automatically restarted on the remaining server, still without exceeding the defined limits for a single server.

1. For more information, see http://h20195.www2.hp.com/V2/getdocument.aspx?docname=4AA1-4581ENW&cc=us&lc=en
2. It is assumed that all guidelines defined in Performance and characterization of Citrix XenServer on HP BladeSystem for CPU utilization load, networking, and application definitions are followed. Two sockets of quad-core processors provide 8 cores; with 1 core reserved for the hypervisor, 7 cores times 4 VMs per core yields 28 VMs.

The shared storage also allows moving running VMs between servers as desired with live migration using XenMotion. This migration avoids any downtime for the VMs while allowing the server administrator to perform repairs or upgrades on the hardware. If a third BL460c G6 is added to the pool, each server can run 9-10 VMs. Using live migration and workload balancing from XenServer, VMs can automatically be moved between the servers to achieve the best balance and performance across the pool, while still allowing one server to be taken off-line with little effect on the overall performance of the pool.

The application being run within the VMs is the driving factor for how many VMs can be supported on a single server. With Citrix XenApp, a BL460c G6 Nehalem server performs best with four VMs per server, each configured with four vCPUs; on a BL465c G6, the best performance is 6 VMs with 2 vCPUs each.³ However, running Citrix XenDesktop 4 on a ProLiant BL460c G6, eight to nine VMs can be supported per core, up to 70 VMs on a single server.⁴ Understanding the application load of each VM determines the number of VMs that can be supported on a single server, and that determination helps establish the total number of physical servers required.

When deciding how many VMs to run on a server, it is best to reserve one core for the hypervisor/control domain of XenServer. With the release of Citrix XenServer 5.6, XenServer supports 64 logical cores within the server. The virtual CPU limit per VM is eight.

Virtual CPUs
Consideration must be given to the use of virtual CPUs (vCPUs). Additional vCPUs may improve performance for a VM depending on the application being run, but too many vCPUs, or overcommitting vCPUs, will hurt performance.
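The per-server VM ceiling and the failover arithmetic above can be sketched in a few lines of Python. This is an illustration only; the function names are invented here, and the 4-VMs-per-core density and one reserved core come from the cited guidelines.

```python
# Hypothetical sizing helpers based on the white paper's rule of thumb:
# reserve one core for the XenServer control domain, then multiply the
# remaining cores by the per-core VM density (4 VMs/core in the cited tests).

def max_vms(total_cores: int, vms_per_core: int = 4, reserved_cores: int = 1) -> int:
    """Return the per-server VM ceiling, reserving cores for the hypervisor."""
    usable = max(total_cores - reserved_cores, 0)
    return usable * vms_per_core

def vms_lost_on_failure(total_vms: int, servers: int) -> int:
    """Worst-case VMs affected when one of `servers` hosting `total_vms` fails.

    Assumes VMs are spread evenly across the pool.
    """
    return -(-total_vms // servers)  # ceiling division

# Two-socket quad-core BL460c G6: 8 cores -> 28 VMs, matching the paper.
print(max_vms(8))                  # 28
print(vms_lost_on_failure(28, 2))  # 14 VMs at risk with two servers
print(vms_lost_on_failure(28, 3))  # 10 VMs at risk with three servers
```

The same helpers show why spreading 28 VMs across two or three servers shrinks the blast radius of a single hardware failure.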
If the application running in the VM is multi-threaded, it will take advantage of additional vCPUs; however, just adding vCPUs will not necessarily improve performance. In Performance and characterization of Citrix XenServer on HP BladeSystem, a BL490c G6 running a VM with a multi-threaded, CPU-intensive application showed a 75% performance improvement with four vCPUs over the same test run with 1 vCPU. Going past four vCPUs, however, degraded performance, and with eight vCPUs, performance was little better than running with a single vCPU.

Over-committing vCPUs will also cause performance issues. Again citing Performance and characterization of Citrix XenServer on HP BladeSystem, a BL465c G6 with dual-socket six-core processors running six VMs with two vCPUs each returns the same per-VM performance as a single VM with two vCPUs; however, eight VMs with two vCPUs each had performance closer to a single VM with 1 vCPU.

The rule of thumb for virtual CPUs: if you add multiple vCPUs to VMs, do not allow the total number of vCPUs assigned across all VMs to exceed the number of logical CPUs in the XenServer. For Intel servers with hyper-threading on, the additional logical processors can be counted when determining the number of vCPUs to allocate; a BL460c G6 with 2-socket quad-core processors and hyper-threading on presents 16 logical CPUs to XenServer.

Intel Hyper-threading
For the Intel Nehalem processors, it is best to run with hyper-threading on; testing showed close to a 30% improvement in performance. For processors prior to Nehalem, turn hyper-threading off.
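The vCPU rule of thumb above can be expressed as a simple check. This sketch is illustrative (the function names are invented here); the doubling for hyper-threading applies to Nehalem-class processors as described above.

```python
# Sketch of the vCPU over-commit rule of thumb: the total vCPUs across all
# VMs should not exceed the host's logical CPU count (physical cores,
# doubled when hyper-threading is enabled).

def logical_cpus(sockets: int, cores_per_socket: int, hyperthreading: bool) -> int:
    """Logical CPUs the host presents to XenServer."""
    cpus = sockets * cores_per_socket
    return cpus * 2 if hyperthreading else cpus

def vcpus_overcommitted(vm_vcpus, sockets, cores_per_socket, hyperthreading=False):
    """True if the assigned vCPUs exceed the host's logical CPUs."""
    return sum(vm_vcpus) > logical_cpus(sockets, cores_per_socket, hyperthreading)

# BL460c G6, 2-socket quad-core with hyper-threading: 16 logical CPUs.
print(logical_cpus(2, 4, True))                   # 16
# Four XenApp VMs with four vCPUs each fit exactly.
print(vcpus_overcommitted([4] * 4, 2, 4, True))   # False
# Eight 2-vCPU VMs on a dual-socket six-core BL465c G6 over-commit (16 > 12).
print(vcpus_overcommitted([2] * 8, 2, 6, False))  # True
```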

3. Details of XenApp on XenServer testing by HP are available at http://www.hp.com/solutions/ActiveAnswers/XenAPP
4. HP reference configurations for Citrix XenDesktop are available at http://h20195.www2.hp.com/V2/getdocument.aspx?docname=4AA14581ENW&cc=us&lc=en

Memory
Starting with Citrix XenServer 5.6, up to 256 GB of RAM is supported in the physical server, and each VM can be assigned up to 8 GB of RAM. Memory over-commit is not supported in XenServer 5.6, although dynamic memory allocation is. For each VM, memory can either be set at a fixed amount, meaning the VM requires that amount of memory when it starts, or configured with a minimum and a maximum. At start-up, the VM is allocated the minimum amount of memory; as the VM requires more, it is given more until the maximum is reached. Under no circumstance, however, can the memory used by the running VMs exceed the total physical memory in the server.
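Because there is no over-commit, admission of a new VM reduces to simple arithmetic against physical RAM. The sketch below is illustrative (the function name and the 64 GB host are assumptions, not figures from the paper); VMs are counted at their minimum allocation, since that is what they claim at boot.

```python
# Illustrative check of the XenServer 5.6 memory rule: no memory over-commit,
# so the running VMs' allocations can never exceed host RAM. Each VM boots at
# the minimum of its dynamic range.

def can_start_vm(host_ram_gb, running_minimums_gb, new_vm_min_gb):
    """True if a new VM's minimum allocation fits alongside the running VMs."""
    return sum(running_minimums_gb) + new_vm_min_gb <= host_ram_gb

# A hypothetical 64 GB host with seven VMs booted at 8 GB each
# (8 GB is also the per-VM maximum in XenServer 5.6):
print(can_start_vm(64, [8] * 7, 8))  # True: exactly 64 GB
print(can_start_vm(64, [8] * 8, 4))  # False: 68 GB exceeds physical RAM
```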

Storage
In a virtualized solution, storage is a cornerstone of the environment. First you must decide whether to use local or networked storage. Local storage is the storage in the server, or storage directly attached to the server. This storage cannot be shared and, therefore, cannot be used to support live migration or HA.

Note

HP does support direct-attached shared storage that allows for live migration and HA, but it is limited to a pool of no more than eight physical servers.

The other type of storage in XenServer is network-attached storage. As the name implies, this storage is accessed over a network or fibre connection and includes the following:

CIFS (Windows shares)
  o ISO storage
NFS (Network File System)
  o ISO storage
  o VM storage
  o Accessible from VMs for data storage
iSCSI
  o VM storage
  o Accessible from VMs for data storage
HBA SAN
  o VM storage

ISO storage holds ISO images of operating systems and applications that can be used to install or create new virtual machines. VM storage contains the disk files associated with a VM. Directly attaching an HBA LUN to a VM is not supported in XenServer 5.6 or earlier.

When looking at storage for a VM, ask yourself whether the data is to be stored with the VM, in a separate file, or on the network. Best practice is to put the data in a network store. This keeps the size of the VM down, making it more manageable. Backup of VMs in XenServer is accomplished by exporting the VM, and a VM needs to be backed up whenever updates or patches are applied to the operating system. Storing data with this backup increases the size of the exported VM and puts potentially stale data in the VM when it is imported.

Another approach is to create additional files and attach them directly to a VM, similar to attaching another hard drive to a computer. These additional files are seen as hard drives within the VM. They can reside either in the same storage repository as the VM's VHD file or on different storage repositories attached to the host server. In a pooled configuration, all servers in the pool must have access to the storage repositories, or you will be unable to do live migration or HA. Attaching files in this manner also increases the overall VM size when performing exports, imports, or copies, unless you detach the file before doing one of these tasks.

Another consideration is boot from SAN. Its advantage is the rapid replacement of a failed server. In an HBA/fibre channel scenario, when a server fails, replace it with a new server, update the WWN (World Wide Name), and reboot. If using an HP BladeSystem configuration with HP Virtual Connect Fibre Channel modules, the WWN can be managed by the Virtual Connect (VC) module, eliminating the need to change the WWN: with Virtual Connect Fibre Channel modules, the WWN is assigned to a slot in the BladeSystem enclosure, not to a physical HBA. However, XenServer uses a small footprint to hold the hypervisor and control domain, often between 5 and 10 GB, and any remaining space is converted to local storage for the host. If a 20 GB LUN stores the XenServer image, more than half of that LUN is seen as local storage. Depending on your implementation, this space may be extremely under-utilized or not used at all, leading to under-utilized space on your SAN and a multitude of small LUNs to manage.

In a pooled environment with the XenServer host configuration backed up, a new server can be installed and added to the pool in less than 15 minutes. The process is to install XenServer to the drive, restore the host configuration file using XenCenter or xsconsole, and reboot. Upon reboot, the server automatically re-joins the pool and has access to all shared storage and VMs.
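The boot-from-SAN sizing concern above is easy to quantify. This is a rough illustration; the 8 GB footprint used as the default is an assumption within the 5-10 GB range the paper cites, and the function name is invented here.

```python
# Rough illustration of the boot-from-SAN sizing concern: XenServer claims
# roughly 5-10 GB for the hypervisor/control domain and turns the rest of
# the boot LUN into local storage, which may sit idle on the SAN.

def stranded_local_storage_gb(lun_gb: float, xenserver_footprint_gb: float = 8.0) -> float:
    """Space on the boot LUN converted to (possibly unused) local storage."""
    return max(lun_gb - xenserver_footprint_gb, 0.0)

print(stranded_local_storage_gb(20))  # 12.0 GB -- more than half the LUN
print(stranded_local_storage_gb(5))   # 0.0 GB -- LUN fully consumed
```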
Note
As of this writing, boot from SAN is not supported on HP fibre-based storage, nor is boot from iSCSI supported.

Defining your storage solution
Another factor in determining the type of storage to use is the storage reliability, features, and expansion capabilities required. Is site-to-site failover a requirement? Storage redundancy? Ease of growth or expansion? Snapshots? In a pooled environment, if more processing power is required, add more servers to the pool. However, unless you are using something like the HP StorageWorks P4000 SANs, adding more IOPS to a storage configuration may be difficult, so choosing the correct storage for performance and growth is critical. Your solution may require only two physical servers to support the VMs you wish to run, but your storage requirements may call for the features of an HP StorageWorks Enterprise Virtual Array (EVA) disk array. The solution is driven not by the number of servers but by the storage requirements; therefore, we break the solutions down by storage rather than servers (basic, advanced, and enterprise), adding and replicating servers as needed to handle additional requirements.

Basic Solution
A basic solution requires no shared storage and consists of one or more servers running standalone. These servers may have direct-attached storage, but no shared storage. If VMs are to be moved between servers, the process is exporting and importing the VMs. There is no pool of servers, and all storage associated with a server is seen as local to that server.

Advanced Solution
An advanced solution is a multi-server solution with shared storage, but not all of the high-end storage features are required. The storage is used for storing VMs to create resource pools of multiple servers for live migration with XenMotion, workload balancing, and the HA capabilities of XenServer. The solution may consist of only a few physical servers and the shared storage, e.g. a small retail solution that needs to be highly available. In this scenario there may be only a limited number of VMs: one for the point-of-sale application, one running a small email server, another running a small web service, and one running infrastructure pieces like DNS/DHCP. The desire is to be able to move the VMs between the physical servers and have the VMs access store data on the storage devices. This solution can be replicated as needed to support additional users or different locations or sites.

Enterprise Solution
An enterprise solution requires enterprise-level storage; high-end features like fault tolerance, replication, disaster recovery, and site-to-site failover are required. Future growth is very likely in the form of more physical servers, more users, and more VMs, so the ability to expand the storage is required. The number of physical servers needed may be only two or three to support the VMs, yet high-end storage features are still required for the solution. For example, a small geological company may find that three physical servers easily handle the VMs they wish to run, but the data is stored in a huge back-end database that must be maintained at all costs; this data needs to be backed up often and quickly, and the storage must be able to survive disasters. Or it could be a company collecting on-line retail sales from multiple sites around the country, where the loss of any one site would make a major impact on revenue.

For these types of solutions, you need to look at enterprise-level SAN storage like the HP StorageWorks P4000 or the HP EVA 4x00/6x00/8x00 SAN solutions. Whatever the storage, make sure its IOPS (I/Os per second) will support your load.

Networking
When looking at networking, the goal is to separate dissimilar types of traffic as much as possible. Citrix XenServer requires a management network to control the hosts. This network is used by Citrix XenCenter to connect to and manage the hosts, and it is also the network on which live migration, HA, and workload balancing are configured. The VMs themselves do not need access to this network and should have another network defined for user data. It is not necessary, and for security it is best, that the XenServer hosts not be given IP addresses on this production network. If using a provisioning server, such as the one in Citrix XenDesktop, it is best to isolate the boot traffic associated with provisioning on its own network as well. In most situations, at least three networks are required to support XenServer efficiently:

Management - used for accessing XenServer hosts, live migration, HA, and workload balancing
Storage/iSCSI - provides access to iSCSI storage like the HP StorageWorks P4000
Production/User - used for accessing applications and VMs; there may be multiple production networks

Citrix XenServer 5.6 does a very good job of load balancing the available network bandwidth across multiple VMs. As stated in Performance and characterization of Citrix XenServer on HP BladeSystem, on a 1 Gb/s line, 8-10 VMs will each get approximately 100 Mb/s of transfer speed, with total utilization of the line at 94%,⁵ or around 940 Mb/s. Each additional four VMs added to the network drops the throughput of each VM by about 30%, while maintaining close to 94% utilization of the 1 Gb/s line. Reducing the number of VMs on the 1 Gb/s line to four gives each VM approximately 241 Mb/s at the same 94% line utilization, and a single VM on the network achieves close to 940 Mb/s, utilizing the entire 1 Gb/s physical line.

5. In TCP/IP, it is not possible to achieve 100% line utilization due to packet acknowledgements and TCP/IP framing overhead.

As the physical line speed increases, near-linear growth is seen in the VMs. On a 2 Gb/s line, a single VM will have transfer speeds of close to 1.87 Gb/s (close to 94% utilization of the line), while eight VMs will each see approximately 236 Mb/s. Using these computations and the estimated throughput you would like per VM, you can determine the number of networks/NICs required to support the data traffic. For example, if 100 Mb/s is the goal for each VM and you are looking to support 20 VMs, then a minimum of two NICs/networks is required to support the data traffic from the VMs, plus one additional NIC/network for the XenServer management network. If your environment also includes provisioning services, those should be moved to a separate network as well. Citrix XenServer does allow for throttling of the networks associated with a VM.

With Citrix XenServer 5.6, there is a network throughput limit of just over 3 Gb/s. In a Flex-10 configuration, setting the line speed for any network to be greater than 3 Gb/s means no bandwidth above 3 Gb/s will be utilized, and that bandwidth is therefore wasted. It is better to create multiple 2.5-3 Gb/s networks using the Flex-10 capabilities.
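The NIC-count estimate above can be sketched as a small calculation. This is illustrative only (the function name is invented here); it follows the paper's figures, under which roughly ten 100 Mb/s VMs share one 1 Gb/s line, each actually seeing about 94% of its target rate.

```python
import math

# Sketch of the NIC-count estimate: a 1 Gb/s link runs at ~94% utilization
# (~940 Mb/s), so VMs sized at a 100 Mb/s target fit ten to a line, each
# actually achieving roughly 94 Mb/s.

def nics_for_vms(vm_count, per_vm_mbps, line_speed_mbps=1000.0):
    """Data NICs needed for `vm_count` VMs at a per-VM throughput target."""
    vms_per_nic = int(line_speed_mbps // per_vm_mbps)
    return math.ceil(vm_count / vms_per_nic)

# 20 VMs at 100 Mb/s each need two data NICs, as in the example above;
# add one more NIC for the XenServer management network.
print(nics_for_vms(20, 100))  # 2
print(nics_for_vms(1, 100))   # 1
```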

The HP advantage
When looking at best practices for implementing a virtualization solution, HP has hardware solutions that focus on optimizing the environment.

Virtual Connect Flex-10⁶


As stated before, networking is a potential bottleneck that must be addressed when building a virtualized solution. The number of NICs, and the hardware required to support those NICs, can be significant. For example, consider an ordinary network configuration using the HP ProLiant BL495c.⁷ By default, two embedded NICs and two interconnect modules are required in a BladeSystem c7000 enclosure to create two networks. If four networks are required, a mezzanine network card and two more interconnect modules are required; for eight NICs, another mezzanine card and four more interconnect modules are necessary.

6. Virtual Connect Technology: www.hp.com/go/virtualconnect
7. Complete information on HP BladeSystem can be found at www.hp.com/go/blades

Figure 1. Ordinary network configuration vs. Virtual Connect Flex-10: ordinary network configuration supporting eight NICs

With Virtual Connect Flex-10, the embedded Dual Port Flex-10 10GbE adapter supports eight network connections, four per port, and utilizes two Virtual Connect Flex-10 modules. This allows for eight physical NICs without the addition of any hardware. Adding two Flex-10 mezzanine cards and four Virtual Connect Flex-10 modules means a single G6 or later blade server can support up to 24 NICs. VC can be configured with four FlexNICs per 10 Gb connection. Dedicated speeds can be defined for each FlexNIC, or Flex-10 will assign speeds automatically. The total of the speeds for the FlexNICs cannot exceed 10 Gb, and any ports not given a dedicated speed are automatically allocated bandwidth from the remainder.
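The FlexNIC bandwidth rules above can be modeled as a toy allocator. This is a sketch of the constraint logic only, not of the actual Virtual Connect behavior; the function name and the even split of leftover bandwidth among auto-assigned FlexNICs are assumptions for illustration.

```python
# Toy allocator for Flex-10 FlexNIC speeds: up to four FlexNICs per 10 Gb
# port; the dedicated speeds must not exceed 10 Gb, and FlexNICs left as
# None (no dedicated speed) share the remaining bandwidth.

def allocate_flexnics(dedicated_gbps, port_gbps=10.0):
    """Return the effective speed of each FlexNIC on one 10 Gb port."""
    if len(dedicated_gbps) > 4:
        raise ValueError("at most four FlexNICs per 10 Gb port")
    fixed = sum(s for s in dedicated_gbps if s is not None)
    if fixed > port_gbps:
        raise ValueError("dedicated speeds exceed the 10 Gb port")
    auto = [s for s in dedicated_gbps if s is None]
    share = (port_gbps - fixed) / len(auto) if auto else 0.0
    return [share if s is None else s for s in dedicated_gbps]

# Three dedicated FlexNICs (3 + 3 + 2 Gb/s); the fourth gets the remaining 2.
print(allocate_flexnics([3, 3, 2, None]))  # [3, 3, 2, 2.0]
```

Note how the 3 Gb/s dedicated speeds align with the XenServer 5.6 per-network throughput ceiling discussed earlier.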

Figure 2. Ordinary network configuration vs. Virtual Connect Flex-10: Virtual Connect Flex-10 network configuration supporting eight NICs

Virtual Connect Flex-10 is supported on all c-Class blades with the addition of mezzanine cards, and is supported with the embedded NICs on the BL460c G6, BL490c G6, BL495c G5, BL495c G6, and BL685c G6 server blades. Citrix XenServer 5.6 fully supports the Virtual Connect Flex-10 solution, with one caveat: in a VC Flex-10 configuration, the best throughput of XenServer 5.6 is just over 3 Gb/s, so allocating more than 3 Gb/s to a NIC wastes bandwidth.

Citrix XenServer supports the creation of internal virtual networks that can be used for communication between VMs without touching the full networking infrastructure, but such networks can cause issues with the migration of VMs between servers: the internal network must exist on all servers in the pool and be configured identically, or moving VMs between servers will fail. Virtual Connect solves this problem by allowing creation of a network between the physical servers in which the traffic never leaves the blade enclosure or the Virtual Connect module. This does require dedicating a NIC within each physical server to the network, but all network traffic remains within the VC infrastructure, and Virtual Connect Flex-10 offers additional physical NICs without additional hardware.

Storage
HP has a complete StorageWorks portfolio,⁸ ranging from the StorageWorks X1000 Network Storage Systems to the XP arrays. For this environment, the X1000 through EVA solutions are supported. HP StorageWorks P4000 is an iSCSI SAN solution, with an optional upgrade to 10GbE, that provides storage clustering, network RAID, multi-site SAN capability, thin provisioning, and snapshots, and supports SATA and SAS drives. The StorageWorks P4000 SAN solution is center stage for entry to mid-level environments seeking to employ iSCSI technology to gain reliability, performance, and ease of management. EVA arrays support storage consolidation, disaster recovery, remote copy, remote replication and recovery, and high performance over fibre channel, and are designed for the large enterprise or environments requiring high capacity, high performance, and high availability.

StorageWorks P4000⁹
HP StorageWorks P4000 offers performance and ease of management over iSCSI networks. P4000 SANs are also easy to scale, allowing you to purchase what you need today and add storage in the future. The P4000 SAN focuses on five features:
1. Storage clustering
2. Network RAID
3. Thin provisioning
4. Snapshots
5. Remote copy

A P4000 cluster consists of individual storage modules, or nodes. Each node contains a processor, two GbE network connections, Smart Array RAID controllers, and either SATA or SAS hard drives, and each node runs the SAN/iQ storage management software. The nodes are combined to create a cluster, or pool, and are managed through a centralized management console. Volumes are created across the nodes in the cluster and accessed by servers or virtual machines. To grow a cluster, simply add another node; SAN/iQ will restripe the volumes across all nodes now in the cluster without requiring the volume to be taken off-line. Volumes can also be dynamically migrated between clusters with no downtime if needed, all handled through the centralized management console.

8. Information about HP's storage portfolio can be found at www.hp.com/go/storageworks
9. More information about StorageWorks P4000 solutions can be found at www.hp.com/go/P4000

Network RAID provides data protection and an always-available data store, allowing synchronous replication of volumes across nodes in the cluster. Data is mirrored across the cluster, so if any one node goes offline, all data is still available. Data protection is defined on a per-volume basis and can be configured on the fly with zero downtime. P4000 clusters also support multi-site SANs, either between racks in a data center or between data centers in different sites. Data is replicated across the sites in such a manner that an entire rack or data center in a multi-site SAN can fail and the data still remains available.

Another feature of the P4000 is SAN/iQ Thin Provisioning. Traditionally, volume capacity is allocated and dedicated when the volume is created; if you use only 10 GB of a 100 GB volume, 90 GB is lost, or at least not utilized. With thin provisioning, capacity is deducted only when you write to the volume. In this example, the host would see 100 GB, but only 10 GB of space would be given to the volume on the SAN; as the host writes more data, the volume grows. As more data is written, the amount of physical free space decreases; if more physical space is required, add another node to the cluster. For example, if a virtual desktop solution has 100 users and each is allocated 50 GB of storage, a total of 5000 GB of storage is required. However, most of the users will not need all 50 GB at the beginning, and some will never reach the 50 GB limit. With thin provisioning, start with 3000 GB of storage, and as the physical storage fills up, add more nodes to the cluster to provide more physical storage. HP StorageWorks P4000 also has SAN/iQ SmartClone, which, when combined with thin provisioning, allows for instant provisioning of volume copies for test and development with no impact on production or actual live data.

HP StorageWorks P4000 provides snapshots through SAN/iQ Snapshots, an instant point-in-time copy of a volume. The original volume is preserved, writes go to the snapshot, and reads are served from a combination of the snapshot and the original volume.
Snapshots can be taken manually or scheduled through the management console and can easily be rolled back if necessary. For disaster recovery, remote copies can be made. Remote copies are based on snapshots, with the snapshot copied to a remote cluster using thin provisioning and therefore not requiring pre-allocated space. The copies can be scheduled based on your DR needs, with the initial copy transferring the complete volume and all subsequent copies transferring only the changed data. Remote copies can be made over WAN links, and bandwidth can be managed through the management console.

When looking at storage, IOPS (I/Os per second) is the factor to consider: you need to ensure that the IOPS for the storage will support your environment. If the VMs you are running require 100 IOPS each and there will be 20 VMs, then 2000 IOPS are required from the storage to prevent it from becoming the bottleneck. With the HP StorageWorks P4000, gaining additional IOPS simply requires adding nodes to the cluster, which adds storage capacity and improves the IOPS of the cluster.

The HP StorageWorks P4000 iSCSI solution fits very well with the Citrix XenServer environment. Utilizing iSCSI allows for configuring the environment for both VM storage and user data. However, when using the P4000 for VM storage, LUN sizing becomes important: it is not recommended to put more than 32 VMs in a single LUN on a P4000 storage solution, and if using Citrix StorageLink to manage the SAN, it is recommended to create a single LUN for each VM.
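The IOPS sizing above reduces to back-of-the-envelope arithmetic. This sketch is illustrative only; the function names are invented, and the 900 IOPS-per-node figure is a placeholder assumption, not a measured P4000 value.

```python
import math

# Back-of-the-envelope IOPS sizing: required IOPS scales with VM count, and
# on a P4000 cluster extra IOPS come from adding nodes. The per-node figure
# used below is a placeholder, not a measured P4000 number.

def required_iops(vm_count: int, iops_per_vm: int) -> int:
    """Aggregate IOPS the storage must sustain."""
    return vm_count * iops_per_vm

def p4000_nodes_needed(total_iops: int, iops_per_node: int) -> int:
    """Nodes required to deliver the aggregate IOPS."""
    return math.ceil(total_iops / iops_per_node)

print(required_iops(20, 100))         # 2000, as in the example above
print(p4000_nodes_needed(2000, 900))  # 3 nodes at an assumed 900 IOPS/node
```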

HP ProLiant BL4xx server blades¹⁰


With physical NICs and memory being critical to reducing server footprint, power consumption, and cooling needs in a virtualized environment, the HP BladeSystem ProLiant BL49x virtualization blades were designed to address all of these concerns. The BL490c G6 is a 2-socket quad-core or six-core Intel Xeon server blade with 18 DIMM slots and up to 192 GB of memory. The BL495c G6 is a 2-socket six-core AMD Opteron server blade with 16 DIMM sockets and up to 128 GB of memory.

10. More information about HP BladeSystem and server blades can be found at www.hp.com/go/blades

Both server blades have an embedded NC532i Dual Port Flex-10 10GbE multifunction server adapter, allowing for eight network connections using Flex-10 technology. In addition, both have two mezzanine card slots for additional Flex-10 adapters or fibre channel adapters. For disk capacity, both can be configured with up to two non-hot-plug SATA Solid State Drives (SSDs) for local storage. Both servers utilize the existing c7000 and c3000 blade enclosure architecture and, with Virtual Connect Flex-10 technology, can reduce network connection costs by 75%.

Hints and tips


When planning the virtualized environment, use internal networks (whether internal to XenServer or internal to HP Virtual Connect) wherever possible to keep traffic off the external networks. A multi-server pooled configuration allows for XenMotion live migration, the use of Citrix's XenServer Workload Balancing, and the configuration of HA failover for VMs. Servers can be put into maintenance mode to handle software or hardware updates. As noted earlier in this document, a single server can support 28 VMs, but spreading that load across three physical servers provides more functionality through migration, load balancing, and HA.

When backing up a XenServer, several steps must be taken. Running the backup option in XenCenter backs up only the system state of the physical server; it captures neither the information tying a VM name in XenCenter to a physical storage location and VHD file on storage, nor the physical VHD file itself. To back up the VMs, they must be exported. One more step remains: backing up the storage repository (SR) metadata for the VMs. This metadata ties the information on the physical XenServer to the necessary files in the storage repository. By default, if an SR is detached from a server and attached to a new server, the new server has no information about the VM VHD files in the storage repository. Performing a system restore will restore the names of the VMs, but will not associate the VM names with the correct VHD files in the SR. To back up the SR metadata, go to the command line of the XenServer and run the xsconsole command; under Backup, Restore, and Update is the option to back up the Virtual Machine Metadata. Once this metadata is backed up, restoring a failed server requires these steps:
1. Install XenServer on the server
2. Restore the system state information using XenCenter, PVC, or xsconsole
3. Reboot the server
4. Attach the storage repository to the server
5. Restore the VM metadata from the storage repository using xsconsole

If the failed server is part of a resource pool, a restore requires only steps 1, 2, and 3. Upon reboot, the server will rejoin the pool and have access to all storage repositories and VM information.
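A minimal sketch of the backup side of this procedure using the xe CLI. The VM name, file paths, and SR UUID are placeholders, and the run() wrapper only prints each command, so the sketch is safe to execute outside a XenServer host; on a real host, replace the echo with actual execution.

```shell
#!/bin/sh
# Dry-run wrapper: prints each command instead of executing it.
# On a real XenServer host, change the body to: "$@"
run() { echo "would run: $*"; }

# 1. Export the VM itself (XenCenter's server backup does NOT include this).
run xe vm-export vm=MyAppServer filename=/backup/MyAppServer.xva

# 2. Back up the host system state (what XenCenter's backup option captures).
run xe host-backup file-name=/backup/host-state.bak

# 3. Back up the VM metadata to the SR, so a re-attached SR can map
#    VM names back to their VHD files (the xsconsole option does the same).
run xe-backup-metadata -c -u 2b9a1c5e-0000-0000-0000-placeholder
```

Scheduling the export and metadata steps together ensures that the SR metadata on disk always matches the most recent VM exports.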

XenApp on XenServer
Testing conducted at HP running XenApp in a virtualized environment has shown the overhead versus running bare-metal to be 10% or less, depending on the hardware involved; in some cases, tests have shown negligible overhead. The application being virtualized greatly affects performance. HP testing has shown that the optimal configuration for a BL460c G6 with the Intel Xeon 5500 series processor is 4/4/8: four VMs per server, each VM with four vCPUs and 8 GB of memory. For a BL685c G6 with the Six-Core AMD Opteron processor, two different configurations were tested. The first was a 4/6/15 configuration: four VMs, each with six vCPUs and 15 GB of memory. The second was 6/4/10: six VMs, each with four vCPUs and 10 GB of memory. These configurations allowed the optimal load to be generated on the servers.

Another example looks at running 32-bit XenApp. Two XenApp tests were performed: one on a bare-metal server to set the baseline, and one virtualized. The virtualized test ran six VMs, each configured with four vCPUs, 6 GB of memory, and a 9 GB pagefile. The baseline bare-metal test supported 140 users, whereas the virtualized test supported 483 users. The issue is that when running 32-bit XenApp on a 4P BL680c under Microsoft Windows Server 2003, a memory limitation prevents fully utilizing the physical server. By moving the environment into virtual machines, we are able to support six instances, greatly improving the utilization of the physical server. It should be noted that pagefiles were created for the VMs simply to maintain an accurate comparison between the physical and virtual tests. In reality, the pagefile is not needed, and the VM should be created with enough memory to avoid needing a large pagefile.
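The 4/4/8 configuration described above could be applied to an existing VM with xe commands along these lines. The VM UUID is a placeholder and the run() wrapper only prints the commands, so treat this as a sketch of the shape of the configuration rather than a verified procedure:

```shell
#!/bin/sh
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "would run: $*"; }

VM_UUID=0f6d9a00-0000-0000-0000-placeholder
GIB=$((1024 * 1024 * 1024))

# Four vCPUs per XenApp VM (the first "4" in 4/4/8).
run xe vm-param-set uuid=$VM_UUID VCPUs-max=4 VCPUs-at-startup=4

# 8 GB of memory, with all four limits equal so the allocation is fixed,
# matching the fixed-memory test configuration.
run xe vm-memory-limits-set uuid=$VM_UUID \
    static-min=$((8 * GIB)) dynamic-min=$((8 * GIB)) \
    dynamic-max=$((8 * GIB)) static-max=$((8 * GIB))
```

The 4/6/15 and 6/4/10 configurations follow the same pattern with the vCPU count and memory size changed per VM.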

Summary
As discussed in this white paper, planning is very important. Creating a virtualized solution is similar to creating a data center: one must consider the storage requirements, server requirements, and networking configuration and layout. The same considerations apply when creating, sizing, and characterizing a virtualized environment. It is best to settle storage and network requirements first when creating a virtualized solution. Adding more servers to a pool, or adding more CPUs or physical memory to a server, is fairly straightforward, and increasing the number of vCPUs or amount of memory in a VM is simple with tools like XenCenter. However, if the storage or network requirements are wrong, the fix can be very complicated.


For more information


HP BladeSystem, www.hp.com/go/blades
HP XenServer, www.hp.com/go/citrix
HP Virtual Connect Flex-10, www.hp.com/go/virtualconnect
Citrix, www.citrix.com
HP/Citrix solutions, www.hp.com/go/citrix
HP XenApp on XenServer testing, www.hp.com/solutions/activeanswers/citrix
HP StorageWorks P4000, www.hp.com/go/lefthandnetworks
HP Storage, http://www.hp.com/go/storage

To help us improve our documents, please provide feedback at


http://h20219.www2.hp.com/ActiveAnswers/us/en/solutions/technical_tools_feedback.html.

Copyright 2009 - 2010 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. AMD Opteron is a trademark of Advanced Micro Devices, Inc. Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries. 4AA2-8023ENW, Created July 2009; Updated July 2010, Rev. 1
