
SAP on Hyper-V Performance Considerations

Applies to:
SAP products in a virtualized environment running with Hyper-V on Windows Server 2008 R2

Summary
This paper and its conclusions are based on a small project within the Microsoft Platforms team at SAP, with the aim of comparing bare-metal and virtualized deployments on the same hardware. The final goal was to determine how much throughput can be generated in a virtualized environment compared to a bare-metal environment, and how functionality like NUMA or Hyperthreading impacts virtualized environments.

Authors: Samuel Lang (connmove GmbH), Juergen Thomas (Microsoft Corporation)
Company: SAP AG

Created on: 25 July 2012

Author Bios
Samuel Lang is a Senior Consultant at connmove for SAP on the Microsoft platform. In recent years, he has been involved in many SAP implementations on different releases of Windows Server, SQL Server, and Hyper-V. Juergen Thomas is a Principal Program Manager Lead at Microsoft. He has been working in the SAP / Windows / SQL Server area for more than 16 years. Today he leads a Microsoft development team, located mainly at SAP headquarters in Germany, that works hand in hand with SAP to integrate SAP NetWeaver with Windows and SQL Server.

SAP COMMUNITY NETWORK 2012 SAP AG

scn.sap.com 1

Table of Contents
Project Description
  Physical setup
  Software and server configuration and setup
  Windows Server 2008 bare-metal measurement
  Windows Server 2008 based Hyper-V measurements
  Windows Server 2008 R2 based Hyper-V measurements
    Hyperthreading and Windows Server 2008 R2 Hyper-V
  General virtualization considerations
    NUMA effects
    CPU pinning and virtualization
    Dynamic Memory
  Using Hyper-V Live Migration
    Setup
    What is Live Migration
    Live Migration and physical dependencies
    Live Migration of a SAP ABAP dialog instance
Additional Resources
Copyright


Project Description
This paper and its conclusions are based on a small project within the Microsoft Platforms team at SAP, with the aim of comparing bare-metal and virtualized deployments on the same hardware. The final goal was to determine how much throughput can be generated in a virtualized environment compared to a bare-metal environment, and how functionality like NUMA or Hyperthreading impacts virtualized environments. Results and findings of these measurements are provided in the following sections of this paper.

Physical setup
In order to have representative hardware, the measurements were conducted on 4-socket servers with Intel Xeon 7560 (Nehalem-EX) processors. Each processor had 8 cores and, with Hyperthreading, 16 logical CPUs. Hence a maximum of 64 logical CPUs was available. The servers had 128 GB of memory. The architecture of all new commodity-type hardware is NUMA-oriented, which means that not all memory is close or local to each of the processors. In this particular case, every processor had 32 GB of local memory, and the remaining 96 GB were accessed remotely. The impact of such a configuration is discussed later. For an explanation of the terms used to describe the hardware, see the following blog post:
http://blogs.msdn.com/b/saponsqlserver/archive/2010/09/28/windows-2008-r2-groups-processors-sockets-cores-threads-numa-nodes-what-is-all-this.aspx

Software and server configuration and setup
As a methodology to compare performance results on different virtualization layers, we decided to use a frequently used SAP workload simulation from the SAP Sales and Distribution area. The measure is the throughput of simulated users. More users mean more CPU consumption within the VM, and therefore higher CPU consumption on the VM host. More users also create a higher load in terms of network I/O and network volume, as well as higher I/O operations per second and higher I/O volume towards the storage.
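The hardware layout described under Physical setup can be reduced to a few numbers. A minimal sketch, with Python used only as a calculator:

```python
# Toy model of the test servers as described above:
# 4 sockets, 8 cores per socket, Hyperthreading, 128 GB total memory.
SOCKETS = 4
CORES_PER_SOCKET = 8
THREADS_PER_CORE = 2               # with Hyperthreading enabled
TOTAL_MEMORY_GB = 128

logical_cpus = SOCKETS * CORES_PER_SOCKET * THREADS_PER_CORE
local_gb_per_node = TOTAL_MEMORY_GB // SOCKETS       # memory local to one socket
remote_gb_per_node = TOTAL_MEMORY_GB - local_gb_per_node

print(logical_cpus)        # 64 logical CPUs in total
print(local_gb_per_node)   # 32 GB local to each NUMA node
print(remote_gb_per_node)  # 96 GB reachable only via remote NUMA access
```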
The OS releases used were either Windows Server 2008 or Windows Server 2008 R2. The SAP release used was SAP ERP 6.0 including EHP4, on Microsoft SQL Server.

Windows Server 2008 bare-metal measurement
In order to have a baseline for comparison, we conducted tests on bare metal. We built one SAP system on the host, with one SQL Server instance and multiple SAP instances working as one SAP ERP system. With Hyperthreading disabled (32 CPU threads), 6,457 users of our simulated workload could run their business process cycles with less than 1 sec average response time per dialog step. With Hyperthreading enabled (64 CPU threads), the number of users, as expected, grew to 8,620. This represents an increase of about 33% simply by enabling Hyperthreading on the server hardware. This value greatly exceeds the improvements that were achieved with the first generation of Intel's Hyperthreading a few years ago.

Windows Server 2008 based Hyper-V measurements
In a first step, the performance and scalability of Windows Server 2008 as a Hyper-V host was tested. To do this, we built nearly one dozen VMs, each a small SAP system with an SAP instance and a SQL Server instance running within the VM. We used Windows Server 2008 as both the host operating system and the guest OS in these VMs. We wanted to measure a typical consolidation scenario, where a customer runs multiple independent SAP systems isolated in VMs on one host server. As such, the scenario measured differed a bit from the bare-metal scenario, where we applied workload against only one SAP ERP system, which utilized all the resources of the server. Hence, the hosted scenario implies more overhead from the start. Nevertheless, it reflects the majority of virtualized customer scenarios. As a result of the configuration, and the fact that there is a virtualization layer involved, the throughput is expected to be lower than on the native installation.
Comparing the results achieved in this scenario against the bare-metal results, we could realize at least 80% of the potential of the bare-metal hardware with the virtualized hosting scenario measured in this step. This number of 80% approximately reflects our sizing recommendations for virtualized systems. It certainly differs depending on the load and the configurations compared between virtualized and bare metal. Note that we were not able to use Hyperthreading in this case, since Windows Server 2008 only supports up to 32 CPUs on a host server. Hence, the comparison to bare metal was done against the throughput achieved without Hyperthreading.

Windows Server 2008 R2 based Hyper-V measurements
The hosted virtualized scenario of multiple SAP ERP systems isolated in VMs was tested again with Windows Server 2008 R2 as host and guest operating system. The other software components were not changed. Major scalability features like Second Level Address Translation, TCP/IP Chimney Offload, and Jumbo Frames were added in Windows Server 2008 R2 Hyper-V. Improvements in the virtual hard disk stack and further improvements to the Hyper-V Integration Services were also expected to improve the overall throughput and to decrease the overhead between the virtualized scenario and the bare-metal baseline scenario. Repeating the measurement series, it indeed turned out that the throughput with Windows Server 2008 R2 as host and guest OS increased compared to Windows Server 2008. The overhead compared to the bare-metal scenario shrank to around 11%. Comparing the bare-metal results with the virtualized hosting results achieved with Windows Server 2008 and Windows Server 2008 R2, we see these percentages:

[Figure: Bar chart comparing Hyper-V v1 (Windows Server 2008) and Hyper-V v2 (Windows Server 2008 R2) throughput relative to native, plus the v2 performance gain in percent, for configurations of 1, 2, 4, 6, 8, and 10 VMs per host.]
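Reduced to the headline figures from the text (roughly 80% of bare metal for Hyper-V v1, and an 11% overhead for v2), the comparison can be sketched as:

```python
# Throughput relative to bare metal, per the measurements described above.
v1, v2 = 0.80, 0.89      # Hyper-V on WS2008 vs. WS2008 R2

print(f"v1 overhead: {1 - v1:.0%}")           # 20%
print(f"v2 overhead: {1 - v2:.0%}")           # 11%
print(f"v2 gain over v1: {v2 / v1 - 1:.1%}")  # roughly 11% more throughput
```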

Hyperthreading and Windows Server 2008 R2 Hyper-V
Since the Windows Server 2008 R2 Hyper-V version extended its maximum limit of CPU threads for the host server, Hyperthreading could be used on our Intel hardware. In order to interpret and understand the result, we need to keep in mind that the Windows hypervisor does not distinguish between a physical CPU thread and a logical CPU thread (hyperthreaded CPU). As mentioned earlier, when executing the workload on bare metal, we saw a 33% throughput increase by doubling the number of CPU threads through enabling Hyperthreading. However, doubling the number of CPU threads and only getting 33% more throughput means that the average throughput achievable by one CPU thread is lower than without Hyperthreading. This lower per-thread throughput is not so much an effect of the physical CPU thread slowing down. Rather, it is due to the effect that the hyperthreaded or logical CPU thread does not provide the same throughput as the physical one. Hence, not distinguishing physical and logical CPU threads when assigning virtual CPUs of VMs in a hyperthreaded scenario results in side effects when running the VMs. The observed side effects were on throughput and latency, with the result that the SAP instances within the VMs became more unstable. This means the difference between maximum and minimum response time and throughput, while having the same number of users trying to do their work within a VM, was several orders of magnitude larger than in the runs done without Hyperthreading, where the difference was minimal. As a result of this large variation in throughput and response times of the single VMs, the overall CPU consumption on the host server became very uneven. As a result, it was hardly possible to achieve more throughput compared to the non-hyperthreaded, virtualized hosted scenario. Due to this fact, it is common to disable SMT (Hyperthreading) if reliable and deterministic throughput and latency are required. This is different from other scenarios where deterministic and reliable throughput and latency are not a priority. So it is important to analyze the environment and decide about enabling or disabling SMT at the hypervisor level.

General virtualization considerations
Besides the effects around Hyperthreading described above, there are other effects that are worth discussing.

NUMA effects
One of the observations when operating more than 4 VMs on the server we used was that some VMs showed very different performance results from other guest partitions with the same configuration. The fastest VMs had more than 20% higher throughput than the slowest VM. After further tests, we discovered that the starting sequence of the guest partitions greatly influenced the throughput measured later. To explain this behavior, it is essential to understand the specific hardware and software design. As mentioned earlier, the hardware architecture consists of four sockets, each in combination with 32 GB of DDR RAM. So each CPU socket has 32 GB of local main memory, which is the fastest to access, and 96 GB of main memory connected remotely via the neighbor sockets. Accessing the remote main memory takes longer than accessing the local memory DIMMs. Since Hyper-V is NUMA aware, assignment of the VMs is usually done so that one VM uses the memory of one NUMA node only, and also uses CPU threads of that single NUMA node only.
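The NUMA-aware placement just described can be illustrated with a toy model. This is not Hyper-V's actual placement algorithm, only a sketch using our node size of 32 GB and the 20 GB VM size from our tests:

```python
# Toy model: four NUMA nodes with 32 GB local memory each; VMs of 20 GB,
# placed greedily. Not Hyper-V's real placement logic, only an illustration.
NODES, NODE_GB, VM_GB = 4, 32, 20

def place_vms(n_vms):
    """Return the indices of VMs whose memory had to span more than one node."""
    free = [NODE_GB] * NODES
    spanning = []
    for vm in range(n_vms):
        fits = [i for i, f in enumerate(free) if f >= VM_GB]
        if fits:
            free[fits[0]] -= VM_GB      # whole VM fits one node: local access only
        else:
            need = VM_GB                # spill across nodes: remote memory access
            for i in range(NODES):
                take = min(free[i], need)
                free[i] -= take
                need -= take
            spanning.append(vm)
    return spanning

print(place_vms(4))   # [] - the first four VMs each fit into one node
print(place_vms(6))   # [4, 5] - VMs started later must span nodes
```

The model reproduces the observed start-order effect: the first four VMs each get a whole node, while VMs started later end up with memory on at least two nodes and therefore slower remote accesses.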
Hyper-V's NUMA handling is explained in detail in this excellent blog post:
http://blogs.msdn.com/b/tvoellm/archive/2008/09/29/hyper-v-performance-counters-part-five-of-many-hypervm-vm-vid-numa-node.aspx

The effect we observed, that the VMs started later provided less throughput, was due to Hyper-V allocating the four VMs started first to the 4 NUMA nodes of the server. Hence, each of these VMs was allocated 20 GB (our VM memory setting) of NUMA-local memory, and the virtual CPUs of those VMs were assigned within the same NUMA node. However, starting more VMs (requesting 20 GB of memory each) meant that the requested memory could no longer be allocated exclusively from one NUMA node, but instead needed to be allocated from at least 2 NUMA nodes. The result was that the CPUs running those VMs needed to access remote memory. Since remote memory access is slower, the VMs that had their memory spread over more than one NUMA node delivered less throughput.

CPU pinning and virtualization
On native hardware, it can be useful in some environments to configure CPU affinity for database and SAP processes. In a virtual machine, it is also possible to set CPU affinity for disp+work or database processes. But this means pinning a process to a virtual processor, not to a physical one. Since the hypervisor schedules virtual processor time slices between different logical threads, there is no usable relationship between a virtual CPU and the physical hardware. Therefore, do not set CPU affinity for processes inside a guest partition as long as there is no relationship between virtual and physical processors. At best it achieves nothing; at worst it costs throughput.

Dynamic Memory
CPU utilization in a physical environment is usually much lower than memory, disk, and sometimes network utilization.
Hyper-V Dynamic Memory is explained in detail in this blog series (parts 1-5):
http://blogs.technet.com/b/virtualization/archive/2010/03/18/dynamic-memory-coming-to-hyper-v.aspx

In general, Dynamic Memory is useful to achieve a higher consolidation grade of lightly stressed systems on one physical server. However, using more memory within the guest partitions than the physical memory available leads to paging. The Hyper-V approach to paging out is directly linked to the guest partition operating systems, so that the virtualized OS kernel can decide which segments should be paged to disk. Nevertheless, memory over-commitment should be avoided in a high-performance environment. SAP does not recommend over-committing memory, and only provides performance support if no memory over-commitment is configured.

Using Hyper-V Live Migration

Setup
In order to test the impact of Hyper-V Live Migration on an SAP deployment, we used a different set of hardware than the one all the measurements so far were based on. We used an HP-Microsoft joint product called the Database Consolidation Appliance. More details about this product can be found here. As the appliance is built, it allows deployment of other software components than just SQL Server. Hence, we used it to consolidate smaller SAP systems on this appliance. Some were running SAP and SQL Server in the same VM. Others were hosting SAP systems with multiple SAP dialog instances in several different VMs, plus a dedicated DBMS server. However, the focus was simply to test the principles and the impact of Live Migration on different SAP components while under load.

What is Live Migration
Hyper-V Live Migration allows moving a VM from one host server to another host server without interrupting the services running in the VM. There are prerequisites necessary to allow Live Migration. A very good overview of these requirements can be found here. The hardware configuration we used was set up according to the best practices for Hyper-V and Live Migration. The principal way Live Migration works is to build up the memory image of the VM on the other host. This is done in several phases:
- Full copy of the VM memory from the source host to the destination host. While this full copy is taking place, changes to memory pages within the still-running source VM are tracked.
- Copy of dirty pages: pages that were changed since the full copy started are copied to the destination VM. Several more passes of copying dirty pages might be necessary to get down to an extremely low number of outstanding changes to be copied. In order to get down to a low number of final pages to be copied, the throughput of the VM might be artificially throttled.
- Final transition, where the services become active on the destination VM. The outage on the network should stay within the TCP/IP connection time-outs.
- Destruction of the old VM on the source host.
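The pre-copy phases above can be sketched numerically. The 16 GB VM size and the 125 MB/s copy rate follow the scenario in this paper; the 40 MB/s page-dirtying rate is an invented figure purely for illustration:

```python
def precopy_rounds(vm_mb=16 * 1024, bw_mbps=125.0, dirty_mbps=40.0,
                   stop_mb=64.0, max_rounds=20):
    """MB still left to copy after each pass, until the remainder is small
    enough for the brief final stop-and-copy transition."""
    remaining, history = float(vm_mb), []
    for _ in range(max_rounds):
        copy_time = remaining / bw_mbps     # seconds spent on this pass
        remaining = dirty_mbps * copy_time  # pages dirtied again meanwhile
        history.append(round(remaining, 1))
        if remaining <= stop_mb:
            break
    return history

# Remainder shrinks geometrically (by dirty/bandwidth = 0.32 per pass),
# which is why only a handful of dirty-page passes are needed.
print(precopy_rounds())
```

If the dirty rate approached the copy bandwidth, the loop would not converge, which is exactly why the VM's throughput may be artificially throttled for the final passes.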

Live Migration and physical dependencies
Network bandwidth is one of the potential resource bottlenecks in the steps listed above. Assuming that there is a 1 Gbit Ethernet card on the host server dedicated to Live Migration, the theoretical copy rate of a Live Migration between two hosts is around 125 MB/sec. This assumes there is no overhead, no inefficiencies, and no other Live Migration taking place in parallel. Applying this theoretical throughput to different VM sizes, it might take around 128 sec to migrate a VM with 16 GB of memory, and more than 8 min to migrate a VM with 64 GB. Since real setups do not even get close to these theoretical limits, one needs to allow for double these theoretical times in normal productive setups with 1 Gbit Ethernet bandwidth to perform a Live Migration.

Live Migration of a SAP ABAP dialog instance
First, a VM running an SAP ABAP dialog instance was migrated from one host to another. The VM itself had 16 GB of memory. We executed the migration under different grades of load and measured migration time, throughput, CPU consumption, and eventual errors where transactions might fail.

Baseline Tests
The first test was migrating a VM with a running SAP instance and no load applied.


In our environment, it took 3:38 min to migrate the VM.

A baseline test (no migration) of the workload resulted in:
- Around 35% CPU consumption
- Around 35,000 SAP dialog steps per hour of standard Sales and Distribution workload
- A response time of around 200 ms running the standard SAP Sales and Distribution load over a fixed interval

First Migration Test
Taking the same scenario again and applying the same workload, but migrating the VM to another host, the following results were achieved:
- The migration took 4:47 min, compared to the 3:38 min it took without load.
- The CPU load was higher, with an average of 40% and longer peaks of 100% CPU.
- Response times within an 11 min load interval averaged 1.5 sec (around 0.2 sec without migration), with peaks of multi-second response times to the SAP clients. The worst response times were bundled in a 2 min interval, which represented the incremental copies of the dirty pages. Once the final move was done, response times returned to normal. Response times during the initial copy of the VM were not impacted at all.
- The throughput was reduced to around 31,000 dialog steps, which represents around 12% less throughput within the 7 min load interval. Throughput showed the same pattern as response time.

[Figure: As a comparison, the CPU consumption measured within the VM for the workload running without migration (left) and the CPU consumption measured in the VM during Live Migration of the VM (right).]

The workload distributes more evenly again after the migration, and the resource consumption does not have as many spikes in the phase after the migration. However, the time period marked with the red box is the period where the SAP instance showed very low throughput and high response times towards the SAP GUI client.

Second Load Test with Migration
For the next test, the workload was increased. For the case of not migrating, we measured the following:
- A workload of around 88,000 dialog steps of SAP standard Sales and Distribution workload
- An average response time to the SAP GUI of 250 ms
- An average CPU resource consumption of around 67%

Applying the same workload again and migrating the VM, we found:
- It took 5:07 min to migrate the VM to another host.
- The workload throughput in the 12 min workload interval was reduced to 73,000 SAP dialog steps/hour, from the original 88,000 dialog steps in the same 12 min period without migration. This represents around 17% lower throughput. Again, the lower throughput was concentrated in around 3 min of the 12 min phase; this period had extremely low throughput.
- The response time in the same 12 min interval was around 2.2 sec (0.22 sec without migration), with a 3 min interval that was especially high in response times.
- The CPU resource consumption averaged 73%, with around 3 min at 100%. This was the very same 3 min where throughput was low and response times were high.
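As a cross-check of the second load test, the degradation can be computed directly from the reported numbers:

```python
# Second load test above, reduced to the headline numbers.
steps_no_mig, steps_mig = 88_000, 73_000
rt_no_mig, rt_mig = 0.22, 2.2    # average response time in seconds

loss = (steps_no_mig - steps_mig) / steps_no_mig
print(f"throughput loss: {loss:.0%}")                      # 17%
print(f"response time factor: {rt_mig / rt_no_mig:.0f}x")  # 10x on average
```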

It is a fair assumption that the more highly loaded the server is, and the more CPU resources are in use, the longer the phase of Live Migration that impacts throughput and performance. Nevertheless, the timing determined on this specific hardware might not be representative of other hardware configurations. Network throughput and capabilities are crucial for the timing of Live Migration and its impact on the workload. Based on the experience gained when measuring the impact of Live Migration, it is recommended to maximize the network bandwidth that can be leveraged by Live Migration. Blade frameworks often allow the creation of virtual networks over one physical network adapter. Some vendors not only allow setting fixed bandwidths for those different networks, but also allow defining ranges or QoS capabilities, which express priorities for packets of the different virtual networks. With these methods and a 10 Gigabit network infrastructure underneath, the impact of Live Migration and the duration of a copy can be minimized.
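The bandwidth reasoning above can be packed into a back-of-the-envelope estimator. The function name and the flat 2x overhead_factor are our own simplifications (the text suggests doubling the theoretical time), not a Microsoft formula:

```python
def estimate_migration_seconds(vm_memory_gb, nic_gbit=1.0, overhead_factor=2.0):
    """Rough Live Migration duration: line-rate copy time, doubled for real
    productive setups as suggested above. Ignores dirty-page re-copies."""
    bytes_per_sec = nic_gbit * 1e9 / 8          # 1 Gbit -> 125 MB/s
    theoretical = vm_memory_gb * 2**30 / bytes_per_sec
    return theoretical * overhead_factor

print(round(estimate_migration_seconds(16)))   # ~275 s; we measured 3:38-4:47
print(round(estimate_migration_seconds(64)))   # ~1100 s over a 1 Gbit link
```

With a 10 Gbit link (nic_gbit=10.0), the same 16 GB VM would copy in well under half a minute, which matches the closing recommendation to maximize network bandwidth.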


Additional Resources
Check the SAP interoperability area at www.microsoft.com/sap and the .NET interoperability area in the SAP Developer Network at http://sdn.sap.com for updates or additional information.

Windows Server 2008 Hyper-V Overview
http://www.microsoft.com/downloads/details.aspx?FamilyID=0fe4e411-8c88-48c2-8903-3fd9cbb10d05&DisplayLang=en

Step-by-Step Guide to Getting Started with Hyper-V
http://www.microsoft.com/downloads/details.aspx?FamilyID=bcaa9707-0228-4860-b088-dd261ca0c80d&DisplayLang=en

Windows Server 2008 Hyper-V Performance Tuning Guide
http://www.microsoft.com/whdc/system/sysperf/Perf_tun_srv.mspx

Converting Physical Computers to Virtual Machines in VMM (P2V Conversions)
http://technet.microsoft.com/en-us/library/bb963740.aspx

Step-by-Step Guide for Testing Hyper-V and Failover Clustering
http://www.microsoft.com/downloads/details.aspx?FamilyID=cd828712-8d1e-45d1-a290-7edadf1e4e9c&DisplayLang=en

Quick Migration with Hyper-V
http://download.microsoft.com/download/3/B/5/3B51A025-7522-4686-AA16-8AE2E536034D/Quick%20Migration%20with%20Hyper-V.doc

Running SQL Server 2008 in a Hyper-V Environment - Best Practices and Performance Recommendations
http://sqlcat.com/whitepapers/archive/2008/10/03/running-sql-server-2008-in-a-hyper-v-environment-best-practices-and-performance-recommendations.aspx

Microsoft Virtualization Blog
http://blogs.technet.com/virtualization/

SAP SDN Virtualization Information
https://www.sdn.sap.com/irj/sdn/windows-virtualization

Microsoft/SAP Customer Information Center
http://www.microsoft.com/isv/sap/


Copyright
Copyright 2012 SAP AG. All rights reserved. No part of this publication may be reproduced or transmitted in any form or for any purpose without the express permission of SAP AG. The information contained herein may be changed without prior notice. Some software products marketed by SAP AG and its distributors contain proprietary software components of other software vendors. Microsoft, Windows, Excel, Outlook, and PowerPoint are registered trademarks of Microsoft Corporation. IBM, DB2, DB2 Universal Database, System i, System i5, System p, System p5, System x, System z, System z10, System z9, z10, z9, iSeries, pSeries, xSeries, zSeries, eServer, z/VM, z/OS, i5/OS, S/390, OS/390, OS/400, AS/400, S/390 Parallel Enterprise Server, PowerVM, Power Architecture, POWER6+, POWER6, POWER5+, POWER5, POWER, OpenPower, PowerPC, BatchPipes, BladeCenter, System Storage, GPFS, HACMP, RETAIN, DB2 Connect, RACF, Redbooks, OS/2, Parallel Sysplex, MVS/ESA, AIX, Intelligent Miner, WebSphere, Netfinity, Tivoli and Informix are trademarks or registered trademarks of IBM Corporation. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. Adobe, the Adobe logo, Acrobat, PostScript, and Reader are either trademarks or registered trademarks of Adobe Systems Incorporated in the United States and/or other countries. Oracle is a registered trademark of Oracle Corporation. UNIX, X/Open, OSF/1, and Motif are registered trademarks of the Open Group. Citrix, ICA, Program Neighborhood, MetaFrame, WinFrame, VideoFrame, and MultiWin are trademarks or registered trademarks of Citrix Systems, Inc. HTML, XML, XHTML and W3C are trademarks or registered trademarks of W3C, World Wide Web Consortium, Massachusetts Institute of Technology. Java is a registered trademark of Oracle Corporation. JavaScript is a registered trademark of Oracle Corporation, used under license for technology invented and implemented by Netscape. 
SAP, R/3, SAP NetWeaver, Duet, PartnerEdge, ByDesign, SAP Business ByDesign, and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP AG in Germany and other countries. Business Objects and the Business Objects logo, BusinessObjects, Crystal Reports, Crystal Decisions, Web Intelligence, Xcelsius, and other Business Objects products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of Business Objects S.A. in the United States and in other countries. Business Objects is an SAP company. Sybase and Adaptive Server, iAnywhere, Sybase 365, SQL Anywhere, and other Sybase products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of Sybase, Inc. Sybase is an SAP company. All other product and service names mentioned are the trademarks of their respective companies. Data contained in this document serves informational purposes only. National product specifications may vary. These materials are subject to change without notice. These materials are provided by SAP AG and its affiliated companies ("SAP Group") for informational purposes only, without representation or warranty of any kind, and SAP Group shall not be liable for errors or omissions with respect to the materials. The only warranties for SAP Group products and services are those that are set forth in the express warranty statements accompanying such products and services, if any. Nothing herein should be construed as constituting an additional warranty.

