Contents
Introduction
Dynamic IT
Characteristics of a Dynamic IT Organization
Core Infrastructure Optimization Model
The Dynamic Datacenter
Virtualization as Part of Core Infrastructure Optimization Models
Products Engineered for a Dynamic Datacenter
Datacenter Challenges
Controlling Costs
Improving Availability
Increasing Agility
Virtualization Scenarios
Scenario 1: Server Consolidation
Scenario 2: Business Continuity
Centralized, Policy-Based Management
Virtualization Technologies
Windows Server 2008
Microsoft System Center
Host Clustering and Quick Migration
Conclusion
Introduction
This whitepaper examines strategies for moving an organization toward more dynamic IT using datacenter virtualization
technologies. Datacenters evolve from manual and reactive to automated and proactive, and from cost centers to
strategic assets, through a series of stages. This paper will show how virtualization is a key technology to help datacenters
move through those stages, reduce cost, increase security and availability, and enable a more agile business.
This paper provides concrete scenarios showing how virtualization can enable server consolidation and business continuity.
This paper also examines the technologies that underlie those solutions, which include Windows Server 2008, Hyper-V, and
System Center.
Finally, this paper explains the partnerships that Microsoft has formed with organizations such as XenSource/Citrix to ensure
that Microsoft supports heterogeneous environments including Linux workloads, and the engineering investments that
Microsoft has made to support non-Microsoft technologies such as Xen and ESX Server.
Dynamic IT
Technology accumulates in the datacenter over time, leaving many organizations in a position where their IT resources
are fully allocated simply maintaining what they have, with no time left over to focus on strategic initiatives. All legacy
applications must be maintained. IT organizations have to support existing capabilities, while meeting new business needs.
Often viewed as a cost center, IT must meet these challenges while operating under tight financial constraints.
Characteristics of a Dynamic IT Organization
Aligned
First, dynamic IT is aligned with the business. This seems
obvious, but creating this synergy is often easier said than
done.
Adaptable
The systems must be adaptable to change. Industry trends
and new technologies generate significant interest, but IT
must be able to evaluate new technologies with the business
needs in mind, and rapidly incorporate new technology as
part of strategic initiatives. While moving forward, IT must not jeopardize prior investments and tools that are already in place
providing critical functionality.
Efficient
IT organizations must stay within budget. Simply purchasing expensive technology does not enable a dynamic datacenter,
especially if such technology ends up as “shelfware.” While investments should be expected as organizations move from
reactive, manual approaches to proactive, automated processes, these investments need to be made with key success
criteria and the expected payoff defined from the outset.
As IT moves from being viewed as infrastructure to being a business asset that provides information for decision makers, and
becomes a key component in new business initiatives, IT can garner additional budget, as IT is seen as enabling profit rather
than simply keeping the lights (or e-mail) on.
Empowering People
These elements combine to empower people within the organization. Helping to make the enterprise “people ready” means
allowing people access to the information they need. It means making sure that IT services become as reliable and ubiquitous as a dial tone:
computing on demand, wherever people need it – in various form factors and with all the technologies that they require.
Core Infrastructure Optimization Model
The Core Infrastructure Optimization Model describes four levels of IT maturity: Basic, Standardized, Rationalized, and Dynamic.
Customers benefit substantially by moving from the basic level to the standardized level, dramatically reducing costs by
developing standards, policies, and controls with an enforcement strategy, automating many manual and time-consuming
tasks, adopting best practices, and aspiring to make IT a strategic asset rather than a burden.
At the standardized level, patches, software deployments, and desktop services are generally provided through medium-touch
processes at medium to high cost. However, organizations at this level have a reasonable inventory of hardware and software and are beginning to manage licenses.
Content is consolidated, and records retention is managed using disconnected repositories with basic search capabilities.
Security is improved with a locked-down perimeter, but internal security may still be a risk.
Customers benefit by moving from this standardized state to a rationalized state with their infrastructure and platform
by gaining substantial control and having proactive policies and processes that prepare them for the spectrum of
circumstances from opportunity to catastrophe. Service management becomes a recognized concept and the organization
is taking steps to implement it.
The use of zero-touch deployment minimizes costs, the time to deploy, and technical challenges. The number of images
is minimal and the process for managing desktops is very low touch. Organizations at a rationalized level have a clear
inventory of hardware and software, and only purchase the licenses and computers they need. Document and records
management and search are considered strategic enablers for the business and are integrated with one or more business
productivity infrastructure investments. IT has defined processes and procedures to provision search integration with
new line-of-business applications.
Customers benefit on a business level by moving from this rationalized state to a dynamic state. The benefits of
implementing new or alternative technologies to take on a business challenge or opportunity far outweigh the incremental
cost. Service management is implemented for a few services with the organization taking steps to implement it more
broadly across IT.
Costs are fully controlled; there is integration between users and data, desktops, and servers; collaboration between users
and departments is pervasive; and mobile users have nearly on-site levels of service and capabilities regardless of location.
Processes are fully automated, often incorporated into the technology itself allowing IT to be aligned and managed
according to the business needs. Additional investments in technology yield rapid, measurable benefits for the business.
Customers benefit from increasing the percentage of their infrastructure and platform that is dynamic by gaining
heightened levels of service and competitive advantage, and by taking on bigger business challenges. Service
management is implemented for all critical services with service level agreements and operational reviews.
Self Evaluation
Currently, most organizations are at the basic stage, where IT is seen as a cost center. As organizations adopt standard
technologies and practices, IT can become an efficient cost center. But organizations really want to move beyond seeing
IT as a cost center; they want to rationalize IT so it becomes a business enabler. Eventually, organizations want IT to be
dynamic – a strategic asset that provides a competitive advantage.
As organizations move through the optimization models, they find it easier to lower and control IT costs; they’re able to
increase availability, security, and the agility of the business, shortening the time from idea to implementation.
Microsoft has developed a self-assessment tool that you can use to determine your current optimization level. It’s
recommended that you assess your organization before proceeding with virtualization solutions. This will help you and your
organization identify virtualization initiatives that will provide the most value at each level of the optimization model.
The Dynamic Datacenter
Physical Layer
At the physical layer, it’s important for the
dynamic datacenter to be able to provision
physical systems efficiently. This includes
configuring bare-metal hardware and
installing and configuring software, from the
operating system through the workloads,
without the IT personnel resorting to low-level
scripting. Once systems are provisioned, they need to be patched and kept up to date without manual intervention. Finally,
organizations need to be able to multicast configurations to provision numerous servers rapidly. Microsoft provides this
functionality to the dynamic datacenter with System Center Configuration Manager.
Virtual Layer
With virtualization, there’s another layer of provisioning for the dynamic datacenter. This includes the provisioning of the
hypervisor and the virtual machines. With Windows Server 2008, Microsoft provides the Hyper-V hypervisor as a feature of
the operating system, and Hyper-V is enabled through a server role.
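For illustration, the role can also be enabled from the command line rather than from the Server Manager console. The sketch below shows the in-box commands; which variant applies depends on the installation type and build, so treat it as a starting point rather than a definitive procedure.

```powershell
# Enabling the Hyper-V role from the command line (pick the variant that matches the installation).
# These are the standard in-box tools; verify the feature names against your build.

# Full installation of Windows Server 2008:
ServerManagerCmd.exe -install Hyper-V -restart

# Server Core installation (run from cmd.exe):
#   start /w ocsetup Microsoft-Hyper-V

# Windows Server 2008 R2 exposes the same step through the ServerManager PowerShell module:
Import-Module ServerManager
Add-WindowsFeature -Name Hyper-V -Restart
```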
As a consumer of its own technology, Microsoft has fully virtualized the TechNet and MSDN Web sites using Hyper-V,
realizing significant cost savings. These sites serve 11.5 million and 15 million visitors per month, respectively, and
Hyper-V has proven stable and high performing in this environment. Microsoft is continuing to roll out Hyper-V across
its datacenters.
Application Layer
With just hardware virtualization, you can get great benefits from server consolidation; but if you have thousands of
physical servers, that will result in thousands of virtual servers. While this will save space and power and will help availability,
there are additional benefits that can be achieved with application virtualization.
By separating the operating system from the applications, an organization may find they need only ten or twenty base
images for hundreds or thousands of servers. Through application streaming technology, applications can be streamed to
the systems on demand. This is viable today with many desktop applications, and Microsoft is investing in engineering to
allow server workloads to stream to servers when needed.
This level of application virtualization allows a single base image to be patched, and all instances using that base image get
the benefit, without patching each instance individually. This also allows applications to be serviced, patched, and migrated,
without costly uninstall/reinstall or upgrade operations that may make the application unavailable.
Model Layer
Applications typically deploy across many servers. Most applications require three to five servers, while some require
hundreds. A model cohesively brings together those applications, servers, and configurations. It also allows the people who
build applications to understand the application components and configure them in a standard way.
This starts with the business analyst, who comes up with the application requirements. An architect then defines the
application architecture and deployment model. Developers implement the application, and it is deployed into the
environment as dictated by the model. The model can also apply governance rules.
Microsoft has started to apply this visionary process – in particular, for Operations Manager, Virtual Machine Manager,
Configuration Manager, and the Visual Studio development tools, and model-driven operations is an area where Microsoft
will continue to invest. In this environment, IT defines the models and the models drive the datacenter. The model directs
how the operating system and applications are pulled together, and the applications are composited on the fly.
Management
Microsoft brings datacenter management under one roof with the System Center suite of products. Microsoft has heard
from customers that they want one set of management tools to manage their physical and virtual environments, and
that virtualization solutions and management tools need to support a heterogeneous environment. Microsoft has gone
in exactly that direction by partnering with XenSource/Citrix, supporting Linux workloads, and managing Virtual Server,
Hyper-V, Xen, and ESX environments.
Virtualization as Part of Core Infrastructure Optimization Models
Figure: As organizations move from Basic to Standardized to Rationalized to Dynamic, they reduce total cost of ownership, increase availability, and enable agility.
Virtualization is also crucial when it comes to simplified backup and even disaster recovery, reducing downtime caused
by catastrophic events from days to hours or even minutes. Virtualization can ensure that applications remain available,
independent of hardware servicing. Virtualization greatly simplifies increasing the resources available to applications. At
this stage, IT is no longer seen as a cost center but as an empowering agent that enables business goals and
increases agility.
In the most advanced organizations, business units can acquire their own infrastructure through self-service provisioning.
Dynamic provisioning can automatically bring new resources on- and off-line as the workload demands. Migration of
workloads can happen on the fly, with no interruption to users. Problems can be detected and mitigated with minimal
manual effort.
Products Engineered for a Dynamic Datacenter
Figure: Microsoft's virtualization technologies (profile virtualization, document redirection and offline files, server virtualization, presentation virtualization, desktop virtualization, and application virtualization) under common management.
A truly dynamic datacenter utilizes a variety of technologies and best practices to optimize operations. Microsoft’s offerings
extend well beyond hardware virtualization, providing the technologies that organizations need. Individually, technologies
provide critical functionality, and in combination they provide the functionality needed for dynamic operations.
Terminal Services virtualizes the presentation layer, allowing administration and productivity independent of location.
Profile virtualization untethers users from specific desktop hardware. Server virtualization allows for consolidation and other
datacenter optimizations. Application virtualization disconnects applications from a particular operating system instance.
Desktop virtualization allows users to access their desktop from anywhere, and provides datacenter performance and
connectivity for user workloads. System Center provides interoperable management.
Datacenter Challenges
The main challenges to datacenters are well known. Many datacenters are seen as cost centers and are charged with the
task of providing the necessary services for the least expense. Hard costs typically come in the form of power, square
footage, hardware, and administrative staff. When IT is seen as an asset and a business enabler, datacenters are expected
to be extremely efficient and to provide services with relatively little administrative staff. Staff is expected to focus more on
strategic priorities and less on day-to-day operations.
Many datacenter services are expected to be “always on,” ready to meet the needs of a distributed and often global
workforce. Maintenance windows are exceedingly small, and the IT organization is expected to comply with internal service
level agreements. A high value is placed on any servicing that can be performed without service disruptions. Datacenters
are also required to be secure and to comply with applicable regulations. Security breaches and vulnerabilities affect
availability, result in large expense, expose the company to liability, and damage the company’s reputation. To maintain
security, patches must be applied on a regular basis with minimal to no impact on availability.
When seen as a strategic asset, IT is expected to facilitate company agility. Successful businesses seek to implement new
strategies, products, and services at great speed. Measurable initiatives are set, and IT must provide business intelligence
services to decision makers. To meet these needs, IT is expected to provision systems rapidly. If a workload spikes, IT must
allocate resources with no service disruption. Changes must be implemented quickly, without the datacenter devolving into
a hodgepodge of undocumented one-off configurations. IT must be able to swiftly certify new applications for operation
and ensure that updates and upgrades do not break existing workloads.
Controlling Costs
The “low-hanging fruit” in many datacenters is server consolidation. By consolidating servers, datacenters can see an
immediate reduction in power use. Datacenters can keep some unused servers as spare capacity and decommission others
to free up precious floor space and reduce cooling requirements.
The first step in server consolidation is converting appropriate physical servers to virtual servers. Servers with single
workloads and low utilization are the most logical initial candidates for consolidation. Server consolidation can be
especially valuable for legacy workloads that are tied to discontinued hardware.
Green IT
The U.S. Department of Energy has said that the datacenter is the fastest-growing energy consumer in the United States
today, with 61 billion kilowatt hours going toward datacenter power consumption and a projected ten to fifteen additional
power plants needed by 2011 to keep up.
According to Gartner Research, energy costs could soon account for more than 50 percent of the total information
technology budget for a typical datacenter.
Windows Server 2008 helps address these energy demands. It includes updated support for Advanced Configuration and Power Interface
(ACPI) processor power management (PPM) features, including support for processor performance states (P-states) and
processor idle sleep states on multiprocessor systems. These features simplify power management in Windows Server 2008
and can be managed easily across servers and clients using Group Policy.
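As a concrete illustration, processor power management can also be inspected and tuned with the in-box powercfg.exe utility. This is a minimal sketch; the GUID aliases used below (SCHEME_CURRENT, SUB_PROCESSOR, PROCTHROTTLEMAX, PROCTHROTTLEMIN) are standard powercfg aliases, but the available settings vary by build and hardware, so confirm them on your servers before applying values centrally through Group Policy.

```powershell
# Inspect and tune ACPI processor power management with powercfg.exe (values are examples only).

powercfg -list                          # show the available power plans and the active one
powercfg -setactive SCHEME_BALANCED     # the balanced plan enables P-state scaling by default

# Cap and floor the processor performance state on AC power, then re-apply the current plan.
powercfg -setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 100
powercfg -setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMIN 5
powercfg -setactive SCHEME_CURRENT
```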
Microsoft’s measurements with Hyper-V show a near one-to-one energy savings for each server consolidated. In other words,
the power consumption of the host OS does not substantially increase as guests are added.
To put these savings into perspective, consider these actual measurements, which show the power consumption of 10 IIS
Web servers compared to that of 10 IIS Virtual Servers running on Hyper-V.
Figure: Annual power consumption (kWh/year) of 1, 4, and 10 physical IIS Web servers compared with the same workloads running as virtual machines on Hyper-V.
Within Microsoft’s own IT department, which services more than 100,000 employees and contractors, there have been
tremendous savings in both test/development and production virtualization implementations, and the savings go beyond
just power. Virtual machines allocate disk space only as needed, resulting in lower overall storage
requirements. The conversion from a physical to a virtual system also greatly lowers costs by reducing cabling needs and
the number of servers and racks required, among other costs.
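One source of those storage savings is the dynamically expanding virtual hard disk format. The sketch below creates such a disk through the Hyper-V WMI provider that ships with Windows Server 2008 (the root\virtualization namespace); the class and method names are part of that provider's API to the best of our knowledge, but the path and size are hypothetical, so adapt and verify against the Hyper-V WMI documentation before use.

```powershell
# Create a dynamically expanding VHD through the Hyper-V WMI v1 provider.
# The file starts out only a few kilobytes in size and grows as the guest writes data,
# up to the specified ceiling.

$ims = Get-WmiObject -Namespace "root\virtualization" -Class "Msvm_ImageManagementService"

# Hypothetical path; 40GB is the maximum internal size in bytes.
$result = $ims.CreateDynamicVirtualHardDisk("D:\VHDs\web01.vhd", 40GB)

$result.ReturnValue   # 0 = completed; 4096 = job started (track it via the associated Msvm_StorageJob)
```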
Improving Availability
An IT organization is constrained by the skills and knowledge possessed by its staff. When different solutions require
specialized skill sets, the organization can become strained. Initially, virtualization was new and different, and it required
specialized skills and training. Virtual management tools often provided little or no functionality for the physical
environments and workloads.
But virtualization has matured, and Microsoft has worked to ensure that administrators can manage physical and virtual
environments using existing skills and knowledge. Microsoft’s virtualization is provided through the familiar “server role”
metaphor, and System Center tools are designed to provide consistency across heterogeneous environments. This includes
managing a variety of operating systems (including Windows, UNIX, and Linux) and a variety of virtualization technologies
(including Virtual Server, Hyper-V, and ESX).
Microsoft Virtualization also improves availability by building on top of Windows Clustering and enabling “quick migration”
of virtual machines between physical hosts. These technologies allow you to service and patch the host OS without
incurring downtime for the guest workload. Decoupling the workload from the hardware ensures that the guest OS can be
migrated if the host fails or needs servicing. System Center’s automated patch management keeps systems up to date, and
baseline monitoring keeps hosts and guests from drifting from a defined baseline configuration.
System Center Data Protection Manager uses the same technology to back up the host, guest virtual machines, and
workloads. For example, DPM can provide continuous data protection for a SQL Server or Exchange workload running in a
virtual machine, and can back up the virtual machine image for disaster recovery.
Virtual Machine Manager and Operations Manager can monitor host utilization, guest performance, and application
performance; can recommend migration of a guest to a host with more resources; and can even automate the move.
In combination, these technologies ensure that your datacenter applications remain secure and available.
Increasing Agility
As the perception of IT moves from cost to strategic asset, it becomes recognized as an enabler of business agility.
Companies that rapidly respond to market changes and opportunities need IT to provide the infrastructure that will power
new initiatives. This includes speedy provisioning of computing power for development, test, and production operations. In
some cases, departments may even be able to provision their own infrastructure without requiring IT assistance. This self-
service can support rapid prototyping and afford the services needed for development and quality assurance (QA).
Any long-lived organization has legacy workloads that entail chronic, time-consuming IT support. As IT resources are
diverted in order to procure discontinued hardware and complete lengthy certification processes, fewer IT resources are
available for strategic initiatives. Virtualization liberates the IT organization from these and other chronic issues related to
legacy applications. Because applications are isolated from the hardware, IT can host legacy virtual machines on the
latest hardware. Virtualization can also simplify backup and recovery, as well as other common tasks.
Virtualization Scenarios
This next section provides a walkthrough of two real-world scenarios (Server Consolidation and Business Continuity) in
which Microsoft’s virtualization helped datacenters move toward dynamic IT. Centralized, policy-based management is
required to efficiently manage the physical and virtual infrastructure needed to implement these two scenarios.
• Scenario 1: Server Consolidation
• Scenario 2: Business Continuity
»» High Availability
»» Disaster Recovery
• Centralized, Policy-Based Management
The server consolidation scenario focuses on achieving lower costs through server consolidation. This includes reducing
hardware, space, and power costs, as well as reducing management complexity.
Business continuity scenarios focus on maximizing system uptime and availability through server virtualization. This
includes reducing the impact of disruptive events and disaster recovery, and streamlining maintenance. This also includes
dynamic resource allocation and streamlining workload provisioning to efficiently support changing business needs.
The centralized management scenario examines management complexity and shows how centralized, policy-based management
brings the datacenter under control.
Scenario 1: Server Consolidation
Figure: Server consolidation. Workloads move from a non-virtualized physical infrastructure to virtual servers on Microsoft server virtualization hosts, managed by System Center Virtual Machine Manager with a VM library.
Benefits:
By consolidating multiple workloads onto a single hardware platform via server virtualization, you can maintain a one
workload/one server ratio while reducing physical server sprawl. Your business will be fully supported with less hardware,
resulting in lower equipment costs, lower electrical consumption (thanks to reduced server power and cooling), and less
physical space required to house the server farm.
Virtualization can also simplify and accelerate provisioning. The acquisition of workload resources and hardware can
be decoupled. Adding the capability required for a particular business process (say, a Web commerce engine) becomes
streamlined and immediate. In an advanced virtualized environment, workload requirements can be self-provisioning,
resulting in dynamic resource allocation.
While virtualization-based server consolidation can provide many benefits, it can also add complexity if the environment is
not managed properly. The savings from hardware consolidation could be offset by increases in IT management overhead.
Because creating VMs is so easy, an unintentional and unnecessary sprawl can result that far exceeds physical server sprawl
and that outpaces the tools used to manage VMs. A properly managed virtual infrastructure, however, automatically
determines which servers are the best candidates for virtualization, converts them to virtual machines, and provisions them
to the right hosts in minutes, rather than the weeks or months it takes to procure and configure physical servers manually.
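As a sketch of what that conversion step can look like, the Virtual Machine Manager 2008 command shell exposes physical-to-virtual conversion through the New-P2V cmdlet. The cmdlet names below ship with VMM 2008, but the parameter set shown is abbreviated and the server names, credentials, and paths are hypothetical; review Get-Help New-P2V for the full set of required parameters before running a conversion.

```powershell
# Hypothetical physical-to-virtual (P2V) conversion driven from the VMM 2008 command shell.
Add-PSSnapin "Microsoft.SystemCenter.VirtualMachineManager"

$vmmServer = Get-VMMServer -ComputerName "vmm01.contoso.com"
$creds     = Get-Credential        # an account with administrative rights on the source server
$vmHost    = Get-VMHost -ComputerName "hyperv01.contoso.com"

# Convert a lightly used physical web server into a virtual machine placed on the chosen host.
New-P2V -VMMServer $vmmServer `
        -SourceComputerName "legacyweb01.contoso.com" `
        -Credential $creds `
        -Name "legacyweb01-vm" `
        -VMHost $vmHost `
        -Path "D:\VMs"
```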
Scenario 2: Business Continuity
Figure: High availability. Workloads on highly available physical servers running Microsoft server virtualization can fail over to an N+1 standby virtual host.
Benefits:
Disruptive events and server downtime are reduced when virtualization is introduced, meaning increased availability of your
systems to your employees – and your business to your customers. When workloads do go down, however, they are quickly
and automatically migrated to an online server. Virtualization allows you to maintain an instant failover plan that provides
business continuity throughout disruptive events.
Figure: A globally managed virtualization infrastructure. System Center Data Protection Manager at each site and System Center Virtual Machine Manager protect and manage virtual machines for disaster recovery.
A holistic virtualization strategy also allows you to maintain an instant failover plan that provides business continuity
throughout disruptive events. By enabling you to convert OS and application instances into data files, this approach can
help automate and streamline backup, replication, and transfer – providing more robust business continuity and speeding
recovery in the case of an outage or natural disaster.
Organizations use virtualization to create a more efficiently managed server infrastructure, which reduces disruptive events,
simplifies disaster and recovery planning, and decreases the costs associated with backing up servers.
Benefits:
Easy data backup, redundant infrastructure, and replication ensure that the impact of any disaster on your business is
greatly reduced. You’ll also discover that flexibility on a day-to-day basis is increased when workloads are shifted between
physical servers, enabling your organization to perform maintenance without disrupting service. This approach provides
data protection of Virtual Server hosts and virtual machines, regardless of which operating system they are running, while
automatically minimizing outage of the protected virtual machines.
Centralized, Policy-Based Management
Virtualizing the entire computing infrastructure provides tremendous time and cost savings, as well as flexibility benefits.
However, attempting to separately manage each layer of the stack and each instance within those layers (such as individual
virtual machines) creates a much more complex situation than is necessary. Using different tools for virtualized resources
can result in duplicate or competing processes for managing resources, adding complexity to the IT infrastructure. This can
undermine the benefits of virtualization. A virtualized world that isn’t well managed can be less reliable and perhaps even
more expensive than its nonvirtualized counterpart.
Figure: Centralized, policy-based management. System Center (Virtual Machine Manager, Configuration Manager, and Operations Manager), together with third-party solutions, provides workload layer management, virtual layer management (including third-party hypervisors), and storage management.
Benefits:
With virtualization, you will realize an enormous reduction in the resources and time needed to administer your business’s
computing infrastructure. This will allow you to simplify your support requirements, making you much more agile and
responsive to business needs.
In addition, with a unified toolset that manages both Microsoft and third-party virtualization platforms (such as VMware
and Xen), your management approach becomes far more consistent and capable.
Virtualization Technologies
At this point, you’ve seen the vision of the dynamic datacenter, common datacenter challenges, and scenarios for
addressing those challenges. Next, you will see a more in-depth examination of the technologies that come together to
provide these virtualization solutions.
The foundation for datacenter virtualization is Windows Server 2008, which includes the Hyper-V hypervisor operating
system feature. The hypervisor is installed by the familiar administrative task of configuring a server role. Windows Server
2008 was designed for interoperability, and Hyper-V was specifically engineered to be a great hypervisor for Windows,
Linux, and UNIX guests.
Unified and consistent management is provided by the System Center family of products. Virtual Machine Manager
provides the administrative console for provisioning and maintaining virtual machines. Operations Manager monitors
physical and virtual environments and provides guidance to optimize IT operations. Configuration Manager allows
the quick provisioning of physical servers, along with automated patching for physical and virtual environments. Data
Protection Manager provides the foundation for backup, restore, and disaster recovery, and through a single tool allows IT
to back up virtual machines and their internal workloads.
With 64-bit technology and SMP support, virtual environments scale to meet the needs of demanding workloads. By
supporting up to four processors in a virtual machine environment, your virtual machines get the most performance from
multithreaded applications.
Hyper-V Hypervisor
The actual hypervisor is a very thin layer of code on top of the hardware that presents a very small attack surface. The
hypervisor was developed under the industry-leading Microsoft Security Development Lifecycle, which ensures product
team security education, threat modeling, code reviews, static analysis, fuzz and penetration testing, and a robust security
response.
Figure: Monolithic versus microkernel hypervisor architectures. A monolithic hypervisor carries its own device drivers between the hardware and the guest operating systems, while a microkernel hypervisor is a thin layer above the hardware and each guest uses its own native drivers.
There are two kinds of hypervisors: monolithic and microkernel. A monolithic hypervisor is a relatively thick layer between
the guest operating systems and the hardware. Monolithic hypervisors carry their own hardware drivers, which are different
from the hardware drivers in the guest operating systems. The hypervisor controls guest access to processors, memory, and
input/output (I/O), and isolates guests from one another.
Because a monolithic hypervisor is relatively large and carries multiple drivers, it presents a significant attack surface. If the
hypervisor is compromised, through either the hypervisor code or the third-party drivers that it loads, the entire physical
host and all guests can be compromised, too.
Rather than accepting this unnecessary risk, Microsoft developed Hyper-V using microkernel architecture. In this model, the
hypervisor is a thin layer between the guests and the hardware. The hypervisor provides simple partitioning functionality
that leverages virtualization extensions to the processor. Guest operating systems use their own native drivers. This means
that the hypervisor contains no third-party code that could introduce vulnerabilities. The microkernel hypervisor also
supports more hardware, as OEMs already produce OS drivers and need not produce separate hypervisor drivers.
With a guest using its own drivers, the size of the trusted computing base (TCB) is reduced, as guests are not routed
through parent partition (or Dom-0) drivers.
Microsoft believes microkernel is the best approach, as it ensures that all hypervisor code is Microsoft code produced under
the Security Development Lifecycle, presenting the smallest attack surface possible. As OEMs are not required to produce
hypervisor drivers, more hardware is available, and the possibility of systems performing differently when virtualized is
diminished. Modern processors contain virtualization extensions, which allow the hypervisor to be a much thinner software
layer.
Virtual Machine Manager
Customers have also said that they value the ease of use of graphical management tools, as well as the wizards that make
administrative tasks intuitive and consistent, and that increase the productivity of IT personnel. Customers also need
powerful scripting capabilities in order to perform consistent operations on hundreds or thousands of machines; scripting
allows the unique circumstances and needs of individual businesses and datacenters to be addressed. Virtual Machine
Manager 2008 provides both capabilities. At the end of every wizard function in Virtual Machine Manager 2008, you’re
presented with the option to save the wizard’s actions as a PowerShell script. In fact, Virtual Machine Manager is built
on top of PowerShell, ensuring that any operations performed by VMM are scriptable. This provides the ease of use
that Windows administrators expect, along with the power to script complex operations customized for the needs of the
datacenter.
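A short sketch makes the point concrete: the same snap-in that the VMM console uses can be loaded into any PowerShell session and queried directly. The snap-in and cmdlet names below are the ones VMM 2008 installs; the server name is hypothetical and the property names may vary slightly by version.

```powershell
# Connect to a VMM 2008 server from a plain PowerShell session and list the managed VMs.
Add-PSSnapin "Microsoft.SystemCenter.VirtualMachineManager"

Get-VMMServer -ComputerName "vmm01.contoso.com" | Out-Null   # establish the VMM connection

# Enumerate every managed virtual machine with its current state and host.
Get-VM | Select-Object Name, Status, HostName
```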
It’s important to place virtual machines on physical servers that can provide the needed resources. Virtual Machine
Manager’s intelligent placement recommends the best server for placement of a new machine and for migrating an existing
workload with the goal of providing more resources. For Hyper-V virtualization, VMM allows instantaneous migration
with the click of a button. When managing ESX, VMM allows you to perform live migrations using the same intelligent
placement. Even live migration and other ESX operations are scripted as PowerShell scripts.
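Continuing the session above, intelligent placement itself is scriptable. Get-VMHostRating and Move-VM are VMM 2008 cmdlets, but the parameters shown are a simplified subset and all machine names are hypothetical; confirm the details with Get-Help before using this in production.

```powershell
# Rate candidate hosts for a workload, then migrate it (a simplified, hypothetical sketch).
$vm    = Get-VM -Name "legacyweb01-vm"
$hosts = Get-VMHost

# Intelligent placement: rate each managed host (star ratings) for this particular VM.
Get-VMHostRating -VM $vm -VMHost $hosts | Sort-Object -Property Rating -Descending | Format-Table -AutoSize

# Migrate the VM to the host chosen from the ratings (host name is hypothetical).
Move-VM -VM $vm -VMHost (Get-VMHost -ComputerName "hyperv02.contoso.com") -Path "D:\VMs"
```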
For mission-critical workloads, you can simply click a “high availability” checkbox and VMM will place the virtual machine
on a clustered server. VMM handles all configuration on top of Windows Server 2008’s greatly simplified clustering.
Operations Manager
System Center Operations Manager allows datacenters to monitor their physical and virtual environments with a single
tool. Operations Manager has long allowed datacenters to monitor operating systems and workloads, and this functionality
continues whether the workload is running on a physical or a virtual server. In addition, Operations Manager allows
datacenters to monitor the physical hosts running the virtual machines.
It’s important to monitor not just the overall CPU, memory, and I/O of hosts, but also the performance of the workloads
within hosts in order to determine when more resources are needed so that workload performance meets requirements.
System Center is designed with these scenarios in mind, and coordinates between the physical and virtual environments.
Operations Manager also integrates with Virtual Machine Manager, providing tips that VMM can use when recommending
virtual machine migration to more suitable hosts, and can even perform the migration automatically.
Data Protection Manager
DPM is capable of protecting virtual machines without hibernation or downtime. Using shadow copy-based block-level
protection of your virtual disks, DPM delivers fast backup that does not consume inordinate amounts of disk space. This
gives datacenters a single backup and recovery tool for both physical and virtual workloads.
With replication technologies, DPM facilitates disaster recovery by restoring system images to a backup datacenter.
Host Clustering and Quick Migration
For unplanned downtime, such as a physical host failure, quick migration can have the workload up and running on a new
host within seconds. Clustering of the host allows the virtual workloads to fail over to the new host from shared disks. This
works regardless of whether the guest operating system is Windows, Linux, or UNIX.
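For illustration, the sketch below shows the equivalent steps with the FailoverClusters PowerShell module that ships with Windows Server 2008 R2 (on Windows Server 2008 itself, the same operations are performed with cluster.exe or the Failover Cluster Management console). The virtual machine and node names are hypothetical.

```powershell
# Make a VM highly available and quick-migrate it between clustered Hyper-V hosts.
Import-Module FailoverClusters

# Register an existing virtual machine as a clustered resource group ("VM role").
Add-ClusterVirtualMachineRole -VirtualMachine "legacyweb01-vm"

# Quick migration: move the clustered VM group to another node, for example before patching a host.
Move-ClusterGroup -Name "legacyweb01-vm" -Node "hyperv02"
```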
For planned downtime, guests can be clustered so that any node in the cluster can be taken off-line and serviced while
other cluster nodes handle the workload. This allows guest patching and other maintenance without service interruption.
Virtual Machine Manager 2008 will allow you to designate, with a simple checkbox, mission-critical virtual machines as
“high availability.” The appropriate server configuration will then be performed, and virtual machines will be properly sited
and clustered.
To address the need for heterogeneous datacenter support, Microsoft has focused on interoperability. Microsoft supports Windows, Linux, and UNIX guest
operating systems to ensure that an organization can virtualize its existing workloads onto a single hypervisor technology.
Microsoft is also leading the industry with management support for disparate hypervisors, allowing organizations to choose
the best hypervisor technology for specific workloads and to manage them through a single pane of glass.
To meet the needs of customers, Microsoft has formed strategic partnerships with Citrix and certain open source projects,
and has built interoperability to make certain that ESX operates as a first-class hypervisor in Virtual Machine Manager.
XenSource/Citrix Partnership
Microsoft has partnered with Citrix to provide first-class support for Xen-enabled Linux workloads on Hyper-V with
the Linux Integration Components. With these components, Linux operating systems achieve the same near-physical
performance of Windows virtual workloads by avoiding hardware emulation and utilizing the Virtual Service Provider (VSP),
Virtual Service Client (VSC), and VMBus. This allows Hyper-V to host Windows and Linux workloads, and ensures that those
workloads have great performance and scalability characteristics.
In addition, Citrix is enabling XenEnterprise to manage Hyper-V, in much the same way that Microsoft has enabled Virtual Machine
Manager to manage non-Microsoft hypervisors. Finally, Citrix’s development of XenDesktop allows customers to connect to
virtual desktops running in the datacenter.
ESX Interoperability
Microsoft recognizes that datacenters are heterogeneous environments, and as such, it is becoming the norm today for
many organizations to use a variety of hypervisors. Customers will migrate workloads to Hyper-V as it is adopted, since it is
substantially more cost-effective than other hypervisor solutions. Customers have said they want to manage their hosts and
guests using a single set of management tools. Microsoft has stepped up to provide first-class support for ESX, Hyper-V, and
Virtual Server in Virtual Machine Manager. This allows organizations to develop one set of virtualization skills to manage
all workloads through a single pane of glass. Microsoft’s System Center products integrate with one another to provide the
best provisioning, management, and monitoring functionality available.
The ROI Tool can assist organizations in rapidly determining their particular ROI with Microsoft’s virtualization solution. The
tool allows you to enter information about the business’s current infrastructure, including hardware, operating systems, and
workloads. The tool assists in determining the virtualized infrastructure and estimating the cost to implement it, taking into
consideration hardware costs and software licensing in order to provide the most comprehensive pricing. The ROI Tool also
compares competitive products to illustrate the cost savings to your business of the Microsoft solution.
Customers see increased IT system cost-efficiency through server consolidation; a reduction in hardware, space, and utilities
costs; and centralized management of physical and virtual server assets. This offering drives greater IT operating efficiency
through managed virtualization, helping to reduce costs, maximize system availability, and increase operational agility.
Figure: Physical servers, virtual servers, virtual applications, and desktop infrastructure managed through common consoles.
Using familiar interfaces and common management consoles, an environment based on Microsoft technologies delivers
the promised cost, service level, and agility benefits while reducing system complexity that can result from disparate point
solutions. Your IT organization can harness the power of virtualization across the enterprise while simultaneously improving
the efficiency and effectiveness of your operations.
Conclusion
Dynamic IT shifts the IT organization from being a cost center to being a strategic asset of the business. With Dynamic IT,
common datacenter tasks are automated, freeing the IT organization from repetitive manual operations. As less IT time is
consumed maintaining existing infrastructure, more time is available to focus on strategic initiatives.
Server consolidation is the first step in controlling costs. Reducing the number of physical servers saves on power, space,
and cooling. However, virtualization can cause complexity by requiring the administration of physical and virtual servers.
The key to reducing complexity is unified tools that manage, monitor, and provision the physical and virtual environment.
The Microsoft System Center suite of tools provides unified management of the physical and virtual environment, the operating
systems, and the applications.
Datacenters are heterogeneous environments. Virtualization will introduce even more heterogeneity, as companies
introduce different hypervisors into the same datacenter. System Center is designed for heterogeneity, with the ability to
manage Windows, Linux, and UNIX workloads, and Xen, ESX, Virtual Server, and Hyper-V hypervisors.
Microsoft’s virtualization solutions let you maximize uptime and reduce the impact of disruptive events. Using quick
migration and clustering, workloads can be kept available while servers are patched and hardware is serviced. System Center
tools can monitor the physical and virtual environments and alert personnel to issues before they result in a service outage.
Using Data Protection Manager, organizations can achieve near-continuous backup of virtual servers and continuous data
protection of workloads running on those servers. This allows organizations to use one tool to recover everything from
something as small as an individual user’s mailbox to something as large as an entire datacenter.
Microsoft provides Hyper-V as part of the Windows operating system, rather than as a completely separate technology.
This leverages existing tools, skills, and hardware, and ensures seamless integration with technologies such as Active
Directory. Microsoft provides the full virtualization solution, including server, desktop, presentation, application, and
storage virtualization. Microsoft fully supports the entire stack, from the hypervisor, through the operating system,
to the Microsoft server workloads. Integrated management ensures that you have a complete view of your
operations through a single set of tools. Microsoft offers this technology at an affordable price, enabling a more rapid
return on investment.
To move forward with virtualization to enable a dynamic datacenter, it’s important to ready your team. Your team should
seek to understand virtualization solutions, and team members should see for themselves how Microsoft offers the lowest
cost and the most integrated, interoperable management tools.
Your first tactical step should be performing a MAP analysis to determine the level of impact that virtualization could
have on your organization. Next, use the ROI tool to discern the cost of a virtualized implementation and when that
implementation would pay for itself.
The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of
publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of
Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication.
This white paper is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS DOCUMENT.
Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this
document may be reproduced, stored in, or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical,
photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.
Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this
document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any
license to these patents, trademarks, copyrights, or other intellectual property.
Microsoft and the Microsoft product names referenced in this white paper are either registered trademarks or trademarks of Microsoft Corporation
in the United States and/or other countries.