Overview of VMware Infrastructure 3
Instructor Manual
VMware ESX 3.5 and VirtualCenter 2.5
Part Number EDU-ENG-A-OVW35-LECT-INST
Revision A
All rights reserved. This work and the computer programs to which it relates are the property of, and embody
trade secrets and confidential information proprietary to, VMware, Inc., and may not be reproduced, copied,
disclosed, transferred, adapted or modified without the express written approval of VMware, Inc.
Copyright/Trademark
This manual and its accompanying materials copyright © 2008 VMware, Inc. All rights reserved. Printed in
U.S.A. This document may not, in whole or in part, be copied, photocopied, reproduced, translated,
transmitted, or reduced to any electronic medium or machine-readable form without prior consent, in writing,
from VMware, Inc.
The training material is provided “as is,” and all express or implied conditions, representations, and
warranties, including any implied warranty of merchantability, fitness for a particular purpose or non-
infringement, are disclaimed, even if VMware, Inc., has been advised of the possibility of such claims.
This training material is designed to support an instructor-led training course and is intended to be used for
reference purposes in conjunction with the instructor-led training course. The training material is not a
standalone training tool. Use of the training material for self-study without class attendance is not
recommended.
Copyright © 2008 VMware, Inc. All rights reserved. VMware and the VMware boxes logo are registered
trademarks of VMware, Inc. MultipleWorlds, GSX Server, ESX Server, VMware ESX, and VMware ESXi are
trademarks of VMware, Inc. Microsoft, Windows and Windows NT are registered trademarks of Microsoft
Corporation. Linux is a registered trademark of Linus Torvalds. All other marks and names mentioned herein
may be trademarks of their respective owners.
education@vmware.com
Overview_A.book Page i Friday, September 5, 2008 12:40 PM
CONTENTS
MODULE 8 VMware HA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
What Is VMware HA? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Architecture of a VMware HA Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
VMware HA Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Create a VMware HA Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Add an ESX Host to the Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Configure Cluster-Wide Admission Control . . . . . . . . . . . . . . . . . . . . . . . 92
Failover Capacity Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Maintain Business Continuity if ESX Hosts Become Isolated . . . . . . . . . . 95
Lab for Module 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
MODULE 1
Virtual Infrastructure Overview
Lesson Topics
• Define virtual infrastructure
• Describe how virtualization works
• Introduce VMware Infrastructure components
• Describe virtual network components
• Introduce storage, datastores, and VMFS
How Does Virtualization Work?
Slide 1-6
• It allows multiple operating systems to run simultaneously, in isolated virtual machines, on a single physical server.
VMware Infrastructure 3
Slide 1-8
VMware Infrastructure Components
Slide 1-9
Management Made Easy
Slide 1-10
DatabaseTemplate and FilePrint Template are virtual machine templates. Virtual machine templates can be used to quickly deploy additional virtual machines. Templates do not appear in the Hosts and Clusters view.
• ESX supports multiple types of storage:
• Local SCSI, SATA, Fibre Channel, iSCSI, NFS
• Flexibility to meet cost, availability, and performance requirements
ESX storage support is flexible enough to meet most cost, availability, and
performance requirements. ESX supports the following kinds of storage:
• Direct-attached SCSI and SATA storage
• Fibre Channel storage
• iSCSI storage
• NFS storage
Storage accessed by ESX is called a datastore.
Multiple LUNs from different storage types can be combined to form a
single datastore. LUNs can be dynamically added to an existing datastore if
more storage space is required.
VMFS and NFS Datastores
Slide 1-13
Storage LUNs are typically formatted with the Virtual Machine File System
(VMFS). VMFS is a high-performance, cluster-aware file system designed
by VMware to hold virtual machine, virtual machine template, and ISO
files.
An alternative file system type is NFS. Like VMFS, NFS is a shared file
system and it also holds virtual machine, virtual machine template, and ISO
files.
Multiple ESX servers can simultaneously access the same VMFS or NFS
datastore. Simultaneous access is designed to support high availability and
resource-balancing features like VMotion or VMware DRS and HA
clusters, which are covered later in the course.
Flexible Network Connectivity
Slide 1-15
• There are three types of network connections:
• Service console port
• VMkernel port
• Virtual machine port groups
Uplink ports connect the virtual switch to physical network adapters.
Virtual Switches Support VLANs
Slide 1-16
ESX supports virtual LANs with VLAN IDs between 1 and 4095 on VMkernel, service console, and virtual machine connections. VLAN functionality provides additional flexibility and cost savings in network configuration.
• Using VirtualCenter
• In this lab, you perform the following tasks:
• Use the VI Client to log in to VirtualCenter
• View the VirtualCenter inventory
• View virtual network components
• View storage components
MODULE 2
Importance
Virtual infrastructure is based on virtual machines. The ability to quickly provision virtual machines is critical. Templates make it possible to quickly provision multiple virtual machines.
Lesson Topics
• Define a virtual machine
• Virtual machine hardware
• Installing a guest operating system into a virtual machine
• Creating templates
• Deploying virtual machines from a template
• Guest operating system customization
ESX Virtual Machine Hardware
Slide 2-6
(Diagram: the virtual hardware of an ESX virtual machine, including up to 2 ports of each of two port types, 1-2 drives, and up to 4 CD-ROMs.)
Each guest operating system sees ordinary hardware devices. It does not
know these devices are virtual. Furthermore, all VMware® ESX 3 virtual
machines have uniform hardware, except for a small number of variations
the system administrator can apply. This makes virtual machines uniform
and portable across ESX hosts.
Each virtual machine has a total of six virtual PCI slots. One of these is used
for the virtual video adapter. Therefore, the total number of virtual Ethernet
and SCSI host adapters cannot exceed five. The virtual chipset is an Intel
440BX–based motherboard with an NS338 SIO chip. This chipset ensures
compatibility for a wide range of supported guest operating systems,
including legacy operating systems like Windows NT.
Fast, Flexible Guest OS Installations
Slide 2-8
Create a Virtual Machine
VM Console
Create a Template
Slide 2-10
• Create a VM, power it off, then …
There are two ways to create a template using the VI Client: Clone to
Template and Convert to Template. Which method is chosen depends on
whether the original virtual machine is still needed. The original virtual
machine is no longer available when converted to a template.
You use the VI Client to provision a new virtual machine from a template. Right-click the template to launch the Deploy Template wizard, then answer a few simple questions. The datacenter administrator can choose the new virtual machine's display name, its location in the VirtualCenter inventory, its ESX host, and the datastore to use.
Automating Guest OS Customization
Slide 2-12
• For guest operating system customization to work, it must
Template Provisioning
In this lab, you perform the following tasks:
• Convert a virtual machine to a template
• Convert a template back to a virtual machine
• Deploy a virtual machine from a template
MODULE 3
Importance
Resource pools allow CPU and memory resources to be hierarchically assigned to meet the business requirements of your enterprise. Virtual machine CPU and memory resource controls provide finer-grained tuning.
Lesson Topics
• How are virtual machines’ CPU and memory resources managed?
• What is a resource pool?
• Managing a pool’s resources
• A resource pool example
• Admission control
Flexible Resource Allocation
Slide 3-6
• Proportional-share system for relative resource management
• Used to grant resources according to business requirements
• Applied during resource contention
• Prevents virtual machines from monopolizing resources
CPU and Memory Resource Pools
• Change number of
shares
• Power on VM
• Power off VM
Limit
• A cap on the consumption of physical CPU
time by this VM, measured in MHz
Reservation
• A certain number of physical CPU cycles
reserved for this VM, measured in MHz
• The VMkernel chooses which CPUs, and may
migrate.
• A VM will power on only if the VMkernel can
guarantee the reservation.
• A reservation of 1,000MHz might be generous
for a 1-VCPU VM, but not for a 4-VCPU VM.
Shares
• More shares means this VM will win
competitions for physical CPU time more often.
ESX features three virtual machine CPU resource controls that are used to
tune virtual machine behavior. These CPU resource controls are dynamic in
that they can be modified while the virtual machine is powered on or off.
CPU “limit” defines the maximum amount of physical CPU, measured in
MHz, that a virtual machine is allowed.
CPU “reservation” defines the amount of physical CPU, measured in MHz,
reserved for this virtual machine at power up. As long as a virtual machine
is not using its total reservation, the unused portion is available for use by
other virtual machines. The VMkernel will not allow a virtual machine to
power on unless it can guarantee its CPU reservation.
Each virtual machine is assigned a number of CPU “shares.” The more
shares a virtual machine has relative to other virtual machines, the more
often it gets CPU time above its reservation when there is contention for
CPU resources.
Supporting Higher Consolidation Ratios (1)
Slide 3-8
Virtual memory
• Memory mapped by an application inside the guest operating system
Physical memory
• ESX presents the virtual machines and the service console with physical pages.
• Identical pages might be shared by multiple virtual machines.
Virtual Machine Memory Resource Controls
Slide 3-10
Available memory
• Memory size defined when the VM was created
Limit
• A cap on the consumption of physical memory by this VM, measured in MB
• Equal to available memory by default
Reservation
ESX features four virtual machine memory resource controls that are used
to tune a virtual machine’s behavior. Three of these memory resource
controls are dynamic and can be modified while the virtual machine is
powered on.
“Available memory,” measured in megabytes, is assigned to the virtual
machine when it is created. It is the total amount of memory presented by
the virtual machine to the guest operating system at boot-up. Available
memory cannot be changed while the virtual machine is powered on.
Memory “limit,” measured in megabytes, defines the maximum amount of
virtual machine memory that can reside in RAM. It never exceeds available
memory. By default, available memory and memory limit are set to the
same value.
Memory “reservation,” measured in megabytes, is the amount of RAM
reserved by the VMkernel for the virtual machine at power-on. As long as a
virtual machine has not used its total reservation, the unused portion is
available for use by other virtual machines. The VMkernel will not allow a
virtual machine to power on, unless it can guarantee the memory
reservation.
Each virtual machine is assigned a number of memory “shares.” The more
shares a virtual machine has relative to other virtual machines, the more
often it is allocated RAM above its reservation when there is memory
contention.
The VMkernel might use disk space as virtual machine virtual memory in
unusual circumstances. The reserved disk space is calculated per virtual
machine, using the difference between the memory limit and the memory
reservation.
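The swap-file calculation described above is simple arithmetic; a short sketch follows. The MB values here are hypothetical examples, not figures from the course.

```python
def swap_reservation_mb(limit_mb: int, reservation_mb: int) -> int:
    """Disk space the VMkernel sets aside per VM as swap: the gap
    between the memory limit and the guaranteed memory reservation."""
    if reservation_mb > limit_mb:
        raise ValueError("reservation cannot exceed limit")
    return limit_mb - reservation_mb

# A VM with a 2048 MB limit and a 512 MB reservation needs a 1536 MB swap file.
print(swap_reservation_mb(2048, 512))
```

Note that a VM whose reservation equals its limit needs no swap space at all, which is one way administrators avoid VMkernel swapping for critical workloads.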
Using Resource Pools to Meet Business Needs
Slide 3-11
• A resource pool is a logical abstraction for hierarchically managing CPU and memory resources.
• Configurable on a standalone ESX host or a DRS-enabled cluster
Resource pools provide a business with the ability to divide and allocate
CPU and memory resources hierarchically as required by business need.
Reasons to divide and allocate CPU and memory resources include such
things as maintaining administrative boundaries, enforcing charge-back
policies, or accommodating geographic locations or departmental divisions.
It is possible to further divide and allocate resources by creating child
resource pools.
Configuring CPU and memory resource pools is possible only on
nonclustered ESX hosts or on VMware® DRS-enabled clusters.
Clusters are indicated in the inventory with pie chart icons.
Resource pools have CPU and memory resource controls that behave like
virtual machine CPU and memory controls. Resource pool resource controls
can be modified while virtual machines are running.
CPU and memory limits define the maximum amount of CPU or RAM a
resource pool is allowed.
CPU and memory reservations define the amount of CPU or RAM reserved
for the resource pool when it is created. The VI Client interface will not
allow resource pool creation unless the reservation can be guaranteed.
Each resource pool is assigned a number of CPU and memory shares. The
more shares a resource pool has relative to other resource pools (and,
possibly, virtual machines), the more often it is allocated CPU and memory
resources above its reservations during periods of contention.
Expandable reservations allow a resource pool with insufficient capacity to
borrow CPU or memory resources from a parent pool to satisfy reservation
requests from a child resource pool or virtual machine. Requests to borrow
resources proceed up the pool hierarchy until the top level is reached or a
pool with no expandable reservations is encountered. Expandable
reservations provide great flexibility but have the potential to be abused.
Viewing Resource Pool Information
Slide 3-13
Use the resource pool's Resource Allocation tab to view configuration and current usage information for virtual machines and child pools.
Example: two sibling resource pools:
• Engineering: CPU Shares 1,000 (~33% of physical CPU)
• Finance: CPU Shares 2,000 (~67% of physical CPU)
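The share arithmetic in the Engineering/Finance example can be sketched in a few lines of Python. The pool names and share counts come from the slide; the function itself is an illustration of proportional-share allocation, not VMware code.

```python
def share_fractions(shares):
    """Under contention, each sibling pool's CPU entitlement is its
    share count divided by the total shares of all siblings."""
    total = sum(shares.values())
    return {name: count / total for name, count in shares.items()}

pools = {"Engineering": 1000, "Finance": 2000}
for name, frac in share_fractions(pools).items():
    print(f"{name}: ~{frac:.0%} of physical CPU")
# Engineering: ~33% of physical CPU
# Finance: ~67% of physical CPU
```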
Admission Control for CPU and Memory Reservations
Slide 3-15
• Can this pool satisfy the reservation? If yes, the request is admitted.
• If not, is the pool's reservation expandable? If yes, ask the parent pool; if no, the request fails.
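The walk up the pool hierarchy for expandable reservations can be sketched as follows. The pool names and MHz capacities are hypothetical, and real admission control tracks CPU and memory separately; this is only a model of the decision logic.

```python
class Pool:
    def __init__(self, name, unreserved_mhz, expandable=False, parent=None):
        self.name = name
        self.unreserved_mhz = unreserved_mhz  # capacity not yet promised away
        self.expandable = expandable
        self.parent = parent

def admit(pool, request_mhz):
    """Satisfy a reservation from this pool, or borrow the shortfall from
    the parent if the pool's reservation is expandable; otherwise fail."""
    if pool.unreserved_mhz >= request_mhz:
        pool.unreserved_mhz -= request_mhz
        return True
    if pool.expandable and pool.parent is not None:
        shortfall = request_mhz - pool.unreserved_mhz
        if admit(pool.parent, shortfall):
            pool.unreserved_mhz = 0
            return True
    return False

root = Pool("Root", 4000)
dev = Pool("Dev", 500, expandable=True, parent=root)
qa = Pool("QA", 500, parent=root)          # not expandable

print(admit(dev, 1200))   # True: Dev borrows the 700 MHz shortfall from Root
print(admit(qa, 1200))    # False: QA cannot expand, so the request fails
```

The recursion mirrors the rule in the text: requests proceed up the hierarchy until the top level is reached or a pool without an expandable reservation is encountered.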
MODULE 4
Importance
VMotion is a valuable tool for delivering higher service levels and
improving overall hardware utilization.
Lesson Topics
• Understand the VMotion migration process
• VMotion migration
• VMotion compatibility requirements
VMotion Migration
Slide 4-5
How VMotion Works (1)
Slide 4-6
(Diagram: source and target ESX hosts connected by the VMotion network and the production network; a memory bitmap tracks pages modified during the copy.)
The virtual machine’s memory state is copied over the VMotion network
from the source to the target ESX host. While the virtual machine’s memory
is being copied, users continue to access the virtual machine and can update
pages in the source ESX host memory. A list of any modified memory
pages is kept in a memory bitmap on the source ESX host.
How VMotion Works (3)
Slide 4-8
After most of the virtual machine’s memory is copied from the source to the
target ESX host, the virtual machine is quiesced: this means that the virtual
machine is temporarily placed in a state where no additional activity will
occur. This is the only time in the VMotion procedure in which the virtual
machine is unavailable. Quiescence typically lasts approximately one
second. During this period, VMotion begins to transfer the virtual machine
state to the target ESX host. The virtual machine device state and the
memory bitmap containing the list of pages that have changed are also
transferred during this time.
If a failure occurs during the VMotion migration, the virtual machine being
migrated is failed back to the source ESX host. For this reason, the source
virtual machine is maintained until the virtual machine on the target ESX
host starts running.
The remaining memory identified in the bitmap is copied from the source to
the target ESX host.
How VMotion Works (5)
Slide 4-10
• Start VM A on esx02.
Immediately after the virtual machine is quiesced on the source ESX host,
the virtual machine on the target ESX host is initialized and starts running.
A virtual machine’s entire network identity, including MAC and IP address,
is preserved during a VMotion.
To update the physical switch port, the VMkernel sends a Reverse Address
Resolution Protocol (RARP) request with the virtual machine’s MAC
address to the physical network.
The original virtual machine is finally deleted from the source ESX host.
Users now access the virtual machine on the target ESX host.
ESX Host Requirements for VMotion
Slide 4-12
• CPUs: same vendor and compatible features
Lab for Module 4
Slide 4-14
MODULE 5
Importance
VMware® DRS-enabled clusters assist your system administration staff by
providing automated resource management for multiple ESX hosts. Less
management and more efficient use of existing hardware resources reduces
costs.
• Cluster
•A collection of ESX hosts and
associated virtual machines
• DRS-enabled cluster
• Uses VMotion to balance workloads across ESX hosts
• Enforces resource policies accurately (reservations, limits, shares)
• Respects placement constraints
• Affinity rules and VMotion compatibility
• Managed by VirtualCenter
Create a DRS Cluster
Slide 5-6
A DRS cluster is configured using a wizard in the VI Client. The user is
prompted to provide the cluster a unique name and enable it for DRS.
Once DRS has been enabled on a cluster, new configuration choices appear
in the wizard’s left-side menu. These choices include VMware DRS, Rules,
Virtual Machine Options, and Power Management.
The VMware DRS menu option is where the cluster-wide automation level is configured. The cluster-wide automation level affects how DRS performs its two main functions: initial placement of virtual machines at power-on and ongoing load balancing across the cluster. The automation levels behave as follows:
Manual When a virtual machine is powered on, DRS
displays a star-ranked list of the ESX servers
based on their current CPU and memory
utilization. The user selects which ESX server to
use. When the workloads across the ESX servers
in the DRS cluster become unbalanced, DRS
displays a ranked list of VMotion
recommendations.
Partially automated When a virtual machine is powered on, DRS
automatically places it on the best-suited ESX
server. When the workloads across the ESX
servers in the DRS cluster become unbalanced,
DRS displays a ranked list of VMotion
recommendations.
Fully automated When a virtual machine is powered on, DRS automatically places it on the best-suited ESX server. When the workloads across the ESX servers in the DRS cluster become unbalanced, DRS automatically performs the recommended VMotion migrations.
To add an ESX host to a DRS cluster, you drag and drop the ESX host onto
the cluster icon in the VirtualCenter inventory. Supply the requested
information when prompted by the Add Host wizard.
Adding ESX Hosts to a DRS Cluster (2)
Slide 5-9
When adding the host, choose to create a new resource pool for this host's virtual machines and resource pools.
Adjusting DRS Operation for Performance or HA
Slide 5-11
• Affinity rules
• Run virtual machines on the same ESX host.
• Use for multi-VM systems where performance benefits.
• Anti-affinity rules
• Run virtual machines on different ESX hosts.
• Use for multi-VM systems that load-balance or require high availability.
MODULE 6
Monitoring Virtual Machine Performance
Importance
Although the VMkernel and VirtualCenter work proactively to avoid resource contention, maximizing and verifying performance levels requires both analysis and ongoing monitoring.
Lesson Topics
• Virtual machine performance graphs
• Monitoring a virtual machine’s usage of the following:
• CPU
• Memory
• Disk
• Network
Instructor note: Provide students with a brief overview of the features of VirtualCenter's performance graphs. Specifically, point out the graph, the legend, the Options link, saving as .csv, and the tear-off chart. Be sure to explain the relationship between the graph and the legend.
VirtualCenter features performance graphs for VMware® ESX hosts, virtual machines, clusters, and resource pools. ESX host and virtual machine performance graphs display information about CPU, memory, disk I/O, and network I/O usage. Cluster and resource pool performance graphs display only CPU and memory usage.
Performance graphs provide an easy method to quickly display numerous performance data points. To view a performance graph, select an object in the VirtualCenter inventory and use its Performance tab. Graphs can display real-time data or historical data for the past day, week, month, or year.
It is possible to export a comma-separated-value file (.csv) using the
performance graph interface. The .csv file may be imported into programs
such as Microsoft Excel.
Many performance graphs offer not only a wide choice of data types to
display but also a choice in the type of graph to display. This flexibility
allows large amounts of data to be more easily viewed and interpreted,
resulting in better decisions.
The ability to display real-time data allows an enterprise to react to
situations as they occur. Capturing up to a year of performance data
provides information for trend analysis to better plan for the future.
Example CPU Performance Issue Indicator
Slide 6-6
• Ready Time
• The amount of time the virtual machine is ready to run
but cannot, because there is no available physical CPU
• High ready time indicates possible contention.
Understanding both ESX host operation and the data types displayed by performance graphs is critical to properly interpreting the information presented.
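As an illustration of the ready-time indicator above, a monitoring script might flag contention when accumulated ready time exceeds some fraction of the sampling interval. The 10% threshold, the 20-second interval, and the VM names below are illustrative assumptions, not values prescribed by the course.

```python
def flag_cpu_contention(ready_ms_by_vm, interval_s=20.0, threshold=0.10):
    """Flag VMs whose accumulated CPU ready time exceeds the given
    fraction of the sampling interval, a common sign of contention."""
    limit_ms = interval_s * 1000.0 * threshold
    return [vm for vm, ready_ms in ready_ms_by_vm.items() if ready_ms > limit_ms]

# With a 20 s interval, the 10% limit is 2000 ms of ready time.
print(flag_cpu_contention({"web01": 3500, "db01": 800}))   # ['web01']
```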
Supporting Higher Consolidation Ratios
Slide 6-8
• The VMware Tools vmmemctl balloon driver supports higher
consolidation ratios.
• VMware Tools is installed in the guest operating systems.
• Deallocate memory from selected virtual machines when
machine memory (RAM) is scarce.
• Ample memory: the balloon remains uninflated.
• Inflate balloon (the driver demands memory from the guest OS): the guest is forced to page out to its own paging area, and the VMkernel reclaims memory.
• Deflate balloon (the driver relinquishes memory): the guest may page back in, and the VMkernel grants memory.
The vmmemctl driver is another ESX feature that supports efficient use of RAM and higher consolidation ratios. It is informally called the balloon driver.
Are VMs Being Disk Constrained?
Slide 6-10
• Disk-intensive applications can
saturate the storage or the path.
• If you suspect that a VM is
constrained by disk access:
• Measure the resource consumption
using performance graphs
• Measure the effective bandwidth
between VM and the storage
• To improve disk performance:
• Ensure VMware Tools is installed.
• Reduce competition.
• Move other VMs to other storage.
• Use other paths to storage.
• Reconfigure the storage.
• Ensure that the storage's configuration (RAID level, cache configuration, etc.) is appropriate.
Above is an example of using VirtualCenter to monitor virtual machine disk I/O. Monitoring virtual machine disk I/O is useful as an early indicator of disk performance problems.
Lab for Module 6
Slide 6-12
MODULE 7
VirtualCenter Alarms
Importance
VirtualCenter alarms proactively monitor VMware® ESX and virtual
machine performance. Alarms allow your system administrators to be more
responsive to changes in the datacenter. Alarms send notifications when
either the ESX host or the virtual machine state changes or user-defined
thresholds are exceeded.
Lesson Topics
• ESX host–based alarms
• Virtual machine–based alarms
• VirtualCenter SMTP and SNMP configuration
You can group several ESX hosts or clusters into a folder and apply an alarm to that folder.
When you right-click a virtual machine and choose Add Alarm, the resulting window has four panels. You use the General panel to name the alarm. You use the Triggers panel to control which load factors are monitored and what the thresholds for the yellow and red states are. The Reporting and Actions panels are discussed in upcoming slides.
• Name and describe the new alarm.
• Click any field to modify.
• Trigger values are percentages or states (connected, disconnected, not responding).
The dialog box displayed when you right-click on an ESX host and choose
Add Alarm is very similar to that for a virtual machine. The key difference
is the list of available triggers.
Available only for VM-based alarms
You can specify actions to occur when an alarm is triggered (other than
simply displaying it in the VI Client). These actions include the following:
• Sending a notification email
• Sending a notification trap
• Running a script
• Powering on a virtual machine
• Powering off a virtual machine
• Suspending a virtual machine
• Resetting a virtual machine
• Avoids threshold repeat alarms
• Avoids state-change repeat alarms
Use the controls on the Reporting pane to avoid a flood of repeated alarms.
• Click SNMP to specify trap destinations.
If you want to transmit SNMP or email alarms, you must supply the IP
address of the destination server.
If your SNMP community string is not public, specify it here.
Specify the email address to be used for the From address of email alerts.
MODULE 8
VMware HA
Importance
Services that are highly available are important to any business.
Configuring VMware® HA can increase service levels.
Lesson Topics
• Architecture of VMware HA
• VMware HA prerequisites
• Clustering virtual machines using VMware HA
• Admission control
• Restart priorities
• Isolation response
VMware HA
• A VirtualCenter feature
• Configuration, management, and monitoring done
through the VI Client
• Automatic restart of virtual machines in case of
physical ESX server failures
• Not VMotion
• Provides higher availability while reducing the need
for passive standby hardware and dedicated
administrators
• Provides restart capability to a range of applications
not configurable under MSCS
• Provides experimental support for per-VM failover
Virtual Machine Failure Monitoring
An additional VMware HA function called virtual machine failure
monitoring allows VMware HA to monitor whether a virtual machine is
available or not. VMware HA uses the heartbeat information that VMware
Tools captures to determine virtual machine availability.
On each virtual machine, VMware Tools sends a heartbeat every second.
Virtual machine failure monitoring checks for a heartbeat every 20 seconds.
If heartbeats have not been received within a specified (user-configurable) interval, virtual machine failure monitoring declares the virtual machine failed and resets it.
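The timing logic amounts to a simple timeout check, as the sketch below shows. The 30-second failure interval is an illustrative value for the user-configurable setting, not a documented default.

```python
def vm_failed(last_heartbeat_s, now_s, failure_interval_s=30.0):
    """A VM is declared failed when no VMware Tools heartbeat has been
    received within the user-configurable failure interval."""
    return (now_s - last_heartbeat_s) > failure_interval_s

# Heartbeats arrive every second from a healthy guest.
print(vm_failed(last_heartbeat_s=99.0, now_s=100.0))   # False
# A long silence triggers a reset of the virtual machine.
print(vm_failed(last_heartbeat_s=60.0, now_s=100.0))   # True
```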
Virtual machine failure monitoring can distinguish between a virtual
machine that was powered on but has stopped sending heartbeats and a
virtual machine that is powered-off, suspended, or migrated.
Virtual machine failure monitoring is experimental and not supported for
production use. By default, virtual machine failure monitoring is disabled
but can be enabled by editing the VMware HA Virtual Machine Options in
the VI Client.
Module 8 VMware HA 87
VMware HA Prerequisites
Slide 8-7
Enable VMware HA by selecting the check box.
Add an ESX Host to the Cluster
Slide 8-9
• Drag and drop
ESX host onto
cluster and …
• Use the Add Host
Wizard to complete
the process.
• Consider
configuring enough
redundant ESX
host capacity to
restart virtual
machines.
To add an ESX host to the cluster, you drag and drop the existing standalone
server into the HA cluster, then use the Add Host wizard to complete the
process.
• Cluster-wide settings. Per-VM settings for each are also available.
• Can prevent human error from starting more VMs than can be restarted.
Restart priority is based on the criticality of virtual machines.
For example, in a Windows environment, DNS and domain controllers
would normally be specified as the highest restoration priority, due to other
servers depending on those infrastructure services.
This priority decision may be influenced if you have redundant DNS and
domain controller elements that are forced to be resident on different servers
at all times, such as if an anti-affinity rule is applied at a DRS level. Note
that this will not prevent someone from manually invoking migrations that
cause these virtual machines to be on the same ESX host.
There are also some virtual machines that are not essential in the event of a failure and that may be excluded from restoration. If a failure leaves the HA cluster with drastically reduced resources, shedding these less-essential consumers reduces contention for the limited resources that remain.
You set low, medium, and high restart priorities to customize failover ordering. The default is medium. High-priority virtual machines are restarted first. Nonessential virtual machines should be set to Disabled (automated restart will skip them).
You can set the default response in the event that an ESX host becomes
isolated. You can choose to do the following:
• Leave the virtual machines powered on
• Power off the virtual machines
This setting can be specified at the cluster level and on a per-virtual machine basis, as the following page illustrates.
The user can also determine whether to power down the virtual machines or leave them powered on.
• First example: only 8 VMs could run and still be restarted.
• Second example: only 4 VMs could run and still be restarted.
NOTE
Both of these examples assume that all virtual machines require the same
amount of resources.
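Under that uniform-VM assumption, the capacity arithmetic is simple. The host counts and per-host slot counts below are hypothetical values chosen to reproduce the 8-VM and 4-VM results; the slides' actual host configurations are not reproduced here.

```python
def max_vms_with_failover(hosts, vm_slots_per_host, host_failures_tolerated=1):
    """With identical hosts and identically sized VMs, only as many VMs may
    run as the hosts surviving the tolerated failures could restart."""
    surviving = hosts - host_failures_tolerated
    return surviving * vm_slots_per_host

print(max_vms_with_failover(hosts=3, vm_slots_per_host=4))  # 8
print(max_vms_with_failover(hosts=2, vm_slots_per_host=4))  # 4
```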
Maintain Business Continuity if ESX Hosts Become Isolated
Slide 8-12
• A network failure might cause a “split-brain” condition.
• VMware HA waits 15 seconds by default before deciding that an ESX host is isolated.
A different isolation address can be specified by using the advanced HA option das.isolationaddress.
A different isolation response time can also be specified by using the advanced HA option
das.failuredetectiontime. These are cluster-wide settings, which can be set in the Advanced Options
menu of the VMware HA properties.
• Cause VMware HA to restart virtual machines following the “crash” of an ESX host.
MODULE 9
Importance
VMware’s latest products and features take full advantage of the
groundbreaking mobility and manageability characteristics of virtual
machines explored in the previous modules to deliver scalable, repeatable,
and efficient IT processes.
Lesson Topics
• Standardizing on virtualization
• VMware Infrastructure 3 products and features
Course Flow
The intent of this final module is to provide only a very brief introduction to the VMware VI3 product line. Many of these products leverage the core VI3 features introduced in the earlier modules. This module does not attempt to provide technical details of the products. It is meant only to inform customers that these products exist and provide a few facts about each product. If the customer is interested in learning more, consider it an opportunity for sales to get involved.
VI3 Product and Feature Overview
(Chart: customer adoption phases from 2003 to 2006: proof of concept, departmental rollout, expanded rollout, standardization.)
Over the last few years, virtualization has gone from a technology being tried out in test/dev to a production server consolidation technology. It is now gaining momentum as the industry-standard way of computing.
Early adopters of our technology used the hypervisor for basic partitioning. As VMware technology matured and provided means to aggregate multiple virtualized nodes and centralized management, customers rolled it out into mainstream production environments.
As our customers and our technology matured, virtualization began to go far
beyond its original use for server consolidation and live migration of virtual
machines. Ensuring availability and uptime helped customers achieve better
service levels. Using virtualization for business continuity and disaster
recovery helped customers achieve better recovery time objectives (RTOs)
and recovery point objectives (RPOs) at a fraction of the cost. With the end-
to-end management and automation capabilities available from VMware, it
became very easy for customers worldwide to make VMware virtualization
the default in the datacenter.
Forty-three percent of customers surveyed last year said that their default
policy for all or most new machines was a virtual machine.
Standardizing on VMware Infrastructure
Slide 9-6
• It takes more than just a hypervisor layer to create and successfully manage a virtual infrastructure.
[Diagram: use cases such as test & development, server consolidation, and high availability layered on virtual infrastructure, infrastructure management, and management & automation]
This graph illustrates the typical customer adoption phases:
• Proof of concept
• Departmental rollout
• Expanded rollout
• Standardization
Who is this large wireless technology company? Answer: Qualcomm
This company started a proof of concept in the first half of 2003. It has since implemented a VMware-first policy (that is, it has standardized on VMware for x86 workloads). Today, 60 percent of its x86 environment is virtualized (of 1,900 total servers, 1,150 are virtualized).
The number of physical servers has grown from 950 to 1,900 over the past 2.5 years. Because of the much simplified provisioning with virtualization, the company has been able to maintain the same number of server administrators. It provisions 68 new virtual machines per month. This would be impossible in the physical world without dramatic staffing increases. This means that the number of physical servers a single system administrator can manage has more than doubled. This translates into substantial operational savings for the company.
The VMware Infrastructure 3 suite is built in layers:
• Management & Automation: Update Manager, Virtual Desktop Manager, Enterprise Converter, Lab Manager, Guided Consolidation, Lifecycle Manager, Site Recovery Manager
• Virtual Infrastructure: VMware DRS, VMware HA, VMware Consolidated Backup, Distributed Power Management, VMotion, Storage VMotion
• Virtualization Platform: ESX hypervisor, ESXi hypervisor, VMFS, VSMP
Lab Manager: Automates the setup, capture, storage, and sharing of multimachine system configurations
Lifecycle Manager: Implements a consistent, automated workflow for provisioning, operating, and decommissioning virtual machines
VMware DRS: Monitors utilization continuously across resource pools and intelligently allocates available resources among the virtual machines based on predefined rules that reflect business needs and changing priorities
VMware HA: Delivers cost-effective high availability for any application running in a virtual machine, regardless of its operating system or underlying hardware configuration
VMotion: Migrates running virtual machines from one ESX host to another with no disruption
VMware Consolidated Backup: Enables LAN-free backup of virtual machines from a centralized proxy server
Distributed Power Management: Minimizes power consumption by consolidating workloads onto fewer ESX hosts while guaranteeing service levels
Storage VMotion: Performs live migration of virtual machine disk files across storage arrays with no disruption in service for critical applications
ESX: Forms the robust foundation of the VMware Infrastructure 3 suite
ESXi Hypervisor
Slide 9-9
New ESXi servers can be brought online in minutes:
1. Power on the server and boot into the ESXi hypervisor.
2. Configure an administrator password.
3. Optionally modify the network configuration.
4. Connect VI Client to the IP address (or manage with VirtualCenter).
Companies can quickly bring new ESXi servers online. All that is required
to do so is to power on the server, boot into the ESXi hypervisor, configure
an administrator password, optionally modify the network configuration,
and connect to the server through either the VI Client or VirtualCenter.
ESXi enables companies to quickly add additional computing resources to
their virtual infrastructure.
Additional VI Layer Products and Features
Slide 9-11
VCB offloads the work associated with performing backups. This leaves the computing power of ESX hosts available for running virtual machines. And VCB can bypass the local area network when performing backups, so network performance is not affected.
Recent changes to VCB that make it more attractive to the SMB market space:
• In addition to supporting SAN, VCB now supports iSCSI, NAS, and locally attached storage
(released in 3.0.2).
• VCB can run in a virtual machine, thereby eliminating the need for a dedicated backup proxy
server.
VMware Converter can be used to restore VCB images (released in 3.0.1). This provides a simple
graphical technique to restore virtual machines from tape and return them to operation in VI3.
Distributed Power Management (Experimental)
Slide 9-14
• Consolidates workloads onto fewer servers when the cluster needs fewer resources
• Places unneeded servers in standby mode
• Brings servers back online as workload needs increase
• Minimizes power consumption while guaranteeing service levels
• No disruption or downtime to virtual machines
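The consolidation decision can be thought of as a simple capacity calculation: keep enough hosts powered on to cover current demand at a safe utilization level, and place the rest in standby. The sketch below is a simplified illustration of that idea only, not VMware's actual DPM algorithm; the function and its parameters are hypothetical.

```python
# Simplified illustration of the DPM idea (not VMware's actual algorithm):
# power down as many hosts as possible while powered-on capacity still
# covers cluster demand at a safe utilization target.

def plan_standby(host_capacities, cluster_demand, target_utilization=0.8):
    """Return (hosts to keep powered on, hosts to place in standby).

    host_capacities: dict of host name -> capacity (e.g., CPU MHz)
    cluster_demand: current aggregate demand, in the same units
    target_utilization: powered-on capacity * target must cover demand
    """
    # Consider the largest hosts first so as few as possible stay on.
    ordered = sorted(host_capacities, key=host_capacities.get, reverse=True)
    powered_on, capacity = [], 0
    for host in ordered:
        # Stop once at least one host is on and capacity covers demand.
        if powered_on and capacity * target_utilization >= cluster_demand:
            break
        powered_on.append(host)
        capacity += host_capacities[host]
    standby = [h for h in ordered if h not in powered_on]
    return powered_on, standby
```

For example, a four-host cluster with light demand would keep only enough hosts on to cover that demand and place the others in standby, bringing them back as demand grows.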
Storage VMotion
Slide 9-15
• Storage-independent migration of virtual machine disks
• Zero downtime to virtual machines
• LUN-independent
• Supported for Fibre Channel SANs
• Storage array migration
• Storage I/O optimization
Management and Automation Layer Products
Slide 9-16
• Management & Automation: Update Manager, Virtual Desktop Manager, Enterprise Converter, Lab Manager, Guided Consolidation, Lifecycle Manager, Site Recovery Manager
Update Manager and DRS
Slide 9-18
• Update Manager patches entire DRS clusters.
• Each host in the cluster enters DRS maintenance mode, one at a time.
• VMs are migrated off. The host is patched and rebooted if required.
• VMs are migrated back on.
• The next host is selected.
• Automates patching of large numbers of hosts with zero downtime to virtual machines
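The rolling remediation loop described above can be sketched in Python. The helper names evacuate, patch, and restore are hypothetical stand-ins for the maintenance-mode, patching, and VMotion operations, not Update Manager APIs:

```python
# Sketch of rolling cluster remediation: one host at a time, VMs are
# migrated off, the host is patched (and rebooted if required), then
# the VMs are migrated back before the next host is selected.

def remediate_cluster(hosts, evacuate, patch, restore):
    """Patch every host in the cluster with no VM downtime.

    evacuate(host): enter maintenance mode, VMotion VMs off; returns the VMs
    patch(host):    apply patches, rebooting if required
    restore(host, vms): VMotion the VMs back onto the host
    """
    for host in hosts:              # next host is selected only after
        vms = evacuate(host)        # the previous one is fully restored
        patch(host)
        restore(host, vms)
```

Because each host is fully evacuated before patching and repopulated before the loop moves on, virtual machines keep running throughout.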
Guided Consolidation
Slide 9-20
• Automatically discovers physical servers
• Analyzes utilization and usage patterns
• Converts physical servers to VMs placed intelligently based on user response
• Lowers training requirements for new virtualization users
• Guides users through the entire consolidation process in a tutorial-like fashion
Until now, keeping recovery plans and the runbooks that documented them
accurate and up-to-date has been practically impossible because of the
complexity of plans and the dynamic environment in today’s datacenters.
Adding to that challenge, traditional solutions do not offer a central point of
management for recovery plans and make it difficult to integrate the
different tools and components of disaster recovery solutions.
VMware Site Recovery Manager simplifies and centralizes the creation and
ongoing management of disaster recovery plans. Site Recovery Manager
turns traditional oversized disaster recovery runbooks into automated plans
that are easy to manage, store, and document. And Site Recovery Manager
is tightly integrated with VMware Infrastructure 3, so you can create,
manage, and update recovery plans from the same place that you manage
your virtual infrastructure.
Testing disaster recovery plans and ensuring that they are executed correctly
are critical to making recovery reliable. However, testing is difficult with
traditional solutions because of the high cost, complexity, and disruption
associated with tests. Another challenge is ensuring that staff are trained and
prepared to successfully execute the complex process of recovery.
Site Recovery Manager helps you overcome these obstacles by enabling
realistic, frequent tests of recovery plans and eliminating common causes of
failures during recovery.
Site Recovery Manager provides built-in capabilities for executing realistic,
nondisruptive tests without the cost and complexity of traditional disaster
recovery testing. Because the recovery process is automated, you can also
ensure that the recovery plan will be carried out correctly in both testing and
failover scenarios.
Site Recovery Manager leverages VMware Infrastructure to provide hardware-independent recovery, ensuring successful recovery even when recovery hardware is not identical to production hardware.
VMware Converter Enterprise Capabilities
Slide 9-23
Converter Enterprise performs remote as well as local conversions. This decreases the time and effort required in large-scale virtualization implementations.
Remote conversions are accomplished by the Converter Server downloading a Converter agent to
the source system.
Local conversions are accomplished by booting the source system from the Converter CD.
• Provision new environments quickly: test, development, support
VMware Lab Manager provides the ability to automate the setup, capture,
storage, and sharing of multimachine software configurations. Development
and test teams can access them on demand through a self-service, Web-
based portal. With its shared library and shared pool of virtualized servers
and templates, VMware Lab Manager lets you efficiently move and share
multimachine configurations across software development and test teams
and facilities.
VMware Lab Manager provides the ability to do the following:
• Allocate resources as needed instead of maintaining multiple static
systems that are only used sporadically. VMware Lab Manager lets you
pool and share resources between development and test teams for
maximum utilization—and increased cost savings.
• Provision new machines nearly instantly with VMware Lab Manager.
This eliminates the painstaking, multihour process of gathering
machines, installing operating systems, installing and configuring
applications, and establishing intermachine connections. Now software
developers and QA engineers can fulfill their own provisioning needs,
leaving IT in control of user management, storage quotas, and server
deployment policies—achieving the best of both worlds.
• Quickly reproduce software defects and resolve them earlier in the
software lifecycle—and ensure higher quality software and systems.
VMware Lab Manager enables “closed loop” defect reporting and
resolution through its unique ability to snapshot complex multimachine
configurations in an error state, capture them to the library, and make
them available for sharing—and troubleshooting—across development
and test teams.
You can give your outsourced partners secure, remote access to your
software lab—and maintain your flexibility to rapidly add, remove, or
replace outsourced resources as your needs change. Your intellectual
property remains securely in Lab Manager’s environment, and you
eliminate time-consuming and costly replication of equipment in your
partners’ labs.
Lifecycle Workflow Management
Slide 9-26
[Diagram: VM lifecycle workflow, including Request for VM, Provisioning, VM Tracking, Decommission, Archive or Delete, and Retire]
support the request. The user can log back in to Lifecycle Manager at any
time to check on the request status.
In summary, the VMware Infrastructure 3 suite comprises three layers:
• Management & Automation: Update Manager, Virtual Desktop Manager, Enterprise Converter, Lab Manager, Guided Consolidation, Lifecycle Manager, Site Recovery Manager
• Virtual Infrastructure: VMware DRS, VMware HA, VMware Consolidated Backup, Distributed Power Management, VMotion, Storage VMotion
• Virtualization Platform: ESX, ESXi, VMFS, VSMP