VMware
Technical Solutions Professional
Student Study Guide - VTSP 5.5

Course 1 ....................................................................................................................... 10
Module 1: vSphere Overview ..................................................................................... 10
vSphere Product Overview ........................................................................................ 10
Course Objectives ...................................................................................................... 11
vSphere Overview ...................................................................................................... 12
Module 1 Objectives .................................................................................................. 13
VMware Vision ........................................................................................................... 14
vSphere 5.5 Architecture ........................................................................................... 16
vSphere 5.5 Virtualization Layer ................................................................................ 18
Physical Topology of a vSphere 5.5 Data Center ...................................................... 19
Introduction to vSOM ................................................................................................. 20
vSphere with Operations Manager Overview ............................................................. 21
vCenter Operations Manager: Quick Facts ................................................................ 27
Learn More: vSOM Training ....................................................................................... 33
Module Summary ....................................................................................................... 35
Module 2: vSphere Infrastructure and Hypervisor Components ............................ 36
vSphere Infrastructure and Hypervisor Components ................................................. 36
Module 2 Objectives .................................................................................................. 37
vSphere Distributed Services - vSphere vMotion ...................................... 38
vSphere Distributed Services - vSphere Storage vMotion ........................ 39
vSphere Distributed Services - vSphere High Availability ......................... 40
vSphere Distributed Services - vSphere Fault Tolerance .......................... 42
vSphere Distributed Services - vSphere DRS ........................................... 43
vSphere Distributed Services - vSphere Storage DRS ............................. 44
vSphere Distributed Services - vSphere DPM........................................... 45
vSphere Replication ................................................................................................... 46
vSphere Networking - Network Architecture.............................................. 47
vSphere Networking - vSphere Standard Switches................................... 48
vSphere Networking - vSphere Distributed Switches ................................ 49
Network I/O Control: An Overview ............................................................................. 50
vSphere Storage Architecture .................................................................................... 51
Virtual Machine File System....................................................................................... 52
Virtual Disks ............................................................................................................... 53
Storage I/O Control .................................................................................................... 54


vSphere Hypervisor 5.5 Architecture ......................................................................... 55


Licensing Requirements for vSphere features ........................................................... 57
Module Summary ....................................................................................................... 58
Module 3: Mapping vSphere Capabilities to Solutions ............................................ 59
Module Overview ....................................................................................................... 60
OPEX Savings Scenario ............................................................................................ 61
Shared Access Optimization Scenario ....................................................................... 64
Migrating to 10Gb Ethernet Scenario ......................................................................... 67
Data Recovery (DR) Scenario.................................................................................... 70
Business Critical Systems Scenario ........................................................................... 73
Course Review........................................................................................................... 76
Course 2 ....................................................................................................................... 77
VTSP V5.5 Course 2: VMware vSphere: vCenter ...................................................... 77
Course Objectives ...................................................................................................... 78
Module 1: vCenter Overview - Features and Topology ............................................ 79
Module Objectives ..................................................................................................... 80
What is VMware vCenter? ......................................................................................... 81
vCenter Installable and vCenter Appliance ................................................................ 83
vCenter's Components and Connectivity ................................................................... 85
vCenter License Versions .......................................................................................... 88
vSphere Client User Interface Options ....................................................................... 91
vCenter Infrastructure Management Features Overview ........................................... 94
Perfmon DLL in VMware Tools ................................................................................ 105
vCenter Statistics & Database Size Calculator ........................................................ 107
Finding and Retrieving Logs .................................................................................... 109
vCenter Support Assistant ....................................................................................... 110
Fill in the missing Components ................................................................................ 111
Which Client? ........................................................................................................... 112
Module Summary ..................................................................................................... 113
Course 2 Module 2: vCenter Server Design Constraints ........................ 114
Module Objectives ................................................................................................... 115
Configuration Maximums for vCenter ....................................................................... 116
Customer requirements for multiple sites ................................................................. 117
Databases ................................................................................................................ 119

Directory Services .................................................................................................... 121
Web Client Server .................................................................................................... 123
Network Connectivity Requirements ........................................................................ 124
Required Ports - vCenter Server ............................................................... 125
Plugin and Add-Ons ................................................................................................. 126
Service and Server Resilience ................................................................................. 128
vCenter Server Heartbeat ........................................................................................ 130
Environment Scaling for vCenter Server .................................................................. 133
Knowledge Check - vCenter Multisite Configuration ................................................ 136
vCenter Database Selection .................................................................................... 137
Module Summary ..................................................................................................... 139
Course 2 Module 3: vCenter Scalability Features and Benefits ........................... 140
Module Objectives ................................................................................................... 141
Presenting vMotion .................................................................................................. 142
Presenting HA .......................................................................................................... 144
Presenting DRS ....................................................................................................... 146
Presenting DPM ....................................................................................................... 148
Presenting FT .......................................................................................................... 149
Presenting Storage Distributed Resource Scheduler (SDRS).................................. 150
Presenting Host Profiles .......................................................................................... 152
Presenting Storage Profiles ..................................................................................... 153
Distributed Virtual Switches ..................................................................................... 155
Auto Deploy ............................................................................................................. 157
Planned Maintenance .............................................................................................. 160
vSphere Standard License ....................................................................................... 161
Module Summary ..................................................................................................... 162
Course 3 ..................................................................................................................... 163
VTSP V5.5 Course 3: VMware vSphere: VM Management ..................................... 163
Course 3 Objectives ................................................................................................. 164
Module 1: Virtual Machine Architecture .................................................................. 165
Module 1 Objectives ................................................................................................ 166
What is a virtual machine? ....................................................................................... 167
Virtual Machine Hardware ........................................................................................ 170
Configuration Maximums ......................................................................................... 172

Virtual Machine Licensing Considerations ............................................................... 174


Core Virtual Machine Files ....................................................................................... 176
Knowledge Check: VM Configuration Maximums .................................................... 179
VMware Tools .......................................................................................................... 180
Using Operations Manager for Better Performance and Capacity Utilization ........... 182
Customizing Virtual Machine Settings ...................................................................... 184
NICs ......................................................................................................... 185
vRAM ....................................................................................................................... 187
CPUs ....................................................................................................................... 189
SCSI ........................................................................................................................ 190
Knowledge Check: Virtual Machine Customization .................................................. 192
Hot Extending Virtual Disks...................................................................................... 193
Hot-adding Hardware ............................................................................................... 194
Hot-Add CPU and Memory ...................................................................................... 198
VMDirectPath I/O Generation .................................................................................. 199
Single Root I/O Virtualization (SR-IOV) Support ...................................................... 201
Raw Device Mapping (RDM) Overview .................................................................... 203
Knowledge Check: Core Virtual Machine Files ........................................................ 204
Module Summary ..................................................................................................... 205
Module 2: Copying and Migrating Virtual Machines .............................................. 206
Module 2 Objectives ................................................................................................ 207
Templates ................................................................................................................ 208
Template Contents................................................................................................... 209
Cloning a Virtual Machine ........................................................................................ 210
Cloned VMs and Templates Compared ................................................................... 211
Knowledge Check: VM Templates ........................................................................... 212
Snapshots: An Overview .......................................................................................... 213
What is captured in a Snapshot? ............................................................................. 214
Snapshot Relationships in a Linear Process ............................................................ 216
Snapshot Relationships in a Process Tree .............................................................. 217
Best Practices for VM Snapshots............................................................................. 218
Knowledge Check: VM Snapshot Best Practices ..................................................... 220
Options for Moving a Virtual Machine ...................................................................... 221
Importing and Exporting ........................................................................................... 223

Migration Overview .................................................................................................. 225
Cold Migration .......................................................................................................... 226
vMotion Migration ..................................................................................................... 227
Designing for vMotion .............................................................................................. 228
Storage vMotion ....................................................................................................... 230
Storage vMotion Uses .............................................................................................. 231
Storage vMotion Design Requirements and Limitations .......................................... 232
Enhanced vMotion ................................................................................................... 233
Microsoft Cluster Services Support .......................................................................... 236
Knowledge Check: Storage Design Requirements for Migration ............................. 239
Module Summary ..................................................................................................... 240
Module 3: vSphere Replication and vSphere Update Manager ............................. 241
Module 3 Objectives ................................................................................................ 242
Why should a customer consider vSphere Replication? .......................................... 243
vSphere Replication ................................................................................................. 245
Replication Appliance .............................................................................................. 247
vSphere 5.5 Replication Server Appliances ............................................................. 248
Replication Design Requirements and Limitations ................................................... 250
What is vSphere Data Protection (VDP)? ................................................................ 253
What is vSphere Data Protection (VDP) Advanced? ............................................... 255
VDP Advanced Key Components ............................................................................ 257
VDP Advanced Implementation ............................................................................... 258
Upsell to VDP Advanced .......................................................................................... 261
Update Manager: An Overview ................................................................................ 262
Update Manager Components ................................................................................. 264
Knowledge Check: Update Manager Components .................................................. 266
Knowledge Check: VDP Advanced Implementation ................................................ 267
Module Summary ..................................................................................................... 268
Course 4 ..................................................................................................................... 269
VTSP V5.5 Course 4: VMware vSphere: vNetworks ............................................... 269
Module 1: vSphere Networks Overview ................................................................... 271
Module 1 Objectives ................................................................................................ 272
Data Center Networking Architecture ....................................................................... 273
vSphere Networking Overview ................................................................................. 276

Standard Switch Architecture ................................................................................... 277


Virtual Switch Connection Examples ....................................................................... 282
Distributed Switch .................................................................................................... 283
Distributed Switch Architecture ................................................................................ 284
Third-Party Distributed Switches .............................................................................. 288
Network Health check .............................................................................................. 289
Network Health Check: Knowledge Check .............................................................. 291
Export and Restore .................................................................................................. 292
Automatic Rollback .................................................................................................. 293
Link Aggregation Control Protocol (LACP) ............................................................... 295
Distributed Switches Versus Standard Switches...................................................... 298
Migrating to Distributed Virtual Switches .................................................................. 301
Specific Licensing Requirements ............................................................................. 303
Module Summary ..................................................................................................... 304
Module 2: vSphere Networks: Advanced Features ................................................ 305
Private VLANs: Overview ......................................................................................... 307
Private VLANs: Architecture..................................................................................... 308
Private VLANs: An Example .................................................................................... 310
VLAN limitations....................................................................................................... 313
Virtual Extensible Local Area Network (VXLAN) ...................................................... 315
VXLAN Sample Scenario ......................................................................................... 317
Load Balancing and Failover Policies ...................................................................... 319
Load Balancing Policies ........................................................................................... 321
Traffic Filtering ......................................................................................................... 326
Differentiated Service Code Point Marking .............................................................. 328
Failover Policies ....................................................................................................... 330
Network I/O Control ................................................................................................. 336
Network I/O Control Features .................................................................................. 338
Course 5 ..................................................................................................................... 353
VTSP 5.5 Course 5 vStorage .................................................................................... 353
Course Objectives .................................................................................................... 354
Module 1: vSphere vStorage Architecture .............................................................. 355
Module 1 Objectives ................................................................................................ 356
The vStorage Architecture - Overview ..................................................................... 357

Virtual Machine Storage ........................................................................................... 359
LUN, Volume, and Datastore ................................................................................... 360
Virtual Machine Contents Resides in a Datastore .................................................... 362
Types of Datastores ................................................................................................. 363
VMFS Volume .......................................................................................................... 365
NFS Volumes ........................................................................................................... 367
New vSphere Flash Read Cache ............................................................................. 369
Storage Approaches ................................................................................................ 374
Isolated Storage or a Consolidated Pool of Storage? .............................................. 378
Virtual Machine and Host Storage Requirements .................................................... 380
VMDK Types - Thick and Thin Provisioning ............................................ 383
vSphere Thin Provisioning at Array and Virtual Disk Level ...................................... 389
Planning for Swap Space, Snapshots and Thin Provisioning................................... 390
Storage Considerations ........................................................................................... 392
Space Utilization-Related Issues ............................................................................. 394
Raw Device Mapping ............................................................................................... 397
RDM Compatibility Modes........................................................................................ 398
Uses for RDMs......................................................................................................... 399
Functionality Supported Using Larger VMDK and vRDMS ...................................... 400
VMDirectPath I/O ..................................................................................................... 401
iSCSI Storage Area Networks .................................................................................. 404
Network Attached Storage - NAS............................................................................. 406
VSA Enables Storage High Availability (HA) ............................................................ 407
VSA 5.5 Capacity ..................................................................................................... 409
Running vCenter on the VSA Cluster....................................................................... 410
Drive Types .............................................................................................................. 411
Storage Tradeoffs .................................................................................................... 418
Design Limits - Knowledge Check ........................................................................... 420
Virtual Storage Types - Knowledge Check .............................................................. 421
Thick Provisioning - Knowledge Check .................................................................... 422
Usable Capacity - Knowledge Check ....................................................................... 423
Module Summary ..................................................................................................... 424
Module 2: Advanced Features for Availability and Performance .......................... 425
Module 2 Objectives ................................................................................................ 426

Pluggable Storage Architecture ............................................................................... 427


Processing I/O Requests ......................................................................................... 428
Extending PSA ......................................................................................................... 429
Knowledge Check - PSA .......................................................................................... 431
Multipathing.............................................................................................................. 432
FC Multipathing ........................................................................................................ 433
iSCSI Multipathing ................................................................................................... 434
Storage I/O Resource Allocation .............................................................................. 436
Datastore Cluster Requirements .............................................................................. 439
vSphere Storage APIs - Storage Awareness (VASA) .............................................. 445
Knowledge Check - Storage Vendor Providers ........................................................ 447
Profile-Driven Storage .............................................................................................. 448
Knowledge Check - Storage I/O Control .................................................................. 450
Module 3: Determining Proper Storage Architecture ............................................. 453
Module Objectives ................................................................................................... 454
Performance and Capacity Scenario ....................................................................... 455
Snapshots, SDRS, Templates Scenario .................................................................. 459
Which Solution to Offer ............................................................................................ 462
Module Summary ..................................................................................................... 464

Course 1
Module 1: vSphere Overview

vSphere Product Overview


Welcome to the VTSP 5.5 Course 1 - VMware vSphere Product Overview. There are three
modules in this course, as shown here.

Course Objectives
At the end of this course you should be able to:

- Provide an overview of vSphere 5.5 and its new features.
- Describe the physical and virtual topologies of a vSphere 5.5 Data Center and
  explain the relationship between the physical components and the vSphere
  Virtual Infrastructure.
- Describe the features and capabilities of vSphere and explain their key
  benefits for a customer.
- Describe the vSphere Hypervisor Architecture and explain its key features,
  capabilities, and benefits.
- Map vSphere Components to Solution Benefits and identify Value Propositions.

vSphere Overview
This is module 1, vSphere Overview. These are the topics that will be covered in this
module.

Module 1 Objectives
At the end of this module you should be able to:

- Provide an overview of vSphere as part of VMware's Vision and Cloud
  Infrastructure Solution.
- Describe the physical and virtual topologies of a vSphere 5.5 Data Center.
- Provide an overview of vCenter Operations Manager.
- Describe the vApp architecture of vCenter Operations Manager.

VMware Vision
Before we discuss VMware Architecture, let's familiarize ourselves with the VMware
vision.
Our vision rests on three pillars: efficiency, automated quality of service, and
independence of choice. We aim to reduce capital and operational costs by over 50%
for all applications, automate quality of service, and remain independent of
hardware, operating systems, application stacks, and service providers.

VMware Vision
At VMware, our goal is to help businesses and governments move beyond IT as a Cost
Center to a more business-centric IT as a Service model. This new model of IT
creates improved approaches at each critical layer of a modern IT architecture:
Infrastructure, Applications, and End-User Access.


vSphere 5.5 Architecture


Now, let's look at vSphere 5.5 Architecture components and services, and how these
components fit into an existing data center environment.
Being a cloud operating system, vSphere 5.5 virtualizes the entire IT infrastructure:
servers, storage, and networks. It groups these heterogeneous resources and
transforms the rigid, inflexible infrastructure into a simple, unified, and manageable set of
elements in the virtualized environment.
Logically, vSphere 5.5 comprises three layers: virtualization, management, and
interface layers.
The Virtualization layer of vSphere 5.5 includes two service types:

Infrastructure Services such as compute, storage, and network services abstract,
aggregate, and allocate hardware or infrastructure resources. Examples include
but are not limited to VMFS and the Distributed Switch.
Application Services are the set of services provided to ensure availability,
security, and scalability for applications. Examples include but are not limited to
VMware vSphere High Availability (HA) and VMware Fault Tolerance (FT).

The Management layer of vSphere 5.5 consists of the vCenter Server, which acts as a
central point for configuring, provisioning, and managing virtualized IT environments.


The Interface layer of vSphere 5.5 comprises clients that allow a user to access the
vSphere Data Center, for example, the vSphere Client and vSphere Web Client.


vSphere 5.5 Virtualization Layer


Next, let's discuss the vSphere 5.5 Virtualization Layer.
vSphere 5.5 virtualizes and aggregates resources including servers, storage, and
networks and presents a uniform set of elements in the virtual environment. With
vSphere 5.5, you can manage IT resources like a shared utility and dynamically
provision resources to different business units and projects.
The vSphere 5.5 Virtual Data Center consists of:

Computing and memory resources called hosts, clusters, and resource pools.
Storage resources called datastores and datastore clusters.
Networking resources called standard virtual switches and distributed virtual
switches.
vSphere Distributed Services such as vSphere vMotion, vSphere Storage
vMotion, vSphere DRS, vSphere Storage DRS, Storage I/O Control, VMware HA,
and FT that enable efficient and automated resource management and high
availability for virtual machines.
And virtual machines.
These features are discussed in Module 2.


Physical Topology of a vSphere 5.5 Data Center


A typical vSphere 5.5 datacenter consists of basic physical building blocks such as x86
computing servers, storage networks and arrays, IP networks, a management server,
and desktop clients. It includes the following components:

Compute Servers: The computing servers are industry standard x86 servers that
run ESXi 5.5 on the bare metal. The ESXi 5.5 software provides resources for
and runs the virtual machines.
Storage Networks and Arrays: Fibre Channel Storage Area Network (FC SAN)
arrays, iSCSI (Internet Small Computer System Interface) SAN arrays, and
Network Attached Storage (NAS) arrays are widely used storage technologies
supported by vSphere 5.5 to meet different datacenter storage needs.
IP Networks: Each compute server can have multiple physical network adapters
to provide high bandwidth and reliable networking to the entire vSphere
datacenter.
vCenter Server: vCenter Server provides a single point of control to the
datacenter. It provides essential datacenter services, such as access control,
performance monitoring, and configuration. It unifies the resources from the
individual computing servers to be shared among virtual machines in the entire
datacenter.
Management Clients: vSphere 5.5 provides several interfaces such as vSphere
Client and vSphere Web Client for datacenter management and virtual machine
access.


Introduction to vSOM
The vSphere management market is extremely large. More than 50 percent of physical
servers have been virtualized, and more than 80 percent of virtualized environments are
using vSphere.
That 80 percent adds up to about 25 million unmanaged hosts.
This is a massive opportunity for vSphere with Operations Management.
This system combines the benefits of the world's best virtualization platform, vSphere,
with the functionality of vCenter Operations Manager Standard Edition. vCenter
Operations Manager delivers more value to customers through operational insight into
the virtual environment for monitoring and performance, as well as optimized capacity
management.
vCenter Operations Manager is an integral part of vSphere with Operations
Management. It is available as part of the Standard, Enterprise, and Enterprise Plus
editions of vSphere with Operations Management.
Now let's explore vCenter Operations Manager in more detail.


vSphere with Operations Manager Overview


In today's complex environments, operations management personnel need
management tools to enable the journey towards the private cloud, self-service, and IT
as a service. While they are being pushed to raise availability and performance,
organizations need to reduce cost and complexity.


vSphere with Operations Manager Overview


A classic reactive approach (monitor, isolate, and remediate after issues occur) no
longer meets today's needs.


vSphere with Operations Manager Overview


Modern operations management solutions need a proactive approach to reduce the
number of false alerts, lower incidents, raise visibility and increase control over the
environment.


vSphere with Operations Manager Overview


Let's take a look at how this affects a current performance issue. A performance issue
can be caused by a problem in a vApp, a datastore, or network I/O. Alternatively, a
VMware vSphere cluster itself might be causing poor performance. This implies there
are dozens, or even hundreds, of metrics to analyze.


vSphere with Operations Manager Overview


By using its patented analytics engine, vCenter Operations Manager gives the
operations administrator the ability to combine all these metrics into a single view in the
easy-to-use vCenter Operations Manager dashboard. With the help of this consolidated
information, administrators can use Smart Alerts to reduce the number of false alerts.


vSphere with Operations Manager Overview


VMware's approach to management helps administrators become proactive rather
than reactive to issues. Administrators can identify many potential issues ahead of
time with planning, optimization, and automation. Based on this modern toolkit,
administrators can fulfill the demand of the organization's CIO to improve availability
and performance while reducing cost and complexity.


vCenter Operations Manager: Quick Facts


In virtual and cloud environments, the relationships between performance, capacity,
costs and configuration become intricately linked.
Configurations are fluid, while capacity is shared and sourced from many places, such
as multiple providers, infrastructure tiers, and so on. All of these moving parts can
impact performance.
This means that customers need visibility across the system and analytics to figure out
what is important from the torrent of data produced. All customers benefit from a more
integrated, automated approach to operations management.
Using vSphere with Operations Manager, the user gets an integrated solution, through
management dashboards and smart alerts, that allows proactive management for
day-to-day operations and support.


vCenter Operations Manager: Quick Facts


A process of continuous monitoring and data analysis helps to identify performance
problems, supports automated root-cause analysis, and delivers information for
capacity and efficiency optimization.


vCenter Operations Manager: Quick Facts


There are other vendor solutions, but vCenter Operations Manager stands apart.
vCenter Operations Manager differs from other vendor solutions through patented
performance analytics.
These include self-learning of normal behavior; service health baseline; trending; and
smart alerts of impending performance degradation.
vCenter Operations Manager provides automated capacity planning and analysis:
designed for vSphere and built for the cloud.


vCenter Operations Manager 5.7: vApp Architecture


The vCenter Operations Manager vApp consists of two virtual machines. Both of these
virtual machines are auto-connected through OpenVPN, which delivers a highly
secured data channel.
The Analytics virtual machine is responsible for collecting data from vCenter Server,
vCenter Configuration Manager, and third party data sources such as metrics, topology
and change events. This raw data is stored in its scalable File System Database
(FSDB).
The analytics engines for capacity and performance periodically process this raw data
and store the results in their respective Postgres or FSDB databases.
Users can access the results of the analytics, in the form of badges and scores, through
the WebApps of the UI virtual machine.
Before deploying the vCenter Operations Manager vApp, you must be aware of the
requirements for both virtual machines. You need to take into account the environment
size, landscape and complexity.
These requirements differ with the total amount of monitored resources and collected
metrics, which influence the amount of vCPU, memory and Disk Storage required.
The Analytics virtual machine also requires a certain amount of IOPS.
In larger environments, collecting all metrics from one or more vCenter Servers might
generate performance issues in vCenter Operations Manager. vCenter Operations
Manager 5.7 includes the new Metrics Profile feature that allows a subset to be chosen
from the metrics collected from vCenter Server.
By default, this feature is set to Full Profile, meaning that all metrics from all registered
vCenter Servers are collected. The Balanced Profile setting ensures that only the most
vital metrics from vCenter Server are collected.
The Full Profile allows for 5 million metrics to be collected and the Balanced Profile
allows for 2.2 million metrics.
For larger deployments, you may need to add additional disks to the vApp.
vCenter Operations Manager is only compatible with certain web browsers and vCenter
Server versions.
For vApp compatibility and requirements you should consult the vApp Deployment and
Configuration Guide.


vCenter Operations Manager 5.7: High Level Architecture


The vCenter Operations Manager vApp collects data from many different sources such
as VMware vCenter Server, VMware vCenter Configuration Manager, or VMware
vCloud Director.
The vCenter Operations Manager Analytics virtual machine processes the collected
data, and presents the results through the UI virtual machine.
Possible user interfaces are the vCenter Operations Manager vSphere UI and the
vCenter Operations Manager Custom UI, which is only available in the Advanced and
Enterprise editions.
vCenter Operations Manager also features an Admin UI to perform administrative tasks.
As discussed previously, the monitored resources and collected metrics require certain
computing resources. These should be taken into account when deploying the vApp.
vCenter Operations Manager is designed as an enterprise solution, so planning and
preparing your environment is critical to successful deployment.
Environment size, landscape, and complexity need to be taken into account.
The vCenter Operations Manager Architecture needs to take into account how large the
environment is, including the numbers of applications, data sources, resources and
metrics, the physical environment distribution and the number of users.
You also need to know which specific architectural and service level requirements must
be met, including security, availability, and accessibility.


Learn More: vSOM Training


To learn more about vCenter Operations Manager, visit the VMware Partner University
Site or the VMware Partner Mobile Knowledge Portal for iPad or Android Devices.



Module Summary
This concludes module 1, vSphere Overview.
Now that you have completed this module, you should be able to:

Provide an overview of vSphere as part of VMware's Vision and Cloud
Infrastructure Solution
Describe the physical and virtual topologies of a vSphere 5.5 Data Center
Provide an overview of vCenter Operations Manager
Describe the vApp architecture for vCenter Operations Manager


Module 2: vSphere Infrastructure and Hypervisor Components

vSphere Infrastructure and Hypervisor Components


This is module 2, vSphere Infrastructure and Hypervisor Components.
These are the topics that will be covered in this module.


Module 2 Objectives
At the end of this module you will be able to:

Identify the key features of vSphere 5.5, describing the key capabilities and
identifying the key value propositions of each one.
Identify any license level restrictions for each feature.


vSphere Distributed Services vSphere vMotion


Let's discuss Distributed Services. vMotion, Storage vMotion, VMware HA, FT, DRS,
DPM, and Replication are distributed services that enable efficient and automated
resource management and high availability for virtual machines.
vMotion enables the migration of live virtual machines from one physical server to
another without service interruption. This live migration capability allows virtual
machines to move from a heavily loaded server to a lightly loaded one. vMotion is
discussed in Course 3.


vSphere Distributed Services vSphere Storage vMotion


Storage vMotion enables live migration of a virtual machine's storage to a new datastore
with no downtime. Extending the vMotion technology to storage helps the vSphere
administrator to leverage storage tiering, perform tuning and balancing, and control
capacity with no application downtime.
Storage vMotion copies disk blocks between the source and destination, eliminating the
need for the iterative pre-copy phase used by the Changed Block Tracking (CBT)
method in earlier versions of vSphere. With I/O mirroring, a single-pass copy of
the disk blocks from the source to the destination is performed. I/O mirroring ensures
that any newly changed blocks in the source are mirrored at the destination. There is
also a block-level bitmap that identifies hot and cold blocks of the disk, or whether the
data in a given block is already mirrored in the destination disk.
Storage vMotion is discussed in Course 3.
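The single-pass copy with write mirroring and the per-block bitmap described above can be illustrated with a toy simulation. The class and method names below are hypothetical stand-ins for illustration, not VMware's implementation:

```python
class MirroredMigration:
    """Toy model of a single-pass block copy with I/O mirroring."""

    def __init__(self, source_blocks):
        self.source = list(source_blocks)
        self.dest = [None] * len(self.source)
        self.mirrored = [False] * len(self.source)  # per-block bitmap

    def guest_write(self, index, data):
        # Guest I/O during migration: always update the source; if the
        # block was already copied, mirror the write to the destination.
        self.source[index] = data
        if self.mirrored[index]:
            self.dest[index] = data

    def copy_block(self, index):
        self.dest[index] = self.source[index]
        self.mirrored[index] = True

    def copy_pass(self):
        # One pass over the disk; no iterative re-copy is needed because
        # writes behind the copy cursor are mirrored as they happen.
        for index in range(len(self.source)):
            self.copy_block(index)
```

Interleaving guest writes with the copy pass shows why a single pass suffices: writes to already-copied blocks are mirrored immediately, and writes to not-yet-copied blocks are picked up when the cursor reaches them.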


vSphere Distributed Services vSphere High Availability


vSphere High Availability (HA) provides easy-to-use, cost effective high availability for
applications running in virtual machines.
In the event of physical server failure, the affected virtual machines are restarted on
other production servers which have spare capacity.
In the case of operating system failure, vSphere HA restarts the affected virtual machine
on the same physical server.


vSphere Distributed Services vSphere High Availability


vSphere App HA is a plug-in to the vSphere Web Client. This plug-in allows you to
define high availability for the applications that are running on the virtual machines in
your environment, reducing application downtime.
vSphere HA and App HA are discussed in Course 2.


vSphere Distributed Services vSphere Fault Tolerance


Fault Tolerance (FT) provides continuous availability for applications in the event of
server failures by creating a live shadow instance of a virtual machine that is in virtual
lockstep with the primary instance.
The Secondary virtual machine can take over execution at any point without service
interruption.
By allowing instantaneous failover between the two instances in the event of hardware
failure, FT eliminates even the smallest chance of data loss or disruption.
Fault Tolerance is discussed in Course 2.


vSphere Distributed Services vSphere DRS


Distributed Resource Scheduler (DRS) helps you manage a cluster of physical hosts as
a single compute resource by balancing CPU and memory workload across the physical
hosts.
DRS uses vMotion to migrate virtual machines to other hosts as necessary.
When you add a new physical server to a cluster, DRS enables virtual machines to
immediately take advantage of the new resources because it distributes the running
virtual machines.
DRS is discussed in Course 2.
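As a rough sketch of the idea (DRS itself weighs many more metrics, costs, and placement constraints), a balancer in this spirit recommends a migration when host utilization diverges beyond a threshold. The function name and threshold value are illustrative assumptions:

```python
def suggest_migration(host_cpu_util, threshold=0.15):
    """Return (source_host, target_host) if the spread between the
    most- and least-loaded hosts exceeds the threshold, else None.

    host_cpu_util maps host name -> utilization fraction (0.0 - 1.0).
    """
    busiest = max(host_cpu_util, key=host_cpu_util.get)
    idlest = min(host_cpu_util, key=host_cpu_util.get)
    if host_cpu_util[busiest] - host_cpu_util[idlest] > threshold:
        # DRS would then pick a VM on the busiest host and vMotion it.
        return busiest, idlest
    return None
```

For example, a cluster reporting `{"esx1": 0.9, "esx2": 0.4}` would trigger a recommendation to move load from esx1 to esx2, while a nearly balanced cluster would not.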


vSphere Distributed Services vSphere Storage DRS


Storage DRS (SDRS) aggregates the storage resources of several datastores into a
single datastore cluster to simplify storage management at scale.
During virtual machine provisioning, Storage DRS provides intelligent virtual machine
placement based on the I/O load and available storage capacity of the datastores.
Storage DRS performs ongoing load balancing between datastores to ensure space
and I/O bottlenecks are avoided, as per pre-defined rules that reflect business needs
and changing priorities.
Storage DRS is discussed in Course 5.


vSphere Distributed Services vSphere DPM


Distributed Power Management (DPM) continuously optimizes power consumption in
the data center.
When virtual machines in a DRS cluster need fewer resources, such as at night and on
weekends, DPM consolidates workloads onto fewer servers and powers off the rest to
reduce power consumption.
When virtual machine resource requirements increase, DPM brings powered-down
hosts back online to ensure service levels are met.
DPM is discussed in Course 2.
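The consolidation decision can be sketched as a greedy calculation. This is a simplified stand-in for DPM's actual logic; the function name and headroom fraction are assumptions for illustration:

```python
def hosts_to_power_off(capacities, total_demand, usable_fraction=0.7):
    """Greedy sketch: power off the smallest hosts while the remaining
    usable capacity still covers current demand.

    capacities: list of per-host capacity (e.g. MHz).
    usable_fraction: leaves headroom so a demand spike does not
    immediately force hosts back on.
    """
    powered_on = sorted(capacities, reverse=True)
    off = []
    while len(powered_on) > 1:
        smallest = powered_on[-1]
        remaining = sum(powered_on) - smallest
        if remaining * usable_fraction >= total_demand:
            off.append(powered_on.pop())
        else:
            break
    return off
```

With three 100-unit hosts and a demand of 90, one host can be powered off (200 × 0.7 = 140 still covers demand), matching the night-and-weekend consolidation described above; at a demand of 150, none can.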


vSphere Replication
vSphere Replication replicates powered-on virtual machines over the network from one
vSphere host to another without needing storage array-based native replication.
vSphere Replication reduces bandwidth needs, eliminates storage lock-in, and allows
you to build flexible disaster recovery configurations.
This proprietary replication engine copies only changed blocks to the recovery site,
ensuring both lower bandwidth utilization and more aggressive recovery point objectives
compared with manual full system copies of virtual machines.
Replication is discussed in Course 3.


vSphere Networking Network Architecture


The virtual environment provides similar networking elements as the physical world,
such as virtual network interface cards, vSphere Distributed Switches (VDS), distributed
port groups, vSphere Standard Switches (VSS), and port groups.
Like a physical machine, each virtual machine has its own virtual NIC called a vNIC.
The operating system and applications talk to the vNIC through a standard device driver
or a VMware optimized device driver just as though the vNIC is a physical NIC.
To the outside world, the vNIC has its own MAC address and one or more IP addresses,
and it responds to the standard Ethernet protocol exactly as a physical NIC would. In fact,
an outside agent can determine that it is communicating with a virtual machine only if it
checks the vendor identifier (the first three bytes of the six-byte MAC address).
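For example, such a check can compare a MAC address's leading bytes against OUIs registered to VMware. The prefixes below are commonly cited VMware assignments; consult the IEEE OUI registry for the authoritative list:

```python
# OUIs (first 3 bytes of the 6-byte MAC) registered to VMware.
# Illustrative list only; the IEEE registry is authoritative.
VMWARE_OUIS = {"00:50:56", "00:0c:29", "00:05:69"}

def looks_like_vmware_nic(mac):
    """Return True if the MAC's vendor identifier matches a VMware OUI."""
    oui = mac.lower().replace("-", ":")[:8]  # normalize, keep first 3 bytes
    return oui in VMWARE_OUIS
```

A MAC such as `00:50:56:AB:CD:EF` would match, whereas a prefix assigned to another vendor would not.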
A virtual switch, or vSwitch, works like a layer-2 physical switch. With VSS, each host
maintains its own virtual switch configuration while in a VDS, a single virtual switch
configuration spans many hosts.
Physical Ethernet adapters connected to physical switches provide an uplink for
vSwitches.


vSphere Networking vSphere Standard Switches


vSphere Standard Switches allow virtual machines on the same vSphere host to
communicate with each other using the same protocols used with physical switches.
The virtual switch emulates a traditional physical Ethernet network switch to the extent
that it forwards frames at the data link layer.
Standard switches are discussed in course 4.


vSphere Networking vSphere Distributed Switches


The vSphere Distributed Switch (VDS) simplifies virtual machine networking by enabling
you to set up virtual machine access switching for your entire data center from a
centralized interface.
VDS provides simplified virtual machine network configuration, enhanced network
monitoring and troubleshooting capabilities and support for advanced vSphere
networking features.
Distributed Switches are discussed in course 4.


Network I/O Control: An Overview


Network I/O Control enables the convergence of diverse workloads onto a single
networking pipe. It gives the vSphere administrator controls, in the form of limits and
shares parameters, to ensure predictable network performance when multiple traffic
types contend for the same physical network resources.
Network I/O Control is discussed in Course 4.


vSphere Storage Architecture


The VMware vSphere Storage Architecture consists of layers of abstraction that hide
and manage the complexity and differences among physical storage subsystems.
To the applications and guest operating systems inside each virtual machine, the
storage subsystem appears as a virtual SCSI controller connected to one or more
virtual SCSI disks.
These controllers are the only types of SCSI controllers that a virtual machine can see
and access.
The virtual SCSI disks are provisioned from datastore elements in the data center.
The guest virtual machine is not exposed to Fibre Channel SAN, iSCSI SAN, direct
attached storage, and NAS.
Each datastore is a physical VMFS volume on a storage device. NAS datastores are an
NFS volume with VMFS characteristics.
Datastores can span multiple physical storage subsystems. VMFS also supports raw
device mapping (RDM). RDM provides a mechanism for a virtual machine to have direct
access to a LUN on the physical storage subsystem (Fibre Channel or iSCSI only).
vSphere Storage Architecture is discussed in Course 5.


Virtual Machine File System


Virtual Machine File System (VMFS) is a high-performance cluster file system that
provides storage virtualization optimized for virtual machines.
VMFS is the default storage management interface for block-based disk storage (local
and SAN attached).
VMFS allows multiple instances of VMware vSphere servers to access shared virtual
machine storage concurrently.
VMFS is discussed in Course 5.


Virtual Disks
When you create a virtual machine, a certain amount of storage space on a datastore is
provisioned, or allocated, to the virtual disk files. Each of the three vSphere hosts has
two virtual machines running on it.
The lines connecting them to the disk icons of the virtual machine disks (VMDKs) are
logical representations of their allocation from the larger VMFS volume, which is made
up of one large logical unit number (LUN).
A virtual machine detects the VMDK as a local SCSI target.
The virtual disks are really just files on the VMFS volume, shown in the illustration as a
dashed oval.
Virtual Disks are discussed in Course 3 and Course 5.


Storage I/O Control


vSphere Storage I/O Control (SIOC) is used to provide I/O prioritization of virtual
machines running on a group of VMware vSphere hosts that have access to a shared
storage pool.
It extends the familiar constructs of shares and limits, which exist for CPU and memory,
to address storage utilization through a dynamic allocation of I/O capacity across a
cluster of vSphere hosts.
Configure rules and policies to specify the business priority of each virtual machine.
When I/O congestion is detected, Storage I/O Control dynamically allocates the
available I/O resources to virtual machines according to your rules, improving service
levels for critical applications and allowing you to virtualize more types of workloads,
including I/O-intensive applications.
SIOC is discussed in Course 5.
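A simplified model of shares-proportional allocation under congestion can make the mechanism concrete. The function below is hypothetical, and it simplifies by not redistributing IOPS freed up when a VM hits its limit:

```python
def allocate_iops(available_iops, shares, limits=None):
    """Divide available IOPS among VMs in proportion to their shares,
    then cap any VM at its optional limit.

    shares: dict vm_name -> share count
    limits: dict vm_name -> maximum IOPS (optional)
    """
    limits = limits or {}
    total_shares = sum(shares.values())
    allocation = {}
    for vm, vm_shares in shares.items():
        grant = available_iops * vm_shares / total_shares
        allocation[vm] = min(grant, limits.get(vm, float("inf")))
    return allocation
```

For instance, with 1,000 IOPS available and shares of 2000/1000/1000, the high-priority VM receives 500 IOPS while the other two receive 250 each, mirroring how shares express business priority during contention.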


vSphere Hypervisor 5.5 Architecture


VMkernel is a POSIX-like OS developed by VMware and provides certain functionalities
similar to that found in other OSs, such as process creation and control, signals, file
system, and process threads. It is designed specifically to support running multiple
virtual machines and provides core functionalities such as resource scheduling, I/O
stacks and device drivers.
The key component of each ESXi host is a process called the VMM (virtual machine
monitor). One VMM runs in the VMkernel for each powered-on virtual machine. When a
virtual machine starts running,
the control transfers to the VMM, which in turn begins executing instructions from the
virtual machine. The VMkernel sets the system state so that the VMM runs directly on
the hardware.
The devices of a virtual machine are a collection of virtual hardware that includes the
devices shown. The ESXi host provides a base x86 platform and you choose devices to
install on that platform. The base virtual machine provides everything needed for the
system compliance with x86 standards from the motherboard up.
VMware virtual machines contain a standard set of hardware no matter what platform
you are running. Virtual device drivers allow portability without having to reconfigure the
OS of each virtual machine.
VMware Tools is a suite of utilities that enhances the performance of the virtual
machine's guest OS and improves management of the virtual machine. Installing
VMware Tools in the guest OS is vital. Although the guest OS can run without VMware
Tools, you lose important functionality and convenience.
ESXi uses five memory management mechanisms (page sharing, ballooning, memory
compression, swap to host cache, and regular swapping) to dynamically reduce the
amount of physical memory required for each virtual machine.
All of these topics are discussed in Course 3.
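The first of these mechanisms, page sharing, can be sketched as content-based deduplication. This is a toy model with hypothetical names; ESXi additionally byte-compares candidate pages before sharing and uses copy-on-write, which the sketch omits:

```python
import hashlib

def share_pages(pages):
    """Toy model of transparent page sharing: identical pages are
    backed by a single stored copy, keyed by content hash."""
    backing_store = {}   # hash -> single physical copy
    page_table = []      # per-page reference into the backing store
    for page in pages:
        digest = hashlib.sha256(page).hexdigest()
        backing_store.setdefault(digest, page)  # store first copy only
        page_table.append(digest)
    return page_table, backing_store
```

Three guest pages of which two are identical would be backed by only two physical copies, which is the memory saving the mechanism provides.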


Licensing Requirements for vSphere features


This table shows the license levels required by each of the features discussed. Take a
few moments to review the table.


Module Summary
This concludes module 2, vSphere Infrastructure and Hypervisor Components.
Now that you have completed this module, you should be able to:

Identify the key features of vSphere 5.5 describing the key capabilities and
identifying the key value propositions of each one.
Identify any license level restrictions for each feature.


Module 3: Mapping vSphere Capabilities to Solutions

Mapping vSphere Capabilities to Solutions


Welcome to Module 3, Mapping vSphere Capabilities to Solutions. These are the topics
that will be covered in this module.


Module Overview
By the time you have completed this module, you will be able to select vSphere
components to meet solution requirements by identifying the capabilities and benefits of
each solution component in order to present its value proposition.
The module presents a series of customer scenarios that define specific requirements
and constraints.
You will be asked to select vSphere components to meet solution requirements by
identifying the capabilities and benefits of each solution component in order to present
its value proposition.


OPEX Savings Scenario


The Clarke County Library has decided to overhaul its server infrastructure in order to
improve supportability and hopefully reduce ongoing expenditure.
They have a highly variable server workload that peaks during the 3 - 8 pm period on
weekdays and all day (9 am to 6 pm) on Saturdays; outside of those periods, server
load on all metrics is typically less than 25% of the peak-period average.
While out-of-hours load is substantially lower, they run a number of critical systems that
have to achieve four-nines (99.99%) service uptime.
Their CapEx request needs to demonstrate that any investment will enable substantial
OpEx savings.
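The four-nines requirement translates into a concrete annual downtime budget, which can be computed directly. The helper function is hypothetical and assumes a 365-day year:

```python
def downtime_budget_minutes(availability_pct, period_hours=365 * 24):
    """Minutes of allowable downtime per period at a given availability."""
    return (1 - availability_pct / 100) * period_hours * 60

# 99.99% availability ("four nines") over a 365-day year leaves
# roughly 52.6 minutes of total downtime per year.
budget = downtime_budget_minutes(99.99)
```

In the library's case, this means any power-saving design still has to keep the critical systems' cumulative outages under about 52.6 minutes per year.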


OPEX Savings Scenario


What is the correct answer?
Which advanced vSphere service do they need to meet the library's requirements?


OPEX Savings Scenario


Why is DPM the correct vSphere service to meet their requirements?


Shared Access Optimization Scenario


Mornington Design has an existing vSphere environment running with a vSphere
standard license. All of the VMs are provisioned from a single iSCSI SAN.
Their business-critical data warehousing and Exchange e-mail servers are experiencing
variable performance degradation during peak hours due to contention with other
non-critical virtual machines whose workloads can temporarily stress the overall SAN
performance.
While they could implement a new independent SAN to isolate their business-critical
virtual machines, they are looking for a mechanism to optimize the shared access to the
datastores during peak times when they suffer from contention.


Shared Access Optimization Scenario


What is the correct answer?
Which advanced vSphere service do they need at Mornington Design to meet their
requirements?


Shared Access Optimization Scenario


Why is Storage I/O Control (SIOC) the correct vSphere service for Mornington Design?


Migrating to 10Gb Ethernet Scenario


Bulldog Clothing have decided to upgrade their existing vSphere cluster hardware with
newer servers and want to migrate all of their core networks over to 10Gb Ethernet at
the same time.
As they move from 1Gb to 10Gb, they want to move away from their former policy of
dedicated individual network uplinks to specific services.
They want a solution that will help them aggregate diverse workloads into the reduced
number of 10Gb Ethernet uplink adapters that their new hardware will be outfitted with.


Migrating to 10Gb Ethernet Scenario


Which advanced vSphere feature does Bulldog Clothing need to meet their
requirements?


Migrating to 10Gb Ethernet Scenario


Why is Network I/O Control the correct vSphere feature to meet their requirements?


Data Recovery (DR) Scenario


Alleyn & Associates are an accountancy firm with a number of small satellite offices with
20-30 staff each, and a centralized head office where IT support and the core
infrastructure are located.
They are already using small vSphere clusters with 3 hosts in each office to provide all
services, and staff work on Virtual Desktops.
There is no consistent standard for shared storage: some sites use NFS arrays
and others use FC storage.
A recent flooding incident resulted in significant downtime in one satellite office as they
do not have an effective disaster recovery (DR) process for their remote sites.
They would like to use storage array replication for DR but the diverse range of storage
solutions they use makes the cost of this prohibitive.


Data Recovery Scenario


Which advanced vSphere feature do Alleyn & Associates need to meet their
requirements?


Data Recovery Scenario


Why is vSphere Replication the correct vSphere feature for Alleyn & Associates?


Business Critical Systems Scenario


Catskills Shipping Inc. provides an online order fulfillment service for a range of
component businesses.
Their front-end order handling system is business critical and they cannot tolerate any
service downtime at all.
They want to move from a physical infrastructure to virtual in order to improve hardware
maintainability, but this will require them to abandon their current high availability
clustering solution as it is not supported in virtual environments.


Business Critical Systems Scenario


Which advanced vSphere feature do Catskills Shipping need to meet their
requirements?


Business Critical Systems Scenario


Why is vSphere Fault Tolerance the correct vSphere feature for Catskills Shipping?


Course Review
This concludes the course vSphere Overview.
Now that you have finished this course, you should be able to:
Provide an overview of vSphere as part of VMware's Vision and Cloud Infrastructure
Solution,
Describe the physical and virtual topologies of a vSphere 5.5 Data Center and explain
the relationship between the physical components and the vSphere Virtual
Infrastructure,
Describe the features and capabilities of vSphere and explain their key benefits for a
customer,
Describe the vSphere Hypervisor Architecture and explain its key features, capabilities
and benefits, and
Map vSphere Components to solution benefits and identify value propositions.


Course 2

VTSP V5.5 Course 2: VMware vSphere: vCenter


Welcome to the VTSP 5.5 Course 2: VMware vSphere: vCenter.
There are three modules in this course as shown here.


Course Objectives
After you complete this course, you should be able to:
Explain the components and features of vCenter
Communicate design choices to facilitate the selection of the correct vCenter solution
configuration
Explore the customer's requirements to define any dependencies that those
requirements will create.
Explain the key features and benefits of the distributed services to illustrate the impact
those features will have on a final design.


Module 1: vCenter Overview - Features and Topology

Module 1: vCenter Overview - Features and Topology


This is Module 1: vCenter Overview - Features and Topology.
These are the topics that will be covered in this module.


Module Objectives
After you complete this module, you should be able to:

Describe vCenter's components and infrastructure requirements.


Present the management clients: the VI Client and the Web Client.
Provide an overview of their interfaces, with specific emphasis on features that
are only available via the Web Client.


What is VMware vCenter?


The first section discusses the capabilities and components of vCenter and the various
ways of accessing it.
You will start by looking at what vCenter is and what it enables the vSphere
administrator to do in the virtualized infrastructure.
vCenter Server is the primary management tool for vSphere administrators. It provides
a single point of control for all the components in the virtual data center.
vCenter Server provides the core management functionalities and services, which are
required by the vSphere administrator to perform basic infrastructure operations.
These operations include configuring new ESXi hosts, configuring storage, network, and
the virtual hardware characteristics of various infrastructure components.
Using vCenter Server, you can manage the storage and resource requirements for each
host machine.
Infrastructure operations also include creating or importing new virtual machines and
monitoring, reporting, and alerting on performance characteristics of guest operating
systems, virtual machines and the underlying hosts.
Additionally, infrastructure operations include managing rights, permissions, and roles at
various levels of the virtual infrastructure.



vCenter Server is able to unify resources from individual ESXi hosts, enabling them to
be shared among virtual machines in the entire data center.
This is achieved by assigning resources to the virtual machines within a managed
cluster of hosts, based on the policies set by the system administrator.
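As a rough illustration of that policy-driven sharing, here is a simplified proportional-share calculation. This is a sketch under stated assumptions, not vSphere's actual scheduler: it models pure shares only and ignores reservations and limits:

```python
# Simplified model of dividing a pooled cluster resource among VMs
# under contention, in proportion to administrator-assigned shares.
def entitlements(shares: dict[str, int], capacity_mhz: int) -> dict[str, float]:
    """Each VM's CPU entitlement = its shares / total shares * capacity."""
    total = sum(shares.values())
    return {vm: capacity_mhz * s / total for vm, s in shares.items()}

# Three VMs competing for a 12,000 MHz pool: "db" holds half the shares.
alloc = entitlements({"db": 2000, "web": 1000, "batch": 1000}, 12_000)
print(alloc)  # db: 6000.0, web: 3000.0, batch: 3000.0
```

The point of the model is that entitlements are relative: doubling one VM's shares shifts capacity toward it only when the pool is actually contended.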


vCenter Installable and vCenter Appliance


vCenter Server is available in two options: vCenter Server Appliance and vCenter
Server Installable.
vCenter Server Appliance is a pre-configured SUSE Enterprise Linux based virtual
appliance, which contains VMware's vCenter Management Server.
It is deployed as an OVF Template.
vCenter Server Installable is a Windows-based installation option, supported on
Windows Server 2008 R2 (64-bit) and Windows Server 2012.
Service pack requirements should be verified before installation. It can be installed on
either a physical or virtual machine.
There are many more differences between the two, which you need to know to ensure
you make the appropriate choice for your environment.
Previous versions of the vCenter Server appliance were limited, when using the
embedded database, to environments of up to 5 ESXi hosts and 50 virtual machines.
New in vSphere 5.5, the vCenter Server Appliance can support environments up to 100
ESXi Hosts and 3000 virtual machines.
However, both vCenter options can make use of an embedded database.



The vCenter Server Appliance uses a vPostgres database, while the vCenter Server
Installable uses a Microsoft SQL Express database, which is limited to small scale
deployments of up to 5 ESXi hosts and 50 virtual machines.
For larger environments, external databases are the correct solution.
The vCenter Server Appliance can only use an external Oracle database, whereas the
vCenter Server Installable version can be used with either a Microsoft SQL or Oracle
database.
Also new in vSphere 5.5 is support for clustering of the vCenter Server Database.
Auto Deploy, Syslog Collector and ESXi Dump Collector are separate installations on
the vCenter Server Installable. These are pre-installed in the vCenter Server Appliance.
Syslog Collector and ESXi Dump Collector must be registered as a plug-in in vCenter
Server.
The vSphere Web Client and Single Sign-On are installed as part of the vCenter
Server simple installation or on a separate host for multiple local site instances. They
are pre-installed in the vCenter Server Appliance.
For scripting and automation of the data center, vSphere CLI and PowerCLI are
separate installations for the vCenter Server Installable. They cannot be installed in the
vCenter Server Appliance.
vCenter Server Installable supports IPv4 and IPv6. The vCenter Server Appliance only
supports IPv4. Linked Mode is not compatible with the vCenter Server Appliance and
vCenter Heartbeat is not compatible with the vCenter Server Appliance.
vCenter Update Manager can be installed on the same server as vCenter Server
Installable, but cannot be installed in the vCenter Server Appliance.


vCenter's Components and Connectivity


Now let's review a whiteboard session looking at vCenter's components and
connectivity.
We will discuss what connects to, or is managed by, vCenter, including: Hosts,
Directory Service (customers may know this as Inventory Service) and Single Sign On
(SSO) Server, Clients and Network Ports.
1. vCenter Server is comprised of a number of interlinked components and interfaces to
other services and infrastructure. We will now describe each of the key parts and the
role that they play in vCenter.
2. vCenter Server is heavily dependent on the database that is used to store
configuration and statistical data.
While there are options for environments that make use of integrated databases, these
are only for small installations.
In most environments the database will be provided by a separate database server or
servers. It is critically important that databases are sized and prepared before installing
vCenter Server.
It is important to note that only certain databases are supported and this selection may
influence the vCenter choice to be implemented.



We will see the specific database types that are supported for the vCenter Server
Appliance and installable versions later in this module.
3. There are four parts to vCenter server installations. These are:
vCenter Single Sign-On, Web Client, vCenter Inventory Service and vCenter Server
(Core).
4. VMware vCenter Single Sign-On offers administrators a deeper level of
authentication services that enable VMware solutions to trust each other.
Single Sign-On allows VMware solutions to utilize multiple directory services and is no
longer limited to Microsoft Active Directory.
It simplifies the management of multi-site and multi-installation environments by
allowing users to move seamlessly between multiple environments without reauthentication.
A Single Sign-On Server can be installed separately and can support multiple vCenter
installations.
5. VMware vCenter Inventory Service optimizes client server communications by
reducing the number of client requests on vCenter Server.
It is now a separate independent component that can be off-loaded to a separate
server.
This can be used to reduce traffic and improve client response times. It also enables
users to create and add inventory object-level tags.
These are then used to organize and provide quicker retrieval when performing
inventory searches.
6. Core Services are the basic management services for a virtual Data Center.
These include virtual machine provisioning; statistics and logging; host and virtual
machine configuration; alarms and event management; and task scheduling.
Distributed services are solutions that extend VMware vSphere capabilities beyond a
single physical server.
These solutions include VMware DRS, VMware HA, and VMware vMotion. Distributed
services are configured and managed centrally from vCenter Server.
7. The vCenter API provides access to the vSphere management components.
These are the objects that you can use to manage, monitor, and control life-cycle
operations of virtual machines and other parts of the virtual infrastructure (such as Data
Centers, datastores, networks and so on).
The vCenter API provides the interface to vCenter that is used by the vCenter Clients,
third party applications, plug-ins and VMware applications.
It is available for administrators, developers and partners to integrate and automate
solutions.


8. The vSphere Web Client provides a rich application experience delivered through a
cross-platform supporting Web browser.
This surpasses the functionality of the trusted VMware vSphere Client (the VI or
Desktop Client) running on Windows.
The vSphere Web Client can be installed on the vCenter server along with other
vCenter Server components, or it can be installed as a standalone server.
9. The Single Sign-On Server must be able to communicate with your identity sources
such as Active Directory, Open LDAP and a Local Operating System.
10. The Inventory service must be able to communicate with the Single Sign-On Server,
the vCenter Server and the client.
11. The vCenter Server must be able to communicate with the ESXi hosts in order to
manage them.
12. vCenter Server must also be accessible to any systems that will require access to
the API.
13. The Web Client is accessed via a Web browser that connects to the Web Client
Server. All of these services rely heavily on DNS.


vCenter License Versions


VMware vCenter Server provides unified management for VMware vSphere
environments and is a required component of a complete VMware vSphere
deployment. One instance of vCenter Server is required to centrally manage virtual
machines and their hosts and to enable all VMware vSphere features.


vCenter License Versions


All products and feature licenses are encapsulated in 25-character license keys that you
can manage and monitor from vCenter Server. Each vCenter Server instance requires
one license key.
VMware vCenter Server is available in the following packages:
VMware vCenter Server for Essentials kits is integrated into the vSphere Essentials and
Essentials Plus kits for small office deployment. This edition is aimed at IT environments
that run 20 or fewer server workloads.
VMware vCenter Server Foundation provides centralized management for vSphere
environments with up to three VMware vSphere ESXi hosts.
VMware vCenter Server Standard is the highly scalable management server that
provides rapid provisioning, monitoring, orchestration and control of all virtual machines
in a VMware vSphere environment of any size.
All editions of vCenter Server include the following capabilities:
The management service acts as a universal hub for provisioning, monitoring and
configuring virtualized environments.
The database server stores persistent configuration data and performance
information.
The inventory service allows administrators to search the entire object inventory of
multiple VMware vCenter Servers from one place.


VMware vSphere Clients provide administrators with a feature-rich console for
accessing one or more VMware vCenter Servers simultaneously.
The VMware vCenter APIs and .NET Extensions allow integration between vCenter
Server and other tools, with support for customized plug-ins to the VMware vSphere
Client.
vCenter Single Sign-On simplifies administration by allowing users to log in once and
then access all instances or layers of vCenter without the need for further
authentication.
vCenter Orchestrator streamlines and automates key IT processes.
vCenter Server Linked Mode enables a common inventory view across multiple
instances of vCenter Server.
Advanced features such as Distributed vSwitches also require that the individual host
licenses for the hypervisors in the cluster are at the appropriate level. For example a
vSphere Enterprise Plus license will be required for all hosts if distributed vSwitches
need to be supported.


vSphere Client User Interface Options


You have several ways to access vSphere components through vSphere's range of
interface options.
The vSphere Web Client was introduced with the release of vSphere 5.0 as a new
administration tool for managing your VMware vSphere 5.x environments.
With vSphere 5.5, VMware progresses its transition to the Web Client as the primary
administration interface.
It features a new enhanced usability experience with added support for OS X. In
vSphere 5.5, all of the new vSphere features are only available when using the vSphere
Web Client interface.
The vSphere Web Client is a server application that provides a browser-based
alternative to the traditional vSphere Desktop Client.
You must use a supported Web browser to connect to the vSphere Web Client to
manage ESXi hosts through vCenter Server.
The vSphere Web Client supports almost all of the functionality included in the
Windows-based vSphere Desktop Client, such as inventory display and virtual machine
deployment and configuration.



The vSphere Desktop Client is still available for installation with vSphere 5.5. The
Desktop Client must be installed on a Windows machine with direct access to the ESXi
host or the vCenter Server systems it will be used to manage.
The interface displays slightly different options depending on the type of server to which
you are connected.
A single vCenter Server system or ESXi host can support multiple simultaneously
connected vSphere Desktop Clients.
You can use vSphere Desktop Client to monitor, manage, and control vCenter Server.
The vSphere Desktop Client does not support vCenter Single Sign-On and
communicates directly with vCenter Server and Microsoft Active Directory.
The vSphere Client is still used for vSphere Update Manager (or VUM) along with a few
solutions such as Site Recovery Manager.


vCenter Infrastructure Management Features Overview


Now that you have seen an overview of vCenter and the licensing requirements we will
look at an overview of the Infrastructure Management features and capabilities of
vCenter.


Resource Maps
vSphere administrators can use resource maps to monitor proper connectivity, which is
vital for migration operations such as VMware vSphere vMotion or vSphere Storage
vMotion.
Resource maps are also useful to verify that VMware vSphere High Availability and
VMware Distributed Resource Scheduler (DRS) cluster memberships are correct and
that host and virtual machine connectivity is valid.
A resource map is a graphical representation of the data center's topology. It visually
represents the relationships between the virtual and physical resources available in a
data center.
Preconfigured map views that are available are: Virtual Machine Resources, which
displays virtual machine-centric relationships; Host Resources, which displays
host-centric physical relationships; and vMotion Resources, which displays potential
hosts for vMotion migration.
Maps help vSphere administrators find information such as which clusters or hosts are
most densely populated, which networks are most critical, and which storage devices
are being utilized.
Resource Maps are only available using the vSphere Desktop Client.


Orchestrator
Orchestrator, or vCO, is an automation and orchestration platform that provides a
library of extensible workflows.
It enables vSphere administrators to create and execute automated, configurable
processes to manage their VMware virtual environment.
Orchestrator provides drag-and-drop automation and orchestration for the VMware
virtual environment. Orchestrator is included with vCenter.
As an example, when you create a virtual machine in your environment, you make
decisions about how that virtual machine is configured: how many network cards, how
many processors, how much memory, and so on. However, once the machine is
created, like many organizations you may have additional IT processes that need to be
applied.
Do you need to add the VM to Active Directory? Do you need to update the change
management database, customize the guest OS, or notify the VM owner or other teams
that the virtual machine is ready?
vCenter Orchestrator lets you create workflows that automate activities such as
provisioning a virtual machine, performing scheduled maintenance, initiating backups,
and many others. You can design custom automations based on vCenter Orchestrator
out-of-the-box workflows and run your automations from the workflow engine.


You can also use plug-ins and workflows published on the VMware Solution Exchange,
a community of extensible solution plug-ins, to connect to multiple VMware and
third-party applications.
Through an open and flexible plug-in architecture, VMware vCenter Orchestrator allows
you to automate server provisioning and operational tasks across both VMware and
third-party applications, such as service desks, change management and asset
management systems.
These plug-ins provide hundreds of out-of-the-box workflows to help you both
accelerate and dramatically reduce the cost of delivering IT services across your
organization.
In addition to plug-ins included with the vCenter Orchestrator, the latest plug-ins can be
found on the VMware Solution Exchange.
You need to understand the client's current IT workflow automation capabilities; if they
are already using other products for this, you will have to be prepared to research how
Orchestrator integrates with them.
To understand how Orchestrator works, it is important to understand the difference
between automation and orchestration.
Automation provides a way to perform frequently repeated processes without manual
intervention. For example, a shell, Perl, or PowerShell script that adds ESXi hosts to
vCenter Server.
On the other hand, orchestration provides a way to manage multiple automated
processes across heterogeneous systems.
An example of this would be to add ESXi hosts from a list to vCenter Server, update a
CMDB with the newly added ESXi hosts, and then send email notification.
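The distinction above can be sketched in a few lines. This is an illustration only: every function name below is a hypothetical stand-in, not a real vCenter Orchestrator or vSphere API. Each function alone represents a single automated step; the chaining function is the orchestration:

```python
# Each of these is a stand-in for one automated step ("automation").
def add_host_to_vcenter(host: str) -> str:
    return f"added {host}"             # stand-in for a vCenter API call

def update_cmdb(host: str) -> str:
    return f"CMDB updated for {host}"  # stand-in for a CMDB REST call

def notify(hosts: list[str]) -> str:
    return f"email sent for {len(hosts)} host(s)"  # stand-in for SMTP

# Chaining the steps across heterogeneous systems is "orchestration".
def onboard_hosts(hosts: list[str]) -> list[str]:
    log = []
    for h in hosts:
        log.append(add_host_to_vcenter(h))
        log.append(update_cmdb(h))
    log.append(notify(hosts))
    return log

for line in onboard_hosts(["esxi-01", "esxi-02"]):
    print(line)
```

In vCenter Orchestrator the equivalent workflow would be assembled from library elements rather than hand-written code, but the structure is the same: automated steps composed into a cross-system process.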
Orchestrator exposes every operation in the vCenter Server API, enabling the vSphere
administrator to integrate all these operations into the automated processes.
Orchestrator also enables the administrator to integrate with other management and
administration solutions through its open plug-in architecture. This enables the vSphere
administrator to capture manual and repetitive tasks for the vSphere environment and
automate them through workflows.
Orchestrator provides several benefits.
It helps vSphere administrators ensure consistency and standardization and achieve
overall compliance with existing IT policies. It also shortens the time for deployment of a
complex environment (for example, SAP) to hours instead of days. Orchestrator also
enables vSphere administrators to react faster to unplanned issues in the VMware data
center.
For example, when a virtual machine is powered off unexpectedly, the vSphere
administrator can configure options to trigger the Power-On workflow to bring the
virtual machine back online.


Alarms
The vSphere alarm infrastructure supports automating actions and sending different
types of notifications in response to certain server conditions. Many alarms exist by
default on vCenter Server systems and you can also create your own alarms. For
example, an alarm can send an alert email message when CPU usage on a specific
virtual machine exceeds 99% for more than 30 minutes.
The alarm infrastructure integrates with other server components, such as events and
performance counters.
You can set alarms for objects such as virtual machines, hosts, clusters, data centers,
datastores, networks, vNetwork Distributed Switches, distributed virtual port groups, and
vCenter Server.
Alarms have two types of triggers.
They can be triggered by either the condition or state of an object or by events occurring
to an object.
You can monitor inventory objects by setting alarms on them. Setting an alarm involves
selecting the type of inventory object to monitor, defining when and for how long the
alarm will trigger, and defining actions that will be performed as a result of the alarm
being triggered. You define alarms in the Alarm Settings dialog box.
Alarms should be configured to detect and report. Avoid overly aggressive vCenter
Server alarm settings. Each time an alarm condition is met, vCenter Server must take
an appropriate action. Too many alarms place extra load on vCenter Server which
affects system performance. Therefore identify the alarms that you need to leverage.
You can use the SMTP agent included with vCenter Server to send email notifications
to the appropriate personnel that you wish to be notified when alarms are triggered. You
can also trap event information by configuring a centralized SNMP server and/or
alternatively even run a script when the alarm triggers.
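The condition-plus-duration trigger described above (CPU above 99% for more than 30 minutes) can be sketched as follows. This is a hedged illustration of the concept, not vCenter's actual alarm engine:

```python
# Evaluate a duration-based alarm over periodic CPU samples: trigger
# only when the condition holds for longer than the configured duration.
def alarm_triggered(samples, threshold=99.0, duration_min=30, interval_min=5):
    """samples: CPU% readings taken every `interval_min` minutes."""
    needed = duration_min // interval_min  # consecutive samples required
    run = 0
    for cpu in samples:
        run = run + 1 if cpu > threshold else 0
        if run > needed:                   # held *longer* than 30 minutes
            return True
    return False

print(alarm_triggered([99.5] * 8))        # 40 min above 99% -> True
print(alarm_triggered([99.5, 80.0] * 4))  # never sustained   -> False
```

Note how the duration requirement filters out brief spikes; this is exactly why duration-qualified alarms generate fewer false alerts than instantaneous thresholds.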


Inventory Object Tagging


Tags were a new feature of vSphere 5.1. Their purpose is to allow you to add metadata
to objects. Tags allow you to bring information about your virtual infrastructure from
outside vSphere and attach it to objects inside, so that actions and decisions can be
taken on the basis of that information. To avoid conflicts between the many possible
uses of tags, tags are organized into categories.
When you create a category, you specify whether multiple tags in that category can be
assigned to a given object at one time, or whether only one tag can be assigned to an
object at a time. For example, a category called Priority might contain the tags High,
Medium, and Low, and be configured to allow only one tag in the category to be applied
to an object at a time.
You also specify whether tags in a category can be applied to all objects, or only to
specific object types, such as hosts or datastores.
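The single-tag-per-category rule can be modeled in a few lines. This is a minimal illustration of the rule, not the vSphere tagging API:

```python
# Minimal model of a tag category with single or multiple cardinality.
class TagCategory:
    def __init__(self, name, tags, multiple=False):
        self.name, self.tags, self.multiple = name, set(tags), multiple

def assign_tag(obj_tags: set, category: TagCategory, tag: str) -> set:
    if tag not in category.tags:
        raise ValueError(f"{tag} is not in category {category.name}")
    if not category.multiple:
        # Single cardinality: any existing tag from this category is replaced.
        obj_tags = obj_tags - category.tags
    return obj_tags | {tag}

priority = TagCategory("Priority", {"High", "Medium", "Low"})
vm_tags = assign_tag(set(), priority, "Low")
vm_tags = assign_tag(vm_tags, priority, "High")  # replaces "Low"
print(vm_tags)  # {'High'}
```

A category created with `multiple=True` would instead accumulate tags, which is the other behavior the text describes.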
In order to leverage the benefits of inventory tagging, it is important to identify the
categories that you wish to use in your environment during the design phase.
Because of the power of the search and advanced search function, tags can be used to
associate metadata with objects, rather than using a complex folder hierarchy which
was required in earlier versions.
The main benefit is that you can reduce the complexity of environment hierarchies as
they scale.


Tagging replaces the custom attributes functionality found in previous versions of
vCenter Server. If you have existing custom attributes, you can convert them into tags.
The Inventory Tagging feature is only available using the vSphere Web Client.


Simple and Advanced Search


As with previous VMware clients, the vSphere Web Client provides a search capability.
This enables users to perform a variety of searches, from simple text-based searches to
more advanced searches utilizing Boolean logic.
The vSphere Web Client also enables administrators to save searches as named
objects. They can then create complex searches and refer back to them quickly instead
of recreating each search when it is needed.
To perform a task, one first must be able to find the objects upon which to work. In a
small environment, this might not seem very difficult. However, when the environment
scales out to cloud levels it becomes a larger challenge to find objects.
You can also perform Simple and Advanced search operations in vSphere Client. A
search field is available in all vSphere Client views for this purpose. To display the
search page, you can select Inventory and then select Search.
By default, a Simple Search can be performed for all the properties of the specified type
or types of objects for the entered search term. The available options are Virtual
Machines, Hosts, Folders, Datastores, Networks, and Inventory. vCenter Server filters
the search results according to permissions and returns the results.
If you are not satisfied with the results of the simple search, perform an advanced
search.


Advanced Search allows you to search for managed objects that meet multiple criteria.
For example, you can search for virtual machines matching a search string or the virtual
machines that reside on hosts whose names match a second search string.
If the vSphere Web Client is connected to a vCenter Server system that is part of a
Linked Mode group, you can search the inventories of all vCenter Server systems in
that group.
You can only view and search for inventory objects that you have permission to view. In
Linked Mode the search service queries Active Directory for information about user
permissions so you must be logged in to a domain account to search all vCenter Server
systems in a Linked Mode group. If you log in using a local account, searches return
results only for the local vCenter Server system, even if it is joined to other servers in
Linked Mode.


vCenter Server Plug-Ins


vCenter Server plug-ins extend the capabilities of vCenter Server by providing more
features and functions.
Some plug-ins are installed as part of the base vCenter Server product, for example the
vCenter Service Status plug-in, which displays the status of vCenter services, and
vSphere Update Manager, which is used to apply patches and updates to ESXi hosts.
Some plug-ins are packaged separately from the base product and require separate
installation. You can update plug-ins and the base product independently of each other.
VMware offers third-party developers and partners the ability to extend the vSphere
Client with custom menu selections and toolbar icons that provide access to custom
capabilities.
A partner vendor could therefore supply a plug-in that integrates with vCenter, allowing
users to monitor server-specific hardware functions or create storage volumes.


Perfmon DLL in VMware Tools


VMware Tools include additional Perfmon DLL features that enable vSphere
administrators to monitor key host statistics from inside a virtual machine running a
Windows operating system.
The built-in Windows Perfmon counters may not reflect the true load as the OS is
unaware that it is running on virtual hardware.
The VMware Perfmon counters provide an accurate indication of what resources are
actually being consumed by the virtual machine.
Perfmon is a counter-based performance monitoring tool built into Windows operating
systems. It displays performance statistics at regular intervals and can save these
statistics in a file. Administrators can choose the time interval, file format, and statistics
that are to be monitored. The ability to choose which statistics to monitor is based on
the available counters for the selected object.
Installing the ESXi 5.5 version of VMware Tools automatically provides the Perfmon
performance counters VM Processor and VM Memory. Using these counters, the
application administrator can collect accurate performance data for the host level CPU
and memory utilization and compare it side-by-side with the virtual machine's view of
CPU and memory utilization.
This enables the application administrator to better understand how resources are
consumed in a virtualized environment.



Also, when a performance problem is identified, the vSphere administrator and the
application administrator can use a common tool such as Perfmon to isolate the root
cause.
Additionally, third-party developers can instrument their agents to access these
counters using Windows Management Instrumentation or WMI.


vCenter Statistics & Database Size Calculator


Each ESXi host is connected to a vCenter Server, and vCenter Server is connected to a
relational database. vCenter Server collects statistics from each ESXi host periodically
and persists this data to the relational database.
The database in turn executes a number of stored procedures that summarize this data
at various intervals.
Each ESXi host collects statistics at a 20-second granularity. In vCenter, these are
called real-time statistics.
You can view real-time statistics through vSphere Client by selecting the Advanced
button on the Performance tab. The client always receives real-time statistics directly
from the ESXi host. This ensures timeliness of the data and puts no stress on the
database. All statistics stored in the database are called historical statistics.
When sizing your database the primary design requirement is that adequate storage
space exists.
For vCenter Server system databases, Microsoft SQL and Oracle sizing tools are
available on the VMware web site.
The vCenter Server database sizing calculator is an Excel spreadsheet that estimates
the size of the vCenter Server database.



The calculator sizes your database based on the following assumptions: that your
database is SQL Server version 2005 or later; that statistics collection intervals are
default values; and that the information entered is valid for the entire period. It also
assumes that the level of fragmentation of your database is around 50%.
Results assume a reasonable number of tasks and events.
For each statistics collection level, the calculator reports the following results:

The number of samples collected by vCenter Server every five minutes. This calculation
shows the impact of setting a higher statistics level.

The estimated size of the database after one year. The higher and lower bounds of the
estimate assume a 15% variance.

The amount of space that the temporary database requires during the calculation of
rollup values such as averages. This additional space should be added to the primary
vCenter Server database.
Examples of the collection statistics that influence the size of the database include the
number of hosts, virtual machines, clusters, and resource pools, and the average
number of CPUs per host.
Another factor in the size of the database is the Statistics Collection Level. The statistics
level establishes which metrics are retrieved and recorded in the vCenter Server
database. You can assign a statistics level of 1- 4 to each collection interval, with level 4
having the largest number of counters.
There is also a What-if calculator built into the vSphere Client and the Web Client that
can be used to estimate the database size after installation. As with the Excel
spreadsheet, you can estimate the space required by the database by selecting the
interval duration, how long to save the statistics, and the statistics level, as well as the
number of physical hosts and virtual machines.
Because statistics data consumes a large fraction of the database, efficient statistics
collection and processing are important to overall database performance, and the
database is in turn a critical component of vCenter Server performance.
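The arithmetic behind such a sizing estimate can be sketched in a few lines. Every
coefficient below (samples per entity per interval, bytes per sample) is a placeholder
assumption invented for illustration, not one of VMware's measured values; use the
official calculator for real sizing.

```python
# Placeholder coefficients for illustration only; the official VMware
# calculator uses measured values. Level 4 records the most counters.
ASSUMED_SAMPLES_PER_ENTITY = {1: 3, 2: 20, 3: 60, 4: 100}

def estimate_db_size_gb(hosts, vms, stats_level,
                        bytes_per_sample=100, retention_days=365):
    """Very rough estimate of one year of 5-minute rollup statistics."""
    entities = hosts + vms
    rollups_per_day = 24 * 60 // 5  # 288 five-minute intervals per day
    samples_per_day = (entities * ASSUMED_SAMPLES_PER_ENTITY[stats_level]
                       * rollups_per_day)
    total_bytes = samples_per_day * retention_days * bytes_per_sample
    return round(total_bytes / 1024 ** 3, 1)

size = estimate_db_size_gb(hosts=10, vms=200, stats_level=1)
low, high = round(size * 0.85, 1), round(size * 1.15, 1)  # 15% variance
print(size, low, high)
```

Raising the statistics level multiplies the sample volume, which is why the level chosen
for each collection interval dominates the final database size.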


Finding and Retrieving Logs


vCenter has several different log files that can be useful in monitoring or troubleshooting
your environment.
These log files can be viewed and searched using the vSphere Desktop Client or Web
Client.
Log entries can be searched for a particular word or time and you can filter or save your
search. You can also compare log files from two different hosts in the Log Browser.
From time to time a Diagnostic System Log Bundle may be requested by VMware
support. This information contains product specific logs and configuration files from the
host on which the product is run.


vCenter Support Assistant


VMware vCenter Support Assistant is a free, downloadable plug-in for vCenter Server.
It provides an easy-to-use, secure, one-stop shop for creating and managing service
requests, and generating and uploading logs. It also includes a VMware Knowledge
Base search capability, which enables customers to resolve common issues more
rapidly.
Support Assistant helps gather more of the information up front that VMware Technical
Support finds useful.
By minimizing further data requests from VMware Technical Support, vCenter Support
Assistant can help reduce time to resolution and, in turn, minimize system downtime.
Customers can use Support Assistant to open Support Requests for any VMware
product with a support entitlement.
Logs can only be generated from VMware vSphere hosts and vCenter Servers.
VMware vCenter Support Assistant is available as a virtual appliance (VA).
You will require vCenter Server 4.1 or above to install Support Assistant, and it will
generate support bundle data from vSphere 4.1, 5.0, 5.1, and 5.5 hosts and from
vCenter Server 4.1, 5.0, 5.1, and 5.5.


Fill in the missing Components


The diagram illustrates vCenter's Components and how they are related.
Complete the whiteboard with the missing labels from the left.


Which Client?
Your customer wants to be able to utilize the following features.
Which feature belongs to each client?


Module Summary
In summary:
vCenter Server is the primary management tool for vSphere administrators providing a
convenient single point of control for all the components in the data center. vCenter
Server provides the core management functionality and services for large environments,
which are required by the vSphere administrator to perform basic infrastructure
operations.


Course 2 Module 2: vCenter Server Design Constraints

vCenter Server Design Constraints


Welcome to module 2, vCenter Server Design Constraints, of the vCenter course.
These are the topics that will be covered in this module.


Module Objectives
By the time you have completed this module you should be able to describe:

vCenter Server sizing and dependencies

Installation Options

Network Connectivity Requirements

Plug-ins and Add-ons, and

Service and Server Resilience


Configuration Maximums for vCenter


When you select and configure your virtual and physical equipment, you must stay at or
below the maximums supported by vSphere 5.5. These limits represent tested,
recommended limits, and they are fully supported by VMware.
The most important limits that affect the number and configuration of vCenter Server
instances in an environment are listed here.
If any of these limits is likely to impact an environment, a design decision is required to
define the number and location of vCenter instances and how virtual machines should
be distributed.
These limits can be affected by other factors, such as performance requirements or
geographical distribution.
You can find configuration maximum information by clicking the link shown on screen.
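As an illustration of how these limits drive design decisions, the sketch below computes
the minimum number of vCenter Server instances for a given inventory. The limit figures
used are the commonly cited vCenter Server 5.5 maximums; this is an assumption to be
confirmed against the current Configuration Maximums document.

```python
import math

# Selected vCenter Server 5.5 limits as commonly documented in the
# Configuration Maximums guide (assumed here; verify before designing).
VCENTER_55_LIMITS = {"hosts": 1000, "powered_on_vms": 10000}

def vcenters_required(hosts, powered_on_vms, limits=VCENTER_55_LIMITS):
    """Minimum number of vCenter Server instances needed to stay at or
    below the tested limits for hosts and powered-on virtual machines."""
    return max(math.ceil(hosts / limits["hosts"]),
               math.ceil(powered_on_vms / limits["powered_on_vms"]),
               1)

print(vcenters_required(hosts=1500, powered_on_vms=12000))
print(vcenters_required(hosts=200, powered_on_vms=25000))
```

Whichever limit is hit first (hosts or powered-on virtual machines) determines the
instance count, which is why both figures must be checked during design.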


Customer requirements for multiple sites


vCenter Single Sign-On (SSO) has been redesigned in vSphere 5.5 with a multi-master
model.
The architecture has been improved and the need for a separate database has been
removed.
There is built-in automatic replication between and within sites. SSO is now fully
aware.
A full suite of diagnostic and troubleshooting tools can also be downloaded to assist
with SSO troubleshooting.
There is now only one deployment model. For multisite configuration you choose the
option to install vCenter Single Sign-On for additional vCenter servers in a new site.
vSphere 5.5 introduces a single authentication domain called vsphere.local which can
be spread across multiple sites.
Automatic detection and federation of SSO data takes place for each additional SSO
server that is added as well as automatic replication of users and groups, policies and
identity sources.
Single Sign-On does not provide failover between Single Sign-On servers on different
sites, only replication. Each site should be protected using vSphere HA or vCenter
Server Heartbeat.



It is recommended to install all components on a single virtual machine using the
Simple Install option, in order to ensure high availability of your vCenter Server and SSO.
By default in vSphere 5.5 each site is now independent and does not provide a single
pane of glass view as before.
Linked Mode is therefore required to provide a single pane of glass view across
geographically separate vCenters.
Linked mode replicates licenses, permissions and roles across multiple vCenter
servers.


Databases
Each vCenter Server instance must have its own database. Multiple vCenter Server
instances cannot share the same database schema.
Multiple vCenter Server databases can reside on the same database server, or they can
be separated across multiple database servers. An Oracle database server can host
multiple vCenter Server instances, provided each instance has a different schema
owner.
vCenter Server supports Oracle and Microsoft SQL Server databases.
After you choose a supported database type, make sure you understand any special
configuration requirements such as the service patch or service pack level.
Also ensure that the machine has a valid ODBC data source name (DSN) and that you
install any native client appropriate for your database.
Ensure that you check the interoperability matrixes for supported database and service
pack information by clicking the link shown on screen.
Performance is affected by the number of hosts and the number of powered-on virtual
machines in your environment.
Correctly sizing the database will ensure that you avoid performance issues. Where
possible, try to minimize the number of network hops between vCenter Server and its
database. If both are virtual machines, keep them on the same ESXi host, with a DRS
rule if required.
Remember that the bundled Microsoft SQL Server Express database included with the
installable version of vCenter Server is intended for small deployments of no more than
five hosts and 50 virtual machines.
The vCenter Appliance has its own embedded database, vPostgres.
This is only for deployments of up to 100 hosts and 3,000 virtual machines.
The vCenter Appliance only supports Oracle for use as an external database.
Single Sign-On no longer requires a database in vSphere 5.5.
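The database guidance above can be condensed into a small decision helper. This is an
illustrative sketch of the limits stated in this section, not an official support statement;
the function name and structure are invented for the example.

```python
def database_options(hosts, vms, appliance=False):
    """Database choices that remain within the vSphere 5.5 limits
    described above, for a deployment of the given size."""
    options = []
    if appliance:
        if hosts <= 100 and vms <= 3000:
            options.append("embedded vPostgres")
        # The appliance supports only Oracle as an external database.
        options.append("external Oracle")
    else:
        if hosts <= 5 and vms <= 50:
            options.append("bundled SQL Server Express")
        options += ["external Microsoft SQL Server", "external Oracle"]
    return options

# 10 hosts / 200 VMs exceeds the bundled database limit:
print(database_options(10, 200))
print(database_options(10, 200, appliance=True))
```

For example, a customer with 10 hosts and 200 virtual machines falls outside the
bundled-database limit but well inside the appliance's embedded vPostgres limit.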


Directory Services
vSphere 5.1 introduced Single Sign On. Single Sign On has been completely
redesigned in vSphere 5.5.
Single Sign On in vSphere 5.1 used Active Directory as an LDAP Server as an identity
source.
vSphere 5.5 introduces Native Active Directory support using Kerberos as an Identity
Source.
vCenter Single Sign-On creates an authentication domain in which users are
authenticated to access available resources, such as vCenter Server.
The System Domain Identity Source is the default Identity Data Store that ships as part
of vSphere. The System Domain has a name which is an FQDN; the default is
vsphere.local.
The login name for the administrator is always: administrator@vsphere.local.
You should not set the vCenter administrator to be a local OS account, as this doesn't
federate.
There are four identity sources that can be configured for Single Sign-On. The first is
Active Directory (Integrated Windows Authentication), which uses Kerberos.



The second is Active Directory as an LDAP server, which maintains compatibility with
vSphere 5.1. The remaining two are OpenLDAP and the local OS for Windows.
vCenter Single Sign-On 5.5 has the following prerequisites.
You should ensure that your hostname is a fully qualified domain name. The machine
should be joined to an Active Directory domain (for most use cases). Finally, you should
ensure that the hostname is DNS-resolvable (forward and reverse).


Web Client Server


The vSphere Web Client is a web application that can reside either on the same system
as vCenter Server or a separate system.
The vSphere Web Client has two components: A Java server and an Adobe Flex client
application running in a browser.
VMware recommends installing the Web Client on the same server on which you install
vCenter Server, SSO, and the Inventory Service.
For larger deployments a separate Single Sign On and Web Client installation is
recommended.


Network Connectivity Requirements


The VMware vCenter Server system must be able to send data to every managed host
and receive data from every vSphere Client.
To enable migration and provisioning activities between managed hosts, the source and
destination hosts must be able to receive data from each other.
vCenter Server and the vCenter Appliance use several designated ports for
communication. Managed hosts monitor designated ports for data from the vCenter
Server system.
If the Windows firewall is enabled during installation of vCenter Server, the installer
opens these ports. Custom firewalls must be configured manually.
To have the vCenter Server system use a different port to receive vSphere Client data,
see the vCenter Server and Host Management documentation.
When installing vCenter Server, a Gigabit Ethernet connection is recommended as a
minimum on the physical or virtual machine.
You must verify that no Network Address Translation (NAT) exists between the vCenter
Server system and the hosts it will manage.
To avoid conflicts, ensure that no other services, such as IIS, are using the same port as
vCenter Server. As specified above, you can bind vCenter or the conflicting service to
another port.


Required Ports - vCenter Server


Now let's look at the ports that vCenter Server uses to communicate with the managed
hosts and vSphere Client.
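As a reference aid, the snippet below collects some of the commonly documented
default ports in a small lookup table. The list is partial and assumed from the product
documentation; always verify against the official port table for your exact version.

```python
# Partial list of default vCenter Server 5.5 ports (assumed from the
# VMware documentation -- verify against the official port table).
VCENTER_DEFAULT_PORTS = {
    80: "HTTP (redirected to 443)",
    443: "HTTPS for vSphere Client and Web Client connections",
    389: "LDAP for Linked Mode directory services",
    636: "LDAPS for Linked Mode (SSL)",
    902: "ESXi host management and heartbeat (TCP/UDP)",
    903: "Virtual machine console access",
    7444: "vCenter Single Sign-On lookup service",
    9443: "vSphere Web Client HTTPS",
}

def port_role(port):
    """Describe a port's default role, or flag it as unassigned."""
    return VCENTER_DEFAULT_PORTS.get(port, "not a default vCenter port")

print(port_role(902))
print(port_role(5989))
```

A table like this is handy when auditing custom firewall rules, since any port the
installer would normally open must be opened manually on a custom firewall.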


Plug-ins and Add-Ons


Prior to installing a plug-in, you should ensure that you have met any infrastructure or
hardware prerequisites that the plug-in may require and that the version of the plug-in
matches the versions of the vSphere Client and vCenter Server in your environment.
You should check the compatibility of VMware plug-ins on the VMware Product
Interoperability Matrix web page; for third-party plug-ins, consult the manufacturer's
documentation.
Before deciding to implement any plug-ins you must discover what impact this plug-in
will have on your overall design.
Take Update Manager as an example. Update Manager is a vCenter Server plug-in that
allows you to apply updates and patches across all ESXi hosts. It is used to deploy and
manage updates for vSphere and third-party software on hosts and it is used to upgrade
virtual machine hardware, VMware Tools, and virtual appliances.
Update Manager can be installed on the same server as vCenter Server or on a separate
server. Depending on your needs, you must take into consideration any extra Operating
System licenses and the specification of the physical or virtual machine that it will
require.
The best practice is to have Update Manager on the same host as its database,
however this may not be possible depending on your environment. Update Manager
cannot be installed on an Active Directory Domain Controller.


If you use the VMware Update Manager Download Service (for cases where the Update
Manager server cannot be given access to the Internet), you will require another
physical or virtual machine and database to host this component.
With the overall solution in mind, it is also important to know that there is no Update
Manager plug-in for the vSphere Web Client.
As you can see, the decision to implement this plug-in requires not only compatibility
checks but also design choices concerning the allocation and selection of server
resources, databases, and network connectivity.


Service and Server Resilience


vCenter Server installable can be installed either on a physical Windows machine or
inside a Windows virtual machine.
One of the possible reasons for installing vCenter Server on a physical machine might
be to keep it separate from your virtual environment and therefore isolated from any
possible virtual infrastructure outage. However, there are many more advantages for
running vCenter Server in a virtual machine. You don't need a dedicated physical server
and you can have more resilience in the event of a failure.
The vCenter Server virtual machine can be protected using High Availability (HA) so if
the physical server that vCenter is running on fails, HA will restart the vCenter Server
virtual machine on another host in the cluster.
Remember that this may take a couple of minutes, so you need to ensure that this is
acceptable for the level of service you need to guarantee.
In the event of a vCenter Server failure, it is important to note that HA will still fail over
virtual machines, and other virtual machines will continue to run.
Other services, such as vMotion, Storage vMotion, and DRS, will be temporarily
impacted by the outage.
You can manually move a vCenter VM for maintenance tasks and you can create
snapshots of the vCenter Server virtual machine and use them to improve backup
speed, provide rollback options for patching and so on.

The vCenter Server database should also be backed up on a regular basis in the event
that the database becomes corrupt, so that it can easily be restored.
You may also choose to protect VMware vCenter Server using third-party clustering
solutions including, but not limited to, MSCS (Microsoft Cluster Services) and VCS
(Veritas Cluster Services).
Finally, when virtualizing vCenter Server, consider the services and servers that
vCenter Server depends on. For example, you might want to start up the virtual
machines running Active Directory, DNS, SQL, and SSO first, in that order, and ensure
that they power on with a high priority.
You should document the shutdown and start-up procedure for the cluster as a whole.
You should also consider whether you want vCenter Server to reside only on a fixed
host, where you can guarantee resources. If so, ensure that you change your DRS
automation level accordingly.
If using HA, ensure you change the start-up priority for vCenter Server to High.
To provide comprehensive protection for the vCenter server and guarantee high
availability, consider using vCenter Server Heartbeat which is discussed next.
VMware recommends having vCenter as a virtual machine in most instances.


vCenter Server Heartbeat


VMware vCenter Server Heartbeat delivers mission critical high availability for VMware
vCenter Server, protecting virtual infrastructure from application, configuration,
operating system, network and hardware-related problems. It is a Windows-based
service specifically designed to provide high availability protection for vCenter Server
configurations without requiring any specialized hardware.
vCenter Server Heartbeat is based on a cloned Active/Passive server pair. This means
that both machines will be exact copies of each other, including machine name and
domain identification, and will have the same IP address. Both servers benefit from a
shared-nothing architecture, meaning that both servers will have an exact copy of the application,
configuration and data that is replicated from the primary (production) server to the
secondary (ready-standby) server.
Because it is software-based and hardware-agnostic, vCenter Server Heartbeat can be
implemented in a physical or virtual environment, supporting virtual-to-virtual (V2V),
physical-to-virtual (P2V), and physical-to-physical (P2P) configurations.
The default failover mode is automated physical or virtual machine failover.


vCenter Server Heartbeat


When implementing vCenter Server Heartbeat, consider the following. A second
network link is required for replication and heartbeat monitoring between the two
servers. If the other vCenter server is on a co-located site, ensure you have a sufficient
network connection between the sites. vCenter Server Heartbeat currently supports
only a 32-bit version of SQL Server 2005/2008 installed on an x64 operating system,
which cannot be a domain controller, Global Catalog server, or DNS server.
Virtual to Virtual is the supported architecture if vCenter Server is already installed on
the production (Primary) server running on a virtual machine. Benefits to this
architecture include reduced hardware cost, shorter installation time, and use of the
Pre-Clone technique for installation. This option requires similar CPU and memory
configurations, appropriate resource pool priorities, and that you keep each server of
the pair on a separate ESXi host. If using the virtual-to-virtual architecture in a vSphere
HA/DRS enabled cluster, set VM anti-affinity rules to ensure the VMs aren't placed on
the same host, to guard against failure at the host level.


vCenter Server Heartbeat


The Physical to Virtual architecture is used when the environment requires a mix of
physical and virtual machines. This architecture is appropriate to avoid adding more
physical servers or if you plan to migrate to virtual technologies over a period of time.
This option requires a similar CPU, identical memory, and sufficient resources for the
secondary virtual machine.
Consider that you will require twice the amount of resources in order to set up vCenter
Server Heartbeat to protect the secondary vCenter Server instance.


Environment Scaling for vCenter Server


As of vSphere 5.5, VMware recommends that all vCenter services be installed on a
single server. This should be performed using the Simple Install option.
There are no changes to the architecture and all services are local.
This model supports up to 1000 vSphere hosts and 10,000 virtual machines.


Environment Scaling for vCenter Server


For larger organizations that require six or more vCenter Servers, it is recommended to
have a centralized SSO authentication server. This is for installations at the same
physical location.
For this type of deployment, you install an SSO server and the Web Client on the same
virtual machine and then install vCenter Server and the Inventory Service on other
virtual machines.
Availability of services is provided using vSphere HA or vCenter Heartbeat. A network
load balancer should also be implemented.
It is possible to mix vCenter Server 5.1 and 5.5; however, it is recommended that you
upgrade when possible.


Environment Scaling for vCenter Server


vSphere 5.5 Single Sign-On and its multi-master architecture offer benefits to
organizations of any size.
You can install a vCenter server in one location and as the organization grows add a
further geographically disparate site to replicate to.
You can also manage both sites using Linked Mode for a single pane of glass view.
Mixed vCenter Server SSO architectures (installable and appliance) can replicate to
each other in the same authentication domain.
For large organizations you can take advantage of a full scale centralized SSO Server
to offload replication and authentication, which also maintains compatibility with existing
vCenter Servers running version 5.1.
Together, all of the sites can replicate to each other, offering the enterprise full scale
authentication services.


Knowledge Check - vCenter Multisite Configuration


vCenter Server can be installed in a multisite configuration.
Which statements do you think are correct?


vCenter Database Selection


Your customer wants to use vCenter Server to manage 10 hosts and 200 virtual
machines.
Which of the following solutions is valid for the customer?


vCenter Database Selection


Your customer wants to use vCenter Server to manage 10 hosts and 200 virtual
machines.
The three valid combinations are in the customer's hands here.


Module Summary
Now that you have completed this module, you should be able to identify sizing and
dependencies issues, installation options, network connectivity requirements, plug-ins
and add-ons, and service and server resilience considerations for vCenter Server.


Course 2 Module 3: vCenter Scalability Features and Benefits

Module 3: vCenter Scalability Features and Benefits


Welcome to module 3, vCenter Scalability Features and Benefits. These are the topics
that will be covered in this module.

Page 140

VTSP 5.5

Student Study Guide VTSP 5.5

Module Objectives
In this module we are going to take a closer look at the distributed services that vCenter
manages and enables. These services provide the cluster-wide features and advanced
functionality that are the key to vSphere scalability.
Many aspects of this module focus on not only explaining the distributed services but
also showing you sample whiteboards of how to present them to customers.
If a particular service or feature is not part of the core vSphere Standard license, we will
mention the license tier in which it is available.


Presenting vMotion
The traditional challenge for IT is how to execute operational and maintenance tasks
without disruption to business service delivery. In a non-virtualized environment this
essentially means downtime whenever maintenance is required on the infrastructure.
In a virtualized environment we have virtual machines running on the hypervisor which
is installed on the physical host.
Using vMotion, we can migrate (move) a virtual machine from one host to another
without incurring any downtime. To do this, we progressively copy the memory footprint
and running state of the virtual machine across to a separate physical host and then
switch execution of the virtual machine over to the new host, all without any loss of
data or downtime.
This may require a significant amount of dedicated network bandwidth as we may be
moving a lot of data between each host. Sufficient network bandwidth should be
considered during the planning, implementation and configuration stages (On-going
monitoring should also be carefully planned).
It also requires that the CPUs on the physical hosts are from the same manufacturer
and family, i.e., you can't vMotion from an Intel host to an AMD host.
Best practice is for Virtual Machine files to be stored on shared storage systems (i.e. a
storage system where multiple hosts can access the same storage), such as a Fibre
Channel or iSCSI storage area network (SAN), or on an NFS NAS volume.


Performing a vMotion without shared storage is possible using Enhanced vMotion.


However, this will require a significantly longer time to copy a virtual machine, as all the
VM file data and the memory footprint have to be copied across the network before the
virtual machine can be completely moved to the destination host.
A benefit of using shared storage is that vMotion only needs to move the memory
footprint and CPU state of the running virtual machine from the memory of one physical
host to another. vMotion with shared storage therefore completes in less time than
vMotion that does not use shared storage. Prior to version 5.1, this was the only option
for live migration of VMs between hosts. The term vMotion on its own typically implies
that shared storage is required.
While vMotion allows us to migrate virtual machines between hosts so that we can carry
out maintenance operations on those hosts without any downtime, how do we do the
same for storage? Storage vMotion allows us to move the virtual machine files
between storage systems that are connected to the same host, without any service
interruption for the users.
All of these vMotion capabilities enable you to load balance between physical hosts
and storage systems, carry out planned hardware maintenance without service
interruption, and perform system expansions or upgrades, such as moving to newer,
faster hosts or storage systems, without incurring service interruptions.
For vMotion to be supported, you require the following:
• As a minimum, the Essentials Plus license.
• Virtual machine files located on shared storage. Shared storage can be on a Fibre
  Channel or iSCSI storage area network (SAN), or an NFS NAS. Enhanced vMotion
  does not require shared storage, but all other prerequisites are the same as for
  standard vMotion.
• Hosts with the same CPU manufacturer and family, or the same Enhanced vMotion
  Compatibility (EVC) baseline.
• Correctly configured network interfaces on the source and target hosts. On each
  host, configure a VMkernel port group for vMotion. It is a best practice to have at
  least one dedicated GigE adapter for vMotion per host.
• Virtual machines with access to the same subnets on the source and destination
  hosts. If you are using standard switches for networking, ensure that the network
  labels used for virtual machine port groups are consistent across hosts. If you are
  using vSphere Distributed Switches, ensure that the source and destination hosts
  are members of all vSphere Distributed Switches that the virtual machines use for
  networking.
Use of jumbo frames is recommended for best vMotion performance.
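The prerequisite list above lends itself to a simple pre-flight check. The sketch below models it in Python using an invented host dictionary; this is not the VMware API, just an illustration of the constraints:

```python
# Conceptual vMotion prerequisite check (hypothetical data model).
# Each host dict carries the facts the checklist above cares about.

def vmotion_compatible(src, dst, shared_storage=True):
    """Return a list of problems; an empty list means migration can proceed."""
    problems = []
    # CPU compatibility: same vendor/family, or a common EVC baseline.
    if src["cpu_vendor"] != dst["cpu_vendor"]:
        problems.append("CPU vendor mismatch (e.g. Intel vs. AMD)")
    elif src.get("evc_baseline") != dst.get("evc_baseline"):
        problems.append("hosts are not on the same EVC baseline")
    # Each host needs a VMkernel port enabled for vMotion.
    for name, host in (("source", src), ("destination", dst)):
        if not host.get("vmotion_vmkernel_port"):
            problems.append(f"{name} host has no vMotion VMkernel port")
    # VM networks must exist on both sides with consistent labels.
    missing = set(src["port_groups"]) - set(dst["port_groups"])
    if missing:
        problems.append(f"destination missing port groups: {sorted(missing)}")
    # Without shared storage, only Enhanced vMotion (5.1+) can be used.
    if not shared_storage:
        problems.append("no shared storage: requires Enhanced vMotion (5.1+); "
                        "expect a longer migration (disk + memory copied)")
    return problems
```

An empty result means the migration should be allowed to proceed; anything else names the prerequisite that failed.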
Presenting HA
We've talked about planned outages and how to manage them using vMotion. But what
happens if you have an unplanned outage, and how do we deal with it?
With multiple ESXi hosts running virtual machines whose files reside on shared
storage, we can restart the virtual machines on other available hosts if one of our
hosts fails.
This technology is called vSphere High Availability or vSphere HA.
vSphere HA provides high availability for virtual machines and the applications running
within them, by pooling the ESXi hosts they reside on into a cluster.
Hosts in the cluster are continuously monitored. In the event of a host failure, the virtual
machines on the failed host attempt to restart on alternate hosts.
vSphere App HA is new in vSphere 5.5. It is a virtual appliance that you can deploy on
the vCenter Server.
Using the components of vSphere App HA, you can define high availability policies for
critical middleware applications running on your virtual machines in the Data Center,
and configure remediation actions to increase their availability.
This means that you can virtualize Tier 1 applications and create a platform to ensure
that vSphere hosts the most critical part of the business.
In designing the system, you must ensure that you have enough capacity for HA
recovery from a host failure. HA performs failover and restarts virtual machines on
different hosts. Its first priority is the immediate availability of all virtual machines.
If your hosts carry too many virtual machines and you don't have enough spare
capacity, some of the virtual machines might not start even with HA enabled.
If resources are insufficient, you can prioritize the restart order so that the most
important virtual machines come up first.
Virtual machines are generally restarted even when resources are tight, but you then
have a performance issue because the virtual machines contend for the limited
resources.
If virtual machines have reservations and those reservations cannot be guaranteed,
some virtual machines might not be restarted.
You should also implement redundant heartbeat network addresses and isolation
addresses, and address the possible issue of a host isolation response (in the case
where a master heartbeat is lost).
As a minimum, you require an Essentials Plus License to use HA. vSphere App HA is
only available with an Enterprise Plus License.
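The capacity-planning point above can be made concrete with a little arithmetic. The following sketch is a simplified model, not one of vSphere HA's actual admission-control policies: it checks whether the virtual machines of any single failed host would fit into the spare capacity of the survivors.

```python
# Simplified "can we tolerate one host failure?" check.
# hosts: {name: {"capacity_ghz": total CPU, "used_ghz": demand of its VMs}}

def tolerates_one_host_failure(hosts):
    """True if, for every possible single host failure, the failed host's
    VM demand fits into the spare capacity of the surviving hosts."""
    for failed in hosts:
        survivors = {n: h for n, h in hosts.items() if n != failed}
        spare = sum(h["capacity_ghz"] - h["used_ghz"]
                    for h in survivors.values())
        if hosts[failed]["used_ghz"] > spare:
            return False  # some VMs from `failed` could not be restarted
    return True
```

The same shape of calculation applies to memory; real HA admission control also accounts for reservations and slot sizes.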
Presenting DRS
A major concern in IT environments is to ensure that the load is distributed effectively
across the available resources. In this example we have two hosts handling all of the
load and one host with no load. Ideally, you want your infrastructure to sense this and to
move the loads so that the overall utilization is balanced.
VMware DRS (Distributed Resource Scheduler) monitors host CPU and memory
utilization and can automatically respond to changes in load by rebalancing virtual
machines across the cluster when necessary. These virtual machines are moved using
vMotion.
As new virtual machines are created or started, DRS can decide the optimal placement
for the virtual machines so that CPU and memory resources are evenly consumed
across the cluster.
When you add a new physical server to a cluster, DRS enables virtual machines to
immediately take advantage of the new resources because it re-distributes the running
virtual machines across the newly expanded pool of resources. You can also define
rules (affinity rules) that allow you to control which virtual machines must be kept on the
same host and which virtual machines run on separate hosts.
In order for DRS to work, the pre-requisites of vMotion apply to the hosts and virtual
machines in a DRS cluster.
A combination of HA and DRS can be used to enhance the cluster's response to host
failures by improving the load distribution of the restarted VMs: HA powers on the VMs,
then DRS load-balances the VMs across hosts.
DRS offers three levels of automation: Fully Automated, Partially Automated, and
Manual. Fully Automated automatically places virtual machines on hosts when they are
powered on and automatically migrates virtual machines afterwards. Partially
Automated automatically places virtual machines on hosts when they are powered on,
but vCenter only suggests migrations. Manual only suggests migration
recommendations for virtual machines.
When designing for DRS it is important to ensure that you have shared storage.
Because DRS uses vMotion, you should make sure you understand the design
requirements for a vMotion setup.
As a minimum you require an Enterprise License to use DRS.
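The rebalancing idea can be illustrated with a toy greedy algorithm. Real DRS performs a cost/benefit analysis on resource entitlements before recommending a vMotion; this sketch (invented data model) simply moves VMs off the hottest host until utilization is roughly even:

```python
# Toy greedy rebalancer in the spirit of DRS (illustrative only).
# hosts: {host: capacity_ghz}; vms: {vm: (current_host, demand_ghz)}.
# Note: mutates `vms` to reflect the recommended placements.

def recommend_migrations(hosts, vms, threshold=0.2):
    """Recommend (vm, src, dst) moves until the utilization spread between
    the hottest and coolest host drops to `threshold` or below."""
    moves = []
    for _ in range(len(vms)):          # at most one move per VM
        util = {h: sum(d for hst, d in vms.values() if hst == h) / cap
                for h, cap in hosts.items()}
        hi = max(util, key=util.get)
        lo = min(util, key=util.get)
        if util[hi] - util[lo] <= threshold:
            break
        on_hi = sorted((d, vm) for vm, (hst, d) in vms.items() if hst == hi)
        if not on_hi:
            break
        demand, vm = on_hi[0]          # smallest VM on the hottest host
        vms[vm] = (lo, demand)         # would be a vMotion in practice
        moves.append((vm, hi, lo))
    return moves
```

Affinity and anti-affinity rules would be applied as additional constraints when selecting the candidate VM and destination host.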
Presenting DPM
Distributed Power Management (DPM) is an enhancement to DRS that monitors overall
utilization across a cluster. If it finds that the required protection levels can be met by
running all VMs on a reduced number of hosts, it evacuates all virtual machines from
one or more hosts and then puts those hosts into standby mode in order to save overall
power consumption.
When the virtual machine load increases and the remaining hosts can no longer
provide the required level of protection, DPM automatically restarts the standby hosts
and migrates virtual machines back onto them once they have come back online. When
configuring DPM, you must ensure that the cluster can start up and shut down each
host using IPMI or Wake-on-LAN.
You should configure the vSphere DPM automation level for automatic operation and
use the default vSphere DPM power threshold. This decreases power and cooling costs
as well as decreasing administrative management overhead.
As a minimum you require an Enterprise License to use this feature.
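The consolidation decision DPM makes can be approximated as bin-packing arithmetic. This illustrative sketch (not the real algorithm, which works from DRS cost/benefit analysis and power thresholds) asks which hosts could enter standby while still leaving headroom for the current demand:

```python
# Simplified DPM-style consolidation decision (illustrative only).
# hosts: {name: capacity_ghz}; demand_ghz: aggregate VM demand.

def hosts_to_standby(hosts, demand_ghz, headroom=0.2):
    """Return the hosts that could be powered down while the remainder
    still covers demand plus a `headroom` safety margin."""
    needed = demand_ghz * (1 + headroom)
    powered = []
    # Keep the largest hosts powered on first until demand is covered.
    for name, cap in sorted(hosts.items(), key=lambda kv: -kv[1]):
        if sum(hosts[n] for n in powered) < needed:
            powered.append(name)
    return sorted(set(hosts) - set(powered))
```

When demand later rises, the inverse decision (powering standby hosts back on) uses the same arithmetic in reverse.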
Presenting FT
In some cases it is desirable to have the absolute minimum risk of downtime for some
virtual machines. vSphere Fault Tolerance (FT) maintains an identical copy of a running
virtual machine in lockstep on a separate host. All inputs and events performed on the
primary virtual machine are recorded and replayed on the secondary virtual machine,
ensuring that the two remain in an identical state.
FT ensures that in the case of a host failure, the lockstep copy instantly takes over with
zero downtime.
vSphere Fault Tolerance is currently limited to virtual machines with a single vCPU
and, as with vMotion, dedicated network uplinks on all hosts are recommended in order
to ensure that there is sufficient bandwidth available. All other standard vMotion
constraints also apply to VMs protected with Fault Tolerance. Remember that an
FT-protected VM consumes twice the resources, so factor this into your design when
using FT.
As a minimum you require a Standard License in order to use FT.
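Because each FT-protected virtual machine runs a lockstep secondary on another host, its CPU and memory are consumed twice at the cluster level. A small, hypothetical sizing helper makes the point:

```python
# Illustrative cluster-demand calculation: FT-protected VMs count double
# because the primary and the lockstep secondary both consume resources.

def cluster_demand(vms):
    """vms: list of (cpu_ghz, ram_gb, ft_protected). Returns (cpu, ram)."""
    cpu = ram = 0.0
    for cpu_ghz, ram_gb, ft in vms:
        factor = 2 if ft else 1   # primary + lockstep secondary
        cpu += cpu_ghz * factor
        ram += ram_gb * factor
    return cpu, ram
```

Use totals like these when checking whether a cluster sized for N virtual machines can really afford to FT-protect some of them.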
Presenting Storage Distributed Resource Scheduler (SDRS)
Just as DRS is utilized to provide benefits such as resource aggregation, automated
load balancing and bottleneck avoidance for host resources, Storage DRS provides the
same capabilities for storage resources. You can group and manage a cluster of similar
datastores as a single load-balanced storage resource called a datastore cluster.
Storage DRS collects the resource usage information for this datastore cluster and
makes automated decisions or recommendations about the initial virtual machine file
placement and migration to avoid I/O and space utilization bottlenecks on the
datastores in the cluster.
Storage DRS affinity rules enable controlling which virtual disks should or should not be
placed on the same datastore within a datastore cluster. By default, a virtual machine's
virtual disks are kept together on the same datastore. Storage DRS offers three types of
affinity rules:
VMDK Anti-Affinity:
Virtual disks of a virtual machine with multiple virtual disks are placed on different
datastores.
VMDK Affinity:
Virtual disks are kept together on the same datastore.
VM Anti-Affinity:
Two specified virtual machines are kept on different datastores from each other.
You should use affinity and anti-affinity rules as needed. For example, you can improve
the performance of an application by keeping the application disk on a datastore
separate from the operating system disk.
As a minimum you require an Enterprise Plus license to use Storage DRS.
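Checking a proposed placement against these three rule types is straightforward set logic. The sketch below uses an invented data model in which a placement maps each disk (or VM) to a datastore:

```python
# Illustrative Storage DRS rule validation (hypothetical data model).
# placement: {"vm/disk" or "vm": datastore}; rules: [(kind, [items])].

def violations(placement, rules):
    """Return the (kind, items) pairs whose rule is not satisfied."""
    bad = []
    for kind, items in rules:
        stores = {placement[i] for i in items}
        if kind == "vmdk_affinity" and len(stores) > 1:
            bad.append((kind, items))      # disks must share one datastore
        if kind == "vmdk_anti_affinity" and len(stores) < len(items):
            bad.append((kind, items))      # disks must be spread out
        if kind == "vm_anti_affinity" and len(stores) < len(items):
            bad.append((kind, items))      # VMs kept on different datastores
    return bad
```

Storage DRS applies these constraints both at initial placement and when recommending Storage vMotion migrations.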
Presenting Host Profiles
A host profile captures the configuration of a specific host in template form, so that this
profile can then be used to configure other hosts or to validate whether a host's
configuration meets the requirements set by the administrator. This greatly reduces the
manual steps involved in configuring hosts and in maintaining consistency and
correctness of host configuration across the data center.
Host profiles eliminate per-host, manual or UI-based host configuration. vSphere
administrators can use host profile policies to maintain configuration consistency and
correctness across the data center. Host profile policies capture the blueprint of a
known, validated golden configuration and use this as a baseline to configure
networking, storage settings, security settings, and other settings on multiple hosts. This
baseline can then be used to do a one-click or even scheduled configuration of newly
discovered or re-provisioned hosts.
vSphere administrators can also monitor changes to this baseline configuration, detect
discrepancies, and fix them. The baseline reduces the setup time when provisioning
new hosts and eliminates the need for specialized scripts to perform ESXi host
configuration. The baseline can also be used to roll out administrator password
changes.
You should ensure that you set up and properly configure a host that will be used as a
reference host. Using host profiles is only supported for vSphere 4.0 hosts or later. As a
minimum, host profiles require an Enterprise Plus License.
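At its core, a host-profile compliance check is a diff of a host's settings against the reference baseline. The sketch below illustrates the idea with invented setting names; real host profiles cover far more, including networking, storage and security policies:

```python
# Minimal drift-detection sketch: compare a host's configuration with a
# baseline captured from a reference host (setting names are hypothetical).

def compliance_report(baseline, host_config):
    """Return {setting: (expected, actual)} for every deviation."""
    drift = {}
    for key, expected in baseline.items():
        actual = host_config.get(key)
        if actual != expected:
            drift[key] = (expected, actual)
    return drift
```

An empty report means the host is compliant; non-empty entries are what an administrator would remediate (or apply in one click).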
Presenting Storage Profiles
Profile-Driven Storage enables administrators to have greater control over their storage
resources. It enables virtual machine storage provisioning to be automatically defined
by the configuration of the virtual machine.
Profile-Driven Storage uses VASA to deliver the storage characterization supplied by
the storage vendors to vCenter. VASA improves visibility into the physical storage
infrastructure through vCenter Server. Storage can also be tagged manually by the
administrator. Instead of seeing only a block or file device with some amount of
capacity, vCenter now knows about replication, RAID, compression, deduplication,
and other capabilities and characteristics of the storage that it has available.
With this new information, VMware administrators can create storage profiles that allow
them to define storage in terms of capability and not just capacity. Virtual machines can
then be assigned storage by policy in order to meet their storage performance,
protection or other characteristics in addition to capacity. The storage characterizations
are used to create the virtual machine placement rules in the form of storage profiles.
Storage Profiles also provide a way to check a virtual machine's compliance against
these rules.
A key benefit here is that the virtual machine administrator no longer needs to make
detailed storage decisions regarding storage capability when deploying virtual
machines. This also extends to templates so that rapid provisioning can also
automatically assign suitable storage without requiring detailed storage allocation
decisions.
For example, virtual machines may have their storage automatically selected based on
a service level agreement that matches a particular storage profile. In the case shown
here, the virtual machine has Gold level storage specified for both its configuration files
and one of its virtual disks and, Bronze level storage for the other virtual disk. In this
environment, the Gold Storage profile corresponds to a datastore that has fast disks
with a high performance RAID type. The Bronze profile corresponds to a datastore that
has slower disks with a space-efficient RAID type. The use of storage profiles makes it
easier for the administrators to ensure that the correct types of storage resources are
automatically selected.
If you are planning to use storage profiles, ensure that your storage system supports
the VMware vStorage APIs for Storage Awareness (VASA).
Because multiple system capabilities for a datastore are not supported, a datastore that
spans several extents assumes the system capability of only one of its extents.
Storage profiles require as a minimum an Enterprise Plus License.
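Profile matching reduces to asking whether a datastore advertises every capability the profile demands. The capability tags below are invented for illustration; real capabilities come from VASA or from administrator tags:

```python
# Illustrative profile-to-datastore matching using set containment.
# profile: set of required capability tags; datastores: {name: capabilities}.

def compliant_datastores(profile, datastores):
    """Return the names of datastores offering every required capability."""
    return sorted(name for name, caps in datastores.items()
                  if profile <= caps)   # subset test: all requirements met
```

A "Gold" disk would only be placed on datastores returned by this check, while a "Bronze" disk's smaller requirement set matches more (and cheaper) datastores.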
Distributed Virtual Switches
In a vSphere infrastructure, if we need to make any changes or add capacity to the
network configuration in the cluster, we have to ensure that the changes are configured
correctly across each host, or we run the risk of breaking services such as vMotion,
where incorrectly labeled port groups will cause vMotion to fail. Managing the network
configuration on each host becomes a significant administrative overhead when dealing
with multiple servers.
The vSphere Distributed Switch (vDS) improves virtual machine networking by enabling
you to set up virtual machine access switching for your entire data center from a
centralized interface.
This enables improved virtual machine network configuration, enhanced network
monitoring and troubleshooting capabilities, and support for advanced vSphere
networking features such as LACP.
The vSphere Distributed Switch provides the building blocks for many advanced
networking features in a vSphere environment, such as private VLANs (PVLANs),
VXLAN, Network I/O Control, SR-IOV and third-party switch extensions.
vSphere 5.1 also added the ability to export and restore DV switch configurations which
enhances the supportability of virtual switches. It also allows the use of the export as a
template to create a distributed switch in any other deployment.
DV switches now also support an automatic configuration rollback. This ensures that
misconfigurations that would significantly impact host or distributed switch behavior are
prevented from taking effect by rolling back to the previous valid configuration. A
rollback occurs when a network change disconnects a host or creates any other invalid
host networking configuration. Rollbacks also occur when an invalid update is
attempted on a distributed switch, distributed port group or distributed port.
As a minimum you require an Enterprise Plus license to use Distributed Virtual
Switches.
Distributed Virtual Switches are covered in more depth in Course 4.
Auto Deploy
vSphere Auto Deploy facilitates the rapid provisioning of vSphere hosts by leveraging
the network boot capabilities of x86 servers together with the small footprint of the ESXi
hypervisor. Once the ESXi image has been installed, a vCenter host profile is used to
configure the host. After configuration, the host is connected to vCenter, where it is
available to run virtual machines. The entire process is fully automated, allowing new
hosts to be provisioned quickly with no manual intervention.
In vSphere 5.0, stateless (diskless) deployment was the only mode available. Stateless
caching, added later, lets hosts continue operating even if the Auto Deploy server
becomes unreachable: the image is cached when you apply the host profile, and on a
later reboot the host still tries to retrieve its image from the Auto Deploy infrastructure,
falling back to the cached image if the Auto Deploy server is not available.
vSphere Storage Appliances
A VSA cluster leverages the computing and local storage resources of several ESXi
hosts. It provides a set of datastores that are accessible to all hosts within the cluster.
An ESXi host that runs VSA and participates in a VSA cluster is a VSA cluster member.
A VSA cluster is an affordable, resilient shared storage solution that supports vMotion,
HA, DRS and other vSphere distributed services, and that is easy to install and expand.
Multiple VSA instances can be managed within a single vCenter environment.
With VSA 1.0, you could create a VSA cluster with two or at most three VSA cluster
members.
As of vSphere 5.1 the VSA can now support disks in all ESXi hosts in the cluster,
supports up to eight 3TB or twelve 2TB disks internally and up to 16 external disks in an
expansion enclosure.
The VSA now has the ability to dynamically enlarge the shared storage managed by the
VSA provided there is sufficient additional free physical space on all nodes in the VSA
cluster to extend into.
Prior to vSphere 5.1, vCenter Server could run on the VSA but had to be installed
somewhere else before a VSA cluster could be deployed. From VSA 5.1 onwards, you
can install a vCenter Server instance on a local datastore of a VSA cluster virtual
machine.
That vCenter Server instance can then be used to install the VSA, allocating a subset
of local storage on all hosts (excluding the amount allocated for vCenter Server) to the
VSA. Once the VSA is configured, the vCenter Server can be migrated onto the VSA
storage.
You should ensure that you have the resources needed to install a VSA cluster. You will
require a physical or virtual machine that runs vCenter Server, however you can run
vCenter Server on one of the ESXi hosts in the cluster.
You require two or three physical hosts with ESXi installed. The hosts must all be the
same type of ESXi installation. VSA does not support combining freshly installed ESXi
and modified ESXi hosts in a single cluster. You require at least one Gb Ethernet or
10Gb Ethernet Switch.
As a minimum you require an Essentials Plus License to use one instance of VSA. A
VSA License is included in the Essentials Plus and all advanced Kits.
VSA is covered in more depth in Course 5.
Planned Maintenance
Your customer wants to be able to carry out planned maintenance tasks on ESXi hosts without
service interruption.
Which of the following technologies would be a best fit for them?
There are two correct answers.
vSphere Standard License
Your customer has a vSphere Standard License.
Identify three features which are available for the customer to use.
Module Summary
We have now explored the features, benefits and configuration requirements for the
advanced scalability and cluster wide features of vSphere that are managed and
controlled via vCenter.
Now that you have completed this module, feel free to review it until you are ready to
start the next module.
Course 3

VTSP V5.5 Course 3: VMware vSphere: VM Management
Welcome to VTSP 5.5 Course 3: VMware vSphere: VM Management.
There are 3 modules in this course as shown here.
Course 3 Objectives
At the end of this course you should be able to:
• Define the architecture of vSphere 5.5 virtual machines
• Explain how, why and when virtual machines can be customized in order to
  support specific features, configurations and capabilities
• Explain how and why a customer should leverage virtual machine templates,
  cloning and snapshots
• Describe the options available for migrating virtual machines, including cold
  migration, vMotion, and vSphere Replication
• Be familiar with VMware Update Manager and how it is used to manage updates
  for vSphere hosts and virtual machines
Module 1: Virtual Machine Architecture
This is Module 1, Virtual Machine Architecture, which explores the configuration options
so that you can explain the benefits, prerequisites, licensing and other impacts that are
associated with specific virtual machine configuration and virtual infrastructure design
choices. These are the topics that will be covered in this module.
Module 1 Objectives
At the end of this module the delegate will be able to:
• Describe the architecture of vSphere 5.5 virtual machines
• Explore the configuration options
• Explain the benefits, prerequisites, licensing and other impacts that are
  associated with specific VM configuration and virtual infrastructure design
  choices.
What is a virtual machine?
A virtual machine (VM) comprises a set of specification and configuration files and is
backed by the physical resources of a host.
Running on an x86-based hardware platform, a VM behaves exactly like a physical
computer, and contains its own virtual CPU, RAM, and NIC.
A virtual machine contains virtual hardware, a guest operating system, and one or more
applications. In the ESXi architecture, applications running in virtual machines access
CPU, memory, disk, and network resources without direct access to the underlying
physical hardware.
The virtual machine monitor or VMM sends requests for computer resources on behalf
of its virtual machine to the ESXi hypervisor, known as the VMkernel.
In turn, the VMkernel presents the VMM resource requests to the physical hardware.
A virtual machine is a collection of virtualized hardware resources that would constitute
a physical machine in a native environment.
Just as a physical machine can have a number of CPUs, a virtual machine is assigned
a number of virtual CPUs, or vCPUs. For example, a 4-way virtual machine has four
vCPUs. A virtual machine has one or more vCPU worlds on which guest instructions
are executed, so the 4-vCPU virtual machine has four vCPU worlds.
There are other worlds associated with the virtual machine that execute management
tasks like handling the mouse and keyboard, snapshots, and legacy I/O devices.
Therefore, it is theoretically possible that a single-vCPU virtual machine can consume
more than 100% of a processor, although this is unlikely because those management
worlds are mostly inactive.
The CPU scheduler in VMware vSphere (ESXi 5.x) is crucial to providing good
performance in a consolidated environment.
Because most modern processors are multi-core, it is easy to build a system with tens
of cores running hundreds of virtual machines. In such a large system, allocating CPU
resources efficiently and fairly is critical.
Fairness is one of the major design goals of the CPU scheduler.
Allocation of CPU time to virtual machines has to be faithful to the resource
specifications like CPU shares, reservations, and limits. The CPU scheduler works
according to the proportional share algorithm. This aims to maximize CPU utilization
and world execution efficiency, which are critical to system throughput.
With this in mind, you must ensure that you choose the appropriate amount of vCPUs
for your virtual machine depending on the type of workload that it will execute. This is
discussed in more detail later.
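The proportional-share policy can be sketched in a few lines: CPU time is divided in proportion to shares, then clamped by each VM's reservation (floor) and limit (ceiling). This simplified model ignores the redistribution of clamped-off capacity that a real scheduler performs:

```python
# Simplified proportional-share CPU entitlement (illustrative only).
# vms: {name: {"shares": s, "reservation": mhz, "limit": mhz}}.

def cpu_entitlement_mhz(vms, host_mhz):
    """Divide host CPU by shares, then clamp to [reservation, limit]."""
    total_shares = sum(v["shares"] for v in vms.values())
    out = {}
    for name, v in vms.items():
        share = host_mhz * v["shares"] / total_shares
        out[name] = min(max(share, v["reservation"]), v["limit"])
    return out
```

In the example below, VM "b" is capped by its limit and VM "c" is lifted to its reservation, while "a" simply receives its proportional share.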
Virtual Machine Hardware
VMware vSphere 5.5 supports a new virtual machine format, Virtual Hardware
version 10.
The devices of a virtual machine are a collection of virtual hardware that includes the
devices shown on the screen. For example, you can select the amount of RAM to
configure the virtual machine. The processors that the virtual machine identifies are the
same as those on the physical host.
An important point to remember is that the virtual machine platform provided by the
ESXi host is independent of the host system and its physical hardware. Every VMware
platform product provides the same set of virtualized hardware regardless of the system
on which it is running.
For example, you can move a virtual machine from an HP server with an AMD
processor and local SCSI disks to an IBM server with an Intel processor and Fibre
Channel SAN storage, and the virtual machine can be powered on and run unaffected
by the hardware differences.
One caveat: some flavors of Linux install a kernel module specific to AMD when the
virtual machine is initially built on a server with an AMD platform. If you move that
same virtual machine unmodified to an Intel-based host, it could go into a kernel panic
upon boot.
ESXi supports virtual machines with up to 64 virtual CPUs, which allows you to run
larger CPU-intensive workloads on the VMware ESXi platform. ESXi also supports 1TB
of virtual RAM (vRAM), meaning you can assign up to 1TB of RAM to ESXi 5.5 virtual
machines.
In turn, this means you can run even the largest applications in vSphere including very
large databases, and you can virtualize even more resource-intensive Tier 1 and 2
applications.
With vSphere 5.1, VMware partnered with NVIDIA to provide hardware-based vGPU
support inside the virtual machine.
vGPUs improve the graphics capabilities of a virtual machine by off-loading
graphics-intensive workloads to a physical GPU installed on the vSphere host. vSphere
5.1 was the first vSphere release to support a hardware-accelerated 3D graphics
virtual graphics processing unit (vGPU) inside of a virtual machine.
That support was limited to NVIDIA-based GPUs only. With vSphere 5.5, vGPU support
has been expanded to include both Intel- and AMD-based GPUs. Virtual machines with
graphics-intensive workloads or applications that typically have required
hardware-based GPUs can now take advantage of additional vGPU vendors, makes
and models.
Virtual machines still can leverage VMware vSphere vMotion technology, even across a
heterogeneous mix of vGPU vendors, without any downtime or interruptions to the
virtual machine.
vGPU support can be enabled using both the vSphere Web Client and VMware Horizon
View for the Microsoft Windows 7 and Windows 8 OSs. The following Linux OSs are
also supported: Fedora 17 or later, Ubuntu 12 or later, and Red Hat Enterprise Linux
(RHEL) 7. Controlling vGPU use in Linux OSs is supported using the vSphere Web
Client.
Configuration Maximums
Before deploying a virtual machine, you must plan your environment. You should
understand the requirements and configuration maximums for virtual machines
supported by vSphere 5.5.
The maximum CPU configuration is 64 vCPUs per virtual machine. You must have
adequate licensing in place if you want to use this many vCPUs.
The maximum amount of RAM per virtual machine is 1TB. Before scaling this much,
take into account whether the guest operating system can support these amounts and
whether the client can use these resources for the workload required.
With vSphere 5.5, the maximum size of a virtual disk is 62TB - an increase from almost
2TB in vSphere 5.1. 62TB Virtual Mode RDMs can also be created.
vSphere 5.5 adds AHCI SATA controllers. You can configure a maximum of four
controllers with support for 30 devices per controller, making a total of 120 devices. This
increases the number of virtual disks available to a virtual machine from 60 to 180.
The maximum amount of virtual SCSI targets per virtual machine is 60, and is
unchanged.
Currently, the maximum number of Virtual NICs that a virtual machine can have is 10.
Be sure you choose the network adapters appropriate for the virtual machine you are
creating.
Video memory is limited to 512MB per virtual machine.
If you plan an environment with hundreds of desktops, take into account any virtual
machine overhead incurred by using video memory or 3D support.
These limits are generally high enough that you do not need to be concerned with the
overall maximums when creating a typical virtual machine, but as always, bear them in
mind.
The configuration maximums document can be located by visiting www.vmware.com.
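The per-VM maximums quoted above can be expressed as a quick pre-deployment check. The figures are taken from this text; always confirm against the current Configuration Maximums document before relying on them:

```python
# vSphere 5.5 per-VM maximums as quoted in the text above (verify against
# VMware's Configuration Maximums document for the release you deploy).
VM_MAX_55 = {"vcpus": 64, "ram_gb": 1024, "vdisk_tb": 62,
             "vnics": 10, "video_mb": 512}

def validate_vm(config):
    """config: dict with the same keys. Returns a list of violations."""
    return [f"{k}={config[k]} exceeds max {VM_MAX_55[k]}"
            for k in VM_MAX_55 if config.get(k, 0) > VM_MAX_55[k]]
```

Remember that licensing and the guest operating system may impose lower effective limits than these platform maximums.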
Virtual Machine Licensing Considerations
The new vSphere licensing model for each of vSphere 5.0, 5.1 and 5.5 continues to be
based on processor licenses.
It eliminates the restrictive physical entitlements of CPU cores and physical RAM per
server and does not limit the number of virtual machines or amount of virtual memory
(vRAM) on each licensed processor.
Therefore, you can have up to 64 vCPUs in a virtual machine, depending on the number
of licensed CPUs on the host and the type of license.
You cannot start a virtual machine with more virtual processors than the total number of
licensed physical cores on your host.
To use the Virtual Serial Port Concentrator, you must have an Enterprise or Enterprise
Plus license.
vSphere Desktop Edition is designed for licensing vSphere in VDI deployments.
vSphere Desktop provides all the functionalities of vSphere Enterprise Plus.
It can only be used for VDI deployment and can be leveraged with both VMware
Horizon View and other third-party VDI connection brokers.
VMware vSphere Hypervisor is a free product that provides a way to get started with
virtualization quickly and at no cost.
It cannot connect to a vCenter Server or be managed by one.
Previously, in vSphere 5.1, the free hypervisor was limited to using 32GB of physical
RAM. In vSphere 5.5 this restriction has been removed.
Core Virtual Machine Files
Like a physical computer, a virtual machine runs an operating system and applications.
The virtual machine is comprised of a set of specification and configuration files and is
backed by the physical resources of a host.
These files are stored in a single directory on a VMFS or NFS datastore that is
connected to the host.
Configuration files ending in .vmx contain all the configuration and hardware settings for
the virtual machine. These files are stored in plain text format.
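Because a .vmx file is plain text made up of simple key = "value" lines, its settings are easy to inspect programmatically. The following Python sketch is illustrative only (the key names shown are real .vmx settings, but this parser is a simplification, not a VMware tool):

```python
def parse_vmx(text):
    """Parse simple key = "value" pairs from .vmx file contents."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip().strip('"')
    return settings

sample = '''
displayName = "examplevm"
memsize = "4096"
numvcpus = "2"
'''
config = parse_vmx(sample)
```

Reading the file this way makes it easy to audit settings such as memory size and vCPU count across many virtual machines.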
There are two types of swap file.
The <VM_name>.vswp file is created for the virtual machine at power-on. It is used
when the host has exhausted its physical memory and guest memory must be swapped
to disk. Its size is equal to the allocated RAM less any memory reservation at power-on.
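The sizing rule just described can be expressed directly. This is a sketch of the stated relationship in Python; the function name is illustrative, not a VMware API:

```python
def vswp_size_mb(allocated_ram_mb, reservation_mb=0):
    """Size of the <VM_name>.vswp file created at power-on:
    allocated RAM minus any memory reservation."""
    return max(allocated_ram_mb - reservation_mb, 0)

# A 4GB VM with a 1GB reservation gets a 3GB swap file;
# a fully reserved VM gets no swap file at all.
```

This is why setting a full memory reservation on a virtual machine eliminates its per-VM swap file on the datastore.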
The vmx-<VM_name>.vswp file is used to reduce the VMX memory reservation
significantly (for example, from about 50MB or more per virtual machine to about 10MB
per virtual machine).
This allows the remaining memory to be swapped out when host memory is
overcommitted, reducing overhead memory reservation for each virtual machine.
The host creates VMX swap files automatically, provided there is sufficient free disk
space at the time a virtual machine is powered on.
Non-volatile RAM stores the state of the virtual machine's BIOS in the file
<VM_name>.nvram. If this file is deleted, the virtual machine recreates it when it is
powered on.
Log files are created when the virtual machine is power-cycled. The current log file is
always called vmware.log.
A virtual machine template file has the extension .vmtx.


A raw device mapping (RDM) file has the extension -rdm.vmdk.
Each virtual disk drive for a virtual machine consists of a pair of .vmdk files. One is a
text file containing descriptive data about the virtual hard disk, and the second is the
actual content of that disk.
For example, a virtual machine named examplevm has one disk attached to it. This disk
comprises a descriptor file, examplevm.vmdk, of under 1KB, and a 10GB
examplevm-flat.vmdk flat file which contains the virtual machine content.
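The descriptor file records, among other things, an extent line naming the flat file and giving its size in 512-byte sectors. A hedged sketch of reading that line (the regex assumes the common single-extent layout and is not a full descriptor parser):

```python
import re

def extent_info(descriptor_text):
    """Extract the extent file name and size from a .vmdk descriptor.
    Extent lines look like: RW 20971520 VMFS "examplevm-flat.vmdk"."""
    m = re.search(r'^(?:RW|RDONLY|NOACCESS)\s+(\d+)\s+\S+\s+"([^"]+)"',
                  descriptor_text, re.MULTILINE)
    if m is None:
        return None
    sectors = int(m.group(1))          # sectors are 512 bytes each
    return {"file": m.group(2), "size_gb": sectors * 512 / 1024 ** 3}
```

For the 10GB example disk above, 20971520 sectors times 512 bytes equals exactly 10GB.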
There are a number of ways to provision virtual disks, which will be discussed later.
Suspend files are generated when a user or an administrator suspends the virtual
machine from the powered-on state.
Snapshot data files describe the virtual machine's snapshots, if they exist.
Snapshot state files store the memory state of the virtual machine at the time you take
the snapshot. Note that a .vmsn file is created each time you take a snapshot,
regardless of the memory selection.
A snapshot disk file represents the difference between the current state of the virtual
disk and the state that existed at the time the previous snapshot was taken.


Knowledge Check: VM Configuration Maximums


Your customer has asked about configuration maximums on virtual machines.


VMware Tools
VMware Tools is a suite of utilities that enhances the performance of the virtual
machine's guest OS and improves the management of the virtual machine.
The VMware Tools installer files for Windows, Linux, FreeBSD, NetWare, and Solaris
guest OSs are built into ESXi as ISO image files.
After installing and configuring the guest OS, you must install VMware Tools.
VMware Tools provides two very visible benefits: better video performance and the ability
to move the mouse pointer freely into and out of the console window.
VMware Tools also installs other important components, such as device drivers.
The VMware Tools service performs various tasks such as passing messages from the
host to the guest OS, running scripts that help automate the operations of the OS,
synchronizing the time in the guest OS with the time in the host OS, and sending a
heartbeat to the host so that it knows the guest OS is running.
On Windows guests, VMware Tools controls grabbing and releasing of the mouse
pointer.
VMware Tools also enables you to copy and paste text between the desktop of the local
host and the desktop of the virtual machine.


VMware Tools includes a set of VMware device drivers for improved graphical, network,
and mouse performance, as well as efficient memory allocation between virtual
machines.
From the VMware Tools control panel, you can modify settings and connect and
disconnect virtual devices.
There is also a set of VMware Tools scripts that help automate the guest OS tasks.
An icon in the notification area of the Windows taskbar indicates when VMware Tools is
running and provides ready access to the VMware Tools control panel and help utility.


Using Operations Manager for Better Performance and Capacity Utilization


vSphere with Operations Manager provides powerful tools for analyzing the resources
and the performance of your virtual environment.
You can select an object to get an overview of the health status of every related object,
such as the associated vCenter Server, data centers, datastores, hosts, and virtual
machines.
This data allows you to find answers to questions regarding remaining capacity in your
environment, how many virtual machines you can deploy before capacity runs out, or
which resources are constrained in your environment.
Bar graphs display trend views of object counts and resource use activity.
The information tables provide extended forecasts for object counts and for used
resources, including the remaining capacity for the next week, the next month, and
longer periods.
Depending on the object you are looking at, you can start a what-if scenario to learn
about the impact of capacity changes.


Customizing Virtual Machine Settings


Before creating a virtual machine, you must consider such factors as the guest
operating system you want to install and the type of workload it will manage. These will
affect the requirements of your virtual machine.
With this in mind, it is useful to understand the impact each component of a virtual
machine has on resources, capacity, licensing and performance; and what impact
changing these can have. These are explored briefly next.
Virtual machines run the applications and services that support individual users and
entire lines of business. They must be designed, provisioned, and managed to ensure
the efficient operation of these applications and services.
While it is best to deploy virtual machines with the default settings unless a clear case
exists for doing otherwise, there are some choices you might face that can have an
impact on the design of your virtual machine.


NICs
When you configure a virtual machine, you can add virtual network interface cards
(NICs) and specify the adapter type. The types of network adapters that are available
depend on the following factors:

The virtual machine version, which in turn depends on what host created it or
most recently updated it.
Whether the virtual machine has been updated to the latest version for the
current host.
The guest OS.
Six main NIC types are supported: E1000, Flexible, Vlance, VMXNET, VMXNET 2
(Enhanced) and VMXNET3.
The default virtual NIC emulated in a virtual machine is either an AMD PCnet32 device
(vlance), an Intel E1000 device (E1000), or an Intel E1000e device (E1000e).
VMware also offers the VMXNET family of paravirtualized network adapters. These
provide better performance than default adapters and should be used for optimal
performance within any guest OS for which they are available. The VMXNET virtual
NICs (particularly VMXNET3) also offer performance features not found in the other
virtual NICs.
The VMXNET3 paravirtualized NIC requires that the virtual machine use virtual
hardware version 7 or later and, in some cases, requires that VMware Tools be installed
in the guest OS. A virtual machine with a VMXNET3 device cannot use vMotion to
migrate to a host running ESX/ESXi 3.5.x or earlier.
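Based on the guidance above (prefer the paravirtualized VMXNET3 adapter where it is available and the hardware version allows it), a simplified chooser might look like the following. This is an illustrative sketch; the real decision also depends on the exact guest OS support matrix:

```python
def choose_vnic(guest_has_vmxnet3_driver, hw_version):
    """Prefer the paravirtualized VMXNET3 adapter when available;
    it requires virtual hardware version 7 or later and, in some
    cases, VMware Tools in the guest."""
    if guest_has_vmxnet3_driver and hw_version >= 7:
        return "VMXNET3"
    return "E1000"  # emulated default, broadly compatible
```

For example, a hardware version 4 virtual machine falls back to the emulated E1000 even if the guest has a VMXNET3 driver.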
When two virtual machines on the same host communicate through a single vSwitch,
their network speeds are not limited by the wire speed of any physical network card.
Instead, they transfer network packets as fast as the host resources allow. If the virtual
machines are connected to different virtual switches, traffic will go through a wire and
incur unnecessary CPU and network overhead.


vRAM
Carefully select the amount of memory you allocate to your virtual machines. You
should allocate enough memory to hold the working set of applications you will run in
the virtual machine, thus minimizing thrashing. You should also avoid over-allocating
memory, as this consumes memory that could be used to support more virtual
machines.
ESXi uses five memory management mechanisms (page sharing, ballooning, memory
compression, swap to host cache, and regular swapping) to dynamically reduce the
amount of physical memory required for each virtual machine.
When page sharing is enabled, ESXi uses a proprietary technique to transparently and
securely share memory pages between virtual machines, thus eliminating redundant
copies of memory pages. If a virtual machine's memory usage approaches its memory
target, ESXi uses ballooning to reduce that virtual machine's memory demands.
If the virtual machine's memory usage approaches the level at which host-level
swapping will be required, ESXi uses memory compression to reduce the number of
memory pages it needs to swap out. If memory compression doesn't keep the virtual
machine's memory usage low enough, ESXi next forcibly reclaims memory using
host-level swapping to a host cache (if one has been configured). Swap to host cache is
a feature that allows users to configure a special swap cache on SSD storage. In most
cases this host cache (being on SSD) is much faster than regular swap files
(typically on hard disk storage), significantly reducing access latency.


If the host cache becomes full, or if a host cache has not been configured, ESXi will
next reclaim memory from the virtual machine by swapping out pages to a regular swap
file. Like swap to host cache, some of the pages ESXi swaps out might be active. Unlike
swap to host cache, however, this mechanism can cause virtual machine performance
to degrade significantly due to its high access latency.
While ESXi uses these methods to allow significant memory overcommitment with little
or no impact on performance, you should avoid overcommitting memory to the point that
active memory pages are swapped out with regular host-level swapping.
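The escalation described above can be sketched as a simple decision function. The thresholds, parameter names, and return strings here are illustrative; ESXi's real reclamation logic is internal to the hypervisor:

```python
def reclamation_step(usage_mb, target_mb, swap_level_mb, host_cache_configured):
    """Illustrative ordering of ESXi memory reclamation, simplified:
    page sharing always runs; ballooning, compression, and swapping
    engage as memory usage climbs toward host-level swapping."""
    if usage_mb < target_mb:
        return "page sharing only"
    if usage_mb < swap_level_mb:
        return "ballooning"
    if host_cache_configured:
        return "compression, then swap to host cache"
    return "compression, then regular host-level swapping"
```

The key design point mirrored here is the ordering: the cheap, low-impact mechanisms are tried first, and high-latency regular swapping is the last resort.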


CPUs
When choosing the number of vCPUs consider whether the virtual machine needs more
than one. As a general rule, always try to use as few vCPUs as possible. If the
operating system supports symmetric multiprocessing (SMP), consider whether the
application is multithreaded and whether it would benefit from multiple vCPUs. This
could provide improvements for the virtual machine and the host.
Configuring a virtual machine with more vCPUs than its workload can use might cause
slightly increased resource usage, potentially impacting performance on very heavily
loaded systems. Common examples of this include a single-threaded workload running
in a multiple-vCPU virtual machine or a multithreaded workload in a virtual machine
with more vCPUs than the workload can effectively use.
Unused vCPUs still consume timer interrupts in some guest operating systems, though
not with tickless timer kernels such as 2.6 Linux kernels.


SCSI
ESXi supports multiple virtual disk types.
Thick provisioned - Thick virtual disks, which have all their space allocated at creation
time, are further divided into eager zeroed and lazy zeroed disks. An eager-zeroed thick
disk has all space allocated and zeroed out at the time of creation. This increases the
time it takes to create the disk, but results in the best performance, even on the first
write to each block.
A lazy-zeroed thick disk has all space allocated at the time of creation, but each block is
zeroed only on first write. This results in a shorter creation time, but reduced
performance the first time a block is written to. Subsequent writes, however, have the
same performance as eager-zeroed thick disks.
The use of VAAI*-capable SAN storage can speed up disk creation and zeroing by
offloading operations to the storage array.
*VMware vSphere Storage APIs-Array Integration
Thin-provisioned - Space required for a thin-provisioned virtual disk is allocated and
zeroed upon first write, as opposed to upon creation. There is a higher I/O cost (similar
to that of lazy-zeroed thick disks) during the first write to an unwritten file block, but on
subsequent writes, thin-provisioned disks have the same performance as eager-zeroed
thick disks.
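The provisioning trade-offs above can be summarized in a small lookup table. The type names match the vmkfstools disk format names; the labels are qualitative summaries of the text, not measured values:

```python
# Qualitative comparison of virtual disk provisioning types.
DISK_TYPES = {
    "eagerzeroedthick": {"space_allocated": "at creation",
                         "blocks_zeroed": "at creation",
                         "first_write_penalty": False},   # slowest to create
    "zeroedthick":      {"space_allocated": "at creation",  # lazy zeroed
                         "blocks_zeroed": "on first write",
                         "first_write_penalty": True},
    "thin":             {"space_allocated": "on first write",
                         "blocks_zeroed": "on first write",
                         "first_write_penalty": True},
}
```

After the first write to a block, all three types deliver the same write performance; the difference is where the allocation and zeroing cost is paid.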


The use of VAAI-capable SAN storage can improve thin-provisioned disk first-time-write
performance by improving file locking capability and offloading zeroing operations to the
storage array.
Thin provisioning of storage addresses a major inefficiency issue by allocating blocks of
storage to a guest operating system (OS), file system, or database only as they are
needed, rather than at the time of creation.
However, traditional thin provisioning does not address reclaiming stale or deleted data
within a guest OS, leading to a gradual growth of storage allocation to a guest OS over
time. With vSphere 5.1, VMware introduces a new virtual disk type, the space-efficient
sparse virtual disk (SE sparse disk), with the ability to reclaim previously-used space
within the guest OS. Currently, SE sparse disk is restricted to VMware Horizon View.
As a guide, use one partition per virtual disk, and deploy a system disk and a separate
application data disk. This simplifies backup, and separate disks help distribute I/O load.
Place a virtual machine's system and data disks on the same datastore, unless they
have widely varying I/O characteristics. Do not place all system disks on one datastore
and all data disks on another.
Store swap files on shared storage with the virtual machine files, as this option is the
default and the simplest configuration for administration.


Knowledge Check: Virtual Machine Customization


Your customer's data center team has asked about the maximum amount of virtual
disks they can attach to their virtual machine.


Hot Extending Virtual Disks


With Hot Extending, you can extend a virtual disk without any downtime on the virtual
machine. Not all guest OSs support hot-extending.
Keep in mind that after you increase the size of the virtual disk, you need to use the
appropriate tool in the guest OS to allow the file system on this disk to use the newly
allocated disk space.
It is important to note that if a virtual machine has snapshots, and you hot-extend a
virtual disk, you can no longer commit a snapshot or revert the base disk to its original
size.
vSphere 5.5 introduces support for 62TB virtual disks. However, hot extending a disk
file beyond 2TB is not supported at this time; the virtual disk must be offline to be
extended past that size.
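A hedged sketch of these size rules as a pre-check. The helper is hypothetical (real validation also depends on guest OS support and snapshot state):

```python
TB_IN_GB = 1024

def can_hot_extend(new_size_gb, powered_on):
    """vSphere 5.5 supports virtual disks up to 62TB, but extending
    beyond 2TB requires the disk to be offline (simplified check)."""
    if new_size_gb > 62 * TB_IN_GB:
        return False          # beyond the 62TB maximum
    if powered_on and new_size_gb > 2 * TB_IN_GB:
        return False          # hot extend past 2TB is unsupported
    return True
```

So extending a running virtual machine's disk to 3TB fails the check, while the same extension with the disk offline passes.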


Hot-adding Hardware
Examples of hot-addable devices are USB controllers, Ethernet adapters, and hard disk
devices.


Hot-adding Hardware
USB Controllers
USB controllers are available to add to virtual machines to support USB passthrough
from an ESXi host or client computer to the virtual machine.
You can add multiple USB devices to a virtual machine when the physical devices are
connected to an ESXi host. USB passthrough technology supports adding USB devices
such as security dongles and mass storage devices to virtual machines that reside on
the host to which the devices are connected.
Devices can connect to only one virtual machine at a time.
For a list of USB device models supported for passthrough, refer to the knowledge base
article at kb.vmware.com/kb/1021345


Hot-adding Hardware
Network Interface Cards
You can add a network interface card (NIC) to a virtual machine to bridge a network, to
enhance communications, or to replace an older adapter.
When you add a NIC to a virtual machine, you select the adapter type, network
connection, and indicate whether the device should connect when the virtual machine is
turned on.
Ensure that your operating system supports the type of NIC that you wish to use and
remember to use the VMXNET3 paravirtualized network adapter for operating systems
where it is supported.


Hot-adding Hardware
Hard Disks
To add a hard disk to a virtual machine, you can create a virtual disk, add an existing
virtual disk, or add a mapped SAN LUN. You may have to refresh or rescan the
hardware in an operating system such as Windows 2003.
You cannot hot-add IDE disks.


Hot-Add CPU and Memory


The slide shows details of the Add page.


VMDirectPath I/O Generation


VMDirectPath I/O is a technology that improves CPU efficiency by allowing device
drivers in virtual machines to bypass the virtualization layer, and directly access and
control physical devices.
VMDirectPath I/O relies on DMA Address Translation in an I/O memory management
unit to convert guest physical addresses to host physical addresses.
VMDirectPath I/O is targeted to those applications that benefit from direct access by the
guest operating system to the I/O devices.
When VMDirectPath I/O is enabled for a particular device and virtual machine, that
device is no longer shown in the ESXi device inventory and no longer available for use
by the ESXi I/O stack.
This virtual machine takes full control of the device. Therefore, while the virtual machine
gains performance (mostly CPU), it loses many virtualization features, such as vMotion,
virtual device hot add, and virtual machine suspend and resume.
Each virtual machine supports up to six PCI DirectPath devices.
Generally, you cannot use vMotion to migrate a virtual machine configured with a
passthrough PCI device.
However, Cisco Unified Computing System (UCS) environments using Cisco Virtual
Machine Fabric Extender (VM-FEX) distributed switches do support migration of such
virtual machines.
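A simple illustrative pre-flight check for the constraints just listed. The names are hypothetical, not a VMware API:

```python
MAX_DIRECTPATH_DEVICES = 6
FEATURES_LOST = {"vMotion", "device hot add", "suspend/resume"}

def directpath_check(num_devices, required_features, vm_fex=False):
    """Return a list of problems with a proposed VMDirectPath config."""
    problems = []
    if num_devices > MAX_DIRECTPATH_DEVICES:
        problems.append("more than 6 PCI DirectPath devices")
    lost = set(required_features) & FEATURES_LOST
    if vm_fex:
        lost.discard("vMotion")  # Cisco UCS VM-FEX supports migration
    problems.extend(f"feature unavailable: {f}" for f in sorted(lost))
    return problems
```

A design that needs vMotion passes only in the VM-FEX case; otherwise the trade-off for direct device access is losing those virtualization features.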



To ensure that the host hardware is compatible with VMDirectPath I/O, check the
VMware Compatibility Guide at
http://www.vmware.com/resources/compatibility/search.php


Single Root I/O Virtualization (SR-IOV) Support


vSphere versions 5.1 and later support Single Root I/O Virtualization (SR-IOV).
This feature is beneficial for users who want to offload I/O processing to the physical
adapters to help reduce network latency. vSphere 5.5 introduces a new simplified
workflow for SR-IOV configuration.
SR-IOV is a standard that allows one Peripheral Component Interconnect Express
(PCIe) adapter to be presented as multiple separate logical devices to the VMs.
The hypervisor manages the physical functions (PF) while the virtual functions (VFs) are
exposed to the VMs. In the hypervisor, SR-IOV capable network devices offer the
benefits of direct I/O, which include reduced latency and reduced host CPU utilization.
The VMware vSphere ESXi platform's VMDirectPath (passthrough) functionality provides
similar benefits to the customer, but requires a dedicated physical adapter per VM.
In SR-IOV, the pass-through functionality can be provided from a single adapter to
multiple VMs through VFs.
The limitation with this feature is that since the network processing has been offloaded
to a physical adapter, vSphere vMotion, vSphere FT, and vSphere HA features are not
available to the customers when this feature is selected.
vSphere 5.5 enables users to communicate the port group properties defined on the
vSphere standard switch (vSS) or VDS to the virtual functions.



For example, if promiscuous mode is enabled in a port group, that configuration is then
passed to virtual functions, and the virtual machines connected to the port group will
receive traffic from other virtual machines.
To configure SR-IOV, your environment must meet certain criteria.
For example, hosts must run vSphere 5.1 or later with Intel processors; AMD
processors are not supported as of version 5.5. For the full list of supported
configurations, check the vSphere documentation.
SR-IOV offers performance benefits and trade-offs similar to those of DirectPath I/O.
DirectPath I/O and SR-IOV have similar functionality, but you use them to accomplish
different things. SR-IOV is not compatible with certain core virtualization features, such
as vMotion.
SR-IOV does, however, allow for a single physical device to be shared amongst multiple
guests.
With VMDirectPath I/O, you can map only one physical function to one virtual machine.
With SR-IOV you share a single physical device, allowing multiple virtual machines to
connect directly to the physical function.
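The sharing difference can be modeled in a few lines: DirectPath dedicates one physical device per VM, while SR-IOV fans one physical function out to many VMs through virtual functions. This is an illustrative model only, not how the hypervisor actually tracks assignments:

```python
def directpath_map(physical_functions, vms):
    """One dedicated physical device per virtual machine."""
    if len(vms) > len(physical_functions):
        raise ValueError("not enough physical adapters")
    return dict(zip(vms, physical_functions))

def sriov_map(physical_function, num_vfs, vms):
    """One PCIe adapter presented as multiple virtual functions (VFs)."""
    if len(vms) > num_vfs:
        raise ValueError("not enough virtual functions")
    return {vm: (physical_function, vf) for vf, vm in enumerate(vms)}
```

With DirectPath, two VMs require two adapters; with SR-IOV, both can attach to virtual functions of the same adapter.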


Raw Device Mapping (RDM) Overview


Raw Device Mapping (RDM) provides a mechanism for a virtual machine to have direct
access to logical units (LUNs) on the physical storage subsystem. It is available only on
block-based storage arrays.
RDM comprises a mapping file in a separate VMFS volume that acts as a proxy for a
raw physical storage device. It allows a virtual machine to directly access and use the
storage device and contains metadata for managing and redirecting the disk access to
the physical device.
The mapping file gives some of the advantages of direct access to a physical device
while keeping some advantages of a virtual disk in VMFS.
As a result, it merges the VMFS manageability with the raw device access.
You can use the vSphere Client to add raw LUNs to virtual machines.
You can also use vMotion to migrate virtual machines with RDMs as long as both the
source and target hosts have access to the raw LUN.
Additional benefits of RDM include distributed file locking, permissions, and naming
functionalities.
Please note that VMware recommends using VMFS datastores for most virtual disk
storage.
A new LUN is required for each RDM disk attached to a virtual machine.

Knowledge Check: Core Virtual Machine Files


Match the file types on the left with the correct functions to complete the table on the
right.


Module Summary
Now that you have completed this module, you should be able to:

Describe the architecture of vSphere 5.5 Virtual Machines


Explore the configuration options
Explain the benefits, pre-requisites, licensing and other impacts that are
associated with specific VM configuration and virtual infrastructure design
choices


Module 2: Copying and Migrating Virtual Machines


This is module 2, Copying and Migrating Virtual Machines. These are the topics that will
be covered in this module.


Module 2 Objectives
At the end of this module, you will be able to:

Describe the options in vSphere 5.5 for copying or moving Virtual Machines
within and between Virtual Infrastructures.
Explain how and why a customer should make use of these features.


Templates
VMware provides several methods to provision vSphere virtual machines.
The optimal method for your environment depends on factors such as the size and type
of your infrastructure and the goals that you want to achieve.
A template is a master copy of a virtual machine that can be used to create and
provision new virtual machines, minimizing the time needed for provisioning.
The template image usually includes a specific OS, one or more applications, and a
configuration that provides virtual counterparts to hardware components.
Templates coexist with virtual machines in the inventory and cannot be powered on or
edited.
You can create a template by converting a powered-off virtual machine to a template,
cloning a virtual machine to a template, or by cloning another template.
Converting a virtual machine to a template is extremely fast as no copy tasks are
needed. The files are just renamed.
Cloning can be relatively slow as a full copy of the disk files needs to be made.
Templates can be stored in a VMFS datastore or an NFS datastore.
You can deploy from a template in one data center to a virtual machine in a different
data center.


Template Contents
Templates are master images from which virtual machines are deployed. A well-designed
template provides the best starting point for most virtual machine deployments.
When creating a template, you should consider the workload it will be used for, how it
can be optimized to run within ESXi server, and how the guest OS can be optimized.
The type of workload that the virtual machine will process will affect the amount of
vCPU and memory it needs. Size and type of base disk, data disk or disks and default
SCSI controller all must be considered.
You should disable any devices and ports that the virtual machine will not use. Disable
any serial or parallel ports in the virtual machine BIOS that are not required.
Ensure that VMware Tools is installed into the guest OS.
Use templates only for master image deployment, and ensure that they are fit for
purpose. This minimizes the administration overhead required on the guest operating
system.


Cloning a Virtual Machine


Cloning a virtual machine can save time if you are deploying many similar virtual
machines.
You can create, configure, and install software on a single virtual machine.
Then you can clone it multiple times rather than creating and configuring each virtual
machine individually.
You can clone a virtual machine from one data center to another.
Cloning a virtual machine to a template preserves a master copy of the virtual machine.
For example, you can create one template, modify the original virtual machine by
installing additional software in the guest operating system, and create another
template.
You can customize clones in the same way as you can when you deploy from a
template.
The virtual machine being cloned can either be powered on or powered off.


Cloned VMs and Templates Compared


The table points out some of the differences between templates and clones, and when
each is appropriate to use.
The virtual machine must be powered off to create a template. A virtual machine can be
cloned when powered on or off.
A template is a master copy of a virtual machine and can be used to create many
clones. A clone is an exact copy of a virtual machine taken at the time of the clone.
Templates are best for production environments. They are configured as per your
security policy.
Clones are ideal for test and development where you need exact copies of a server.
Templates cannot be powered on or edited, whereas clones can.
Templates are suited for mass deployment of virtual machines. Clones are not.


Knowledge Check: VM Templates


Your customer has asked about using templates across data centers.
Is the statement true or false?


Snapshots: An Overview
Snapshots capture the state and data of a virtual machine at a point in time.
Snapshots are useful when you must revert repeatedly to the same virtual machine
state, but you do not want to create multiple virtual machines.
These short-term solutions for capturing point-in-time virtual machine states are not
appropriate for long-term virtual machine backups. Do not run production virtual
machines from snapshots on a long-term basis.
Snapshots do not support some disk types or virtual machines configured with bus
sharing.
VMware does not support snapshots of raw disks, RDM physical-mode disks, or guest
operating systems that use an iSCSI initiator in the guest.
Snapshots are not supported with PCI vSphere DirectPath I/O devices.
Snapshots can negatively affect the performance of a virtual machine. Performance
degradation depends on how long the snapshot or snapshot tree is in place, the depth
of the tree, and how much the virtual machine and its guest operating system have
changed since you took the snapshot.
This degradation might include a delay in the time it takes the virtual machine to power
on.
When preparing your VMFS datastore, factor snapshots into its size if you are going to
use them. Increase the datastore usage on disk alarm to a value above 30% to avoid
running out of space unexpectedly on a VMFS datastore, which is undesirable in a
production environment.

What is captured in a Snapshot?


A snapshot captures the entire state of the virtual machine, including:
- The contents of the virtual machine's virtual disks,
- The virtual machine settings, and
- The contents of the virtual machine's memory (optional).
After a base snapshot is taken, each snapshot file contains the changes made to the
VM since the base snapshot.
The original disk image remains unchanged, and all Write actions are made to a
different image. This differential image becomes a change log, recording every change
made to a file since the snapshot was taken. This means that Read accesses have to
read not just one file, but all difference data: the original data plus every change made
to the original data.
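The read behavior just described is the classic copy-on-write pattern, and can be sketched at block level. This is a toy model, not the actual VMDK redo-log format:

```python
def write_block(deltas, block, data):
    """Writes after a snapshot go to the newest delta, never the base."""
    deltas[-1][block] = data

def read_block(base_disk, deltas, block):
    """Reads resolve from the newest delta containing the block,
    falling back through older deltas to the original disk."""
    for delta in reversed(deltas):
        if block in delta:
            return delta[block]
    return base_disk[block]

base = {0: "A", 1: "B"}       # original disk image, never modified
deltas = [{}]                  # one snapshot taken: one empty change log
write_block(deltas, 0, "A'")   # change block 0 after the snapshot
```

Reverting to the snapshot is then just discarding the deltas; the cost is that every read must consult the whole chain, which is why deep snapshot trees degrade performance.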
When you revert to a selected snapshot, you return the virtual machine's memory,
settings, and virtual disks to the state that they were in when you took the snapshot.
Note that capturing virtual machine memory within the snapshot consumes a significant
amount of disk space in the datastore.
A file equal in size to the virtual machine's memory is created in the virtual machine's
home folder on the datastore, in the same directory as the .vmx file.
Taking a snapshot is a synchronous operation. Selecting the Quiesce guest file system
option pauses any processes running on the operating system to provide a more
consistent file system while the snapshot is captured. VMware Tools must be installed
in the guest operating system for this option to function.

Snapshots provide a point-in-time image of the disk that backup solutions can use, but
Snapshots are not meant to be a robust method of backup and recovery.
If the files containing a virtual machine are lost, its snapshot files are also lost. Also,
large numbers of snapshots are difficult to manage, consume large amounts of disk
space, and are not protected in the case of hardware failure.
Short-lived snapshots play a significant role in virtual machine data protection solutions
where they are used to provide a consistent copy of the VM while the backup operation
is carried out.


Snapshot Relationships in a Linear Process


Taking successive snapshots of a single VM creates generations of snapshots.
In a linear process, each snapshot has one parent and one child, except for the last
snapshot, which has no child snapshots.
Using snapshots, you can create restore positions in a linear process.
This way, when you add to or modify a virtual machine, you can always revert to an
earlier known working state of the virtual machine.


Snapshot Relationships in a Process Tree


Alternatively, you can create a process tree of snapshots.
With a process tree, you can save a number of sequences as branches from a single
baseline snapshot.
This strategy is often used while testing software.
You can take a snapshot before installing different versions of a program to ensure that
each installation begins from an identical baseline.
In a process tree, each snapshot has one parent, and can have more than one child.
The parent snapshot of a virtual machine is the snapshot on which the current state is
based.
If you revert or go to an earlier snapshot, the earlier snapshot becomes the parent
snapshot of the virtual machine.
You can create extensive snapshot trees that you can use to save the virtual machine
state at any specific time and restore the virtual machine state later.
Each branch in a snapshot tree can have up to 32 snapshots.
Remember that reverting to a snapshot discards the current disk and memory states
and restores them to how they were when the snapshot was taken.
Do not manually manipulate individual child disks or any snapshot configuration files.
This can compromise the snapshot tree and result in data loss.
This restriction includes disk resizing and making modifications to the base parent disk
using vmkfstools.
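The parent/child and revert semantics described above can be sketched with a small tree model. The SnapshotTree class below is hypothetical, written only to illustrate the rules in this section (one parent per snapshot, revert changes the current parent, at most 32 snapshots per branch).

```python
# Illustrative model of a snapshot tree (not a VMware API): each snapshot has
# one parent; reverting makes an earlier snapshot the parent of the current
# state, so later snapshots taken from there form a new branch.

MAX_PER_BRANCH = 32  # each branch in a snapshot tree can have up to 32 snapshots

class SnapshotTree:
    def __init__(self):
        self.parents = {}      # snapshot name -> parent name (None for the first)
        self.current = None    # parent snapshot of the VM's running state

    def take(self, name):
        if self.branch_depth(self.current) >= MAX_PER_BRANCH:
            raise RuntimeError("branch already has 32 snapshots")
        self.parents[name] = self.current
        self.current = name

    def revert(self, name):
        # The reverted-to snapshot becomes the parent of the current state.
        self.current = name

    def branch_depth(self, name):
        depth = 0
        while name is not None:
            depth += 1
            name = self.parents[name]
        return depth

tree = SnapshotTree()
tree.take("base")
tree.take("patch-v1")
tree.revert("base")
tree.take("patch-v2")             # new branch starting from "base"
print(tree.parents["patch-v1"])   # -> base
print(tree.parents["patch-v2"])   # -> base (two children of one parent)
```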

Best Practices for VM Snapshots


Here are some important facts and best practices to keep in mind when working with
VM snapshots.
Snapshots are not backups. As the snapshot file is only a change log relative to the
original virtual disk or parent snapshot, do not rely upon it as a direct backup process.
The virtual machine is running on the most current snapshot, not the original vmdk disk
files.
Snapshots are not complete copies of the original vmdk disk files. The change log in the
snapshot file combines with the original disk files to make up the current state of the
virtual machine.
If the base disks are deleted, the snapshot files are useless.
Snapshot files can grow to the same size as the original base disk file, which is why the
provisioned storage size of a virtual machine increases by an amount equal to the
original size of the virtual machine multiplied by the number of snapshots on the virtual
machine.
While the maximum supported number of snapshots in a chain is 32, VMware
recommends that you use no more than three snapshots in a chain.
Do not keep any single snapshot for more than 24 to 72 hours. This prevents snapshots
from growing so large that they cause issues when they are deleted or committed to the
original virtual machine disks.
An excessive number of snapshots in a chain or very large snapshots may cause
decreased virtual machine and host performance.


Configure automated vCenter Server alarms to trigger when a virtual machine is running
from snapshots.
VMware KB 1025279 details a full list of best practices.
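The sizing rule and recommendations above can be turned into a short worked example. This is an illustrative sketch: the function names are invented, and the thresholds (three snapshots, 72 hours) come straight from the best practices in this section.

```python
# Worst-case provisioned-storage math from the best practices above: every
# snapshot delta can grow to the size of the base disk, so provisioned space
# = base + base * number_of_snapshots.

def worst_case_provisioned_gb(base_disk_gb, num_snapshots):
    return base_disk_gb + base_disk_gb * num_snapshots

def snapshot_warnings(num_snapshots, oldest_age_hours):
    # Flags the two conditions the best practices warn against.
    warnings = []
    if num_snapshots > 3:
        warnings.append("more than 3 snapshots in the chain")
    if oldest_age_hours > 72:
        warnings.append("a snapshot is older than 72 hours")
    return warnings

print(worst_case_provisioned_gb(100, 2))   # -> 300 (GB for a 100 GB disk, 2 snapshots)
print(snapshot_warnings(5, 96))            # both thresholds exceeded
```

A check like this mirrors what an automated vCenter Server alarm would watch for.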


Knowledge Check: VM Snapshot Best Practices


Your customer will be using snapshots.
Which three statements about snapshots are correct?


Options for Moving a Virtual Machine


In certain circumstances, you may want to relocate a virtual machine from one location
to another.
These circumstances may include, but are not limited to, moving a virtual machine
between platforms using different VMware products, troubleshooting issues involving
high disk space usage, balancing disk space usage, or cloning or backing up a virtual
machine.
Copying or moving virtual disk files across a network can be accomplished in many
ways and on many platforms.
There are several options to move files across to different platforms: FTP file transfer,
SCP file transfer, NFS shares, or Windows File Sharing (CIFS shares), for example.
Steps on how to enable, configure, and transfer files using these specific methods are
outside of the scope of this course.
You can also download virtual machine files using the datastore browser.
Virtual machines can be cold migrated, which enables the virtual machine to be
powered up on another ESXi host with a different family of CPU, possibly in another
data center.
You can conveniently export a virtual machine as an Open Virtualization Format
package and transport this to another host, site or country. This can then be imported
into another vCenter for use.
Using vMotion, you can move a live running virtual machine from one ESXi host to
another without downtime.


With Storage vMotion you can migrate a virtual machine and its files from one datastore
to another.
With enhanced vMotion, a virtual machine can change its datastore and host
simultaneously, even if the two hosts have no shared storage in common.
These options are outlined here and discussed in more detail in the next module.


Importing and Exporting


With vSphere Client, you can import and export virtual machines, virtual appliances, and
vApps so that you can share these objects between products and organizations.
You are already familiar with virtual machines.
A virtual appliance is a pre-configured virtual machine that typically includes a pre-installed guest operating system and software that provides a specific function.
A vApp is a container for one or more virtual machines that can be used to package and
manage multi-tiered applications.
You can add pre-configured virtual machines and virtual appliances to the vCenter
Server or ESXi host inventory.
When you export a virtual machine, you can create virtual appliances that can be
imported by other users.
You can use the Export function to distribute pre-installed software as a virtual
appliance or as a means of distributing virtual machine templates to users.
You can also do this for users who cannot directly access and use the templates in your
vCenter Server inventory.
Virtual machines, virtual appliances, and vApps are stored in the Open Virtualization
Format, or OVF.
This file format allows for the exchange of virtual machines, virtual appliances, and
vApps across products and platforms.
OVF files are compressed, allowing for faster downloads.


The vSphere Client validates OVF files before importing and ensures that they are
compatible with the intended destination servers.
OVF files can include metadata; for example, you can attach an end-user license
agreement to the OVF package to show to the end user when the package is installed.
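An OVF descriptor is an XML file, so the metadata mentioned above can be read with any XML parser. The fragment below is a trimmed, hypothetical .ovf descriptor written only to show where an end-user license agreement lives; real descriptors carry much more (hardware, disks, networks).

```python
# Minimal sketch of pulling the EULA out of an OVF descriptor.
# The XML fragment is hypothetical and heavily trimmed.
import xml.etree.ElementTree as ET

OVF = "{http://schemas.dmtf.org/ovf/envelope/1}"

descriptor = """<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1">
  <VirtualSystem>
    <EulaSection>
      <Info>End-user license agreement</Info>
      <License>Example EULA text shown at import time.</License>
    </EulaSection>
  </VirtualSystem>
</Envelope>"""

root = ET.fromstring(descriptor)
# Locate the license text that a client would display during import.
eula = root.find(f".//{OVF}EulaSection/{OVF}License")
print(eula.text)   # -> Example EULA text shown at import time.
```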


Migration Overview
Apart from importing and exporting the virtual machines, you can migrate virtual
machines from one host to another or from one datastore to another.
Choice of migration method will depend on the environment and whether the priority is
avoiding downtime, maximizing virtual machine performance, or using new storage.
There are five migration techniques, each one serving a distinct purpose.
If a virtual machine is powered off or suspended during migration, we refer to the
process as cold migration.
With a cold migration, the source and target host do not require shared storage.
If the virtual machine is powered off, it can be moved and powered on using a
completely different host with different CPU family characteristics.
If your virtual machine needs to stay running for any reason, then you can use vMotion
to migrate the virtual machines. vMotion is required if you are using VMware Distributed
Resource Scheduler or DRS, as it allows DRS to balance virtual machines across hosts
in the DRS cluster. This will be discussed in detail later.
If you are migrating a virtual machine's files to a different datastore to balance the disk
load better or transition to a different storage array, use Storage vMotion.
With enhanced vMotion, you can change the location of a VM's datastore and host
simultaneously, even if the two hosts have no shared storage in common.


Cold Migration
A cold migration moves the virtual machine configuration files and optionally relocates
the disk files, in three basic steps.
First, the vCenter Server moves the configuration files, including the NVRAM and the
log files.
A cold migration also moves the suspend file for suspended virtual machines and,
optionally, the disks of the virtual machine from the source host to the destination host's
associated storage area.
Then, the vCenter Server registers the virtual machine with the new host.
After the migration is complete, the vCenter Server deletes the old version of the virtual
machine from the source host.
If any errors occur during the migration, the virtual machine reverts to the original state
and location.
If the virtual machine is turned off and configured with a 64-bit guest operating system,
vCenter Server generates a warning if you try to migrate it to a host that does not
support 64-bit operating systems.
Otherwise, CPU compatibility checks do not apply when you migrate turned off virtual
machines with cold migration.
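The three-step sequence above, including the revert-on-error behavior, can be sketched as follows. The data model (dictionaries for the VM and hosts) is invented purely for illustration and is not how vCenter Server represents these objects.

```python
# Sketch of the cold-migration sequence described above: copy configuration
# (and optionally disks) to the destination, register the VM on the new host,
# then delete the old copy; any error reverts the VM to its original state.

def cold_migrate(vm, source, dest, move_disks=True):
    copied = list(vm["config_files"]) + (list(vm["disks"]) if move_disks else [])
    try:
        dest["files"].extend(copied)             # 1. move files to destination storage
        dest["registered"].append(vm["name"])    # 2. register VM with the new host
        source["registered"].remove(vm["name"])  # 3. delete the old version
    except Exception:
        # On any error, the VM reverts to its original state and location.
        dest["files"] = [f for f in dest["files"] if f not in copied]
        if vm["name"] in dest["registered"]:
            dest["registered"].remove(vm["name"])
        raise

vm = {"name": "web01",
      "config_files": ["web01.vmx", "web01.nvram"],
      "disks": ["web01.vmdk"]}
src = {"registered": ["web01"]}
dst = {"files": [], "registered": []}

cold_migrate(vm, src, dst)
print(dst["registered"])   # -> ['web01']
print(src["registered"])   # -> []
```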


vMotion Migration
There are three types of vMotion migration: vMotion, Storage vMotion and enhanced
vMotion.
vMotion is a key enabling technology for creating the dynamic, automated, and self-optimizing data center.
With vSphere vMotion, you can migrate virtual machines from one physical server to
another with zero downtime, providing continuous service availability and complete
transaction integrity.
If you need to take a host offline for maintenance, you can move the virtual machine to
another host. With vMotion, virtual machine working processes can continue throughout
a migration.
The entire state of the virtual machine is moved to the new host, while the associated
virtual disk remains in the same location on storage that is shared between the two
hosts. After the virtual machine state is migrated, the virtual machine runs on the new
host. Migrations with vMotion are completely transparent to the running virtual machine.
You can use vSphere Distributed Resource Scheduler (DRS) to migrate running virtual
machines from one host to another to balance the load thanks to vMotion.
Migration with vMotion requires a vMotion license and a specific configuration. vMotion
is available in Standard, Enterprise and Enterprise Plus editions of vSphere.


Designing for vMotion


Configure hosts for vMotion with shared storage to ensure that virtual machines are
accessible to both source and target hosts.
Shared storage can be implemented on a VMFS datastore located on a Fibre Channel
or iSCSI SAN. It can also be an NFS datastore on NAS storage.
The source and destination hosts must have compatible processors.
vMotion requires that the processors of the target host be able to resume
execution using the equivalent instructions that the processors of the source host were
using when the virtual machine was suspended.
Processor clock speeds and cache sizes, and the number of processor cores can vary,
but processors must come from the same vendor class (that is, Intel or AMD) and same
processor family (for example, P4 or Opteron).
Migration with vMotion also requires correctly configured network interfaces on source
and destination hosts.
vMotion requires at least a dedicated Gigabit Ethernet network between all vMotion-enabled hosts.
If only two Ethernet adapters are available then for best security, dedicate the GigE
adapter to vMotion, and use VLANs to divide the virtual machine and management
traffic on the other adapter.
For best availability, combine both adapters into a bond, and use VLANs to divide traffic
into networks: one or more for virtual machine traffic and one for vMotion.


To meet vMotion compatibility requirements, ensure that a virtual machine's swap file is
accessible to the destination host.
Configure a VMkernel port group on each host for vMotion. Use of Jumbo Frames is
recommended for best vMotion performance.
Ensure that virtual machines have access to the same subnets on source and
destination hosts.
Concurrent vMotion and Storage vMotion are possible but may require additional
network resources.
If you need to support Storage vMotion or more than four concurrent vMotion migrations,
check the product documentation for the limits on simultaneous migrations.
Note that a vMotion migration will fail if the virtual machine uses raw disks for clustering
purposes.
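The design requirements in this section lend themselves to a pre-migration checklist. The sketch below is illustrative only: the host and VM records are hypothetical stand-ins for what an administrator would verify, not vSphere objects.

```python
# Sketch of the vMotion pre-checks described above: shared storage visible to
# both hosts, same CPU vendor class and family, vMotion networking configured,
# and no raw disks used for clustering.

def vmotion_compatible(source, dest, vm):
    errors = []
    if vm["datastore"] not in dest["datastores"]:
        errors.append("destination cannot see the VM's shared datastore")
    if source["cpu_vendor"] != dest["cpu_vendor"]:
        errors.append("CPU vendor mismatch")
    if source["cpu_family"] != dest["cpu_family"]:
        errors.append("CPU family mismatch")
    if not (source["vmotion_nic"] and dest["vmotion_nic"]):
        errors.append("vMotion VMkernel networking not configured on both hosts")
    if vm.get("raw_disk_clustering"):
        errors.append("VM uses raw disks for clustering")
    return errors

host_a = {"cpu_vendor": "Intel", "cpu_family": "Xeon-E5",
          "vmotion_nic": True, "datastores": {"ds1"}}
host_b = {"cpu_vendor": "Intel", "cpu_family": "Xeon-E5",
          "vmotion_nic": True, "datastores": {"ds1"}}
vm = {"datastore": "ds1"}
print(vmotion_compatible(host_a, host_b, vm))   # -> [] (no blockers)
```

Note that clock speeds, cache sizes, and core counts deliberately do not appear in the check, matching the text above.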


Storage vMotion
With Storage vMotion, you can migrate a virtual machine and its disk files from one
datastore to another while the virtual machine is running.
You can move virtual machines off arrays for maintenance or to upgrade.
You also have the flexibility to optimize disks for performance, or to transform disk
types, which you can use to reclaim space.
During a migration with Storage vMotion, you can transform virtual disks from Thick-Provisioned Lazy Zeroed or Thick-Provisioned Eager Zeroed to Thin-Provisioned, or the
reverse.
You can choose to place the virtual machine and all its disks in a single location, or
select separate locations for the virtual machine configuration file and each virtual disk.
The virtual machine does not change execution host during a migration with Storage
vMotion.
The Storage vMotion migration process does not disturb the virtual machine. There is
no downtime and the migration is transparent to the guest operating system and the
application running on the virtual machine.
You can migrate a virtual machine from one physical storage type to another. Storage
vMotion supports FC, iSCSI, and NAS network storage.
Storage vMotion was enhanced in vSphere 5.x to support migration of virtual machine
disks with snapshots.
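The space-reclaim benefit of converting disk types during a Storage vMotion comes down to simple arithmetic: a thin disk consumes only the blocks the guest has actually written, while thick formats reserve the full provisioned size. The function below is an illustrative model with invented names and numbers.

```python
# Sketch of why transforming thick-provisioned disks to thin during a
# Storage vMotion can reclaim datastore space.

def datastore_usage_gb(provisioned_gb, written_gb, disk_format):
    if disk_format == "thin":
        return written_gb        # grows on demand, up to provisioned_gb
    return provisioned_gb        # thick formats reserve everything up front

print(datastore_usage_gb(100, 30, "thick-eager"))  # -> 100
print(datastore_usage_gb(100, 30, "thin"))         # -> 30 (70 GB reclaimed)
```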


Storage vMotion Uses


Storage vMotion has a number of uses in virtual data center administration.
For example, during an upgrade of a VMFS datastore, the vCenter Server administrator
can migrate the virtual machines that are running on a VMFS3 datastore to a VMFS5
datastore and then upgrade the VMFS3 datastore without any impact on virtual
machines.
The administrator can use Storage vMotion to migrate virtual machines back to the
original datastore without any virtual machine downtime.
vCenter Server administrators can use Storage vMotion to move virtual machines off a
storage device to allow maintenance, reconfiguration, or retirement of the storage
device without virtual machine downtime.
Another use is for redistributing storage load.
Using Storage vMotion, the administrator can redistribute virtual machines or virtual
disks across a series of storage volumes in order to balance the system capacity and
improve performance.
Alternatively, administrators might migrate virtual machines to tiered storage with
different service levels to address the changing business requirements for those virtual
machines, and so help achieve service level targets.


Storage vMotion Design Requirements and Limitations


To ensure successful migration with Storage vMotion, a virtual machine and its host
must meet resource and configuration requirements for virtual machine disks to be
migrated.
The virtual machine disks must be in persistent mode or RDMs.
For virtual compatibility mode RDMs, you can migrate the mapping file or convert it into
thick-provisioned or thin-provisioned disks during migration, as long as the destination is
not an NFS datastore. If you convert the mapping file, a new virtual disk is created and
the contents of the mapped LUN are copied to this disk.
For physical compatibility mode RDMs, you can migrate the mapping file only.
Another limitation is that migration of virtual machines during VMware Tools installation
is not supported.
Additionally, the host on which the virtual machine is running must have a license that
includes Storage vMotion. ESX and ESXi 3.5 hosts must be licensed and configured for
vMotion.
ESX and ESXi 4.0 and later hosts do not require vMotion to be configured in order to
perform migrations with Storage vMotion.
The host on which the virtual machine is running must have access to both the source
and target datastores.
And finally, the number of simultaneous migrations with Storage vMotion is limited.
Check the product documentation for further information on calculating specific limits.
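The RDM rules stated above can be expressed as a small decision function. This is an illustrative sketch with invented names, encoding only the rules in this section: physical-mode RDMs allow moving the mapping file only, while virtual-mode RDMs can also be converted to thick or thin VMDKs unless the destination is an NFS datastore.

```python
# Sketch of Storage vMotion eligibility for raw device mappings (RDMs),
# per the limitations described above.

def rdm_migration_options(rdm_mode, dest_is_nfs):
    if rdm_mode == "physical":
        return ["move mapping file"]
    if rdm_mode == "virtual":
        options = ["move mapping file"]
        if not dest_is_nfs:
            # Conversion creates a new virtual disk and copies the LUN contents.
            options.append("convert to thick or thin VMDK")
        return options
    raise ValueError("unknown RDM mode")

print(rdm_migration_options("virtual", dest_is_nfs=False))
print(rdm_migration_options("physical", dest_is_nfs=False))   # -> ['move mapping file']
```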


Enhanced vMotion
vSphere 5.1 enabled a virtual machine to change its datastore and host simultaneously,
even if the two hosts don't have any shared storage in common.
It allows virtual machine migration between clusters in a larger data center that may not
share a common set of datastores, and also enables virtual machine migration in small
environments without access to expensive shared storage equipment.
Another way of looking at this functionality is supporting vMotion without shared
storage.
To use enhanced vMotion, the hosts must be connected to the same VMware vCenter
and be part of the same data center.
In addition, the hosts must be on the same layer-2 network.
vSphere 5.1 and later allows the combination of vMotion and Storage vMotion into a
single operation.
This combined migration copies both the virtual machine memory and its disk over the
network to the destination host.
After all the memory and disk data are sent over, the destination virtual machine
resumes and the source virtual machine is powered off.
This vMotion enhancement ensures:
- Zero-downtime migration
- No dependency on shared storage
- Lower operating cost



- Improved service levels and performance SLAs
Enhanced vMotion is also referred to as cross-host storage vMotion. It can only be
initiated when using the vSphere Web Client.
A virtual machine and its host must meet resource and configuration requirements for
the virtual machine files and disks to be migrated with cross-host storage vMotion.


Enhanced vMotion
Cross-host storage vMotion is subject to the following requirements and limitations:

- The hosts must be licensed for vMotion and running ESXi 5.1 or later.
- The hosts must meet the networking requirements for vMotion mentioned previously.
- The virtual machines must be configured for vMotion, and virtual machine disks must be in persistent mode or be raw device mappings.
- The destination host must have access to the destination storage.
- When you move a virtual machine with RDMs and do not convert those RDMs to VMDKs, the destination host must have access to the RDM LUNs.
Finally, consider the limits for simultaneous migrations when you perform a cross-host
storage vMotion.
See the vCenter Server and Host Management product documentation for further
information available at
http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html
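The requirement list above can be folded into a single readiness check. This sketch is illustrative: the field names are hypothetical stand-ins for what an administrator would verify before a cross-host storage vMotion.

```python
# Sketch of the cross-host (enhanced) vMotion checklist described above.

def enhanced_vmotion_ready(src, dst, vm):
    checks = {
        "hosts on ESXi 5.1 or later": src["esxi"] >= (5, 1) and dst["esxi"] >= (5, 1),
        "same vCenter and data center": src["vcenter"] == dst["vcenter"]
                                        and src["datacenter"] == dst["datacenter"],
        "same layer-2 network": src["l2_network"] == dst["l2_network"],
        "disks persistent or RDM": vm["disk_mode"] in ("persistent", "rdm"),
    }
    # Return the names of any failed checks.
    return [name for name, ok in checks.items() if not ok]

src = {"esxi": (5, 5), "vcenter": "vc1", "datacenter": "dc1", "l2_network": "vlan10"}
dst = {"esxi": (5, 5), "vcenter": "vc1", "datacenter": "dc1", "l2_network": "vlan10"}
print(enhanced_vmotion_ready(src, dst, {"disk_mode": "persistent"}))  # -> []
```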


Microsoft Cluster Services Support


Microsoft Cluster Service (MSCS) continues to be deployed in virtual machines for
application availability purposes. VMware is introducing a number of additional features
to continue supporting customers who implement this application in their vSphere
environments. In vSphere 5.5, VMware introduces support for the following features
related to MSCS:

- Support for Microsoft Windows 2012 clustering
- Round-Robin path policy for shared storage
- Fibre Channel over Ethernet (FCoE) and iSCSI protocols for shared storage



Knowledge Check: Storage Design Requirements for Migration


Shared storage may be required for certain migration scenarios.
Which migration type requires shared storage?


Module Summary
Now that you have completed this module, you should be able to:

- Describe the options in vSphere 5.5 for copying or moving virtual machines within and between virtual infrastructures
- Explain how and why a customer should make use of these features

Having completed this module, feel free to review it until you are ready to start the next
module.


Module 3: vSphere Replication and vSphere Update Manager


This is module 3, vSphere Replication and vSphere Update Manager.
These are the topics that will be covered in this module.


Module 3 Objectives
At the end of this module you will be able to:
- Explain the benefits of, and prerequisites for, vSphere Replication and vSphere Update Manager


Why should a customer consider vSphere Replication?


With vSphere Replication, administrators can continually replicate a running virtual
machine to another location.
Replication creates a copy of a virtual machine that can be stored locally within a cluster
or at another site, providing a data source to rapidly restore a virtual machine within
minutes.
vSphere Replication is provided as a no-charge component of all eligible VMware
vSphere licenses, from the Essentials Plus Kit through the Enterprise Plus Edition.
It offers protection and simple recoverability to the vast majority of VMware
environments without extra cost.
This all means you can protect individual virtual machines without the need for
expensive storage replication hardware. vSphere Replication works regardless of your
storage platform.
It can be regarded as supplementing or replacing your existing replication solution.
vSphere Replication eliminates third party replication costs and helps to create a flexible
disaster recovery plan.
It reduces bandwidth use by seeding an initial copy of virtual machines and then
replicating only delta files across the network.
vSphere Replication operates at the VMDK level, meaning you can repurpose older
storage at the protection site.



You can also leverage different technologies at opposite sites such as SAN to NAS, FC
to iSCSI etc.
vSphere Replication can provide flexible RPOs from 24 hours to as little as 15 minutes,
scaling to hundreds of virtual machines per cluster. RPO is Recovery Point Objective,
the maximum tolerable period in which data might be lost from an IT service due to a
major incident.
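The RPO range stated above can be made concrete with a small compliance check. This is an illustrative sketch (function and variable names are invented); it encodes only what the text says: RPOs between 15 minutes and 24 hours, with a VM out of compliance when its last replicated copy is older than its RPO.

```python
# Sketch of RPO compliance: data loss is bounded by the age of the last
# replicated copy, which must not exceed the configured RPO.
from datetime import datetime, timedelta

MIN_RPO = timedelta(minutes=15)
MAX_RPO = timedelta(hours=24)

def rpo_compliant(last_sync, now, rpo):
    if not MIN_RPO <= rpo <= MAX_RPO:
        raise ValueError("RPO outside the supported 15 min to 24 h range")
    return now - last_sync <= rpo

now = datetime(2014, 1, 1, 12, 0)
print(rpo_compliant(datetime(2014, 1, 1, 11, 50), now, timedelta(minutes=15)))  # -> True
print(rpo_compliant(datetime(2014, 1, 1, 9, 0), now, timedelta(hours=1)))       # -> False
```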


vSphere Replication
vSphere Replication is the only true hypervisor-level replication engine available
today.
It is integrated with a vSphere Essentials Plus license or higher.
Changed blocks in the virtual machine disk or disks for a running virtual machine at a
primary site are sent to a secondary site, where they are applied to the virtual machine
disks for the offline or protection copy of the virtual machine.
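The changed-block flow just described can be sketched as follows. This is an illustrative model (the block maps and functions are invented, not the vSphere Replication implementation): only blocks that differ from the replica are shipped and applied to the offline copy.

```python
# Sketch of hypervisor-level replication: after an initial full copy,
# only changed blocks are sent to the secondary site each cycle.

def changed_blocks(primary, replica):
    return {n: data for n, data in primary.items() if replica.get(n) != data}

def replicate(primary, replica):
    delta = changed_blocks(primary, replica)
    replica.update(delta)      # apply only the delta at the target
    return len(delta)          # blocks actually sent over the wire

primary = {0: "boot", 1: "db-v1", 2: "logs"}
replica = dict(primary)        # in sync after the initial full copy
primary[1] = "db-v2"           # guest writes change one block
print(replicate(primary, replica))   # -> 1 (only the changed block is sent)
print(replica[1])                    # -> db-v2
```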
This is cost-efficient because it reduces both storage costs and replication costs.
At the storage layer, vSphere Replication eliminates the need to have higher-end
storage arrays at both sites.
Customers can use lower-end arrays, and different storage across sites, including
Direct-Attached Storage. For example, a popular option is to have Tier 1 storage at the
production site, but lower-end storage such as less expensive arrays at the failover site.
All this leads to overall lower costs per replication.
vSphere Replication is also inherently less complex than storage-based replication.
Replication is managed directly from vCenter, eliminating dependencies on storage
teams. As it is managed at the level of individual virtual machines, the setup of SRM is
less complicated.
SRM can use vSphere Replication to replicate data to servers at the recovery site.
Despite its simplicity and cost-efficiency, vSphere Replication is still a robust, powerful
replication solution.


vSphere 5.5 introduces the ability to retain historical point-in-time states of replication.
This gives administrators the ability to revert to earlier states after failover, for example
to a last-known-good state from before a virus hit, before data corruption occurred, or to
a pre-patched state.
However, keep in mind the points-in-time are retained as vmdk snapshots at the
recovery site. If there are a lot of very large snapshots on a virtual machine, it could take
a long time to commit or revert after failover.
Support for Storage-DRS interoperability is introduced in vSphere 5.5, which allows
replicated virtual machines to be moved across datastores with Storage vMotion without
impacting any ongoing replication.


Replication Appliance
vSphere Replication is distributed as a 64-bit virtual appliance packaged in the .ova
format.
A previous iteration of vSphere Replication was included with SRM 5.0.
vSphere Replication Management Server and vSphere Replication Server are now
included in the single VR appliance.
This allows a single appliance to act in both a VR management capacity and as the
recipient of changed blocks.
This makes scaling sites an easy task.
Version 5.5 also provides an "add-on" VR appliance. These additional appliances allow
you to configure replication to a maximum of 10 other target locations.


vSphere 5.5 Replication Server Appliances


With vSphere 5.5, topologies with vSphere Replication can now be broadened to
encompass replication between and within data centers, and can include many different
models of deployment dependent on where the vSphere Replication Server appliances
are deployed.
Each vCenter Server needs to have a single 'master' vSphere Replication Appliance
deployed and paired with it, but up to nine further vSphere Replication Servers can be
deployed to locations managed by that vCenter Server to act as the target for changed
blocks.
Each VR Appliance can manage at most 500 replications, irrespective of topologies or
number of VR Appliances present.
Here, a single VR appliance is deployed at the main data center (on the left). One
remote site has a vCenter Server managing its own data center as well as the hosts and
VMs at a tertiary data center (the two sites on the right). The third data center (top right)
does not have a vCenter Server, but has a VR Server that is a target for replication from
the data center in the bottom right.
As an example with this model, the servers at the main data center are replicating to the
second data center as a completely independent target managed by another VC. The
servers at the secondary data center are replicating to the third data center which is
managed by the second data center. The servers at the third data center are replicating
back to the primary data center.


The VR Agents on the central data center track changed blocks and distribute them via
the vSphere host's management network to the VR Server defined as the destination for
each individual VM.
Consider the following however. A virtual machine can only be replicated to a single
destination. A virtual machine cannot be replicated to multiple remote locations at one
time. Any target destination must have a vSphere Replication Appliance to act as a VR
management component as well as a target, or a vSphere Replication Server to act
strictly as a target for replication.
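The topology constraints above (one destination per VM, at most 500 replications per VR appliance) can be checked mechanically. The sketch below is illustrative; its data model is a simple list of (VM, target) pairs, not a real vCenter inventory.

```python
# Sketch validating the replication topology rules described above.

MAX_REPLICATIONS_PER_APPLIANCE = 500

def validate_topology(replications):
    # replications: list of (vm_name, target_appliance) pairs
    errors = []
    seen_vms = {}
    per_appliance = {}
    for vm, target in replications:
        if vm in seen_vms and seen_vms[vm] != target:
            errors.append(f"{vm}: replicated to more than one destination")
        seen_vms[vm] = target
        per_appliance[target] = per_appliance.get(target, 0) + 1
    for appliance, count in per_appliance.items():
        if count > MAX_REPLICATIONS_PER_APPLIANCE:
            errors.append(f"{appliance}: {count} replications exceeds 500")
    return errors

print(validate_topology([("web01", "vr-siteB"), ("db01", "vr-siteB")]))  # -> []
print(validate_topology([("web01", "vr-siteB"), ("web01", "vr-siteC")]))
```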


Replication Design Requirements and Limitations


The vSphere Replication virtual appliance has a dual-core CPU, a 10GB and a 2GB
hard disk, and 4GB of RAM. It is distributed as a 64-bit virtual appliance packaged in the
.ova format, to be deployed in a vCenter Server environment using the OVF deployment
wizard on an ESXi host.
vSphere Replication does not have a separate license.
You can use vSphere Replication if your edition of vSphere includes the vSphere
Replication license.
If you have the correct vSphere license, there is no limit on the number of virtual
machines that you can replicate by using vSphere Replication.
vSphere Replication uses default network ports for communication between hosts. Port
80 is used for management traffic, 902 is used for replication traffic to the destination
ESXi hosts, and 5480 is used by the administrator's web browser.
A further list of ports can be found in the vSphere Replication 5.5 Documentation at
http://www.vmware.com
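The default ports listed above can be kept as a small lookup that firewall checks could script against. The port numbers come from the text; the helper function itself is illustrative.

```python
# Default vSphere Replication ports from the text, as a lookup table.

VR_PORTS = {
    80:   "management traffic",
    902:  "replication traffic to destination ESXi hosts",
    5480: "appliance administration web browser",
}

def required_ports(open_ports):
    """Return the VR ports missing from a firewall's set of open ports."""
    return sorted(p for p in VR_PORTS if p not in open_ports)

print(required_ports({80, 443}))   # -> [902, 5480]
```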
To ensure successful virtual machine replication, you must verify that your virtual
infrastructure respects certain limits before you start the replication.
Each vCenter Server needs to have a single master vSphere Replication Appliance
deployed and paired with it.
Each vSphere Replication management server can manage a maximum of 500
replicated virtual machines.
vSphere Replication is compatible with certain other vSphere management features.

You can safely use vSphere Replication in combination with certain vSphere features,
such as vSphere vMotion. Some other vSphere features, for example vSphere
Distributed Power Management, require special configuration for use with vSphere
Replication.
You must check that vSphere Replication is compatible with the versions of ESXi,
vCenter Server, and Site Recovery Manager on the site where it will be used.

Check the compatibility matrixes and the VMware compatibility guide at:
- http://partnerweb.vmware.com/comp_guide2/sim/interop_matrix.php
- http://partnerweb.vmware.com/comp_guide2/search.php?testConfig=16&deviceCategory=software


Replication Design Requirements and Limitations


This slide shows an exploded view of the VMware Product Interoperability Matrixes
available from VMware.com.


What is vSphere Data Protection (VDP)?


VMware vSphere Data Protection (VDP) is a backup and recovery solution for VMware,
from VMware. VDP is designed primarily for small and medium sized environments.
VDP is based on EMC Avamar providing an enterprise-class backup and recovery
solution at an SMB price point. VDP makes backing up and restoring VMware virtual
machines simple and easy.
VDP is available in two versions:
- VDP Advanced, which is sold separately and protects approximately 200 VMs per VDP Advanced virtual appliance. VDP Advanced is licensed per CPU and is available either as a stand-alone license or included with the vSphere with Operations Management (vSOM) Enterprise and Enterprise Plus Acceleration Kits.
- VDP, which is included with vSphere 5.1 Essentials Plus and higher. VDP provides basic backup and recovery for approximately 50 VMs per VDP virtual appliance.


What is vSphere Data Protection (VDP)?


VDP and VDP Advanced are deployed as Linux-based virtual appliances. These
solutions are fully integrated with the vSphere Web Client, which makes configuration
and management of the solution easy and intuitive for the vSphere administrator.
VDP and VDP Advanced utilize the vSphere APIs for Data Protection (VADP) including
Changed Block Tracking (CBT). Once the initial backup of a virtual machine has been
completed, only the changed blocks are backed up during subsequent backups.
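The effect of CBT described above can be sketched with a small model (illustrative only, not the actual VADP interface): the first pass copies every block, and later passes copy only blocks whose contents changed.

```python
# Illustrative model of Changed Block Tracking (CBT) -- not the real
# vSphere API. A "disk" is a list of equal-sized blocks; each backup
# pass copies only blocks that differ from the previous pass.

def incremental_backup(disk, last_seen):
    """Return (block_indexes_to_copy, new_state) for one backup pass."""
    changed = [i for i, block in enumerate(disk)
               if last_seen is None or block != last_seen[i]]
    return changed, list(disk)

disk = ["a", "b", "c", "d"]          # initial virtual disk contents
full, state = incremental_backup(disk, None)
assert full == [0, 1, 2, 3]          # first backup copies every block

disk[2] = "c2"                        # guest writes to one block
incr, state = incremental_backup(disk, state)
assert incr == [2]                    # CBT-style pass copies only block 2
```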
VDP and VDP Advanced are built upon the mature and proven EMC Avamar solution.
VDP and VDP Advanced leverage Avamar's robust backup engine and variable length
de-duplication algorithm.
By leveraging the vSphere APIs for Data Protection, VDP and VDP Advanced perform
backup and recovery of virtual machines without the need for a backup agent. Backup
data is stored on disk, unlike a legacy tape solution, which enables fast and reliable
backups and restores. Backups are performed regardless of the protected virtual
machine's power state. Virtual machine snapshots and vSphere SCSI-Hot-Add are
utilized during the backup and recovery processes.
VDP Advanced features agents for Microsoft SQL Server and Microsoft Exchange.
These agents enable granular, application-consistent backup and recovery for these
application databases. The agents also provide additional functionality such as log
management, multiple backup streams, and client-side de-duplication.
The majority of this training will be focused on VDP Advanced.


What is vSphere Data Protection (VDP) Advanced?


One of the greatest benefits of VDP Advanced is the pure simplicity of the solution.
Since it is fully-integrated with the vSphere Web Client, it is very easy and intuitive for a
vSphere administrator to create backup jobs, perform restores, and manage the
solution. A vSphere administrator does not need to spend a large amount of time
learning a new user interface.
The first time a virtual machine is backed up, all of the blocks that make up the virtual
machine are backed up. Subsequent backups of the same virtual machine leverage
Changed Block Tracking (CBT) to determine which blocks have changed since the last
backup and only those changed blocks are backed up. This reduces the amount of time
it takes to back up virtual machines.
Avamar's variable-length de-duplication in conjunction with CBT dramatically reduces
the amount of storage capacity required for backup data. Variable-length segment
de-duplication is more efficient than fixed-length segment de-duplication. The majority
of other backup solutions in the market use fixed-length de-duplication.
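The advantage of variable-length over fixed-length segmentation can be illustrated with a toy sketch (this shows the general content-defined-chunking idea, not Avamar's actual algorithm): inserting one byte shifts every fixed-length boundary, while content-defined boundaries resynchronize after the change.

```python
# Toy comparison of fixed-length vs variable-length (content-defined)
# segmentation -- a sketch of the idea behind variable-length
# de-duplication, not the real Avamar algorithm.

def fixed_chunks(data, size=4):
    return [data[i:i + size] for i in range(0, len(data), size)]

def variable_chunks(data, divisor=7):
    # Cut after any byte whose value is a multiple of `divisor`
    # (a stand-in for a real rolling-hash boundary function).
    chunks, start = [], 0
    for i, b in enumerate(data):
        if b % divisor == 0:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

old = bytes(range(1, 25))
new = bytes([99]) + old               # one byte inserted at the front

# Fixed-length chunking: the insert shifts every boundary, so almost
# nothing de-duplicates against the earlier backup.
fixed_reuse = set(fixed_chunks(old)) & set(fixed_chunks(new))

# Content-defined chunking: boundaries depend on content, so chunks
# after the insertion point still match and are reused.
var_reuse = set(variable_chunks(old)) & set(variable_chunks(new))
assert len(var_reuse) > len(fixed_reuse)
```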
The ability to back up and restore entire virtual machines without the need for a backup
agent reduces complexity. It is also possible to restore individual files and folders in
Windows and Linux virtual machines using just a web browser.
While most virtual machine workloads can be backed up and restored without the need
for an agent, certain tier-1 workloads benefit from agent-based backup and recovery.
Because the ideal approach to protecting a virtual machine environment is a mixture of
agent-less virtual machine backups and agent-based application-level backups, VDP
Advanced includes agents for tier-1 applications such as Microsoft SQL Server and
Microsoft Exchange.


VDP Advanced Key Components


Deploying and managing a VDP Advanced environment requires only a few key
components.
A VDP Advanced virtual appliance is deployed from an Open Virtualization Archive
(.ova) file. There is no need to manually create a new virtual machine, install a guest
operating system, patch the guest operating system (OS), and install the backup and
recovery software. A VDP and VDP Advanced virtual appliance is preconfigured with a
Linux guest OS and the backup solution installed and ready for configuration. This
dramatically reduces the complexity and amount of time required for deployment.
While VDP Advanced includes agents for SQL Server and Exchange, it can also
leverage the VSS components in VMware Tools to quiesce a Windows guest OS and
applications that are VSS-aware. However, note that the VSS components in VMware
Tools do not perform log management, hence the need for agents for applications such
as SQL Server and Exchange.
A VDP Advanced virtual appliance is deployed with four virtual CPUs and four gigabytes
of memory. The backup data storage capacity deployed is two terabytes. Note that this
capacity is de-duplicated backup capacity. It is also possible to dynamically expand the
backup data capacity of a VDP Advanced appliance up to eight terabytes.
VDP Advanced requires vCenter Server and Single Sign On (SSO). The vSphere Web
Client is required to manage VDP Advanced. The traditional vSphere Client cannot be
used to manage VDP Advanced.


VDP Advanced Implementation


VDP Advanced 5.1 requires vCenter Server 5.1 or higher and VMware Single Sign On
(SSO). vCenter Server can be the Windows-based version or the Linux-based vCenter
Server Virtual Appliance. vSphere Essentials Plus 4.1 or higher is required.
A VDP Advanced virtual appliance is deployed with two terabytes of de-duplicated
backup data capacity. Additional storage capacity is also needed for the VDP Advanced
guest OS and application. The total amount of storage needed to deploy a VDP
Advanced appliance is approximately 3.1 terabytes. See the VDP Administration Guide
for more details and the complete list of prerequisites.
When designing a VDP Advanced environment, keep in mind a maximum of 10 VDP
Advanced appliances per vCenter Server is supported. However, it is recommended to
limit the number of appliances to one per vSphere host. Be sure to deploy the VDP
Advanced appliance(s) to storage with high performance.
A VDP Advanced appliance can back up eight virtual machines concurrently. This
activity creates considerable load on the storage. Be sure to place VDP Advanced on
storage that can handle this load.
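As a rough planning aid, the eight-stream concurrency limit can be turned into a backup-window estimate. The per-VM backup time below is an assumed input, not a VMware figure:

```python
# Back-of-the-envelope backup window estimate for one VDP Advanced
# appliance. The 8-stream concurrency limit is from the guide; the
# per-VM backup time is an assumption that varies per environment.

import math

def backup_window_hours(num_vms, minutes_per_vm, streams=8):
    """Time to back up num_vms when `streams` VMs run concurrently."""
    waves = math.ceil(num_vms / streams)
    return waves * minutes_per_vm / 60.0

# e.g. 96 VMs at an assumed 20 minutes each, 8 at a time:
assert backup_window_hours(96, 20) == 4.0   # 12 waves * 20 min = 4 h
```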


VDP Advanced Implementation


When deploying VDP Advanced, be sure to create a DNS host record for the virtual
appliance prior to deployment. Fully Qualified Domain Names (FQDN) should always be
used when configuring VMware solutions. VDP Advanced is no exception.
When possible, deploy a VDP Advanced appliance in the same cluster as the virtual
machines it will protect. This enables VDP Advanced to leverage vSphere SCSI
HotAdd, which improves backup performance and removes backup traffic from the
network. In cases where SCSI HotAdd cannot be used, VDP Advanced will use the
Network Block Device (NBD) transport mode, which carries backup traffic across the
network.
Make sure time is synchronized across the environment. Variances in time between
VDP Advanced, vCenter Server, and vSphere hosts can cause issues with VDP
Advanced.
VDP Advanced is deployed by default with two terabytes of de-duplicated backup data
capacity. This capacity can be expanded in two terabyte increments up to a total of
eight terabytes. This capacity expansion is performed in-place. See the VDP
Administration guide for more details and requirements when expanding the capacity of
a VDP Advanced virtual appliance.
On average, an eight terabyte VDP Advanced appliance can back up approximately 200
virtual machines.
This number assumes average virtual machine sizes, average data change rate, a
30-60 day retention policy, and so on.
The maximum supported number of virtual machines that can be backed up is 400.
Every environment is different so results will vary.

VDP (not Advanced) can back up approximately 50 virtual machines assuming the
same averages mentioned previously. VDP (not Advanced) supports a maximum of 100
virtual machines per appliance.
It is possible to scale out the solution by deploying additional VDP or VDP Advanced
appliances - up to 10 appliances per vCenter Server.
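The scale-out numbers above can be combined into a rough sizing sketch (estimation only; as the guide notes, every environment is different):

```python
# Rough scale-out sizing using the planning numbers quoted above
# (~200 VMs per fully expanded VDP Advanced appliance, 10 appliances
# per vCenter Server). A sketch for estimation only.

import math

VMS_PER_APPLIANCE = 200       # typical planning figure per appliance
MAX_APPLIANCES = 10           # supported per vCenter Server

def appliances_needed(total_vms, vms_per_appliance=VMS_PER_APPLIANCE):
    n = math.ceil(total_vms / vms_per_appliance)
    if n > MAX_APPLIANCES:
        raise ValueError("exceeds supported appliances per vCenter Server")
    return n

assert appliances_needed(450) == 3
assert appliances_needed(2000) == 10
```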


Upsell to VDP Advanced


VDP (not Advanced) is included with vSphere 5.1 Essentials Plus and higher. VDP is a
basic backup and recovery solution designed for small environments.
Always lead with VDP Advanced. It is sold separately and provides the following
benefits above VDP:
It offers greater scale - up to eight terabytes of de-duplicated backup data capacity per
appliance and up to 10 appliances per vCenter Server.
While VDP Advanced is deployed with two terabytes of de-duplicated backup data
capacity, this capacity can be expanded in place. Existing backup data is preserved.
Many workloads can easily be backed up without the need for a backup agent.
However, some tier-1 applications benefit from the use of an agent to provide
application-consistent backups and restores, granular selection of individual databases,
and advanced options such as log management and multiple backup streams. VDP
Advanced includes agents for Microsoft SQL Server and Microsoft Exchange.
VDP Advanced licensing is simple. It is licensed per-CPU just like vSphere.
If a customer starts with VDP (not Advanced) and later decides they would like to take
advantage of the additional benefits VDP Advanced offers, it is possible to migrate
existing backup data from VDP to VDP Advanced.
VDP Advanced is virtual machine backup and recovery for VMware, from VMware.


Update Manager: An Overview


Update Manager is a simple patch management solution for the virtual infrastructure. It
is a vCenter Server plug-in for applying security updates and bug fixes to reduce risks
from vulnerabilities.
With Update Manager, you can apply updates and patches across all ESXi hosts.
You can install and update third-party software on hosts and use Update Manager to
upgrade virtual machine hardware, VMware Tools, and virtual appliances.
You can run centralized, automated patch and version management from within
VMware vCenter Server.
Security administrators can compare ESXi hosts, as an example, against baselines to
identify and remediate systems that are not in compliance.
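The baseline comparison can be modeled minimally: a baseline is a set of required patches, and a host is compliant when none are missing. The patch IDs below are invented for illustration:

```python
# Minimal model of the baseline comparison Update Manager performs:
# a baseline is a set of required patch IDs, and a host is compliant
# when it has them all installed. Patch IDs here are made up.

def compliance(installed, baseline):
    """Return the set of missing patches (empty set == compliant)."""
    return set(baseline) - set(installed)

baseline = {"ESXi550-201401001", "ESXi550-201404001"}
host_a = {"ESXi550-201401001", "ESXi550-201404001"}
host_b = {"ESXi550-201401001"}

assert compliance(host_a, baseline) == set()                    # compliant
assert compliance(host_b, baseline) == {"ESXi550-201404001"}    # remediate
```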


Update Manager: An Overview


Update Manager consists of a server part and a plug-in part. You can install the Update
Manager server and Update Manager Client plug-in on Windows machines only.


Update Manager Components


The major components of Update Manager are illustrated here.
Update Manager Server is installed directly on vCenter Server or on a separate system.
This must be installed on a 64-bit operating system.
Patch Database: You can use the same database server that is used by vCenter Server
for the patch database, but it requires a unique database.
If you don't specify one, the software installs SQL Server 2005 Express. Installing the
database on the same machine increases the minimum specifications.
For best performance, ensure you have two or more logical cores at a speed of 2 GHz
and 4 GB of RAM. A Gigabit network connection is recommended.
Update Manager plug-in runs on the system the vSphere Desktop Client is installed on.
This can be a 32-bit or 64-bit OS.
Guest Agents are installed into virtual machines from the Update Manager Server, used
for scanning and remediation operations.
Update Manager Download Service (UMDS): If your Update Manager server does not
have direct access to the Internet, you can create a download server.
This server downloads patches outside of the internal network. You can then load them
into Update Manager using portable media or a shared repository or URL.
With the UMDS in Update Manager 5.5, you can also configure multiple download URLs
and restrict downloads to the product version and type that are relevant to your
environment.


UMDS 5.5 can be installed only on 64-bit Windows systems.


The disk storage requirements for Update Manager vary depending on your
deployment.
Make sure that you have at least 20 GB of free space for patch data. Depending on the
size of your deployment, Update Manager requires a minimum amount of free space
per month for database usage.
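A simple way to plan for this is to add the 20 GB patch store to an assumed monthly database growth rate (the growth figure below is a placeholder, not a VMware number):

```python
# Rough free-space estimate for an Update Manager server: the 20 GB
# figure for patch data is from the guide; the monthly database growth
# rate is an assumed input that varies by deployment size.

def updatemgr_disk_gb(months, monthly_db_growth_gb, patch_store_gb=20):
    return patch_store_gb + months * monthly_db_growth_gb

# e.g. planning 12 months at an assumed 0.5 GB/month of database growth:
assert updatemgr_disk_gb(12, 0.5) == 26.0
```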


Knowledge Check: Update Manager Components


Your customer has asked about the components in the Update Manager.
Which descriptions match the numbered items?


Knowledge Check: VDP Advanced Implementation


VDP Advanced has specific design considerations.
What is the correct answer?


Module Summary
Now that you have completed this module, you should be able to:

Explain the benefits of, and prerequisites for, vSphere Replication and vSphere
Update Manager


Course 4

VTSP V5.5 Course 4: VMware vSphere: vNetworks


Welcome to the VTSP V5.5 Course 4: VMware vSphere: vNetworks.
There are two modules in this course: Overview and Advanced Features.


Course Objectives
At the end of this course you should be able to:
Explain the role and function of the components of vSphere virtual networking
Describe the advanced networking features of vSphere 5.5 and carry out an activity to
determine the proper networking architecture for a customer scenario
This course does not include information on VMware NSX - Network Virtualization.
Information on NSX will be covered in future eLearning training.


Module 1: vSphere Networks Overview


This is module 1, vSphere Networks Overview.
These are the topics that will be covered in this module.


Module 1 Objectives
At the end of this module you will be able to:

Explain the data center networking architecture


Describe vSphere standard vSwitches, distributed vSwitches, and third-party
switches
Describe the new vSphere 5.5 features for distributed vSwitches
Identify when customers should migrate virtual machines to a distributed switch


Data Center Networking Architecture


Historically, network administrators have owned the core, distribution, and access layers
of a physical network in a physical computing environment. In a physical world, each
server has a dedicated network cable plugged into a physical port on a switch.
The access layer provides a good place to monitor network traffic and interpose on
network traffic if the need arises.
For organizations that manage physical infrastructure with separate administrators for
servers and networks, the switch port is the line of demarcation between servers and
network.
When you introduce a new server, you must connect the server to the appropriate edge
switch. You first have to find out whether there is a physical port available on the switch.
The cable must be the right type, for example RJ45 or SFP+. The cable must be the
appropriate length and handled and managed correctly, as found in a cable
management system. You must ensure that the port configuration is correct, depending
on the VLAN, LAG, trunk mode, and so on. You can then connect your physical server
and access network services.
You will most certainly have more than one physical connection, so you must repeat the
above steps for each connection that you wish to use. Once this is complete, you will
generally have to test each of the connections thoroughly, for example to ensure that
NIC teaming works correctly. Whenever you have to physically
move or replace a server, you will have to repeat most or all of these steps. This is the
pattern that you will have to follow for every server; it is time consuming and logistically
complicated.
If you want to monitor or analyze the network traffic for a specific physical server, the
best place to do this is at the access layer.
If you want to segregate or control traffic that flows between servers, you will need to
reconfigure the connections that they make at the access layer. This involves
reconfiguring port settings on the physical switches and running appropriate tests to
ensure that you have connectivity.
Making changes in the physical environment is time-consuming with great potential for
error, and a lot of care must be taken to ensure that the correct changes are being
made. Unplugging the incorrect cable can have disastrous consequences.
Misconfiguration can bring down an entire stack of switches. Preventing human error is
a very important consideration when working in the physical network environment.
In a virtual world, the initial hardware setup is the same as a physical environment.
However, as we are dealing with a virtual environment we have to map out and design
all of the networking uplink requirements for the host and virtual machines prior to
installing the physical server. For example, we must consider how many management
ports we need in order to manage the physical ESXi host. Resilience dictates that all
services have at least two independent uplinks (Physical NIC connections to a switch in
the infrastructure). The uplinks must be able to handle management traffic, IP storage,
vMotion, Fault Tolerance and all of the production networks that the virtual machines
will need.
The primary purpose of the virtual network architecture is to provide a way to map the
services and virtual machine network ports to the physical network interfaces. VMware
virtual switches provide this capability. With vSwitches you can configure the virtual
ports used for vSphere services such as management and vMotion, and the port groups
used for connecting virtual machine network interfaces (vNICs).
The vSwitch allows us to select which uplinks or pNICs to use for each virtual switch
port or port group. In this way we can easily control the network connectivity parameters
for a specific service or virtual machine by changing the configuration of a virtual switch,
rather than having to change anything in the physical environment, provided that the
physical connectivity for the ESXi host is in place and is configured correctly external to
the host.
For organizations that manage physical infrastructure with separate administrators for
servers and networks, the line of demarcation effectively moves to the distribution layer
of the network and eliminates much of the physical work required for connecting and
reconfiguring new servers and services.
For example in the configuration shown we have four virtual machines. Virtual machines
1 and 2 do not have any uplinks. This is called an internal-only or isolated port group.
Virtual machines 3 and 4 have resilient connections to the physical network through
pNIC2 and pNIC3 physical links to the edge or access layer switches. The management
port for this ESXi host has resilient connectivity via pNIC0 and pNIC1. This type of
virtual switch is used to provide networking for this and only this ESXi host. It is called a
standard vSwitch.
If we have more than one host, we then encounter logistical problems similar to those in
a physical environment, such as managing the configuration on multiple systems. If we
are using technologies such as vMotion, Fault Tolerance, HA, or DRS, or even if we just
want to move machines between hosts, we must ensure that virtual machine port group
names are consistent and that they physically map to the same networks. This can
become increasingly difficult with each host that we add.
The impact of misconfiguration can be minimized using host profiles to deploy ESXi
Hosts which ensures that all virtual switch configurations are consistent.
Distributed vSwitches will provide the same unified configuration but with additional
functionality.
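The consistency problem described above can be sketched as a simple check; host and port group names are invented for illustration:

```python
# Sketch of the consistency requirement described above: for features
# like vMotion to work, every host must expose the same virtual machine
# port group names. Host and port group names are illustrative.

def inconsistent_hosts(host_portgroups):
    """Map host -> port groups it is missing relative to the union."""
    required = set().union(*host_portgroups.values())
    return {host: required - set(groups)
            for host, groups in host_portgroups.items()
            if required - set(groups)}

hosts = {
    "esxi-01": {"Production", "vMotion", "Management"},
    "esxi-02": {"Production", "vMotion", "Management"},
    "esxi-03": {"Production", "Management"},      # vMotion missing
}
assert inconsistent_hosts(hosts) == {"esxi-03": {"vMotion"}}
```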


vSphere Networking Overview


vSphere provides two types of virtual networking architecture: the standard virtual
switch architecture and the distributed virtual switch architecture.
Standard virtual switches manage virtual machines and networking at the host level.
This networking architecture is supported on all versions of vSphere.
A distributed virtual switch manages virtual machines and networking at the data center
level. Distributed virtual switches are not available in all versions of vSphere. They are
only available in the Enterprise Plus Edition of vSphere 5.1 or later. VMware
recommends that all networks be set up or migrated using the distributed virtual switch
architecture, since it simplifies the data center by centralizing network configuration in
addition to providing a more robust feature set.
Although the distributed network architecture is recommended for setting up virtual
networks in vSphere 5.5, it is important to understand how the components of the
standard virtual switch work so you can successfully either migrate components from
this architecture to the distributed network architecture as required or support
environments that only have standard virtual switches implemented.
The next series of screens will explain each type of networking architecture in detail.


Standard Switch Architecture


The components of the standard virtual switch architecture are configured at the host
level. The standard virtual environment provides similar networking elements as those
found on actual physical switches.
Like a physical machine, each virtual machine has one or more virtual network adapters
or virtual network interface cards or vNICs. The operating system and applications
communicate with the vNIC through a standard device driver or a VMware optimized
device driver just as though the vNIC is a physical NIC.
The vNIC has its own MAC address, can be configured with multiple IP addresses and
responds to the standard Ethernet protocol exactly like a physical NIC would.
Nonetheless, an outside agent can determine that it is communicating with a virtual
machine if it checks the six byte vendor identifier in the MAC address.
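A minimal sketch of this vendor-identifier check, using two OUIs registered to VMware (00:50:56 and 00:0C:29):

```python
# Sketch of the vendor-identifier check described above: the first
# three bytes (the OUI) of a MAC address identify the manufacturer.
# 00:50:56 and 00:0C:29 are OUIs registered to VMware.

VMWARE_OUIS = {"00:50:56", "00:0c:29"}

def looks_like_vmware_vnic(mac):
    return mac.lower()[:8] in VMWARE_OUIS

assert looks_like_vmware_vnic("00:50:56:9A:BB:CC") is True
assert looks_like_vmware_vnic("3C:22:FB:12:34:56") is False
```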
A standard virtual switch, or vSwitch, operates just like a layer-2 physical switch. It
maintains a port forwarding table and performs three important functions.
These include looking up each frame's destination MAC when it arrives, forwarding a
frame to one or more ports for transmission, and avoiding unnecessary deliveries.
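These three functions can be sketched as a toy forwarding decision; port numbers and MAC addresses are illustrative:

```python
# Toy layer-2 forwarding decision, mirroring the three functions listed
# above: look up the destination MAC, pick the output port(s), and
# avoid unnecessary deliveries by flooding only unknown or broadcast
# frames. Port numbers and MAC addresses are invented for illustration.

def forward(table, ports, in_port, dst_mac):
    """Return the set of ports a frame should be sent out of."""
    if dst_mac == "ff:ff:ff:ff:ff:ff" or dst_mac not in table:
        return ports - {in_port}      # flood, but never back out the in-port
    return {table[dst_mac]}           # known unicast: single port

ports = {1, 2, 3}
table = {"00:50:56:aa:00:01": 1, "00:50:56:aa:00:02": 2}

assert forward(table, ports, 3, "00:50:56:aa:00:01") == {1}
assert forward(table, ports, 1, "ff:ff:ff:ff:ff:ff") == {2, 3}
assert forward(table, ports, 2, "00:50:56:aa:00:99") == {1, 3}  # unknown: flood
```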
Each host server can have multiple standard virtual switches. You can create up to 127
virtual switches on each ESXi host. Each standard virtual switch has two sides to it. On
one side of the virtual switch you have port groups.

Port groups connect virtual machines to the standard virtual switch. On the other side of
the standard virtual switch you have what are known as uplink ports. Uplink ports
connect the standard virtual switch to physical Ethernet adapters which reside on the
host. In turn, these physical Ethernet adapters connect to physical switches leading to
the outside world.


Standard Switch Architecture : Port Groups


A port group is a unique concept in the virtual environment. A port group is a
mechanism for setting policies that govern the network connected to it. Instead of
connecting to a particular port on standard virtual switch, a virtual machine connects its
vNIC to a port group. All virtual machines that connect to the same port group belong to
the same network inside the virtual environment.
Port groups can be configured to enforce a number of policies that provide enhanced
network security, network segmentation, better performance, higher availability, and
traffic management.
Just as port groups can be created to handle virtual machine traffic, a VMkernel
connection type, or VMkernel port, can be created to provide network connectivity for
the host and to handle VMware vMotion, IP storage, and Fault Tolerance traffic.
Moving a virtual machine from one host to another is called migration. Using vMotion
you can migrate powered-on virtual machines with no downtime. Please note that your
VMkernel networking stack must be set up properly to accommodate vMotion.
IP storage refers to any form of storage that uses TCP/IP networking. Because these
storage types are network-based, they can use the same VMkernel interface and port
group.


vSwitch
A standard virtual switch can connect its uplink ports to more than one physical Ethernet
adapter to enable NIC teaming. With NIC teaming, two or more physical adapters can
be used for load balancing or to provide failover capabilities in the event of a physical
adapter hardware failure or a network outage.
The virtual ports on a standard virtual switch provide logical connection points among
and between virtual and physical devices. You can think of the virtual ports as virtual
RJ-45 ports. Each virtual switch can have up to 4,088 virtual ports, with a limit of 4,096
ports on all virtual switches on a host. This system-wide limit includes eight reserved
ports per standard virtual switch.
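One reading of these limits can be checked with a small validation sketch (an interpretation of the figures above, not an official formula):

```python
# Validation of the standard-switch port limits quoted above: at most
# 4,088 usable ports per vSwitch and 4,096 ports across all vSwitches
# on a host, the difference being 8 reserved ports per switch.

HOST_PORT_LIMIT = 4096
SWITCH_PORT_LIMIT = 4088
RESERVED_PER_SWITCH = 8

def host_ports_ok(switch_port_counts):
    """switch_port_counts: usable ports requested per vSwitch."""
    if any(n > SWITCH_PORT_LIMIT for n in switch_port_counts):
        return False
    total = sum(n + RESERVED_PER_SWITCH for n in switch_port_counts)
    return total <= HOST_PORT_LIMIT

assert host_ports_ok([4088]) is True          # 4088 + 8 reserved = 4096
assert host_ports_ok([2048, 2048]) is False   # 4112 once reserved ports count
```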
Virtual Ethernet adapters (vNICs) connect to virtual ports when you power on the virtual
machine on which the adapters are configured, when you take an explicit action to
connect the device, or when you migrate a virtual machine using vSphere vMotion.
A vNIC updates the virtual switch port with the MAC filtering information when it is
initialized and whenever it changes. A virtual port may ignore any requests from the
virtual Ethernet adapter that would violate the Layer 2 security policy in effect for the
port. For example, if MAC spoofing is blocked, the port drops any packets that violate
this rule.
When designing your environment, you should consider how many networks are
needed. The number of networks or VLANs required depends on the types of traffic
required for VMware vSphere operation and in support of the organization's services
and applications.


Types of Virtual Switch Connections


A virtual switch allows the following connection types:

One or more virtual machine port groups
VMkernel ports, for IP storage, vMotion migration, VMware vSphere Fault
Tolerance, and the ESXi management network

When designing your environment, you should consider how many networks are
needed. The number of networks or VLANs required depends on the types of traffic
required for VMware vSphere operation and in support of the organization's services
and applications.


Virtual Switch Connection Examples


More than one network can coexist on the same virtual switch, or networks can exist on
separate virtual switches.
Components that should be on separate networks include virtual machines, vSphere
Fault Tolerance, IP storage (iSCSI/NFS), vSphere High Availability, VMware vSphere
vMotion, and management.
The two main reasons to separate different types of network traffic are to reduce
contention and latency and improve performance. High latency can negatively affect
performance. This is especially important when using IP storage, or FT.
You can enhance security by limiting network access. vMotion and IP storage traffic are
not encrypted, so a separate network helps protect what could be sensitive data.
To avoid contention between the different types of network traffic, configure enough
physical NIC Ports to satisfy bandwidth needs or use Network I/O control which we will
cover later.
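A rough contention check along these lines sums the expected bandwidth per traffic type against the aggregate physical NIC capacity. All traffic figures below are assumptions:

```python
# Sketch of the contention check implied above: sum the expected
# bandwidth of each traffic type and compare against the aggregate
# capacity of the physical NICs. All traffic figures are assumptions.

def enough_uplink(traffic_gbps, nics, nic_speed_gbps=1.0):
    return sum(traffic_gbps.values()) <= nics * nic_speed_gbps

traffic = {"management": 0.1, "vMotion": 1.0,
           "ip_storage": 1.5, "vm_traffic": 1.0}

assert enough_uplink(traffic, nics=4) is True    # 3.6 Gbps over 4x1 GbE
assert enough_uplink(traffic, nics=2) is False   # contention likely
```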


Distributed Switch
VMware vSphere 5.1 enhanced the networking capabilities of the distributed switch.
Some of these features, like Network Health Check, help detect misconfigurations
across physical and virtual switches.
Configuration Backup Restore allows vSphere admins to store the VDS configuration as
well as recover the network from the old configurations.
You can address the challenges that you face when a management network failure
causes the hosts to disconnect from vCenter Server using rollback and recovery.
Importantly this allows you to recover from lost connectivity or incorrect configurations.
vSphere 5.5 introduces some key networking enhancements and capabilities to further
simplify operations, improve performance and provide security in virtual networks. LACP
has been enhanced in vSphere 5.5.
Traffic filtering has been introduced as well as Differentiated Service Code Point
Marking support.
Distributed virtual switches require an Enterprise Plus license.
Distributed virtual switches are not manageable when vCenter Server is unavailable, so
vCenter Server becomes a tier-one application.


Distributed Switch Architecture


Each distributed switch includes distributed ports. A distributed port is a port that
connects to the VMkernel or to a virtual machine's network adapter.
vCenter Server stores the state of distributed ports in the vCenter Server database, so
networking statistics and policies migrate with virtual machines when moved across
hosts.
Migrating the state of a distributed port with vMotion is important when implementing
state-dependent features, such as inline intrusion detection systems, firewalls, and
third-party virtual switches.
Distributed port groups perform the same functions as port groups in standard virtual
switches. They provide a way to logically group distributed ports to simplify configuration
and they inherit all distributed switch properties.
A distributed port group does not constitute the means to segregate traffic within the
distributed switch unless you use private VLANs.
dvUplinks provide a level of abstraction for the physical NICs on each host. NIC
teaming, load balancing, and failover policies on the vDS and DV Port Groups are
applied to the dvUplinks and not the physical NICs on individual hosts.
Each physical NIC on each host is mapped to a dvUplink, permitting teaming and
failover consistency irrespective of physical NIC assignments.
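The dvUplink abstraction described above can be sketched as follows; host and NIC names are invented for illustration:

```python
# Sketch of the dvUplink abstraction described above: teaming policy
# refers to uplink names, and each host maps those names onto its own
# physical NICs. Host and NIC names are illustrative.

host_uplink_map = {
    "esxi-01": {"dvUplink1": "vmnic0", "dvUplink2": "vmnic1"},
    "esxi-02": {"dvUplink1": "vmnic2", "dvUplink2": "vmnic3"},
}
portgroup_policy = {"active": ["dvUplink1"], "standby": ["dvUplink2"]}

def active_pnic(host):
    """Resolve the port group's active uplink to this host's pNIC."""
    uplink = portgroup_policy["active"][0]
    return host_uplink_map[host][uplink]

# The same policy yields a consistent result on hosts whose physical
# NIC assignments differ:
assert active_pnic("esxi-01") == "vmnic0"
assert active_pnic("esxi-02") == "vmnic2"
```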


Distributed Switch Architecture (2)


Within a distributed virtual switch, the control and I/O planes are separate.


Distributed Switch Architecture (2)


The control plane resides in and is owned by vCenter Server. The control plane is
responsible for configuring distributed switches, distributed port groups, distributed
ports, uplinks, and NIC teaming.
The control plane also coordinates the migration of the ports and is responsible for the
switch configuration.
For example, in the case of a conflict in the assignment of a distributed port (say,
because a virtual machine and its template are powered on), the control plane is
responsible for deciding what to do.


Distributed Switch Architecture (2)


The I/O Plane is implemented as a hidden standard virtual switch inside the VMkernel of
each ESXi host.
The I/O plane manages the actual I/O hardware on the host and is responsible for
forwarding packets.
The diagram on the screen shows the components of the I/O plane of a distributed
virtual switch. On each host, an I/O plane agent runs as a VMkernel process and is
responsible for communicating between the control and the I/O planes.
I/O filters are attached to the I/O chains connecting the vNICs to the distributed ports
and the distributed ports to the uplinks.
vNetwork Appliance APIs make it possible to define custom filters and apply them to the
I/O chains. The APIs also provide the means to preserve filtering information for the
virtual machine connected to each port, even after a vMotion migration.
Inside the I/O plane, the forwarding engine decides how to forward packets to other
distributed ports.
The engine can forward the packets towards other virtual machines on the same
distributed switch or to an uplink, requiring it to make NIC teaming decisions.
Forwarding functions can also be customized using the vNetwork Appliance APIs.
Network applications can make use of these features by creating new filters, for
example, for host intrusion detection, traffic analysis, and so on.


Third-Party Distributed Switches


vNetwork Appliance APIs allow third-party developers to create distributed switch
solutions for use in a vSphere Data Center. Third-party solutions allow network
administrators to extend existing network operations and management into the vSphere
Data Center.
This diagram shows the basic way a third-party solution plugs in to the vNetwork
architecture.
The Custom Control Plane is implemented outside of vCenter Server; for example, it
may be implemented as a virtual appliance.
The vSphere Client includes a plug-in to provide a management interface.
vCenter Server includes an extension to handle the communication with the control
plane.
On the host, a custom I/O plane agent replaces the standard I/O plane agent and the
I/O plane itself may be replaced for customization of forwarding and filtering.
An example of a third-party switch that leverages the vNetwork APIs is the Cisco Nexus
1000V. Network administrators can use this solution in place of the distributed switch to
extend vCenter Server and to manage Cisco Nexus and Cisco Catalyst switches.
Now let's take a look at some of the advanced features of distributed switches.


Network Health Check


In the absence of a tool to verify whether the physical setup is capable of deploying
virtual machines correctly, debugging even simple issues can be frustrating for vSphere
administrators.
Network Health Check, available on ESXi 5.1 distributed switches and later, assures
proper physical and virtual operation by providing health monitoring for physical network
setups including VLAN, MTU or Teaming.
This feature gives you a window into the operation of your physical and virtual network.
vSphere admins can also provide failure data to the network admins to facilitate speedy
resolution.
This provides vSphere admins with proactive alerting for problems that have traditionally
been difficult to troubleshoot.
The Network Health Check feature detects these common configuration errors:

Mismatched VLAN trunks between virtual switch and physical switch


Mismatched MTU setting between vNIC, virtual switch, physical adapter, and
physical switch ports
Mismatched Teaming Configurations
The network health check in vSphere 5.5 monitors the following three network
parameters at regular intervals:



VLAN - Checks whether vSphere distributed switch VLAN settings match trunk port
configuration on the adjacent physical switch ports.

MTU - Checks whether the physical access switch port MTU (jumbo frame) setting,
per VLAN, matches the vSphere distributed switch MTU setting.
Network adapter teaming - Checks whether the physical access switch ports'
EtherChannel setting matches the distributed switch distributed port group IP
Hash teaming policy settings.
The default interval for performing the configuration check is one minute.
For the VLAN and MTU check, there must be at least two physical uplinks connected to
the VDS.
For the Teaming policy check, there must be at least two active uplinks in the teaming
and at least two hosts in the VDS.
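The three checks reduce to straightforward comparisons between the virtual and physical configuration. The sketch below is purely illustrative — the function names and data shapes are invented for this example, not VMware's implementation, which runs inside vSphere against live switch state:

```python
# Hypothetical sketch of the three Network Health Check comparisons.
# Names and data shapes are invented for illustration only.

def check_vlan(vds_vlans, physical_trunk_vlans):
    """Report VLANs configured on the VDS but missing from the
    physical switch port's trunk configuration."""
    return sorted(set(vds_vlans) - set(physical_trunk_vlans))

def check_mtu(vds_mtu, physical_port_mtu):
    """The physical port must carry frames at least as large as the
    VDS MTU (e.g. jumbo frames, MTU 9000)."""
    return physical_port_mtu >= vds_mtu

def check_teaming(ip_hash_enabled, etherchannel_enabled):
    """IP Hash teaming on the VDS must be paired with EtherChannel
    on the physical switch, and vice versa."""
    return ip_hash_enabled == etherchannel_enabled

# Example: VLAN 200 is trunked on the VDS but not on the physical port.
print(check_vlan([100, 200], [100]))   # [200]
print(check_mtu(9000, 1500))           # False: MTU mismatch
print(check_teaming(True, False))      # False: teaming mismatch
```

Any mismatch reported by these comparisons is what triggers the proactive alert described above.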


Network Health Check: Knowledge Check


The Port Group Configuration shown has been configured in your customer's
environment.
Which configuration shown on screen will be detected by Network Health Check and
will notify the customer of a misconfiguration?


Export and Restore


Export and Restore features of the enhanced distributed switch provide you with a way
to create backups for network settings at the virtual data switch level or at the port group
level and save the data anywhere. This feature lets you recreate network configuration
seamlessly, giving you a means of restoring full functionality in instances of network
settings failures or VMware vCenter database corruption.
Using the back up and restore feature of the enhanced distributed switch, you can
export the virtual distributed switch and port group configuration asynchronously to
disk.
You can use the backed-up information to restore the DVswitch configuration after a
vCenter database corruption, or to replicate a DVswitch configuration multiple times
(for example, when you need to scale out across multiple DVswitches, or when
replicating it in a lab environment or at a DR site).
When restoring the configuration, you can restore everything (including unique IDs) or
only configuration parameters (such as port group names, and so on). The former is
for restoring a corrupt DVswitch; the latter is for copying it.
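Conceptually, the export produces a serialized description of the switch and its port groups, and the restore can either keep or drop the unique IDs. A minimal sketch of that distinction — the data structure and field names here are invented for illustration, not the actual vSphere export format:

```python
import copy
import json

# Hypothetical sketch of a DVswitch config export/restore cycle.
# Models only the "restore everything" vs "configuration only" choice.

dvswitch = {
    "uuid": "50-2a-example",          # unique ID, kept only on full restore
    "name": "dvSwitch-Prod",
    "port_groups": [{"name": "PG-Web", "vlan": 100}],
}

backup = json.dumps(dvswitch)         # export: serialize config to disk

def restore(backup_str, preserve_ids=True):
    cfg = json.loads(backup_str)
    if not preserve_ids:
        # Copy-style restore: drop unique IDs so a new switch is created,
        # e.g. to replicate the config in a lab or at a DR site.
        cfg = copy.deepcopy(cfg)
        cfg.pop("uuid", None)
    return cfg

full = restore(backup, preserve_ids=True)    # rebuild a corrupt DVswitch
clone = restore(backup, preserve_ids=False)  # replicate the configuration
print("uuid" in full, "uuid" in clone)       # True False
```

The full restore keeps identity intact for recovery; the ID-free restore is what makes scale-out replication across multiple DVswitches possible.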


Automatic Rollback
The management network is configured on every host and is used to communicate with
VMware vCenter as well as to interact with other hosts during vSphere HA
configuration. This is critical when it comes to centrally managing hosts through vCenter
Server.
If the management network on the host goes down or there is a misconfiguration,
VMware vCenter can't connect to the host and thus can't centrally manage resources.
The automatic rollback and recovery feature of vSphere 5.5 addresses the concerns
that customers have regarding running the management network on a VDS.
This feature automatically detects configuration changes on the management network;
if the host can no longer reach vCenter Server, the changes are not permitted to take
effect and the host rolls back to the previous valid configuration.
There are two types of rollbacks.

Host networking rollbacks: These occur when an invalid change is made to the
host networking configuration. Every network change that disconnects a host
also triggers a rollback.
Distributed switch rollbacks: These occur when invalid updates are made to
distributed switch-related objects, such as distributed switches, distributed port
groups, or distributed ports.
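In essence, the rollback keeps the last known-good configuration and reverts to it whenever a change breaks connectivity to vCenter Server. A simplified model of that behavior — the class and callback names are invented for illustration, with vCenter reachability standing in for the real connectivity test:

```python
# Hypothetical sketch of automatic rollback on the host management network.

class HostNetworkConfig:
    def __init__(self, config, is_reachable):
        self.config = config              # current, valid configuration
        self.is_reachable = is_reachable  # callback: can we reach vCenter?

    def apply(self, new_config):
        previous = self.config
        self.config = new_config
        if not self.is_reachable(new_config):
            # Invalid change: revert to the last valid configuration.
            self.config = previous
            return False
        return True

# Only configs that keep the management VLAN intact stay reachable here.
host = HostNetworkConfig({"mgmt_vlan": 10},
                         is_reachable=lambda c: c.get("mgmt_vlan") == 10)
print(host.apply({"mgmt_vlan": 99}))   # False: change rejected, rolled back
print(host.config)                     # {'mgmt_vlan': 10}
```

A valid change (one that keeps vCenter reachable) is accepted and becomes the new baseline for future rollbacks.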



In vSphere 5.5, rollback is enabled by default.
However, you can enable or disable rollbacks at the vCenter Server level. If Automatic
Rollback is disabled, vSphere 5.1 and later allows you to connect directly to a host to fix
distributed switch properties or other networking misconfigurations using the Direct
Console User Interface (DCUI).
Recovery is not supported on stateless ESXi instances.
The Management Network must be configured on a distributed switch. This is the only
way you can fix distributed switch configuration errors using the DCUI.


Link Aggregation Control Protocol (LACP)


As part of the vSphere 5.1 release, VMware introduced support for some Link
Aggregation Control Protocol features on distributed switches.
LACP is a standards-based link aggregation method to control the bundling of several
physical network links to form a logical channel for increased bandwidth and
redundancy purposes.
LACP works by sending frames down all links that have the protocol enabled.
If it finds a device on the other end of the link that also has LACP enabled, it will send
frames along the same links enabling the two units to detect multiple links between
themselves and then combine them into a single logical link.
This dynamic protocol provides advantages over the static link aggregation method
supported by previous versions of vSphere.
LACP is plug and play: it automatically configures and negotiates between the host and
the access-layer physical switch, dynamically detecting link failures and cabling
mistakes and automatically reconfiguring the links.
Administrators need not worry about network link failures, as LACP uses the heartbeat
between the endpoints to detect link failures and cabling mistakes. LACP also
reconfigures the broken links automatically.
LACP does have some limitations on a vSphere Distributed Switch.



LACP support is not compatible with software iSCSI multipathing.
LACP support settings do not exist in host profiles.
The teaming and failover health check does not work for LAG (link aggregation group)
ports; LACP itself checks the connectivity of the LAG ports.
The enhanced LACP support can work correctly when only one LAG handles the traffic
per distributed port or port group.

Released in vSphere 5.5, several key enhancements are available on a vSphere
Distributed Switch.
With the introduction of comprehensive load balancing algorithm support, 22 new
hashing algorithm options are available.
For example, source and destination IP address and VLAN field can be used as the
input for the hashing algorithm.
When using LACP in vSphere 5.1 you are limited to using IP Hash load balancing and
Link Status Network failover detection.
A total of 64 link aggregation groups (LAGs) are available per host and per vSphere
Distributed Switch. LACP support in vSphere 5.1 provides only one LAG per distributed
switch and per host.
Because LACP configuration is applied per host, this can be very time-consuming for
large deployments.

In this release, new workflows to configure LACP across a large number of hosts are
made available through templates.
In this example, a vSphere host is deployed with four uplinks, and those uplinks are
connected to the two physical switches. By combining two uplinks on the physical and
virtual switch, LAGs are created.
The LACP configuration on the vSphere host is performed on the VDS and the port
groups.
First, the LAGs and the associated uplinks are configured on the VDS. Then, the port
groups are configured to use those LAGs.
In this example, the green port group is configured with LAG1; the yellow port group is
configured with LAG2.
All the traffic from virtual machines connected to the green port group follows the LAG1
path.


Distributed Switches Versus Standard Switches


With a standard virtual switch, a separate configuration in a separate management
panel is required to maintain each ESXi host's network configuration. So in the example
on the screen, in order for an administrator to view the network configuration of the data
center, the administrator would have to view the network configuration tab of each
separate ESXi host.
With a distributed virtual switch, the administrator only has to view one management
panel to view the network configuration for the entire data center.
Scaling maximums should be considered when migrating to a distributed virtual switch.


Distributed Switch Benefits


Network configuration at the data center level offers several advantages.
First, it simplifies data center setup and administration by centralizing network
configuration. For example, adding a new host to a cluster and making it vMotion
compatible is much easier.
Second, distributed ports migrate with their clients. So, when you migrate a virtual
machine with vMotion, the distributed port statistics and policies move with the virtual
machine, thus simplifying debugging and troubleshooting.
There are also new advanced features that are available only when using a distributed
switch, such as LACP, VXLAN, and Network Health Check.
Finally, enterprise networking vendors can provide proprietary networking interfaces to
monitor, control, and manage virtual networks.


Drawbacks of Standard vSwitches


The main drawback of a standard virtual switch is that every ESXi host must have its
own separate vSwitches configured on it.
That means that virtual local area networks (VLANs), security policies, and teaming
policies have to be configured individually on each and every ESXi host.
If a policy needs to change, the vSphere administrator must change that policy on every
host.
While vCenter Server does allow the administrator to manage the ESXi hosts centrally,
the changes to standard virtual switches still have to be applied to each ESXi host
individually.
Another drawback is that you cannot create an isolated virtual network connecting two
virtual machines on different hosts without configuring network hardware.
Finally, when a virtual machine is migrated with VMware vMotion, the networking state
of the virtual machine gets reset. This makes network monitoring and troubleshooting a
more complex task in a virtual environment.


Migrating to Distributed Virtual Switches


Distributed virtual switches ease the management burden of every host virtual switch
configuration by treating the network as an aggregate resource.
In this configuration, individual host-level virtual switches are abstracted into a single
large vNetwork distributed virtual switch that spans multiple hosts at the data center
level.
Although VMware supports standard virtual switches, it is a recommended best
practice to use the distributed virtual switch architecture for all virtual networking
purposes, including the virtual machine connections and the VMkernel connections to
the physical network for VMkernel services such as NFS, iSCSI, or vMotion.
When you want to use any of the advanced features you must move from a standard
switch.
If you are increasing the size of your virtual data center by adding hosts, you should
also review the decision to use distributed switches.
If you have a requirement to create complicated sets of virtual machines connected to
isolated networks, distributed switches should be considered. A hybrid model of
standard switches for management and distributed switches for everything else was
recommended in the past.



This is no longer a requirement: you should consider migrating hybrid models to a fully
distributed model, because the distributed switch automatic rollback and recovery
feature eliminates the need for standard switches in this case.
If you wish to take advantage of ingress and egress traffic shaping you will need to
move to a distributed switch.
If a customer wishes to make use of load based NIC teaming then distributed switches
must be used.
You have three options for migration.
The first is fully manual and will involve downtime whilst you restructure the virtual
network configuration.
The second and third options involve the migrate virtual machine networking wizard.
The simplest approach is to use the fully automated option, which involves some
virtual machine downtime. With this option, all pNICs and virtual ports (VMkernel
ports such as vMotion and so on) can be migrated in a single step from vCenter
Server. However, this removes all uplinks from the affected standard vSwitch.
At this point you also migrate the virtual machines from the standard switch to the
distributed switch. The virtual machines' networks will be disconnected during the
migration in this case.
If you want to avoid any downtime, this can be achieved using migration by staging.
The first step in this case is to allocate some pNICs to the distributed switch to ensure
connectivity for the virtual machine networks. Then the migration wizard is used to
migrate just the virtual machines without any downtime.
And finally, the wizard is used to move all the remaining pNICs and VMkernel ports to
the distributed switch.


Specific Licensing Requirements


Certain network features require licenses.
As a minimum, Distributed Switches and Network I/O Control require an Enterprise Plus
License.
Network I/O Control (NIOC) is discussed in the next module.
Standard switches are included in all license versions.


Module Summary
Now that you have completed this module, you should be able to:

Explain the data center networking architecture


Describe vSphere standard vSwitches, distributed vSwitches and third party
switches
Describe the new vSphere 5.5 features for Distributed vSwitches
Identify when customers should migrate virtual machines to a distributed switch


Module 2: vSphere Networks: Advanced Features


Welcome to Module 2 of vSphere Networks: Advanced Features.
These are the topics that will be covered in this module.


Module 2 Objectives
At the end of this module you should be able to:

Describe Private VLANs


Explain the VXLAN enhanced distributed switch and its pre-requisites
Explain Network load balancing and failover policies
Explain the concept of Network I/O control, what benefits it brings and how a
customer can implement it
Describe VMware Security Tools/Products


Private VLANs: Overview


Private VLANs provide compatibility with existing networking environments that use
private VLAN technology.
Private VLANs enable users to restrict communication between virtual machines on the
same VLAN or network segment, significantly reducing the number of subnets required
for certain network configurations.
PVLANs are only available on distributed vSwitches.
PVLANs are a way of easily providing layer 2 network isolation between servers in the
same subnet or network, without having to worry about such things as MAC access
control lists.
However it is important to remember that use of PVLANs will require compatible
physical switches.
The next few screens will explain how private VLANs affect virtual machine
communication.


Private VLANs: Architecture


Private VLANs or PVLANs allow you to isolate traffic between virtual machines in the
same VLAN.
This allows PVLANs to provide additional security between virtual machines on the
same subnet without exhausting the VLAN number space.
PVLANs are useful on a DMZ where the server needs to be available to external
connections and possibly internal connections, but rarely needs to communicate with
the other servers on the DMZ.
A PVLAN can be configured in a way that allows the servers to communicate only with
the default gateway on the DMZ, denying communication between the servers.
If one of the servers is compromised by a hacker, or infected with a virus, the other
servers on the DMZ are safe.
The basic concept behind PVLANs is to divide an existing VLAN, now referred to as the
primary PVLAN, into one or more segments by associating VLAN IDs together.
These segments are called secondary PVLANs. A PVLAN is identified by its primary
PVLAN ID.
A primary PVLAN ID can have multiple secondary PVLAN IDs associated with it.
Primary PVLANs are promiscuous, so virtual machines on a promiscuous PVLAN are
reachable by and can reach any node in the same promiscuous PVLAN, as well as any
node in the primary PVLAN.
Ports on secondary PVLANs can be configured as either isolated or community.
Virtual machines on isolated ports communicate only with virtual machines on
promiscuous ports, whereas virtual machines on community ports communicate with
both promiscuous ports and other ports on the same secondary PVLAN.
PVLANs do not increase the total number of VLANs available; all PVLAN IDs are VLAN
IDs, but their use means that you do not have to dedicate a VLAN to each isolated
segment.
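The reachability rules above can be summarized as a small decision function. The sketch below is illustrative only — the port-type strings and tuple layout are assumptions for this example, not a vSphere API:

```python
# Hypothetical sketch of secondary PVLAN reachability rules.
# Port types: "promiscuous", "community", "isolated".

def can_communicate(a, b):
    """a and b are (port_type, secondary_pvlan_id) tuples within the
    same primary PVLAN."""
    type_a, pvlan_a = a
    type_b, pvlan_b = b
    # Promiscuous ports reach everything in the primary PVLAN.
    if "promiscuous" in (type_a, type_b):
        return True
    # Community ports reach each other on the same secondary PVLAN.
    if type_a == type_b == "community" and pvlan_a == pvlan_b:
        return True
    # Isolated ports reach only promiscuous ports.
    return False

promisc = ("promiscuous", 5)
iso_c, iso_d = ("isolated", 155), ("isolated", 155)
comm_a, comm_b = ("community", 17), ("community", 17)
print(can_communicate(iso_c, iso_d))    # False: isolated ports cannot talk
print(can_communicate(iso_c, promisc))  # True: promiscuous is reachable
print(can_communicate(comm_a, comm_b))  # True: same community
```

This mirrors the example on the next screens: isolated machines C and D cannot reach each other but can reach promiscuous machines E and F.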


Private VLANs: An Example


Virtual machines in a promiscuous private VLAN are reachable by and can reach any
node in the same promiscuous private VLAN, as well as any node in the primary
PVLAN.
In the example depicted on the screen, virtual machines E and F are in the promiscuous
private VLAN 5, so all virtual machines communicate with each other as well as with
any nodes in the primary private VLAN 5.


Virtual machines in an isolated private VLAN cannot communicate with other virtual
machines except those in the promiscuous private VLAN.
In this example, virtual machines C and D are in isolated private VLAN 155, so they
cannot communicate with each other. However, virtual machines C and D can
communicate with virtual machines E and F.


Virtual machines in a community private VLAN can communicate with each other and
with the virtual machines in the promiscuous private VLAN, but not with any other virtual
machine.
In this example, virtual machines A and B can communicate with each other and with E
and F because they are in the promiscuous private VLAN. However, they cannot
communicate with C or D because they are not in the community private VLAN.
Network packets originating from a community PVLAN are tagged with the secondary
PVLAN ID as they traverse the network.
There are a couple of things to note about how vNetwork implements private VLANs.
First, vNetwork does not encapsulate traffic inside private VLANs. In other words, there
is no secondary private VLAN encapsulated inside a primary private VLAN packet.
Also, traffic between virtual machines on the same private VLAN, but on different ESXi
hosts, moves through the physical switch.
Therefore, the physical switch must be private VLAN-aware and configured
appropriately so that traffic in the secondary private VLAN can reach its destination.


VLAN limitations
Traditional VLAN-based switching models suffer from challenges such as operationally
inefficient fault tolerance.
High-availability technologies such as VMware Fault Tolerance work best with flat
Layer 2 networks, but creating and managing this architecture can be operationally
difficult, especially at scale.

IP address maintenance and VLAN limits become challenges as the data center scales,
particularly when strong isolation is required or in service provider environments.
In large cloud deployments, applications within virtual networks may need to be logically
isolated.
For example, a three-tier application can have multiple virtual machines requiring
logically isolated networks between the virtual machines.
Traditional network isolation techniques such as VLAN (4096 LAN segments through a
12-bit VLAN identifier) may not provide enough segments for such deployments even
with the use of PVLANs.
In addition, VLAN-based networks are bound to the physical fabric and their mobility is
restricted.


Virtual Extensible Local Area Network (VXLAN)


A VXLAN is a Layer 2 overlay over a Layer 3 network. For example, it allows you to
connect devices across Layer 3 networks and make them appear like they share the
same Layer 2 domain.
The original Layer 2 Ethernet frame from a virtual machine is given a VXLAN ID and is
then encapsulated in a UDP packet, which adds only 50 bytes of overhead to each
packet in total.
A VXLAN segment is identified by a 24-bit VXLAN identifier which means that a single
network can support up to 16 million LAN segments.
This number is much higher than the 4,094 limit imposed by the IEEE 802.1Q VLAN
specification.
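The difference in scale comes directly from the identifier widths: a 12-bit VLAN ID versus a 24-bit VXLAN identifier. A quick check of the arithmetic, including the MTU headroom the 50-byte encapsulation requires:

```python
# VLAN vs VXLAN segment counts, derived from the identifier widths.
vlan_segments = 2 ** 12    # 4096 values (the usable count is the
                           # familiar 4094 after reserved IDs)
vxlan_segments = 2 ** 24   # 16,777,216 -- "up to 16 million" segments
print(vlan_segments, vxlan_segments)   # 4096 16777216

# Encapsulation overhead: the physical network must carry frames at
# least ~50 bytes larger than the VM vNIC MTU.
vm_mtu = 1500
required_physical_mtu = vm_mtu + 50
print(required_physical_mtu)           # 1550
```

The MTU calculation is the reason for the prerequisite, noted later in this section, that the physical infrastructure MTU be at least 50 bytes more than the virtual machine vNIC MTU.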
VXLAN fabric is elastic, enabling traffic to traverse clusters, virtual switches and layer 3
networks.
Cross cluster placement of virtual machines fully utilizes network resources without any
physical re-wiring.
Virtual machines do not see the VXLAN ID. VXLAN virtual wires thus provide
application-level isolation. VXLAN provides scalable Layer 2 networks across the data
center for efficient workload deployment.
The physical network infrastructure works 'as is', without the need to upgrade.



IP multicasting is used to support broadcast and unknown unicast traffic between the
VXLAN endpoints.
VXLAN technology has been submitted to the Internet Engineering Task Force for
standardization.
VXLAN technology enables the expansion of isolated vCloud Architectures across layer
2 domains, overcoming the limitations of the IEEE 802.1Q standard.
By utilizing a new MAC in UDP encapsulation technique, it allows a virtual machine to
communicate using an overlay network that spans across multiple physical networks. It
decouples the virtual machine from the underlying network thereby allowing the virtual
machine to move across the network without reconfiguring the network.
To operate a VXLAN you require a few components to be in place.
Ensure that you have vCenter Server 5.1 or later, ESXi 5.1 or later, and vSphere
Distributed Switch 5.1 or later.
Verify that DHCP is available on VXLAN transport VLANs.
The physical infrastructure MTU must be at least 50 bytes more than the MTU of the
virtual machine vNICs. For Link Aggregation Control Protocol (LACP), 5-tuple hash
distribution must be enabled.
Obtain a multicast address range and a segment ID pool from your network administrator.
You must also ensure that you have deployed the vShield Manager Appliance.
If you are creating a cluster, VMware recommends that you use a consistent switch type
and version across a given network scope.
Inconsistent switch types can lead to undefined behavior in your VXLAN virtual wire.


VXLAN Sample Scenario


In this scenario, your customer has several ESXi hosts on two clusters. The Engineering
and Finance departments are on their own port groups on Cluster1. The Marketing
department is on Cluster2. Both clusters are managed by a single vCenter Server 5.5.
The company is running out of compute space on Cluster1 while Cluster2 is underutilized.
The network supervisor asks you to figure out a way to extend the Engineering
department to Cluster2 so that virtual machines belonging to Engineering on both
clusters can communicate with each other.
This would enable the company to utilize the compute capacity of both clusters by
stretching the company's L2 layer.
If you were to do this the traditional way, you would need to connect the separate
VLANs in a special way so that the two clusters can be in the same L2 domain.
This might require the company to buy a new physical device to separate traffic, and
lead to issues such as VLAN sprawl, network loops, and administration and
management overhead.


You remember seeing a VXLAN virtual wire demo at VMworld 2012, and decide to
evaluate the vShield 5.5 release.
You conclude that building a VXLAN virtual wire across dvSwitch1 and dvSwitch2 will
allow you to stretch the company's L2 layer.
Note that VXLAN is not currently supported as the transport for vMotion or for creating
layer 2 stretched environments for SRM.


Load Balancing and Failover Policies


If a design requires more uplink bandwidth than can be provided by a single physical network
link, or if the resilience provided by NIC teams is necessary to avoid any single point of failure
for a particular network, then more than one physical uplink is required for the virtual switch or
port group.
Multiple physical uplinks are required to meet minimum acceptable resilience criteria for most
production networks in a vSphere environment:

Host management networks

IP Storage networks

Fault Tolerant logging networks

vMotion and Storage vMotion networks

All production virtual machine networks


When defining host NIC requirements, the design must have sufficient uplinks to provide the
required level of resilience for each network, that is, sufficient physical NICs in each host.
It must also have sufficient bandwidth to meet peak bandwidth needs and sufficient isolation
between networks that cannot, or should not, use shared uplinks.
The behavior of the uplink traffic from a virtual switch or port group is controlled through the
use of load balancing and failover policies which are typically configured at the virtual switch
level but can also be configured at the Port Group level if more specific control is required.



Correct use of load balancing and failover policies will allow higher levels of consolidation of
networking resources and help to minimize the number of physical NICs required per host, and
the number of physical switch ports needed in a design.
Load balancing and failover policies allow you to determine how network traffic is distributed
between adapters and how to re-route traffic in the event of a failure.
You can edit your load balancing and failover policies by configuring the load balancing policy,
failover detection, and network adapter order.
The choices you make for these policies can have a significant impact on your overall network
design as the choices you make will determine if you need dedicated NICs for a specific set of
virtual networks or if you can safely share physical uplinks between virtual networks.
These choices also impact the number of switch ports required, the switch configuration and the
topology of the inter-switch links in your upstream network infrastructure.
Load balancing and failover policies can be controlled at either the Standard virtual switch level
or port group level.
On a distributed switch, these are controlled at the distributed switch level and DV port group
level and can be set in the vSphere Client.


Load Balancing Policies


The settings for load balancing enable you to specify how a physical uplink should be
selected by the VMkernel.
The teaming policies define how the VMkernel decides which physical uplink to use for
VMkernel or virtual machine traffic.



Route-Based on the Originating Port ID
Routing traffic based on the originating port ID balances the load based on the virtual
port where the traffic enters the vSwitch or dvSwitch. Port ID-based load balancing
uses fixed assignments.
In some cases, multiple heavily loaded virtual machines are connected to the same
pNIC and the load across the pNICs is not balanced.
The image on the screen shows that pNIC1 is connected to two virtual machines with
heavier loads and is overloaded, whereas pNIC0 has only one virtual machine with a
low load.
This policy does not require any specific physical switch configuration. Consider
creating teamed ports in the same L2 Domain teamed over two physical switches.
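Because the assignment is fixed per virtual port, the distribution ignores actual traffic load. A toy model of the selection — the simple modulo mapping is an assumption for illustration, not VMware's actual algorithm:

```python
# Hypothetical sketch of originating-port-ID uplink selection.
# Each virtual port gets a fixed uplink, regardless of traffic load.

def select_uplink(port_id, num_uplinks):
    return port_id % num_uplinks

# Three VMs on virtual ports 0, 1, 3 with two uplinks (pNIC0, pNIC1):
assignments = {port: select_uplink(port, 2) for port in (0, 1, 3)}
print(assignments)   # {0: 0, 1: 1, 3: 1}
# Ports 1 and 3 both land on pNIC1; if both of those VMs are busy,
# pNIC1 is overloaded while pNIC0 carries only one VM, as in the
# slide's image.
```

The fixed mapping is what makes this policy simple and switch-agnostic, but also what allows the imbalance described above.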


Route Based on IP Hash
Route based on IP hash chooses an uplink based on a hash of the source and
destination IP addresses of each packet. Evenness of traffic distribution depends on the
number of TCP/IP sessions to unique destinations.
Prior to vSphere 5.1, when using the IP hash policy, the attached physical switch ports
had to be set to static EtherChannel. This is called EtherChannel "mode on" on certain
switch types, such as Cisco and HP.
You cannot use IP hashing with static Etherchannel across non-stacked switches.
New with vSphere 5.1 you have the option to use Link Aggregation Control Protocol on
distributed virtual switches only. Using LACP helps avoid misconfiguration issues and
switch settings that do not match. It is possible to troubleshoot connections more easily
with LACP as you can tell if cables or links are configured correctly via the switch LACP
Status information.
LACP must first be enabled on the distributed switch before use and configured on the
physical switch.
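The principle can be sketched as follows. The XOR-and-modulo hash is illustrative only; ESXi's actual hash computation differs in detail, but the consequence is the same: one source talking to several destinations can spread across several uplinks.

```python
import ipaddress

def uplink_for_flow(src_ip: str, dst_ip: str, uplinks: list) -> str:
    """Illustrative IP-hash selection: combine the source and destination
    addresses and reduce modulo the uplink count, so each TCP/IP session
    to a unique destination may take a different uplink."""
    h = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return uplinks[h % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]
# The same source VM reaches different uplinks for different destinations.
a = uplink_for_flow("10.0.0.1", "10.0.0.2", uplinks)  # -> "vmnic1"
b = uplink_for_flow("10.0.0.1", "10.0.0.3", uplinks)  # -> "vmnic0"
```

This is also why the physical switch must treat the teamed ports as one channel: replies for the same flow may arrive on any member port.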


Load Balancing Policies


Route Based on Source MAC Hash
When the route based on source MAC hash option is selected in the vSphere Client, an
uplink is chosen based on a hash of the source Ethernet adapter's MAC address.
When you use this setting, traffic from a given virtual Ethernet adapter is consistently
sent to the same physical adapter unless there is a failover to another adapter in the
NIC team. The replies are received on the same physical adapter as the physical switch
learns the port association.
This setting provides an even distribution of traffic if the number of virtual Ethernet
adapters is greater than the number of physical adapters. A given virtual machine
cannot use more than one physical Ethernet adapter at any given time unless it uses
multiple source MAC addresses for traffic it sends.
This policy does not require any specific physical switch configuration. Consider
creating teamed ports in the same L2 Domain teamed over two physical switches to
improve network resilience.
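The idea can be sketched with a toy hash (the octet-sum reduction below is an illustrative stand-in, not ESXi's real function): every packet from a given vNIC hashes to the same value, so that vNIC is pinned to one uplink.

```python
def uplink_for_mac(mac: str, uplinks: list) -> str:
    """Illustrative source-MAC hash: reduce the MAC address to an uplink
    index. A given vNIC's traffic therefore always uses the same
    physical adapter unless a failover occurs."""
    octets = [int(x, 16) for x in mac.split(":")]
    return uplinks[sum(octets) % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]
# Repeated lookups for the same vNIC are stable.
first = uplink_for_mac("00:50:56:aa:bb:01", uplinks)
again = uplink_for_mac("00:50:56:aa:bb:01", uplinks)
```

The stability is what lets the physical switch learn a single port association per MAC, so replies return on the same adapter.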


Load Balancing Policies


Load-Based Teaming
Although teaming can be configured on a standard virtual switch, load-based teaming is
only available with distributed virtual switches. Initially, ports are assigned the way they
are assigned in source port-based load balancing.
The algorithm in load-based teaming regularly checks the load of all teaming NICs. If
one NIC gets overloaded while another has bandwidth available, the distributed virtual
switch reassigns the port-NIC mapping to reach a balanced status.
Until the next check is performed, the mapping maintains a stable state.
The load-based teaming policy ensures that a distributed virtual switch's uplink capacity
is optimized.
Load-based teaming avoids the situation of other teaming policies where some uplinks
are idle while others are completely saturated. This is the recommended teaming policy
when the network I/O control feature is enabled for the vNetwork Distributed Switch.
This policy does not require any specific physical switch configuration.
From a design perspective, Load-Based Teaming allows for higher levels of
consolidation at the virtual network level as physical NIC uplink capacity can be used
more efficiently.
Fewer physical NICs may be required to deliver the required bandwidth for specific
network functions.
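The periodic check can be sketched as follows. The 75% threshold and the "move one port" step are simplifications of the real algorithm; treat this as an illustration of the mechanism, not its implementation.

```python
def rebalance_once(port_to_uplink, port_load, uplinks, capacity, threshold=0.75):
    """One load-based-teaming style check: if an uplink exceeds the
    utilization threshold, move its lightest port to the least-loaded
    uplink; otherwise the mapping stays stable until the next check."""
    loads = {u: 0 for u in uplinks}
    for port, up in port_to_uplink.items():
        loads[up] += port_load[port]
    busiest = max(uplinks, key=lambda u: loads[u])
    if loads[busiest] / capacity <= threshold:
        return port_to_uplink            # balanced: no change
    idlest = min(uplinks, key=lambda u: loads[u])
    movable = [p for p, u in port_to_uplink.items() if u == busiest]
    port = min(movable, key=lambda p: port_load[p])
    new_map = dict(port_to_uplink)
    new_map[port] = idlest               # reassign the port-NIC mapping
    return new_map

# vmnic0 carries 900 of 1000 Mbps while vmnic1 carries only 100, so
# the 300 Mbps port is reassigned to vmnic1.
mapping = rebalance_once({1: "vmnic0", 2: "vmnic0", 3: "vmnic1"},
                         {1: 600, 2: 300, 3: 100},
                         ["vmnic0", "vmnic1"], capacity=1000)
```

Between checks the mapping behaves exactly like the source port ID policy, which is why no special physical switch configuration is needed.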

Traffic Filtering
In a vSphere distributed switch (version 5.5 and later), the traffic filtering and marking
policy allows you to protect the virtual network from unwanted traffic and security
attacks.
It also allows you to apply a QoS (Quality of Service) tag to a certain type of traffic.
Traffic filtering is the ability to filter packets based on the various parameters of the
packet header.
This capability is also referred to as access control lists (ACLs), and is used to provide
port-level security.
The vSphere Distributed Switch supports packet classification.
This is based on the following three different types of qualifier:
MAC SA and DA qualifiers, System traffic qualifiers, such as vSphere vMotion, vSphere
management, vSphere FT, and so on; and, IP qualifiers, such as Protocol type, IP SA,
IP DA, and port number.


Traffic Filtering
After the qualifier has been selected and packets have been classified, users have the
option either to filter or tag those packets.
When the classified packets have been selected for filtering, users have the option to
filter ingress, egress, or traffic in both directions.
Traffic-filtering configuration is at the port group level.
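Conceptually, each rule is a set of qualifiers matched against packet fields, with an action applied on the first match, much like a port-level ACL. The field names and rule shape below are hypothetical, chosen for illustration rather than taken from the actual vSphere data model.

```python
def matches(rule, packet):
    """A rule matches when every qualifier it names equals the packet's
    field; qualifiers the rule omits act as wildcards."""
    return all(packet.get(field) == value for field, value in rule.items())

def evaluate(rules, packet, default="allow"):
    """Apply the first matching rule's action, ACL-style."""
    for rule, action in rules:
        if matches(rule, packet):
            return action
    return default

rules = [({"protocol": "tcp", "dst_port": 23}, "drop")]   # block telnet
telnet = {"protocol": "tcp", "src_ip": "10.0.0.5", "dst_port": 23}
ssh = {"protocol": "tcp", "src_ip": "10.0.0.5", "dst_port": 22}
```

The same classification step feeds both actions: a classified packet can be dropped (filtering) or have a tag applied (marking).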


Differentiated Service Code Point Marking


Differentiated Service Code Point (DSCP) Marking or Tagging helps classify network
traffic and provide Quality of Service (QoS).
Important traffic can be tagged so that it doesn't get dropped in the physical network
during congestion.
Physical network devices use tags to identify important traffic types and provide Quality
of Service based on the value of the tag.
Because business-critical and latency-sensitive applications are virtualized and are run
in parallel with other applications on an ESXi host, it is important to enable the traffic
management and tagging features on a vSphere Distributed Switch.


Differentiated Service Code Point Marking


The traffic management feature on vSphere Distributed Switch helps reserve bandwidth
for important traffic types, and the tagging feature enables the external physical network
to detect the level of importance of each traffic type.
It is a best practice to tag the traffic near the source, which helps achieve end-to-end
Quality of Service.
During network congestion scenarios, the highly tagged traffic doesn't get dropped,
providing the traffic type with higher Quality of Service.
VMware has supported 802.1p tagging on VDS since vSphere 5.1. The 802.1p tag is
inserted in the Ethernet header before the packet is sent out on the physical network.
In vSphere 5.5, the DSCP marking support enables users to insert tags in the IP
header.
IP header-level tagging helps in layer 3 environments, where physical routers function
better with an IP header tag than with an Ethernet header tag.
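For reference, the DSCP value occupies the upper six bits of the IP header's ToS/Traffic Class byte, with the two ECN bits below it:

```python
def tos_byte(dscp: int, ecn: int = 0) -> int:
    """Build the IP ToS byte: DSCP in the upper 6 bits, ECN in the
    lower 2. For example, Expedited Forwarding (DSCP 46) yields
    ToS 0xB8, the value routers match on for priority handling."""
    if not (0 <= dscp < 64 and 0 <= ecn < 4):
        raise ValueError("DSCP is 6 bits, ECN is 2 bits")
    return (dscp << 2) | ecn
```

Because the tag lives in the IP header rather than the Ethernet header, it survives layer 3 hops, which is exactly why DSCP marking helps in routed environments.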


Failover Policies
Virtual network uplink resilience is provided by the failover policies within the properties
of the NIC team, at the virtual switch or port group level.
The failover policy specifies whether the team has standby physical NIC capacity, and
how that standby capacity is used.
Failover policies determine the method to be used for failover detection and how traffic
is re-routed in the event of a physical adapter failure on the host.
The failover policies that can be set are network failure detection, notify switches,
failback, and failover order.
It is important to remember that physical uplinks can be mapped to only one vSwitch at
a time while all port groups within a vSwitch can potentially share access to its physical
uplinks.
This allows design choices where standby NICs can be shared amongst multiple virtual
networks that are otherwise fully isolated, or all uplinks are active but some are also
defined as standby for alternative networks.
This type of design minimizes the total number of physical uplinks while maintaining
reasonable performance during failures without requiring dedicated standby NICs that
would otherwise be idle.


This capability becomes less useful as the number of vSwitches on each host
increases; hence best practice is to minimize the number of vSwitches.
During switch and port configuration, you can define which physical NICs are reserved
for failover and which are excluded.
Designs should ensure that under degraded conditions, such as when single network
link failures occur, not only is continuity ensured via failover, but acceptable bandwidth
is delivered under those conditions.


Failover Policies
Network Failover Detection
Network failover detection specifies the method to use for failover detection. The policy
can be set to either the Link Status only option or the Beacon Probing option within the
vSphere Client.
When the policy is set to Link Status only, failover detection will rely solely on the link
status that the network adapter provides. This option detects failures, such as cable
pulls and physical switch power failures. However, it does not detect configuration
errors, such as a physical switch port being blocked by spanning tree protocol or
misconfigured to the wrong VLAN or cable pulls on the other side of a physical switch.
The Beacon Probing option sends out and listens for beacon probes on all NICs in the
team and uses this information, along with link status, to determine link failure. This
option detects many failures that are not detected by link status alone.
LACP works with Link Status Network failover detection.
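The two detection methods can be contrasted in a small sketch. This is simplified: in practice beacon probing needs three or more NICs in the team to unambiguously identify which link failed.

```python
def failed_uplinks(link_up, beacons_heard, use_beacon_probing):
    """Link Status only flags uplinks whose own link is down. Beacon
    probing additionally flags uplinks that stop hearing beacons which
    other team members still hear (e.g. an upstream switch failure or
    misconfiguration that leaves the local link up)."""
    failed = {u for u, up in link_up.items() if not up}
    if use_beacon_probing and any(beacons_heard.values()):
        failed |= {u for u, heard in beacons_heard.items()
                   if link_up[u] and not heard}
    return failed

# vmnic1's local link is still up, but an upstream problem stops its
# beacons: only beacon probing notices the failure.
link = {"vmnic0": True, "vmnic1": True, "vmnic2": True}
probes = {"vmnic0": True, "vmnic1": False, "vmnic2": True}
```

This captures the trade-off described above: link status is cheap but blind to upstream faults, while beacon probing widens the set of detectable failures.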


Failover Policies
Notify Switches
When you use the notify switches policy, you must specify how the VMkernel
communicates with the physical switches in the event of a failover.
The notify switches can be set to either Yes or No.
If you select Yes, a notification is sent out over the network to update the lookup tables
on physical switches whenever a virtual Ethernet adapter is connected to the vSwitch or
dvSwitch, or whenever that virtual Ethernet adapter's traffic is routed over a different
physical Ethernet adapter in the team due to a failover event.
In almost all cases, this is desirable for the lowest latency when a failover occurs.


Failover Policies
Failback
By default, NIC teaming applies a failback policy.
This means that if a physical Ethernet adapter that had failed comes back online, the
adapter is returned to active duty immediately, displacing the standby adapter that took
over its slot. This policy is in effect when the Rolling Failover setting is set to No. If the
primary physical adapter experiences intermittent failures, this setting can lead to
frequent changes in the adapter in use.
Another approach is to set Rolling Failover to Yes.
With this setting, a failed adapter is left inactive even after recovery until another
currently active adapter fails, requiring replacement. Please note that the Failover Order
policy can be set in the vSphere Client.


Failover Policies
Failover Order
You can use the Failover Order policy setting to specify how to distribute the work load
for the physical Ethernet adapters on the host.
You can place some adapters in active use, designate a second group as standby
adapters for use in failover situations, and designate other adapters as unused,
excluding them from NIC Teaming.
Please note that the Failover Order policy can be set in the vSphere Client.


Network I/O Control


Let's have a look at network I/O control and its architecture.
In environments that use 1 Gigabit Ethernet (GigE) physical uplinks, it is not
uncommon to see multiple physical adapters dedicated to certain traffic types.
1 GigE is rapidly being replaced by 10 GigE networks. While 10 GigE provides ample
bandwidth for all traffic, it presents a new challenge: different kinds of traffic, each
previously limited to the bandwidth of a single 1 GigE link, can now consume up to 10
GigE. For optimum utilization of a 10 GigE link, there has to be a way to prioritize
network traffic by traffic flow. Prioritizing traffic ensures that latency-sensitive and
critical traffic flows can access the bandwidth they require.
Network I/O control enables the convergence of diverse workloads on a single
networking pipe. It provides control to the administrator to ensure predictable network
performance when multiple traffic types are flowing in the same pipe. It provides
sufficient controls to the vSphere administrator in the form of limits and shares
parameters to enable and ensure predictable network performance when multiple traffic
types contend for the same physical network resources.
Network resource pools determine the bandwidth that different network traffic types are
given on a vSphere distributed switch.
When network I/O control is enabled, distributed switch traffic is divided into predefined
network resource pools: Fault Tolerance traffic, iSCSI traffic, vMotion traffic,
management traffic, vSphere Replication (VR) traffic, NFS traffic, and virtual machine
traffic. As a best practice for networks that support different types of traffic flow, take
advantage of Network I/O Control to allocate and control network bandwidth. You can
also create custom network resource pools for virtual machine traffic. The iSCSI traffic
resource pool shares do not apply to iSCSI traffic on a dependent hardware iSCSI
adapter.
Without network I/O control you will have to dedicate physical uplinks (pNICs)
specifically and solely for software iSCSI traffic if you are using the software iSCSI
adapter.
Network I/O control is only available on distributed switches. It must be enabled before
use and requires, at a minimum, an Enterprise Plus license.


Network I/O Control Features


Network I/O control provides its users with different features. These include isolation,
shares, and limits.


Network I/O Control Features


Isolation
Isolation ensures traffic isolation so that a given flow will never be allowed to dominate
over others, thus preventing drops and undesired jitter.
When network I/O control is enabled, distributed switch traffic is divided into the
following predefined network resource pools: VMware Fault Tolerance traffic, iSCSI
traffic, management traffic, NFS traffic, virtual machine traffic, vMotion traffic, and
vSphere Replication (VR) traffic.


Network I/O Control Features


Limits
Limits specify an absolute bandwidth for a traffic flow. Traffic from a given flow is never
allowed to exceed its limit. The limit is specified in megabits per second. Limits are
useful when you want to prevent one traffic type from consuming so much bandwidth that
other traffic types are affected.
The system administrator can specify an absolute shaping limit for a given resource-pool
flow using a bandwidth capacity limiter. As opposed to shares, which are enforced at the
dvUplink level, limits are enforced across the vNetwork Distributed Switch's (vDS) entire
set of dvUplinks, which means that a given resource pool's flow will never exceed its
limit on a given vSphere host.
Consider an example where virtual machine and iSCSI traffic use nearly all the
available bandwidth. vMotion starts and consumes a large percentage of the bandwidth.
In this case, it might be a good idea to limit the bandwidth of vMotion.


Network I/O Control Features


Shares
Shares allow a flexible networking capacity partitioning to help users in dealing with
over commitment when flows compete aggressively for the same resources. Network
I/O control uses shares to specify the relative importance of traffic flows.
The system administrator can specify the relative importance of a given resource-pool
flow using shares that are enforced at the dvUplink level. The underlying dvUplink
bandwidth is then divided among resource-pool flows based on their relative shares in a
work-conserving way. This means that unused capacity will be redistributed to other
contending flows and won't go to waste.
As shown in the image, the network flow scheduler is the entity responsible for
enforcing shares and therefore is in charge of the overall arbitration under over
commitment. Each resource-pool flow has its own dedicated software queue inside the
scheduler so that packets from a given resource pool are not dropped due to high
utilization by other flows.
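The work-conserving division can be illustrated with a small sketch. The numbers (in Mbps) and the iterative algorithm are illustrative; the scheduler's real arbitration operates per dvUplink and per packet queue.

```python
def divide_bandwidth(capacity, shares, demand):
    """Split uplink capacity in proportion to shares; capacity a pool
    does not need is redistributed to the pools that still want more
    (work-conserving), so unused capacity doesn't go to waste."""
    alloc, remaining, active = {}, capacity, dict(shares)
    while active:
        total = sum(active.values())
        slice_ = {p: remaining * s / total for p, s in active.items()}
        satisfied = [p for p in active if demand[p] <= slice_[p]]
        if not satisfied:              # every pool wants more than its slice
            alloc.update(slice_)
            break
        for p in satisfied:            # these pools take only what they need
            alloc[p] = demand[p]
            remaining -= demand[p]
            del active[p]
    return alloc

# On a 10 GbE uplink, vMotion and NFS take only what they demand and
# their leftover proportional capacity flows to virtual machine traffic.
alloc = divide_bandwidth(10000,
                         shares={"vm": 100, "vmotion": 50, "nfs": 50},
                         demand={"vm": 9000, "vmotion": 2000, "nfs": 1000})
```

With strictly proportional (non-work-conserving) splitting, virtual machine traffic would be capped at 5,000 Mbps here; the redistribution raises it to 7,000 Mbps while the link stays fully used.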


VMware Security Strategy


To secure the vSphere environment, the first thing that needs to be secured is the
platform.
The platform comprises the physical hardware, the VMware ESXi virtualization layer,
and the virtual hardware layer of the virtual machines.
The next level of security ensures that the guest operating systems, which are the
endpoints of the virtual machine, are secure.
VMware vShield Endpoint optimizes antivirus and other host and endpoint security for
use in vSphere environments.
You can use vShield App to protect your applications against internal network-based
threats.
vShield App also reduces the risk of policy violations in the corporate security perimeter.
It does so by using application-aware firewalls with deep packet inspection and
connection control that is based on source and destination IP addresses.
Finally, when using vShield Edge, you get comprehensive perimeter network security
for virtual data centers.
vShield Edge provides port group isolation, and network security gateway services to
ensure the security of your data centers.
Before installing vShield Edge, you must become familiar with your network topology.


vShield Edge can have multiple interfaces, but you must connect at least one internal
interface to a port group or VXLAN virtual wire before you can deploy the vShield Edge.
Before you install vShield in your vCenter Server environment, consider your network
configuration and resources.
You can install one vShield Manager per vCenter Server, one vShield App or one
vShield Endpoint per ESX host, and multiple vShield Edge instances per data center.


VMware vCloud Networking and Security


vCloud Networking and Security launched with vSphere 5.1.
vShield functionality was integrated along with vShield Edge, Endpoint, App and Data
Security capabilities.
vSphere 5.1 also introduced two versions of vCloud Networking and Security.
These were Standard, which provided essential software defined networking and
integrated security and Advanced which provided high availability and cloud load
balancing.
With the introduction of vSphere 5.5, both editions have now been integrated into a
single edition in vCloud Suite aimed at vSphere environments only.


Network Activity 1
Your customer has approached you for help with the design of their environment. On a
single ESXi host they have eight physical 1 Gbps network cards.
They have already decided to use one standard virtual switch with five port groups, as
they want to keep the design uncomplicated.
They want to connect Production and Test & Development virtual machines to the
Standard vSwitch.
They propose the connections as shown for the vNICs to the relevant Port Groups.
The customer predicts that several virtual machines will be added to the production port
group in the near future and wants to ensure that the physical connections to this port
group are as resilient as possible.
The customer cannot tolerate any loss of management due to the loss of a physical
switch or physical network adapter.
The customer wants to use IP Storage, avail of vMotion with all other ESXi hosts, and
have a separate port group for Management.
Have a look at the options A, B, C and D to see how the Physical network adapters
might be connected, then select the best configuration for this customer and click
Submit.
Option A is displayed here.


Network Activity 1
Option B is displayed here.

Network Activity 1
Option C is displayed here.

Network Activity 1
Option D is displayed here.


Network Activity 1
The correct solution was Option C.
This configuration will allow several virtual machines to be added to the production port
group in the future and ensure that the physical connections to this port group are as
resilient as possible.
There is no risk of loss of management due to the loss of a physical switch or physical
network adapter.
The customer can use IP Storage, avail of vMotion with all other ESXi hosts, and have a
separate port group for Management.


Network Activity 2
Your customer has 4 virtual machines that they wish to use placed in controlled
networks to restrict communication to and from the machines. These virtual machines
will all need to be able to communicate with the default gateway device. They have
approached you with the following requirement:
Virtual machine A must be able to communicate with any node in the Primary PVLAN.
Virtual machine B must be able to communicate with virtual machine A but not with
virtual machines C or D.
Virtual machine C must be able to communicate with virtual machine A and D. It must
not be allowed to communicate with virtual machine B.
Virtual machine D must be able to communicate with virtual machine A and C. It must
not be allowed to communicate with virtual machine B.
Place each VM into the correct PVLAN.


Network Activity 2
The correct solution is shown.
Virtual Machine A can communicate with any node in the Primary PVLAN.
Virtual Machine B can communicate with Virtual Machine A but not with Virtual Machine
C or D.
Virtual Machine C can communicate with virtual machine A and D. It cannot
communicate with Virtual Machine B.
Virtual Machine D can communicate with virtual machine A and C. It cannot
communicate with Virtual Machine B.
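The reachability rules behind this answer can be captured directly, with A on the promiscuous PVLAN, B on the isolated PVLAN, and C and D sharing one community PVLAN (the community identifier below is an arbitrary placeholder):

```python
def can_communicate(a: str, b: str) -> bool:
    """Private VLAN reachability: promiscuous ports talk to everyone
    (e.g. the default gateway); community ports additionally talk
    within their own community; isolated ports talk only to
    promiscuous ports."""
    if a == "promiscuous" or b == "promiscuous":
        return True
    if a.startswith("community:") and a == b:
        return True
    return False

vms = {"A": "promiscuous", "B": "isolated",
       "C": "community:secondary-17", "D": "community:secondary-17"}
```

Checking the pairs reproduces the requirement: B reaches only A, while C and D reach A and each other but never B.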


Network Activity 3
A customer has approached you for help in scaling out their network environment. They
have recently purchased several new ESXi hosts, as well as some 10 Gbps network
adapters.
The customer has requested a solution that can be deployed across the ESXi Servers,
simplifying data center setup and administration.
They want a solution that enables the convergence of diverse workloads on each
10 Gbps network connection for optimum utilization of a 10 GigE link, as well as
optimizing uplink capacity.
Finally they want to know which level of vSphere License they will require in order to
achieve this.
From the solutions shown on screen, choose the most appropriate solution for each
area of deployment.


Module 2 Summary
This concludes Module 2, vSphere Networks - Advanced Features.
Now that you have completed this module, you should be able to:

Describe Private VLANs


Explain the VXLAN enhanced distributed switch and its pre-requisites
Explain Network load balancing and failover policies
Explain the concept of Network I/O control, what benefits it brings and how a
customer can implement it.
Describe VMware Security Tools/Products


Course 5

VTSP 5.5 Course 5 vStorage


Welcome to the VTSP V5.5 Course 5: VMware vSphere: vStorage.
There are 3 modules in this course as shown here.


Course Objectives
At the end of this course you should be able to:

Explain the vStorage architecture, virtual machine storage requirements and the
function of the types of storage available to vSphere solutions.
Describe the vSphere PSA, SIOC, VAAI, VASA and Storage DRS and explain
the benefits and requirements of each.
Determine the proper storage architecture by making capacity, performance and
feature capability decisions.
Information & Training on VSAN or virtualization of storage are not included in this
overview.


Module 1: vSphere vStorage Architecture


This is module 1, vSphere vStorage Architecture.
These are the topics that will be covered in this module.

Student Study Guide - VTSP 5.5

Page 355

VTSP 5.5 - Student Study Guide

Module 1 Objectives
At the end of this module, you will be able to:

Explain the high level vSphere Storage Architecture.


Describe the capacity and performance requirements for Virtual Machine
Storage.
Describe the types of physical storage vSphere can utilize and explain the
features, benefits and limitations of each type.


The vStorage Architecture - Overview


The VMware vSphere storage architecture consists of layers of abstraction that hide
and manage the complexity and differences among physical storage subsystems.
The Virtual Machine guest operating systems and their applications see the storage as
SCSI attached local disks.
In most cases these virtual SCSI disks are files stored in vSphere datastores, but
sometimes it is desirable to map block storage directly to virtual machines using Raw
Device Mappings (RDM).
The virtual SCSI disks, or VMDKs, are provisioned as files on vSphere datastores which
may be backed by either local SCSI storage, SAN attached block storage, or NFS NAS
storage. The datastore abstraction is a model that assigns storage space to virtual
machines while insulating the virtual machine and its guest OS from the complexity of
the underlying physical storage technology.
The guest virtual machine is not exposed directly to the Fibre Channel SAN, iSCSI
SAN, local storage, or NAS. The storage available to a virtual machine can be extended
by increasing the size of the VMDK files.
New virtual disks can be added at any time, but this additional virtual disk capacity may
not be immediately usable without some reconfiguration within the Guest operating
system.



Each vSphere datastore is either a physical Virtual Machine File System (VMFS)
volume on a block storage device or an NFS share on a NAS array.
Datastores can span multiple physical storage subsystems if necessary and a single
VMFS volume can contain one or more LUNs from a local SCSI disk, a Fibre Channel
SAN disk array, or an iSCSI SAN disk array.
New LUNs added to a physical storage subsystem are detected and can be made
available to existing datastores, or used to create new datastores. If the physical
storage system supports increasing LUN sizes dynamically, vSphere will recognize this
increase in available capacity and it can be used to extend a datastore or create a new
datastore if needed. Storage capacity on datastores can be extended using either
approach without powering down physical hosts or storage subsystems.
When more direct access to block storage is required, this can be provided through an
RDM. While RDMs bypass the VMFS layer, they are associated with a mapping file that
allows vSphere to interact with and manage them in much the same way as VMDKs
located in VMFS or NFS Datastores.
The capacity and performance requirements of the VMFS datastores are provided by
the physical capacity of the storage that is connected to the ESXi host, either locally or
via a SAN. Similarly, the capacity and performance of the NFS datastores are provided
by the NAS systems to which the NFS datastores are mapped.
vSphere 5.5 sees the introduction of VSAN. VSAN provides customers with a
distributed compute and storage architecture which is fully integrated and managed by
vCenter. At this time it is in public beta and as such is not covered in this course. It
should be noted that its introduction will impact how you design and deploy your
environment. The diagram shown on this slide does not reflect these changes.


Virtual Machine Storage


Guest operating systems see only virtual disks that are presented through virtual
controllers. Virtual controllers appear to a virtual machine as different types of
controllers, including BusLogic Parallel,
LSI Logic Parallel, LSI Logic SAS, VMware Paravirtual SCSI, and SATA.
The virtual machine internally sees these disks as local drives and has no knowledge of
the underlying storage architecture.
Each virtual machine can be configured with up to 8 virtual controllers: four SCSI and
four SATA. Each SCSI controller can manage up to 15 virtual disks, whilst each SATA
controller can manage up to 30 virtual disks.
This allows a virtual machine to have a maximum of 180 virtual disks, 60 of these as
SCSI and 120 of these as SATA.
Each virtual disk is mapped to a VMDK file on a vSphere datastore available to the
ESXi host or in the case of SCSI, mapped to a raw device in cases where more direct
access to the underlying storage is required.
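The 180-disk figure follows directly from the controller limits:

```python
# Per-VM virtual controller limits as stated above.
SCSI_CONTROLLERS, DISKS_PER_SCSI = 4, 15
SATA_CONTROLLERS, DISKS_PER_SATA = 4, 30

scsi_disks = SCSI_CONTROLLERS * DISKS_PER_SCSI   # 60 virtual disks on SCSI
sata_disks = SATA_CONTROLLERS * DISKS_PER_SATA   # 120 virtual disks on SATA
max_disks = scsi_disks + sata_disks              # 180 virtual disks in total
```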


LUN, Volume, and Datastore


When working with physical storage and vSphere Infrastructure, it's important to
understand the terms LUN, volume, and datastore.
A Logical Unit Number, or LUN, is a single allocation of block storage presented to a
server.
This LUN is the unique identification a host has assigned to a given block device
resource (disk) it finds when it scans its storage systems. The term disk is often used
interchangeably with LUN.
From the perspective of an ESXi host, a LUN is a single unique raw storage block
device or disk. In the first example illustrated in the diagram, the storage administrator
has provisioned 20GB of storage on a SAN array and presented it to the ESXi host as
LUN 10.
A VMFS volume is a collection of storage resources managed as a single shared
resource formatted with VMFS. In most cases the VMFS volume contains a single LUN.
In those cases the datastore and the VMFS volume are identical. However, in some
cases the VMFS volume might span two or more LUNs and be composed of multiple
extents as in the example here with LUN 11 and LUN 12.
Some VMFS volumes have many extents, and in some rare cases several VMFS
volumes might exist on a single LUN, as in the final example here where LUN 15 is
used to create two 10 GB Volumes.

When creating VMFS formatted datastores, the vSphere Administrator must first choose
the LUN that will provide the physical storage capacity for the datastore and then select
how much of the LUN's capacity will be allocated to that datastore.
This allocation is called a volume. VMFS volumes can span multiple LUNS in which
case each part is called a VMFS volume extent.
For best performance, a LUN should not be configured with multiple VMFS datastores.
Each LUN should only be used for a single VMFS datastore.
In contrast, NFS shares are created by the storage administrator and are presented and
mounted to ESXi hosts as NFS Datastores by the vSphere Administrator.
Whether they are based on VMFS formatted storage or NFS mounts, all datastores are
logical containers that provide a uniform model for storing virtual machine files.


Virtual Machine Contents Resides in a Datastore


Datastores provide the functional storage capacity used by ESXi hosts to store virtual
machine content and other files used by vSphere such as templates and ISOs.
As we see here, a virtual machine is stored as a set of files: the Virtual Disks that we
have seen already and other files that store the virtual machine configuration, BIOS and
other functions we will cover later.
These are usually stored in one directory located in a single datastore. Some files may
be stored in other directories, or even other datastores, but they are stored in one
directory by default.
Since virtual machines are entirely encapsulated in these sets of files, they can be
moved, copied and backed up efficiently.
VMFS5 allows up to 256 VMFS volumes per ESXi host, with a minimum volume size
of 1.3GB and a maximum size of 64TB. By default, up to 8 NFS datastores per ESXi
host are supported, and that limit can be increased to 64 NFS datastores per system.
These important limits may affect design decisions, particularly in larger environments.
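These limits can be captured in a simple pre-deployment check. The numbers below come from the text; the function itself is an illustrative sketch, not a VMware tool.

```python
# Sketch of a design-time check against the VMFS5 / NFS limits quoted above.
VMFS_MAX_VOLUMES_PER_HOST = 256
VMFS_MIN_VOLUME_GB = 1.3
VMFS_MAX_VOLUME_TB = 64
NFS_DEFAULT_MAX = 8
NFS_ABSOLUTE_MAX = 64

def check_host_storage(vmfs_volume_sizes_gb, nfs_datastore_count):
    """Return a list of limit violations for a proposed host layout."""
    issues = []
    if len(vmfs_volume_sizes_gb) > VMFS_MAX_VOLUMES_PER_HOST:
        issues.append("too many VMFS volumes")
    for size in vmfs_volume_sizes_gb:
        if size < VMFS_MIN_VOLUME_GB or size > VMFS_MAX_VOLUME_TB * 1024:
            issues.append(f"volume size {size} GB out of range")
    if nfs_datastore_count > NFS_ABSOLUTE_MAX:
        issues.append("too many NFS datastores")
    elif nfs_datastore_count > NFS_DEFAULT_MAX:
        issues.append("NFS datastore count requires raising the default limit")
    return issues

# A 0.5 GB volume is below the minimum; 12 NFS mounts exceed the default:
print(check_host_storage([500, 0.5], 12))
```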


Types of Datastores
The type of datastore to be used for storage depends upon the type of physical storage
devices in the data center. The physical storage devices include local SCSI disks and
networked storage, such as FC SAN disk arrays, iSCSI SAN disk arrays, and NAS
arrays.
Local SCSI disks store virtual machine files on internal or external storage devices
attached to the ESXi host through a direct bus connection.
On networked storage, virtual machine files are stored on external shared storage
devices or arrays. The ESXi host communicates with the networked devices through a
high-speed network.
Let's add the HBAs and Switches in the diagram. Notice that there are front-end
connections on the SAN and NAS arrays.
As we mentioned earlier, block storage from local disks, FC SANs and iSCSI SANs are
formatted as VMFS volumes. NAS storage must use NFS v3 shares for an ESXi host to
be able to use it for NFS Datastores.
The performance and capacity of the storage subsystem depends on the storage
controllers (the capability and quantity of local RAID controllers, SAN HBAs and
Network Adapters used for IP storage) and on the SAN or NAS array controllers for
remote storage.



The performance capabilities of each datastore depend primarily on the configuration of
the physical disks allocated and can vary significantly depending on the type, quantity
and RAID configuration of the disks that are assigned for the datastores used by the
vSphere host.
All of the components involved must be selected appropriately if the required storage
capacity and performance needs are to be met.
There are many storage options available, but to be supported, the specific combination
of hardware used in the solution must be on the vSphere Hardware Compatibility List.


VMFS Volume
A VMFS volume is a clustered file system that allows multiple hosts read and write
access to the same storage device simultaneously.
The cluster file system enables key vSphere features, such as fast live migration of
running virtual machines from one host to another, to be supported when virtual
machines are stored on SAN storage that is shared between multiple vSphere hosts.
It also enables vSphere HA to automatically restart virtual machines on separate hosts if
needed, and it enables clustering of virtual machines across different hosts.
VMFS provides an on-disk distributed locking system to ensure that the same virtual
machine is not powered-on by multiple hosts at the same time.
If an ESXi host fails, the on-disk lock for each virtual machine can be released so that
virtual machines can be restarted on other ESXi hosts.
Besides locking functionality, VMFS allows virtual machines to operate safely in a SAN
environment with multiple ESXi hosts sharing the same VMFS datastore. Up to 64 hosts
can be connected concurrently to a single VMFS-5 volume.
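The on-disk locking behavior described here can be illustrated with a toy model. This is pure Python with invented names (`power_on`, `host_failed`), not the real VMFS on-disk format: each powered-on VM holds a lock naming its owning host, a second host cannot power on the same VM, and a failed host's locks are released so its VMs can restart elsewhere.

```python
# Conceptual illustration of VMFS distributed on-disk locking (not the
# real on-disk structures). locks maps each powered-on VM to its host.
locks = {}

def power_on(vm, host):
    if vm in locks and locks[vm] != host:
        return False          # lock held by another host: refuse power-on
    locks[vm] = host
    return True

def host_failed(host):
    # Release the on-disk locks of every VM owned by the failed host.
    for vm in [v for v, h in locks.items() if h == host]:
        del locks[vm]

assert power_on("vm01", "esxi-a")
assert not power_on("vm01", "esxi-b")   # blocked by the on-disk lock
host_failed("esxi-a")
assert power_on("vm01", "esxi-b")       # restart on a surviving host
```

This is the mechanism that lets vSphere HA safely restart VMs after a host failure.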
VMFS can be deployed on a variety of SCSI-based storage devices, such as FC and
iSCSI SAN arrays. A virtual disk stored on a VMFS always appears as a mounted SCSI
device to a virtual machine.



The virtual disk hides the physical storage layer from the virtual machine's operating
system. This feature allows operating systems that are not certified for SANs to be run
inside a virtual machine.
You can create or store multiple virtual machines on the same VMFS volume where
each virtual machine is defined by a set of files that are usually stored together in a
single directory.
The ability to place multiple virtual machines in a shared datastore greatly simplifies the
task of managing storage but it requires careful planning to ensure that the underlying
physical storage can deliver the aggregate performance needed by all of the virtual
machines in the shared datastore.


NFS Volumes
NFS is a file-sharing protocol that is used by NAS systems to allow multiple remote
systems to connect to a shared file system. It is used to establish a client-server
relationship between the ESXi hosts and NAS devices. In contrast to block storage
provided by local SCSI disks or SAN arrays, the NAS system itself is responsible for
controlling access and managing the layout and the structure of the files and directories
of the underlying physical storage.
ESXi hosts mount NFS volumes as NFS Datastores. Once these are created they
provide the same logical structure for storage that VMFS datastores provide for block
storage.
NFS allows volumes to be accessed simultaneously by multiple ESXi hosts that run
multiple virtual machines. With vSphere 5.5 NFS datastores provide the same advanced
features that depend on shared storage as VMFS datastores.
The strengths of NFS are similar to those of VMFS datastores. After storage is
provisioned to the ESXi hosts, the vCenter administrator is free to use the storage as
needed.
One major difference between VMFS and NFS datastores is that NFS shares can be
mounted on other systems even while they are mounted on ESXi hosts. This can make
it simpler to move data in or out of an NFS datastore for example if you want to copy
ISO files into an ISO library stored on an NFS datastore or simply wish to copy virtual
machine data files between systems without directly involving vSphere clients or
interfaces. Obviously, care must be taken not to interfere with running virtual machines
when doing this.


New vSphere Flash Read Cache


vSphere 5.5 introduces vSphere Flash Read Cache, a new Flash-based storage
solution that is fully integrated with vSphere.
Flash Read Cache is a configurable resource in the vSphere Web Client. It provides
VMDK caching to accelerate I/Os in a shared storage environment.
Flash Read Cache has an open framework to allow third-party flash cache solutions in
the VMware storage I/O stack.
vSphere Flash Read Cache enables the pooling of multiple Flash-based devices into a
single consumable vSphere construct called vSphere Flash Resource, which is
consumed and managed in the same way as CPU and memory are done today in
vSphere.
vSphere Flash Read Cache framework design is based on two major components:

vSphere Flash Read Cache software and


vSphere Flash Read Cache infrastructure.


The vSphere Flash Read Cache infrastructure is responsible for integrating the vSphere
host's locally attached Flash-based devices into the vSphere storage stack.
This integration delivers a Flash management platform that enables the pooling of
Flash-based devices into a vSphere Flash Resource.
The vSphere Flash Read Cache software is natively built into the core vSphere ESXi
Hypervisor.
vSphere Flash Read Cache provides a write-through cache mode that enhances the
performance of virtual machines without the modification of applications and operating
systems.
Virtual machines cannot detect the presence or the allocation of vSphere Flash
Read Cache.
The performance enhancements are introduced to virtual machines based on the
placement of the vSphere Flash Read Cache, which is situated directly in the virtual
machine's virtual disk data path.
vSphere Flash Read Cache enhances virtual machine performance by accelerating
read-intensive workloads in vSphere environments.
The tight integration of vSphere Flash Read Cache with vSphere 5.5 also delivers
support and compatibility with vSphere features such as vSphere vMotion, vSphere
High availability and vSphere Distributed Resource Scheduling.


vSphere hosts can be configured to consume some of the vSphere Flash Resource as
vSphere Flash Swap Cache, which replaces the Swap to SSD feature previously
introduced with vSphere 5.0.
The cache reservation is allocated from the Flash Read Cache resource.
The Flash Read Cache can be reserved for any individual VMDK in a Flash Read
Cache pool.
A Flash Read Cache is created only when a virtual machine is powered on, and it is
discarded when a virtual machine is suspended or powered off.


vSphere Flash Read Cache Requirements


vSphere Flash Read Cache has a number of software and hardware requirements that
must be fulfilled.
You must be running vCenter Server version 5.5, and vFlash can only be configured
and managed when using the Web client.
This is the same with all of the new features in vSphere 5.5.
You must have at least one vSphere ESXi 5.5 host as minimum.
The maximum size of a cluster in version 1.0 is 32 nodes. It should be noted that not
every host in the cluster needs to bear flash storage to benefit from vFlash Read Cache.
The virtual machine hardware version must be at version 10 and compatible with
vSphere 5.5 or later.
Solid State devices (SSDs) are used for read cache only. When a local SSD disk is
formatted as VMFS, it becomes unavailable for Flash Read Cache.
If the SSD that you plan to use with Flash Read Cache is already formatted with VMFS,
you must remove the VMFS datastore.
To set up a Flash Read Cache resource, use an SSD device connected to your host. If
you need to increase the capacity of your Flash Read Cache resource, you can add
more SSD devices, up to eight in total.
Finally, not all workloads benefit from a Flash Read Cache. The performance boost
depends on your workload pattern and working set size.

Read-intensive workloads with working sets that fit into the cache might benefit from a
Flash Read Cache configuration.
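The requirements listed above can be summarized as a simple validation sketch. The field names and the function are hypothetical helpers for illustration, not drawn from any VMware API; the checked values (vCenter 5.5, ESXi 5.5, hardware version 10, at most 8 SSD devices) come from the text.

```python
# Hedged sketch: validating the Flash Read Cache requirements listed above.
def vflash_ready(vcenter_version, esxi_versions, vm_hw_version, ssd_count):
    """True if the environment meets the documented vFlash requirements."""
    return (vcenter_version >= (5, 5)              # vCenter Server 5.5+
            and all(v >= (5, 5) for v in esxi_versions)  # ESXi 5.5 hosts
            and vm_hw_version >= 10                # VM hardware version 10
            and 1 <= ssd_count <= 8)               # up to eight SSD devices

print(vflash_ready((5, 5), [(5, 5), (5, 5)], 10, 2))   # True
```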


Storage Approaches
When considering storage options for a design it is important to fully understand the
benefits and limitations of each type of storage solution.
Shared and Local Storage
Shared storage is more expensive than local storage, but it supports a larger number of
vSphere features.
However, local storage might be more practical in a small environment with only a few
ESXi hosts.
Shared VMFS or NFS datastores offer a number of benefits over local storage.
Shared storage is a pre-requisite for vSphere HA & FT and significantly enhances the
speed of vMotion, DRS and DPM machine migrations. Shared storage is ideal for
repositories of virtual machine templates or ISOs. As environments scale, shared
storage initially simplifies growth and eventually becomes a pre-requisite for it.
To deliver high capacity and high performance with recoverability, a shared storage
solution is required.


Isolated Storage
Isolated storage is a design choice where each virtual machine is mapped to a single
LUN, as is generally the case with physical servers.
When using RDMs, such isolation is implicit because each RDM volume is mapped to a
single virtual machine.
The primary advantage of this approach is that it limits the performance impact that
virtual machines can have on each other at the vSphere storage level; there are also
security situations where storage isolation is desirable.
Given the advances in current storage systems, the performance gain from RDMs is
minimal; they should be used sparingly, or when required by an application vendor.
The disadvantage of this approach is that when you scale the virtual environment, you
will reach the upper limit of 256 LUNs and VMFS volumes that can be configured on
each ESXi host. You may also need to provide an additional disk or LUN each time you
want to increase the storage capacity for a virtual machine.
This situation can lead to a significant management overhead. In some environments,
the storage administration team may need several days' notice to provide a new disk or
a LUN. You can also use vMotion to migrate virtual machines with RDMs as long as
both the source and target hosts have access to the raw LUN.
Another consideration is that every time you need to grow the capacity for a virtual
machine, the minimum commit size is that of an allocation of a LUN.



Although many arrays allow LUNs to be of any size, the storage administration team
may avoid carving up lots of small LUNs because this configuration makes it harder for
them to manage the array.
Most storage administration teams prefer to allocate LUNs that are fairly large.
They like to have the system administration or application teams divide those LUNs into
smaller chunks that are higher up in the stack.
VMFS suits this allocation scheme perfectly and is one of the reasons VMFS is so
effective in the virtualization storage management layer.


Consolidated Storage
When using consolidated storage, you gain additional management productivity and
resource utilization by pooling the storage resource and sharing it with many virtual
machines running on several ESXi hosts.
Dividing this shared resource between many virtual machines allows better flexibility,
easier provisioning, and simplifies ongoing management of the storage resources for
the virtual environment.
Keeping all your storage consolidated also enables Storage DRS to be used to ensure
that storage resources are dynamically allocated in response to utilization needs.
Compared to strict isolation, consolidation normally offers better utilization of storage
resources.
The main disadvantage is additional resource contention that, under some
circumstances, can lead to a reduction in virtual machine I/O performance.
By including consolidated storage in your original design, you can save money in your
hardware budget in the long run.
Think about investing early in a consolidated storage plan for your environment.


Isolated Storage or a Consolidated Pool of Storage?


The questions you have to consider before you decide on isolated or consolidated
storage are:
How many virtual machines can share a VMFS volume?
What is the throughput of these virtual machines?
Are the virtual machines running mission critical applications?
Are the virtual machine structures spread out?
The answers to these questions will help you decide if you need isolated or
consolidated storage. In general, it's wise to separate heavy I/O workloads from the
shared pool of storage. This separation helps optimize the performance of those high
transactional throughput applications - an approach best characterized as consolidation
with some level of isolation.
Due to varying workloads, there is no exact rule to determine the limits of performance
and scalability for allocating the number of virtual machines per LUN. These limits also
depend on the number of ESXi hosts sharing concurrent access to a given VMFS
volume.
The key is to recognize the upper limit of 256 LUNs and understand that this can limit
the consolidation ratio if you take the concept of one LUN per virtual machine too far.
Many different applications can easily and effectively share a clustered pool of storage.


After considering all these points, the best practice is to have a mix of consolidated and
isolated storage.


Virtual Machine and Host Storage Requirements


Each virtual machine requires storage capacity for its configuration files, virtual disks,
Virtual Memory Swap and Snapshots.
These capacity requirements are primarily driven by the size of the virtual disks
associated with the virtual machine, but for precise sizing the space needed for VM
swap files and active snapshots must also be accounted for.
The storage solution must also meet the performance requirements of the virtual
machine, so that the I/Os per second, read/write pattern and bandwidth needs of the
VM can be met. Again these are primarily driven by the performance needs of the virtual
machine's virtual disks but snapshots will add additional overhead while they are active
and that must be considered.
ESXi 5.5 hosts require a boot device with a minimum of 1GB of storage and when
booting from a local or SAN attached disk a 5.2GB disk is required for the boot and
scratch partitions. In auto-deploy scenarios the 4GB scratch partitions from multiple
ESXi hosts can be co-located on a single SAN LUN.
Consolidation allows for better utilization of the aggregate capacity and performance of
the physical storage solution but capacity and performance analysis must map the
requirements accurately to ensure that shared datastores deliver both the required
space and performance that is needed.
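A rough capacity estimate following this reasoning can be sketched as below. This is an illustrative calculation, not a VMware sizing tool: per-VM swap is approximated as configured vRAM minus any memory reservation, and the 20% snapshot/overhead buffer is an example value chosen for the sketch.

```python
# Illustrative datastore capacity estimate: virtual disk capacity, plus
# per-VM swap (roughly vRAM minus reservation), plus a buffer for active
# snapshots. The 20% buffer is an assumption for the example.
def datastore_capacity_gb(vms, snapshot_buffer=0.20):
    """vms: list of dicts with disk_gb, vram_gb, reservation_gb."""
    disks = sum(vm["disk_gb"] for vm in vms)
    swap = sum(max(vm["vram_gb"] - vm["reservation_gb"], 0) for vm in vms)
    return (disks + swap) * (1 + snapshot_buffer)

vms = [{"disk_gb": 40, "vram_gb": 8, "reservation_gb": 0},
       {"disk_gb": 60, "vram_gb": 16, "reservation_gb": 8}]
print(round(datastore_capacity_gb(vms), 1))   # 139.2
```

Here 100 GB of disks plus 16 GB of swap, with a 20% buffer, suggests roughly 140 GB of datastore capacity.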


VM & Host Storage Requirements


Virtual Machine Contents
A virtual machine usually resides in a single folder or subdirectory that is created by an
ESXi host. When a user creates a new virtual machine, virtual machine files are
automatically created on a datastore. Some of these, the .vmdk files and swap files for
example, can be moved to other locations if required for performance or other
management reasons.
vSphere storage design selection decisions are primarily driven by the capacity and
performance needs of the VMDK but it is important to understand the role of the other
Virtual Machine files.
The .vmx file is the virtual machine's primary configuration file that stores settings
chosen in the New Virtual Machine Wizard or virtual machine settings editor. The .vmxf
file contains additional configuration information for the virtual machine.
The virtual disks for the VM are defined by the .vmdk files. Each .vmdk is a small ASCII
text file that stores information about a virtual machine's hard disk drive. Each .vmdk file
is paired with a much larger -flat.vmdk file that contains the actual virtual disk data. This
pair of files is sometimes referred to as the base virtual disk and will appear in the
vSphere datastore browser as a single entry representing both files.
The virtual machine's BIOS configuration is stored in the .nvram file.



The .vmss file is the virtual machine suspended-state file; it stores the state of a
virtual machine while it is suspended.
Snapshot metadata is stored in the .vmsd file. This contains centralized information
about all snapshots for the virtual machine. The individual snapshots will have one or
more .vmsn files that store the running state of a virtual machine at the time you take
the snapshot. These files are stored in the VM directory, with the .vmx file.
Snapshots also create .vmdk and xxxx-delta.vmdk files that contain the difference
between the current state of the disk and the state that existed at the
time the snapshot was started. These files are stored in the same datastore and
directory as the base virtual machine disks they are associated with.
Each running virtual machine will also have a .vswp file that is the virtual machine's
swap file for memory allocation.
The vmware-n.log files contain log data about the virtual machine that can be used in
troubleshooting when you encounter problems; these are always stored in the directory
that holds the configuration (.vmx) file of the virtual machine.
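As a quick reference, the file roles described above can be collected into a small lookup table. This is an illustrative helper for study purposes, not part of any vSphere tooling; the suffixes are the ones named in the text.

```python
# Lookup table summarizing the virtual machine file types described above.
VM_FILE_TYPES = {
    ".vmxf":  "additional configuration",
    ".vmx":   "primary configuration file",
    ".vmdk":  "virtual disk descriptor (paired with a -flat.vmdk data file)",
    ".nvram": "BIOS configuration",
    ".vmss":  "suspended state",
    ".vmsd":  "snapshot metadata",
    ".vmsn":  "snapshot running state",
    ".vswp":  "memory swap file",
    ".log":   "virtual machine log",
}

def describe(filename):
    """Return the role of a VM file based on its suffix."""
    for suffix, role in VM_FILE_TYPES.items():
        if filename.endswith(suffix):
            return role
    return "unknown"

print(describe("vm01.vmsd"))   # snapshot metadata
```

Note that `.vmxf` is checked before `.vmx` so the longer suffix matches first.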


VMDK Types: Thick and Thin Provisioning


When provisioning virtual disks for VMs you have a number of choices in terms of the
type of VMDK.
Thick provisioned disks have the full capacity of the disk immediately allocated from the
VMFS. Thin provisioned disks on the other hand allocate capacity as needed and will
typically only consume the disk space that is actively in use by the VM.
Initial provisioning of virtual disks is usually rapid; however the Eager-Zeroed thick disk
format actively clears all data on the reserved space to zero before reporting the disk as
initialized. For large disks this can be a slow process.
In most cases the choice of which type to use is entirely up to the administrator but
Eager-Zeroed thick disks must be used in some cases, for example all disks in VMs
protected by vSphere Fault Tolerance must use them.


Thick Provisioning
When you create a virtual machine, a certain amount of storage space on a datastore is
provisioned, or allocated, to the virtual disk files.
By default, ESXi offers a traditional storage provisioning method. In this method, the
amount of storage the virtual machine will need for its entire lifecycle is estimated, a
fixed amount of storage space is provisioned to its virtual disk, and the entire
provisioned space is committed to the virtual disk during its creation. This type of virtual
disk that occupies the entire provisioned space is called a thick disk format.
A virtual disk in the thick format does not change its size unless it is modified by a
vSphere administrator. From the beginning, it occupies its entire space on the datastore
to which it is assigned. However, creating thick format virtual disks leads to
underutilization of datastore capacity because large amounts of storage space that are
pre-allocated to individual virtual machines might remain unused and stranded, since
this space cannot be used by any other virtual machine.
Thick virtual disks, which have all their space allocated at creation time, are further
divided into two types: eager zeroed and lazy zeroed.
The default allocation type for thick disks is lazy-zeroed. While all of the space is
allocated at the time of creation, each block is zeroed only on first write. This results in a
shorter creation time, but reduced performance the first time a block is written to.
Subsequent writes, however, have the same performance as on eager-zeroed thick
disks. The Eager-Zeroed Thick type of disk has all space allocated and zeroed out at
the time of creation. This increases the time it takes to create the disk, but it results in
the best and most consistent performance, even on the first write to each block.
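The lazy-zeroed versus eager-zeroed trade-off can be shown with a toy simulation. This is an invented model for illustration (costs are arbitrary units, and the class is not a VMware API): lazy-zeroed disks pay a zeroing cost on the first write to each block, while eager-zeroed disks zero everything at creation time.

```python
# Toy simulation of lazy-zeroed vs eager-zeroed thick disk behavior.
class ThickDisk:
    def __init__(self, blocks, eager=False):
        # Eager-zeroed: every block is already zeroed at creation time.
        self.zeroed = set(range(blocks)) if eager else set()

    def write(self, block):
        cost = 1
        if block not in self.zeroed:     # first write: zero the block first
            cost += 1
            self.zeroed.add(block)
        return cost

lazy = ThickDisk(4)
print(lazy.write(0), lazy.write(0))    # 2 1  (first write is slower)
eager = ThickDisk(4, eager=True)
print(eager.write(0), eager.write(0))  # 1 1  (consistent from the start)
```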


Thin Provisioning
To avoid over-allocating storage space and thus minimize stranded storage, vSphere
supports storage over-commitment in the form of thin provisioning.
When a disk is thin provisioned, the virtual machine thinks it has access to a large
amount of storage. However, the actual physical footprint is much smaller.
Disks in thin format look just like disks in thick format in terms of logical size. However,
the VMFS drivers manage the disks differently in terms of physical size. The VMFS
drivers allocate physical space for the thin-provisioned disks on first write and expand
the disk on demand, if and when the guest operating system needs it. This capability
enables the vCenter Server administrator to allocate the total provisioned space for
disks on a datastore at a greater amount than the actual capacity of the datastore.
It is important to note that thin provisioned disks add overhead to virtual disk operations
when the virtual disk needs to be extended, for example when data is written for the first
time to a new area of the disk. This can lead to more variable performance with thin
provisioned disks than with thick provisioned disks, especially eager-zeroed thick disks,
and they may not be suitable for VMs with demanding disk performance requirements.
If the VMFS volume is full and a thin disk needs to allocate more space for itself, the
virtual machine will be paused and vSphere prompts the vCenter Server administrator
to provide more space on the underlying VMFS datastore. While the virtual machine is
suspended, the integrity of the VM is maintained; however, this is still a very undesirable
scenario and care must be taken to avoid this happening if possible.
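The overcommitment this enables can be sketched with a quick calculation. The figures are example values; the point is that with thin provisioning the total provisioned (logical) size may exceed physical capacity, so what matters operationally is the space actually in use.

```python
# Illustrative overcommitment check for thin-provisioned disks.
def overcommit_ratio(provisioned_gb, capacity_gb):
    """Total logical size of all disks relative to physical capacity."""
    return sum(provisioned_gb) / capacity_gb

def used_fraction(used_gb, capacity_gb):
    """Fraction of physical capacity actually consumed."""
    return sum(used_gb) / capacity_gb

provisioned = [100, 100, 100]   # three thin disks of 100 GB each
used = [30, 20, 10]             # blocks actually written so far
print(overcommit_ratio(provisioned, 200))   # 1.5 -> overcommitted
print(used_fraction(used, 200))             # 0.3 -> 30% physically used
```

A ratio above 1.0 means the datastore could fill up if the guests write to all of their provisioned space, which is exactly the scenario the alarms described next are meant to catch.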


vSphere provides alarms and reports that specifically track allocation versus current
usage of storage capacity so that the vCenter Server administrator can optimize
allocation of storage for the virtual environment and be alerted when there is any risk of
datastores running out of sufficient capacity to cater for the dynamic space
requirements of running machines with thin provisioned virtual disks.


Space Efficient Sparse Disks


With the release of vSphere 5.1, VMware introduced a new virtual disk type, the space-efficient sparse virtual disk (SE sparse disk). One of its major features is the ability to
reclaim previously used space within the guest OS. Another major feature of the SE
sparse disk is the ability to set a granular virtual machine disk block allocation size
according to the requirements of the application. Some applications running inside a
virtual machine work best with larger block allocations; some work best with smaller
blocks. This was not tunable in the past.
The SE sparse disk implements a space reclaim feature to reclaim blocks that were
previously used but now are unused on the guest OS. These are blocks that were
previously written but currently are unaddressed in a file system/database due to file
deletions, temporary files, and so on. There are two steps involved in the space
reclamation feature: The first step is the wipe operation that frees up a contiguous area
of free space in the virtual machine disk (VMDK); the second step is the shrink, which
unmaps or truncates that area of free space to enable the physical storage to be
returned to the free pool.
VMware View 5.1 and VMware Horizon View 5.x are the only products that use Space
Efficient Sparse Disks.
Sparse disks are not currently supported when using the new 62TB disks introduced in
vSphere 5.5.


vSphere Thin Provisioning at Array and Virtual Disk Level


In vSphere environments thin provisioning can be done at the array level and the virtual
disk level.
Thin provisioning at the array level is done by the storage administrator while at the
virtual disk level it is under the control of the vSphere Administrator.
In both cases, overall storage utilization can be significantly improved over thick
provisioning but an important difference is that if the storage array is unable to increase
the size of its thin provisioned disks because it has insufficient spare capacity, vSphere
may be unaware of the problem and this can lead to virtual machines crashing. This
problem is addressed by arrays that support the VMware vSphere Storage APIs - Array
Integration (VAAI) VM-Stun primitive that we will see in the next module.
When storage array level thin provisioning is used it is vital that alerting and monitoring
at the array level is implemented to avoid scenarios where virtual machines are
impacted due to the array being unable to expand thin provisioned volumes as required.
Array level thin provisioning can allow administrators to take advantage of the benefits
of storage thin provisioning in terms of overall utilization without sacrificing performance.


Planning for Swap Space, Snapshots and Thin Provisioning


Disk storage capacity is defined primarily by the storage capacity needs of the Virtual
Machine Guest operating system's virtual disks, snapshots and swap files.
If you are not using thin provisioning, you will need to ensure that your storage solution
has a total capacity that exceeds the combined size of the configured capacity for all of
the VM disks. With thin-provisioned disks, you do not need as much physical capacity
but you will have to ensure you can support the amount of capacity actually in use, plus
a buffer for expected growth in actual used space.
Most of the remaining virtual machine configuration files are relatively small and do not
consume any significant amounts of disk capacity. There are two other major classes of
VM files that can significantly affect overall storage capacity needs:
If Virtual Machine VMX swap files are enabled, these may require anywhere from a
hundred megabytes to over 10GB per VM depending on VM configuration. While this
requirement is relatively small in most cases, if you have many VMs sharing a
datastore, or if the VMs have lots of vRAM, vCPUs or both, this may need to be
accounted for.
Snapshot capacity - this will vary depending on whether snapshots are used, how
much disk activity the machines with active snapshots are performing and how long the
snapshots are kept active. Longer lived snapshots on busy systems can use a lot of
capacity and additional capacity must be available in the datastore(s) to support them. If
the datastore free space is exhausted by a VM's snapshots, that VM will be suspended
and other VMs on the same datastore may be impacted (e.g. it may be impossible to
start VMs due to the inability of the VMkernel to create a swap file on the datastore).
The guideline is to always leave between 10 and 20% of capacity free on datastores so
that these typical overheads can be accommodated without impacting production
systems but when snapshots are in active use, care must be taken to monitor the
consumption of free space to ensure they do not negatively impact running VMs.
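The 10-20% free-space guideline can be turned into a simple check. This is a sketch with example thresholds and example datastore figures, not a vCenter alarm definition.

```python
# Sketch of the free-space guideline above: flag any datastore whose free
# space has fallen below the chosen reserve (20% in this example).
def below_reserve(capacity_gb, used_gb, reserve=0.20):
    free = capacity_gb - used_gb
    return free < capacity_gb * reserve

datastores = {"ds-prod": (1000, 850), "ds-test": (500, 300)}
for name, (cap, used) in datastores.items():
    if below_reserve(cap, used):
        print(f"{name}: free space below reserve")   # flags ds-prod only
```

Here ds-prod has 150 GB free against a 200 GB reserve and is flagged, while ds-test still has 40% free.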


Storage Considerations
You must keep a few points in mind while configuring datastores, selecting virtual disks
and choosing the type of storage solution for a vSphere environment.
For VMFS volumes the best practice is to have just one VMFS volume per LUN. Each
VMFS can be used for multiple virtual machines, as we have noted earlier, as
consolidation on shared datastores simplifies administration and management but there
will be scenarios where virtual machines, or even individual virtual disks, are best
served by dedicating a specific datastore to them. These are usually driven by
performance considerations.
When virtual machines running on a datastore require more space, you can dynamically
increase the capacity of a VMFS datastore by extending the volume or by adding an
extent. An extent is a partition on a storage device, or LUN. You can add up to 32 new
extents of the same storage type to an existing VMFS datastore.
Test and production environments should be kept on separate VMFS volumes.
RDMs can be used for virtual machines that are part of physical-to-virtual clusters or
clusters that span ESXi hosts (cluster-across-boxes) or virtual machines that need to
work directly with array features, such as array level snapshots.
You must keep iSCSI and NAS on separate and isolated IP networks for best
performance.


The default limit for NFS mounts on ESXi is eight, but this can be extended to
sixty-four if necessary.
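On ESXi 5.x this limit is governed by the NFS.MaxVolumes advanced setting. A sketch of how it might be raised from the host command line with esxcli (the value shown matches the limit above; every host mounting the same volumes should use the same setting):

```shell
# Raise the NFS mount limit from the default of 8 (example value: 64)
esxcli system settings advanced set -o /NFS/MaxVolumes -i 64

# Verify the new value
esxcli system settings advanced list -o /NFS/MaxVolumes
```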
When deploying ESXi 5.5 in environments that have older VMFS 2 file systems you
have to first upgrade VMFS 2 to VMFS 3 and then upgrade to VMFS 5 as ESXi 5.5
does not support VMFS2.

Space Utilization-Related Issues


vSphere snapshots are implemented using sparse disks with a copy-on-write
mechanism that fills in data in the sparse disk while the snapshot is in use.
This technique is efficient in terms of performance and capacity, but on very active
disks it can result in snapshots eventually consuming just as much space as the original disk.
It is important to ensure that snapshots are not left running for extended periods of
time, and storage alarms should be used so that an alert is raised if running snapshots
consume too much datastore space.
When snapshots are used, as they will be to enable most virtual machine backup
solutions, sufficient capacity must be allocated to accommodate them. While a 10-20%
overall free space reserve on datastores serves as a good guideline that will allow
short-term snapshots to be used safely, it is important to monitor and evaluate the
capacity they need and either adjust the datastore capacity or move VMs to a more
appropriate datastore if excessive consumption of free space is noticed.

Monitoring Space Utilization


vCenter Server administrators can monitor space utilization by setting up alarms that
send a notification when a certain threshold is reached. They can also analyze reports
and charts that graphically represent statistical data for various devices and entities and
give real-time data on capacity utilization.
When either thin-provisioned VMDKs or snapshots are being used it is critically
important to ensure that datastore space utilization is actively monitored.

Alarms
Alarms are notifications that are set on events or conditions for an object. For example,
the vCenter Server administrator can configure an alarm on disk usage percentage, to
be notified when the amount of disk space used by a datastore reaches a certain level.
The vSphere administrator can set alarms on all managed objects in the inventory.
When an alarm is set on a parent entity, such as a cluster, all child entities inherit the
alarm. Alarms cannot be changed or overridden at the child level.
Alarms should be used to generate notifications when specific disk utilization thresholds
are reached. By default vSphere will generate warnings when a datastore exceeds 75%
capacity allocated, and an alert is raised when it exceeds 85%. These defaults can be
changed and should be selected to generate effective notifications to the administrators.
For example, if a datastore holds only thick-provisioned VMDKs and snapshots will not
be used, it may be safe to pre-allocate over 90% of the datastore and change the
warning and alert levels accordingly. In contrast, if a datastore contains very dynamic
thin-provisioned VMDKs, or contains VMs that will have a number of active snapshots
running, the default alarm levels may be more appropriate.
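As a rough illustration of how the default thresholds behave, here is a minimal shell sketch (a hypothetical helper, not a VMware command) that classifies datastore usage the way the built-in alarm levels described above would:

```shell
# Classify datastore space usage against alarm thresholds.
# Defaults match vSphere's built-ins: warning at 75%, alert at 85%.
datastore_alarm_state() {
  # $1 = capacity (GB), $2 = used (GB), $3 = warning %, $4 = alert %
  local capacity=$1 used=$2 warn=${3:-75} alert=${4:-85}
  local pct=$(( 100 * used / capacity ))
  if [ "$pct" -ge "$alert" ]; then echo "alert"
  elif [ "$pct" -ge "$warn" ]; then echo "warning"
  else echo "ok"; fi
}

datastore_alarm_state 1000 700         # ok
datastore_alarm_state 1000 800         # warning
datastore_alarm_state 1000 860         # alert
datastore_alarm_state 1000 920 90 95   # warning (relaxed thresholds)
```

A thick-provisioned datastore with no snapshots might safely run with the relaxed 90/95 thresholds shown on the last line.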

Raw Device Mapping


While most virtual machine disks are provisioned as VMDK files stored on VMFS or
NFS datastores, the vSphere Raw Device Mapping virtual disk type provides a
mechanism for a virtual machine to have more direct access to LUNs on the physical
storage subsystem. The RDM disk type is available only on block-based storage arrays.
An RDM is a mapping file stored on a VMFS volume, usually in the default virtual
machine directory, that acts as a proxy for the raw physical storage device. The RDM
mapping file contains metadata for managing and redirecting the disk access to the
physical device.
The mapping file gives some of the advantages of direct access to a physical device
while keeping some advantages of a virtual disk in VMFS.
As a result, it merges the benefits of VMFS manageability with the raw device access.
Various terms are used to describe an RDM, such as mapping a raw device into a
datastore, mapping a system LUN, or mapping a disk file to a physical disk volume.
You can use the vSphere Client to add raw LUNs to virtual machines.
You can also use vMotion to migrate virtual machines with RDMs as long as both the
source and target hosts have access to the raw LUN. Additional benefits of RDM
include support for distributed file locking, permissions, and naming functions.
VMware recommends using VMFS datastores for most virtual disk storage.

RDM Compatibility Modes


There are two compatibility modes available for RDMs, virtual and physical.
An RDM in virtual compatibility mode appears to the virtual machine exactly like a
VMFS virtual disk. It provides the benefits of VMFS, such as advanced file locking for
data protection, and supports snapshots. However, the real hardware characteristics of
the storage disk are hidden from the virtual machine.
With RDMs in physical compatibility mode, the VMkernel passes all SCSI commands
directly to the device except for the Report LUNs command. Because of this, all
characteristics of the underlying storage are exposed to the virtual machine. However,
blocking Report LUNs prevents the virtual machine from discovering any other SCSI
devices except for the device mapped by the RDM file. This SCSI command capability
is useful when the virtual machine is running SAN management agents or SCSI target-based software.
For RDMs in physical compatibility mode, you cannot convert RDMs to virtual disks and
you cannot perform operations such as Storage vMotion, migration, or cloning. Also,
you can only relocate RDMs to VMFS5 datastores. VMFS5 supports RDMs in physical
compatibility mode that are larger than 2TB.
vSphere 5.5 introduces support for 62TB (virtual mode) RDMs.
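For reference, an RDM mapping file in either compatibility mode can be created with vmkfstools; the device identifier and file paths below are placeholders:

```shell
# Create an RDM mapping file on a VMFS datastore with vmkfstools.

# Virtual compatibility mode (-r): VMFS features such as snapshots available
vmkfstools -r /vmfs/devices/disks/naa.600508b1001c0123 \
    /vmfs/volumes/datastore1/vm1/vm1_rdm.vmdk

# Physical compatibility mode (-z): SCSI commands passed through to the device
vmkfstools -z /vmfs/devices/disks/naa.600508b1001c0123 \
    /vmfs/volumes/datastore1/vm1/vm1_prdm.vmdk
```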

Uses for RDMs


You might need to use raw LUNs with RDMs in situations when SAN snapshots or other
layered applications run in a virtual machine. RDMs can be used in this way to enable
scalable backup off-loading systems by using features inherent to the SAN.
You may also need to use RDMs in Microsoft Cluster Services (MSCS) clustering
scenarios that span physical hosts in virtual-to-virtual clusters and physical-to-virtual
clusters. In this case, the cluster data and quorum disks should be configured as RDM
rather than as files on a shared VMFS.
A separate LUN is required for each RDM used by a virtual machine.

Functionality Supported Using Larger VMDK and vRDMS


vSphere 5.5 increases the maximum size of a virtual machine disk file (or VMDK) to a
new limit of 62TB. The previous limit was 2TB minus 512 bytes.
62TB VMDKs are supported on VMFS5 and NFS.
There are no specific virtual hardware requirements; however ESXi 5.5 is required.
The maximum size of a virtual Raw Device Mapping (non-pass-thru) is also increasing,
from 2TB minus 512 bytes to 62TB.
Support for 64TB physical (pass-thru) Raw Device Mappings was introduced in vSphere
5.0.
Virtual machine snapshots also support this new size for delta disks that are created
when a snapshot is taken of the virtual machine.
This new size meets the scalability requirements of all application types running in
virtual machines.
New VMDKs can be deployed at sizes up to 62TB, and existing 2TB VMDKs can be
grown offline.

VMDirectPath I/O
VMDirectPath I/O is a vSphere feature that allows an administrator to reserve specific
hardware devices, such as Fibre Channel HBAs or network adapters, for use by a
specific virtual machine. With VMDirectPath I/O the physical device is presented directly
to the VM guest.
The Guest must fully support the hardware and will require drivers to be installed and
full configuration of the services associated with the device must be performed within
the Guest OS.
This may be required for situations where the VM has to have complete control of the
storage hardware at the HBA level. In some cases the performance needs of the VM
workload may require this level of control and this can only be provided by using
VMDirectPath IO to reserve the required devices and present them directly to the VM
Guest.
The main drawback is that any VM configured to use VMDirectPath I/O is effectively
locked to the host it is running on and cannot use vMotion, HA, FT, DRS or other
cluster techniques where vSphere may need to move the virtual machine to another
host. VM snapshots are also not supported, as vSphere has no visibility into the directly
managed storage.

FC Storage Area Networks


Fibre Channel SANs are a network storage solution that provides block-based storage.
Fibre Channel stores virtual machine files remotely on an FC storage area network
(SAN). An FC SAN is a specialized high-speed network that connects your hosts to
high-performance storage devices.
The network shown uses Fibre Channel protocol to transport SCSI traffic from virtual
machines to the FC SAN devices.
In this configuration, a host connects to a SAN fabric, which consists of Fibre Channel
switches and storage arrays, using a Fibre Channel adapter. LUNs from a storage array
become available to the host. You can access the LUNs and create datastores for your
storage needs.
The datastores use the VMFS format.
Fibre Channel SANs can be complex to deploy and typically tend to be more expensive
than SANs based on other protocols. However, they are highly resilient and offer high
performance.
You should configure storage redundancy in order to create availability, scalability and
performance. Accessing the same storage through different transport protocols, such as
iSCSI and Fibre Channel, at the same time is not supported.
When you use ESXi with a SAN, certain restrictions apply. ESXi does not support FC
connected tape devices. ESXi supports a variety of SAN storage systems in different
configurations.
Generally, VMware tests ESXi with supported storage systems for basic connectivity,
HBA failover, and so on. Not all storage devices are certified for all features and
capabilities of ESXi, and vendors might have specific positions of support with regard to
ESXi.
You should observe these tips for preventing problems with your SAN configuration:

Place only one VMFS datastore on each LUN.
Do not change the path policy the system sets for you unless you understand the
implications of making such a change.
Document everything. Include information about zoning, access control, storage,
switch, server, FC HBA configuration and software, firmware versions, and storage
cable plan.
Plan for failure.

iSCSI Storage Area Networks


iSCSI SANs use Ethernet connections between computer systems, or host servers, and
high performance storage subsystems.
The SAN components include iSCSI host bus adapters (HBAs) or Network Interface
Cards (NICs) in the host servers, switches and routers that transport the storage traffic,
cables, storage processors (SPs), and storage disk systems.
iSCSI networks can also be complex to implement. iSCSI solutions can leverage any
form of IP network, but in most cases they will be used on a high-performance Ethernet
network. With the combination of the iSCSI multipathing that vSphere supports and
10Gb Ethernet networks, iSCSI can deliver extremely high performance.
When designing an iSCSI storage network you should create a dedicated network. This
removes the opportunity for contention with other traffic types, which can decrease
latency and increase performance. A VLAN is a viable method to segment traffic, as
long as the physical link is not over-subscribed, which can increase contention and
latency. You should also limit the number of switches that traffic needs to traverse. This
reduces latency and complexity. You should also configure jumbo frames end-to-end to
reduce protocol overhead and improve throughput.
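As a sketch, jumbo frames on a standard vSwitch and its iSCSI VMkernel interface can be enabled with esxcli (the switch and interface names are placeholders; the physical switches and the array ports must also be set to the same MTU for an end-to-end result):

```shell
# Set the vSwitch MTU so it can carry jumbo frames
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# Set the iSCSI VMkernel interface to the same MTU
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
```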
You should observe these tips for avoiding problems with your SAN configuration:

Place only one VMFS datastore on each LUN. Multiple VMFS datastores on one
LUN are not recommended.
Do not change the path policy the system sets for you unless you understand the
implications of making such a change.
Document everything; include information about configuration, access control,
storage, switch, server and iSCSI HBA configuration, software and firmware
versions, and storage cable plan.
Plan for failure.

Network Attached Storage - NAS


Like iSCSI SANs, NAS devices also provide storage across IP networks.
However, unlike FC or iSCSI SANs, NAS devices present network shares to the hosts
they are connected to and handle storage at the file level, while SANs present raw
storage at the block level.
For vSphere solutions this means that NAS devices manage their own file systems,
controlling access, authentication and file locking, while SAN storage has to be
formatted with VMFS. vSphere only supports NFS version 3 NAS shares.
NAS solutions can deliver extremely high performance, high capacity resilient storage
for vSphere. NIC teaming should be used to provide resilience.
Multiple NICs connected to the IP-storage network via distributed vSwitches can also be
configured with Load Based Teaming to provide scalable NFS load balancing.

VSA Enables Storage High Availability (HA)


A VSA cluster leverages the computing and local storage resources of several ESXi
hosts. The VSA is a cost-effective solution in which virtual appliances use local storage
in ESXi hosts to provide a set of datastores that are accessible to all hosts within the
cluster, without requiring a separate SAN or NAS.
An ESXi host that runs the VSA and participates in a VSA cluster is a VSA cluster
member. VSA clusters can be configured with either two or three members.
A VSA cluster provides various benefits that include the following:

Saving money - Less than half the cost of storage hardware alternatives
Easy installation - with just a few mouse clicks, even on existing virtualized
environments (brown-field installation)
Add more storage anytime - Add more disks without disruption as your storage
needs expand
Get High Availability in a few clicks
Minimize application downtime - Migrate virtual machines from host to host, with
no service disruption
Eliminate any single point of failure - Provide resilient data protection that
eliminates any single point of failure within your IT environment
Manage multiple VSA clusters centrally - Run one or more VSA-enabled
clusters from one vCenter Server for centralized management of distributed
environments


The VSA does have some setup limitations which you should consider before
implementing the VSA Cluster:

VSA is not intended to be a high-end, high-capacity enterprise storage cluster
solution. It is targeted at the small-to-medium business market.
Across 3 hosts the maximum amount of useable storage that a VSA 5.5 cluster
can support is 36TB.
Decide on a 2-member or 3-member VSA cluster. You cannot add another VSA
cluster member to a running VSA cluster. For example, you cannot extend a
2-member VSA cluster with another member.
vCenter Server must be installed and running before you create the VSA cluster.
Consider the vSphere HA admission control reservations when determining the
number of virtual machines and the amount of resources that your cluster
supports.
The VSA cluster that includes ESXi version 5.0 hosts does not support memory
overcommitment for virtual machines. Because of this, you should reserve the
configured memory of all non-VSA virtual machines that use VSA datastores so
that you do not overcommit memory.

VSA 5.5 Capacity


In VSA 5.5, up to eight 3TB disk drives can be used by the VSA appliance per ESXi
host. This provides a usable capacity of 18TB per host after RAID 6 overheads. Across
a 3-node VSA 5.5 cluster this delivers an effective total usable capacity of 27TB of
resilient storage.
For disks with capacity of 2TB or less an ESXi 5.5 host can have up to 12 local disks or
up to 16 disks in an expansion chassis. This supports up to 24TB of usable VMFS5
Capacity per host and a total usable storage capacity of up to 36TB of resilient storage
across a 3 node VSA 5.5 cluster.
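The arithmetic behind the 18TB and 27TB figures above can be sketched as follows, assuming RAID 6 costs two drives' worth of parity per host and that VSA mirrors each datastore across nodes (network RAID 1), which halves the cluster-wide usable figure:

```shell
# Worked arithmetic for the VSA 5.5 capacity figures (3TB drive case)
drives=8; drive_tb=3
per_host_tb=$(( (drives - 2) * drive_tb ))    # RAID 6 keeps (n-2) drives -> 18
cluster_usable_tb=$(( 3 * per_host_tb / 2 ))  # 3 nodes, mirrored across nodes -> 27
echo "$per_host_tb $cluster_usable_tb"        # 18 27
```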

Running vCenter on the VSA Cluster


Prior to vSphere 5.1, vCenter Server had to be installed somewhere else before a VSA
cluster could be deployed. Running the vCenter Server from the VSA datastore was not
supported.
In VSA 5.1 and later, you can install a vCenter Server instance on a local datastore of a
VSA cluster host.
The vCenter Server instance can then be used to install VSA, allocating for VSA a
subset of local storage on all hosts, excluding the amount already allocated for vCenter Server.
Running vCenter in the VSA Datastore is only supported when there are three host
nodes in the VSA 5.5 cluster. In this case it can initially be installed on a local datastore
and then migrated to the VSA datastore once that has been configured.

Drive Types
All vSphere storage solutions ultimately require physical storage hardware that is either
installed locally in vSphere hosts or are managed by a remote SAN or NAS array.
These have traditionally been hard disk drives, precision mechanical devices that store
data on extremely rigid spinning metal or ceramic disks coated in magnetic materials.
Hard disks use a variety of interfaces but most enterprise disks today use either Serial
Attached SCSI, SAS, Near-Line Serial Attached SCSI, NL-SAS or Serial ATA.

Drive Types - SAS
SAS currently provides 6 gigabit per second interface speeds, which allows an
individual disk to transfer data at up to 600 megabytes per second, provided the disk
can support that sort of transfer rate. SAS also supports a number of advanced queuing,
error recovery and error reporting technologies that make it ideal for enterprise storage.
SAS drives are available in 7.2K, 10K and 15K rpm speeds. These speeds determine
the effective performance limit that drives can sustain in continuous operation, with 7.2K
drives delivering about 75-100 I/O operations per second (IOPS) and 15K drives
generally delivering at most around 210 IOPS.
SAS drives are specifically manufactured and designed to deliver high reliability. A key
metric for hard drive reliability is the Bit Error Rate, which indicates the
volume of data that can be read from the drive before a single bit error is experienced.
For SAS drives this is typically 1 in 10^16. This number is sufficiently high to guarantee
that most drives will not experience an error during their supported lifetime. SAS drives
are also rated for relatively long Mean Time Between Failure of somewhere in the
region of 1.6 million hours, or about 180 years.
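A rough drive-count sizing sketch using the per-drive IOPS figures quoted above (the 90 IOPS value for 7.2K drives is an assumed midpoint of the 75-100 range; real sizing must also account for RAID write penalties, caching and workload mix):

```shell
# How many drives are needed to sustain a target IOPS level?
# Integer ceiling division: (a + b - 1) / b
drives_needed() {  # $1 = target IOPS, $2 = per-drive IOPS
  echo $(( ($1 + $2 - 1) / $2 ))
}

drives_needed 2000 210   # 15K SAS at ~210 IOPS -> 10 drives
drives_needed 2000 90    # 7.2K SAS at ~90 IOPS (assumed) -> 23 drives
```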

Drive Types - NL-SAS


NL-SAS drives are similar to SAS drives in that they use the same interface, but the underlying
drive technology is not as robust. NL-SAS drives' BER numbers are about 10 times lower than
SAS, and the mean time between failures is about 25% lower. NL-SAS drives tend to be larger
capacity, use slower 7.2K or even 5.4K rpm speeds, and are typically used where peak
performance or ultimate reliability is not required. I/O performance of NL-SAS disks tends to be
in the 50-100 IOPS range.

Drive Types - Solid State Drives


Solid state drives are a relatively new technology that eliminates the mechanical disk in favor
of solid state flash memory. They are extremely fast, with per-drive performance ranging from
5,000 IOPS to over 750,000 IOPS. SSDs are typically substantially smaller and much more
expensive per gigabyte than hard disk drives.

Drive Types - SATA
SATA is an older interface that is primarily used in non-server environments. It has few of the
high-end reliability and queuing features that are required for server storage, and hard drives
that use SATA have been phased out of SAN and NAS solutions over the past few years.
Performance is somewhat slower than otherwise identical NL-SAS drives.

Solid State Disks Enablement


Solid state disks/devices (SSDs) are very resilient and provide faster access to data.
They also provide several advantages, particularly for workloads that require extremely
high IOPS.
SSD enablement in vSphere provides several benefits.
On ESXi 5.x, you can use VMFS datastores that are created on SSD storage devices to
allocate space for ESXi host cache.
On ESXi 5.5 and later, you can use local SSDs for features such as Flash Read Cache.
When used appropriately as part of a vSphere storage solution, the high I/O throughput
achieved by SSDs can be used to help increase virtual machine consolidation ratios
significantly.
ESXi 5.x allows guest operating systems to identify their virtual disks as virtual SSDs
which can help the guest operating system optimize its internal disk I/O for operation on
SSDs.
This virtual SSD functionality is only supported on virtual hardware version 8 or later,
ESXi 5.x hosts, and VMFS5 or later.
An ESXi host can automatically distinguish SSDs from regular hard drives. However
some SSDs may not be automatically identified and ESXi will not make optimal use of
them until they are identified.

You can use PSA SATP claim rules to manually tag SSD devices that are not detected
automatically.
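One way such a claim rule can be applied from the host command line is shown below (the naa.* device identifier is a placeholder for a real device ID):

```shell
# Tag a device as SSD with a PSA SATP claim rule
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL \
    --device=naa.600508b1001c0123 --option="enable_ssd"

# Reclaim the device so the new rule takes effect
esxcli storage core claiming reclaim -d naa.600508b1001c0123
```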
vSphere 5.5 introduces support for Hot-Pluggable PCIe SSD devices.
SSD drives can be added to or removed from a running ESXi host. This reduces
virtual machine downtime in the same way that traditional hot-plug SAS/SATA disks
have done.
The hardware and BIOS of the server must support Hot-Plug PCIe.

Storage Tradeoffs
There are many trade-offs involved in storage design. At a fundamental level there are
broad choices to be made in terms of the cost, features and complexity when selecting
which storage solutions to include.
Direct attached storage and the VSA are low cost storage solutions that are relatively
simple to configure and maintain.
The principal drawbacks are the lack of support for most advanced features with DAS
and the limited scale and performance capability of the VSA. As a result, both of these
are primarily suited to small environments where cost is a principal driver and the
advanced cluster features are less compelling.
Among SAN and NAS solutions, FC SANs are typically high cost, with iSCSI and NAS
solutions tending to be more prevalent at the lower end.
All three can support all advanced cluster-based vSphere functionality and deliver very
high capacity and performance with exceptional resilience. There are relatively few
truly entry-level FC solutions, but iSCSI and NFS solutions scale from the consumer
price level up, and there are a number of entry-level iSCSI and NAS solutions that are
on the HCL and supported.
FC tends to be dominant at the high end and in demanding environments where
performance is critical but there are iSCSI and NAS solutions at the very high end too.

The choice of which is most suitable will often come down to a customer preference for
an Ethernet/IP storage network over a dedicated FC network, or to whether specific
OEM capabilities are better suited to the customer's needs than the alternatives.

Design Limits - Knowledge Check


There are important limits that may affect design decisions for virtual storage.
Match the correct values to the boxes to complete the description of these limits.

Virtual Storage Types - Knowledge Check


Complete the statements on virtual storage types.
Match the words with their correct positions in the sentences.

Thick Provisioning - Knowledge Check


Thick-provisioned disks can be allocated as lazy-zero or eager-zero. Can you recall the
properties of each?

Usable Capacity - Knowledge Check

Module Summary
This concludes module 1, vSphere vStorage Architecture.
Now that you have completed this module, you should be able to:

Explain the high level vSphere Storage Architecture.


Describe the capacity and performance requirements for Virtual Machine
Storage.
Describe the types of physical storage vSphere can utilize and explain the
features, benefits and limitations of each type.

Module 2: Advanced Features for Availability and Performance


This is module 2, Advanced Features for Availability and Performance.
These are the topics that will be covered in this module.

Module 2 Objectives
At the end of this module, you will be able to:

Explain and demonstrate multipathing and the PSA architecture


Explain Storage I/O Control
Compare the features of Datastore Clusters and Storage DRS with isolated
storage features
Describe Storage Hardware Acceleration (VAAI)
Compare VMware API for Storage Awareness (VASA) with plain storage
Explain Profile-Driven Storage

Pluggable Storage Architecture


The storage I/O path provides virtual machines with access to storage devices through
device emulation. This device emulation allows a virtual machine to access files on a
VMFS or NFS file system as if they were SCSI devices. The VMkernel provides storage
virtualization functions such as the scheduling of I/O requests from multiple virtual
machines and multipathing. In addition, VMkernel offers several Storage APIs that
enable storage partners to integrate and optimize their products for vSphere.
To understand the role this plays, it is useful to look at how I/O requests are handled by
the VMkernel. A virtual machine uses SCSI emulation, which redirects I/O requests
from the guest to the VMkernel.
pass through a number of layers that handle translation through a VMDK disk format,
snapshots, VMFS and NFS datastores and a SCSI disk emulation layer or it may be
handed directly to a Raw Device Mapping. Regardless of the virtual disk type, the I/O
request is then handled by the Logical Device I/O Scheduler which manages the I/O
requests from all virtual machines to and from physical devices. The I/O Scheduler
passes the requests to the Pluggable Storage Architecture (PSA) which manages the
path and failover policy for each physical storage device.
It is the PSA which controls the specific path of each I/O request.

Processing I/O Requests


By default, ESXi's Pluggable Storage Architecture provides the VMware Native
Multipathing Plug-in, or NMP.
NMP is an extensible module that manages sub-plug-ins.
There are two types of NMP sub-plug-ins, Storage Array Type Plug-ins or SATPs, and
Path Selection Plug-ins or PSPs. SATPs and PSPs can be built-in and are provided by
VMware.
They can also be provided by a third-party vendor.
When a virtual machine issues an I/O request to a storage device managed by the
NMP, the NMP calls the PSP assigned to this storage device.
The PSP then selects an appropriate physical path for the I/O to be sent. The NMP
reports the success or failure of the operation.
If the I/O operation is successful, the NMP reports its completion.
However, if the I/O operation reports an error, the NMP calls an appropriate SATP.
The SATP checks the error codes and, when appropriate, activates inactive paths. The
PSP is called to select a new path to send the I/O.
You need to understand how the PSA manages paths to your storage in order to
correctly specify the hardware configuration of your hosts and to ensure that your
storage network is configured optimally for performance and resilience.
Extending PSA
PSA is an open, modular framework that coordinates the simultaneous operation of
multiple multipathing plug-ins or MPPs.
The PSA framework also supports the installation of third-party plug-ins that can replace
or supplement vStorage native components which we have just seen.
These plug-ins are developed by software or storage hardware vendors and integrate
with the PSA.
They improve critical aspects of path management and add support for new path
selection policies and new arrays, currently unsupported by ESXi.
Third-party plug-ins are of three types: third-party SATPs, third-party PSPs, and third-party MPPs.
Third-party SATPs are generally developed by third-party hardware manufacturers, who
have expert knowledge of their storage devices.
These plug-ins are optimized to accommodate specific characteristics of the storage
arrays and support the new array lines.
You need to install third-party SATPs when the behavior of your array does not match
the behavior of any existing PSA SATP.
When installed, the third-party SATPs are coordinated by the NMP. They can be
simultaneously used with the VMware SATPs.

The second type of third-party plug-ins are third-party PSPs.
They provide more complex I/O load balancing algorithms. Generally, these plug-ins are
developed by third-party software companies and help you achieve higher throughput
across multiple paths.
When installed, the third-party PSPs are coordinated by the NMP.
They can run alongside, and be simultaneously used with the VMware PSPs.
The third type, third-party MPPs, can provide entirely new fault tolerance and
performance behavior.
They run in parallel with the VMware NMP. For certain specified arrays, they replace the
behavior of the NMP by taking control over the path failover and load-balancing
operations.
When the host boots up or performs a re-scan, the PSA discovers all physical paths to
storage devices available to the host.
Based on a set of claim rules defined in the /etc/vmware/esx.conf file, the PSA
determines which multipathing module should claim the paths to a particular device and
become responsible for managing the device.
For the paths managed by the NMP module, another set of rules is applied in order to
select SATPs and PSPs.
Using these rules, the NMP assigns an appropriate SATP to monitor physical paths and
associates a default PSP with these paths.
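The claim-rule and plug-in assignments described above can be inspected on a host with a few read-only esxcli commands:

```shell
# Inspect how the PSA has claimed storage devices on a host
esxcli storage core claimrule list    # claim rules (which MPP claims which paths)
esxcli storage nmp satp list          # available SATPs and their default PSPs
esxcli storage nmp device list        # per-device SATP and PSP actually in use
```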

Knowledge Check - PSA


The PSA controls the specific path of each I/O request.
Match the terms with their correct positions in the sentence.


Multipathing
To maintain a constant connection between an ESXi host and its storage, ESXi
supports multipathing.
Multipathing is the technique of using more than one physical path for transferring data
between an ESXi host and an external storage device.
In case of a failure of any element in the SAN network, such as HBA, switch, or cable,
ESXi can fail over to another physical path.
In addition to path failover, multipathing offers load balancing for redistributing I/O loads
between multiple paths, thus reducing or removing potential bottlenecks.
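Conceptually, path failover works like this minimal sketch. The path names follow the familiar vmhbaN:C:T:L convention, but the class and its behavior are illustrative assumptions, not a real ESXi API:

```python
# Minimal sketch of path failover: I/O uses the current active path; if it
# fails, the next available path is selected so I/O continues.

class MultipathDevice:
    def __init__(self, paths):
        self.paths = list(paths)          # e.g. ["vmhba1:C0:T0:L0", ...]
        self.failed = set()

    def active_path(self):
        """Return the first path that has not failed."""
        for p in self.paths:
            if p not in self.failed:
                return p
        raise IOError("all paths down")

    def fail_path(self, path):
        """Record a failure of an element along this path (HBA, switch, cable)."""
        self.failed.add(path)

dev = MultipathDevice(["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"])
print(dev.active_path())          # vmhba1:C0:T0:L0
dev.fail_path("vmhba1:C0:T0:L0")  # e.g. HBA or cable failure
print(dev.active_path())          # failover to vmhba2:C0:T0:L0
```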


FC Multipathing
To support path switching with Fibre Channel or FC SAN, an ESXi host typically has
two or more HBAs available, from which the storage array can be reached using one or
more switches.
Alternatively, the setup can include one HBA and two storage processors (SPs) so
that the HBA can use a different path to reach the disk array.
In the graphic shown, multiple paths connect each ESXi host with the storage device for
a FC storage type.
In FC multipathing, if HBA1 or the link between HBA1 and the FC switch fails, HBA2
takes over and provides the connection between the server and the switch. The process
of one HBA taking over for another is called HBA failover.
Similarly, if SP1 fails or the links between SP1 and the switches break, SP2 takes over
and provides the connection between the switch and the storage device.
This process is called SP failover.
The multipathing capability of ESXi supports both HBA and SP failover.


iSCSI Multipathing
Multipathing between a server and storage array provides the ability to load-balance
between paths when all paths are present and to handle failures of a path at any point
between the server and the storage.
Multipathing is a de facto standard for most Fibre Channel SAN environments.
In most software iSCSI environments, multipathing is possible at the VMkernel network
adapter level, but it is not the default configuration.
In a VMware vSphere environment, the default iSCSI configuration for VMware ESXi
servers creates only one path from the software iSCSI adapter (vmhba) to each iSCSI
target.
To enable failover at the path level and to load-balance I/O traffic between paths, the
administrator must configure port binding to create multiple paths between the software
iSCSI adapters on ESXi servers and the storage array. Without port binding, all iSCSI
LUNs will be detected using a single path per target.
By default, ESXi will use only one VMkernel NIC (vmknic) as the egress port to connect
to each target, and you will be unable to use path failover or to load-balance I/O
between different paths to the iSCSI LUNs.
This is true even if you have configured network adapter teaming using more than one
uplink for the VMkernel port group used for iSCSI.


In the case of simple network adapter teaming, traffic will be redirected at the network
layer to the second network adapter during connectivity failure through the first network
card, but failover at the path level will not be possible, nor will load balancing between
multiple paths.
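The effect of port binding on path count can be expressed as a back-of-the-envelope calculation. This is an illustrative model, not an ESXi API, and assumes one path per bound vmknic per target:

```python
# Why port binding matters: without it, the software iSCSI adapter uses a
# single VMkernel NIC, giving one path per target; with port binding, each
# bound vmknic contributes an additional path per target.

def iscsi_paths(targets, bound_vmknics):
    """Paths to iSCSI LUNs = bound VMkernel NICs x targets (1 NIC if none bound)."""
    return max(1, bound_vmknics) * targets

print(iscsi_paths(targets=2, bound_vmknics=0))  # default config: 2 (one per target)
print(iscsi_paths(targets=2, bound_vmknics=2))  # with port binding: 4 paths
```

With four paths instead of two, the NMP can both fail over and load-balance I/O at the path level rather than relying on network adapter teaming.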


Storage I/O Resource Allocation


VMware vSphere provides mechanisms to dynamically allocate storage I/O resources,
allowing critical workloads to maintain their performance even during peak load periods
when there is contention for I/O resources. This allocation can be performed at the level
of the individual host or for an entire datastore.
The primary aim of Storage I/O Control (SIOC) is to prevent a single virtual machine
from taking all of the I/O bandwidth to a shared datastore.
With SIOC disabled (the default), all hosts accessing a datastore get an equal portion of
that datastore's resources. In this example, the low-business-priority virtual machine
running a data mining application is sharing a datastore with two other important
business VMs, but is hogging all of the datastore's resources.
By enabling Storage I/O Control on the datastore, the Online Store and Microsoft
Exchange Virtual Machines which have a higher business importance can now be
assigned a priority when contention arises on that datastore.
Priority of virtual machines is established using the concept of shares. The more
shares a VM has, the more bandwidth it gets to a datastore when contention arises.
Although a disk shares mechanism existed in the past, it was only respected by VMs on
the same ESX host, so it was of little use on shared storage accessed by multiple
ESX hosts.


Storage I/O Control enables the honoring of share values across all ESX hosts
accessing the same datastore.
Storage I/O Control is best used for avoiding contention and thus poor performance on
shared storage.
It gives you a way of prioritizing which VMs are critical and which are not so critical from
an I/O perspective.

SIOC has several requirements and limitations.


Datastores that are Storage I/O Control-enabled must be managed by a single
vCenter Server system.
Storage I/O Control is supported on Fibre Channel-connected, iSCSI-connected,
and NFS-connected storage.
Raw Device Mapping (RDM) is not supported.
Storage I/O Control does not support datastores with multiple extents.
Check the VMware Storage/SAN compatibility guide at http://bit.ly/Xxtyh to verify
whether your automated tiered storage array has been certified to be compatible
with Storage I/O Control.

Storage I/O Control is enabled by default on Storage DRS-enabled datastore clusters.


When the average normalized datastore latency exceeds a set threshold, the datastore
is considered to be congested.
Storage I/O Control then begins to distribute the available storage resources to virtual
machines in proportion to their shares.
Configure shares on virtual disks, based on relative importance of the virtual machine.
As a minimum, SIOC requires an Enterprise Plus license.
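The share mechanism can be illustrated with a small calculation. The VM names and share values below are assumptions chosen to mirror the example; SIOC's real algorithm also factors in measured device latency, so this is only a proportional sketch:

```python
# Sketch of share-based allocation: when the datastore is congested, each
# VM's slice of the available IOPS is proportional to its shares.

def allocate_iops(total_iops, shares_by_vm):
    """Split total_iops across VMs in proportion to their share values."""
    total_shares = sum(shares_by_vm.values())
    return {vm: total_iops * s / total_shares for vm, s in shares_by_vm.items()}

shares = {"OnlineStore": 2000, "Exchange": 2000, "DataMining": 500}
alloc = allocate_iops(total_iops=9000, shares_by_vm=shares)
print(alloc)  # OnlineStore and Exchange get 4000 each; DataMining gets 1000
```

Raising the important VMs' shares relative to the data mining VM is exactly how priority is expressed when contention arises.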


Datastore Clusters
A datastore cluster is a collection of datastores with shared resources and a shared
management interface. Datastore clusters are to datastores what clusters are to hosts.
When you create a datastore cluster, you can use vSphere Storage DRS to manage
storage resources.
A datastore cluster enabled for Storage DRS is a collection of datastores working
together to balance capacity and I/O latency.
When you add a datastore to a datastore cluster, the datastore's resources become part
of the datastore cluster's resources.
As with clusters of hosts, you use datastore clusters to aggregate storage resources,
which enables you to support resource allocation policies at the datastore cluster level.
The following resource management capabilities are also available per datastore
cluster:

Space Utilization Load Balancing


I/O Latency Load Balancing
Anti-affinity Rules


Datastore Cluster Requirements


Datastores and hosts that are associated with a datastore cluster must meet certain
requirements to use datastore cluster features successfully.
Datastore clusters must contain similar or interchangeable datastores.
A datastore cluster can contain a mix of datastores with different sizes and I/O
capacities, and the datastores can be from different arrays and vendors.
However, the following types of datastores cannot coexist in a datastore cluster.
NFS and VMFS datastores cannot be combined in the same datastore cluster.
Replicated datastores cannot be combined with non-replicated datastores in the same
Storage-DRS-enabled datastore cluster.
All hosts attached to the datastores in a datastore cluster must be ESXi 5.0 and later.
If datastores in the datastore cluster are connected to ESX/ESXi 4.x and earlier hosts,
Storage DRS does not run.
Datastores shared across multiple data centers cannot be included in a datastore
cluster.
As a best practice, do not include datastores that have hardware acceleration enabled
in the same datastore cluster as datastores that do not have hardware acceleration
enabled.
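The coexistence rules above can be sketched as a simple validation routine. This is an illustrative model; the data structures and messages are assumptions, not vSphere APIs:

```python
# Hypothetical check of datastore-cluster coexistence rules: no mixing of
# NFS with VMFS, no mixing of replicated with non-replicated datastores,
# and all attached hosts must run ESXi 5.0 or later.

def validate_cluster(datastores, host_versions):
    """Return 'ok' or the first rule the proposed cluster violates."""
    types = {d["type"] for d in datastores}
    if {"NFS", "VMFS"} <= types:
        return "NFS and VMFS cannot be combined"
    replicated = {d["replicated"] for d in datastores}
    if len(replicated) > 1:
        return "replicated and non-replicated cannot be combined"
    if any(v < (5, 0) for v in host_versions):
        return "all hosts must be ESXi 5.0 or later"
    return "ok"

ds = [{"type": "VMFS", "replicated": False},
      {"type": "VMFS", "replicated": False}]
print(validate_cluster(ds, host_versions=[(5, 1), (5, 5)]))  # ok
```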



Datastores in a datastore cluster must be homogeneous to guarantee hardware
acceleration-supported behavior.
A datastore cluster has certain vSphere Storage vMotion requirements.
The host must be running a version of ESXi that supports Storage vMotion. The host
must have write access to both the source datastore and the destination datastore.
The host must have enough free memory resources to accommodate Storage vMotion.
The destination datastore must have sufficient disk space.
Finally, the destination datastore must not be in maintenance mode or entering
maintenance mode.


Storage DRS
Storage Distributed Resource Scheduler (SDRS) allows you to manage the aggregated resources
of a datastore cluster.
When Storage DRS is enabled, it provides recommendations for virtual machine disk placement
and migration to balance space and I/O resources across the datastores in the datastore cluster.
Storage DRS provides the following functions:
Initial placement of virtual machines based on storage capacity and, optionally, on I/O latency.
Storage DRS provides initial placement and ongoing balancing recommendations to datastores in
a Storage DRS-enabled datastore cluster.
Use of Storage vMotion to migrate virtual machines based on storage capacity. The default
setting is to balance usage when a datastore becomes eighty percent full. Consider leaving more
available space if snapshots are used often or multiple snapshots are kept.
Storage DRS provides for the use of Storage vMotion to migrate virtual machines based on I/O
latency.
When I/O latency on a datastore exceeds the threshold, Storage DRS generates recommendations
or performs Storage vMotion migrations to help alleviate high I/O load.
Use of fully automated, storage maintenance mode to clear a LUN of virtual machine files.
Storage DRS maintenance mode allows you to take a datastore out of use in order to service it.
Storage DRS can also be configured in either manual or fully automated mode, and provides for
the use of affinity and anti-affinity rules to govern virtual disk location.



The automation level for a datastore cluster specifies whether or not placement and migration
recommendations from Storage DRS are applied automatically.
You can create Storage DRS anti-affinity rules to control which virtual disks should not be
placed on the same datastore within a datastore cluster. By default, a virtual machine's virtual
disks are kept together on the same datastore.
For example, you can improve the performance of an application by keeping the application disk
on a datastore separate from the operating system disk.
You can create a scheduled task to change Storage DRS settings for a datastore cluster so that
migrations for fully automated datastore clusters are more likely to occur during off-peak hours.
Backing up virtual machines can add latency to a datastore.
You can schedule a task to disable Storage DRS behavior for the duration of the backup.
Always use Storage DRS when possible.
As a minimum, Storage DRS requires an Enterprise Plus license.
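The two Storage DRS triggers can be summarized in a small sketch. The 80 percent space threshold matches the default mentioned above, and 15 ms is the default I/O latency threshold; the function itself is an illustrative simplification of the real recommendation engine:

```python
# Simplified view of the Storage DRS triggers: a migration recommendation
# is generated when space use exceeds the threshold (80 percent by
# default) or when I/O latency exceeds its threshold (15 ms by default).

def sdrs_recommendation(used_gb, capacity_gb, latency_ms,
                        space_threshold=0.80, latency_threshold_ms=15):
    if used_gb / capacity_gb > space_threshold:
        return "migrate: space threshold exceeded"
    if latency_ms > latency_threshold_ms:
        return "migrate: I/O latency threshold exceeded"
    return "no action"

print(sdrs_recommendation(used_gb=850, capacity_gb=1000, latency_ms=5))
print(sdrs_recommendation(used_gb=500, capacity_gb=1000, latency_ms=30))
print(sdrs_recommendation(used_gb=500, capacity_gb=1000, latency_ms=5))
```

In manual mode these recommendations are simply presented to the administrator; in fully automated mode the Storage vMotion migrations are performed automatically.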


VAAI Overview
vStorage APIs for Array Integration (VAAI) provide hardware acceleration functionality.
VAAI enables your host to offload specific virtual machine and storage management
operations to compliant storage hardware.
With the storage hardware assistance, your host performs these operations faster and
consumes less CPU, memory, and storage fabric bandwidth.
VAAI uses the following Block Primitives:

Atomic Test & Set (ATS), which is used during creation and locking of files on the
VMFS volume, replacing SCSI reservations for metadata updates. NFS uses its
own locking mechanism, so does not use SCSI reservations.
Clone Blocks/Full Copy/XCOPY is used to copy or migrate data within the same
physical array.
Zero Blocks/Write Same is used to zero-out disk regions.
In ESXi 5.x, support for NAS Hardware Acceleration is included with support for
these primitives.

Full File Clone - Like the Full Copy VAAI primitive provided for block arrays, this
Full File Clone primitive enables virtual disks to be cloned by the NAS device.
Native Snapshot Support - Allows creation of virtual machine snapshots to be
offloaded to the array.
Extended Statistics - Enables visibility to space usage on NAS datastores and is
useful for Thin Provisioning.



Reserve Space - Enables creation of thick virtual disk files on NAS.
Thin Provisioning support in ESXi 5.x introduced reporting of out-of-space conditions on
SCSI LUNs; instead of failing, an affected VM is stunned until space becomes available.
NFS servers already provide this information, which is propagated up the stack.
Thin Provisioning in ESXi 5.x allows the ESXi host to tell the array when the space
previously occupied by a virtual machine (whether it is deleted or migrated to another
datastore) can be reclaimed on thin provisioned LUNs.
Block Delete in ESXi 5.x hosts allows for space to be reclaimed using the SCSI UNMAP
feature.
Dead space reclamation is not an issue on NAS arrays.
vSphere 5.5 introduces a new and simpler VAAI UNMAP/Reclaim command.
There are also two major enhancements in vSphere 5.5: the reclaim size can now be
specified in blocks rather than as a percentage value, and dead space can now be
reclaimed in increments rather than all at once.
Block primitives are enabled by default on ESXi hosts. NAS primitives require a NAS
plug-in from the array vendor, and the implementation of these primitives may vary, so
ensure that you check with the NAS vendor. As a minimum, VAAI requires an Enterprise
license.
Ultimately, the goal of VAAI is to help storage vendors provide hardware assistance to
speed up VMware I/O operations that are more efficiently accomplished in the storage
hardware. Without the use of VAAI, cloning or migration of virtual machines by the
vSphere VMkernel Data Mover involves software data movement.
In nearly all cases, hardware data movement will perform significantly better than
software data movement, and it will consume fewer CPU cycles and less bandwidth on
the storage fabric.
As an example, support for the ATS primitive allows a greater number of virtual disks on
a single datastore, which gives you more flexibility when choosing your storage design.
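The benefit of the Full Copy/XCOPY offload can be illustrated with a rough model of how many bytes pass through the host during a clone. This is illustrative only; real savings depend on the array, the workload, and how much of the copy the array accepts:

```python
# Rough illustration of why XCOPY/Full Copy helps: cloning a disk without
# VAAI moves every byte through the host twice (read from source, write to
# destination); with the offload, the host issues copy commands and the
# array moves the data internally.

def host_bytes_moved(disk_gb, vaai_enabled):
    """Bytes that transit the host's storage fabric links during a clone."""
    disk_bytes = disk_gb * 1024**3
    if vaai_enabled:
        return 0              # array performs the copy internally
    return 2 * disk_bytes     # host reads then writes every byte

print(host_bytes_moved(40, vaai_enabled=False))  # 85899345920
print(host_bytes_moved(40, vaai_enabled=True))   # 0
```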


vSphere Storage APIs - Storage Awareness (VASA)


vSphere Storage APIs - Storage Awareness (VASA) is a set of APIs that permit arrays
to integrate with vCenter for management functionality.
VASA allows a storage vendor to develop a software component (a vendor provider) for
its storage arrays.
Storage vendor providers allow vCenter Server to retrieve information from storage
arrays including topology, capabilities and status.
The vendor provider is third-party software, provided by the storage provider, that is
installed on the storage array or management device.
The vCenter Server uses the vendor provider to retrieve the status, topology, and
capabilities of the storage array. Information from the vendor provider
is displayed in the VMware vSphere Client.
The vendor provider exposes three pieces of information to vCenter Server. Storage
topology lists information regarding the physical storage array elements.
Storage capabilities list the storage capabilities and the services that the storage array
offers, which can be used in Storage Profiles.
Storage state displays the health status of the storage array, including alarms and
events for configuration changes.



Vendor providers provide a number of benefits.
The storage system information presented by the storage vendors is visible in vCenter
Server.
This provides a complete "end-to-end" view of your infrastructure from the vCenter
Server. The storage capabilities information presented by the storage providers appears
as system-defined entries for Storage Profiles.
When you use the vendor provider functionality, certain requirements and
considerations apply.
See the vSphere Compatibility guide at
http://partnerweb.vmware.com/comp_guide2/search.php or check with your storage
vendor, who can provide you with information as to whether your storage supports
vendor providers.
The vendor provider cannot run on the same host as the vCenter Server. The vendor
provider must have bi-directional trust with the vSphere Storage Management Service,
via an SSL certificate exchange.
Both block storage and file system storage devices can use vendor providers.
Fibre Channel over Ethernet (FCoE) does not support vendor providers.
A single vCenter Server can simultaneously connect to multiple different vendor
providers. It is possible to have a different vendor provider for each type of physical
storage device available to your host.


Knowledge Check - Storage Vendor Providers


Storage Vendor Providers allow vCenter Server to retrieve information from storage
arrays. Which three statements about Vendor Providers are correct?


Profile-Driven Storage
Managing datastores and matching the SLA requirements of virtual machines with the
appropriate datastore can be a challenging and cumbersome task.
Profile-driven storage enables the creation of datastores that provide varying levels of
service. Profile-driven storage can be used to do the following: Categorize datastores
based on system-defined or user-defined levels of service.
The capabilities of the storage subsystem can be identified by using VMware Storage
APIs for Storage Awareness (VASA). Storage vendors can publish the capabilities of
their storage to VMware vCenter Server, which can display the capabilities in the user
interface.
User-defined means storage capabilities can be identified by the user (for non-VASA
systems). For example, user-defined levels might be gold, silver and bronze.
Profile-Driven Storage will reduce the amount of manual administration required for
virtual machine placement while improving virtual machine SLA storage compliance.
Virtual machine storage profiles can be associated to virtual machines and periodically
checked for compliance to ensure that the virtual machine is running on storage with the
correct performance and availability characteristics.
Use compliance checks to ensure a virtual machine is always on appropriate storage.
Find non-compliant virtual machines and correct the error via Storage vMotion.
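A compliance check can be sketched as a simple capability match. The profile names (gold, bronze) and datastore capabilities below are user-defined examples in the spirit of the section above, not real VASA output:

```python
# Sketch of a storage-profile compliance check: a VM is compliant when the
# datastore it lives on offers the capability named in its storage profile.

def compliant(vm_profile, datastore_capabilities):
    return vm_profile in datastore_capabilities

vms = {"Exchange": "gold", "DataMining": "bronze"}
datastore_caps = {"ds-ssd": {"gold"}, "ds-sata": {"bronze", "silver"}}
placement = {"Exchange": "ds-ssd", "DataMining": "ds-ssd"}

for vm, profile in vms.items():
    ok = compliant(profile, datastore_caps[placement[vm]])
    print(vm, "compliant" if ok else "non-compliant: Storage vMotion to fix")
```

Here the DataMining VM would be flagged as non-compliant because its bronze profile does not match the gold-capable datastore it was placed on; moving it to ds-sata with Storage vMotion would restore compliance.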


Profile-Driven Storage delivers these benefits by taking advantage of full integration
with VASA, enabling use of the storage characterization supplied by storage vendors. It
supports NFS, iSCSI, and Fibre Channel (FC) storage, and all storage arrays on the
HCL.
It enables the vSphere administrator to tag storage based on customer- or business-specific descriptions.
Use storage characterizations and/or administrator-defined descriptions to create virtual
machine placement rules in the form of storage profiles. These provide an easy means to
check a virtual machine's compliance with the rules, ensuring a virtual machine is not
deployed or migrated to an incorrect type of storage without the administrator being
informed.
A limitation of Profile-Driven Storage is that the administrator must explicitly understand
the virtual machine's requirements when choosing the appropriate profile for it. If the
administrator does not understand the virtual machine's storage requirements, the
virtual machine may not be placed on the best tier of storage.
The administrator must decide which tier is the best fit for the virtual machine even
though it may be running a range of disparate workloads not best suited to just one tier.
As a minimum, Profile-Driven Storage requires an Enterprise Plus level license.


Knowledge Check - Storage I/O Control


See if you can recall Storage I/O Control license requirements.
Which statement correctly describes license requirements for Storage I/O Control?


Knowledge Check - Datastore Clusters


When creating datastore clusters, you need to be aware of certain restrictions and guidelines.
Which three statements are valid considerations for creating datastore clusters?


Module Summary
This concludes module 2, vSphere vStorage Advanced Features.
Now that you have completed this module, you should be able to:

Explain and demonstrate multipathing and the PSA architecture


Explain Storage I/O Control
Compare the features of Datastore Clusters and Storage DRS with Isolated
storage features
Describe Storage Hardware Acceleration (VAAI)
Compare VMware API for Storage Awareness (VASA) with plain storage
Explain Profile Driven Storage
Now that you have completed this module, feel free to review it until you are ready to
start the next module.


Module 3: Determining Proper Storage Architecture


This is module 3, Determining Proper Storage Architecture.
This is the topic that will be covered in this module.


Module Objectives
Using a sample customer, this module will help you to make design choices with regard
to base VM capacity and performance needs; define requirements for snapshots,
SDRS, template storage and planned growth rates; and explore utilization and
performance trade-offs with shared storage consolidation decisions.


Performance and Capacity Scenario


Carla is the owner of a growing business. She has approached your company to help
size, scale and implement a suitable storage solution. Your initial consultation has
provided you with the following information about Carla and her business:
Currently they have 10 virtual machines, which will rise to 50 virtual machines after
consolidation. These all reside on two ESXi 5.1 servers without any shared storage; all
virtual machine disks are on local storage. The business is growing, so the solution will
need to accommodate additional workloads.
They anticipate that total server/storage needs will be 12TB. Capacity analysis
indicates that they will require 10TB of capacity for file and general-purpose storage,
1.5TB for high-performance databases, and 500GB for a mission-critical email system.
The large application and database servers will use 150 IOPS per virtual machine.
The small application servers will use 50 IOPS per virtual machine.
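These figures can be turned into a quick sizing calculation. The scenario does not state how the 50 VMs split between large and small servers, so the 10/40 split below is an assumption for illustration; the capacity tiers are taken directly from the text:

```python
# Worked sizing from the scenario figures. The VM split between large and
# small servers is an assumption (10 large, 40 small); the capacity tiers
# come straight from the capacity analysis.

capacity_tb = 10 + 1.5 + 0.5           # file/general + databases + email
large_vms, small_vms = 10, 40          # assumed split of the 50 VMs
peak_iops = large_vms * 150 + small_vms * 50

print(f"total capacity: {capacity_tb} TB")         # 12.0 TB, matching the estimate
print(f"aggregate IOPS to size for: {peak_iops}")  # 3500
```

Whatever the actual split, sizing the array against the aggregate IOPS as well as raw capacity is the point: a 12TB array that cannot sustain the combined I/O load would still fail the workload.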


Performance and Capacity Scenario


Carla has the following business requirements:
The business has good in-house Ethernet expertise, and wants to leverage this. The
business requires an easy way to migrate from and provision new virtual machines. The
customer plans to implement a Microsoft Cluster in the near future.
The business cannot afford to have any downtime.


Which Solution to Offer


Which solution is a best fit for your customer?
Select an answer and click Submit.


Which Solution to Offer


iSCSI-SAN: Given the customer's requirements, this is the best fit. They can leverage
their in-house Ethernet expertise to implement the storage, as well as roll out a Microsoft
cluster in the near future. The shared storage will allow them to use Storage
vMotion to migrate the virtual machines off their local storage. iSCSI multipathing will
ensure that the design meets the high-availability requirement.
The iSCSI array should be sized with your immediate to mid-term scaling goals in mind.
Think about availability, performance (IOPS, Mbps, latency) and finally about capacity.
NFS is not the best match for the customer's requirements. Because they are planning to
implement a Microsoft cluster, block-based storage as provided by FC and iSCSI will
allow them to support more cluster configurations.
FC-SAN is not the best match for the customer's requirements, as they have good
internal Ethernet experience and wish to capitalize on it. The NAS and iSCSI options
both allow the customer to use Ethernet for their storage networking, while an FC SAN
will not.


Snapshots, SDRS, Templates Scenario


Carla is happy with the choice of an iSCSI array. The business wants to further optimize
storage utilization on the array.
The business wants to optimize the way virtual disks configured on the storage array
use the maximum amount of capacity.
The business wants to quickly deploy virtual machines from existing virtual machines to
scale up to meet demand.
They want to be able to use test and dev machines and roll back any changes
easily, as well as use their third-party backup solution.
The business wants to optimize the initial placement of running virtual machines and I/O
load to be based on the available storage capacity of datastores.


Which Solution to Offer


Carla needs to be sure she's making the right choices. Can you match the solutions to her
queries?


Which Solution to Offer


Thin Provisioned - ESXi supports thin provisioning for virtual disks. With the disk-level
thin provisioning feature, you can create virtual disks in a thin format. For a thin virtual
disk, ESXi provisions the entire space required for the disk's current and future
activities, for example 40GB. However, the thin disk uses only as much storage space
as the disk needs for its initial operations. In this example, the thin-provisioned disk
occupies only 20GB of storage. As the disk requires more space, it can grow into its
entire 40GB provisioned space.
Templates - The customer can create a template to create a master image of a virtual
machine from which you can deploy many virtual machines.
Snapshots - A snapshot is a reproduction of the virtual machine just as it was when you
took the snapshot. The snapshot includes the state of the data on all virtual machine
disks and the virtual machine power state (on, off, or suspended). This will fulfill the
customer's test, dev, and backup requirements.
Storage DRS - Storage Distributed Resource Scheduler (SDRS) allows you to manage
the aggregated resources of a datastore cluster. When Storage DRS is enabled, it
provides recommendations for virtual machine disk placement and migration to balance
space and I/O resources across the datastores in the datastore cluster. Consider
configuring Storage DRS to balance datastore utilization at 80 percent. Leave 10-20
percent additional capacity available to accommodate snapshots, swap files, and log
files.


Which Solution to Offer


Carla has a few questions regarding some of the advanced features.


Which Solution to Offer


1. Which set of APIs permit arrays to integrate with vCenter for management
functionality?
vSphere Storage APIs - Storage Awareness (VASA) is a set of APIs that permit
arrays to integrate with vCenter for management functionality.
VASA allows a storage vendor to develop a software component (a Vendor
Provider) for its storage arrays.
Storage Vendor Providers allow vCenter Server to retrieve information from
storage arrays including topology, capabilities and status.
2. Which VAAI primitive allows space to be reclaimed on the datastore?
The SCSI UNMAP VAAI primitive allows vSphere to inform the storage array that
previously active storage locations are no longer in use, freeing up previously
dead space for reuse when hardware based thin provisioning is in use.
This allows a storage array to recover thin-provisioned capacity at the array level
when files or data on a VMFS datastore are deleted.
3. Which mechanism can we use to dynamically allocate Storage I/O resources?
Storage IO Control, SIOC, allows vSphere to dynamically allocate storage I/O
resources, allowing critical workloads to maintain their performance even during
peak load periods when there is contention for I/O resources.
This allocation can be performed at the level of the individual host or for an entire
datastore.

Module Summary
By now, you should be able to:

Make design choices with regard to determining base VM capacity and


performance needs
Define requirements for snapshots, SDRS, template storage and planned growth
rates
Explore utilization and performance trade-offs with shared storage consolidation
decisions
