Rohit Handa
Lecturer, CSE-IT Department
IBM-ICE Program, BUEST Baddi
Topics: Virtualization Technologies, Hypervisor
- Virtualization assigns a logical name for a physical resource and then provides a
pointer to that physical resource when a request is made. Virtualization provides a
means to manage resources efficiently because the mapping of virtual resources to
physical resources can be both dynamic and facile.
- Virtualization is dynamic in that the mapping can be assigned based on rapidly
changing conditions, and it is facile because changes to a mapping assignment can be
nearly instantaneous.
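The dynamic, facile mapping described above can be sketched as a small lookup table. The class and resource names below are illustrative only, not taken from any real virtualization product.

```python
# Minimal sketch of dynamic virtual-to-physical resource mapping.
# All names are illustrative; real hypervisors do this in hardware/firmware.

class ResourceMapper:
    def __init__(self):
        self._map = {}  # virtual name -> physical resource id

    def assign(self, virtual_name, physical_id):
        # Remapping is "facile": a single table update, nearly instantaneous.
        self._map[virtual_name] = physical_id

    def resolve(self, virtual_name):
        # A request for a virtual resource returns a pointer (id) to the
        # physical resource currently backing it.
        return self._map[virtual_name]

mapper = ResourceMapper()
mapper.assign("vdisk0", "san-lun-17")
print(mapper.resolve("vdisk0"))        # san-lun-17
mapper.assign("vdisk0", "san-lun-42")  # dynamic remap under changing conditions
print(mapper.resolve("vdisk0"))        # san-lun-42
```

The point of the indirection is that consumers only ever hold the virtual name, so the physical backing can change underneath them without their involvement.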
These are among the different types of virtualization that are characteristic of cloud
computing:
- Access: A client can request access to a cloud service from any location.
- Application: A cloud has multiple application instances and directs requests to an
instance based on conditions.
- CPU: Computers can be partitioned into a set of virtual machines with each machine
being assigned a workload. Alternatively, systems can be virtualized through load-
balancing technologies.
- Storage: Data is stored across storage devices and often replicated for redundancy.
Traditional Architecture
Virtual Machines
VM Isolation
1. Strong Multiplexing
a. Run multiple VMs on a single physical host
b. Processor hardware isolates VMs
2. Strong Guarantees
a. Software bugs, crashes, viruses within one VM cannot affect other VMs
3. Performance Isolation
a. Partition system resources
b. e.g., VMware's controls for reservations, limits, and shares
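The reservation/limit/shares idea above can be sketched as proportional-share allocation. The function and numbers below are illustrative only, not VMware's actual admission-control algorithm.

```python
# Illustrative proportional-share CPU allocation in the spirit of
# reservation / limit / shares controls (not VMware's real algorithm).

def allocate_cpu(total_mhz, vms):
    """vms: list of dicts with 'reservation', 'limit', 'shares'."""
    # Step 1: every VM gets at least its reservation (guaranteed minimum).
    alloc = [vm["reservation"] for vm in vms]
    spare = total_mhz - sum(alloc)
    # Step 2: divide the remainder in proportion to shares, capped by limit.
    total_shares = sum(vm["shares"] for vm in vms)
    for i, vm in enumerate(vms):
        extra = spare * vm["shares"] / total_shares
        alloc[i] = min(alloc[i] + extra, vm["limit"])
    return alloc

vms = [
    {"reservation": 500,  "limit": 2000, "shares": 1000},
    {"reservation": 1000, "limit": 4000, "shares": 3000},
]
print(allocate_cpu(4000, vms))  # [1125.0, 2875.0]
```

A VM with three times the shares receives three times the spare capacity, which is how one VM's load is kept from starving the others.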
VM Encapsulation
a. Entire VM is a file: the OS, applications, data, memory, and device states can be stored as a file and migrated to another system
b. Snapshots & clones: Capture VM state on the fly and restore to a point in time; enables rapid system provisioning, backup, and remote mirroring
c. Easy content distribution: Pre-configured apps, demos, virtual appliances
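Encapsulation means the whole machine state is just data, so snapshot, clone, and migrate all reduce to copying and serializing that data. The state fields below are invented for illustration, not a real VM image format.

```python
# Sketch of VM encapsulation: the entire VM state is a data structure,
# so it can be snapshotted, saved to a file, and restored elsewhere.
# The state fields are illustrative, not a real VM image format.
import copy
import json

vm = {"os": "Linux", "memory_pages": {"0x00": "boot"}, "devices": {"ide0": "disk.img"}}

snapshot = copy.deepcopy(vm)          # point-in-time snapshot, taken on the fly
vm["memory_pages"]["0x01"] = "dirty"  # VM keeps running and mutating its state

serialized = json.dumps(snapshot)     # "entire VM is a file"
restored = json.loads(serialized)     # restore / migrate to another system
print(restored["memory_pages"])       # {'0x00': 'boot'} -- pre-snapshot state
```

Restoring the serialized state on a different host is exactly the migration and remote-mirroring use case listed above.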
VM Compatibility
1. Hardware Independent: Physical hardware is hidden by virtualization layer, standard
virtual hardware is exposed to VM
2. Create Once, Run Anywhere: No configuration issues & Migrate VMs between hosts
3. Legacy VMs: Run ancient OS on new platform. E.g. DOS VM drives virtual IDE and
vLance devices, mapped to modern SAN and GigE hardware.
Uses of Virtualization:
1. Test & Development: Rapidly provision test and development servers, store libraries
of pre-configured test machines
2. Business Continuity: Reduces cost and complexity by encapsulating entire systems into single files that can be replicated and restored onto any target server.
3. Cost: Reduce costs by consolidating services onto the fewest number of physical machines.
Limitations
- Overloading of servers
- VM migration is possible only if both physical machines use the same manufacturer's processor.
Full Virtualization
- Full virtualization is a technique in which a complete installation of one machine is
run on another.
- The result is a system in which all software running on the server is within a virtual
machine.
- This sort of deployment allows not only unique applications to run, but also different
operating systems.
- Virtualization is relevant to cloud computing because it is one of the ways in which you
will access services on the cloud. That is, the remote datacenter may be delivering your
services in a fully virtualized format.
- In order for full virtualization to be possible, specific hardware combinations had to be used. It wasn't until 2005 that the introduction of the AMD Virtualization (AMD-V) and Intel Virtualization Technology (IVT) extensions made it easier to go fully virtualized.
- The hypervisor interacts directly with the physical server's CPU and disk space.
- The hypervisor keeps each virtual server completely independent and unaware of the
other virtual servers running on the physical machine.
- Each guest server runs its own OS, e.g., one guest running on Linux and another on Windows.
- The hypervisor monitors the physical server's resources.
- As virtual servers run applications, the hypervisor relays resources from the physical
machine to the appropriate virtual server.
- Hypervisors have their own processing needs, which means the physical server must reserve some processing power and resources to run the hypervisor itself. This can impact overall server performance and slow down applications.
- Full virtualization provides sufficient emulation of the underlying platform that a guest operating system and application set can run unmodified, unaware that their platform is being virtualized.
- Providing a full emulation of the platform means that all platform devices are emulated in enough detail to permit the guest OS to manipulate them at their native level (such as register-level interfaces).
- While full virtualization comes with a performance penalty, the technique permits running unmodified operating systems, which is ideal, particularly when source code is unavailable, such as with proprietary operating systems.
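A toy model of the register-level emulation described above: the guest driver pokes device "registers" exactly as it would on real hardware, unaware that software sits behind them. The device and its register layout are invented for illustration.

```python
# Toy full-emulation model: the hypervisor emulates a device at register
# level, so an unmodified guest driver manipulates "registers" exactly as
# it would on real hardware. The register layout here is invented.

class EmulatedSerialPort:
    DATA, STATUS = 0x0, 0x1           # register offsets (illustrative)

    def __init__(self):
        self.output = []

    def write_reg(self, offset, value):
        if offset == self.DATA:       # guest writes a byte to the data register
            self.output.append(chr(value))

    def read_reg(self, offset):
        if offset == self.STATUS:     # guest polls status: always ready here
            return 0x1
        return 0

dev = EmulatedSerialPort()
for ch in "ok":
    while not dev.read_reg(EmulatedSerialPort.STATUS):
        pass                          # unmodified guest driver busy-waits
    dev.write_reg(EmulatedSerialPort.DATA, ord(ch))
print("".join(dev.output))            # ok
```

Every register poke crossing from guest to emulator is the overhead that paravirtualization, discussed next, tries to eliminate.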
Benefits
- Sharing a computer system among multiple users
- Isolating users from each other and from the control program
- Emulating hardware on another machine
Para Virtualization
- The fundamental issue with full virtualization is the emulation of devices within the
hypervisor.
- A solution to this problem is to make the guest operating system aware that it's being
virtualized.
- With this knowledge, the guest OS can short-circuit its drivers to minimize the overhead of communicating with physical devices.
- In this way, the guest OS drivers and hypervisor drivers integrate with one another to
efficiently enable and share physical device access.
- Low-level emulation of devices is removed, replaced with cooperating guest and
hypervisor drivers.
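The cooperating-driver idea can be sketched as a split driver over a shared queue. The names below are illustrative, loosely modeled on Xen-style frontend/backend pairs, not any real driver API.

```python
# Sketch of the paravirtual split-driver idea: the guest "frontend" driver
# places requests on a shared ring and the hypervisor "backend" driver
# services them, with no register-level device emulation in between.
# Names are illustrative, loosely inspired by Xen frontend/backend pairs.
from collections import deque

shared_ring = deque()                 # shared memory ring between guest and host
disk = {}                             # backend's real storage

def frontend_write(block, data):
    # Runs in the guest OS: enqueue a request instead of poking registers.
    shared_ring.append(("write", block, data))

def backend_process():
    # Runs in the hypervisor: drain the ring and perform real device I/O.
    while shared_ring:
        op, block, data = shared_ring.popleft()
        if op == "write":
            disk[block] = data

frontend_write(7, b"hello")
backend_process()
print(disk[7])                        # b'hello'
```

One queue operation replaces the many trapped register accesses of full emulation, which is where the performance gain comes from.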
Pros
- Improved Performance
Cons
- The downside of paravirtualization is that the guest OS must be modified to integrate hypervisor awareness.
OS Level Virtualization
An OS-level virtualization approach doesn't use a hypervisor at all.
Instead, the virtualization capability is part of the host OS, which performs all the
functions of a fully virtualized hypervisor.
The biggest limitation of this approach is that all the guest servers must run the same OS. Because all the guest operating systems must be the same, this is called a homogeneous environment.
Examples: FreeBSD Jails, Solaris Containers.
Shared kernel virtualization, or operating system virtualization, takes advantage of the architectural design of Linux and UNIX based operating systems.
o At the core of the operating system is the kernel. The kernel handles all the
interactions between the operating system and the physical hardware.
o The second key component is the root file system which contains all the libraries,
files and utilities necessary for the operating system to function.
Under shared kernel virtualization the virtual guest systems each have their own root
file system but share the kernel of the host operating system.
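The kernel/root-file-system split can be modeled in a few lines. The classes below are an illustrative toy, not a real container runtime.

```python
# Toy model of shared-kernel virtualization: every container has its own
# private root file system, but all system calls are handled by the one
# host kernel. Classes are illustrative, not a real container runtime.

class Kernel:
    def syscall(self, container, name):
        return f"{name} handled for {container.hostname} by shared kernel"

class Container:
    def __init__(self, hostname, kernel):
        self.hostname = hostname
        self.kernel = kernel                       # shared with every guest
        self.rootfs = {"/etc/hostname": hostname}  # private root file system

host_kernel = Kernel()
c1 = Container("web01", host_kernel)
c2 = Container("db01", host_kernel)

assert c1.kernel is c2.kernel   # one kernel, shared by all guests
assert c1.rootfs != c2.rootfs   # separate root file systems
print(host_kernel.syscall(c1, "open"))
```

Because there is only one kernel object, every guest must be compatible with it, which is exactly the homogeneity limitation described above.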
Operating system-level virtualization is a server virtualization method where
the kernel of an operating system allows for multiple isolated user-space instances,
instead of just one.
Such instances (often called containers, virtualization engines (VE), virtual private
servers (VPS) or jails) may look and feel like a real server, from the point of view of its
owner.
In addition to isolation mechanisms, the kernel often provides resource management
features to limit the impact of one container's activities on the other containers.
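The resource-management idea can be sketched as a per-container CPU quota. The accounting scheme below is invented for illustration; real kernels implement this with mechanisms such as cgroups.

```python
# Sketch of kernel resource management for containers: a per-container CPU
# quota keeps one busy container from starving the others. The accounting
# scheme is invented for illustration (real kernels use cgroups and similar).

class QuotaScheduler:
    def __init__(self, quotas):
        self.quotas = dict(quotas)        # container -> max CPU ms per period
        self.used = {c: 0 for c in quotas}

    def run(self, container, requested_ms):
        # Grant at most the container's remaining quota for this period.
        allowed = min(requested_ms, self.quotas[container] - self.used[container])
        self.used[container] += allowed
        return allowed                    # ms actually granted

sched = QuotaScheduler({"web": 50, "batch": 50})
print(sched.run("batch", 200))  # greedy container is capped at its quota: 50
print(sched.run("web", 30))     # others still get their full request: 30
```

The cap is enforced in the shared kernel, so no cooperation from the container is needed.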
Uses
Operating system-level virtualization is commonly used in virtual
hosting environments, where it is useful for securely allocating finite hardware
resources amongst a large number of mutually-distrusting users.
System administrators may also use it, to a lesser extent, for consolidating server
hardware by moving services on separate hosts into containers on the one server.
Other typical scenarios include separating several applications to separate containers
for improved security, hardware independence, and added resource management
features.
Advantages:
This form of virtualization usually imposes little or no overhead, because programs in a virtual partition use the operating system's normal system call interface and do not need to be subject to emulation or run in an intermediate virtual machine, as is the case with whole-system virtualizers or paravirtualizers.
It also does not require hardware assistance to perform efficiently.
It is also easier to back up, more space-efficient, and simpler to cache than the block-level copy-on-write schemes common on whole-system virtualizers.
Disadvantages
Operating system-level virtualization is not as flexible as other virtualization approaches, since it cannot host a guest operating system different from the host one, or a different guest kernel. For example, with Linux, different distributions are fine, but other OSes such as Windows cannot be hosted.
Hardware-assisted Virtualization
Hardware-assisted virtualization is a platform virtualization approach that enables efficient full virtualization using help from hardware capabilities, primarily from the host processors.
Full virtualization is used to simulate a complete hardware environment, or virtual
machine, in which an unmodified guest operating system (using the same instruction
set as the host machine) executes in complete isolation.
Hardware-assisted virtualization was added to x86 processors (Intel VT-x or AMD-V) in
2006.
Hardware-assisted virtualization is also known as accelerated virtualization.
Server hardware is virtualization aware: the hypervisor and VMM load at privilege Ring -1 (firmware), which removes the CPU emulation bottleneck.
First-generation enhancements include Intel Virtualization Technology (VT-x) and AMD's AMD-V, which both target privileged instructions with a new CPU execution mode feature that allows the VMM to run in a new root mode below ring 0.
Advantages
Hardware-assisted virtualization reduces the maintenance overhead of paravirtualization, as it reduces (ideally, eliminates) the changes needed in the guest operating system.
It is also considerably easier to obtain better performance.
Disadvantages
Hardware-assisted virtualization requires explicit support in the host CPU, which is not available on all x86/x86_64 processors.
A "pure" hardware-assisted virtualization approach, using entirely unmodified guest operating systems, involves many VM traps, and thus high CPU overheads, limiting scalability and the efficiency of server consolidation. This performance hit can be mitigated by the use of paravirtualized drivers; the combination has been called "hybrid virtualization".
Characteristics:
1. VMM provides an environment for programs which is essentially identical with the
original machine
2. Programs run in this environment show at worst only minor decreases in speed
3. The VMM is in complete control of system resources
Uses
Consider a hardware virtualization hypervisor when you need to perform any of the following:
System consolidation: Virtualization hypervisors support the operation of multiple
systems on the same physical hardware, reducing costs and the physical server footprint
while delivering similar and often improved services.
System testing: Hypervisors support the isolation of systems, letting you test new
software and applications without affecting production. They also provide a low-cost
testing alternative to physical systems.
Heterogeneous system operation: Hypervisors support the simultaneous execution of
multiple operating systems on the same physical hardware, letting organizations run
heterogeneous systems on reduced hardware footprints.
Hardware optimization: Hypervisors increase hardware usage through the operation of
multiple workloads on each physical host server. Server usage can increase from 5% to
10% to upwards of 60% or 70%.
Application high availability: By sharing workloads through technologies such as
failover clustering, servers running virtualization hypervisors can support application
high availability and ensure that services are always available when running inside VMs.
Resource optimization: By running different applications in separate VMs, hypervisors
can increase resource use because each application requires a number of resources at
different times.
Service flexibility: Because hypervisors support the operation of systems through VMs,
organizations gain flexibility because VMs are easier to clone and reproduce than physical
machines.
Dynamic resource management: Virtualization hypervisors support manual or
automated resource allocation to VM workloads as peak usage occurs. Because of this,
hypervisors provide better support for dynamic resource allocation in data centers.
Advantages
From the standpoint of cloud computing, these features enable VMMs to manage
application provisioning, provide for machine instance cloning and replication, allow for
graceful system failover, and provide several other desirable features.
Disadvantages
The downside of virtual machine technologies is that having resources indirectly
addressed means there is some level of overhead.
Type 1 Hypervisor
Type 1 (or native, bare metal) hypervisors run directly on the host's hardware to
control the hardware and to manage guest operating systems.
A guest operating-system thus runs on another level above the hypervisor.
This model represents the classic implementation of virtual-machine architectures.
A Type-1 hypervisor is a type of client hypervisor that interacts directly with hardware
that is being virtualized.
It is completely independent from the operating system, unlike a Type-2 hypervisor,
and boots before the operating system (OS).
Currently, Type-1 hypervisors are being used by all the major players in the desktop
virtualization space.
The Type 1 hypervisor is often referred to as a hardware virtualization engine.
A Type 1 hypervisor provides better performance and greater flexibility because it
operates as a thin layer designed to expose hardware resources to virtual
machines (VMs), reducing the overhead required to run the hypervisor itself.
Because a Type 1 hypervisor runs directly on the hardware, it is a function in and of
itself. Servers that run Type 1 hypervisors are often single-purpose servers that offer
no other function. They become part of the resource pool and are designed specifically
to support the operation of multiple applications within various VMs.
Because they run directly on the hardware, Type 1 hypervisors support hardware
virtualization.
Type 1 hypervisors are VM monitors that are designed to keep track of all of the events that occur within a VM and, when required, provide -- or deny -- access to appropriate resources to meet VM operating requirements. Ideally, the VM monitor will perform its operations through the use of policies that contain all of the settings assigned to a particular VM.
A bare-metal virtualization hypervisor does not require admins to install a server operating system first.
Disadvantage
Hardware support is typically more limited, because the hypervisor usually has limited
device drivers built into it.
Bare-metal virtualization is well suited for enterprise data centers, because it usually
comes with advanced features for resource management, high availability and security.
Advantages:
No host operating system is required, they are installed on bare metal.
Bare-metal virtualization means the hypervisor has direct access to hardware
resources, which results in better performance, scalability and stability.
Examples
VMware ESX
Oracle VM
LynxSecure
Type 2 Hypervisor
Type 2 (or hosted) hypervisors run within a conventional operating-system environment.
With the hypervisor layer as a distinct second software level, guest operating systems run at the third level above the hardware.
They are installed over an operating system and are referred to as Type 2 or hosted VMs.
A Type-2 hypervisor is a type of client hypervisor that sits on top of an operating
system.
Unlike a Type-1 hypervisor, a Type-2 hypervisor relies heavily on the operating system.
It cannot boot until the operating system is already up and running and, if for any
reason the operating system crashes, all end-users are affected. This is a big drawback
of Type-2 hypervisors, as they are only as secure as the operating system on which
they rely.
Also, since Type-2 hypervisors depend on an OS, they are not in full control of the end
user's machine.
Because they run as an application on top of an operating system, Type 2 hypervisors
perform software virtualization.
A hosted hypervisor requires you to first install an OS. These hypervisors are basically applications that install on the host OS.
Advantages
This approach provides better hardware compatibility than bare-metal virtualization,
because the OS is responsible for the hardware drivers instead of the hypervisor.
Disadvantages:
A hosted virtualization hypervisor does not have direct access to hardware and must
go through the OS, which increases resource overhead and can degrade virtual
machine (VM) performance. Also, because there are typically many services and
applications running on the host OS, the hypervisor often steals resources from the
VMs running on it.
Uses:
Hosted hypervisors are common for desktops, because they allow you to run multiple
OSes. These virtualization hypervisor types are also popular for developers, to
maintain application compatibility on modern OSes.
Examples
VMware Workstation, Oracle VirtualBox
(KVM, Microsoft Hyper-V, and Xen are sometimes grouped here because they are installed and managed from within a conventional OS, but all three are generally classified as Type 1.)
Porting Applications
Cloud computing applications can run on virtual systems, and these systems can be moved as needed to respond to demand.
Systems (VMs running applications), storage, and network assets can all be virtualized
and have sufficient flexibility to give acceptable distributed WAN application
performance.
Developers who write software to run in the cloud will undoubtedly want the ability to
port their applications from one cloud vendor to another, but that is a much more
difficult proposition. Cloud computing is a relatively new area of technology, and the
major vendors have technologies that don't interoperate with one another.
VM Migration
Live migration refers to the process of moving a running virtual machine or
application between different physical machines without disconnecting the client or
application.
Memory, storage, and network connectivity of the virtual machine are transferred from
the original host machine to the destination.
VM memory migration
Two techniques for moving the virtual machine's memory state from the source to the
destination are pre-copy memory migration and post-copy memory migration.
1. Pre-copy memory migration
a. Warm-up phase: In pre-copy memory migration, the hypervisor typically copies all the memory pages from source to destination while the VM is still running on the source. If some memory pages change (become 'dirty') during this process, they are re-copied, round after round, until the rate of re-copied pages is no less than the page-dirtying rate.
b. Stop-and-copy phase: After the warm-up phase, the VM will be stopped on the
original host, the remaining dirty pages will be copied to the destination, and the
VM will be resumed on the destination host. The time between stopping the VM
on the original host and resuming it on destination is called "down-time", and
ranges from a few milliseconds to seconds according to the size of memory and
applications running on the VM. There are some techniques to reduce live
migration down-time, such as using probability density function of memory
change.
2. Post-copy memory migration
Post-copy VM migration is initiated by suspending the VM at the source.
With the VM suspended, a minimal subset of the execution state of the VM is
transferred to the target.
The VM is then resumed at the target, even though most of the memory state of the VM
still resides at the source.
At the target, when the VM tries to access pages that have not yet been transferred, it
generates page-faults.
These faults are trapped at the target and redirected towards the source over the
network.
Such faults are referred to as network faults. The source host responds to a network fault by sending the faulted page.
Since each page fault of the running VM is redirected towards the source, this
technique can degrade performance of applications running inside the VM.
However, pure demand-paging accompanied with techniques such as pre-paging can
reduce this impact by a great extent.
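The pre-copy phases described under point 1 above can be simulated in a few lines. The page counts and the dirtying model are invented for illustration; only the shape of the algorithm is the point.

```python
# Simulation of pre-copy live migration: copy all pages, then iteratively
# re-copy pages dirtied meanwhile, and finish with a short stop-and-copy.
# Page counts and the dirtying model are invented for illustration.
import random

random.seed(0)
source = {page: f"data{page}" for page in range(100)}
dest = {}

def dirty_some_pages(n):
    # Pretend the still-running VM dirtied n pages during the last round.
    return set(random.sample(sorted(source), n))

# Warm-up phase: full copy, then shrinking rounds of dirty-page re-copies.
dest.update(source)
dirty = dirty_some_pages(20)
while len(dirty) > 5:                 # stop once few enough pages stay dirty
    for page in dirty:
        dest[page] = source[page]
    dirty = dirty_some_pages(len(dirty) // 2)

# Stop-and-copy phase: VM paused briefly; downtime ~ remaining dirty pages.
for page in dirty:
    dest[page] = source[page]
print(len(dirty), "pages copied during downtime")  # 5, versus 100 total
```

Downtime depends only on the final dirty set, which is why the warm-up rounds keep iterating until that set stops shrinking.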
When down-time of a VM during a live migration is not noticeable by the end user, it is
called a seamless live migration.
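The post-copy scheme described above, where missing pages are fetched from the source on demand, can be simulated as follows. The classes and page contents are illustrative only.

```python
# Simulation of post-copy migration: the VM resumes at the target with only
# a minimal state subset; each access to a missing page raises a "network
# fault" that fetches the page from the source. Classes are illustrative.

class PostCopyTarget:
    def __init__(self, source_memory, minimal_state):
        self.source = source_memory        # source still holds most pages
        self.memory = dict(minimal_state)  # minimal subset sent up front
        self.network_faults = 0

    def read(self, page):
        if page not in self.memory:        # page fault, redirected to source
            self.network_faults += 1
            self.memory[page] = self.source[page]
        return self.memory[page]

source_memory = {n: f"page{n}" for n in range(10)}
target = PostCopyTarget(source_memory, {0: "page0"})

target.read(3)                 # miss: page fetched over the network
target.read(3)                 # hit: page is now resident at the target
print(target.network_faults)   # 1
```

Each first touch of a page pays a network round trip, which is the performance degradation the text attributes to this technique; pre-paging would amount to fetching neighbors of a faulted page before they are touched.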