
Er. Rohit Handa
Lecturer, CSE-IT Department
IBM-ICE Program, BUEST Baddi
Topics: Virtualization Technologies, Hypervisor

Topic-1: Virtualization Technologies


- When it comes to cloud computing, the definition that best fits the context is a
collection of objects that are grouped together.
- It is this act of grouping resources into a pool that most succinctly differentiates
cloud computing from all other types of networked systems.
- Not all cloud computing applications combine their resources into pools that can be
assigned on demand to users, but the vast majority of cloud-based systems do.
- The benefits of pooling resources to allocate them on demand are so compelling as to
make the adoption of these technologies a priority. Without resource pooling, it is
impossible to attain efficient utilization, provide reasonable costs to users, and
respond quickly to changes in demand.
- When you use cloud computing, you are accessing pooled resources using a technique
called virtualization.

- Virtualization is nothing more than an abstraction over physical resources to make
them shareable by a number of users.
- Virtualization allows multiple operating system instances to run concurrently on a
single computer; it is a means of separating hardware from a single operating system.
- Virtualization, in computing, refers to
the act of creating a virtual (rather than actual) version of something, including but not
limited to a virtual computer hardware platform, operating system (OS), storage device,
or computer network resources.
- A physical server on which one or more virtual machines are running is defined as the
host. The virtual machines are called guests. A hypervisor or Virtual Machine Monitor
(VMM) is a piece of software, firmware or hardware that creates and runs virtual
machines.

- Virtualization assigns a logical name for a physical resource and then provides a
pointer to that physical resource when a request is made. Virtualization provides a
means to manage resources efficiently because the mapping of virtual resources to
physical resources can be both dynamic and facile.
- Virtualization is dynamic in that the mapping can be assigned based on rapidly
changing conditions, and it is facile because changes to a mapping assignment can be
nearly instantaneous.
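The logical-name-to-physical-resource mapping described above can be sketched in a few lines of Python. This is a toy model, not any vendor's API; the resource names are invented for illustration.

```python
# Toy sketch of virtualization's name indirection: requests use a logical
# name, and the mapping to a physical resource can be changed at any time.

class ResourceMapper:
    def __init__(self):
        self._table = {}  # logical name -> physical resource id

    def assign(self, logical, physical):
        # Remapping is a single table update, hence nearly instantaneous.
        self._table[logical] = physical

    def resolve(self, logical):
        # The pointer to the physical resource is looked up on each request.
        return self._table[logical]

mapper = ResourceMapper()
mapper.assign("vdisk0", "san-array-3/lun-12")
mapper.assign("vdisk0", "san-array-7/lun-04")  # dynamic remap under load
```

Because callers only ever hold the logical name, the remap above is invisible to them: the next request simply resolves to the new physical resource.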

These are among the different types of virtualization that are characteristic of cloud
computing:
- Access: A client can request access to a cloud service from any location.
- Application: A cloud has multiple application instances and directs requests to an
instance based on conditions.
- CPU: Computers can be partitioned into a set of virtual machines with each machine
being assigned a workload. Alternatively, systems can be virtualized through load-
balancing technologies.
- Storage: Data is stored across storage devices and often replicated for redundancy.
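The storage item above can be illustrated with a minimal in-memory sketch (invented names, dictionaries standing in for real devices): each write is replicated so a read still succeeds after a device is lost.

```python
# Toy model of storage virtualization with replication for redundancy.

class ReplicatedStore:
    def __init__(self, n_devices=3):
        self.devices = [dict() for _ in range(n_devices)]

    def write(self, key, value):
        for dev in self.devices:      # replicate across all devices
            dev[key] = value

    def read(self, key):
        for dev in self.devices:      # fall back if a device has "failed"
            if key in dev:
                return dev[key]
        raise KeyError(key)

store = ReplicatedStore()
store.write("blob-1", b"data")
store.devices[0].clear()              # simulate losing one device
```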
Traditional Architecture

Virtual Machines

VM Isolation
1. Strong Multiplexing
a. Run multiple VM on single physical host
b. Processor hardware isolates VMs
2. Strong Guarantees
a. Software bugs, crashes, viruses within one VM cannot affect other VMs
3. Performance Isolation
a. Partition system resources
b. E.g. VMware controls for reservation, limit, shares

VM Encapsulation
a. Entire VM is a file: OS, applications, data, memory and device states can be
stored as a file and migrated to another system
b. Snapshots & clones: Capture VM state on the fly and restore to point-in-time &
rapid system provisioning, backup, remote mirroring
c. Easy content Distribution: Pre-configured apps, demos, virtual appliances

VM Compatibility
1. Hardware Independent: Physical hardware is hidden by virtualization layer, standard
virtual hardware is exposed to VM
2. Create Once, Run Anywhere: No configuration issues & Migrate VMs between hosts
3. Legacy VMs: Run ancient OS on new platform. E.g. DOS VM drives virtual IDE and
vLance devices, mapped to modern SAN and GigE hardware.

Uses of Virtualization:
1. Test & Development: Rapidly provision test and development servers, store libraries
of pre-configured test machines
2. Business Continuity: Reduces cost and complexity by encapsulating entire systems
into single files that can be replicated and restored onto any target server.
3. Cost: Reduce costs by consolidating services onto the fewest number of physical
machines.

Non-Virtualized Data Centers (non-consolidated servers):
- Involve too many servers which are underutilized.
- Require a high budget and infrastructure for maintenance, networking, floor space,
cooling, power and disaster recovery.

Consolidated Server/VM Multiplexing


- Virtualization helps us break the one service per server model
- Consolidate many services into fewer machines when workload is low, reducing costs
- Conversely, as demand for a particular service increases, we can shift more virtual
machines to run that service
- We can build a data center with fewer total resources, since resources are used as
needed instead of being dedicated to single services
- Multiplex VM workloads on the same physical server
- Aggregate multiple workloads: estimate total capacity needs based on the aggregated
workload
- The performance level of each VM must be preserved
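The capacity-estimation idea in the list above can be shown with a back-of-the-envelope sketch; the utilization figures and headroom factor are illustrative assumptions, not measurements.

```python
import math

# Consolidation sizing sketch: size hosts for the aggregated workload
# instead of dedicating one server per service.

def hosts_needed(vm_loads, host_capacity, headroom=0.2):
    # Reserve headroom so each VM's performance level is preserved.
    usable = host_capacity * (1 - headroom)
    return math.ceil(sum(vm_loads) / usable)

# Ten services at ~10% utilization each fit on two consolidated hosts,
# versus the ten dedicated servers of the one-service-per-server model.
print(hosts_needed([0.10] * 10, host_capacity=1.0))  # prints 2
```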

Virtualization provides various benefits to the cloud:


- Service-based: A service-based architecture is where clients are abstracted from service
providers through service interfaces.
- Scalable and elastic: Services can be altered to affect capacity and performance on
demand.
- Shared services: Resources are pooled in order to create greater efficiencies.
- Metered usage: Services are billed on a usage basis.
- Internet delivery: The services provided by cloud computing are based on Internet
protocols and formats.
- Resource Pooling (thus maximum resource utilization)
- Server Consolidation (thus saving energy and cost)
- VM Migration (Refers to moving a server environment from one place to another)
o Load balancing
o Maintenance
o Failover
- Redundancy
- Isolated machines (Useful for testing operating systems and new applications)
Benefits:
Virtualization can help companies maximize the value of IT investments, decreasing the
server hardware footprint, energy consumption, and cost and complexity of managing IT
systems while increasing the flexibility of the overall environment.
1. Cost: Depending on your solution, you can have a nearly cost-free datacenter. You do
have to shell out the money for the physical server itself, but there are options for free
virtualization software and free operating systems.
2. Administration: Having all your servers in one place reduces your administrative
burden.
3. Fast Deployment: Because every virtual guest server is just a file on a disk, it's easy
to copy (or clone) a system to create a new one. To copy an existing server, just copy
the entire directory of the current virtual server. This can be used in the event the
physical server fails, or if you want to test out a new application to ensure that it will
work and play well with the other tools on your network. Virtualization software allows
you to make clones of your work environment for these endeavors.
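The clone-by-copying idea above can be sketched with ordinary file operations; the directory layout and file names are invented for illustration.

```python
import pathlib
import shutil
import tempfile

# "A guest is just files on disk": cloning a VM amounts to copying its
# directory, disk image and configuration included.

base = pathlib.Path(tempfile.mkdtemp())
src = base / "web-server-vm"
src.mkdir()
(src / "disk.img").write_bytes(b"\x00" * 1024)   # stand-in for a disk image
(src / "vm.cfg").write_text("memory=2048\n")

clone = base / "web-server-vm-clone"
shutil.copytree(src, clone)                      # the whole "server" is copied
```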
4. Reduced Infrastructure Costs: We already talked about how you can cut costs by
using free servers and clients, like Linux, as well as free editions of Microsoft
Virtual Server, Hyper-V, or VMware. But there are also reduced costs across your
organization. If you reduce the number of physical servers you use, then you save
money on hardware, cooling, and electricity. You also reduce the number of network
ports, console video ports, mouse ports, and rack space.

Limitations
- Overloading of Servers
- Migration of a VM is possible only if both physical machines use the same
manufacturer's processor.

Different types of Virtualization


- Full Virtualization
- Para-Virtualization
- OS-level Virtualization
- Hardware assisted Virtualization

Full Virtualization
- Full virtualization is a technique in which a complete installation of one machine is
run on another.
- The result is a system in which all software running on the server is within a virtual
machine.
- This sort of deployment allows not only unique applications to run, but also different
operating systems.
- Virtualization is relevant to cloud computing because it is one of the ways in which you
will access services on the cloud. That is, the remote datacenter may be delivering your
services in a fully virtualized format.
- In order for full virtualization to be possible, it was necessary for specific hardware
combinations to be used. It wasn't until 2005 that the introduction of the AMD-Virtualization
(AMD-V) and Intel Virtualization Technology (IVT) extensions made it
easier to go fully virtualized.
- The hypervisor interacts directly with the physical server's CPU and disk space.
- The hypervisor keeps each virtual server completely independent and unaware of the
other virtual servers running on the physical machine.
- Each guest server runs on its own OS, i.e., one guest running on Linux and another on
Windows.
- The hypervisor monitors the physical server's resources.
- As virtual servers run applications, the hypervisor relays resources from the physical
machine to the appropriate virtual server.
- Hypervisors have their own processing needs, which means that the physical server
must reserve some processing power and resources to run the hypervisor application.
This can impact overall server performance and slow down applications.
- Full virtualization provides sufficient emulation of the underlying platform that a
guest operating system and application set can run unmodified and unaware that their
platform is being virtualized.
- Providing a full emulation of the platform means that all platform devices are
emulated with enough detail to permit the guest OS to manipulate them at their native
level (such as register-level interfaces).
- While full virtualization comes with a performance penalty, the technique permits
running unmodified operating systems, which is ideal, particularly when source is
unavailable, such as with proprietary operating systems.

Benefits
- Sharing a computer system among multiple users
- Isolating users from each other and from the control program
- Emulating hardware on another machine

Para Virtualization
- The fundamental issue with full virtualization is the emulation of devices within the
hypervisor.
- A solution to this problem is to make the guest operating system aware that it's being
virtualized.
- With this knowledge, the guest OS can short circuit its drivers to minimize the
overhead of communicating with physical devices.
- In this way, the guest OS drivers and hypervisor drivers integrate with one another to
efficiently enable and share physical device access.
- Low-level emulation of devices is removed, replaced with cooperating guest and
hypervisor drivers.
Pros
- Improved Performance

Cons
- The downside of paravirtualization is that the guest must be modified to integrate
hypervisor awareness

OS Level Virtualization
An OS-level virtualization approach doesn't use a hypervisor at all.
Instead, the virtualization capability is part of the host OS, which performs all the
functions of a fully virtualized hypervisor.
The biggest limitation of this approach is that all the guest servers must run the same
OS. Because all the guest operating systems must be the same, this is called a
homogeneous environment.
Examples: FreeBSD Jails, Solaris Containers.
Shared kernel virtualization or operating system virtualization takes advantage of the
architectural design of Linux and UNIX based operating systems.
o At the core of the operating system is the kernel. The kernel handles all the
interactions between the operating system and the physical hardware.
o The second key component is the root file system which contains all the libraries,
files and utilities necessary for the operating system to function.
Under shared kernel virtualization the virtual guest systems each have their own root
file system but share the kernel of the host operating system.
Operating system-level virtualization is a server virtualization method where
the kernel of an operating system allows for multiple isolated user-space instances,
instead of just one.
Such instances (often called containers, virtualization engines (VE), virtual private
servers (VPS) or jails) may look and feel like a real server, from the point of view of its
owner.
In addition to isolation mechanisms, the kernel often provides resource management
features to limit the impact of one container's activities on the other containers.
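The kernel-level resource management described above can be caricatured in pure Python. This is a toy accounting model with invented names, not a real kernel interface such as Linux cgroups.

```python
# Toy model of per-container resource accounting in a shared kernel:
# each container has a limit, and the "kernel" rejects allocations that
# would let one container starve the others.

class Kernel:
    def __init__(self):
        self.limits = {}   # container name -> memory limit
        self.used = {}     # container name -> memory in use

    def create_container(self, name, mem_limit):
        self.limits[name] = mem_limit
        self.used[name] = 0

    def allocate(self, name, amount):
        # Isolation: a container cannot exceed its assigned share.
        if self.used[name] + amount > self.limits[name]:
            raise MemoryError(name)
        self.used[name] += amount

kernel = Kernel()
kernel.create_container("web", mem_limit=1024)
kernel.allocate("web", 512)    # within the container's share, succeeds
```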

Uses
Operating system-level virtualization is commonly used in virtual
hosting environments, where it is useful for securely allocating finite hardware
resources amongst a large number of mutually-distrusting users.
System administrators may also use it, to a lesser extent, for consolidating server
hardware by moving services on separate hosts into containers on the one server.
Other typical scenarios include separating several applications to separate containers
for improved security, hardware independence, and added resource management
features.
Advantages:
This form of virtualization usually imposes little or no overhead, because programs in a
virtual partition use the operating system's normal system call interface and do not
need to be subject to emulation or run in an intermediate virtual machine, as is the
case with whole-system virtualizers or paravirtualizers.
It also does not require hardware assistance to perform efficiently.
This is easier to back up, more space-efficient and simpler to cache than the block-
level copy-on-write schemes common on whole-system virtualizers.

Disadvantages
Operating system-level virtualization is not as flexible as other virtualization
approaches since it cannot host a guest operating system different from the host one,
or a different guest kernel. For example, with Linux, different distributions are fine,
but other OSes such as Windows cannot be hosted.

Hardware-assisted Virtualization
Hardware-assisted virtualization is a platform virtualization approach that enables
efficient full virtualization using help from hardware capabilities, primarily from the
host processors.
Full virtualization is used to simulate a complete hardware environment, or virtual
machine, in which an unmodified guest operating system (using the same instruction
set as the host machine) executes in complete isolation.
Hardware-assisted virtualization was added to x86 processors (Intel VT-x or AMD-V) in
2006.
Hardware-assisted virtualization is also known as accelerated virtualization.
Server hardware is virtualization aware.
The hypervisor and VMM load at privilege Ring -1 (firmware level).
This removes the CPU emulation bottleneck.
First generation enhancements include Intel Virtualization Technology (VT-x) and
AMD's AMD-V, which both target privileged instructions with a new CPU execution
mode feature that allows the VMM to run in a new root mode below ring 0.

Advantages
Hardware-assisted virtualization reduces the maintenance overhead of
paravirtualization as it reduces (ideally, eliminates) the changes needed in the guest
operating system.
It is also considerably easier to obtain better performance.

Disadvantages
Hardware-assisted virtualization requires explicit support in the host CPU, which is
not available on all x86/x86_64 processors.
A "pure" hardware-assisted virtualization approach, using entirely unmodified guest
operating systems, involves many VM traps, and thus high CPU overheads, limiting
scalability and the efficiency of server consolidation. This performance hit can be
mitigated by the use of paravirtualized drivers; the combination has been called
"hybrid virtualization".

Topic-2: Virtual Machine Monitor/Hypervisor


A virtual machine is an efficient, isolated duplicate of a real machine.
A hypervisor or virtual machine monitor (VMM) is a piece of computer software,
firmware or hardware that creates and runs virtual machines.
A low-level program is required to provide system resource access to virtual machines,
and this program is referred to as the hypervisor or Virtual Machine Monitor (VMM).
A computer on which a hypervisor is running one or more virtual machines is defined
as a host machine. Each virtual machine is called a guest machine.
The hypervisor presents the guest operating systems with a virtual operating
platform and manages the execution of the guest operating systems. Multiple
instances of a variety of operating systems may share the virtualized hardware
resources.
Given a computer system with a certain set of resources, you can set aside portions of
those resources to create a virtual machine. From the standpoint of applications or
users, a virtual machine has all the attributes and characteristics of a physical system
but is strictly software that emulates a physical machine.
A system virtual machine (or a hardware virtual machine) has its own address space in
memory, its own processor resource allocation, and its own device I/O using its own
virtual device drivers. Some virtual machines are designed to run only a single
application or process and are referred to as process virtual machines.
A virtual machine is a computer that is walled off from the physical computer that the
virtual machine is running on. This makes virtual machine technology very useful for
running old versions of operating systems, testing applications in what amounts to a
sandbox, or in the case of cloud computing, creating virtual machine instances that
can be assigned a workload. Virtual machines provide the capability of running
multiple machine instances, each with their own operating system.

Characteristics:
1. VMM provides an environment for programs which is essentially identical with the
original machine
2. Programs run in this environment show at worst only minor decreases in speed
3. The VMM is in complete control of system resources

Process VM versus VM Monitors


Process VM: A program is compiled to intermediate (portable) code which is then
executed by a run time system. E.g. Java VM
VM Monitor: A separate software layer mimics the instruction set of the hardware. So,
a complete OS and its applications can be supported. E.g. VMware
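The process-VM idea can be illustrated with a toy stack-machine interpreter, a deliberately tiny stand-in for something like the Java VM: the program is compiled to portable intermediate code, and any host with the runtime can execute it.

```python
# Toy process VM: a stack machine executing portable intermediate code.

def run(bytecode):
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4, expressed as stack-machine code
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None),
           ("PUSH", 4), ("MUL", None)]
print(run(program))  # prints 20
```

The same `program` list runs unchanged wherever the interpreter exists, which is exactly the portability property the process-VM model provides.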

Uses
Consider a hardware virtualization hypervisor when you need to perform any of the following:
System consolidation: Virtualization hypervisors support the operation of multiple
systems on the same physical hardware, reducing costs and the physical server footprint
while delivering similar and often improved services.
System testing: Hypervisors support the isolation of systems, letting you test new
software and applications without affecting production. They also provide a low-cost
testing alternative to physical systems.
Heterogeneous system operation: Hypervisors support the simultaneous execution of
multiple operating systems on the same physical hardware, letting organizations run
heterogeneous systems on reduced hardware footprints.
Hardware optimization: Hypervisors increase hardware usage through the operation of
multiple workloads on each physical host server. Server usage can increase from 5% or
10% to upwards of 60% or 70%.
Application high availability: By sharing workloads through technologies such as
failover clustering, servers running virtualization hypervisors can support application
high availability and ensure that services are always available when running inside VMs.
Resource optimization: By running different applications in separate VMs, hypervisors
can increase resource use because each application requires different amounts of
resources at different times.
Service flexibility: Because hypervisors support the operation of systems through VMs,
organizations gain flexibility because VMs are easier to clone and reproduce than physical
machines.
Dynamic resource management: Virtualization hypervisors support manual or
automated resource allocation to VM workloads as peak usage occurs. Because of this,
hypervisors provide better support for dynamic resource allocation in data centers.

Advantages
From the standpoint of cloud computing, these features enable VMMs to manage
application provisioning, provide for machine instance cloning and replication, allow for
graceful system failover, and provide several other desirable features.
Disadvantages
The downside of virtual machine technologies is that having resources indirectly
addressed means there is some level of overhead.

Type 1 Hypervisor
Type 1 (or native, bare metal) hypervisors run directly on the host's hardware to
control the hardware and to manage guest operating systems.
A guest operating-system thus runs on another level above the hypervisor.
This model represents the classic implementation of virtual-machine architectures.
A Type-1 hypervisor is a type of client hypervisor that interacts directly with hardware
that is being virtualized.
It is completely independent from the operating system, unlike a Type-2 hypervisor,
and boots before the operating system (OS).
Currently, Type-1 hypervisors are being used by all the major players in the desktop
virtualization space.
The Type 1 hypervisor is often referred to as a hardware virtualization engine.
A Type 1 hypervisor provides better performance and greater flexibility because it
operates as a thin layer designed to expose hardware resources to virtual
machines (VMs), reducing the overhead required to run the hypervisor itself.
Because a Type 1 hypervisor runs directly on the hardware, it is a function in and of
itself. Servers that run Type 1 hypervisors are often single-purpose servers that offer
no other function. They become part of the resource pool and are designed specifically
to support the operation of multiple applications within various VMs.
Because they run directly on the hardware, Type 1 hypervisors support hardware
virtualization.
Type 1 hypervisors are VM monitors that are designed to keep track of all of the events
that occur within a VM and, when required, provide -- or deny -- access to appropriate
resources to meet VM operating requirements. Ideally, the VM monitor will perform its
operations through the use of policies that contain all of the settings assigned to a
particular VM.
A bare-metal virtualization hypervisor does not require admins to install a server
operating system first.
Disadvantage
Hardware support is typically more limited, because the hypervisor usually has limited
device drivers built into it.
Bare-metal virtualization is well suited for enterprise data centers, because it usually
comes with advanced features for resource management, high availability and security.

Advantages:
No host operating system is required, they are installed on bare metal.
Bare-metal virtualization means the hypervisor has direct access to hardware
resources, which results in better performance, scalability and stability.

Examples
VMware ESX
Oracle VM
LynxSecure

Type 2 Hypervisor
Type 2 (or hosted) hypervisors run within a conventional operating-
system environment.
With the hypervisor layer as a distinct second software level, guest operating-systems
run at the third level above the hardware.
Installed over an operating system, they are referred to as Type 2 or hosted VMs.
A Type-2 hypervisor is a type of client hypervisor that sits on top of an operating
system.
Unlike a Type-1 hypervisor, a Type-2 hypervisor relies heavily on the operating system.
It cannot boot until the operating system is already up and running and, if for any
reason the operating system crashes, all end-users are affected. This is a big drawback
of Type-2 hypervisors, as they are only as secure as the operating system on which
they rely.
Also, since Type-2 hypervisors depend on an OS, they are not in full control of the end
user's machine.
Because they run as an application on top of an operating system, Type 2 hypervisors
perform software virtualization.
A hosted hypervisor requires you to first install a host OS. These hypervisors are
basically like applications that install on the host OS.

Advantages
This approach provides better hardware compatibility than bare-metal virtualization,
because the OS is responsible for the hardware drivers instead of the hypervisor.

Disadvantages:
A hosted virtualization hypervisor does not have direct access to hardware and must
go through the OS, which increases resource overhead and can degrade virtual
machine (VM) performance. Also, because there are typically many services and
applications running on the host OS, the hypervisor often steals resources from the
VMs running on it.

Uses:
Hosted hypervisors are common for desktops, because they allow you to run multiple
OSes. These virtualization hypervisor types are also popular for developers, to
maintain application compatibility on modern OSes.
Examples
KVM (runs within a Linux host)
Microsoft Hyper-V (runs over Windows Server)
Xen (runs over Linux)
(Note that KVM, Hyper-V and Xen are often classified as Type 1 hypervisors instead,
since they gain direct access to the hardware once loaded.)

Topic-3: Understanding Machine Imaging


A mechanism (other than using hypervisor & load balancing) commonly used to
provide system portability, instantiate applications, and provision and deploy systems
in the cloud is through storing the state of systems using a system image.
A system image makes a copy or a clone of the entire computer system inside a single
container such as a file.
The system imaging program is used to make this image and can be used later to
restore a system image.
Some imaging programs can take snapshots of systems, and most allow you to view
the files contained in the image and do partial restores.
A prominent example of a system image and how it can be used in cloud computing
architectures is the Amazon Machine Image (AMI) used by Amazon Web Services to
store copies of a virtual machine.
An AMI is a file system image that contains an operating system, all appropriate device
drivers, and any applications and state information that the working virtual machine
would have.
When you subscribe to AWS, you can choose to use one of its hundreds of canned
AMIs or to create a custom system and capture that system's image to an AMI.
An AMI can be for public use under a free distribution license, for pay-per-use with
operating systems such as Windows, or shared by an EC2 user with other users who
are given the privilege of access.
An Amazon Machine Image (AMI) is a template that contains a software configuration
for your server (for example, an operating system, an application server, and
applications). You specify an AMI when you launch an instance, which is a virtual
server in the cloud. The AMI provides the software for the root volume of the instance.
You can launch as many instances from your AMI as you need.
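The template-versus-instance relationship can be sketched as follows. This is an illustrative model, not the real AWS API (where launching is done with SDK calls such as EC2 `RunInstances`); the class and field names are invented.

```python
import copy
import itertools

# Toy model of an AMI: a template whose software configuration is copied
# into each freshly launched instance.

_ids = itertools.count(1)

class AMI:
    def __init__(self, os, packages):
        self.config = {"os": os, "packages": list(packages)}

    def launch_instance(self, instance_type="t2.micro"):
        # Each instance gets its own copy of the root-volume software,
        # so changes to one instance never affect the others.
        return {"id": f"i-{next(_ids):04d}",
                "type": instance_type,
                **copy.deepcopy(self.config)}

ami = AMI(os="Ubuntu", packages=["nginx"])
web1 = ami.launch_instance()
web2 = ami.launch_instance()
```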
The AMI file system is not a standard bit-for-bit image of a system that is common to
many disk imaging programs. AMI omits the kernel image and stores a pointer to a
particular kernel that is part of the AWS kernel library. Among the choices are Red Hat
Linux, Ubuntu, Microsoft Windows, Solaris, and others.
Files in AMI are compressed and encrypted, and an XML file is written that describes
the AMI archive.
AMIs are typically stored in your Amazon S3 (Simple Storage Service) buckets as a set
of 10MB chunks.
Machine images are sometimes referred to as virtual appliances: systems that are
meant to run on virtualization platforms. Running virtual machines are known as
instances.
AWS EC2 runs on the Xen hypervisor, for example.
Virtual appliances are provided to the user or customer as files, via either electronic
downloads or physical distribution. The file format most commonly used is the Open
Virtualization Format (OVF).
The Distributed Management Task Force (DMTF) publishes the OVF specification
documentation. Most virtualization vendors, including VMware, Microsoft, Oracle, and
Citrix, support OVF for virtual appliances.
The term virtual appliance is meant to differentiate the software image from an
operating virtual machine. The system image contains the operating system and
applications that create an environment.
Virtual appliances are a subset of the broader class of software appliances. Installation
of a software appliance on a virtual machine and packaging that into an image creates
a virtual appliance. Like software appliances, virtual appliances are intended to
eliminate the installation, configuration and maintenance costs associated with
running complex stacks of software.
A virtual appliance is not a complete virtual machine platform, but rather a software
image containing a software stack designed to run on a virtual machine platform
which may be a Type 1 or Type 2 hypervisor.
Most virtual appliances are used to run a single application and are configurable from
a Web page.
Virtual appliances are a relatively new paradigm for application deployment, and cloud
computing is the major reason for the interest in them and for their adoption. This
area of WAN application portability and deployment, and of WAN optimization of an
application based on demand, is one with many new participants.

Porting Applications
Cloud computing applications have the ability to run on virtual systems and for these
systems to be moved as needed to respond to demand.
Systems (VMs running applications), storage, and network assets can all be virtualized
and have sufficient flexibility to give acceptable distributed WAN application
performance.
Developers who write software to run in the cloud will undoubtedly want the ability to
port their applications from one cloud vendor to another, but that is a much more
difficult proposition. Cloud computing is a relatively new area of technology, and the
major vendors have technologies that don't interoperate with one another.

VM Migration
Live migration refers to the process of moving a running virtual machine or
application between different physical machines without disconnecting the client or
application.
Memory, storage, and network connectivity of the virtual machine are transferred from
the original host machine to the destination.
VM memory migration
Two techniques for moving the virtual machine's memory state from the source to the
destination are pre-copy memory migration and post-copy memory migration.
1. Pre-copy memory migration
a. Warm-up phase: In pre-copy memory migration, the hypervisor typically copies
all the memory pages from source to destination while the VM is still running on
the source. If some memory pages change (become 'dirty') during this process,
they will be re-copied in successive rounds until the page re-copy rate is no less
than the page dirtying rate.
b. Stop-and-copy phase: After the warm-up phase, the VM will be stopped on the
original host, the remaining dirty pages will be copied to the destination, and the
VM will be resumed on the destination host. The time between stopping the VM
on the original host and resuming it on destination is called "down-time", and
ranges from a few milliseconds to seconds according to the size of memory and
applications running on the VM. There are some techniques to reduce live
migration down-time, such as using probability density function of memory
change.
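The two phases above can be simulated with a toy model; the page count, re-dirtying fraction, and stop-and-copy threshold are invented parameters, not measurements.

```python
# Toy pre-copy simulation: copy all pages while the VM runs, re-copy what
# got dirtied in the meantime, and stop-and-copy once the dirty set is small.
# Down-time is proportional to the pages left for the stop-and-copy phase.

def precopy_migrate(n_pages=1000, redirty_fraction=0.1, threshold=10):
    dirty = n_pages                      # warm-up: every page needs copying
    rounds = 0
    while dirty > threshold:
        # While `dirty` pages are being sent, the running VM re-dirties
        # a fraction of them, which must be sent again next round.
        dirty = int(dirty * redirty_fraction)
        rounds += 1
    return rounds, dirty                 # stop-and-copy sends `dirty` pages

rounds, remaining = precopy_migrate()
print(rounds, remaining)  # prints 2 10
```

With these parameters the dirty set shrinks 1000 -> 100 -> 10, at which point the VM is paused and only 10 pages remain for the brief stop-and-copy phase.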
2. Post-copy memory migration
Post-copy VM migration is initiated by suspending the VM at the source.
With the VM suspended, a minimal subset of the execution state of the VM is
transferred to the target.
The VM is then resumed at the target, even though most of the memory state of the VM
still resides at the source.
At the target, when the VM tries to access pages that have not yet been transferred, it
generates page-faults.
These faults are trapped at the target and redirected towards the source over the
network.
Such faults are referred to as network faults. The source host responds to the network-
fault by sending the faulted page.
Since each page fault of the running VM is redirected towards the source, this
technique can degrade performance of applications running inside the VM.
However, pure demand-paging accompanied with techniques such as pre-paging can
reduce this impact by a great extent.
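The network-fault mechanism can be sketched with an in-memory dictionary standing in for the source host's memory; the page contents and counts are invented for illustration.

```python
# Toy post-copy simulation: the VM resumes at the target with almost no
# memory, and touching a page still held at the source triggers a
# "network fault" that fetches it on demand.

source_memory = {page: f"data-{page}" for page in range(8)}

class TargetVM:
    def __init__(self, source):
        self.source = source        # source still holds most memory state
        self.memory = {}            # minimal state transferred at suspend
        self.network_faults = 0

    def read(self, page):
        if page not in self.memory:
            self.network_faults += 1          # trapped, redirected to source
            self.memory[page] = self.source[page]
        return self.memory[page]

vm = TargetVM(source_memory)
vm.read(3)
vm.read(3)    # second access is local: the page has already been fetched
```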
When down-time of a VM during a live migration is not noticeable by the end user, it is
called a seamless live migration.
