
Virtualization Concept

Waters, J. (2007, March 15). Virtualization Definition and Solutions. Retrieved
from http://www.cio.com/article/2439494/virtualization/virtualization-definition-and-solutions.html
Virtualization refers to technologies designed to provide a layer of
abstraction between computer hardware systems and the software running
on them. By providing a logical view of computing resources, rather than a
physical view, virtualization solutions make it possible to do a couple of very
useful things: They can allow you, essentially, to trick your operating
systems into thinking that a group of servers is a single pool of computing
resources. And they can allow you to run multiple operating systems
simultaneously on a single machine.
Virtualization has its roots in partitioning, which divides a single physical
server into multiple logical servers. Once the physical server is divided, each
logical server can run an operating system and applications independently. In
the 1990s, virtualization was used primarily to re-create end-user
environments on a single piece of mainframe hardware. If you were an IT
administrator and you wanted to roll out new software, but you wanted to see
how it would work on a Windows NT or a Linux machine, you used
virtualization technologies to create the various user environments.
Rouse, M. (2016, October 2). Virtualization. Retrieved from
http://searchservervirtualization.techtarget.com/definition/virtualization
Virtualization is the creation of a virtual -- rather than actual -- version of
something, such as an operating system, a server, a storage device or
network resources. Virtualization describes a technology in which an
application, guest operating system or data storage is abstracted away from
the true underlying hardware or software. A key use of virtualization
technology is server virtualization, which uses a software layer called a
hypervisor to emulate the underlying hardware. This often includes the CPU's
memory, I/O and network traffic. The guest operating system, normally
interacting with true hardware, is now doing so with a software emulation of
that hardware, and often the guest operating system has no idea it's on
virtualized hardware. While the performance of this virtual system is not
equal to the performance of the operating system running on true hardware,
the concept of virtualization works because most guest operating systems
and applications don't need the full use of the underlying hardware. This
allows for greater flexibility, control and isolation by removing the
dependency on a given hardware platform. While initially meant for server
virtualization, the concept of virtualization has spread to applications,
networks, data and desktops.
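
To make the hypervisor/guest relationship concrete, here is a minimal sketch using the libvirt Python bindings, a common open-source management API for hypervisors such as KVM. The connection URI is the conventional local QEMU/KVM one, but the availability of the libvirt-python package and a local hypervisor are assumptions for illustration.

```python
# A minimal sketch: query a hypervisor and its guests through libvirt.
# Assumes the libvirt-python package and a local QEMU/KVM hypervisor;
# the URI 'qemu:///system' is the conventional local connection.
import libvirt

conn = libvirt.open('qemu:///system')  # connect to the local hypervisor

# The hypervisor sees the real hardware...
info = conn.getInfo()
print('Host has', info[1], 'MB of RAM and', info[2], 'CPUs')

# ...while each guest sees only the virtual hardware it was given.
for dom in conn.listAllDomains():
    state, max_mem, mem, vcpus, cpu_time = dom.info()
    print(f'{dom.name()}: {vcpus} vCPUs, {max_mem // 1024} MB virtual RAM')

conn.close()
```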

Properties of Virtualization
Advantages & Disadvantages
Marshall, D. (2011, November 2). Virtualization Report. Retrieved from
http://www.infoworld.com/article/2621446/server-virtualization/server-virtualization-top-10-benefits-of-server-virtualization.html

Save energy
Migrating physical servers over to virtual machines and consolidating them
onto far fewer physical servers means lowering monthly power and cooling
costs in the data center. This was an early victory chant for server
virtualization vendors back in the early 2000s, and it still holds true
today.
Reduce the data center footprint
This one goes hand in hand with the previous benefit. In addition to saving
more of your company's green with a smaller energy footprint, server
consolidation with virtualization will also reduce the overall footprint of your
entire data center. That means far fewer servers, less networking gear, a
smaller number of racks needed -- all of which translates into less data
center floor space required. That can further save you money if you don't
happen to own your own data center and instead make use of a co-location
facility.
QA/lab environments
Virtualization allows you to easily build out a self-contained lab or test
environment, operating on its own isolated network. If you don't think this is
useful or powerful, just look to VMware's own trade show, VMworld. This
event creates one of the largest public virtual labs I've ever experienced, and
it truly shows off what you can do with a virtual lab environment. While this
is probably way more lab than you'd ever actually need in your own
environment, you can see how building something like this would be cost
prohibitive with purely physical servers, and in many cases, technologically
improbable.
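
As a concrete illustration of such an isolated lab, here is a hedged sketch that creates a self-contained virtual network with the libvirt Python bindings; the network name and address range are made up for illustration, and libvirt-python plus a QEMU/KVM host are assumed.

```python
# A sketch of carving out an isolated lab network with libvirt.
# The network name, bridge name, and address range are illustrative only.
import libvirt

# Omitting the <forward> element makes the network isolated: guests
# attached to it can talk to each other, but not to the outside world.
LAB_NET_XML = """
<network>
  <name>qa-lab</name>
  <bridge name='virbr-qalab'/>
  <ip address='192.168.150.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.150.10' end='192.168.150.100'/>
    </dhcp>
  </ip>
</network>
"""

conn = libvirt.open('qemu:///system')
net = conn.networkCreateXML(LAB_NET_XML)  # create and start a transient network
print('Isolated lab network running:', net.name())
conn.close()
```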

Faster server provisioning


As a data center administrator, imagine being able to provide your business
units with near instant-on capacity when a request comes down the chain.
Server virtualization enables elastic capacity to provide system provisioning
and deployment at a moment's notice. You can quickly clone a gold image,
master template, or existing virtual machine to get a server up and running
within minutes. Remember that the next time you have to fill out purchase
orders, wait for shipping and receiving, and then rack, stack, and cable a
physical machine only to spend additional hours waiting for the operating
system and applications to complete their installations.
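
As a sketch of what "cloning a gold image" can look like in practice, the snippet below drives the virt-clone utility (part of the virt-manager tooling) from Python. The template and new VM names are assumptions, and virt-clone expects the source VM to be shut down.

```python
# A sketch of cloning a "gold image" VM with the virt-clone utility.
# Assumes virt-clone is installed and a shut-down template VM named
# 'gold-template' exists; both names are illustrative.
import subprocess

def clone_vm(template: str, new_name: str) -> None:
    """Clone an existing VM, letting virt-clone pick clone disk paths."""
    subprocess.run(
        ['virt-clone',
         '--original', template,   # the gold image / master template
         '--name', new_name,       # the new guest's name
         '--auto-clone'],          # auto-generate paths for the cloned disks
        check=True,
    )

clone_vm('gold-template', 'web-07')  # a new server in minutes, not weeks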
Reduce hardware vendor lock-in
While not always a bad thing, sometimes being tied down to one particular
server vendor or even one particular server model can prove quite
frustrating. But because server virtualization abstracts away the underlying
hardware and replaces it with virtual hardware, data center managers and
owners gain a lot more flexibility when it comes to the server equipment
they can choose from. This can also be a handy negotiating tool with the
hardware vendors when the time comes to renew or purchase more
equipment.
Increase uptime
Most server virtualization platforms now offer a number of advanced features
that just aren't found on physical servers, which helps with business
continuity and increased uptime. Though the vendor feature names may be
different, they usually offer capabilities such as live migration, storage
migration, fault tolerance, high availability, and distributed resource
scheduling. These technologies keep virtual machines chugging along or give
them the ability to quickly recover from unplanned outages. The ability to
quickly and easily move a virtual machine from one server to another is
perhaps one of the greatest single benefits of virtualization with far-reaching
uses. As the technology continues to mature to the point where it can do
long-distance migrations, such as being able to move a virtual machine from
one data center to another no matter the network latency involved, the
virtual world will become that much more in demand.
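
To show what a live migration looks like at the API level, here is a hedged sketch using the libvirt Python bindings. The host names, the VM name, and the SSH transport are assumptions, and a real setup also needs shared storage (or additional storage-migration flags).

```python
# A sketch of live-migrating a running guest between two hosts via libvirt.
import libvirt

src = libvirt.open('qemu:///system')                        # source hypervisor
dst = libvirt.open('qemu+ssh://host2.example.com/system')   # destination (illustrative)

dom = src.lookupByName('web-01')  # the running guest to move (illustrative name)

# VIR_MIGRATE_LIVE keeps the guest running while its memory is copied over.
flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER
dom.migrate(dst, flags, None, None, 0)  # (dest conn, flags, new name, URI, bandwidth)

print('web-01 is now running on host2')
src.close()
dst.close()
```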
Improve disaster recovery
Virtualization offers an organization three important components when it
comes to building out a disaster recovery solution. The first is its hardware
abstraction capability. By removing the dependency on a particular hardware
vendor or server model, a disaster recovery site no longer needs to keep
identical hardware on hand to match the production environment, and IT can
save money by buying cheaper hardware in the DR site since it rarely gets
used. Second, by consolidating servers down to fewer physical machines in
production, an organization can more easily create an affordable replication
site. And third, most enterprise server virtualization platforms have software
that can help automate the failover when a disaster does strike. The same
software usually provides a way to test a disaster recovery failover as well.
Imagine being able to actually test and see your failover plan work in reality,
rather than hoping and praying that it will work if and when the time comes.
Isolate applications
In the physical world, data centers typically moved to a "one app/one server"
model in order to isolate applications. But this caused physical server sprawl,
increased costs, and underutilized servers. Server virtualization provides
application isolation and removes application compatibility issues by
consolidating many of these virtual machines across far fewer physical
servers. This also cuts down on server waste by more fully utilizing the
physical server resources and by provisioning each virtual machine with the
exact amount of CPU, memory, and storage resources it needs.
Extend the life of older applications
Let's be honest -- you probably have old legacy applications still running in
your environment. These applications probably fit into one or more of these
categories: they don't run on a modern operating system, they may not run on
newer hardware, your IT team is afraid to touch them, and chances are good
that the person or company who created them is no longer around to update
them. By
virtualizing and encapsulating the application and its environment, you can
extend its life, maintain uptime, and finally get rid of that old Pentium
machine hidden in the corner of the data center. You know the one: it's all
covered in dust with fingerprints from administrators long gone and names
forgotten.
Help move things to the cloud
As much as you think you've been talked to death about virtualizing your
environment, that probably doesn't even compare to the amount of times in
the last year alone that you've had someone talk to you about joining "the
cloud." The good news here is that by virtualizing your servers and
abstracting away the underlying hardware, you are preparing yourself for a
move into the cloud. The first step may be to move from a simple virtualized
data center to a private cloud. But as the public cloud matures, and the
technology around it advances and you become more comfortable with the
thought of moving data out of your data center and into a cloud hosting
facility, you will have had a head start in getting there. The journey along
the way will have better prepared you and the organization.

Advantages and Disadvantages of Virtualization. (2015, February 11).
Retrieved from http://occupytheory.org/advantages-and-disadvantages-of-virtualization/
Limitations
One major downside that you need to be aware of before you opt for
virtualization involves the various limitations that exist. Not all servers
and applications are designed to be virtualization-friendly. This means that
for some parts of your business's computer technology, virtualization may
simply not be an available option.
Instant Access to Data
Since data is essential to your business, you should only choose
virtualization options that offer adequate data protection. Not owning your
own servers can put your data at risk, and this is not ideal. You do not
want your data to be vulnerable.

I skipped B because life.


C: Hypervisors
Kleyman, B. (2012, August 1). Hypervisor 101: Understanding the
Virtualization Market. Retrieved from
http://www.datacenterknowledge.com/archives/2012/08/01/hypervisor-101-a-look-hypervisor-market/
The evolution of virtualization greatly revolves around one piece of very important
software: the hypervisor. As an integral component, this software allows
physical devices to share their resources amongst virtual machines running as
guests on top of that physical hardware. To further clarify the technology, it's
important to analyze a few key definitions:

Type I Hypervisor. This type of hypervisor is deployed as a bare-metal
installation. This means that the first thing installed on a server, acting
as the operating system, is the hypervisor itself. The benefit of this design
is that the hypervisor communicates directly with the underlying physical
server hardware. Those resources are then paravirtualized and delivered to
the running VMs. This is the preferred method for many production systems.
Type II Hypervisor. This model is also known as a hosted hypervisor. The
software is not installed onto the bare metal, but instead is loaded on top
of an already live operating system. For example, a server running Windows
Server 2008 R2 can have VMware Workstation 8 installed on top of that OS.
Although there is an extra hop for the resources to take when they pass
through to the VM, the latency is minimal, and with today's modern software
enhancements the hypervisor can still perform optimally.
Guest Machine. A guest machine, also known as a virtual machine (VM), is
the workload installed on top of the hypervisor. This can be a
virtual appliance, operating system or other type of virtualization-ready
workload. This guest machine will, for all intents and purposes, believe
that it is its own unit with its own dedicated resources. So, instead of
using a physical server for just one purpose, virtualization allows for
multiple VMs to run on top of that physical host. All of this happens
while resources are intelligently shared among the guest VMs.
Host Machine. This is known as the physical host. Within
virtualization, there may be several components: SAN, LAN, wiring, and
so on. In this case, we are focusing on the resources located on the
physical server, such as RAM and CPU. These are then divided between VMs
and distributed as the administrator sees fit. So, a machine needing more
RAM (a domain controller) would receive that allocation, while a less
important VM (a licensing server, for example) would have fewer resources.
With today's hypervisor technologies, many of these resources can be
dynamically allocated, as the sketch after this list shows.
Paravirtualization Tools. After the guest VM is installed on top of the
hypervisor, there usually is a set of tools which are installed into the
guest VM. These tools provide a set of operations and drivers for the
guest VM to run more optimally. For example, although natively installed
drivers for a NIC will work, paravirtualized NIC drivers will communicate
with the underlying physical layer much more efficiently. Furthermore,
advanced networking configurations become a reality when paravirtualized
NIC drivers are deployed.
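
Here is a minimal sketch of the dynamic resource allocation described above: giving a RAM-hungry guest more memory while it runs, via the libvirt Python bindings. The VM name is an assumption, and the new values must fit within the guest's configured maximums.

```python
# A sketch of dynamically reallocating resources to a running guest.
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('domain-controller')  # illustrative guest name

# Memory is specified in KiB; VIR_DOMAIN_AFFECT_LIVE changes the running guest.
dom.setMemoryFlags(4 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_LIVE)  # 4 GiB

# vCPU count can be adjusted the same way (requires guest hotplug support).
dom.setVcpusFlags(4, libvirt.VIR_DOMAIN_AFFECT_LIVE)

conn.close()
```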

For Your Reference:

Paravirtualization
Ferro, G. (2012, March 7). Intro to Server Virtualization: Hypervisor vs.
Paravirtualization. Retrieved from https://www.petri.com/hypervisor-vs-paravirtualization
The concept of paravirtualization is very similar to the hypervisor
principle. A software hypervisor is installed on a physical server and a guest
OS is installed into the environment. The difference is that the guest OS
needs to know that it is virtualized to take advantage of the functions.
Operating systems require extensions to make API calls to the hypervisor.
Not every operating system or application can support being paravirtualized
and not every server virtualization vendor supports running operating
systems in a paravirtualized format. The guest OS / program will be aware
that it operates in a shared medium at some level, and although it may not
be visible to the user, it will be to the system administrator.

D: Cloud Computing
Woodford, C. (2016, August 13). Cloud Computing. Retrieved from
http://www.explainthatstuff.com/cloud-computing-introduction.html
Cloud computing means that instead of all the computer hardware and
software you're using sitting on your desktop, or somewhere inside your
company's network, it's provided for you as a service by another company
and accessed over the Internet, usually in a completely seamless way.
Exactly where the hardware and software is located and how it all works
doesn't matter to you, the user; it's just somewhere up in the nebulous
"cloud" that the Internet represents.
Types of cloud computing
IT people talk about three different kinds of cloud computing, where different
services are being provided for you. Note that there's a certain amount of
vagueness about how these things are defined and some overlap between
them.
Infrastructure as a Service (IaaS) means you're buying access to raw
computing hardware over the Net, such as servers or storage. Since you buy
what you need and pay-as-you-go, this is often referred to as utility
computing. Ordinary web hosting is a simple example of IaaS: you pay a
monthly subscription or a per-megabyte/gigabyte fee to have a hosting
company serve up files for your website from their servers.
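
To make the pay-as-you-go idea tangible, here is a hedged sketch of renting a raw server through the AWS API using the boto3 library. The AMI ID, instance type, and region are placeholders; real values come from your own AWS account.

```python
# A sketch of IaaS "utility computing": leasing a server via the AWS API.
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')  # region is illustrative

response = ec2.run_instances(
    ImageId='ami-0123456789abcdef0',  # placeholder machine image ID
    InstanceType='t3.micro',          # a small pay-as-you-go instance
    MinCount=1,
    MaxCount=1,
)

instance_id = response['Instances'][0]['InstanceId']
print('Leased a server in the cloud:', instance_id)

# When you no longer need it, stop paying for it:
# ec2.terminate_instances(InstanceIds=[instance_id])
```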
Software as a Service (SaaS) means you use a complete application running
on someone else's system. Web-based email and Google Documents are
perhaps the best-known examples. Zoho is another well-known SaaS
provider offering a variety of office applications online.
Platform as a Service (PaaS) means you develop applications using Web-based
tools so they run on systems software and hardware provided by another
company. So, for example, you might develop your own e-commerce
website but have the whole thing, including the shopping cart, checkout, and
payment mechanism running on a merchant's server. App Cloud (from
salesforce.com) and the Google App Engine are examples of PaaS.
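
For a sense of scale, here is a sketch of the kind of code you deploy to a PaaS such as Google App Engine: just the application itself, with the platform supplying the servers, runtime, and scaling. It assumes the Flask package and is illustrative, not a complete deployment.

```python
# main.py - a minimal web app of the kind a PaaS runs for you.
from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return 'Hello from someone else\'s infrastructure!'

if __name__ == '__main__':
    # Locally you run it yourself; on the PaaS, the platform runs it for you.
    app.run(port=8080)
```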

Advantages
The pros of cloud computing are obvious and compelling. If your business is
selling books or repairing shoes, why get involved in the nitty gritty of buying
and maintaining a complex computer system? If you run an insurance office,
do you really want your sales agents wasting time running anti-virus
software, upgrading word-processors, or worrying about hard-drive crashes?
Do you really want them cluttering your expensive computers with their
personal emails, illegally shared MP3 files, and naughty YouTube videos
when you could leave that responsibility to someone else? Cloud computing
allows you to buy in only the services you want, when you want them,
cutting the upfront capital costs of computers and peripherals. You avoid
equipment going out of date and other familiar IT problems like ensuring
system security and reliability. You can add extra services (or take them
away) at a moment's notice as your business needs change. It's really quick
and easy to add new applications or services to your business without
waiting weeks or months for the new computer (and its software) to arrive.
Drawbacks
Instant convenience comes at a price. Instead of purchasing computers and
software, cloud computing means you buy services, so one-off, upfront
capital costs become ongoing operating costs instead. That might work out
much more expensive in the long-term.
If you're using software as a service (for example, writing a report using an
online word processor or sending emails through webmail), you need a
reliable, high-speed, broadband Internet connection functioning the whole
time you're working. That's something we take for granted in countries such
as the United States, but it's much more of an issue in developing countries
or rural areas where broadband is unavailable.
If you're buying in services, you can buy only what people are providing, so
you may be restricted to off-the-peg solutions rather than ones that precisely
meet your needs. Not only that, but you're completely at the mercy of your
suppliers if they suddenly decide to stop supporting a product you've come
to depend on. (Google, for example, upset many users when it announced in
September 2012 that its cloud-based Google Docs would drop support for old
but de facto standard Microsoft Office file formats such as .DOC, .XLS, and
.PPT, giving a mere one week's notice of the change, although, after public
pressure, it later extended the deadline by three months.) Critics charge that
cloud-computing is a return to the bad-old days of mainframes and
proprietary systems, where businesses are locked into unsuitable, long-term
arrangements with big, inflexible companies. Instead of using "generative"
systems (ones that can be added to and extended in exciting ways the
developers never envisaged), you're effectively using "dumb terminals"
whose uses are severely limited by the supplier. Good for convenience and
security, perhaps, but what will you lose in flexibility? And is such a
restrained approach good for the future of the Internet as a whole?

READ ME: I skipped B, too much reading for me. For part C I skipped
the hardware support that makes virtualization possible. I mean you're
going to go through everything, but just to let you know. I hope I helped
even just a little.
P.S. My brain hurts now.
