
A

Project Report
On

VIRTUALISATION
VMware Analysis

Submitted to

COMPUCOM INSTITUTE
OF INFORMATION TECHNOLOGY &
MANAGEMENT, JAIPUR

By
Neeraj Gupta

Undertaken at

LINUX WORLD
Acknowledgement

This work is the result of extensive guidance and dedicated effort. I wish to acknowledge and express
my personal gratitude to all those without whom this work would not have been possible.

First of all, I express my heartfelt devotion to the Almighty God for His graceful blessings at every step,
without which nothing could have become a reality. I feel delighted to get this rare opportunity to show
my profound sense of reverence and indebtedness to my esteemed guide, Ms Sonia Choudhary, and
my Head of Department, Mr. N.S. Yadav, for their keen and sustained interest and valuable advice
throughout the course, which led my training report to a successful completion. For this kind act of
consideration I am beholden to them in a special manner, and words cannot fully convey my feelings of
respect and regard for them.

I would also like to extend my thanks to two people who guided me throughout the training:
Mr. Satish Kumar (Head of Linux Based Development at Linux World) and Mr. Vimal
Daga (Head of Industrial Training Committee, Linux World). Without them it would have been
impossible to complete the project modules on time.

Words cannot express my thanks to my parents, who gave me the support and the opportunity to attain
this success.

Last but not least, I would like to thank all those who directly or indirectly helped and co-operated in
completing this report.

Neeraj Gupta
BE(CS)-06-072-1440
B.Tech. VII Sem.
Computer Engineering
Contents

1. Introduction

1.1 Objectives of the Training

1.2 Profile of the Company

1.2.1 About Company

1.2.2 Research and Development Wing

2. Green Tech

3. Virtualization

3.1 History of Virtualization

3.2 Need for Virtualization

3.3 How does Virtualization work?

3.4 Benefits of Virtualization

4. Virtual Machine and its Benefits

5. Hypervisor

6. Energy Efficiency

7. Types of Virtualisation
8. Cloud Computing

9. Annexure

INTRODUCTION
As part of the curriculum of Rajasthan Technical University, I completed my training of 30 days
during the period from 16th May to 24th June, 2009. I am therefore writing this report to present my
learning and the knowledge gained during the training while working on a live project.

I pursued my training at “Linux World Research and Development, Jaipur”, also known as “LW”.
LinuxWorld is an organization based in India, specializing in open source solutions and providing
commercial Linux support and services.

I opted for Virtualization with Red Hat as my training technology. The project was assigned to a group of
four trainees, and I was selected to lead the team.

Virtualization, the ability of a single system to act as multiple systems, is becoming a key
technology in the data center. Virtualization permits more efficient allocation of hardware resources,
keeping costs under control while maintaining the security that comes with placing key applications in
separate computer silos.
The LinuxWorld Virtualization course teaches system administrators how to deploy virtualization in
Linux, thus taking greater advantage of hardware and other resources.
1.1 Objectives of the Training

Objectives of my training were:

• To understand the basic life cycle of project development and how to implement it on real-life
projects in limited time with accuracy and efficiency.

• To understand the company standards for building a project.

• To gain a practical experience of working in a professional environment.

• To learn how to build an application that satisfies customer needs.


1.2 Profile of the Company

Company Name: Linux World

Company Headquarters: 5, Krishna Tower, Gopal Nagar-A, Gopalpura Bypass, Jaipur-302015

Tel: +91 141 4060 666, Fax: +91 141 4060 620

R & D Division: 118, Keshav Vihar, Behind Lifecare Hospital, Ridhi-Sidhi road, Mansarovar,
Jaipur.

Company Website: http://www.lwindia.com

Company e-mail: course@lwindia.com

Founded: 2000

Managing Director: Mr. Vimal Daga

1.2.1 About Company

LinuxWorld is an organization based in India, specializing in open source solutions and providing
commercial Linux support and services. We deliver open source solutions for business
requirements using our vast knowledge and experience in the Linux operating system, open source
software, and overall system development and integration.

The expertise of LinuxWorld ensures maximum uptime and reliable business process continuity
for the customer. One such service is our outsourced Linux systems administration, where we
carry out the full administration of customers' servers 24x7, ensuring security, reliability and smooth
operation.
Our Services are Offered in India & Worldwide!

We are in the business of looking after your business.


Placing our clients at the centre of all that we do, we provide bespoke Linux support solutions
that fully optimise the everyday on-line availability, performance and security your business
needs.

Unlike many other Linux companies, the company’s approach is based on consultation and
participation. We truly believe that we need to understand your business, your customers and your
needs before we can provide a service and a solution that will positively support and
contribute to your success.

Our combined commercial and technical approach enables us to share our expertise, experience
and passion for all things Linux World with you. This approach has been proven time and time
again to fully meet our clients’ needs, add value to their business and support the development
and implementation of a ‘best-fit’ solution that is right for them.

Research and Development Wing

Research is a process of investigation: an examination of a subject from different points of view.

Research is the hunt for truth. It is getting to know a subject by reading up on it. When a
question is asked, the first spark of a chain reaction is struck, one that terminates in a research
process.
GREEN TECH

There has been a lot of noise of late around companies adopting 'greener' technologies for their IT
equipment. The greener approach not only saves on power costs but also lets you do your
part for the environment. Vendors have been restless as well, pushing their power-friendly
equipment to catch the pulse of the market. Through this story we look at technologies that enable a
green data center, the relevance of cloud computing as a green technology and the concepts behind
building a green office, plus a tour of some of the newest green buildings.

The biggest challenge for a datacenter today is to go green. This is not because we have suddenly
become very nature-friendly; it is because, at the end of the day, when we help nature by cutting down
our power, cooling, space, heat emissions and so on, we directly or indirectly save costs. The reasons for
this are simple. IT is a must for every business, and as the business grows, an organization needs to invest
more in its IT infrastructure. With rising energy costs, more IT equipment translates into higher power
consumption, and also more space, which in any case comes at a premium. So, if products
continue consuming the power they have been consuming, there could be serious implications. Last
year, Gartner estimated that ICT accounts for 2% of global CO2 emissions, the same as the
aviation industry. That is a high figure by any measure, and unsustainable, as Gartner suggests.

To understand this need to go green, let's take an example. Actually, this is not
a made-up example but a real-life case, about an office in Gurgaon. The office
has around 400 computers and a small datacenter with around 30
servers and 5 blades, which together consume around 150 to 200 kW of power.
This consumption includes the power used by the cooling and lighting equipment running in the
datacenter as well as in the building.

In Gurgaon, power is a major problem, and to address it the company has a 320 kVA diesel generator
which burns 1000 rupees worth of diesel every hour and also throws out a lot of pollutants. At night
the city undergoes major power cuts for hours and the office is closed, except for the datacenter, which
is a 24x7 operation. Although the datacenter alone should not require more than a 20 kVA generator,
the same 320 kVA generator is used, so a lot of fuel is burnt without any reason and a lot
of money goes to waste.

Such a situation is nothing but poor planning, and it is very common in our neighbourhood. However,
with a bit of planning, a lot of money can be saved and the environment spared. Below we talk about
some of the key technologies that should be used in a datacenter to make it greener.

Virtualization
This is a trend that has really picked up momentum across the IT industry, and is a key technique being
touted for going green. Every organization today is combating the evils of server proliferation in the
datacenter. There's a server for just about every application: mail, web, proxy, business apps, security,
content management, file sharing and so on. The sad part is that their average utilization hovers around
30-40%, if not less. And yet they continue to run 24x7 and consume energy even when they're idle. So,
in effect, you're paying the energy cost of servers, which are idle almost 70% of the time. That's not a
very pleasant thought indeed, which is why the whole concept of server virtualization has become so
popular. It helps combat this problem.

Virtualization allows you to abstract the hardware from the software. So a server, which traditionally
runs a single OS and application in the data center, is able to run multiple OSes and apps
simultaneously. This would allow you to load a single server with more applications and increase its
utilization. This reduces the number of servers in the data center, and also helps you defer your server
purchase. With new servers, more and more processing capability is coming to the market, so
adopting virtualization has become easier and more efficient. In the second week of September this year,
Intel released its 7400 series Xeon processors, which have six cores per processor. Such innovations are
driving the industry to go greener through more widespread use of virtualization.
Cloud Computing
If you extend the concept of virtualization from a single server to a complete grid, and make its access
available over the Internet, it's called a Cloud. Just imagine, if virtualizing a single server can save you
50 to 70% of resources then how much savings will happen in case your complete data center acts as a
single grid and is then virtualized. We have a complete section on what Cloud computing is and how it
helps in going green, at the end of this story.

Power and backup


It's always good to use renewable sources of energy such as solar and wind. But it might not be feasible
for all datacenters to go for such deployments as the costs are huge and the ROI is slow. But there are
certain things which can be easily done, such as using UPSes instead of generators.

Yes, UPSes are not great for the environment either, and if old batteries are not disposed of properly
they can cause a lot of harm to nature. But they have their share of benefits as well. First,
they don't burn fuel, and second, they save a lot of smoke and money given the sky-rocketing prices of
petroleum products.

Moreover, they preserve power. Going back to the Gurgaon datacenter example, we can easily
replace the generator with a UPS, so that when power utilization is lower at night, the UPS
supplies only the amount of power required and extends the backup time by preserving the unused
energy. If the UPS can give a two-hour backup during the day when the office is fully active, it can give
a six-hour backup at night when only 10 or 20% of the equipment is running.

Blade servers
Blades are a great way of saving energy and reducing e-waste. They let you increase the density of your
datacenter several times over. A single 7U blade chassis can take up to 14 blades, which saves real
estate and in turn reduces the ambient cooling requirements (as you can host your datacenter in a
smaller space). Blade servers are generally built with specialized processors which consume less
electricity. In Intel's terminology these are called LV (low voltage) processors; their
performance per watt is higher than others, though they are not the highest performing processors in the lot.

In our tests we found that a single blade server, while running, draws around 150 to 180 watts of
electricity, whereas a standard rack-mountable server draws 250+ watts. So, you can see a definite
power saving in this, as the rough calculation below shows.
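To put rough numbers on that saving, the back-of-the-envelope sketch below uses the midpoint of the wattages quoted above; the 24x7 duty cycle and the Rs 6 per kWh tariff are assumptions made only for illustration.

# Rough estimate of the blade vs rack-server power saving, using the
# midpoint of the wattages quoted above. The 24x7 duty cycle and the
# Rs 6 per kWh tariff are assumptions for illustration only.
blade_watts = 165          # midpoint of the 150-180 W range
rack_watts = 250           # "250+ watts" for a standard rack-mountable server
hours_per_year = 24 * 365

saved_kwh = (rack_watts - blade_watts) * hours_per_year / 1000.0
print(f"Energy saved per blade: about {saved_kwh:.0f} kWh per year")
print(f"Approximate cost saving: about Rs {saved_kwh * 6:.0f} per year")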

The other benefit you get with blades is that most blade vendors today provide chassis
which are both backward and forward compatible, which means you can easily replace existing
blades with new ones as and when they are available, and consolidate further through virtualization
instead of buying new servers. Vendors also quite often provide buyback schemes for old blades against
new ones, so this also solves your e-waste problem as you don't have to throw those blades away.
You may as well save some money by giving them back to the vendors.

Green equipment and components


It's not just servers and blades that need to go green and consume less power; many components today
come in a greener version. These versions are essentially products with slightly reduced performance
(in some cases) and better power efficiency, along with fewer harmful elements in the body of the device
(paint, metal, wires, etc). A lot of such products are available out there, ranging from simple network
switches to hard disks and even processors; all of them today have greener versions. And since all these
components are part of your datacenter, you should look for such equipment while building or
upgrading one.

Shield Your Datacenter-or be Doomed


As data scales up, the processing power to manage it, storage capacities, cooling needs and server
numbers surge skywards. IT managers and consultants have turned to consolidating the number of servers
that run on their network, or they choose to virtualize depending on the data load at any given point of time,
in order to use fewer servers at optimum levels of capacity. Most enterprises today have embraced
virtualization to the point that RoIs are calculated easily by analyzing business needs and virtualizing a
considerable number of servers to reduce losses. Everything is going just fine. Profits for the next
financial year sound very promising. Right?

Wrong. While virtualization has given businesses a fair say in how the network and IT operations of a
company perform, there is still one important component of the IT infrastructure which is by and large
ignored – security. Interestingly, there are products out there that do not require you to dent your
company finances too much. Besides exorbitant costs, the other big reason for security being rated quite
low on a network manager's list is the complexity of having to interact with multiple vendors for
multiple applications, software and maintenance of network security. To a great extent, this concern can
be eliminated as security vendors are looking to go to market with 'all in one' box format security
devices which require the network manager to deal with just one vendor.

An ideal example is Check Point's newly launched Power 1 range of online security applications and the
more robust UTM 1 Total Security offering. The idea of Power 1 is to combine firewall, IPSec, virtual
private networks (VPN) and intrusion prevention with advanced acceleration technologies, delivering
high-performance multi-Gbps security platforms. They promise performance up to 14 Gbps firewall
throughput, at a price/performance ratio of around $4 per Mbps. With a 6.1 Gbps intrusion prevention
speed, application layer threats can be identified and eliminated fast. Also, mission critical businesses
that happen to experience application security threats, such as worms or buffer overflows, are now
capable of stopping them while maintaining high performance and uninterrupted business. Hardware
upgrades are needed only when the company's networks grow or need to be scaled up. According to
an internet vulnerability study by IBM's X Force Global Technology Services, high severity
vulnerabilities increased last year by 28 percent, and interestingly the vendors with the most vulnerability
exposures were Microsoft, Apple, Oracle, IBM and Cisco. To add to the misery, only 50 percent could be
corrected through vendor patches; even in 2007, 90 percent of vulnerabilities could be remotely
exploited, and this percentage has gone up marginally this year.
VIRTUALISATION

What is Virtualization?
Virtualization is a proven software technology that is rapidly transforming the IT landscape and
fundamentally changing the way that people compute. Today’s powerful x86 computer hardware was
designed to run a single operating system and a single application. This leaves most machines vastly
underutilized. Virtualization lets you run multiple virtual machines on a single physical machine,
sharing the resources of that single computer across multiple environments. Different virtual machines
can run different operating systems and multiple applications on the same physical computer. While
others are leaping aboard the virtualization bandwagon now, VMware is the market leader in
virtualization. Our technology is production-proven, used by more than 150,000 customers, including
100% of the Fortune 100.
History of Virtualization
Virtualization was first developed in the 1960s to partition large, mainframe hardware for better
hardware utilization. Today, computers based on x86 architecture are faced with the same problems of
rigidity and underutilization that mainframes faced in the 1960s.

VMware invented virtualization for the x86 platform in the 1990s to address underutilization and other
issues, overcoming many challenges in the process. Today, VMware is the global leader in x86
virtualization, with over 150,000 customers, including 100% of the Fortune 100.

In the Beginning: Mainframe Virtualization


Virtualization was first implemented more than 30 years ago by IBM as a way to logically partition
mainframe computers into separate virtual machines. These partitions allowed mainframes to
“multitask”: run multiple applications and processes at the same time. Since mainframes were expensive
resources at the time, they were designed for partitioning as a way to fully leverage the investment.

The Need for x86 Virtualization


Virtualization was effectively abandoned during the 1980s and 1990s when client-server applications
and inexpensive x86 servers and desktops led to distributed computing. The broad adoption of Windows
and the emergence of Linux as server operating systems in the 1990s established x86 servers as the
industry standard. The growth in x86 server and desktop deployments led to new IT infrastructure and
operational challenges. These challenges include:

• Low Infrastructure Utilization. Typical x86 server deployments achieve an average utilization of
only 10% to 15% of total capacity, according to International Data Corporation (IDC), a market research
firm. Organizations typically run one application per server to avoid the risk of vulnerabilities in one
application affecting the availability of another application on the same server.
• Increasing Physical Infrastructure Costs. The operational costs to support growing physical
infrastructure have steadily increased. Most computing infrastructure must remain operational at all
times, resulting in power consumption, cooling and facilities costs that do not vary with utilization
levels.

• Increasing IT Management Costs. As computing environments become more complex, the level
of specialized education and experience required for infrastructure management personnel and the
associated costs of such personnel have increased. Organizations spend disproportionate time and
resources on manual tasks associated with server maintenance, and thus require more personnel to
complete these tasks.

• Insufficient Failover and Disaster Protection. Organizations are increasingly affected by the
downtime of critical server applications and inaccessibility of critical end user desktops. The threat of
security attacks, natural disasters, health pandemics and terrorism has elevated the importance of
business continuity planning for both desktops and servers.

• High Maintenance end-user desktops. Managing and securing enterprise desktops present
numerous challenges. Controlling a distributed desktop environment and enforcing management, access
and security policies without impairing users’ ability to work effectively is complex and expensive.
Numerous patches and upgrades must be continually applied to desktop environments to eliminate
security vulnerabilities.
How Does Virtualization Work?
The VMware virtualization platform is built on a business-ready architecture. Use software such
as VMware vSphere and VMware ESXi (a free download) to transform or “virtualize” the
hardware resources of an x86-based computer—including the CPU, RAM, hard disk and
network controller—to create a fully functional virtual machine that can run its own
operating system and applications just like a “real” computer. Each virtual machine
contains a complete system, eliminating potential conflicts. VMware virtualization works
by inserting a thin layer of software directly on the computer hardware or on a host
operating system. This contains a virtual machine monitor or “hypervisor” that allocates
hardware resources dynamically and transparently. Multiple operating systems run
concurrently on a single physical computer and share hardware resources with each other.
By encapsulating an entire machine, including CPU, memory, operating system, and
network devices, a virtual machine is completely compatible with all standard x86
operating systems, applications, and device drivers. You can safely run several operating
systems and applications at the same time on a single computer, with each having access to
the resources it needs when it needs them.
[Screenshot: VMware running Linux under Windows]
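To make this more concrete, here is a minimal sketch that connects to a hypervisor and lists the guests sharing one physical machine. It assumes a Linux host running KVM with the libvirt Python bindings installed (the Red Hat stack used during the training) rather than the VMware products described above, so it illustrates the general idea, not VMware's own tools.

# A minimal sketch, assuming a Linux/KVM host with the libvirt Python
# bindings available. It connects to the local hypervisor and lists every
# guest currently sharing the physical hardware.
import libvirt

# Open a read-only connection to the local system hypervisor.
conn = libvirt.openReadOnly("qemu:///system")

host = conn.getInfo()   # [model, memory MB, cpus, mhz, ...]
print("Host has %d CPUs and %d MB RAM" % (host[2], host[1]))

for dom in conn.listAllDomains():
    state, max_mem, mem, vcpus, cpu_time = dom.info()
    print("Guest: %-20s vCPUs: %d  Memory: %d MB  %s" %
          (dom.name(), vcpus, mem // 1024,
           "running" if dom.isActive() else "shut off"))

conn.close()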

Benefits Of Virtualisation:

• Saving physical space is the most obvious answer. As a rule, our server rooms don’t grow
in sync with demands from the business. Unless the entire company is considering an office
move, IT departments will have to work within their given space despite increases in demand. It’s
obvious that replacing four 2U servers with one more powerful 2U server is going to free up 6U of
space, although this is optimistic and realistically 4U would be freed (I’ll explain the grounds for this
later). This is still a huge saving in real estate and will enable twice
the number of services to be hosted in the same physical space. Many firms
turning to virtualisation will look at moving to blade servers at the same time
to maximise space savings.

• Reduced hardware costs are another advantage of virtualisation. In the
example of my four under-utilised systems, all could be migrated to one server
of the same specifications and still have adequate resources, at 25% of
the current hardware cost. Even if I were to over-spec the new system to allow
for future increases in usage, the savings are not to be ignored.

• Reduced power consumption and reduced need for cooling are benefits which come hand in hand.
While the power consumption and heat output of a system with a high level of utilisation will be
greater than those of a system under a lesser load, the consolidation of
multiple low-load systems should still produce less heat and demand less power
overall. Data centres are finding it increasingly difficult to keep up
with demand for power at the rack, and with the cooling demand which comes with
increased power consumption (and that additional cooling also requires power,
increasing overall running costs).

• The ability to rapidly deploy a new system without
ordering new hardware, building/installing the server and updating
firmware can be a big time saver for sysadmins (whose time is usually at a premium).

Although the above does not give an in-depth explanation of
the many advantages afforded by virtualisation, I hope it gives an informative
overview of the benefits when compared to the more traditional server farm model. There is an abundance
of information to be found on both TechRepublic and Google, and this briefing should give those
interested in the topic a starting point for further research. There are also potential weaknesses of
virtualised services, and conceivable actions which can help to neutralise those risks (and in some
cases make virtualisation a more solid and lower-cost approach), but they are beyond the scope of this
overview.

Save money. How, you say? Here’s a good example: you have just purchased five licenses of
Windows 2003 for five servers about to be implemented into your infrastructure. This would cost you
roughly $10K to $15K in licensing fees. What if I told you I could give you the same infrastructure for
$2K to $5K? How? By simply buying one license of Windows Server 2003 R2, you get up to four
virtual instances free of charge. Simply download any virtualization software you desire and install
four more virtual operating systems for free.

Consolidate servers. Hosting facilities and corporate server rooms are bursting at the seams. It seems
every vendor has some unique software that requires a stand-alone server. In the dot-com era this
might have worked, but today we are faced with increasing energy costs to power these money-sucking
machines. Server rooms are the energy vampires of technology’s new millennium. How can we face
this increasing cost head on? Virtualization. You could have a software and server inventory done and
see how many servers are simply running one application, maybe even a legacy application. By
taking advantage of virtualization, you could easily consolidate 20 servers down to five.

Maximize utilization. Maximizing utilization of servers and consolidation of servers go hand
in hand; you cannot do one without the other. When you consolidate servers, you maximize
utilization. As a consultant working deep in the trenches, I can’t tell you how many times I’ve seen a
huge quad-processor server running a minuscule app where the utilization of the server is not even
registering. That same box, if utilized to its potential, could host three to five virtual instances. It is not
uncommon these days to gather up all the legacy applications you are still running and place them on
one server with several virtual instances.

What is a Virtual Machine?

A virtual machine is a tightly isolated software container that can run its own operating system and
applications as if it were a physical computer. A virtual machine behaves exactly like a physical
computer and contains its own virtual (i.e., software-based) CPU, RAM, hard disk and network interface
card (NIC).

An operating system can’t tell the difference between a virtual machine and a physical machine, nor can
applications or other computers on a network. Even the virtual machine thinks it is a “real” computer.
Nevertheless, a virtual machine is composed entirely of software and contains no hardware components
whatsoever. As a result, virtual machines offer a number of distinct advantages over physical hardware.
Virtual Machines Benefits

In general, VMware virtual machines possess four key characteristics that benefit the user:

• Compatibility: Virtual machines are compatible with all standard x86 computers

• Isolation: Virtual machines are isolated from each other as if physically separated

• Encapsulation: Virtual machines encapsulate a complete computing environment

• Hardware independence: Virtual machines run independently of underlying hardware

Compatibility
Just like a physical computer, a virtual machine hosts its own guest operating system and applications,
and has all the components found in a physical computer (motherboard, VGA card, network card
controller, etc). As a result, virtual machines are completely compatible with all standard x86 operating
systems, applications and device drivers, so you can use a virtual machine to run all the same software
that you would run on a physical x86 computer.

Isolation

While virtual machines can share the physical resources of a single computer, they remain completely
isolated from each other as if they were separate physical machines. If, for example, there are four
virtual machines on a single physical server and one of the virtual machines crashes, the other three
virtual machines remain available. Isolation is an important reason why the availability and security of
applications running in a virtual environment is far superior to applications running in a traditional, non-
virtualized system.

Encapsulation

A virtual machine is essentially a software container that bundles or “encapsulates” a complete set of
virtual hardware resources, as well as an operating system and all its applications, inside a software
package. Encapsulation makes virtual machines incredibly portable and easy to manage. For example,
you can move and copy a virtual machine from one location to another just like any other software file,
or save a virtual machine on any standard data storage medium, from a pocket-sized USB flash memory
card to an enterprise storage area network (SAN).
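As a small illustration of this portability, the sketch below copies a hypothetical virtual disk image to another storage location using nothing more than ordinary file operations; the file names and paths are invented for the example.

# Illustration only: because a VM is encapsulated in ordinary files, moving
# or duplicating it is a plain file copy. The paths below are hypothetical.
import shutil
from pathlib import Path

vm_disk = Path("/var/lib/vms/web01/web01-disk0.vmdk")   # hypothetical VM disk image
backup = Path("/mnt/san/backups/web01-disk0.vmdk")      # e.g. a SAN-backed volume

if vm_disk.exists():
    backup.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(vm_disk, backup)   # copies data and timestamps like any other file
    print(f"Copied {vm_disk} -> {backup}")
else:
    print("No such disk image on this machine (the paths are only examples)")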

Hardware Independence

Virtual machines are completely independent from their underlying physical hardware. For example,
you can configure a virtual machine with virtual components (eg, CPU, network card, SCSI controller)
that are completely different from the physical components that are present on the underlying hardware.
Virtual machines on the same physical server can even run different kinds of operating systems
(Windows, Linux, etc).

When coupled with the properties of encapsulation and compatibility, hardware independence gives you
the freedom to move a virtual machine from one type of x86 computer to another without making any
changes to the device drivers, operating system, or applications. Hardware independence also means that
you can run a heterogeneous mixture of operating systems and applications on a single physical
computer.
What is Hypervisor?

Hypervisor:

• A hypervisor, also called a virtual machine manager, is a program that allows multiple operating
systems to share a single hardware host.

• Each operating system appears to have the host's processor, memory, and other resources all to itself.
However, the hypervisor is actually controlling the host processor and resources, allocating what is
needed to each operating system in turn and making sure that the guest operating systems (called
virtual machines) cannot disrupt each other.

Fundamental characteristics of a hypervisor are:

• Have a purpose-built, thin, OS-independent architecture for enhanced reliability and robustness.

• Make optimal use of available hardware resources.

• Deliver performance acceleration features that support mission critical applications.

• Enable advanced capabilities not previously possible on physical systems.

[Diagram: a Windows hypervisor running directly on the hardware, with a parent partition and child
partitions above it, each partition hosting its own OS and applications.]

Hypervisor Design Goals:

• Security.
• Strong Isolation.
• Performance.
• Virtualization support.
• Simplicity:
1. Restrict activities to monitoring and enforcing.
2. Where possible, push policy up.

Physical Hardware:
• The hypervisor restricts itself to managing a minimum set of hardware
• Processors
• Local APICs
• Constant-rate system counter
• System physical address space
• Focus is on scheduling and isolation
• In Windows virtualization, the parent partition manages the rest
• IHV drivers
• Processor power management
• Device hot add and removal
• New drivers are not required.

Use Virtual Machines as the Building Blocks of your Virtual Infrastructure

Virtual machines are a fundamental building block of a much larger solution: the virtual infrastructure.
While a virtual machine represents the hardware resources of an entire computer, a virtual infrastructure
represents the interconnected hardware resources of an entire IT infrastructure—including computers,
network devices and shared storage resources. Organizations of all sizes use VMware solutions to build
virtual server and desktop infrastructures that improve the availability, security and manageability of
mission-critical applications.

Reduce Costs with a Virtual Infrastructure

Lower your capital and operational costs and improve operational efficiency and flexibility. Go beyond
server consolidation and deploy a standard virtualization platform to automate your entire IT
infrastructure. VMware customers have harnessed the power of virtualization to better manage IT
capacity, provide better service levels, and streamline IT processes. We coined a term for virtualizing
the IT infrastructure–we call it the virtual infrastructure.

What is a Virtual Infrastructure?

A virtual infrastructure lets you share the physical resources of multiple machines across your entire
infrastructure. A virtual machine lets you share the resources of a single physical computer across
multiple virtual machines for maximum efficiency. Resources are shared across multiple virtual
machines and applications. Your business needs are the driving force behind dynamically mapping the
physical resources of your infrastructure to applications—even as those needs evolve and change.
Aggregate your x86 servers along with network and storage into a unified pool of IT resources that can
be utilized by applications when and where they’re needed. This resource optimization drives greater
flexibility in the organization and results in lower capital and operational costs.

A virtual infrastructure consists of the following components:

• Bare-metal hypervisors to enable full virtualization of each x86 computer.

• Virtual infrastructure services, such as resource management and consolidated backup, to
optimize available resources among virtual machines.

• Automation solutions that provide special capabilities to optimize a particular IT process, such as
provisioning or disaster recovery.

Decouple your software environment from its underlying hardware infrastructure so you can aggregate
multiple servers, storage infrastructure and networks into shared pools of resources. Then dynamically
deliver those resources, securely and reliably, to applications as needed. This pioneering approach lets
our customers use building blocks of inexpensive industry-standard servers to build a self-optimizing
datacenter and deliver high levels of utilization, availability, automation and flexibility.
Virtual Infrastructure Benefits:

Gain the benefits of virtualization in production-scale IT environments by building your virtual
infrastructure with the leading virtualization platform from VMware. VMware Infrastructure 3 unifies
discrete hardware resources to create a shared dynamic platform, while delivering built-in availability,
security and scalability to applications. It supports a wide range of operating system and application
environments, as well as networking and storage infrastructure.

We have designed our solutions to function independently of the hardware and operating system so you
have a broad platform choice. Our solutions provide a key integration point for hardware and
infrastructure management vendors and partners to deliver differentiated value that can be applied
uniformly across all application and operating system environments.

Get More from your Existing Hardware

Results after adopting virtual infrastructure solutions, including:

• 60-80% utilization rates for x86 servers (up from 5-15% in non-virtualized PCs)

• Cost savings of more than $3,000 annually for every workload virtualized

• Ability to provision new applications in minutes instead of days or weeks

• 85% improvement in recovery time from unplanned downtime

ENERGY EFFICIENCY

BUILD A GREEN IT INFRASTRUCTURE WITH VIRTUALIZATION


Reduce the energy demands of your datacenter through server consolidation and dynamic management
of computer assets across a pool of servers. Deliver the resources you need where you need them with
VMware.

• Reduce energy costs by 80%.

• Power down servers without affecting applications or users

• Green your datacenter while decreasing costs and improving service levels

Increase Energy Efficiency with Virtualization

Energy consumption is a critical issue for IT organizations today, whether the goal is to reduce cost,
save the environment or keep your datacenter running. In the United States alone, datacenters
consumed $4.5 billion worth of electricity in 2006. Industry analyst Gartner estimates that over the
next 5 years, most enterprise data centers will spend as much on energy (power and cooling) as they do
on hardware infrastructure.

Save Energy by Eliminating Server Sprawl and Underutilization

VMware customers reduce their energy costs and consumption by up to 80% through virtualization.
Most servers and desktops today are in use only 5-15% of the time they are powered on, yet most x86
hardware consumes 60-90% of the normal workload power even when idle. VMware virtualization has
advanced resource and memory management features that enable consolidation ratios of 15:1 or more
which increase hardware utilization to as much as 85%. Once virtualized, a feature of VMware
Distributed Resource Scheduler (DRS) called Distributed Power Management (DPM) monitors
utilization across the datacenter and intelligently powers off unneeded physical servers without
impacting applications and users. With VMware virtualization customers can dramatically reduce
energy consumption without sacrificing reliability or service levels.
Reduce the Environmental Impact of IT

Besides the effect on the company’s bottom line, virtualization is positively impacting the environment. Gartner
estimates that 1.2 million workloads run in VMware virtual machines, which represents an aggregate
power saving of about 8.5 billion kWh—more electricity than is consumed annually in all of New
England for heating, ventilation and cooling.

While this is a good start, there are plenty of opportunities for saving even more energy and money.
Analyst firm IDC states that the un-utilized server capacity equates to approximately:

• $140 billion

• 3 years supply of hardware

• More than 20 million servers


At 4 tons of carbon dioxide (CO2) annually per server, these un-utilized servers produce a total of more
than 80 million tons of CO2 per year. This is more than is emitted from the country of Thailand and
more than half of ALL countries in South America.

Operating System Virtualization

The most prevalent form of virtualization today, virtual operating systems (or virtual machines) are
quickly becoming a core component of the IT infrastructure. Generally, this is the form of virtualization
end-users are most familiar with. Virtual machines are typically full implementations of standard
operating systems, such as Windows Vista or RedHat Enterprise Linux, running simultaneously on the
same physical hardware. Virtual Machine Managers (VMMs) manage each virtual machine
individually; each OS instance is unaware that

1) it’s virtual and

2) that other virtual

operating systems are (or may be) running at the same time. Companies like Microsoft, VMware, Intel,
and AMD are leading the way in breaking the physical relationship between an operating system and its
native hardware, extending this paradigm into the data center. As the primary driving force, data center
consolidation is bringing the benefi ts of virtual machines to the mainstream market, allowing
enterprises to reduce the number of physical machines in their data centers without reducing the number
of underlying applications. This trend ultimately saves enterprises money on hardware, co-location fees,
rack space, power, cable management, and more.

Application Server Virtualization

Application Server Virtualization has been around since the first load balancer, which explains why
“application virtualization” is often used as a synonym for advanced load balancing. The core concept of
application server virtualization is best seen with a reverse proxy load balancer: an appliance or service
that provides access to many different application services transparently. In a typical deployment, a
reverse proxy will host a virtual interface accessible to the end user on the “front end.” On the “back
end,” the reverse proxy will load balance a number of different servers and applications such as a web
server. The virtual interface—often referred to as a Virtual IP or VIP—is exposed to the outside world,
represents itself as the actual web server, and manages the connections to and from the web server as
needed. This enables the load balancer to manage multiple web servers or applications as a single
instance, providing a more secure and robust topology than one allowing users direct access to
individual web servers. This is a one:many (one-to-many) virtualization representation: one server is
presented to the world, hiding the availability of multiple servers behind a reverse proxy appliance.
Application Server Virtualization can be applied to any (and all) types of application deployments and
architectures, from fronting application logic servers to distributing the load between multiple web
server platforms, and even all the way back in the data center to the data and storage tiers with database
virtualization.
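To illustrate the one:many pattern just described, here is a small, purely conceptual sketch (not any vendor's product): a single virtual address is exposed to clients, while requests are spread round-robin across several hypothetical back-end web servers.

# Toy illustration of a reverse-proxy VIP: clients see one virtual address
# while requests are distributed round-robin across hidden back ends.
# The VIP and host names are made up for the example.
from itertools import cycle

VIP = "203.0.113.10"                      # the one address exposed to users
BACKENDS = cycle(["web1.internal:8080",   # the many servers hidden behind it
                  "web2.internal:8080",
                  "web3.internal:8080"])

def route(request_path: str) -> str:
    """Pick the next back end for a request arriving at the VIP."""
    backend = next(BACKENDS)
    return f"{request_path} received on {VIP} -> forwarded to {backend}"

for path in ["/index.html", "/login", "/report.pdf"]:
    print(route(path))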

Application Virtualization

While they may sound very similar, Application Server and Application Virtualization are two
completely different concepts. What we now refer to as application virtualization we used to call “thin
clients.” The technology is exactly the same, only the name has changed to make it more IT-PC
(politically correct, not personal computer). Softgrid by Microsoft is an excellent example of deploying
application virtualization. Although you may be running Microsoft Word 2007 locally on your laptop,
the binaries, personal information, and running state are all stored on, managed, and delivered by
Softgrid. Your local laptop provides the CPU and RAM required to run the software, but nothing is
installed locally on your own machine. Other types of Application Virtualization include Microsoft
Terminal Services and browser-based applications. All of these implementations depend on the virtual
application running locally and the management and application logic running remotely.

Management Virtualization

Chances are you already implement administrative virtualization throughout your IT organization, but
you probably don’t refer to it by this phrase. If you implement separate passwords for your
root/administrator accounts between your mail and web servers, and your mail administrators don’t
know the password to the web server and vice versa, then you’ve deployed management virtualization in
its most basic form.

The paradigm can be extended down to segmented administration roles on one platform or box, which
is where segmented administration becomes “virtual.” User and group policies in Microsoft Windows
XP, 2003, and Vista are an excellent example of virtualized administration rights: Alice may be in the
backup group for the 2003 Active Directory server, but not in the admin group. She has read access to
all the files she needs to back up, but she doesn’t have rights to install new files or software. Although
she is logging into the same server that the true administrator logs into, her user experience differs
from the administrator’s.

Management virtualization is also a key concept in overall data center management. It’s critical that the
network administrators have full access to all the infrastructure gear, such as core routers and switches,
but that they not have admin-level access to servers.

Network Virtualization

Network virtualization may be the most ambiguous, specific definition of virtualization. For brevity,
the scope of this discussion is relegated to what amounts to virtual IP management and segmentation.

A simple example of IP virtualization is a VLAN: a single Ethernet port may support multiple virtual
connections from multiple IP addresses and networks, but they are virtually segmented using VLAN
tags. Each virtual IP connection over this single physical port is independent and unaware of others’
existence, but the switch is aware of each unique connection and manages each one independently.
Another example is virtual routing tables: typically, a routing table and an IP network port share a 1:1
relationship, even though that single port may host multiple virtual interfaces (such as VLANs or the
“eth0:1” virtual network adapters supported by Linux). The single routing table will contain multiple
routes for each virtual connection, but they are still managed in a single table. Virtual routing tables
change that paradigm into a one:many relationship, where any single physical interface can maintain
multiple routing tables, each with multiple entries.

This provides the interface with the ability to bring up (and tear down) routing services on the fly for
one network without interrupting other services and routing tables on that same interface.
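As a purely conceptual sketch of that one:many relationship (it uses no real networking API), the snippet below models a single interface holding a separate routing table per VLAN tag; the VLAN IDs and addresses are invented for the example.

# Purely illustrative: one physical interface holding several virtual
# routing tables, one per VLAN tag, as described above. The VLAN IDs and
# addresses are made up; no real routing is performed.
routing_tables = {
    10: {"10.0.10.0/24": "local", "default": "10.0.10.1"},   # VLAN 10's table
    20: {"10.0.20.0/24": "local", "default": "10.0.20.1"},   # VLAN 20's table
}

def lookup(vlan_tag: int, destination: str) -> str:
    """Resolve a destination against the routing table of one VLAN only."""
    table = routing_tables[vlan_tag]
    # Fall back to that VLAN's own default route if nothing specific matches.
    return table.get(destination, table["default"])

print(lookup(10, "10.0.10.0/24"))   # -> local
print(lookup(20, "8.8.8.8"))        # -> 10.0.20.1 (VLAN 20's default gateway)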
Hardware Virtualization

Hardware virtualization is very similar in concept to OS/Platform virtualization, and to some degree is
required for OS virtualization to occur. Hardware virtualization breaks up pieces and locations of
physical hardware into independent segments and manages those segments as separate, individual
components. Although they fall into different classifications, both symmetric and asymmetric
multiprocessing are examples of hardware virtualization. In both instances, the process requesting CPU
time isn’t aware which processor it’s going to run on; it just requests CPU time from the OS scheduler
and the scheduler takes the responsibility of allocating processor time. As far as the process is
concerned, it could be spread across any number of CPUs and any part of RAM, so long as it’s able to
run unaffected.

Another example of hardware virtualization is “slicing”: carving out precise portions of the system to
run in a “walled garden,” such as allocating a fixed 25% of CPU resources to bulk encryption. If there
are no processes that need to crunch numbers on the CPU for block encryption, then that 25% of the
CPU will go unutilized. If too many processes need mathematical computations at once and require
more than 25%, they will be queued and run as a FIFO buffer because the CPU isn’t allowed to give out
more than 25% of its resources to encryption. This type of hardware virtualization is sometimes referred
to as pre-allocation.
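The sketch below is a toy simulation of that pre-allocation behaviour: of four CPU slots, exactly one (25%) is reserved for encryption work, and encryption jobs beyond that cap wait in a FIFO queue even if the other slots are idle. The slot counts and job names are invented for the example.

# Toy simulation of pre-allocation: one of four CPU slots (25%) is carved
# out for encryption; extra encryption jobs queue FIFO even though the
# general-purpose slots may be idle, causing an artificial shortage.
from collections import deque

TOTAL_SLOTS = 4
ENCRYPTION_SLOTS = 1            # the fixed 25% carve-out

running_encryption = 0
waiting = deque()               # FIFO queue for jobs over the cap

def submit_encryption(job: str) -> None:
    global running_encryption
    if running_encryption < ENCRYPTION_SLOTS:
        running_encryption += 1
        print(f"{job}: running on the reserved slot")
    else:
        waiting.append(job)
        print(f"{job}: queued FIFO (cap of {ENCRYPTION_SLOTS} slot reached)")

for job in ["encrypt-A", "encrypt-B", "encrypt-C"]:
    submit_encryption(job)

print("Slots that encryption may never use:", TOTAL_SLOTS - ENCRYPTION_SLOTS)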

Asymmetric multiprocessing is a form of pre-allocation virtualization where certain tasks are only run
on certain CPUs. In contrast, symmetric multiprocessing is a form of dynamic allocation, where CPUs
are interchangeable and used as needed by any part of the management system. Each classification of
hardware virtualization is unique and has value, depending on the implementation. Pre-allocation
virtualization is perfect for very specific hardware tasks, such as offloading functions to a highly
optimized, single-purpose chip. However, pre-allocation of commodity hardware can cause artificial
resource shortages if the allocated chunk is underutilized. Dynamic allocation virtualization is a more
standard approach and typically offers greater benefit when compared to pre-allocation. For true virtual
service provisioning, dynamic resource allocation is important because it allows complete hardware
management and control for resources as needed; virtual resources can be allocated as long as hardware
resources are still available. The downside to dynamic allocation implementations is that they typically
do not provide full control over the dynamicity, leading to processes which can consume all available
resources.
Storage Virtualization

As another example of a tried-and-true technology that’s been dubbed “virtualization,” storage
virtualization can be broken up into two general classes: block virtualization and file virtualization.
Block virtualization is best summed up by Storage Area Network (SAN) and Network Attached Storage
(NAS) technologies: distributed storage networks that appear to be single physical devices. Under the
hood, SAN devices themselves typically implement another form of storage virtualization: RAID.
iSCSI is another very common and specific virtual implementation of block virtualization, allowing an
operating system or application to map a virtual block device, such as a mounted drive, to a local
network adapter (software or hardware) instead of a physical drive controller. The iSCSI network
adapter translates block calls from the application to network packets the SAN understands and then
back again, essentially providing a virtual hard drive.

File virtualization moves the virtual layer up into the more human-consumable file and directory
structure level. Most file virtualization technologies sit in front of storage networks and keep track of
which files and directories reside on which storage devices, maintaining global mappings of file
locations. When a request is made to read a file, the user may think the file is statically located on
their personal remote drive, P:\My Files\budget.xls; however, the file virtualization appliance knows that
the file is actually located on an SMB server in a data center across the globe
at //10.0.16.125/finance/alice/budget-document/budget.xls. File-level virtualization obfuscates the static
virtual location pointer of a file (in this case, on Alice’s P:\ drive) from the physical location, allowing
the back-end network to remain dynamic. If the IP address for the SMB server has to change, or the
connection needs to be re-routed to another data center entirely, only the virtual appliance’s location
map needs to be updated, not every user that needs to access their P:\ drive.
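The sketch below shows, in miniature, the global location map such a file-virtualization layer maintains, reusing the P:\ drive example from the text; users keep the same static virtual path while the physical back-end location can change freely (the second, moved, back-end address is invented for the example).

# A tiny sketch of a file-virtualization location map: the virtual path the
# user sees stays fixed while the physical location behind it can change.
location_map = {
    r"P:\My Files\budget.xls":
        "//10.0.16.125/finance/alice/budget-document/budget.xls",
}

def resolve(virtual_path: str) -> str:
    """Translate the path the user sees into its current physical location."""
    return location_map[virtual_path]

print(resolve(r"P:\My Files\budget.xls"))

# If the SMB server moves or traffic is re-routed to another data center,
# only this one map entry changes; every user's P:\ drive keeps working.
location_map[r"P:\My Files\budget.xls"] = \
    "//10.8.0.40/finance/alice/budget-document/budget.xls"
print(resolve(r"P:\My Files\budget.xls"))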
Service Virtualization

And finally, we reach the macro definition of virtualization: service virtualization. Service
virtualization is consolidation of all of the above definitions into one catch-all catchphrase. Service
virtualization connects all of the components utilized in delivering an application over the network, and
includes the process of making all pieces of an application work together regardless of where those
pieces physically reside.

This is why service virtualization is typically used as an enabler for application availability. For
example, a web application typically has many parts: the user-facing HTML; the application server that
processes user input; the SOA gears that coordinate service and data availability between each
component; the database back-end for user, application, and SOA data; the network that delivers the
application components; and the storage network that stores the application code and data. Service
virtualization allows each one of the pieces to function independently and be “called up” as needed for
the entire application to function properly. When we look deeper into these individual application
components, we may see that the web server is load-balanced between 15 virtual machine operating
systems, the SOA requests are pushed through any number of XML gateways on the wire, the database
servers may be located in one of five global data centers, and so on. Service virtualization combines
these independent pieces and presents them together to the user as a single, complete application.
What is Cloud Computing?

Cloud Computing is a relatively new term that conveys the use of information technology services and
resources that are provided on a service basis. According to a 2008 IEEE paper, “Cloud Computing is a
paradigm in which information is permanently stored in servers on the internet and cached temporarily
on clients that include desktops, entertainment centers, tablet computers, notebooks, wall computers,
hand-helds, sensors, monitors, etc.”

History of Cloud Computing

In network diagrams, resources that are provided by an outside entity are depicted in a “cloud”
formation. In the current (but still evolving!) model of cloud computing, the cloud computing
infrastructure consists of services that are offered up and delivered through data centers that can be
accessed from anywhere in the world. The cloud, then, in this model, is the single point of access for the
computing needs of the customers being serviced.

In the cloud computing definitions that are evolving, the services in the cloud are being provided by
enterprises and accessed by others via the internet. The resources are accessed in this manner as a
service – often on a subscription basis. The users of the services being offered often have very little
knowledge of the technology being used. The users also have no control over the infrastructure that
supports the technology they are using.

Pros and Cons of Cloud Computing

In cloud computing models, customers do not own the infrastructure they are using; they basically rent
it, or pay as they use it. The loss of control is seen as a negative, but it is generally outweighed by
several positives. One of the major selling points of cloud computing is lower costs. Companies will
have lower technology-based capital expenditures, which should enable companies to focus their money
on delivering the goods and services that they specialize in. There will be more device and location
independence, enabling users to access systems no matter where they are located or what kind of device
they are using. The sharing of costs and resources amongst so many users will also allow for efficiencies
and cost savings around things like performance, load balancing, and even locations (locating data
centers and infrastructure in areas with lower real estate costs, for example). Cloud computing is also
thought to affect reliability and scalability in positive ways. One of the major topics in information
technology today is data security. In a cloud infrastructure, security typically improves overall, although
there are concerns about the loss of control over some sensitive data. Finally, cloud computing results in
improved resource utilization, which is good for the sustainability movement (i.e. green technology or
clean technology).

Cloud Computing – Companies to Watch

There is a big push for cloud computing services by several big companies. Amazon.com has been at the
forefront of the cloud computing movement. Google and Microsoft have also been very publicly
working on cloud computing offerings. Some of the other companies to watch for in this field
are Yahoo!, IBM, Intel, HP and SAP. Several large universities have also been busy with large-scale
cloud computing research projects.
CONCLUDING REMARKS

Any project is considered successful only if it is completed on time, is user compliant and satisfies all the
requirements. Finally, by putting in great effort, we successfully completed the project. The LW
Head for Industrial Training, Mr. Vimal Daga, has classified our project, after a presentation, as
completed and up to specification. The biggest conclusion that I have derived from the training is that,
“Only GOOGLE is your friend in the company. For the rest, you are on your own.”

There are many potential fields in which we can extend this project in the future. The features that can
be considered as future extensions are:

• An additional security mechanism can be applied that encrypts the data at the front end and then
sends it to the back end. The same process is applied to data transmitted from the back end to the
front end.

• It can be used in any organisation where any number of people want to communicate via e-
mail.
ANNEXURE

Abbreviations are as follows:

• XEN
• VM - Virtual Machine
• VMM - Virtual Machine Manager
• HTTP - Hypertext Transfer Protocol
• DPM - Distributed Power Management
• DRS - Distributed Resource Scheduler

Bibliography is as follows:

• www.wikipedia.org
• www.google.com
• www.lwindia.com
• Linux For Essentials
• www.discovervirtualisation.com
• www.vmware.com
• www.greentech.org
• www.pcquest.com
