
What Is Utility Computing?

Utility computing is one of a number of developing technologies, services, and products emerging in the IT world. Along with other technologies such as autonomic computing, grids, and on-demand or adaptive enterprise, utility computing gives IT management a new way of managing future workloads and applications.

Utility computing amounts to buying only the amount of computing you need, much like
plugging into the electrical grid. Traditionally, every layer of a computing environment has been
static or fixed, manually set up to support a single computing solution. All components are
treated as products, installed and configured for specific computers. For example, hardware is
assigned for specific uses such as web server or database; the OS is tied to the hardware (one box
runs Windows, another a UNIX OS); and networks provide access to only specific locations. On
top of all this are the applications, which are installed to run inside this hard-coded, static
environment.

In a utility computing environment, on the other hand, hardware and software are no longer bound to each other. Each layer is virtualized, designed so that it doesn't need to be configured for specific systems, and assigned, in real time, to whatever task most needs the resource.

Let's define utility computing this way: Utility computing consists of a virtualized pool of IT
resources that can be dynamically provisioned to ensure that these resources are easily and
continually reallocated in a way that addresses the organization's changing business and service
needs. These resources can be located anywhere and managed by anyone, and the usage of these
resources can be tracked and billed down to the level of an individual user or group.
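
To make the definition concrete, the sketch below models a virtualized pool whose capacity is provisioned, released, and re-provisioned across workloads while usage is metered per group. It is a minimal illustration; the class and method names are hypothetical, invented for this article rather than taken from any vendor's product.

    from collections import defaultdict

    class ResourcePool:
        def __init__(self, capacity_units):
            self.capacity = capacity_units        # total virtualized capacity
            self.allocated = defaultdict(int)     # workload -> units currently assigned
            self.usage_log = defaultdict(int)     # user/group -> unit-hours consumed

        def provision(self, workload, units):
            # Dynamically assign capacity to whichever workload needs it.
            if sum(self.allocated.values()) + units > self.capacity:
                raise RuntimeError("pool exhausted; reallocate or add capacity")
            self.allocated[workload] += units

        def release(self, workload, units, hours, user):
            # Return capacity to the pool and meter what was consumed.
            self.allocated[workload] -= units
            self.usage_log[user] += units * hours  # tracked down to the user or group

        def usage_report(self):
            return dict(self.usage_log)

    # The same pool serves a web tier by day and a batch job by night.
    pool = ResourcePool(capacity_units=100)
    pool.provision("web-frontend", 60)
    pool.release("web-frontend", 60, hours=12, user="retail-group")
    pool.provision("nightly-batch", 80)
    pool.release("nightly-batch", 80, hours=6, user="finance-group")
    print(pool.usage_report())   # {'retail-group': 720, 'finance-group': 480}

In a real utility environment, the same bookkeeping would sit on top of virtualized servers, storage, and networks rather than a single in-memory counter, but the principle of continual reallocation plus per-user metering is the same.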

Utility computing has suddenly become one of the hot topics in the IT analyst community and
increasingly in larger enterprises that are looking for ways to reduce the fixed costs and
complexity of IT. Gartner and Dataquest believe that the advent of utility as a business model
will "fundamentally challenge the established role of channels for suppliers of all types"
(Gartner, "IT Utility Standards Efforts Take Shape," 10/22/03).

There are three major reasons why utility computing will become significant in IT:

• Promises to address pressing business needs, including making the business more agile,
adaptive, and flexible; and, more importantly, able to treat IT as an increasingly variable
cost. The aim of utility computing is to reduce IT costs.
• Can be supplied in small, incremental bites that deliver fast, demonstrable, significant return on investment, so companies don't have to wait for the full implementation to achieve payoffs. Time to market is also much shorter.
• Provides total flexibility in implementation, from in-house and self-managed to fully
outsourced, with everything in-between—including a hybrid deployment model in which
in-house capacity can be supplemented by third-party resources to handle peak needs.

Our consumer utilities such as gas, water, and electricity all arrive on demand and independent
of the uses to which they are put. This makes for a relatively easy billing structure—consistent
infrastructure (pipe, wire) whose capital costs and maintenance are embedded in the usage rate.
Exchange is simple: product in via infrastructure, invoice and payment on separate channels.
Computing can be bought the same way. This is the basic premise of utility computing, which
promises processing power when you need it, where you need it, at the cost of how much you
use.
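
As a minimal illustration of that premise, the following sketch prices a month of metered usage against a rate card. The rates and line items are invented purely for illustration; real providers publish their own units and prices.

    # Hypothetical rate card; rates and line items are invented for illustration.
    RATES = {"cpu_hours": 0.10, "gb_storage_month": 0.50, "gb_transferred": 0.08}

    def invoice(usage):
        # Price each metered quantity, exactly as a power or water bill would.
        return sum(RATES[item] * amount for item, amount in usage.items())

    march_usage = {"cpu_hours": 5000, "gb_storage_month": 200, "gb_transferred": 1500}
    print(f"March invoice: ${invoice(march_usage):,.2f}")   # $720.00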

Who Is Doing What?


Let's begin by reviewing who is doing what in the IT marketplace. Competing marketing terms
(similar though they may be) are used to describe utility computing:

• IBM introduced on-demand computing.
• Hewlett-Packard (HP) uses the term utility data center (UDC). This comes from the company that claims to have invented the concept, or at least to have first written about it, more than 20 years ago.
• Sun Microsystems calls it N1, a virtualized version of the network and data center.
• Microsoft announced its Dynamic Systems Initiative (DSI), which proposes to unify hardware, software, and service vendors around an open software architecture that enables customers to harness the power of industry-standard hardware and brings simplicity, automation, and flexibility to IT operations.

There are even more. The terminology gets a little more descriptive, or more abstract, depending on your perspective, as the vendors name their actual offerings:

• Virtual data center. Sun seems to promote this term the most, although the other vendors all have their own ways of describing the same sweet spot. It means pooling resources to make them seem like one big machine.
• Autonomic computing. This technology is being promoted mainly by IBM, although other vendors offer variations on the theme. Think self-healing, self-managing networks and systems.
• Adaptive infrastructure. This is HP's version of utility computing.
• Grid computing. Dozens to hundreds of individual systems (PCs, workstations, servers) connected via LAN or WAN to solve compute- or data-intensive problems, now evolving from scientific uses to more practical business applications (a brief sketch follows at the end of this section).
• Dynamic data center (DDC). In addition to the main DSI announcement, Microsoft announced that it will showcase the concept of a DDC that it developed jointly with HP. The DDC features a combination of HP servers, software, storage, and networking hardware connected according to a prescribed network architecture. Microsoft software dynamically assigns, provisions, and centrally manages the DDC resources.
• Web services. An overused term in its own right, but one that actually can be better
understood as part of the larger utility computing concept. Its initial promise is to
automate communication between disparate applications, taking advantage of evolving
open standards such as XML. But where it's headed goes well beyond software
communication protocols, to delivering real services—even going beyond "software as a
service" to "business process as a service."
Utility computing can weave these technologies together, so that users can mix and match them to meet specific needs and requirements.
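
As a rough sketch of the grid idea mentioned above, the fragment below splits a compute-intensive job into independent work units and farms them out to a pool of workers. It runs on local processes only; a real grid scheduler would dispatch the same kind of units to machines across a LAN or WAN, and the function names here are purely illustrative.

    from concurrent.futures import ProcessPoolExecutor

    def work_unit(chunk):
        # Stand-in for a compute- or data-intensive task (simulation, risk model, etc.).
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        # Split one large job into independent work units.
        chunks = [range(i, i + 100_000) for i in range(0, 1_000_000, 100_000)]
        # Farm the units out to a pool of workers; a grid does the same across machines.
        with ProcessPoolExecutor(max_workers=4) as grid:
            partial_results = list(grid.map(work_unit, chunks))
        print("combined result:", sum(partial_results))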

Examples and Case Studies


Utility computing is already rolling out in a number of diverse application areas. Let's consider
some examples.

• A Canadian power company is saving about $500,000 a year using laptops and PDAs,
with some third-party mobile applications, for its 400 field workers. The goal of this
mobile computing project at Hydro One Networks in Toronto is to switch from error-
prone paper trails to fast, accurate digital data paths. The results are savings in paper
processing costs and much more accurate maintenance data.
• Men's clothing retailer Ahlers set up a self-service web site through which its retailers can
quickly get product information and track orders.
• Holiday gift specialist Harry and David installed large IBM mainframes, UNIX servers, and Intel servers to deal with an annual traffic surge before the gift-giving season. About 65% of annual sales take place between mid-November and late December; the company pays the highest costs during that period and not before, an example of "pay as you go" (the arithmetic is sketched after this list).
• Russian transportation company Mostransagentstvo built a new system that lets
customers make travel reservations immediately.
• Swets Information Services is an outsourcing and facilitating partner for the acquisition,
access, and management of scholarly, business, and professional information. They
provide links between 60,000 providers and 65,000 librarians, purchasers, and end users.
Swets Blackwell set up a new online system to let customers see immediate responses to
searches for information in library collections of periodicals.
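
The Harry and David case comes down to simple arithmetic: pay peak rates only while the peak lasts. The sketch below compares owning peak capacity all year with paying for it only during the surge; every number is invented purely to illustrate the comparison.

    # All numbers are hypothetical. Compare owning peak capacity year-round with
    # paying for extra capacity only during a six-week holiday surge.
    PEAK_UNITS, BASELINE_UNITS = 100, 20            # capacity units needed
    OWNED_COST_PER_UNIT_WEEK = 50                   # fixed cost, paid 52 weeks a year
    UTILITY_COST_PER_UNIT_WEEK = 80                 # metered cost, paid only when used

    fixed_cost = PEAK_UNITS * OWNED_COST_PER_UNIT_WEEK * 52
    utility_cost = (BASELINE_UNITS * 46 + PEAK_UNITS * 6) * UTILITY_COST_PER_UNIT_WEEK
    print(f"own peak capacity all year: ${fixed_cost:,}")    # $260,000
    print(f"pay as you go:              ${utility_cost:,}")  # $121,600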

Policy-Based Utility
In the not-too-distant future, corporations will use business-based policy management
technology to control costs, allocate finite infrastructure resources, manage application access,
and police security. With the advent of utility computing, as well as computer and storage
virtualization, corporate concerns about policy ownership issues have risen to new heights. The
major concern is that business-based policy must be end-to-end: set by corporate management and then translated into deployment policies for infrastructure operations; user workflow; network, storage, and server infrastructure; and application software.
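
One way to picture that translation step is a single business-level policy being rendered into layer-specific deployment rules. The sketch below assumes hypothetical policy structures and thresholds rather than any real product's schema.

    from dataclasses import dataclass

    @dataclass
    class BusinessPolicy:
        # Hypothetical business-level policy, as set by corporate management.
        application: str
        priority: str           # e.g. "gold" or "silver"
        max_monthly_cost: int   # budget ceiling in dollars
        data_residency: str     # e.g. "EU-only"

    def translate(policy: BusinessPolicy) -> dict:
        # Render one end-to-end business policy into per-layer deployment rules.
        gold = policy.priority == "gold"
        return {
            "network": {"qos_class": "expedited" if gold else "best-effort"},
            "storage": {"replicas": 3 if gold else 1, "region": policy.data_residency},
            "server": {"min_instances": 2 if gold else 1,
                       "burst_allowed": policy.max_monthly_cost > 10_000},
            "application": {"sla_target": "99.9%" if gold else "99.0%"},
        }

    print(translate(BusinessPolicy("order-entry", "gold", 25_000, "EU-only")))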

The new world of IT, based on utility services, will require a basic rethinking of the ways by
which policy is created and managed within the corporation. No longer can policy exist in
independent islands, nor can it be in the hands of vendors. This issue is so important that a new
position, chief policy officer (CPO), should be created to tackle the tasks of creating corporate
business-based policy practices and procedures, and identifying and integrating policy-island
infrastructures. This first step will set the foundation for an implementation that's based on the
methodologies required to translate, distribute, administer, monitor, and manage policy end-to-
end within the corporation, from the user to the application, in a seamless view rather than
piecemeal.

For utility computing to succeed, the end-to-end business-based policy management issue must
be addressed and planned for. Both service providers and infrastructure vendors must reorient
their perspectives and focus on receiving policy direction from the customer, rather than
dictating policy to the customer.

Open Standards
Open standards are very important to the IT community, corporations, and software vendors and
suppliers. This is an approach by which everyone can benefit. It aids new technologies such as
utility computing by allowing for flexibility and choice.

Standards evolve, develop from numerous sources, and are managed by standards organizations
—bodies that manage the agreement on, development of, education in, and future research for
standards between all interested parties. Today, according to the National Institute of Standards
and Technology (NIST), a U.S. government body, there are close to 800,000 global standards on
just about every conceivable product, service, object, or endeavor, from shoes to aircraft to
mammograms. Cable TV, doorway heights, computer chips, and innumerable other products and services rely in some way on technology and standards.

The standards that apply to the IT industry have been both problem and solution. After 50 years
of IT evolution, we are finally coming to the conclusion that it's in everyone's best interest to
have standards for better communication and compatibility within and between vendors,
suppliers, and users. We still have some way to go, however, before open standards are universally accepted and implemented, and despite their clear benefits, many software vendors and corporations remain reluctant to embrace this approach.

Open standards address long-term, strategic business/industry issues, not simply the short-term,
tactical/technical objectives of a single segment or company within the industry. Successful open
standards expand the opportunities for the entire industry while providing users with long-term
stability for technology. Standards also provide a sound foundation on which users can base their
strategic business decisions.

The battle for "openness" is still being waged. For the most part, businesses are beginning to
embrace open standards as a means of ensuring degrees of flexibility and vendor independence.
Many vendors have also embraced open standards, because their role in the ecosystem as a provider of horizontal infrastructure or networking capability necessitates it, and because they want to participate in markets dominated by other players who use their market position to promote proprietary interfaces. Some vendors have been successful in exploiting what economists
call the network effect—the tendency toward adoption of a common platform owing to the
intersecting interests and interdependencies of ecosystem participants, including consumers. In
turn, these companies have been able to exert control over programming interfaces and document
formats to protect their market positions.

The advantages of open standards for utility computing are significant:

• Allow disparate and previously incompatible hardware and software from multiple vendors to work together seamlessly
• Can allow different network protocols to work together
• Deliver substantial IT cost savings
• Break down the barriers of proprietary systems by providing common platforms
• Decrease or spread the complexity of architectures and systems in general
• Reduce capital expenses, as existing computing resources can be used instead of purchasing new machines
• Lower operating costs

Flexibility and independence are the watchwords of the future.

Conclusions
Utility computing has the potential for wide-reaching business benefits. Because resources are freed from specific tasks, applications and systems become easier to manage. This directly impacts
total cost of ownership and leads to significant capital savings. Furthermore, by treating their IT
infrastructure as a utility, companies become much more flexible and agile in their management
and support of all resources—what people are calling the dynamic or adaptive enterprise. IDC
asked more than 400 customers worldwide to list the most valuable potential benefit of utility
computing. The number one answer was lowering IT operating costs.

Over the next few years, customers will address one of their main concerns—high data center
management costs—by adopting platform monitoring and management tools. Then they will
move into the server- and application-level provisioning embedded in utility computing. It's a great technology that can help IT, end users, and management deliver value and services quickly, with the flexibility to meet swings in demand.
