
http://cleverlogic.net/articles/cloud-computing-security-issues-and-solutions
Cloud computing:
Cloud computing is the use of computing resources (hardware and software) that are delivered as
a service over a network.
Cloud systems are very economical and useful for businesses of all sizes. Cloud computing is a
technology that everyone would love to take full advantage of; it offers so much:
1. Limitless Flexibility: With access to millions of different databases, and the ability to
combine them into customized services.
2. Better Reliability and Security: Users no longer need to worry about their hardware
failing or being stolen.
3. Enhanced Collaboration: By enabling online sharing of information and applications,
the cloud offers users new ways of working together.
4. Portability: Users can access their data from anywhere.
5. Simpler devices: With data stored and processed in the cloud, users simply need an
interface to access and use this data, play games, etc.
6. Unlimited Storage
7. Access to lightning-quick processing power.
Cloud services are very exciting and useful, but they have many open security issues (Weis & Alves-Foss, 2011). One issue with cloud computing is that the management of the data might not be fully trustworthy; the risk of malicious insiders in the cloud and the failure of cloud services have received strong attention from companies (AlZain, Soh, & Pardede, 2012).
Security is a troubling concern for cloud computing, as shown in a survey conducted by the IDC enterprise panel, which confirms that security is the top concern of cloud users (Behl & Behl, 2012). Cloud systems have a lot of potential; however, several concerns, such as those discussed in this article, slow their adoption and, in turn, the growth and usage of cloud systems. Cloud security is therefore one of the issues that need to be addressed to allow faster growth of cloud computing.


Current Problems and Solutions
The main problems cloud computing faces in providing data security are preserving the confidentiality and integrity of data. The primary solution for these problems is encryption of the data stored in the cloud. However, encryption also brings up new problems. Here is an overview of some of the main problems faced by cloud systems and some solutions.
Trust
Trust between the service provider and the customer is one of the main issues cloud computing faces today. There is no way for the customer to be sure whether the management of the service is trustworthy, and whether there is any risk of insider attacks. This is a major issue and has received strong attention from companies. The only legal document between the customer and
service provider is the Service Level Agreement (SLA). This document contains all the
agreements between the customer and the service provider; it contains what the service provider
is doing and is willing to do (Weis & Alves-Foss, 2011). However, there is currently no clear format for the SLA, and as such, there may be services not documented in the SLA that the customer will turn out to need at some later time.
Legal Issues
There are several regulatory requirements, privacy laws and data security laws that cloud
systems need to adhere to. One of the major problems with adhering to the laws is that laws vary
from country to country, and users have no control over where their data is physically located.
Confidentiality
Confidentiality is preventing the improper disclosure of information. Preserving confidentiality
is one of the major issues faced by cloud systems since the information is stored at a remote
location that the service provider has full access to. Therefore, there must be some method of preserving the confidentiality of data stored in the cloud. The main method used to preserve data confidentiality is data encryption; however, encryption brings about its own issues, some of
which are discussed later.
Authenticity (Integrity and Completeness)
Integrity is preventing the improper modification of information. Preserving integrity, like confidentiality, is another major issue faced by cloud systems that needs to be handled, and is also mainly addressed through data encryption.
In a common database setup, there would be many users with varying amounts of rights. A user
with a limited set of rights might need to access a subset of data, and might also want to verify
that the delivered results are valid and complete (that is, not poisoned, altered or missing
anything) (Weis & Alves-Foss, 2011).
A common approach to such a problem is to use digital signatures; however, the problem with digital signatures is that not all users have access to the data superset, so they cannot verify any subset of the data even if they're provided with the digital signature of the superset; and too many possible subsets of data exist to create digital signatures for each.
Recently, researchers have tried to find solutions to this problem. The primary proposal is to provide customers with the superset's signature and some metadata along with the query results. This metadata (called verification objects) lets customers fill in the blanks of the data which they don't have access to and validate the signature. There are two primary variations of
this idea, one based on Merkle trees and the other based on signature aggregation (Weis &
Alves-Foss, 2011).
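To make the Merkle-tree variant concrete, here is a minimal Python sketch of the idea (an illustration only, not the exact constructions surveyed by Weis & Alves-Foss): the owner signs just the Merkle root, and a customer holding one row plus the sibling hashes along its path (the verification object) can recompute the root and check it against the signed value. All data and names are hypothetical.

# Sketch of Merkle-tree "verification objects": the owner signs only the root;
# a client with one row plus the sibling hashes on its path can recompute the
# root and compare it with the signed value.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(rows):
    """Return all levels of the Merkle tree, leaves first."""
    level = [h(r) for r in rows]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def auth_path(levels, index):
    """Sibling hashes needed to recompute the root from leaf `index`."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1                     # neighbour in the same pair
        path.append((level[sibling], sibling < index))
        index //= 2
    return path

def recompute_root(row, path):
    node = h(row)
    for sibling, sibling_is_left in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node

rows = [b"row-1", b"row-2", b"row-3", b"row-4"]
levels = build_tree(rows)
signed_root = levels[-1][0]                     # in practice this value carries a digital signature
vo = auth_path(levels, 2)                       # verification object for row index 2
assert recompute_root(rows[2], vo) == signed_root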
Encryption
The main method used for ensuring data security in the cloud is encryption. Encryption seems
like the perfect solution for ensuring data security; however, it is not without its drawbacks.
Encryption takes considerably more computational power, and this is multiplied by several
factors in the case of databases (Weis & Alves-Foss, 2011). Cryptography greatly affects
database performance because each time a query is run, a large amount of data must be
decrypted; and since the main operation on a database is running queries, the number of decryption operations quickly becomes excessive. Several approaches have been developed to handle data encryption, each with its own compromises and downsides: some provide better security mechanisms, and some focus on offering more operations to the customers. Some of these methods are mentioned below:
Early Approaches
Early approaches used extensions to the query language that simply applied encryption before writing to the database and decryption after reading from it.
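A minimal sketch of this early approach, assuming the third-party Python cryptography package and using a dictionary as a stand-in for the remote database (both assumptions, not part of the original text):

# Encrypt just before writing to the store and decrypt just after reading,
# so the stored data is always ciphertext. Fernet combines AES and HMAC.
from cryptography.fernet import Fernet

class EncryptedStore:
    def __init__(self, key: bytes):
        self._fernet = Fernet(key)
        self._backend = {}                      # placeholder for the cloud database

    def write(self, record_id: str, plaintext: bytes) -> None:
        self._backend[record_id] = self._fernet.encrypt(plaintext)

    def read(self, record_id: str) -> bytes:
        return self._fernet.decrypt(self._backend[record_id])

key = Fernet.generate_key()                     # in practice the key stays with the customer
store = EncryptedStore(key)
store.write("acct-42", b"balance=1000")
assert store.read("acct-42") == b"balance=1000"

With this simple scheme a single key suffices, which is exactly why it runs into the key-management issues discussed further below.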
Querying Encrypted Data
Several methods have been proposed to handle the querying of encrypted data; one such method was proposed by Purushothama B.R. and B.B. Amberker (Purushothama & Amberker, 2013).
In the proposed scheme, several cryptographic methods were used to encrypt the data in each cell
of each table to be stored in the cloud. When a user needs to query this data, the query
parameters are encrypted and checked against the stored data. No data decryption is done in the cloud, thus protecting the authenticity and integrity of the information. When the results of the query are returned (in encrypted form) to the user, the user decrypts the data and uses it. This scheme also offers significant improvements for select queries over previous related schemes.
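The cited scheme uses its own cryptographic constructions; the sketch below only illustrates the general flow, with a deterministic keyed tag (HMAC) standing in for cell encryption: the client encodes both the stored cells and the query parameters under a key the cloud never sees, so the cloud matches opaque values without decrypting anything. The table and key are hypothetical.

# Querying without server-side decryption (simplified illustration, not the
# scheme from the cited paper): cells are stored as opaque keyed tags, and the
# client sends the tag of the query parameter instead of the plaintext.
import hmac, hashlib

SECRET_KEY = b"client-side-secret"              # held only by the customer

def tag(value: str) -> str:
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

# "Cloud-side" table: every cell already arrives as an opaque tag.
employees = [
    {"name": tag("alice"), "dept": tag("finance")},
    {"name": tag("bob"),   "dept": tag("engineering")},
]

def select_where(table, column, encoded_value):
    """Runs in the cloud: compares opaque tags, never decrypts anything."""
    return [row for row in table if row[column] == encoded_value]

# Client side: encode the query parameter, send only the tag to the cloud.
matches = select_where(employees, "dept", tag("finance"))
assert len(matches) == 1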
Key Management
Since encryption is the main method used to ensure data security, naturally we would be faced
with the problem of key management. The encryption keys cannot be stored in the cloud; therefore, the customer must manage and control a key management system for any
cryptographic method used (Weis & Alves-Foss, 2011). For simple encryption schemes such as the early approaches described above, this might not be a problem, since a single encryption and decryption key can be used for the entire system. However, almost any real database requires a more complex system (Weis & Alves-Foss, 2011). The system used to manage the keys might even have to take the form of a small database, which would itself have to be a secure local database; this, again, may defeat the purpose of moving the original database to the cloud.
Clearly, key management is a real problem for cloud systems using encryption, and recent research has been done on two-level encryption, which allows the key management system itself to be stored in the cloud. This scheme is efficient and may be the solution to the key management problems cloud systems face; however, it hasn't yet been applied specifically to database encryption.
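A rough sketch of the two-level idea (an illustration of key wrapping, not the published scheme): per-table data keys are encrypted under a master key that never leaves the customer, so the wrapped key table can safely sit in the cloud next to the data. Assumes the Python cryptography package; names are hypothetical.

# Two-level encryption for key management: data keys are wrapped under a
# customer-held master key, so the wrapped keys can live in the cloud.
from cryptography.fernet import Fernet

master_key = Fernet.generate_key()              # stays on the customer side
master = Fernet(master_key)

# Level 1: generate a data key per table and wrap it with the master key.
data_key = Fernet.generate_key()
cloud_key_table = {"orders": master.encrypt(data_key)}   # safe to store remotely

# Level 2: encrypt table contents with the (unwrapped) data key.
orders = Fernet(data_key).encrypt(b"order #1: 3 widgets")

# Later: fetch and unwrap the data key, then decrypt the data.
recovered_key = master.decrypt(cloud_key_table["orders"])
assert Fernet(recovered_key).decrypt(orders) == b"order #1: 3 widgets"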
Data Splitting
Some methods have been developed that serve as alternatives to encryption. These methods are
generally faster than encryption but have their own drawbacks.
Data splitting was initially developed by Divyakant Agrawal and his colleagues. The idea is to split the data over multiple hosts that cannot communicate with each other; only the owner, who can access both hosts, can collect and combine the separate datasets to recreate the original. This method is extremely fast compared to encryption, but it requires at least two separate, yet homogeneous, service providers.
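Agrawal's scheme involves considerably more machinery; the toy sketch below only shows the core intuition for a numeric value using additive splitting: each provider stores a share that is meaningless on its own, and only the owner, who can read both, reconstructs the original.

# Toy data splitting: a value becomes two random-looking shares held by
# different providers; either share alone reveals nothing, their sum restores it.
import secrets

MODULUS = 2**64

def split(value: int):
    share_a = secrets.randbelow(MODULUS)
    share_b = (value - share_a) % MODULUS
    return share_a, share_b                     # send to provider A and provider B

def combine(share_a: int, share_b: int) -> int:
    return (share_a + share_b) % MODULUS        # only the owner sees both shares

salary = 85_000
a, b = split(salary)
assert combine(a, b) == salary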
Multi-clouds Database Model (MCDB)
(AlZain, Soh, & Pardede, 2012)
This is a method of data splitting which uses multiple clouds and several other techniques to ensure data is split across clouds in a manner that preserves data confidentiality and integrity and ensures availability.
MCDB provides cloud database storage across multiple clouds. The MCDB model does not preserve security within a single cloud; rather, the security and privacy of data are preserved by applying a multi-shares technique across multiple clouds. By doing so, it avoids the negative effects of a single cloud, reduces the security risks from malicious insiders in the cloud computing environment, and reduces the negative impact of encryption techniques (AlZain, Soh, & Pardede, 2012).
MCDB preserves the security and privacy of users' data by replicating data among several clouds, using a secret sharing approach based on Shamir's secret sharing algorithm, and using a triple modular redundancy (TMR) technique with the sequential method. It relies on a cloud manager to manage and control operations between the clients and the multiple clouds inside the super cloud service provider (AlZain, Soh, & Pardede, 2012).
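A minimal sketch of Shamir's secret sharing with a threshold of two shares out of three, to illustrate how an MCDB-style system could split a value across clouds (a simplified illustration, not the MCDB implementation; the prime and the secret are arbitrary):

# Shamir secret sharing, threshold 2 of 3: each cloud stores one point of a
# random line over a prime field, and any two points reconstruct the secret.
import secrets

PRIME = 2**61 - 1                               # a Mersenne prime, large enough for the toy secret

def make_shares(secret: int, n: int = 3):
    slope = secrets.randbelow(PRIME - 1) + 1    # random degree-1 polynomial f(x) = secret + slope*x
    return [(x, (secret + slope * x) % PRIME) for x in range(1, n + 1)]

def reconstruct(share1, share2):
    (x1, y1), (x2, y2) = share1, share2
    # Lagrange interpolation evaluated at x = 0 for two points.
    inv = pow((x2 - x1) % PRIME, PRIME - 2, PRIME)
    slope = ((y2 - y1) * inv) % PRIME
    return (y1 - slope * x1) % PRIME

shares = make_shares(123456789)                 # one share per cloud provider
assert reconstruct(shares[0], shares[2]) == 123456789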
Multi-Tenancy
Cloud systems share computational resources, storage, and services between multiple customer applications (tenants) in order to achieve efficient utilization of resources while decreasing cost; this is referred to as multi-tenancy. However, this sharing of resources can violate the confidentiality of tenants' IT assets. Unless there is a degree of isolation between these tenants, it is very difficult to keep an eye on the data flowing between different realms, which makes the multi-tenancy model insecure for adoption (Behl & Behl, 2012). Some multi-tenancy issues are:
Virtual Machine Attacks
Typically, in a cloud, business data and applications are stored and run within virtual machines. These virtual machines usually run on a server alongside other virtual machines, some of which can be malicious. Research has shown that attacks against, with, and between virtual machines are possible.
If one of the virtual machines on a server hosts a malicious application that breaches legal or operational barriers, this may lead legal authorities, the service provider, or other authorities to shut down and block access to the entire server. This would greatly affect the users of the other virtual machines on the server.
Shared Resources
Assuming the cloud system isn't running on a virtual machine, the hardware itself is now an issue.
Research has shown that it is possible for information to flow between processor cores, meaning
that an application running on one core of a processor can get access to information of another
application running on another core. Applications can also pass data between cores.
Multicore processors often have complex and large caches. With these hardware resources, if data is decrypted in the cloud, even for a moment for comparison, it will exist unencrypted in the memory of one of the cloud machines. The problem is that we don't know what other applications are running on these machines. Other malicious cloud users or the service provider could be monitoring the machine's memory and be able to read our data. However, the likelihood of these hardware attacks is very small (Weis & Alves-Foss, 2011).
If one of the applications on a server is malicious, this may lead to the service provider or some other authority shutting down and blocking access to the entire server in order to investigate and determine the malicious application. This would greatly affect the users of the other applications on the server.
Discussion/Conclusion
Cloud computing offers some incredible benefits: unlimited storage, access to lightning-quick processing power, and the ability to easily share and process information; however, it does have several issues, and most of them are security related. Cloud systems must overcome many obstacles before they become widely adopted, but they can be utilized right now with some compromises and under the right conditions. People can enjoy the full benefits of cloud computing if we can address the very real security concerns that come along with storing sensitive information in databases scattered around the internet.
We have discussed several security issues that currently affect cloud systems; however, there
may be many unmentioned and undiscovered security issues. Research is currently being done
on the different known issues faced by cloud systems and possible solutions for these issues; however, there is still a need for better solutions if cloud systems are to be widely adopted.
One of the main problems that needs to be addressed is coming up with a clear and standardized format for the Service Level Agreement (SLA): a format that fully documents all of the services, and what services and processes the service provider would provide to back up its assurances.
When customers have the right level of expectations and the insecurities are deemed
manageable, cloud computing as a whole will gain ground and take hold as usable technology
(Weis & Alves-Foss, 2011).
Another major issue cloud systems face is encryption. Encryption is the main method of ensuring the security of data stored in the cloud; however, encryption is computationally expensive. Encryption methods specific to DaaS (cloud databases) have been developed, and more research is currently being done on encryption mechanisms for cloud systems; however, more efficient methods are still needed to help accelerate the adoption of cloud systems.


http://stackoverflow.com/questions/9587919/what-is-the-difference-between-scalability-and-elasticity
SCALABILITY - ability of a system to increase the workload on its current hardware resources
(scale up);
ELASTICITY - ability of a system to increase the workload on its current and additional
(dynamically added on demand) hardware resources (scale out);
Usually, when someone says a platform or architecture scales, they mean that hardware costs increase linearly with demand. For example, if one server can handle 50 users, 2 servers can handle 100 users and 10 servers can handle 500 users. If for every 1,000 users you gain you need twice the number of servers, then it can be said your design does not scale, as you would quickly run out of money as your user count grew.
Elasticity is used to describe how well your architecture can adapt to workload in real time.
For example,
if you had one user logon every hour to your site, then you'd really only need one server to
handle this. However, if all of a sudden, 50,000 users all logged on at once, can your architecture
quickly (and possibly automatically) provision new web servers on the fly to handle this load? If
so, it could be said that your design is elastic.
Scalability is the ability of the system to accommodate larger loads just by adding resources, either by making the hardware stronger (scale up) or by adding additional nodes (scale out).
Elasticity is the ability to fit the resources needed to cope with loads dynamically, usually in relation to scaling out, so that when load increases you scale by adding more resources, and when demand wanes you shrink back and remove unneeded resources.
Elasticity is mostly important in a cloud environment where you pay per use and don't want to pay for resources you do not currently need on the one hand, but want to meet rising demand when needed on the other hand.
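As a rough illustration of what such an elastic policy looks like in code (the threshold and limits below are made-up figures, and real platforms implement this in their auto-scaling services rather than in application code):

# Elastic scaling policy sketch: grow the pool when load per instance exceeds
# a target, shrink it when demand wanes, so you only pay for what you use.
import math

TARGET_PER_INSTANCE = 500        # requests/second one instance handles comfortably (assumed figure)
MIN_INSTANCES, MAX_INSTANCES = 1, 50

def desired_instances(current_load_rps: float) -> int:
    needed = math.ceil(current_load_rps / TARGET_PER_INSTANCE)
    return max(MIN_INSTANCES, min(MAX_INSTANCES, needed))

for load in (100, 12_000, 50_000, 800):          # quiet hour, spike, bigger spike, back to normal
    print(load, "rps ->", desired_instances(load), "instances")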
http://abhisarswami.blogspot.in/2011/05/cloud-elastic-or-scalable-whats.html
What is meant by elasticity in relation to the cloud is that it can scale the infrastructure up or down to meet the requirement. Elasticity is about instantly bringing you the necessary resources when you need them and instantly decommissioning them when you don't. Elasticity is an important feature of the cloud offering, as you are charged only for the time during which you are using it.

Scalability in terms of the application architecture means how well an application can gracefully handle increased load. One of the ways to achieve scalability is adding additional hardware. Additional hardware can be introduced either through manual intervention or through automatic provisioning. Automatic provisioning is the elastic nature of the platform. Not only does the elastic platform have the capability of adding additional hardware, it can also scale down when the hardware is not required. With the elasticity of the platform, your application should have the capability to use the additional or reduced hardware.

Hence, elasticity makes sense for the platform and scalability makes sense for the application. Only a scalable application can make the best use of an elastic platform.
http://www.cs.ucsb.edu/~sudipto/papers/dasfaa.pdf
2.1 Scalability
Scalability is a desirable property of a system, which indicates its ability to either handle
growing amounts of work in a graceful manner or its ability to improve throughput when
additional resources (typically hardware) are added. A system whose performance
improves after adding hardware, proportionally to the capacity added, is said to be a
scalable system. Similarly, an algorithm is said to scale if it is suitably efficient and
practical when applied to large situations (e.g. a large input data set or large number of
participating nodes in the case of a distributed system). If the algorithm fails to perform
when the resources increase then it does not scale.

There are typically two ways in which a system can scale by adding hardware
resources.

The first approach is when the system scales vertically and is referred to as scale-up.
To scale vertically (or scale up) means to add resources to a single node in a system,
typically involving the addition of processors or memory to a single computer. Such
vertical scaling of existing systems also enables them to use virtualization technology
more effectively, as it provides more resources for the hosted set of operating system
and application modules to share.

An example of taking advantage of such shared resources is increasing the number of Apache daemon processes running. The other approach to scaling a system is by adding hardware resources horizontally, referred to as scale-out. To scale horizontally (or scale out) means to add more nodes to a system, such as adding a new computer to a distributed software application.

An example might be scaling out from one web-server system to a system with three web servers. As computer prices drop and performance demands continue to increase,
low cost commodity systems can be used for building shared computational
infrastructures for deploying high-performance applications such as Web search and
other web-based services. Hundreds of small computers may be configured in a cluster
to obtain aggregate computing power which often exceeds that of single traditional
RISC processor based supercomputers. This model has been further fueled by the
availability of high performance interconnects. The scale-out model also creates an
increased demand for shared data storage with very high I/O performance especially
where processing of large amounts of data is required.

In general, the scale-out paradigm has served as the fundamental design paradigm for
the large-scale data-centers of today. The additional complexity introduced by the scale-
out design is the overall complexity of maintaining and administering a large number of
compute and storage nodes.

Note that the scalability of a system is closely related to the underlying algorithm or computation. In particular, given an algorithm, if there is a fraction α that is inherently sequential, then the remainder 1 - α is parallelizable and hence can benefit from multiple processors. The maximum scaling or speedup of such a system using N CPUs is bounded as specified by Amdahl's law [1]:

Speedup = 1 / (α + (1 - α)/N)
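A short calculation makes the bound concrete (the 5% serial fraction and node counts are assumed figures, not from the paper):

# Amdahl's law: speedup on N processors given a serial fraction alpha.
def speedup(alpha: float, n: int) -> float:
    return 1.0 / (alpha + (1.0 - alpha) / n)

# Even a 5% serial fraction caps the benefit of adding nodes.
for n in (2, 8, 64, 1024):
    print(n, "nodes ->", round(speedup(0.05, n), 1), "x speedup")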



http://searchcloudprovider.techtarget.com/answer/How-do-cloud-elasticity-and-cloud-scalability-differ

Cloud elasticity supports short-term, tactical needs, while cloud scalability supports long-term, strategic
needs.

http://wiki.answers.com/Q/What_is_cloud_computing
Scalability
Scalability refers to the ability to service a theoretical number of users. The better an application's
scalability, the more users it can handle simultaneously.
http://us.hudson.com/it/migrating-to-cloud-computing-scalability
CLOUD COMPUTING PROVIDES SCALABILITY : Scalability means your IT organization can
quickly and easily grow or shrink depending on the business need at hand.
http://www.finalternatives.com/node/14728

This is when all of your data is stored on a server outside your house, not on your computer (e.g., Google Docs). For businesses, the cloud can avoid costly servers and their maintenance. The cloud comes in different flavors, and a business can embrace a particular cloud platform as per its requirements. SaaS, or software as a service, is the best cloud platform as far as small businesses are concerned.

http://www.hostsearch.com/articles/scalability-in-cloud-computing.asp
Cloud computing offers organizations, both big and small, the opportunity to scale their
computing resources whenever they deem it necessary. This is done by either increasing or
decreasing the required resources, meaning you're not paying for resources which you are not
utilizing.

http://blog.evolveip.net/index.php/2012/05/24/cloud-elasticity-and-cloud-scalability-are-not-the-same-thing-2/
Elasticity is aimed at companies building consumer- or business-facing software applications
that they plan to sell on a subscription basis.
Think: Evernote, Netflix, Dropbox, and Salesforce.com.
Elasticity basically means that your platform can handle sudden, unanticipated, and extraordinary loads. This could be the result of a Super Bowl ad or some other widespread promotional technique that results in a massive but brief influx of users and load on the system.
Think of elasticity as, essentially, unlimited head room. When you're a software developer building SaaS that you plan to offer to the entire planet (such as Facebook, which hopes to have the whole world as users), you need unlimited head room for those unpredictable moments.
Contrast that to scalability. Scalability is a planned level of capacity, with appropriate overhead, that you anticipate your company's systems will require over time, in addition to the ability to scale in a quick and easy manner when (and if) you need more (or fewer) resources.
For example, if you're a business leader and you have 500 users who will be using a particular set of software applications that you want to put in the cloud, you know that you will need to have a specific level of capacity if all 500 users are logged on at the same time.
You will also want the scalability benefit of quickly adding 100 or 200 users, because you know
that the necessary resources are easily available to you. You might want to double or triple the
number of users over a period of time. Or, you might want to add a nationwide group of business
partners using these applications. Adding more users is quick and easily scalable in the cloud,
but it certainly does not require elasticity.
Scalability also works the other way. Let's say you have a business downturn and need capacity for 50 fewer users than you previously had, and you don't want to have to continue paying for all 500. You don't need to provision or pay for more capacity than you need (such as unlimited head room) when you know that you will only need to support a specific maximum number of users at one time.
The smaller your business, the more this applies. The typical enterprise forecasts, monitors, and
adjusts its capacity planning on an annual or quarterly basis. If the business is rapidly growing or
has a crucial initiative, it might re-evaluate its required capacity monthly.
Scalability is much more specific and gradual than elasticity, and it is very much controlled by you and your cloud services provider in conjunction with your IT department. By no means does the typical enterprise need elasticity for its production environments.
In reality, cloud elasticity only applies to e-commerce; mobile and web development; and SaaS, as well as any other software development companies. But for an organization like yours, one that wants to put some or all of your business infrastructure in the cloud (i.e. a law firm, call center, mortgage banker, car dealer), scalability is the key metric for capacity planning, maximizing operational performance, and pinching pennies. Elasticity has nothing to do with it.
So how exactly has this myth (that elasticity is interchangeable with scalability, and therefore crucial to your apps) managed to catch on?
Because major public cloud vendors such as Rackspace, Amazon, and Google, have been
grooming the market to expect it. And their efforts to do so have been so successful that even
leading analyst organizations like Gartner mention elasticity in tandem with scalability, further
muddying the waters for the typical business organization.
Elasticity is also a term that was coined to promote and enable metered use, which is so prevalent
in public cloud, development, and test environments. Coincidentally, Gartner also cites metered use in its Five Attributes of Cloud Computing. Any reasonably talented business analyst will quickly figure out that a metered-use model will easily cost more in the overwhelming majority
of typical production business environments.
But frankly, this is a public-versus-private-cloud argument, and we, as an industry, need to start
connecting elasticity to the public cloud and scalability to the private cloud. Now that you
understand the difference between the two, you can see why elasticity would be important for the
public cloud, but scalability is the crucial metric for a private cloud.
Let's also not forget the not-so-small fact that in order for something to be elastic, ALL parts of the equation need to be infinitely elastic. That includes firewalls, VPN concentrators, switches, QoS policies, private bandwidth, and any other devices that enable the so-called elastic applications.
We all know beyond any reasonable doubt that in private, secure environments, this is simply not practical, if possible at all. Yes, they can be scalable, but they are simply not elastic. This is an exact case of only being as strong as your weakest link. Simply put: there is a reasonably major tradeoff between private and scalable versus public and elastic. I won't go into this topic in detail here, but we will visit it in a later post.
The bottom line is that when it comes to elasticity and scalability, business owners and IT directors need to remember that it's scalability that's important for success with the private cloud. Don't be confused by the hype on elasticity: it's real, but it's also irrelevant to the small- and mid-sized business, unless you are building a public-facing application that you fully expect to need to handle the entire planet logging onto it simultaneously.
http://www.ccltng.com/my-cloud-is-scalable-but-is-it-elastic
Scalability and Elasticity
In Cloud Computing, scalability is important in order to cater for ubiquitous and on-demand
services.
Scalability is concerned with how quickly Cloud Computing components such as servers,
network access, or databases can be provisioned to meet the demands of the Cloud service and
the user community of that service.
Closely related to scalability is the topic of elasticity which relates to the performance of the
Cloud Computing component after it has been brought online to provide for increasing or
reducing performance needs.
Scalability is about growing to meet the service demands, while elasticity is about being able to
adapt to actual user and service needs as they occur.
A measure of the scalability of a cloud-based application or service is how easy it is to allocate extra resources and balance the service demands across all available resources to provide a quality, available service using the best available load-balancing methods.
In this context, the topics of load balancing and performance management of different Cloud Computing components are important. In addition, questions such as the approaches to be used for auto-scaling and the ability of Cloud Computing service providers to provide auto-scaling features are a topic for another day and another post.


Cloud computing
From Wikipedia, the free encyclopedia

Cloud computing, or the cloud, is a colloquial expression used to describe a variety of different
types of computing concepts that involve a large number of computers connected through a real-
time communication network such as the Internet.[1] Cloud computing is a term without a
commonly accepted unequivocal scientific or technical definition. In science, cloud computing is
a synonym for distributed computing over a network and means the ability to run a program on
many connected computers at the same time. The phrase is also, more commonly, used to refer to network-based services which appear to be provided by real server hardware but which are in fact served up by virtual hardware simulated by software running on one or more real machines.
Such virtual servers do not physically exist and can therefore be moved around and scaled up (or
down) on the fly without affecting the end user - arguably, rather like a cloud.
The popularity of the term can be attributed to its use in marketing to sell hosted services in the sense of application service provisioning that runs client-server software at a remote location.
Advantages
Cloud computing relies on sharing of resources to achieve coherence and economies of scale
similar to a utility (like the electricity grid) over a network.[2] At the foundation of cloud
computing is the broader concept of converged infrastructure and shared services.
The cloud also focuses on maximizing the effectiveness of the shared resources. Cloud resources
are usually not only shared by multiple users but are also dynamically re-allocated per demand.
This can work for allocating resources to users. For example, a cloud computing facility that serves European users during European business hours with a specific application (e.g. email) may reallocate the same resources to serve North American users during North America's business hours with another application (e.g. a web server). This approach should maximize the use of computing power, thus reducing environmental damage as well, since less power, air conditioning, rack space, etc. is required for a variety of functions.
The term "moving to cloud" also refers to an organization moving away from a traditional
CAPEX model (buy the dedicated hardware and depreciate it over a period of time) to the OPEX
model (use a shared cloud infrastructure and pay as you use it).
Proponents claim that cloud computing allows companies to avoid upfront infrastructure costs,
and focus on projects that differentiate their businesses instead of infrastructure.[3] Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and enables IT to more rapidly adjust resources to meet fluctuating and unpredictable business demand.[3][4][5]

Hosted services
In marketing, cloud computing is mostly used to sell hosted services in the sense of application service provisioning that runs client-server software at a remote location. Such services are given
popular acronyms like 'SaaS' (Software as a Service), 'PaaS' (Platform as a Service), 'IaaS'
(Infrastructure as a Service), 'HaaS' (Hardware as a Service) and finally 'EaaS' (Everything as a
Service). End users access cloud-based applications through a web browser, thin client or mobile
app while the business software and user's data are stored on servers at a remote location.
History
The 1950s
The underlying concept of cloud computing dates back to the 1950s, when large-scale mainframe
computers became available in academia and corporations, accessible via thin clients/terminal
computers, often referred to as "dumb terminals", because they were used for communications
but had no internal processing capacities. To make more efficient use of costly mainframes, a
practice evolved that allowed multiple users to share both the physical access to the computer
from multiple terminals as well as to share the CPU time. This eliminated periods of inactivity
on the mainframe and allowed for a greater return on the investment. The practice of sharing
CPU time on a mainframe became known in the industry as time-sharing.[6]

The 1960s1990s
John McCarthy opined in the 1960s that "computation may someday be organized as a public
utility."
[7]
Almost all the modern-day characteristics of cloud computing (elastic provision,
provided as a utility, online, illusion of infinite supply), the comparison to the electricity industry
and the use of public, private, government, and community forms, were thoroughly explored in
Douglas Parkhill's 1966 book, The Challenge of the Computer Utility. Other scholars have
shown that cloud computing's roots go all the way back to the 1950s when scientist Herb Grosch
(the author of Grosch's law) postulated that the entire world would operate on dumb terminals
powered by about 15 large data centers.[8] Due to the expense of these powerful computers, many
corporations and other entities could avail themselves of computing capability through time
sharing and several organizations, such as GE's GEISCO, IBM subsidiary The Service Bureau
Corporation (SBC, founded in 1957), Tymshare (founded in 1966), National CSS (founded in
1967 and bought by Dun & Bradstreet in 1979), Dial Data (bought by Tymshare in 1968), and
Bolt, Beranek and Newman (BBN) marketed time sharing as a commercial venture.
The 1990s
In the 1990s, telecommunications companies, who previously offered primarily dedicated point-
to-point data circuits, began offering virtual private network (VPN) services with comparable
quality of service, but at a lower cost. By switching traffic as they saw fit to balance server use,
they could use overall network bandwidth more effectively. They began to use the cloud symbol
to denote the demarcation point between what the provider was responsible for and what users
were responsible for. Cloud computing extends this boundary to cover servers as well as the
network infrastructure.[9]

As computers became more prevalent, scientists and technologists explored ways to make large-
scale computing power available to more users through time sharing, experimenting with
algorithms to provide the optimal use of the infrastructure, platform and applications with
prioritized access to the CPU and efficiency for the end users.[10]

Since 2000
After the dot-com bubble, Amazon played a key role in the development of cloud computing
by modernizing their data centers, which, like most computer networks, were using as little as
10% of their capacity at any one time, just to leave room for occasional spikes. Having found
that the new cloud architecture resulted in significant internal efficiency improvements whereby
small, fast-moving "two-pizza teams" (teams small enough to feed with two pizzas) could add
new features faster and more easily, Amazon initiated a new product development effort to
provide cloud computing to external customers, and launched Amazon Web Services (AWS) on
a utility computing basis in 2006.[11][12]

In early 2008, Eucalyptus became the first open-source, AWS API-compatible platform for
deploying private clouds. In early 2008, OpenNebula, enhanced in the RESERVOIR European
Commission-funded project, became the first open-source software for deploying private and
hybrid clouds, and for the federation of clouds.[13] In the same year, efforts were focused on
providing quality of service guarantees (as required by real-time interactive applications) to
cloud-based infrastructures, in the framework of the IRMOS European Commission-funded
project, resulting in a real-time cloud environment.[14] By mid-2008, Gartner saw an opportunity
for cloud computing "to shape the relationship among consumers of IT services, those who use
IT services and those who sell them"[15] and observed that "organizations are switching from
company-owned hardware and software assets to per-use service-based models" so that the
"projected shift to computing ... will result in dramatic growth in IT products in some areas and
significant reductions in other areas."[16]

On March 1, 2011, IBM announced the IBM SmartCloud framework to support Smarter
Planet.[17] Among the various components of the Smarter Computing foundation, cloud
computing is a critical piece.
Growth and popularity
The development of the Internet from being document centric via semantic data towards more
and more services was described as "Dynamic Web".[18] This contribution focused in particular on the need for better meta-data able to describe not only implementation details but also
conceptual details of model-based applications.
The present availability of high-capacity networks, low-cost computers and storage devices as
well as the widespread adoption of hardware virtualization, service-oriented architecture,
autonomic, and utility computing have led to a growth in cloud computing.[19][20][21]

Financials
Cloud vendors are experiencing growth rates of 90% per annum.[22]

Origin of the term
The origin of the term cloud computing is unclear. The expression cloud is commonly used in
science to describe a large agglomeration of objects that visually appear from a distance as a
cloud and describes any set of things whose details are not inspected further in a given context.
Meteorology: a weather cloud is an agglomeration.
Mathematics: a large number of points in a coordinate system in mathematics is seen as a point
cloud;
Astronomy: stars that appear crowded together in the sky are known as nebula (Latin for mist or
cloud), e.g. the Milky Way;
Physics: The indeterminate position of electrons around an atomic nucleus appears like a cloud to a distant observer.
In analogy to the above usage, the word cloud was used as a metaphor for the Internet and a
standardized cloud-like shape was used to denote a network on telephony schematics and later to
depict the Internet in computer network diagrams. The cloud symbol was used to represent the
Internet as early as 1994,[23][24] in which servers were then shown connected to, but external to,
the cloud symbol.
References to cloud computing in its modern sense can be found as early as 1996, with the
earliest known mention to be found in a Compaq internal document.[25]

The term became popular after Amazon.com introduced the Elastic Compute Cloud in 2006.
Similar systems and concepts
Cloud Computing is the result of evolution and adoption of existing technologies and paradigms.
The goal of cloud computing is to allow users to take benefit from all of these technologies,
without the need for deep knowledge about or expertise with each one of them. The cloud aims
to cut costs, and help the users focus on their core business instead of being impeded by IT
obstacles.[26]

The main enabling technology for cloud computing is virtualization. Virtualization generalizes
the physical infrastructure, which is the most rigid component, and makes it available as a soft
component that is easy to use and manage. By doing so, virtualization provides the agility
required to speed up IT operations, and reduces cost by increasing infrastructure utilization. On
the other hand, autonomic computing automates the process through which the user can
provision resources on-demand. By minimizing user involvement, automation speeds up the
process and reduces the possibility of human errors.[26]

Users face difficult business problems every day. Cloud computing adopts concepts from
Service-oriented Architecture (SOA) that can help the user break these problems into services
that can be integrated to provide a solution. Cloud computing provides all of its resources as
services, and makes use of the well-established standards and best practices gained in the domain
of SOA to allow global and easy access to cloud services in a standardized way.
Cloud computing also leverages concepts from utility computing in order to provide metrics for
the services used. Such metrics are at the core of the public cloud pay-per-use models. In
addition, measured services are an essential part of the feedback loop in autonomic computing,
allowing services to scale on-demand and to perform automatic failure recovery.
Cloud computing is a kind of grid computing; it has evolved by addressing the QoS (quality of
service) and reliability problems. Cloud computing provides the tools and technologies to build
data/compute intensive parallel applications with much more affordable prices compared to
traditional parallel computing techniques.[26]

Cloud computing shares characteristics with:
Client-server model: Client-server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requestors (clients).[27]

Grid computing: "A form of distributed and parallel computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks."
Mainframe computer: Powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as census, industry and consumer statistics, police and secret intelligence services, enterprise resource planning, and financial transaction processing.[28]

Utility computing: The "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity."[29][30]

Peer-to-peer: A distributed architecture without the need for central coordination. Participants are both suppliers and consumers of resources (in contrast to the traditional client-server model).
Cloud gaming: Also known as on-demand gaming, this is a way of delivering games to computers. Gaming data is stored in the provider's server, so that gaming is independent of the client computers used to play the game.
Characteristics
Cloud computing exhibits the following key characteristics:
Agility improves with users' ability to re-provision technological infrastructure resources.
Application programming interface (API) accessibility to software that enables machines to
interact with cloud software in the same way that a traditional user interface (e.g., a computer
desktop) facilitates interaction between humans and computers. Cloud computing systems
typically use Representational State Transfer (REST)-based APIs.
Cost: cloud providers claim that computing costs are reduced. A public-cloud delivery model converts capital expenditure to operational expenditure.[31] This purportedly lowers barriers to entry, as
infrastructure is typically provided by a third-party and does not need to be purchased for one-
time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained,
with usage-based options, and fewer IT skills are required for implementation (in-house).[32] The e-FISCAL project's state-of-the-art repository[33] contains several articles looking into cost
aspects in more detail, most of them concluding that costs savings depend on the type of
activities supported and the type of infrastructure available in-house.
Device and location independence[34] enable users to access systems using a web browser
regardless of their location or what device they use (e.g., PC, mobile phone). As infrastructure is
off-site (typically provided by a third-party) and accessed via the Internet, users can connect
from anywhere.[32]

Virtualization technology allows sharing of servers and storage devices and increased utilization.
Applications can be easily migrated from one physical server to another.
Multitenancy enables sharing of resources and costs across a large pool of users thus allowing
for:
o centralization of infrastructure in locations with lower costs (such as real estate,
electricity, etc.)
o peak-load capacity increases (users need not engineer for highest possible load-levels)
o utilisation and efficiency improvements for systems that are often only 10-20% utilised.[11][35]

Reliability improves with the use of multiple redundant sites, which makes well-designed cloud
computing suitable for business continuity and disaster recovery.[36]

Scalability and elasticity via dynamic ("on-demand") provisioning of resources on a fine-grained,
self-service basis in near real-time,[37][38] without users having to engineer for peak loads.[39][40][41]

Performance is monitored, and consistent and loosely coupled architectures are constructed
using web services as the system interface.[32][42]

Security can improve due to centralization of data, increased security-focused resources, etc.,
but concerns can persist about loss of control over certain sensitive data, and the lack of
security for stored kernels.[43] Security is often as good as or better than other traditional
systems, in part because providers are able to devote resources to solving security issues that
many customers cannot afford to tackle.[44] However, the complexity of security is greatly
increased when data is distributed over a wider area or over a greater number of devices, as
well as in multi-tenant systems shared by unrelated users. In addition, user access to security
audit logs may be difficult or impossible. Private cloud installations are in part motivated by
users' desire to retain control over the infrastructure and avoid losing control of information
security.
Maintenance of cloud computing applications is easier, because they do not need to be installed
on each user's computer and can be accessed from different places.
The National Institute of Standards and Technology's definition of cloud computing identifies
"five essential characteristics":
On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server
time and network storage, as needed automatically without requiring human interaction with each
service provider.
Broad network access. Capabilities are available over the network and accessed through standard
mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile
phones, tablets, laptops, and workstations).
Resource pooling. The provider's computing resources are pooled to serve multiple consumers
using a multi-tenant model, with different physical and virtual resources dynamically assigned
and reassigned according to consumer demand. ...
Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases
automatically, to scale rapidly outward and inward commensurate with demand. To the
consumer, the capabilities available for provisioning often appear unlimited and can be
appropriated in any quantity at any time.
Measured service. Cloud systems automatically control and optimize resource use by leveraging a
metering capability at some level of abstraction appropriate to the type of service (e.g., storage,
processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and
reported, providing transparency for both the provider and consumer of the utilized service.
National Institute of Standards and Technology[2]

On-demand self-service
See also: Self-service provisioning for cloud computing services and Service catalogs for cloud computing
services
On-demand self-service allows users to obtain, configure and deploy cloud services themselves
using cloud service catalogues, without requiring the assistance of IT.[45][46] This feature is listed
by the National Institute of Standards and Technology (NIST) as a characteristic of cloud
computing.[2]

The self-service requirement of cloud computing prompts infrastructure vendors to create cloud
computing templates, which are obtained from cloud service catalogues. Manufacturers of such
templates or blueprints include BMC Software (BMC), with Service Blueprints as part of their cloud management platform,[47] Hewlett-Packard (HP), which names its templates HP Cloud Maps,[48] RightScale,[49] and Red Hat, which names its templates CloudForms.[50]

The templates contain predefined configurations used by consumers to set up cloud services. The
templates or blueprints provide the technical information necessary to build ready-to-use
clouds.[49] Each template includes specific configuration details for different cloud infrastructures, with information about servers for specific tasks such as hosting applications, databases, websites and so on.[49] The templates also include predefined Web services, the operating system, the database, security configurations and load balancing.[50]

Cloud computing consumers use cloud templates to move applications between clouds through a
self-service portal. The predefined blueprints define all that an application requires to run in
different environments. For example, a template could define how the same application could be
deployed in cloud platforms based on Amazon Web Services, VMware or Red Hat.[51] The user
organization benefits from cloud templates because the technical aspects of cloud configurations
reside in the templates, letting users deploy cloud services with the push of a button.[52][53]

Developers can use cloud templates to create a catalog of cloud services.[54]

Service models
Cloud computing providers offer their services according to several fundamental models:[2][55] infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS),
where IaaS is the most basic and each higher model abstracts from the details of the lower
models. Other key components in anything as a service (XaaS) are described in a comprehensive
taxonomy model published in 2009,[56] such as Strategy-as-a-Service, Collaboration-as-a-
Service, Business Process-as-a-Service, Database-as-a-Service, etc. In 2012, network as a service
(NaaS) and communication as a service (CaaS) were officially included by ITU (International
Telecommunication Union) as part of the basic cloud computing models, recognized service
categories of a telecommunication-centric cloud ecosystem.[57]


Infrastructure as a service (IaaS)
See also: Category:Cloud infrastructure
In the most basic cloud-service model, providers of IaaS offer computers - physical or (more
often) virtual machines - and other resources. (A hypervisor, such as Xen or KVM, runs the
virtual machines as guests. Pools of hypervisors within the cloud operational support-system can
support large numbers of virtual machines and the ability to scale services up and down
according to customers' varying requirements.) IaaS clouds often offer additional resources such
as a virtual-machine disk image library, raw (block) and file-based storage, firewalls, load
balancers, IP addresses, virtual local area networks (VLANs), and software bundles.[58] IaaS-
cloud providers supply these resources on-demand from their large pools installed in data
centers. For wide-area connectivity, customers can use either the Internet or carrier clouds
(dedicated virtual private networks).
To deploy their applications, cloud users install operating-system images and their application
software on the cloud infrastructure. In this model, the cloud user patches and maintains the
operating systems and the application software. Cloud providers typically bill IaaS services on a
utility computing basis: cost reflects the amount of resources allocated and consumed.
Cloud communications and cloud telephony, rather than replacing local computing
infrastructure, replace local telecommunications infrastructure with Voice over IP and other off-
site Internet services.
Platform as a service (PaaS)
Main article: Platform as a service
See also: Category:Cloud platforms
In the PaaS model, cloud providers deliver a computing platform, typically including operating
system, programming language execution environment, database, and web server. Application
developers can develop and run their software solutions on a cloud platform without the cost and
complexity of buying and managing the underlying hardware and software layers. With some
PaaS offers, the underlying computer and storage resources scale automatically to match
application demand so that the cloud user does not have to allocate resources manually. The
latter has also been proposed in an architecture aiming to facilitate real-time applications in cloud environments.[59]

Software as a service (SaaS)
Main article: Software as a service
In the business model using software as a service (SaaS), users are provided access to application
software and databases. Cloud providers manage the infrastructure and platforms that run the
applications. SaaS is sometimes referred to as "on-demand software" and is usually priced on a
pay-per-use basis. SaaS providers generally price applications using a subscription fee.
In the SaaS model, cloud providers install and operate application software in the cloud and
cloud users access the software from cloud clients. Cloud users do not manage the cloud
infrastructure and platform where the application runs. This eliminates the need to install and run
the application on the cloud user's own computers, which simplifies maintenance and support.
Cloud applications are different from other applications in their scalability, which can be achieved by cloning tasks onto multiple virtual machines at run-time to meet changing work demand.[60]
Load balancers distribute the work over the set of virtual machines. This process is
transparent to the cloud user, who sees only a single access point. To accommodate a large
number of cloud users, cloud applications can be multitenant, that is, any machine serves more
than one cloud user organization. It is common to refer to special types of cloud based
application software with a similar naming convention: desktop as a service, business process as
a service, test environment as a service, communication as a service.
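The multitenancy mentioned above can be illustrated with a toy data store in which a single application instance serves several customer organizations, with every read and write scoped by a tenant identifier. The class and tenant names are invented for illustration.

```python
# Toy illustration of multitenancy: one application instance serves several
# customer organizations, keyed by a tenant identifier.
from collections import defaultdict

class MultiTenantStore:
    def __init__(self):
        self._rows = defaultdict(list)   # tenant_id -> that tenant's records

    def add(self, tenant_id: str, record: dict) -> None:
        self._rows[tenant_id].append(record)

    def query(self, tenant_id: str) -> list:
        # Every read is scoped to one tenant, so organizations sharing the
        # same machine never see each other's data.
        return list(self._rows[tenant_id])

store = MultiTenantStore()
store.add("acme", {"invoice": 1001})
store.add("globex", {"invoice": 2001})
print(store.query("acme"))    # only Acme's records
```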
The pricing model for SaaS applications is typically a monthly or yearly flat fee per user,[61] so
the price is scalable and adjustable if users are added or removed at any point.[62]

Proponents claim SaaS allows a business the potential to reduce IT operational costs by
outsourcing hardware and software maintenance and support to the cloud provider. This enables
the business to reallocate IT operations costs away from hardware/software spending and
personnel expenses, towards meeting other goals. In addition, with applications hosted centrally,
updates can be released without the need for users to install new software. One drawback of
SaaS is that the users' data are stored on the cloud provider's server. As a result, there could be
unauthorized access to the data.
Network as a service (NaaS)
Main article: Network as a service
NaaS is a category of cloud services in which the capability provided to the cloud service user is
to use network/transport connectivity services and/or inter-cloud network connectivity
services.[63]

NaaS involves the optimization of resource allocations by considering network and computing
resources as a unified whole.
[64]

Traditional NaaS services include flexible and extended VPN, and bandwidth on demand.
[63]

Materialization of the NaaS concept also includes the provision of a virtual network service by
the owners of the network infrastructure to a third party (VNP/VNO).[65][66]



Cloud management
Legacy management infrastructures, which are based on the concept of dedicated system
relationships and architecture constructs, are not well suited to cloud environments where
instances are continually launched and decommissioned.
[67]
Instead, the dynamic nature of cloud
computing requires monitoring and management tools that are adaptable, extensible and
customizable.
[68]

Cloud management challenges
Cloud computing presents a number of management challenges. Companies using public clouds
do not have ownership of the equipment hosting the cloud environment, and because the
environment is not contained within their own networks, public cloud customers don't have full
visibility or control.[68] Users of public cloud services must also integrate with an architecture
defined by the cloud provider, using its specific parameters for working with cloud components.
Integration includes tying into the cloud APIs for configuring IP addresses, subnets, firewalls
and data service functions for storage. Because control of these functions is based on the cloud
provider's infrastructure and services, public cloud users must integrate with the cloud
infrastructure management.[69]

Capacity management is a challenge for both public and private cloud environments because end
users have the ability to deploy applications using self-service portals. Applications of all sizes
may appear in the environment, consume an unpredictable amount of resources, then disappear
at any time.
[70]

Chargeback, or pricing resource use on a granular basis, is a challenge for both public and
private cloud environments.[71] Chargeback is a challenge for public cloud service providers
because they must price their services competitively while still creating profit.[70] Users of public
cloud services may find chargeback challenging because it is difficult for IT groups to assess
actual resource costs on a granular basis, due to overlapping resources within an organization
that may be paid for by an individual business unit, such as electrical power.[71] For private
cloud operators, chargeback is fairly straightforward, but the challenge lies in estimating resource
allocations as closely as possible to actual resource usage to achieve the greatest operational
efficiency. Exceeding budgets can be a risk.[70]
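A simplified sketch of granular chargeback follows: usage records are rolled up into a cost per business unit against a rate card. The rates, metrics and records are invented for illustration only.

```python
# Hedged sketch of granular chargeback: roll usage records up to a cost per
# business unit. The rate card and records below are invented.
RATES = {"cpu_hours": 0.05, "gb_storage_month": 0.02, "gb_egress": 0.09}

usage = [
    {"unit": "marketing", "metric": "cpu_hours", "amount": 1200},
    {"unit": "marketing", "metric": "gb_egress", "amount": 300},
    {"unit": "finance",   "metric": "gb_storage_month", "amount": 5000},
]

def chargeback(records):
    bills = {}
    for r in records:
        cost = r["amount"] * RATES[r["metric"]]
        bills[r["unit"]] = bills.get(r["unit"], 0.0) + cost
    return bills

print(chargeback(usage))   # {'marketing': 87.0, 'finance': 100.0}
```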

Hybrid clouds, which combine public and private cloud services, sometimes with traditional
infrastructure elements, present their own set of management challenges. These include security
concerns if sensitive data lands on public cloud servers, budget concerns around overuse of
storage or bandwidth, and proliferation of mismanaged images.
[72]
Managing the information flow in a
hybrid cloud environment is also a significant challenge. On-premises clouds must share
information with applications hosted off-premises by public cloud providers, and this
information may change constantly.
[73]
Hybrid cloud environments also typically include a
complex mix of policies, permissions and limits that must be managed consistently across both
public and private clouds.
[73]






Cloud clients
See also: Category:Cloud clients
Users access cloud computing using networked client devices, such as desktop computers,
laptops, tablets and smartphones. Some of these devices - cloud clients - rely on cloud computing
for all or a majority of their applications so as to be essentially useless without it. Examples are
thin clients and the browser-based Chromebook. Many cloud applications do not require specific
software on the client and instead use a web browser to interact with the cloud application. With
Ajax and HTML5 these Web user interfaces can achieve a similar, or even better, look and feel
to native applications. Some cloud applications, however, support specific client software
dedicated to these applications (e.g., virtual desktop clients and most email clients). Some legacy
applications (line of business applications that until now have been prevalent in thin client
computing) are delivered via a screen-sharing technology.
Deployment models


Cloud computing types
Private cloud
Private cloud is cloud infrastructure operated solely for a single organization, whether managed
internally or by a third-party and hosted internally or externally.
[2]
Undertaking a private cloud
project requires a significant level and degree of engagement to virtualize the business
environment, and requires the organization to reevaluate decisions about existing resources.
When done right, it can improve business, but every step in the project raises security issues that
must be addressed to prevent serious vulnerabilities.
[74]

Private clouds have attracted criticism because users "still have to buy, build, and manage them"
and thus do not benefit from less hands-on management,
[75]
essentially "[lacking] the economic model that
makes cloud computing such an intriguing concept".
[76][77]


Comparison between Public and Private Clouds
Initial cost: typically zero (public) vs. typically high (private)
Running cost: unpredictable (public) vs. unpredictable (private)
Customization: impossible (public) vs. possible (private)
Privacy: no for public clouds (the host has access to the data) vs. yes for private clouds
Single sign-on: impossible (public) vs. possible (private)
Scaling up: easy while within defined limits (public) vs. laborious but without limits (private)
Public cloud
A cloud is called a 'Public cloud' when the services are rendered over a network that is open for
public use. Technically there may be little or no difference between public and private cloud
architecture; however, security considerations may be substantially different for services
(applications, storage, and other resources) that are made available by a service provider for a
public audience and when communication is effected over a non-trusted network. Generally,
public cloud service providers like Amazon AWS, Microsoft and Google own and operate the
infrastructure and offer access only via the Internet (direct connectivity is not offered).[32]


Community cloud
Community cloud shares infrastructure between several organizations from a specific community
with common concerns (security, compliance, jurisdiction, etc.), whether managed internally or
by a third-party and hosted internally or externally. The costs are spread over fewer users than a
public cloud (but more than a private cloud), so only some of the cost savings potential of cloud
computing are realized.
[2]

Hybrid cloud
Hybrid cloud is a composition of two or more clouds (private, community or public) that remain
unique entities but are bound together, offering the benefits of multiple deployment models.
[2]

Such composition expands deployment options for cloud services, allowing IT organizations to
use public cloud computing resources to meet temporary needs.
[78]
This capability enables hybrid
clouds to employ cloud bursting for scaling across clouds.
[2]

Cloud bursting is an application deployment model in which an application runs in a private
cloud or data center and "bursts" to a public cloud when the demand for computing capacity
increases. A primary advantage of cloud bursting and a hybrid cloud model is that an
organization only pays for extra compute resources when they are needed.
[79]

Cloud bursting enables data centers to create an in-house IT infrastructure that supports average
workloads, and use cloud resources from public or private clouds, during spikes in processing
demands.
[80]
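The bursting decision itself can be reduced to a simple overflow rule, as in the following sketch: work is placed on the private cloud up to an assumed capacity, and only the excess is dispatched, and paid for, on a public cloud. The capacity figure and dispatch functions are hypothetical.

```python
# Simplified sketch of a cloud-bursting decision (hypothetical capacity and
# dispatch functions): overflow beyond private capacity goes to a public cloud.
PRIVATE_CAPACITY = 50          # assumed number of jobs the private cloud can run

def run_on_private(jobs):      # stand-in for dispatch to the private cloud
    print(f"private cloud: {len(jobs)} jobs")

def run_on_public(jobs):       # stand-in for dispatch to a public provider
    print(f"bursting to public cloud: {len(jobs)} jobs (billed on use)")

def dispatch(jobs):
    run_on_private(jobs[:PRIVATE_CAPACITY])
    overflow = jobs[PRIVATE_CAPACITY:]
    if overflow:
        run_on_public(overflow)

dispatch([f"job-{i}" for i in range(80)])   # 50 stay in-house, 30 burst
```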

By utilizing "hybrid cloud" architecture, companies and individuals are able to obtain degrees of
fault tolerance combined with locally immediate usability without dependency on internet
connectivity. Hybrid cloud architecture requires both on-premises resources and off-site (remote)
server-based cloud infrastructure.
Hybrid clouds have been said to lack the flexibility, security and certainty of in-house
applications;[81] on the other hand, hybrid cloud provides the flexibility of in-house applications
with the fault tolerance and scalability of cloud-based services.
Distributed cloud
Cloud computing can also be provided by a distributed set of machines that are running at
different locations, while still connected to a single network or hub service. Examples of this
include distributed computing platforms such as BOINC and Folding@Home.
Cloud management strategies
Public clouds are managed by public cloud service providers, whose management responsibilities
include the public cloud environment's servers, storage, networking and data center
operations.[82] Users of public cloud services can generally select from three basic categories:
User self-provisioning: Customers purchase cloud services directly from the provider, typically
through a web form or console interface. The customer pays on a per-transaction basis.
Advance provisioning: Customers contract in advance a predetermined amount of resources,
which are prepared in advance of service. The customer pays a flat fee or a monthly fee.
Dynamic provisioning: The provider allocates resources when the customer needs them, then
decommissions them when they are no longer needed. The customer is charged on a pay-per-use
basis (a rough illustration follows this list).
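The dynamic provisioning model in the last item above can be illustrated with a toy lease tracker: a resource is allocated on demand, decommissioned when no longer needed, and billed only for the time it was held. The hourly rate and class are invented for illustration.

```python
# Toy model of dynamic provisioning with pay-per-use billing (values invented):
# resources are allocated on demand, decommissioned when done, and charged
# only for the time they were held.
import time

HOURLY_RATE = 0.10   # assumed price per resource-hour

class DynamicProvisioner:
    def __init__(self):
        self._leases = {}

    def allocate(self, resource_id: str) -> None:
        self._leases[resource_id] = time.time()

    def decommission(self, resource_id: str) -> float:
        started = self._leases.pop(resource_id)
        hours = (time.time() - started) / 3600
        return hours * HOURLY_RATE       # customer pays only for what was used

p = DynamicProvisioner()
p.allocate("vm-1")
time.sleep(1)                            # pretend the workload ran here
print(f"charge: ${p.decommission('vm-1'):.6f}")
```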
Managing a private cloud requires software tools to help create a virtualized pool of compute
resources, provide a self-service portal for end users and handle security, resource allocation,
tracking and billing.
[83]
Management tools for private clouds tend to be service driven, as
opposed to resource driven, because cloud environments are typically highly virtualized and
organized in terms of portable workloads.
[84]

In hybrid cloud environments, compute, network and storage resources must be managed across
multiple domains, so a good management strategy should start by defining what needs to be
managed, and where and how to do it.
[72]
Policies to help govern these domains should include
configuration and installation of images, access control, and budgeting and reporting.
[72]

Aspects of cloud management systems
A cloud management system is a combination of software and technologies designed to manage
cloud environments.
[85]
The industry has responded to the management challenges of cloud
computing with cloud management systems. HP, Novell, Eucalyptus, OpenNebula and Citrix are
among the vendors that have management systems specifically for managing cloud
environments.
[83]

At a minimum, a cloud management solution should be able to manage a pool of heterogeneous
compute resources, provide access to end users, monitor security, manage resource allocation
and manage tracking.
[83]
For composite applications, cloud management solutions also
encompass frameworks for workflow mapping and management.
[86]

Enterprises with large-scale cloud implementations may require more robust cloud management
tools with specific characteristics, such as the ability to manage multiple platforms from a single
point of reference and intelligent analytics to automate processes like application lifecycle
management. High-end cloud management tools should also be able to handle system failures
automatically, with capabilities such as self-monitoring, an explicit notification mechanism,
failover and self-healing.[42][72]

Architecture


Cloud computing sample architecture
Cloud architecture,[87] the systems architecture of the software systems involved in the delivery
of cloud computing, typically involves multiple cloud components communicating with each
of cloud computing, typically involves multiple cloud components communicating with each
other over a loose coupling mechanism such as a messaging queue. Elastic provision implies
intelligence in the use of tight or loose coupling as applied to mechanisms such as these and
others.
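A minimal sketch of such loose coupling is shown below, using Python's standard-library queue as a stand-in for a cloud messaging service: the producer and the worker know nothing about each other beyond the queue that connects them.

```python
# Minimal sketch of loosely coupled components exchanging work through a
# message queue (standard-library stand-in for a real queuing service).
import queue
import threading

tasks = queue.Queue()

def front_end():
    for i in range(3):
        tasks.put({"job": i})            # producer only knows the queue
    tasks.put(None)                      # sentinel: no more work

def worker():
    while True:
        msg = tasks.get()                # consumer only knows the queue
        if msg is None:
            break
        print("processed", msg)

t = threading.Thread(target=worker)
t.start()
front_end()
t.join()
```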
The Intercloud
Main article: Intercloud
The Intercloud
[88]
is an interconnected global "cloud of clouds"
[89][90]
and an extension of the
Internet "network of networks" on which it is based.
[91][92][93]

Cloud engineering
Cloud engineering is the application of engineering disciplines to cloud computing. It brings a
systematic approach to the high-level concerns of commercialisation, standardisation, and
governance in conceiving, developing, operating and maintaining cloud computing systems. It is
a multidisciplinary method encompassing contributions from diverse areas such as systems,
software, web, performance, information, security, platform, risk, and quality engineering.
Issues
Threats and opportunities of the cloud
Critical voices, including GNU project initiator Richard Stallman and Oracle founder Larry
Ellison, warned that the whole concept is rife with privacy and ownership concerns and
constitutes merely a fad.
[94]

However, cloud computing continues to gain steam,[95] with 56% of the major European
technology decision-makers estimating that the cloud is a priority in 2013 and 2014, and that the
cloud budget may reach 30% of the overall IT budget.[96]

According to the TechInsights Report 2013: Cloud Succeeds, which is based on a survey, cloud
implementations generally meet or exceed expectations across major service models such as
Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service
(SaaS).[97]

Several deterrents to the widespread adoption of cloud computing remain. Among them are
reliability, availability of services and data, security, complexity, costs, regulations and legal
issues, performance, migration, reversion, the lack of standards, limited customization and issues
of privacy. The cloud offers many strong points: infrastructure flexibility, faster deployment of
applications and data, cost control, adaptation of cloud resources to real needs, improved
productivity, etc. The early 2010s cloud market is dominated by software and services in SaaS
mode and IaaS (infrastructure), especially the private cloud. PaaS and the public cloud are
further back.
Privacy
Privacy advocates have criticized the cloud model for giving hosting companies greater ease to
control, and thus to monitor at will, communication between host company and end user, and to
access user data (with or without permission). Instances such as the secret NSA program which,
working with AT&T and Verizon, recorded over 10 million telephone calls between American
citizens, cause uncertainty among privacy advocates over the greater powers it gives to
telecommunication companies to monitor user activity.[98][99]
A cloud service provider (CSP)
can complicate data privacy because of the extent of virtualization (virtual machines) and cloud
storage used to implement cloud service.
[100]
Depending on CSP operations, customer or tenant data may not remain on the same system, or in
the same data center, or even within the same provider's cloud;
this can lead to legal concerns over jurisdiction. While there have been efforts (such as US-EU
Safe Harbor) to "harmonise" the legal environment, providers such as Amazon still cater to
major markets (typically the United States and the European Union) by deploying local
infrastructure and allowing customers to select "availability zones."
[101]
Cloud computing poses
privacy concerns because the service provider can access the data that is on the cloud at any
time. It could accidentally or deliberately alter or even delete information.
[102]

Compliance
To comply with regulations including FISMA, HIPAA, and SOX in the United States, the Data
Protection Directive in the EU and the credit card industry's PCI DSS, users may have to adopt
community or hybrid deployment modes that are typically more expensive and may offer
restricted benefits. This is how Google is able to "manage and meet additional government
policy requirements beyond FISMA"
[103][104]
and Rackspace Cloud or QubeSpace are able to
claim PCI compliance.
[105]

Many providers also obtain a SAS 70 Type II audit, but this has been criticised on the grounds
that the hand-picked set of goals and standards determined by the auditor and the auditee are
often not disclosed and can vary widely.
[106]
Providers typically make this information available
on request, under non-disclosure agreement.
[107][108]

Customers in the EU contracting with cloud providers outside the EU/EEA have to adhere to the
EU regulations on export of personal data.
[109]

U.S. Federal Agencies have been directed by the Office of Management and Budget to use a
process called FedRAMP (Federal Risk and Authorization Management Program) to assess and
authorize cloud products and services. Federal CIO Steven VanRoekel issued a memorandum to
federal agency Chief Information Officers on December 8, 2011 defining how federal agencies
should use FedRAMP. FedRAMP consists of a subset of NIST Special Publication 800-53
security controls specifically selected to provide protection in cloud environments. A subset has
been defined for the FIPS 199 low categorization and the FIPS 199 moderate categorization. The
FedRAMP program has also established a Joint Accreditation Board (JAB) consisting of Chief
Information Officers from DoD, DHS and GSA. The JAB is responsible for establishing
accreditation standards for 3rd party organizations who perform the assessments of cloud
solutions. The JAB also reviews authorization packages, and may grant provisional authorization
(to operate). The federal agency consuming the service still retains final responsibility and
authority to operate.
[110]

A multitude of laws and regulations have forced specific compliance requirements onto many
companies that collect, generate or store data. These policies may dictate a wide array of data
storage policies, such as how long information must be retained, the process used for deleting
data, and even certain recovery plans. Below are some examples of compliance laws or
regulations.
In the United States, the Health Insurance Portability and Accountability Act (HIPAA) requires a
contingency plan that includes data backups, data recovery, and data access during emergencies.
The privacy laws of Switzerland demand that private data, including emails, be physically stored
in Switzerland.
In the United Kingdom, the Civil Contingencies Act of 2004 sets forth guidance for a Business
contingency plan that includes policies for data storage.
In a virtualized cloud computing environment, customers may never know exactly where their
data is stored. In fact, data may be stored across multiple data centers in an effort to improve
reliability, increase performance, and provide redundancies. This geographic dispersion may
make it more difficult to ascertain legal jurisdiction if disputes arise.
[111]

Legal
As with other changes in the landscape of computing, certain legal issues arise with cloud
computing, including trademark infringement, security concerns and sharing of proprietary data
resources.
The Electronic Frontier Foundation has criticized the United States government during the
Megaupload seizure process for considering that people lose property rights by storing data on a
cloud computing service.
[112]

One important but not often mentioned problem with cloud computing is the problem of who is
in "possession" of the data. If a cloud company is the possessor of the data, the possessor has
certain legal rights. If the cloud company is the "custodian" of the data, then a different set of
rights would apply. The next problem in the legalities of cloud computing is the problem of legal
ownership of the data. Many Terms of Service agreements are silent on the question of
ownership.
[113]

These legal issues are not confined to the time period in which the cloud based application is
actively being used. There must also be consideration for what happens when the provider-
customer relationship ends. In most cases, this event will be addressed before an application is
deployed to the cloud. However, in the case of provider insolvencies or bankruptcy the state of
the data may become blurred.
[111]

Vendor lock-in
Because cloud computing is still relatively new, standards are still being developed.
[114]
Many
cloud platforms and services are proprietary, meaning that they are built on the specific
standards, tools and protocols developed by a particular vendor for its particular cloud
offering.
[114]
This can make migrating off a proprietary cloud platform prohibitively complicated
and expensive.
[114]

Three types of vendor lock-in can occur with cloud computing:
[115]

Platform lock-in: cloud services tend to be built on one of several possible virtualization
platforms, for example VMware or Xen. Migrating from a cloud provider using one platform to a
cloud provider using a different platform could be very complicated.
Data lock-in: since the cloud is still new, standards of ownership, i.e. who actually owns the data
once it lives on a cloud platform, are not yet developed, which could make it complicated if
cloud computing users ever decide to move data off of a cloud vendor's platform.
Tools lock-in: if tools built to manage a cloud environment are not compatible with different
kinds of both virtual and physical infrastructure, those tools will only be able to manage data or
apps that live in the vendor's particular cloud environment.
Heterogeneous cloud computing is described as a type of cloud environment that prevents
vendor lock-in, and aligns with enterprise data centers that are operating hybrid cloud
models.
[116]
The absence of vendor lock-in lets cloud administrators select their choice of hypervisors for
specific tasks, or deploy virtualized infrastructures to other enterprises without the need to
consider the flavor of hypervisor in the other enterprise.[117]

A heterogeneous cloud is considered one that includes on-premise private clouds, public clouds
and software-as-a-service clouds. Heterogeneous clouds can work with environments that are not
virtualized, such as traditional data centers.
[118]
Heterogeneous clouds also allow for the use of
piece parts, such as hypervisors, servers, and storage, from multiple vendors.
[119]

Cloud piece parts, such as cloud storage systems, offer APIs but they are often incompatible with
each other.
[120]
The result is complicated migration between backends, and makes it difficult to
integrate data spread across various locations.
[120]
This has been described as a problem of
vendor lock-in.
[120]
The solution to this is for clouds to adopt common standards.
[120]

Heterogeneous cloud computing differs from homogeneous clouds, which have been described
as those using consistent building blocks supplied by a single vendor.
[121]
Intel General Manager
of high-density computing, Jason Waxman, is quoted as saying that a homogenous system of
15,000 servers would cost $6 million more in capital expenditure and use 1 megawatt of
power.
[121]

Open source
See also: Category:Free software for cloud computing
Open-source software has provided the foundation for many cloud computing implementations,
prominent examples being the Hadoop framework
[122]
and VMware's Cloud Foundry.
[123]
In
November 2007, the Free Software Foundation released the Affero General Public License, a
version of GPLv3 intended to close a perceived legal loophole associated with free software
designed to run over a network.
[124]

Open standards
See also: Category:Cloud standards
Most cloud providers expose APIs that are typically well-documented (often under a Creative
Commons license
[125]
) but also unique to their implementation and thus not interoperable. Some
vendors have adopted others' APIs and there are a number of open standards under development,
with a view to delivering interoperability and portability.
[126]
As of November 2012, the Open
Standard with broadest industry support is probably OpenStack, founded in 2010 by NASA and
Rackspace, and now governed by the OpenStack Foundation.
[127]
OpenStack supporters include
AMD, Intel, Canonical, SUSE Linux, Red Hat, Cisco, Dell, HP, IBM, Yahoo and now
VMware.
[128]

Security
Main article: Cloud computing security
As cloud computing is achieving increased popularity, concerns are being voiced about the
security issues introduced through adoption of this new model.
[1][129]
The effectiveness and
efficiency of traditional protection mechanisms are being reconsidered as the characteristics of
this innovative deployment model can differ widely from those of traditional architectures.
[130]

An alternative perspective on the topic of cloud security is that this is but another, although quite
broad, case of "applied security" and that similar security principles that apply in shared multi-
user mainframe security models apply with cloud security.
[131]

The relative security of cloud computing services is a contentious issue that may be delaying its
adoption.
[132]
Physical control of the Private Cloud equipment is more secure than having the
equipment off site and under someone else's control. Physical control and the ability to visually
inspect data links and access ports is required in order to ensure data links are not compromised.
Issues barring the adoption of cloud computing are due in large part to the private and public
sectors' unease surrounding the external management of security-based services. It is the very
nature of cloud computing-based services, private or public, that promotes external management
of provided services. This provides great incentive to cloud computing service providers to
prioritize building and maintaining strong management of secure services.
[133]
Security issues
have been categorised into sensitive data access, data segregation, privacy, bug exploitation,
recovery, accountability, malicious insiders, management console security, account control, and
multi-tenancy issues. Solutions to various cloud security issues vary, from cryptography,
particularly public key infrastructure (PKI), to use of multiple cloud providers, standardisation of
APIs, and improving virtual machine support and legal support.
[130][134][135]

Cloud computing offers many benefits, but is vulnerable to threats. As cloud computing usage
increases, it is likely that more criminals will find new ways to exploit system vulnerabilities.
Many underlying challenges and risks in cloud computing increase the threat of data compromise.
To mitigate the threat, cloud computing stakeholders should invest heavily in risk assessment to
ensure that the system encrypts data for protection, establishes a trusted foundation to secure the
platform and infrastructure, and builds higher assurance into auditing to strengthen compliance.
Security concerns must be addressed to maintain trust in cloud computing technology.[1]

Sustainability
Although cloud computing is often assumed to be a form of green computing, no published study
substantiates this assumption.
[136]

The primary environmental problem associated with the cloud is energy use. Phil Radford of
Greenpeace said, "we are concerned that this new explosion in electricity use could lock us into
old, polluting energy sources instead of the clean energy available today."[137]
Greenpeace ranks
the energy usage of the top ten big brands in cloud computing, and successfully urged several
companies to switch to clean energy. On Thursday, December 15, 2011, Greenpeace and
Facebook announced together that Facebook would shift to use clean and renewable energy to
power its own operations.
[138][139]
Soon thereafter, Apple agreed to make all of its data centers
coal free by the end of 2013 and doubled the amount of solar energy powering its Maiden, NC
data center.
[140]
Following suit, Salesforce agreed to shift to 100% clean energy by 2020.
[141]

In areas where the climate favors natural cooling and renewable electricity is readily available,
the environmental effects of running cloud servers will be more moderate. (The same holds true
for "traditional" data centers.) Thus countries with favorable conditions, such as Finland,[142]
Sweden and Switzerland,[143] are trying to attract cloud computing data centers. Energy
efficiency in cloud computing can result from energy-aware scheduling and server
consolidation.[144] However, in the case of distributed clouds spread over data centers with
different sources of energy, including renewable energy, directing load toward the cleaner sites
could result in a significant carbon footprint reduction.[145]

Abuse
As with privately purchased hardware, customers can purchase the services of cloud computing
for nefarious purposes. This includes password cracking and launching attacks using the
purchased services.
[146]
In 2009, a banking trojan illegally used the popular Amazon service as a
command and control channel that issued software updates and malicious instructions to PCs that
were infected by the malware.
[147]

IT governance
Main article: Corporate governance of information technology
The introduction of cloud computing requires an appropriate IT governance model to ensure a
secured computing environment and to comply with all relevant organizational information
technology policies.
[148][149]
As such, organizations need a set of capabilities that are essential
when effectively implementing and managing cloud services, including demand management,
relationship management, data security management, application lifecycle management, risk and
compliance management.
[150]
A danger lies in the explosion of companies joining the growth in cloud computing by becoming
providers. However, many of the infrastructural and logistical concerns regarding the operation
of cloud computing businesses are still unknown. This over-saturation may have ramifications
for the industry as a whole.[151]

Consumer end storage
The increased use of cloud computing could lead to a reduction in demand for high storage
capacity consumer end devices, due to cheaper low storage devices that stream all content via the
cloud becoming more popular.
In a Wired article, Jake Gardner explains that while unregulated usage is beneficial for IT and
tech moguls like Amazon, the anonymous nature of the cost of cloud consumption makes it
difficult for businesses to evaluate and incorporate it into their business plans.[151] The
popularity of the cloud and cloud computing in general is increasing so quickly among all sorts
of companies that in May 2013, through its company Amazon Web Services, Amazon started a
certification program for cloud computing professionals.
Ambiguity of terminology
Outside of the information technology and software industry, the term "cloud" can be found to
reference a wide range of services, some of which fall under the category of cloud computing,
while others do not. The cloud is often used to refer to a product or service that is discovered,
accessed and paid for over the Internet, but is not necessarily a computing resource. Examples of
services that are sometimes referred to as "the cloud" include, but are not limited to,
crowdsourcing, cloud printing, crowdfunding and cloud manufacturing.[152][153]

Performance interference and noisy neighbors
Due to its multi-tenant nature and resource sharing, cloud computing must also deal with the
"noisy neighbor" effect. In essence, this effect means that in a shared infrastructure, the activity
of a virtual machine on a neighboring core of the same physical host may degrade the
performance of the other VMs on that host, due to issues such as cache contamination. Because
the neighboring VMs may be activated or deactivated at arbitrary times, the result is increased
variation in the actual performance of cloud resources. This effect appears to depend on the
nature of the applications running inside the VMs, as well as other factors such as scheduling
parameters, whose careful selection may lead to an optimized assignment that minimizes the
phenomenon. This has also led to difficulties in comparing cloud providers on cost and
performance using traditional benchmarks for service and application performance, as the time
period and location in which the benchmark is performed can result in widely varied results.[154]
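One practical consequence is that a cloud benchmark should be repeated and reported as a distribution rather than a single figure, since part of the run-to-run spread on a shared host can come from neighboring tenants. The following sketch simply times a fixed CPU-bound workload many times and reports the spread; the workload and sample count are arbitrary.

```python
# Sketch of why noisy neighbors complicate benchmarking: repeat the same
# small workload and report the spread, not just a single number.
import statistics
import time

def benchmark_once(n: int = 200_000) -> float:
    start = time.perf_counter()
    sum(i * i for i in range(n))         # fixed CPU-bound workload
    return time.perf_counter() - start

samples = [benchmark_once() for _ in range(20)]
print(f"mean {statistics.mean(samples):.4f}s  "
      f"stdev {statistics.stdev(samples):.4f}s  "
      f"min {min(samples):.4f}s  max {max(samples):.4f}s")
```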

Monopolies and privatization of cyberspace
Philosopher Slavoj Žižek points out that, although cloud computing enhances content
accessibility, this access is "increasingly grounded in the virtually monopolistic privatization of
the cloud which provides this access". According to him, this access, necessarily mediated
through a handful of companies, ensures a progressive privatization of global cyberspace. Žižek
criticises the argument put forward by supporters of cloud computing that this phenomenon is
part of the "natural evolution" of the Internet, sustaining that the quasi-monopolies "set prices at
will but also filter the software they provide to give its 'universality' a particular twist depending
on commercial and ideological interests."[155]

Research
Many universities, vendors, research institutes and government organizations are investing in
research around the topic of cloud computing:[156][157]

In October 2007, the Academic Cloud Computing Initiative (ACCI) was announced as a multi-
university project designed to enhance students' technical knowledge to address the challenges
of cloud computing.
[158]

In April 2009, UC Santa Barbara released the first open source platform-as-a-service, AppScale,
which is capable of running Google App Engine applications at scale on a multitude of
infrastructures.
In April 2009, the St Andrews Cloud Computing Co-laboratory was launched, focusing on
research in the important new area of cloud computing. Unique in the UK, StACC aims to
become an international centre of excellence for research and teaching in cloud computing and
provides advice and information to businesses interested in cloud-based services.
[159]

In October 2010, the TClouds (Trustworthy Clouds) project was started, funded by the European
Commission's 7th Framework Programme. The project's goal is to research and inspect the legal
foundations and architectural design needed to build a resilient and trustworthy cloud-of-clouds
infrastructure. The project also develops a prototype to demonstrate its results.[160]

In December 2010, the TrustCloud research project
[161][162]
was started by HP Labs Singapore to
address transparency and accountability of cloud computing via detective, data-centric
approaches
[163]
encapsulated in a five-layer TrustCloud Framework. The team identified the need
for monitoring data life cycles and transfers in the cloud,
[161]
leading to the tackling of key cloud
computing security issues such as cloud data leakages, cloud accountability and cross-national
data transfers in transnational clouds.
In June 2011, two Indian universities, the University of Petroleum and Energy Studies and the
University of Technology and Management, introduced cloud computing as a subject in India, in
collaboration with IBM.[164]

In July 2011, the High Performance Computing Cloud (HPCCLoud) project was kicked off,
aiming to explore the possibilities of enhancing performance of scientific applications running in
cloud environments through the development of the HPCCLoud Performance Analysis Toolkit,
funded by the CIM-Returning Experts Programme under the coordination of Prof. Dr. Shajulin
Benedict.
In June 2011, the Telecommunications Industry Association developed a Cloud Computing White
Paper, to analyze the integration challenges and opportunities between cloud services and
traditional U.S. telecommunications standards.
[165]

In December 2011, the VISION Cloud EU-funded project proposed an architecture along with an
implementation of a cloud environment for data-intensive services aiming to provide a
virtualized Cloud Storage infrastructure.
[166]

In December 2012, a study released by Microsoft and the International Data Corporation (IDC)
showed that millions of cloud-skilled workers would be needed. Millions of cloud-related IT jobs
are sitting open, and millions more will open in the coming couple of years, due to a shortage of
cloud-certified IT workers.
In February 2013, the BonFIRE project launched a multi-site cloud experimentation and testing
facility. The facility provides transparent access to cloud resources, with the control and
observability necessary to engineer future cloud technologies, in a way that is not restricted, for
example, by current business models.
[167]

In April 2013, a report by IT research and advisory firm Gartner, Inc. said that app developers
will embrace cloud services, predicting that within three years 40% of mobile app development
projects will use cloud back-end services. Cloud mobile back-end services offer a new kind of
PaaS, used to enable the development of mobile apps.
See also
Cloud collaboration
Cloud computing comparison
Cloud telephony
List of cloud computing conferences
Mobile cloud computing
Web operating system
References
1. ^ Jump up to: a b c Mariana Carroll, Paula Kotzé, Alta van der Merwe (2012). "Securing Virtual
and Cloud Environments". In I. Ivanov et al. Cloud Computing and Services Science, Service
Science: Research and Innovations in the Service Economy. Springer Science+Business Media.
doi:10.1007/978-1-4614-2326-3.
2. ^ Jump up to: a b c d e f g h "The NIST Definition of Cloud Computing". National Institute of
Standards and Technology. Retrieved 24 July 2011.
3. ^ Jump up to: a b "What is Cloud Computing?". Amazon Web Services. 2013-03-19. Retrieved
2013-03-20.
4. Jump up ^ "Baburajan, Rajani, "The Rising Cloud Storage Market Opportunity Strengthens
Vendors," infoTECH, August 24, 2011". It.tmcnet.com. 2011-08-24. Retrieved 2011-12-02.
5. Jump up ^ Oestreich, Ken, (2010-11-15). "Converged Infrastructure". CTO Forum.
Thectoforum.com. Retrieved 2011-12-02.
6. Jump up ^ Strachey, Christopher (June 1959). "Time Sharing in Large Fast Computers".
Proceedings of the International Conference on Information Processing, UNESCO. paper B.2.19:
336-341.
7. Jump up ^ Simson Garfinkel (3 October 2011). "The Cloud Imperative". Technology Review
(MIT). Retrieved 31 May 2013.
8. Jump up ^ Ryan; Falvey; Merchant (October 2011). "Regulation of the Cloud in India". Journal of
Internet Law 15 (4)
9. Jump up ^ "July, 1993 meeting report from the IP over ATM working group of the IETF". CH:
Switch. Retrieved 2010-08-22.
10. Jump up ^ Corbató, Fernando J. "An Experimental Time-Sharing System". SJCC Proceedings. MIT.
Retrieved 3 July 2012.
11. ^ Jump up to: a b "Jeff Bezos' Risky Bet". Business Week
12. Jump up ^ "Amazon's early efforts at cloud computing partly accidental". IT Knowledge
Exchange. Tech Target. 2010-06-17
13. Jump up ^ B Rochwerger, J Caceres, RS Montero, D Breitgand, E Elmroth, A Galis, E Levy, IM
Llorente, K Nagin, Y Wolfsthal, E Elmroth, J Caceres, M Ben-Yehuda, W Emmerich, F Galan. "The
RESERVOIR Model and Architecture for Open Federated Cloud Computing", IBM Journal of
Research and Development, Vol. 53, No. 4. (2009)
14. Jump up ^ D Kyriazis, A Menychtas, G Kousiouris, K Oberle, T Voith, M Boniface, E Oliveros, T
Cucinotta, S Berger, "A Real-time Service Oriented Infrastructure", International Conference on
Real-Time and Embedded Systems (RTES 2010), Singapore, November 2010
15. Jump up ^ Keep an eye on cloud computing, Amy Schurr, Network World, 2008-07-08, citing the
Gartner report, "Cloud Computing Confusion Leads to Opportunity". Retrieved 2009-09-11.
16. Jump up ^ Gartner Says Worldwide IT Spending On Pace to Surpass Trillion in 2008, Gartner,
2008-08-18. Retrieved 2009-09-11.
17. Jump up ^ "Launch of IBM Smarter Computing". Retrieved 1 March 2011.
18. Jump up ^ Andreas Tolk. 2006. What Comes After the Semantic Web - PADS Implications for the
Dynamic Web. 20th Workshop on Principles of Advanced and Distributed Simulation (PADS '06).
IEEE Computer Society, Washington, DC, USA
19. Jump up ^ "Cloud Computing: Clash of the clouds". The Economist. 2009-10-15. Retrieved 2009-
11-03.
20. Jump up ^ "Gartner Says Cloud Computing Will Be As Influential As E-business". Gartner.
Retrieved 2010-08-22.
21. Jump up ^ Gruman, Galen (2008-04-07). "What cloud computing really means". InfoWorld.
Retrieved 2009-06-02.
22. Jump up ^ "The economy is flat so why are financials Cloud vendors growing at more than 90
percent per annum?". FSN. March 5, 2013.
23. Jump up ^ Figure 8, "A network 70 is shown schematically as a cloud", US Patent 5,485,455,
column 17, line 22, filed Jan 28, 1994
24. Jump up ^ Figure 1, "the cloud indicated at 49 in Fig. 1.", US Patent 5,790,548, column 5 line 56-
57, filed April 18, 1996
25. Jump up ^ Antonio Regalado (31 October 2011). "Who Coined 'Cloud Computing'?". Technology
Review (MIT). Retrieved 31 July 2013.
26. ^ Jump up to: a b c HAMDAQA, Mohammad (2012). Cloud Computing Uncovered: A Research
Landscape. Elsevier Press. pp. 41-85. ISBN 0-12-396535-7.
27. Jump up ^ "Distributed Application Architecture". Sun Microsystem. Retrieved 2009-06-16.
28. Jump up ^ "Sun CTO: Cloud computing is like the mainframe".
Itknowledgeexchange.techtarget.com. 2009-03-11. Retrieved 2010-08-22.
29. Jump up ^ "It's probable that you've misunderstood 'Cloud Computing' until now". TechPluto.
Retrieved 2010-09-14.
30. Jump up ^ Danielson, Krissi (2008-03-26). "Distinguishing Cloud Computing from Utility
Computing". Ebizq.net. Retrieved 2010-08-22.
31. Jump up ^ "Recession Is Good For Cloud Computing Microsoft Agrees". CloudAve. Retrieved
2010-08-22.
32. ^ Jump up to: a b c d "Defining "Cloud Services" and "Cloud Computing"". IDC. 2008-09-23.
Retrieved 2010-08-22.
33. Jump up ^ "e-FISCAL project state of the art repository".
34. Jump up ^ Farber, Dan (2008-06-25). "The new geek chic: Data centers". CNET News. Retrieved
2010-08-22.
35. Jump up ^ He, Sijin; L. Guo, Y. Guo, M. Ghanem. Improving Resource Utilisation in the Cloud
Environment Using Multivariate Probabilistic Models. 2012 IEEE 5th International Conference on
Cloud Computing (CLOUD). pp. 574-581. doi:10.1109/CLOUD.2012.66. ISBN 978-1-4673-2892-0.
36. Jump up ^ King, Rachael (2008-08-04). "Cloud Computing: Small Companies Take Flight".
Businessweek. Retrieved 2010-08-22.
37. Jump up ^ Mao, Ming; M. Humphrey (2012). "A Performance Study on the VM Startup Time in
the Cloud". Proceedings of 2012 IEEE 5th International Conference on Cloud Computing
(Cloud2012): 423. doi:10.1109/CLOUD.2012.103. ISBN 978-1-4673-2892-0.
38. Jump up ^ He, Sijin; L. Guo, Y. Guo (2011). "Real Time Elastic Cloud Management for Limited
Resources". Proceedings of 2011 IEEE 4th International Conference on Cloud Computing
(Cloud2011): 622-629. doi:10.1109/CLOUD.2011.47. ISBN 978-0-7695-4460-1.
39. Jump up ^ "Defining and Measuring Cloud Elasticity". KIT Software Quality Departement.
Retrieved 13 August 2011.
40. Jump up ^ "Economies of Cloud Scale Infrastructure". Cloud Slam 2011. Retrieved 13 May 2011.
41. Jump up ^ He, Sijin; L. Guo, Y. Guo, C. Wu, M. Ghanem, R. Han. Elastic Application Container: A
Lightweight Approach for Cloud Resource Provisioning. 2012 IEEE 26th International Conference
on Advanced Information Networking and Applications (AINA). pp. 15-22.
doi:10.1109/AINA.2012.74. ISBN 978-1-4673-0714-7.
42. ^ Jump up to: a b A Self-adaptive hierarchical monitoring mechanism for Clouds, Elsevier.com
43. Jump up ^ "Encrypted Storage and Key Management for the cloud". Cryptoclarity.com. 2009-07-
30. Retrieved 2010-08-22.
44. Jump up ^ Mills, Elinor (2009-01-27). "Cloud computing security forecast: Clear skies". CNET
News. Retrieved 2010-08-22.
45. Jump up ^ David Perera (2012-07-12). "The real obstacle to federal cloud computing".
FierceGovernmentIT. Retrieved 2012-12-15.
46. Jump up ^ "Top 10 Reasons why Startups should Consider Cloud". Cloudstory.in. 2012-09-05.
Retrieved 2012-12-15.
47. Jump up ^ "BMC Service Catalog Enforces Workload Location". eweek.com. 2011-08-02.
Retrieved 2013-03-10.
48. Jump up ^ "HP's Turn-Key Private Cloud - Application Development Trends". Adtmag.com. 2010-
08-30. Retrieved 2012-12-15.
49. ^ Jump up to: a b c Babcock, Charles (2011-06-03). "RightScale Launches App Store For
Infrastructure - Cloud-computing". Informationweek.com. Retrieved 2012-12-15.
50. ^ Jump up to: a b "Red Hat launches hybrid cloud management software - Open Source".
Techworld. 2012-06-06. Retrieved 2012-12-15.
51. Jump up ^ Brown, Rodney (April 10, 2012). "Spinning up the instant cloud". CloudEcosystem.
52. Jump up ^ Riglian, Adam (December 1, 2011). "VIP Art Fair picks OpDemand over RightScale for
IaaS management". Search Cloud Applications. TechTarget. Retrieved January 25, 2013.
53. Jump up ^ Samson, Ted (April 10, 2012). "HP advances public cloud as part of ambitious hybrid
cloud strategy". InfoWorld. Retrieved 2012-12-14.
54. Jump up ^ "HP Cloud Maps can ease application automation". SiliconIndia. Retrieved 22 January
2013.
55. Jump up ^ Voorsluys, William; Broberg, James; Buyya, Rajkumar (February 2011). "Introduction
to Cloud Computing". In R. Buyya, J. Broberg, A.Goscinski. Cloud Computing: Principles and
Paradigms. New York, USA: Wiley Press. pp. 144. ISBN 978-0-470-88799-8.
56. Jump up ^ "Tony Shan, "Cloud Taxonomy and Ontology"". February 2009. Retrieved 2 February
2009.
57. Jump up ^ "ITU-T NEWSLOG - CLOUD COMPUTING AND STANDARDIZATION: TECHNICAL
REPORTS PUBLISHED". International Telecommunication Union (ITU). Retrieved 16 December
2012.
58. Jump up ^ Amies, Alex; Sluiman, Harm; Tong, Qiang Guo; Liu, Guo Ning (July 2012).
"Infrastructure as a Service Cloud Concepts". Developing and Hosting Applications on the Cloud.
IBM Press. ISBN 978-0-13-306684-5.
59. Jump up ^ Platform-as-a-Service Architecture for Real-Time Quality of Service Management in
Clouds [1]
60. Jump up ^ Hamdaqa, Mohammad. A Reference Model for Developing Cloud Applications.
61. Jump up ^ Chou, Timothy. Introduction to Cloud Computing: Business & Technology.
62. Jump up ^ "HVD: the cloud's silver lining". Intrinsic Technology. Retrieved 30 August 2012.
63. ^ Jump up to: a b "ITU Focus Group on Cloud Computing - Part 1". International
Telecommunication Union (ITU) TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU.
Retrieved 16 December 2012.
64. Jump up ^ "Cloud computing in Telecommunications". Ericsson. Retrieved 16 December 2012.
65. Jump up ^ "Network Virtualisation Opportunities and Challenges". Eurescom. Retrieved 16
December 2012.
66. Jump up ^ "The role of virtualisation in future network architectures". Change Project. Retrieved
16 December fghfg2012.
67. Jump up ^ Cole, Arthur. (2013-01-13) Cloud Management, Front and Center, ITBusinessEdge.
[2]
68. ^ Jump up to: a b Lee, Anne. (2012-01-24) Cloud Computing: How It Affects Enterprise and
Performance Monitoring, Sys-Con Media [3]
69. Jump up ^ Linthicum, David. (2011-04-27) How to integrate with the cloud, InfoWorld: Cloud
Computing, April 27, 2011. [4]
70. ^ Jump up to: a b c Semple, Bryan. (2011-07-14) Five Capacity Management Challenges for
Private Clouds, Cloud Computing Journal. [5]
71. ^ Jump up to: a b Golden, Barnard. (2010-11-05) Cloud Computing: Why You Can't Ignore
Chargeback, CIO.com. [6]
72. ^ Jump up to: a b c d Sullivan, Dan. (2011-02) Hybrid cloud management tools and strategies,
SearchCloudComputing.com [7]
73. ^ Jump up to: a b Rigsby, Josette. (2011-08-30) IBM Offers New Hybrid Cloud Solution Using Cast
Iron, Tivoli, CMS Wire. [8]
74. Jump up ^ "Is a Private Cloud Really More Secure?". Dell.com. Retrieved 07-11-12.
[dead link]

75. Jump up ^ Foley, John. "Private Clouds Take Shape". InformationWeek. Retrieved 2010-08-22.
76. Jump up ^ Haff, Gordon (2009-01-27). "Just don't call them private clouds". CNET News.
Retrieved 2010-08-22.
77. Jump up ^ "There's No Such Thing As A Private Cloud". InformationWeek. 2010-06-30. Retrieved
2010-08-22.
78. Jump up ^ Metzler, Jim; Taylor, Steve. (2010-08-23) "Cloud computing: Reality vs. fiction,"
Network World. [9]
79. Jump up ^ Rouse, Margaret. "Definition: Cloudbursting," May 2011.
SearchCloudComputing.com. [10]
80. Jump up ^ Vizard, Michael. "How Cloudbursting 'Rightsizes' the Data Center", (2012-06-21).
Slashdot. [11]
81. Jump up ^ Stevens, Alan (June 29, 2011). "When hybrid clouds are a mixed blessing". The
Register. Retrieved March 28, 2012.
82. Jump up ^ Gens, Frank. (2008-09-23) Defining Cloud Services and Cloud Computing, IDC
Exchange. [12]
83. ^ Jump up to:
a

b

c
Henderson, Tom and Allen, Brendan. (2010-12-20) Private clouds: Not for the
faint of heart, NetworkWorld. [13]
84. Jump up ^ Whitehead, Richard. (2010-04-19) A Guide to Managing Private Clouds, Industry
Perspectives. [14]
85. Jump up ^ Definition: Cloud management, ITBusinessEdge/Webopedia
86. Jump up ^ An innovative workflow mapping mechanism for Grids in the frame of Quality of
Service Elsevier.com
87. Jump up ^ "Building GrepTheWeb in the Cloud, Part 1: Cloud Architectures".
Developer.amazonwebservices.com. Retrieved 2010-08-22.
88. Jump up ^ Bernstein, David; Ludvigson, Erik; Sankar, Krishna; Diamond, Steve; Morrow,
Monique (2009-05-24). "Blueprint for the Intercloud - Protocols and Formats for Cloud
Computing Interoperability". IEEE Computer Society. pp. 328-336. doi:10.1109/ICIW.2009.55.
ISBN 978-1-4244-3851-8.
89. Jump up ^ "Kevin Kelly: A Cloudbook for the Cloud". Kk.org. Retrieved 2010-08-22.
90. Jump up ^ "Intercloud is a global cloud of clouds". Samj.net. 2009-06-22. Retrieved 2010-08-22.
91. Jump up ^ "Vint Cerf: Despite Its Age, The Internet is Still Filled with Problems".
Readwriteweb.com. Retrieved 2010-08-22.
92. Jump up ^ "SP360: Service Provider: From India to Intercloud". Blogs.cisco.com. Retrieved 2010-
08-22.
93. Jump up ^ Canada (2007-11-29). "Head in the clouds? Welcome to the future". Toronto:
Theglobeandmail.com. Retrieved 2010-08-22.
94. Jump up ^ Bobby Johnston. Cloud computing is a trap, warns GNU founder Richard Stallman.
The Guardian, 29 September 2008.
95. Jump up ^ http://www.morganstanley.com/views/perspectives/cloud_computing.pdf
96. Jump up ^ Challenges & Opportunities for IT partners when transforming or creating a business
in the Cloud. compuBase consulting. 2012. p. 77.
97. Jump up ^ Cloud Computing Grows Up: Benefits Exceed Expectations According to Report. Press
Release, May 21, 2013. [15]
98. Jump up ^ Cauley, Leslie (2006-05-11). "NSA has massive database of Americans' phone calls".
USATODAY.com. Retrieved 2010-08-22.
99. Jump up ^ "NSA taps in to user data of Facebook, Google and others, secret files reveal".
Guardian News and Media. 2013-06-07. Retrieved 2013-06-07.
100. Jump up ^ Winkler, Vic (2011). Securing the Cloud: Cloud Computer Security Techniques
and Tactics. Waltham, Massachusetts: Elsevier. p. 60. ISBN 978-1-59749-592-9.
101. Jump up ^ "Feature Guide: Amazon EC2 Availability Zones". Amazon Web Services.
Retrieved 2010-08-22.
102. Jump up ^ "Cloud Computing Privacy Concerns on Our Doorstep".
103. Jump up ^ "FISMA compliance for federal cloud computing on the horizon in 2010".
SearchCompliance.com. Retrieved 2010-08-22.
104. Jump up ^ "Google Apps and Government". Official Google Enterprise Blog. 2009-09-15.
Retrieved 2010-08-22.
105. Jump up ^ "Cloud Hosting is Secure for Take-off: Mosso Enables The Spreadsheet Store,
an Online Merchant, to become PCI Compliant". Rackspace. 2009-03-14. Retrieved 2010-08-22.
106. Jump up ^ "Amazon gets SAS 70 Type II audit stamp, but analysts not satisfied".
SearchCloudComputing.com. 2009-11-17. Retrieved 2010-08-22.
107. Jump up ^ "Assessing Cloud Computing Agreements and Controls". WTN News.
Retrieved 2010-08-22.
108. Jump up ^ "Cloud Certification From Compliance Mandate to Competitive
Differentiator". Cloudcor. Retrieved 2011-09-20.
109. Jump up ^ "How the New EU Rules on Data Export Affect Companies in and Outside the
EU | Dr. Thomas Helbing Kanzlei fr Datenschutz-, Online- und IT-Recht". Dr. Thomas Helbing.
Retrieved 2010-08-22.
110. Jump up ^ "FedRAMP". U.S. General Services Administration. 2012-06-13. Retrieved
2012-06-17.
111. ^ Jump up to: a b Chambers, Don (July 2010). "Windows Azure: Using Windows Azure's
Service Bus to Solve Data Security Issues". Rebus Technologies. Retrieved 2012-12-14.
112. Jump up ^ Cohn, Cindy; Samuels, Julie (31 October 2012). "Megaupload and the
Government's Attack on Cloud Computing". Electronic Frontier Foundation. Retrieved 2012-12-14.
113. Jump up ^ Maltais, Michelle (26 April 2012). "Who owns your stuff in the cloud?". Los
Angeles Times. Retrieved 2012-12-14.
114. ^ Jump up to: a b c McKendrick, Joe. (2011-11-20) "Cloud Computing's Vendor Lock-In
Problem: Why the Industry is Taking a Step Backward," Forbes.com [16]
115. Jump up ^ Hinkle, Mark. (2010-6-9) "Three cloud lock-in considerations", Zenoss Blog
[17]
116. Jump up ^ Staten, James (2012-07-23). "Gelsinger brings the 'H' word to VMware".
ZDNet. [18]
117. Jump up ^ Vada, Eirik T. (2012-06-11) "Creating Flexible Heterogeneous Cloud
Environments", page 5, Network and System Administration, Oslo University College [19]
118. Jump up ^ Geada, Dave. (June 2, 2011) "The case for the heterogeneous cloud," Cloud
Computing Journal [20]
119. Jump up ^ Burns, Paul (2012-01-02). "Cloud Computing in 2012: What's Already
Happening". Neovise.[21]
120. ^ Jump up to: a b c d Livenson, Ilja; Laure, Erwin. (2011) "Towards transparent integration
of heterogeneous cloud storage platforms", pages 27-34, KTH Royal Institute of Technology,
Stockholm, Sweden. [22]
121. ^ Jump up to: a b Gannes, Liz. GigaOm, "Structure 2010: Intel vs. the Homogeneous
Cloud," June 24, 2010. [23]
122. Jump up ^ Jon Brodkin (July 28, 2008). "Open source fuels growth of cloud computing,
software-as-a-service". Network World. Retrieved 2012-12-14.
123. Jump up ^ "VMware Launches Open Source PaaS Cloud Foundry". Simpler Media Group,
Inc. 2011-04-21. Retrieved 2012-12-14.
124. Jump up ^ "AGPL: Open Source Licensing in a Networked Age". Redmonk.com. 2009-04-
15. Retrieved 2010-08-22.
125. Jump up ^ GoGrid Moves API Specification to Creative Commons
[dead link]

126. Jump up ^ "Eucalyptus Completes Amazon Web Services Specs with Latest Release".
Ostatic.com. Retrieved 2010-08-22.
127. Jump up ^ "OpenStack Foundation launches". Infoworld.com. 2012-09-19. Retrieved
2012-17-11.
128. Jump up ^ "Did OpenStack Let VMware Into The Henhouse?". Informationweek.com.
2012-10-19. Retrieved 2012-17-11.
129. Jump up ^ M Carroll, P Kotzé, Alta van der Merwe (2011), Secure virtualization: benefits,
risks and constraints, 1st International Conference on Cloud Computing and Services Science,
Noordwijkerhout, The Netherlands, 7-9 May 2011
130. ^ Jump up to: a b Zissis, Dimitrios; Lekkas (2010). "Addressing cloud computing security
issues". Future Generation Computer Systems 28 (3): 583. doi:10.1016/j.future.2010.12.006.
131. Jump up ^ Winkler, Vic (2011). Securing the Cloud: Cloud Computer Security Techniques
and Tactics. Waltham, MA USA: Syngress. pp. 187, 189. ISBN 978-1-59749-592-9.
132. Jump up ^ "Are security issues delaying adoption of cloud computing?". Network World.
Retrieved 2010-08-22.
133. Jump up ^ "Security of virtualization, cloud computing divides IT and security pros".
Network World. 2010-02-22. Retrieved 2010-08-22.
134. Jump up ^ Armbrust, M; Fox, A., Griffith, R., Joseph, A., Katz, R., Konwinski, A., Lee, G.,
Patterson, D., Rabkin, A., Zaharia, M. (2010). "A view of cloud computing". Communications of
the ACM 53 (4): 50-58. doi:10.1145/1721654.1721672.
135. Jump up ^ Anthens, G (2010). "Security in the cloud". Communications of the ACM 53
(11): 16. doi:10.1145/1839676.1839683.
136. Jump up ^ James Urquhart (January 7, 2010). "Cloud computing's green paradox". CNET
News. Retrieved March 12, 2010. "... there is some significant evidence that the cloud is
encouraging more compute consumption"
137. Jump up ^ "Dirty Data Report Card". Greenpeace. Retrieved 2013-08-22.
138. Jump up ^ "Facebook and Greenpeace settle Clean Energy Feud". Techcrunch. Retrieved
2013-08-22.
139. Jump up ^ "Facebook Commits to Clean Energy Future". Greenpeace. Retrieved 2013-
08-22.
140. Jump up ^ "Apple is leaving Microsoft and Amazon in dust for its clean internet efforts
Greenpeace". Greenpeace. Retrieved 2013-08-22.
141. Jump up ^ "Salesforce Announces Commitment to a Cloud Powered by 100% Renewable
Energy". Greenpeace. Retrieved 2013-08-22.
142. Jump up ^ Finland First Choice for Siting Your Cloud Computing Data Center..
Retrieved 4 August 2010.
143. Jump up ^ Swiss Carbon-Neutral Servers Hit the Cloud.. Retrieved 4 August 2010.
144. Jump up ^ Berl, Andreas, et al., Energy-Efcient Cloud Computing, The Computer
Journal, 2010.
145. Jump up ^ Farrahi Moghaddam, Fereydoun, et al., Low Carbon Virtual Private Clouds,
IEEE Cloud 2011.
146. Jump up ^ Alpeyev, Pavel (2011-05-14). "Amazon.com Server Said to Have Been Used in
Sony Attack". Bloomberg. Retrieved 2011-08-20.
147. Jump up ^ Goodin, Dan (2011-05-14). "PlayStation Network hack launched from
Amazon EC2". The Register. Retrieved 2012-05-18.
148. Jump up ^ Hsu, Wen-Hsi L., "Conceptual Framework of Cloud Computing Governance
Model - An Education Perspective", IEEE Technology and Engineering Education (ITEE), Vol 7, No
2 (2012) [24]
149. Jump up ^ Stackpole, Beth, "Governance Meets Cloud: Top Misconceptions",
InformationWeek, 7 May 2012 [25]
150. Jump up ^ Joha, A and M. Janssen (2012) "Transformation to Cloud Services Sourcing:
Required IT Governance Capabilities", ICST Transactions on e-Business 12(7-9) [26]
151. ^ Jump up to:
a

b
Gardner, Jake (2013-03-28). "Beware: 7 Sins of Cloud Computing".
Wired.com. Retrieved 2013-06-20.
152. Jump up ^ S. Stonham and S. Nahalkova (2012) "What is the Cloud and how can it help
my business?" [27]
153. Jump up ^ S. Stonham and S. Nahalkova (2012), Whitepaper "Tomorrow Belongs to the
Agile (PDF)" [28]
154. Jump up ^ George Kousiouris, Tommaso Cucinotta, Theodora Varvarigou, "The Effects of
Scheduling, Workload Type and Consolidation Scenarios on Virtual Machine Performance and
their Prediction through Optimized Artificial Neural Networks"[29] , The Journal of Systems and
Software (2011),Volume 84, Issue 8, August 2011, pp. 1270-1291, Elsevier,
doi:10.1016/j.jss.2011.04.013.
155. Jump up ^ Slavoj iek (May 2, 2011). "Corporate Rule of Cyberspace". Inside Higher Ed.
Retrieved July 10, 2013.
156. Jump up ^ "Cloud Net Directory. Retrieved 2010-03-01". Cloudbook.net. Retrieved
2010-08-22.
157. Jump up ^ " National Science Foundation (NSF) News National Science Foundation
Awards Millions to Fourteen Universities for Cloud Computing Research US National Science
Foun". Nsf.gov. Retrieved 2011-08-20.
158. Jump up ^ Rich Miller (2008-05-02). "IBM, Google Team on an Enterprise Cloud".
DataCenterKnowledge.com. Retrieved 2010-08-22.
159. Jump up ^ "StACC - Collaborative Research in Cloud Computing". University of St
Andrews department of Computer Science. Retrieved 2012-06-17.
160. Jump up ^ "Trustworthy Clouds: Privacy and Resilience for Internet-scale Critical
Infrastructure". Retrieved 2012-06-17.
161. ^ Jump up to:
a

b
Ko, Ryan K. L.; Jagadpramana, Peter; Lee, Bu Sung (2011). "Flogger: A
File-centric Logger for Monitoring File Access and Transfers within Cloud Computing
Environments". Proceedings of the 10th IEEE International Conference on Trust, Security and
Privacy of Computing and Communications (TrustCom-11): 765.
doi:10.1109/TrustCom.2011.100. ISBN 978-1-4577-2135-9.
162. Jump up ^ Ko, Ryan K. L.; Jagadpramana, Peter; Mowbray, Miranda; Pearson, Siani;
Kirchberg, Markus; Liang, Qianhui; Lee, Bu Sung (2011). "TrustCloud: A Framework for
Accountability and Trust in Cloud Computing". Proceedings of the 2nd IEEE Cloud Forum for
Practitioners (IEEE ICFP 2011), Washington DC, USA, July 78, 2011.
163. Jump up ^ Ko, Ryan K. L. Ko; Kirchberg, Markus; Lee, Bu Sung (2011). "From System-
Centric Logging to Data-Centric Logging - Accountability, Trust and Security in Cloud
Computing". Proceedings of the 1st Defence, Science and Research Conference 2011 -
Symposium on Cyber Terrorism, IEEE Computer Society, 34 August 2011, Singapore.
164. Jump up ^ "UTM/UPES-IBM India Collaboration". 2011.
165. Jump up ^ "Publication Download". Tiaonline.org. Retrieved 2011-12-02.
166. Jump up ^ A Cloud Environment for Data-intensive Storage Services
167. Jump up ^ "Testbeds for cloud experimentation and testing". Retrieved 2013-04-09.
Scalability
Scalability refers to the ability to serve a given number of simultaneous users. Web-based (cloud) applications are often described as scaling to tens of thousands, hundreds of thousands, millions, or even more simultaneous users. This means that at full capacity (usually taken to be about 80% of theoretical capacity) the system can handle that many users without failing any individual user and without crashing as a whole from resource exhaustion. The better an application's scalability, the more users it can handle simultaneously. A rough illustration of this rule of thumb is sketched below.
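The following is a minimal sketch, with purely hypothetical numbers, of the "full capacity is usually taken as about 80%" rule of thumb described above: given identical servers, it estimates how many simultaneous users a pool can serve with headroom, and how many servers a target load would require.

# Hypothetical capacity-planning sketch; per-server figures are assumptions.
def usable_capacity(servers: int, users_per_server: int, headroom: float = 0.80) -> int:
    """Users the pool can serve while keeping ~20% headroom against exhaustion."""
    return int(servers * users_per_server * headroom)

def servers_needed(target_users: int, users_per_server: int, headroom: float = 0.80) -> int:
    """Smallest pool whose usable capacity covers the target load."""
    per_server = int(users_per_server * headroom)
    return -(-target_users // per_server)  # ceiling division

print(usable_capacity(servers=10, users_per_server=500))            # 4000 users
print(servers_needed(target_users=100_000, users_per_server=500))   # 250 servers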
VOL. 2, NO. 12, December 2011, ISSN 2079-8407
Journal of Emerging Trends in Computing and Information Sciences
http://www.cisjournal.org
Comparative Study of Scalability and Availability in Cloud and Utility Computing
1 Farrukh Shahzad Ahmed, 2 Ammad Aslam, 3 Shahbaz Ahmed, 4 M. Abdul Qadoos Bilal
1 Netsolace Information Technology PVT (LTD), Islamabad
2 White Wings PVT (LTD), Islamabad
3,4 Department of Computer Science & Software Engineering, International Islamic University Islamabad
{1 fshahzad11@gmail.com, 2 amaa08@student.bth.se, 3 shahbaz.ahmed@iiu.edu.pk}
The ever-increasing use and popularity of the Internet has turned it into a distributed computing platform. This shift has been pushing companies worldwide to outsource their computing resources, business processes, business applications, and data storage and maintenance, so that they can benefit from up-to-date IT technologies while focusing on their core business competencies [2] in order to survive and compete. This competition for survival is the key driving factor in the evolution of the Internet into a distributed computing platform. Diverse device support, the ability to scale from small networks with a few devices up to a global scale, and support for wireless technology are some of the features of distributed networks that appeal to companies, and support for new devices will only increase in the future [3]. Cloud and utility computing are envisioned as next-generation computing platforms [4][5], following the progression Grid -> Utility -> Cloud.

Grid computing is defined as "a system that uses open, general purpose protocols to federate distributed resources and to deliver better-than-best-effort qualities of service" [6].

Utility computing is defined as "a collection of technologies and business practices that enables computing to be delivered seamlessly and reliably across multiple computers" [7].

The idea of a cloud is a system that has loose boundaries and can interact and merge with other such systems. There is no precise and comprehensive definition of cloud computing yet available; the notion is that applications run somewhere on the cloud, and users are largely unconcerned with where.
http://www.f5.com/it-management/solutions/cloud-performance-scalability/overview/









Portability:

http://www.serverwatch.com/news/article.php/3879351/Cloud-Computing-Is-About-Portability-Interop.htm

Cloud Computing Is About Portability: Interop

LAS VEGAS -- The cloud leverages virtual infrastructure, but that doesn't mean there aren't real
reasons for enterprises to consider it. In a keynote here at the Interop conference, Kristof
Kloeckner, IBM's CTO for cloud computing, argued that the cloud is a viable mechanism for
application deployment in the enterprise.
Taking a page from tech pundit Nicholas Carr, Kloeckner defined cloud computing as the "big
switch" that will enable organizations to have more choices of how and where to deploy their
applications.
"We're moving from being the builders and owners of IT assets to an organization that sources
IT solutions," Kloeckner said. "That's the journey of cloud computing."
Kloeckner admitted that he has faced skepticism from clients about the cloud. Some have told
him that they were already doing hosting and virtualization and they didn't see a difference
between that and the cloud. Yet according to Kloeckner, there is a very real difference.
"What's new is that technology is coming together in a perfect storm to change the way IT
services can be consumed and delivered," Kloeckner said.
According to Kloeckner, cloud computing has a number of distinguishing characteristics over
previous models. Among them is the cloud's ability to provision on-demand services, ubiquitous
network access, location independence and rapid elasticity.
"Our clients are adopting cloud computing based on workload affinity," he said, referring to the
question of whether the workloads can move or if they're encumbered by business regulations or
other requirements.
But for Kloeckner, the bottom line is that IBM does have large clients moving to the cloud,
including the U.S. Air Force and Panasonic, that are improving their efficiency and IT capital
utilization.
The idea of workload affinity is closely tied to the issue of portability. In a keynote panel, Simon
Crosby, CTO of Citrix's datacenter and cloud division, explained that virtualization portability is
the key.
Crosby's company supports the Xen hypervisor, but the actual hypervisor shouldn't be a barrier to virtual application delivery, as there is now a standard packaging format for virtual machines called the Open Virtualization Format (OVF).
"OVF is a packaging format that lets you package multiple tiers of an application and make it
workload independent of a particular hypervisor," Crosby said. "Virtualization is a mandate for
everyone and no one is serviced by proprietary virtualization application packaging formats."
Crosby added that with application portability enabled by virtualization and cloud delivery
platforms, enterprises can put their workload where it's most cost effective. In his view, the job
of enterprise IT isn't about building infrastructure.
"IT's mission is to invest resources in the strategically critical components of their business," he
said.
Sean Michael Kerner is a senior editor at InternetNews.com, the news service of Internet.com,
the network for technology professionals.

http://www.webopedia.com/TERM/C/cloud_portability.html
cloud portability
In cloud computing terminology, the phrase "cloud portability" means the ability to move applications and their associated data from one cloud provider to another, or between public and private cloud environments. A toy illustration of moving data between providers is sketched below.
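The sketch below illustrates the data-portability half of that definition, assuming two commodity SDKs: it copies one object out of AWS S3 and into Google Cloud Storage. The bucket and object names are placeholders, and credentials for both providers are assumed to be configured in the environment; this is an illustration, not a migration tool.

# Hedged sketch of moving one object between two providers' object stores.
import boto3                      # AWS SDK for Python
from google.cloud import storage  # Google Cloud Storage client library

def copy_s3_object_to_gcs(s3_bucket: str, key: str, gcs_bucket: str) -> None:
    # 1. Pull the object's bytes out of the source cloud.
    s3 = boto3.client("s3")
    data = s3.get_object(Bucket=s3_bucket, Key=key)["Body"].read()

    # 2. Push the same bytes into the destination cloud under the same key.
    gcs = storage.Client()
    gcs.bucket(gcs_bucket).blob(key).upload_from_string(data)

copy_s3_object_to_gcs("example-src-bucket", "reports/q1.csv", "example-dst-bucket")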



Reliability
http://www.sdpsnet.org/sdps/index.php/sdps-2012-sessions/159-cloud-computing-security-and-reliability
Cloud Computing: Security and Reliability
Cloud computing is an emerging idea and technology with pros and cons, but cloud innovation will certainly leave its impact and footprint as it faces the new economic realities of the second decade of a new century. Rather than installing a series of commercial packages on each computer, including never-ending security patches, users would only have to load one application. That application would allow workers to log on to a Web-based service which hosts all the programs the user needs for their job. Remote servers owned by the service provider would run everything from e-mail to word processing to complex data analysis programs. It is called cloud computing, the fifth utility (after electric power, gas, water and telephony), and it could change the way individuals and companies operate. However, as is often apparent from news media describing outages as simple glitches (usually downplayed by the cloud hosting companies and their providers, or by the responsible managers who boast about their 99.99% reliability), the crucial problem with cloud computing is its occasional, though dramatic, lack of the desired reliability and security. Both of these key features need to be assessed properly and promptly in order to manage this new model of distributed computing, i.e. the cloud. This workshop will examine methods and software programs that address these challenging goals, i.e. the assessment and management hurdles from the cloud hosting (producer's risk) perspective in addition to the customer (consumer's risk) base, an avenue which has been examined before by the author. The purpose is to prioritize and cost-optimize the countermeasures needed to reach a desirable level of customer satisfaction as well as cloud hosting best practices. Quantitative methods of statistical inference on Quality of Service (QoS) or, conversely, Loss of Service (LoS), as commonly used customer-satisfaction metrics of system reliability and security performance, are reviewed. Subsequently, as an analytical alternative to simulation practices, a cloud Risk-O-Meter approach is studied to assess risk and manage it cost-optimally through an information-gathering, database-type algorithm. The primary goal of these methods is to optimize plans to improve the quality of a cloud operation and to decide what countermeasures to take. Among the simulation alternatives, discrete event simulation (DES) is reviewed as a way to estimate risk indices in a large cloud computing environment and to compare them with the intractable and lengthy theoretical Markov solutions.
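Advertised figures such as the "99.99% reliability" mentioned above are easier to judge once converted into the downtime they actually permit. The short worked example below (a sketch, not part of the workshop abstract) does that conversion for a few common availability levels.

# Convert an availability percentage into the downtime per year it allows.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def allowed_downtime_minutes(availability_percent: float) -> float:
    """Maximum downtime per year implied by an availability figure."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100.0)

for nines in (99.0, 99.9, 99.99, 99.999):
    print(f"{nines}% availability -> {allowed_downtime_minutes(nines):.1f} min/year of downtime")
# 99.99% works out to roughly 52.6 minutes of downtime per year.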


http://www.searchtechword.com/2011/06/how-safe-and-reliable-is-cloud-computing/


How Safe And Reliable Is Cloud Computing
How apt is the name "cloud computing" when users don't know what will happen to all of their data stored online. It is as cloudy as it can get, with a smoked mirror for the users. Will it always be available, or will the clouds vanish someday and users lose their data to the large clouds of the technology industry, namely Apple, Google, Amazon and the rest? Is your data really safe out there, and how reliable are these clouds for your business? Many questions are raised about choosing clouds for your business.
What is Cloud Computing?
Cloud computing is roughly based on the premise that instead of storing data locally, you store it online, in what is called the cloud. The clear benefit is that you can access your data from any computer, anywhere in the world. Also, for small and medium-sized companies it is a low-cost opportunity to store data in these clouds, cutting down on the investment in purchasing and maintaining data servers locally.

It is Cloud-y out there. Think before you leap!!

The catch, of course, is that all of this works only as long as you have access to the Internet. This sounds fine until you start reading between the lines and asking yourself the "if" and "how" questions. As always, cloud computing is largely being promoted by companies like Apple, Google and Amazon, the common denominator being that they are U.S.-based companies, where Internet availability is far more advanced than in much of the rest of the world. What if you are in a place where there is no Internet and you need to complete some deadline-bound work? Or you lose your Internet connection while working on an important task? Or, worst of all, the servers at Apple, Google and Amazon go down?
How Reliable is Cloud Computing?
Just recently, Amazon's servers embarrassingly went down for a day, bringing down websites like Foursquare and forcing Amazon to apologise to its corporate users. This raises further questions about reliability and performance when the real capacity of the data servers is tested to the limit by an increased number of users and increased usage.
Can your business afford a downtime of one whole day?
How Safe is your data on Cloud servers?
A more important issue is the fear about the safety and security of data stored online. Just how secure and good are these companies' IT security architectures and security policies? Recent hacking attacks on Sony's PlayStation Network, on Google's Gmail and on RSA's SecurID token network highlight the risk of storing information online. Hacking seems to be rearing its ugly head again recently. These attacks suggest that, with the pace of technological change, it is becoming increasingly challenging for companies to stop hacking attacks, while hackers are becoming increasingly smart and conniving in their attacks.
Currently in the cloud computing marketplace, Amazon appears to be the market leader with its large base of corporate users. Google has launched its own version of cloud computing, while Apple has just recently launched its iCloud service, which allows users to store their iTunes music and other data from their Macs, iPads and MacBook Pros online.
So what do we conclude here?
With limited knowledge among users about this cloud concept, the companies, for their part, should clearly highlight the benefits, limitations and risks involved in the services offered. It is difficult to foresee just how successful cloud computing will become in the years ahead, but it is clearly getting quite cloudy out there. Until then, users, and especially corporate users, should make themselves aware of the limitations and risks of using clouds to store data, and should have a backup disaster recovery plan. Otherwise they might see the clouds fall on their heads one day and find themselves running for cover!

http://www.thecloudcomputing.org/2012/

Google
Remember the days when Google was just a search engine? Today it is not only a search engine but a giant company with so many innovations that you have to ask: what is Google's main line of business, really? Some business experts say that Google puts its fingers in many pies at once. With regard to cloud computing, however, it is one of the best cloud computing vendors there is. Most cloud computing companies offer only one type of service, but Google offers two: Software as a Service (SaaS) and Platform as a Service (PaaS). Google makes bold moves, and many of them turn into flops, but in cloud computing it is one of the best and safest choices among cloud computing companies. Google offers collaboration tools and business email (maintaining the files on its own servers for consumers), and as a PaaS provider it hosts software development on its own platform.
Amazon
When Amazon started its business in 1995, it sold only books. From that humble beginning it has grown into one of the best cloud computing technology providers. It has come a long way and is still growing, largely undisputed in the world of cloud computing, and it is one of the best providers of the Infrastructure as a Service (IaaS) cloud computing model. Amazon offers two main services under this model: the Elastic Compute Cloud (EC2), with which a consumer can create his or her own servers in Amazon's cloud and load any program or data, and the Simple Storage Service (S3), with which a client who rents storage from Amazon can access the data anytime and anywhere.
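The short sketch below shows what those two services look like from code, using the boto3 SDK for Python. The AMI id, bucket name and region are placeholders, and valid AWS credentials are assumed to be configured already; it is illustrative only, not a recommended setup.

# Hedged boto3 sketch of the two services described above (placeholder names).
import boto3

# EC2: rent a virtual server of a chosen size in Amazon's cloud.
ec2 = boto3.client("ec2", region_name="us-east-1")
reservation = ec2.run_instances(
    ImageId="ami-00000000",   # placeholder machine image id
    InstanceType="t2.micro",  # small general-purpose instance type
    MinCount=1,
    MaxCount=1,
)
print("Launched:", reservation["Instances"][0]["InstanceId"])

# S3: store an object that can then be fetched from anywhere.
s3 = boto3.client("s3", region_name="us-east-1")
s3.put_object(Bucket="example-bucket", Key="backups/ledger.csv", Body=b"date,amount\n")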
Microsoft Azure Service Platform
Microsoft, the developer of one of the most widely used operating systems today, also has a stake in cloud computing. The Microsoft Azure Service Platform specializes in Platform as a Service: through its cloud operating system, Windows Azure, a company can develop programs and applications in the cloud, using .NET, SQL and Live services. As with Google, several other cloud computing companies compete with Microsoft in this service model, but none has won out yet.
What To Expect Next From Cloud Computing Companies
March 6th, 2012
Cloud computing emerged from virtualization, which allowed more cost-effective and efficient computing solutions and changed not only how computing is done but also how it works. Since computing needs are ever expanding, new developments from cloud computing companies are expected, with newly improved products and services. In recent years security has been one of the biggest concerns among cloud computing consumers, but through collaboration and development by different cloud computing companies, sandboxing is now the most widely accepted security technology for multi-tenant clouds. This safety measure separates each tenant's data behind a security boundary which can only be accessed over the Internet by the owner and authorized users. Security breaches still pose a threat, but with sandboxing technology the threats are minimized, giving tenants more acceptable ways of protecting their data and information.

Other developments worth mentioning are the pricing trends among cloud computing companies. These include lower fees for IaaS, where Amazon currently holds the top spot. Although most consumers migrated to and adopted cloud computing for its "access anywhere, from any device" innovations, some are still after the savings they will realize in the long run.
Most vendors competing in the cloud computing business are aware of what their competitors do best and where their weakest points lie. This pushes each business to expand and develop new technologies to keep competitors at bay, but only enough to hold on to the piece of the pie they already have. The concerns that consumers face are still the main priority of these cloud computing companies: security, service level agreements, consumer privacy, and compliance with local and international law.
Businesses and individuals who have already migrated to cloud computing still encounter issues, both real and perceived; these concerns are broken down into several categories, which helps in determining the root cause. Although most of these concerns are customer-created (due to a lack of information and knowledge), they are still the number one priority of cloud computing companies, and resolutions are being developed for the sake of customer satisfaction. Google, the leading provider of free cloud computing solutions for individuals and businesses, continuously develops a more user-friendly interface to improve its services. Mobility is the number one requirement today: people are on the go, and they need to access files, programs, data and applications wherever they go. This feature is still the anchor of the ship; it lets the job be done on time, without delays and issues. It was the number one requirement of businesses and individuals before they migrated to cloud computing, as it is the premise on which cloud computing was developed in the first place.
The biggest concern among cloud computing companies is the limited wireless bandwidth available to smartphones, tablets and all other devices that use wireless technology. As of today, wireless devices are outgrowing the available wireless bandwidth. Since cloud computing is about mobility, this issue may become the roadblock for cloud computing. But, as usual, when these kinds of issues arise a solution tends to follow; for now, let's see what the cloud computing companies are cooking and who cooks the best dish.
How To Pick The Best Cloud Computing Company
March 1st, 2012
There are hundreds if not thousands of cloud computing companies out there, and if you're planning to migrate from conventional computing to cloud computing, there are factors you need to consider. First things first: if you own a business and you already have IT experts in your company, it is best to bring these plans to their attention. Some applications and programs are too complex to run in the cloud; instead, you may need local servers installed in your office for those programs to run smoothly. There are other things you need to consider here: pricing, availability, support, service levels and other technical issues your business may encounter.

The first thing a business leader should consider is pricing: will you save more money if you enter an agreement with a pay-as-you-go vendor? How many employees does your business have? Would all of them require access to the same applications and software? Would it be better to have that software installed locally on a server instead of in the cloud? These things, among others, should be thought through before entering a contract with any cloud computing company. The next thing to consider is the type of cloud computing model your business requires. Does your business need multiple applications to run smoothly? Do you need to keep a large volume of data and files? These aspects of your business should be listed and brought to the attention of the vendor so they can suggest the best cloud computing model for your business. A simple break-even sketch of the pay-as-you-go question is shown below.
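The sketch below makes the pay-as-you-go question concrete with invented numbers (they are assumptions, not vendor prices): it compares the fixed yearly cost of owning a server against an hourly cloud rate at a few usage levels.

# Back-of-the-envelope comparison with hypothetical figures only.
OWNED_SERVER_COST_PER_YEAR = 4000.0   # assumed: purchase amortization + power + admin
PAYG_RATE_PER_HOUR = 0.50             # assumed pay-as-you-go hourly rate

def pay_as_you_go_cost(hours_used_per_year: float) -> float:
    return hours_used_per_year * PAYG_RATE_PER_HOUR

for hours in (1000, 5000, 8760):  # light use, moderate use, always on
    cost = pay_as_you_go_cost(hours)
    cheaper = "pay-as-you-go" if cost < OWNED_SERVER_COST_PER_YEAR else "owned server"
    print(f"{hours:>5} h/year: cloud ${cost:,.0f} vs owned ${OWNED_SERVER_COST_PER_YEAR:,.0f} -> {cheaper}")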
Types of cloud computing models
Not all cloud computing companies offer every type of cloud computing model. Today there are three cloud computing models to choose from; to give you a brief idea of what they are, they are listed below.
Platform as a Service (PaaS) - Operating systems, databases and other applications and programs are all outsourced to a vendor, so none of these needs to be managed locally.
Software as a Service (SaaS) - This model allows consumers, or cloud tenants, to access programs and applications in the cloud. These applications are not installed locally on your company's servers; they are installed elsewhere and accessed remotely, so the business itself no longer needs to license the applications it uses from software vendors.
Infrastructure as a Service (IaaS) - This cloud computing model allows consumers to use servers and storage provided by vendors, covering the storage, networking equipment and other infrastructure requirements your business may need now and in the future.
In considering a cloud computing vendor, there are other factors a business leader needs to know. Availability of the services is crucial: since cloud computing can only be accessed over a network or the Internet, things should be arranged so that a backup Internet service is always available. Know your vendor's products; several cloud computing companies have products that may help your business grow further. Consider migration and transition as well: if a business is new, it is best that the transition is done as soon as possible, with flexibility in terms of product and service availability.
How Cloud Computing Can Benefit You
February 25th, 2012
Cloud computing is one of the fastest-emerging technologies being employed by big and small companies and businesses today. To define cloud computing, one can say that it is the use of outsourced computing resources that can be accessed over a network or the Internet. Cloud computing can be compared to electricity and the power grid: the consumer can easily gain access to electricity by simply turning on an appliance; although the consumer is not involved in its production and does not know where the electricity is produced, accessing it is very simple.
Several cloud computing companies around the globe offer good packages for their services. How would these products and services help you grow your business? Most of the time, companies and individuals are hesitant to migrate their computing needs to the cloud. This is understandable, since not everyone is ready to hand computing over to vendors whose hardware and equipment are not located in the country where the business is headquartered. Several cloud computing companies have addressed this by making sure that the client is aware of where their data and information will be located, how it is accessed and who can access it. Data security is less of a problem with cloud computing these days, since most cloud computing companies are locally available.

Cloud computing is designed for scalability, to ensure that businesses meet their requirements as computing workloads rise. Since these services are outsourced, migration to cloud computing is fairly easy: everything is done and provided by the vendor. Conventional computing usually involves months of preparation and server installation; since cloud computing is outsourced, these phases are no longer required, which alone cuts installation time down by days and hours. Environmental issues are discussed every day, and since cloud computing allows multiple users to share valuable computing resources, this technology is one of the greenest and most effective ways to reduce your business's carbon footprint. Cloud computing companies today compete on pricing and technology, so consumers get the best price for the latest technology available on the market without sacrificing quality or efficiency. Cloud computing also allows instant software updates, which makes sure that all the software a company uses is updated on time on the servers without disrupting subscribers. These benefits and more are enjoyed by many businesses at low cost; since cloud computing resources can be provisioned or released whenever needed, business processes and requirements are met all the time.
There are several other benefits in migrating to the cloud, and cloud computing companies are now more flexible and accessible than ever. Since most cloud computing services and products are only accessible over a network or the Internet, it is best that a business or an individual discusses all available options with their IT expert, defines what issues may arise, and establishes how these cloud computing companies can ensure that all issues are properly addressed and prevented, if not totally eliminated. A business model should be fully evaluated before migrating to cloud computing, since there are other factors that need to be addressed first.
Cloud Computing Companies: What Makes Them So Popular?
February 20th, 2012
Every one of us may have heard of cloud computing at least once; the only problem is that not everyone is aware of what it is all about, how it can change our computing styles, and how it can resolve some of the complex computing issues we may have encountered. Several cloud computing companies are making noise around the world and have revolutionized business processes for the better. What makes these cloud computing companies so popular? It is the computing models they offer and how these models can resolve complex computing issues. Computers are complicated machines, and applications are full of restrictions and take space to install and run. Imagine all the applications and programs you could run without that ultra-powerful, expensive and power-hungry server sitting there in an air-conditioned room staring at you.

In a study conducted by NIST, the National Institute of Standards and Technology, businesses reportedly save at least 15 percent of their revenues yearly just by migrating to cloud computing. To address other issues like slow performance, lag and other known software problems, a user's computing requirements are outsourced to a vendor, and this vendor in turn provides all the computing needs of the user. By simply turning on a device, the client can access all the data, applications and programs required, wherever the client happens to be. Through cloud computing technology, the best solutions are always at the client's fingertips. Accessing files, programs and software through the cloud is now a mainstream technology. Businesses can host their virtual machines side by side without conflicts and share newly installed software with everyone on the team, wherever they are.
Besides bringing this multitude of benefits, these cloud computing companies are popular because they are pioneers in computing and other Internet-related business. To name a few of the cloud computing giants at the top of the list, we have Google, Amazon and Microsoft. Microsoft, already famous in the programming field, strengthens its leverage there through Azure: programmers who outsource their computing needs to the cloud can design and write their code on Microsoft's Azure platform in different programming languages and platforms, even without a powerhouse system of their own. Businesses that require large file storage may choose between Google's and Amazon's cloud computing models, which are designed for multiple-user access with different access levels. This also raises the question of management: most of these high-caliber cloud computing companies use the ITIL model for managing IT services. This approach to IT services has a life cycle, which includes the following:
1. Service Strategy - An IT organization has to understand who the customers are, what services would meet the customers' needs, and what IT capabilities and resources are appropriate for the solution to be executed successfully. This part of the life cycle is mainly driven by strategy; cost has to be defined consistently throughout the lifecycle, and defining it at this early stage is crucial.
2. Service Design - This part of the ITIL life cycle covers the design and testing of new and changed IT services. Improvements, where needed, are also integrated in this part of the service lifecycle.
3. Service Transition - This part of the ITIL life cycle is where new services are integrated into the company's production environment. This phase addresses the handling of changes, control of assets and configuration items.
The best way to gauge the success of cloud computing companies is through customer satisfaction ratings; there are websites run by non-governmental organizations that measure the service levels of most cloud computing companies around the world.
What Are The Best Cloud Computing Companies?
December 15th, 2011
In today's modern world you don't need to carry large amounts of documents with you when travelling from town to town or country to country, because plenty of cloud computing companies are available to serve you. Cloud computing is a kind of Web-based storage that permits you to store information and access it from everywhere; you only need an Internet connection and a subscription with one of the cloud computing companies.
Nowadays, online business is expanding day by day. Many people are eager to earn extra money through online business. They need to communicate with large target audiences, share their business policies with clients and arrange meetings with their partners. Without a cloud computing system they would face a lot of problems carrying documents around, but with the help of cloud computing companies you no longer need to carry these documents with you.

Cloud computing providers have indeed created many opportunities for business, and for online business in particular. You are now able to access any kind of information from anywhere, and it is very simple to maintain and store information with a click of the mouse. If you want to communicate with your clients, you can access the relevant information easily through cloud computing. By using cloud computing services you can run your business, or business-related work, flexibly from anywhere, at any time.
But you do have to ask: what are the best cloud computing companies? You can check their reputation and trustworthiness. To learn the reputation and trustworthiness of a cloud computing company, you can consult former clients and partners of that company. You can also review its service level agreements; from how quickly it works, you can judge its support commitment. If there is a good match between commitment and services, you can consider it among the best. Apart from this, you can ask to see the support department and check whether the company is suited to performing well for you; if it is not suitable for your company, you should not hire it. A good cloud computing company will ensure the best protection, so if a company offers a sound and safe infrastructure, you can consider renting it for your use.
After choosing the best cloud computing services, you can enjoy using a tablet computer, laptop, smartphone and so on to access critical business information from anywhere, at any time.
How Cloud Computing Companies Can Easily Scam You
December 13th, 2011
Cloud computing may be a new term to some, but those who do not know about cloud computing companies are living without knowledge of the modern world. Modern science has given the world a new look, and everything has changed dramatically. Cloud computing companies are the newest buzz in the IT sector. Cloud computing, as offered by cloud companies, may be defined as a way of accessing different data, such as documents, applications, music files, pictures and video files, from anyplace around the world. In earlier times, people used memory cards or portable hard disks to store their data and carried them around in order to access it. Nowadays, you are free of the tension of carrying data with you: you can store any kind of data through cloud computing and access it anytime from anywhere. Because of its several advantages, people are using this brilliant system. It is also very helpful in business because it reduces overall business costs; all you need is an Internet connection.

Nowadays the question is: how do we become familiar with the best cloud computing companies? There are several things to take into account when choosing the best cloud computing company. A cloud computing provider offering committed hosting services ought to be easy to understand. In any case, here are some factors that help in choosing the best cloud computing company. Before choosing, it is essential to know the service level agreement (SLA). Then you can take into account its customer support: cloud computing suppliers have to provide sufficient customer support, and the details of that support should be listed in the service level agreement. Billing is another factor in choosing the best cloud computing provider: by getting billing instructions up front, you will know the total charge and how the billing system works. Otherwise you may be scammed.
There are several excellent cloud computing companies providing brilliant facilities. Google, Akamai and VMware are three high-class cloud computing companies; GoGrid in San Francisco is another, and Microsoft in Redmond, RightScale in Santa Barbara, Rackspace in San Antonio and NetSuite in San Mateo are also excellent cloud computing companies. Given the large number of cloud computing companies, it is very important to check a provider's background, customer service record, security, finances and software requirements. These factors will help you decide which cloud computing company is the best and most suitable for your organization or business-related work.
Finding the Best Cloud Computing Company for You
Cloud computing is a new kind of storage technology by which you can deliver software, data or documents to computers and other devices on demand. Put another way, cloud computing is a way of accessing data, including documents, applications, music files, pictures, video files and so on, from anywhere around the world without carrying it on memory cards, flash drives or hard disks. Using cloud computing companies is very handy for any kind of work, including business and business-like activities.
In your business you have to share business-related information with your partners, and you have to present documents at meetings with your target audiences. To perform these tasks you would otherwise need to carry data, documents and so forth with you, but if you use cloud computing companies you no longer need to carry all these things; you can move freely from one place to another, as long as your device has a network connection.

Nearly all business people want to enjoy cloud computing for its many advantages. You can easily attach all of your infrastructure and applications to the cloud; to access any item, you simply connect to your cloud, and nothing has to be installed on each computer. Flexibility is another benefit of cloud computing: you can store data according to your needs and access it anytime from anywhere. Cloud computing is also very economical; all you have to buy is the necessary access infrastructure, supporting devices and a network connection. Storing a company's ever-growing data and information is a constant pain for the IT department of any established company. Cloud computing is very useful here because there is no need to purchase additional devices for storing data, documents, information and so on; it saves money along with energy. Cloud computing really is useful for businesses from small to large.

Today, many cloud computing companies are readily available to serve anyone. Google, VMware and Akamai are three of the top cloud computing companies in the world. GoGrid in San Francisco is another company that provides cloud computing services, and Microsoft in Redmond, Rackspace in San Antonio, NetSuite in San Mateo and RightScale in Santa Barbara are high-class providers of cloud computing facilities. Apart from these, there are many other companies always ready to offer cloud computing facilities; you can choose any of them to enjoy brilliant cloud computing.
Why The Cloud Computing Market Is Always Growing
December 11th, 2011
Nowadays the cloud computing service is a kind of cost-saving technique offered by cloud computing companies, mainly helpful for storing files online. Many businesses do not know how to use this brilliant cloud computing technique and so have no cost-saving technique in hand. Cloud computing is an excellent way to save money in any business plan and a good medium for focusing on the objectives of the company.
Mainly, cloud computing is a model for using storage space online. Many data storage options are available now, but not all of them allow you to use more space; when you need more space online, you usually have to pay an additional fee for it. Some cloud computing companies, however, provide ample storage space for anyone: there is no need to pay an extra fee, and you only pay for the space you actually use. Thus, cloud computing services eliminate the problem by making more storage space available across various sites, where it can be used by several users. A lot of cloud computing companies offer this service, and for a small software company, embracing cloud computing can be the best option.

Cloud computing is an effective way to minimize costs as well as to maximize the efficiency of a company. The service reduces cost in several ways and is more economical than any fixed-size storage arrangement. In addition, if your company uses a cloud computing service, you do not need to pay additional money for technical workers to keep an eye on services in a fixed space, and by adopting this service you can avoid being forced to centralize your storage. Cloud computing genuinely is a feasible cost-saving option for any Software as a Service (SaaS) company.
The service is also flexible in scaling usage up and down, as offered by the cloud computing companies. Whatever the reason, cloud computing is considered a new and brilliant architecture and a truly impressive service in the IT sector. Numerous cloud computing companies are available to provide it; if you want to use this service for your company, you can obtain it online for a small payment, which may be monthly or on some other schedule. In the end, the cloud computing service is becoming an excellent way to avoid the extra cost of purchasing additional online space, and thus the cloud computing market is growing day by day.
Most Popular Cloud Computing Providers
December 10th, 2011
There are a number of cloud computing companies on the market now, but it is relatively difficult to choose the best one for your business purposes. It is very important to know which cloud providers are excellent, because this helps in fulfilling your requirements. While researching, you should check each cloud provider against some criteria. Here are some basic requirements for identifying excellent and popular cloud computing providers:
Reputation and Reliability
The reputation and reliability of a cloud computing company are essential indicators of its excellence. To understand them, it is very important to know how long the company has been in the industry, and to know its clients and partnerships. Furthermore, it is best to consult the partners and clients of that cloud computing provider; in this way the company's reputation and reliability can be measured and evaluated.
Suitability
When the business runs well in a given cloud environment, the cloud computing company in question can be considered a good fit for your business. If a company offers a no-obligation free trial, it can be considered more suitable for you: the trial lets you see whether the company provides a suitable cloud environment for running your business, and how the company works, before you make a lasting commitment.

Support along with Service Level Agreements
Support and service level agreements play an important role in making a provider popular. A cloud provider's support commitment can be judged from how quickly it gets work done; if there is a wide gap between its commitments and its actual speed, it will not be a good cloud provider. If you visit the office of a cloud computing company, you should ask to see its support department.
Safety of the Cloud
An excellent company will ensure the security of the environment, including the business processes and the systems. If the company offers safe and sound infrastructure at different levels, it will be more suitable than other companies. Good cloud computing companies also provide stronger security for the data center that serves the business, and so ensure a safe environment for the business as a whole.
A cloud computing provider will be excellent and popular when the aforementioned requirements are satisfied.
http://www.utdallas.edu/~hamlen/hamlen-ijisp10.pdf

http://www.computerweekly.com/news/2240089111/Top-five-cloud-computing-security-issues

Top five cloud computing security issues
In the last few years, cloud computing has grown from being a promising business concept to one of the fastest growing segments of the IT industry. Now, recession-hit companies are increasingly realising that simply by tapping into the cloud they can gain fast access to best-of-breed business applications or drastically boost their infrastructure resources, all at negligible cost. But as more and more information on individuals and companies is placed in the cloud, concerns are beginning to grow about just how safe an environment it is.
1. Every breached security system was once thought infallible
2. Understand the risks of cloud computing
3. How cloud hosting companies have approached security
4. Local law and jurisdiction where data is held
5. Best practice for companies in the cloud

Every breached security system was once thought infallible
SaaS (software as a service) and PaaS (platform as a service) providers all trumpet the robustness
of their systems, often claiming that security in the cloud is tighter than in most enterprises. But
the simple fact is that every security system that has ever been breached was once thought
infallible.
Google was forced to make an embarrassing apology in February when its Gmail service collapsed in Europe, while Salesforce.com is still smarting from a phishing attack in 2007 which duped a staff member into revealing passwords.
While cloud service providers face similar security issues as other sorts of organisations, analysts
warn that the cloud is becoming particularly attractive to cyber crooks.
"The richer the pot of data, the more cloud service providers need to do to protect it," says IDC
research analyst David Bradshaw.

Understand the risks of cloud computing
Cloud service users need to be vigilant in understanding the risks of data breaches in this new
environment.
"At the heart of cloud infrastructure is this idea of multi-tenancy and decoupling between
specific hardware resources and applications," explains Datamonitor senior analyst Vuk
Trifkovi. "In the jungle of multi-tenant data, you need to trust the cloud provider that your
information will not be exposed."
For their part, companies need to be vigilant, for instance about how passwords are assigned,
protected and changed. Cloud service providers typically work with numbers of third parties, and
customers are advised to gain information about those companies which could potentially access
their data.
IDC's Bradshaw says an important measure of security often overlooked by companies is how
much downtime a cloud service provider experiences. He recommends that companies ask to see
service providers' reliability reports to determine whether these meet the requirements of the
business. Exception monitoring systems is another important area which companies should ask
their service providers about, he adds.
London-based financial transaction specialist SmartStream Technologies made its foray into the cloud services space last month with a new SaaS product aimed at providing smaller banks and other financial institutions with a cheap means of reconciling transactions. Product manager Darryl Twiggs says that the service has attracted a good deal of interest among small to mid-tier banks, but that some top-tier players are also being attracted by the potential cost savings.
An important consideration for cloud service customers, especially those responsible for highly
sensitive data, Twiggs says, is to find out about the hosting company used by the provider and if
possible seek an independent audit of their security status.
"Customers we engage with haven't been as stringent as we thought they would have been with
this".

How cloud hosting companies have approached security
As with most SaaS offerings, the applications forming SmartClear's offering are constantly being
tweaked and revised, a fact which raises more security issues for customers. Companies need to
know, for instance, whether a software change might actually alter its security settings.
"For every update we review the security requirements for every user in the system," Twiggs
says.
One of the world's largest technology companies, Google, has invested a lot of money into the
cloud space, where it recognises that having a reputation for security is a key determinant of
success. "Security is built into the DNA of our products," says a company spokesperson.
"Google practices a defense-in-depth security strategy, by architecting security into our people,
process and technologies".
However, according to Datamonitor's Trifkovic, the cloud is still very much a new frontier with very little in the way of specific standards for security or data privacy. In many ways, he says, cloud computing is in a similar position to where the recording industry found itself when it was trying to combat peer-to-peer file sharing with copyright laws created in the age of analogue.
"In terms of legislation, at the moment there's nothing that grabs my attention that is specifically
built for cloud computing," he says. "As is frequently the case with disruptive technologies, the
law lags behind the technology development for cloud computing."
What's more, many are concerned that cloud computing remains at such an embryonic stage that
the imposition of strict standards could do more harm than good.
IBM, Cisco, SAP, EMC and several other leading technology companies announced in late
March that they had created an 'Open Cloud Manifesto' calling for more consistent security and
monitoring of cloud services.
But the fact that neither Amazon.com, Google nor Salesforce.com agreed to take part suggests
that broad industry consensus may be some way off. Microsoft also abstained, charging that IBM
was forcing its agenda.
"Standards by definition are restrictive. Consequently, people are questioning whether cloud
computing can benefit from standardisation at this stage of market development," says Trifković.
"There is a slight reluctance on the part of cloud providers to create standards before the market
landscape is fully formed."
Until it is, there are nevertheless a handful of existing web standards which companies in the
cloud should know about. Chief among these is ISO27001, which is designed to provide the
foundations for third party audit, and implements OECD principles governing security of
information and network systems. The SAS70 auditing standard is also used by cloud service
providers.


Local law and jurisdiction where data is held
Possibly even more pressing an issue than standards in this new frontier is the emerging question
of jurisdiction. Data that might be secure in one country may not be secure in another. In many
cases though, users of cloud services don't know where their information is held. Currently in the
process of trying to harmonise the data laws of its member states, the EU favours very strict
protection of privacy, while in America laws such as the US Patriot Act invest government and
other agencies with virtually limitless powers to access information including that belonging to
companies.
UK-based electronics distributor ACAL is using NetSuite OneWorld for its CRM. Simon Rush,
IT manager at ACAL, has needed to ensure that ACAL had immediate access to all of its data
should its contract with NetSuite be terminated for any reason, so that the information could be
quickly relocated. Part of this included knowing in which jurisdiction the data is held. "We had
to make sure that, as a company, our data was correctly and legally held."
European concerns about US privacy laws led to the creation of the US Safe Harbor Privacy
Principles, which are intended to provide European companies with a degree of insulation from
US laws. James Blake from e-mail management SaaS provider Mimecast suspects that these
powers are being abused. "Counter terrorism legislation is increasingly being used to gain access
to data for other reasons," he warns.
Mimecast provides a comprehensive e-mail management service in the cloud for over 25,000
customers, including 40% of the top legal firms in the UK.
Customers benefit from advanced encryption that only they are able to decode, ensuring that
Mimecast acts only as the custodian, rather than the controller of the data, offering companies
concerned about privacy another layer of protection. Mimecast also gives customers the option
of having their data stored in different jurisdictions.
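This custodian-not-controller arrangement relies on client-side encryption: the customer encrypts data with a key the provider never sees, so the provider can store the ciphertext without being able to read it. As a rough illustration of the idea (an assumed sketch in Python using the cryptography library, not Mimecast's actual implementation):

```python
# Assumed sketch of client-side encryption before handing data to a cloud
# provider; the provider stores only ciphertext and never holds the key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # generated and kept by the customer, never uploaded
cipher = Fernet(key)

message = b"Quarterly results - internal only"
ciphertext = cipher.encrypt(message)   # this is all the provider ever stores

# Only the key holder can recover the plaintext.
assert cipher.decrypt(ciphertext) == message
```

In this model the provider remains a custodian of opaque data, and losing the key means losing access, which is why key management stays with the customer.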
For John Tyreman, IT manager for outsourced business services provider Liberata, flexibility
over jurisdiction was a key factor in his choosing Mimecast to help the company meet its
obligations to store and manage e-mails from 2500 or so staff spread across 20 countries. The
company is one of the UK's leading outsourcing providers for the public sector, life pensions and
investments, and corporate pensions. "Storing our data in the US would have been a
major concern," Tyreman says.

Best practice for companies in the cloud
Inquire about exception monitoring systems
Be vigilant around updates, making sure that staff don't suddenly gain access privileges they're
not supposed to have.
Ask where the data is kept and inquire as to the details of data protection laws in the relevant
jurisdictions.
Seek an independent security audit of the host
Find out which third parties the company deals with and whether they are able to access your
data
Be careful to develop good policies around passwords: how they are created, protected and
changed (a minimal example follows this list).
Look into availability guarantees and penalties.
Find out whether the cloud provider will accommodate your own security policies
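For the password point above, one way to make a "good policy" concrete is to enforce a minimum standard in code whenever credentials are created or changed. The following is a minimal sketch; the length and complexity thresholds are assumptions for illustration, not requirements taken from the article.

```python
import re

MIN_LENGTH = 12  # assumed threshold for illustration

def meets_policy(password: str) -> bool:
    """Return True if the password satisfies a basic complexity policy."""
    return (
        len(password) >= MIN_LENGTH
        and re.search(r"[a-z]", password) is not None          # lower-case letter
        and re.search(r"[A-Z]", password) is not None          # upper-case letter
        and re.search(r"\d", password) is not None             # digit
        and re.search(r"[^A-Za-z0-9]", password) is not None   # symbol
    )

print(meets_policy("Tr0ub4dor&3x!"))  # True
print(meets_policy("password"))       # False
```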





































How Cloud Computing Can Benefit You

Cloud computing is one of the fastest-emerging technologies employed by companies and businesses of
all sizes today. Put simply, cloud computing is the use of outsourced computing resources that can be
accessed over a network or the internet. It can be compared to electricity and the power grid: the
consumer gains access to power simply by turning on an appliance, and even though the consumer is not
involved in generating the electricity and does not know where it is produced, accessing it is very simple.

Several cloud computing companies around the globe offer good packages for their services. How would
these products and services help you grow your business? Companies and individuals are often hesitant
to migrate their computing needs to the cloud, which is understandable, since not everyone is ready to
hand computing over to vendors whose hardware is located outside the country where the business is
headquartered. Several cloud computing companies have addressed this by making sure the client knows
where their data will be located, how it is accessed and who can access it. Data security is also less of a
barrier these days, since many cloud computing companies are locally available.


Cloud computing is designed for scalability, ensuring that businesses can meet their requirements as
computing demand rises. Since these services are outsourced, migration to cloud computing is fairly
easy; everything is provided and handled by the vendor. Conventional computing usually takes months of
preparation and server installation, but because cloud computing is outsourced, these phases are no
longer required, cutting down the days and hours spent on installation alone.
Environmental issues are also discussed every day; because cloud computing allows multiple users to
share valuable computing resources, this technology is one of the greenest and most effective ways to
reduce your business's carbon footprint. Cloud computing companies today compete on pricing and
technology, so consumers get the best price for the latest technology available on the market without
sacrificing quality or efficiency. Cloud computing also allows instant software updates, which ensures
that all the software a company uses is updated on the servers on time without affecting subscribers.
These benefits and more are enjoyed by many businesses at low cost; since cloud computing resources
can be released or employed whenever needed, business processes and requirements are met all the time.

There are several other benefits to migrating to the cloud, and cloud computing companies are now more
flexible and accessible than ever. Since most cloud computing services and products are only accessible
over a network or the internet, it is best that a business or an individual discusses all available options
with their IT expert, defines what issues may arise, and asks how these cloud computing companies can
ensure that all issues are properly addressed and prevented, if not totally eliminated. A business model
should be fully evaluated before migrating to cloud computing, since there are other factors that need to
be addressed first.




Cloud Computing Companies: What Makes Them So Popular?
February 20th, 2012
Every one of us may have heard of cloud computing at least once; the only problem is that not everyone
is aware of what it is all about, how it can change our computing styles, and how it can resolve some of
the complex computing issues we may have encountered. Several cloud computing companies are
making noise around the world and have revolutionized business processes for the better. What makes
these cloud computing companies so popular? It's the computing models they offer and how these
models can resolve complex computing issues. Computers are complicated machines, and applications
are full of restrictions and take space to install and run. Imagine all the applications and programs you
could run without that ultra-powerful, expensive and power-hungry server sitting there in an
air-conditioned room staring at you.

In a study conducted by NIST, the National Institute of Standards and Technology, businesses save at
least 15 percent of their revenues yearly just by migrating to cloud computing. To address other issues
like slow performance, lag and other known software problems, a user's computing requirements are
outsourced to a vendor, and this vendor in turn provides all the computing needs of the user. By simply
turning on a device, the client can access all the data, applications and programs required, wherever the
client is. Through cloud computing technology the best solutions are always at the client's fingertips.
Accessing files, programs and software through the cloud is now mainstream. Businesses may host their
virtual machines side by side without conflicts and share newly installed software with everyone in the
team, wherever they are.
Besides bringing this multitude of benefits, these cloud computing companies are popular because they
are pioneers in computing and other internet-related types of business. To name a few of the cloud
computing giants at the top of the list, we have Google, Amazon and Microsoft. Microsoft, already
famous in the programming field, strengthened its leverage there through Microsoft Azure. Programmers
who outsource their computing needs through cloud computing may now design and write their code on
Microsoft's Azure in different programming languages or platforms, even without a powerhouse system.
Businesses that require large file storage may choose between Google's or Amazon's cloud computing
models. These computing models are designed for multiple-user access with different access levels. This
also raises the question of management; most of these high-calibre cloud computing companies use the
ITIL model for management. This approach to IT services has a life cycle, which includes the following:
1. Service Strategy: an IT organization has to understand who its customers are, what services would
meet the customers' needs, and what IT capabilities and resources are needed to execute the solution
successfully. This part of the life cycle is mainly driven by strategy; cost has to be defined consistently
throughout the lifecycle, and defining it at this early stage is crucial.
2. Service Design: this part of the ITIL life cycle covers the design and testing of new and changed IT
services. Improvements, where needed, are also integrated in this part of the lifecycle.
3. Service Transition: this part of the ITIL life cycle is where new services are integrated into the
company's production environment. This phase addresses possible changes and the control of assets and
configuration items.
The best way to gauge the success of cloud computing companies is through customer satisfaction rates;
there are websites run by non-governmental organizations that measure the service levels of most cloud
computing companies around the world.

What Are The Best Cloud Computing Companies?
December 15th, 2011
In the modern world, you don't need to carry large amounts of documents with you when travelling from
town to town or country to country, because a lot of cloud computing companies are available to serve
you. Cloud computing is a kind of web-based storage that permits you to store information and access it
from anywhere. You only need an internet connection and a subscription with any of the cloud computing
companies.
Nowadays, online business is expanding day by day. Many people are eager to earn extra money through
online business. They need to communicate with a large target audience, share their business policies
with clients, and set up several meetings with their partners. Without a cloud computing system, they
would face a lot of problems carrying documents around. But now, with the help of cloud computing
companies, you don't need to carry these documents with you.

Cloud computing providers have really created many opportunities in business, online and offline. Now
you are able to access any kind of information from anywhere, and it is very simple to maintain and store
that information with a click of the mouse. If you want to communicate with your clients, you will be
able to access information easily through cloud computing. By using cloud computing services you can
run your business or business-related jobs flexibly, from anywhere and at any time.
But you have to ask: which are the best cloud computing companies? You can check their reputation and
trustworthiness. To gauge the reputation and trustworthiness of a cloud computing company, try
consulting former clients and partners of that company. You can check their service level agreements,
and from the speed at which they work you can judge their support commitment. If there is a good match
between commitment and services, you can rate them as one of the best. Apart from this, you can ask to
see the support department and check whether the company is suited to performing well for you. If it is
not suitable for your company, you should not hire it. A good cloud computing company will ensure the
best protection; if a company offers sound and safe infrastructure, you can consider renting it for your
use.
After choosing the best cloud computing service, you can use a tablet, laptop, smartphone and so on to
access critical business information from anywhere, at any time.



























Finding the Best Cloud Computing Company for You

Cloud computing is a new kind of storage technology, by which you can share software, data or
documents with computers and other devices on demand. Put another way, cloud computing is a way of
accessing data including documents, applications, music files, pictures, video files and so on from
anywhere in the world, without carrying it around on memory cards, flash drives or hard disks. Using
cloud computing companies is very handy for any work, including business and business-like tasks.
In your business, you have to share business-related information with your partners. You also have to
present documents at meetings with your target audience. To perform these tasks you would normally
need to carry the data and documents with you, but if you are using cloud computing companies, you
don't need to carry all these things. You can move freely from one place to another; all you need is a
network connection on your device.

Nearly all business people want to enjoy cloud computing for its advantages. You can easily attach all
your infrastructure and applications to the cloud; to access any item, you simply connect to your cloud,
and nothing needs to be installed on each computer. Flexibility is another benefit of cloud computing:
you can store data according to your needs and access it anytime from anywhere. Cloud computing is
also very economical for any user; all you have to buy is the necessary access infrastructure, supporting
devices and a network connection. Storing a company's ever-growing data and information is a constant
pain for the IT department of any established company. Cloud computing is very useful here because
there is no need to purchase additional devices for storing data, documents, information and so on, which
saves money as well as energy. Cloud computing really is useful for businesses from small to large.

Today, many cloud computing companies are readily available to serve anyone. Google, VMware and
Akamai are three of the top cloud computing companies in the world. GoGrid in San Francisco is another
company that provides cloud computing services, and Microsoft in Redmond, Rackspace in San Antonio,
NetSuite in San Mateo and RightScale in Santa Barbara are also high-class providers of cloud computing
facilities. Apart from these, there are many other companies ready to offer cloud computing facilities,
and you can choose any of them to enjoy cloud computing.













Why The Cloud Computing Market Is Always Growing
Nowadays, cloud computing is a cost-saving service offered by cloud computing companies, mainly
helpful for storing files online. A lot of businesses don't know how to use this cloud computing technique
and so have no cost-saving technique in their hands. Cloud computing is an excellent way to save money
in any business plan and the best medium for focusing on the company's objectives.
Mainly, cloud computing is a model for using storage space online. Many data storage options are
available now, but not all of them allow you to use more space; when you need more space online, you
usually have to pay an additional fee for it. Some cloud computing companies, however, allow anyone
enough storage space, with no extra fee: you pay only for the space you actually use. Cloud computing
services thus eliminate the problem by allowing more storage space across various sites, which can be
shared by several users. A lot of cloud computing companies offer this service, and for a small software
company, embracing cloud computing can be the best option.

Cloud computing is an effective way to minimize cost and maximize the efficiency of a company. This
service reduces cost in several ways and is more economical than any fixed-size storage model. If your
company uses a cloud computing service, you don't need to pay extra for technical staff to keep an eye on
services in a fixed storage space, and by adopting this service you can minimize the need to centralize
storage yourself. Cloud computing really is a feasible cost-saving option for any Software as a Service
(SaaS) company.
The service offered by cloud computing companies is also flexible, scaling usage up and down as needed.
Whatever the reason, cloud computing is considered a new and brilliant architecture and a really
impressive service in the IT sector. Numerous cloud computing companies are available to provide it; if
you want to use this service for your company, you can get it online for a small fee, which may be billed
monthly or on another schedule. In short, cloud computing is becoming an excellent way to avoid the
extra cost of purchasing additional online space, and that is why the cloud computing market is growing
day by day.














Most Popular Cloud Computing Providers
There are a number of cloud computing companies on the market now, but it is relatively difficult to
choose the best one for your business purpose. It is very important to know which cloud providers are
excellent, because that knowledge will help you fulfil your requirements. While researching, you should
check each cloud provider against clear criteria. Here are some basic requirements for identifying
excellent and popular cloud computing providers:
Reputation and Reliability
The reputation and reliability of a cloud computing company are essential measures of its excellence. To
understand its reputation and reliability, it is very important to know how long it has been in the industry,
as well as who its clients and partners are. It is also worth consulting the partners and clients of that cloud
computing provider. In this way the reputation and reliability of the company can be measured and
evaluated.
Suitability
If the business runs well in the provider's cloud environment, the cloud computing company can be
considered a good fit for your business. A company that offers a no-obligation free trial should be taken
into account as more suitable for you, because the trial lets you see how well the company can run your
business in its cloud environment and how the company works before you make a lasting commitment.

Support along with Service Level Agreements
Support and service level agreements play an important role in making a provider popular. A cloud
provider's support commitment can be judged from the speed at which it works; if there is a large gap
between its commitment and its actual speed, it will not be a good cloud provider. If you visit the office
of a cloud computing company, you ought to ask to see their support department.
Safety of the Cloud
An excellent company will ensure the security of the environment, including the business processes and
systems. If the company offers safe and sound infrastructure at different levels, it will be more suitable
than other companies. Good cloud computing companies also ensure strong security for the data centre
that holds the business's data, and ultimately provide a safe environment for the business. A cloud
computing provider will be excellent and popular when the aforementioned requirements are met.
http://www.utdallas.edu/~hamlen/hamlen-ijisp10.pdf











http://www.computerweekly.com/news/2240089111/Top-five-cloud-computing-security-issues

Top five cloud computing security issues

In the last few years, cloud computing has grown from being a promising business concept to one of the
fastest growing segments of the IT industry. Now, recession-hit companies are increasingly realising
that simply by tapping into the cloud they can gain fast access to best-of-breed business applications or
drastically boost their infrastructure resources, all at negligible cost. But as more and more information
on individuals and companies is placed in the cloud, concerns are beginning to grow about just how safe
an environment it is.
1. Every breached security system was once thought infallible
2. Understand the risks of cloud computing
3. How cloud hosting companies have approached security
4. Local law and jurisdiction where data is held
5. Best practice for companies in the cloud

Every breached security system was once thought infallible
SaaS (software as a service) and PaaS (platform as a service) providers all trumpet the robustness of
their systems, often claiming that security in the cloud is tighter than in most enterprises. But the simple
fact is that every security system that has ever been breached was once thought infallible.
Google was forced to make an embarrassing apology in February when its Gmail service collapsed in
Europe, while Salesforce.com is still smarting from a phishing attack in 2007 which duped a staff
member into revealing passwords.
While cloud service providers face similar security issues as other sorts of organisations, analysts warn
that the cloud is becoming particularly attractive to cyber crooks.
"The richer the pot of data, the more cloud service providers need to do to protect it," says IDC research
analyst David Bradshaw.
Read more about cloud computing and security >>

Understand the risks of cloud computing
Cloud service users need to be vigilant in understanding the risks of data breaches in this new
environment.
"At the heart of cloud infrastructure is this idea of multi-tenancy and decoupling between specific
hardware resources and applications," explains Datamonitor senior analyst Vuk Trifkovi. "In the
jungle of multi-tenant data, you need to trust the cloud provider that your information will not be
exposed."
For their part, companies need to be vigilant, for instance about how passwords are assigned, protected
and changed. Cloud service providers typically work with numbers of third parties, and customers are
advised to gain information about those companies which could potentially access their data.
IDC's Bradshaw says an important measure of security often overlooked by companies is how much
downtime a cloud service provider experiences. He recommends that companies ask to see service
providers' reliability reports to determine whether these meet the requirements of the business.
Exception monitoring systems is another important area which companies should ask their service
providers about, he adds.
London-based financial transaction specialists SmartStream Technologies made its foray into the cloud
services space last month with a new SaaS product aimed at providing smaller banks and other
financial institutions with a cheap means of reconciling transactions. Product manager Darryl
Twiggs says that the service has attracted a good deal of interest amongst small to mid-tier banks, but
that some top tier players are also being attracted by the potential cost savings.
An important consideration for cloud service customers, especially those responsible for highly
sensitive data, Twiggs says, is to find out about the hosting company used by the provider and if
possible seek an independent audit of their security status.
"Customers we engage with haven't been as stringent as we thought they would have been with this".

How cloud hosting companies have approached security
As with most SaaS offerings, the applications forming SmartClear's offering are constantly being
tweaked and revised, a fact which raises more security issues for customers. Companies need to know,
for instance, whether a software change might actually alter its security settings.
"For every update we review the security requirements for every user in the system," Twiggs says.
One of the world's largest technology companies, Google, has invested a lot of money into the cloud
space, where it recognises that having a reputation for security is a key determinant of success.
"Security is built into the DNA of our products," says a company spokesperson. "Google practices a
defense-in-depth security strategy, by architecting security into our people, process and technologies".
However, according to Datamonitor's Trifkovi, the cloud is still very much a new frontier with very
little in the way of specific standards for security or data privacy. In many ways he says that cloud
computing is in a similar position to where the recording industry found itself when it was trying to
combat peer-to-peer file sharing with copyright laws created in the age of analogue.
"In terms of legislation, at the moment there's nothing that grabs my attention that is specifically built
for cloud computing," he says. "As is frequently the case with disruptive technologies, the law lags
behind the technology development for cloud computing."
What's more, many are concerned that cloud computing remains at such an embryonic stage that the
imposition of strict standards could do more harm than good.
IBM, Cisco, SAP, EMC and several other leading technology companies announced in late March that
they had created an 'Open Cloud Manifesto' calling for more consistent security and monitoring of
cloud services.
But the fact that neither Amazon.com, Google nor Salesforce.com agreed to take part suggests that
broad industry consensus may be some way off. Microsoft also abstained, charging that IBM was
forcing its agenda.
"Standards by definition are restrictive. Consequently, people are questioning whether cloud computing
can benefit from standardisation at this stage of market development." says Trifkovi. "There is a slight
reluctance on the part of cloud providers to create standards before the market landscape is fully
formed."
Until it is there are nevertheless a handful of existing web standards which companies in the cloud
should know about. Chief among these is ISO27001, which is designed to provide the foundations for
third party audit, and implements OECD principles governing security of information and network
systems. TheSAS70 auditing standard is also used by cloud service providers.
Read more about cloud computing and security >>

Local law and jurisdiction where data is held
Possibly even more pressing an issue than standards in this new frontier is the emerging question of
jurisdiction. Data that might be secure in one country may not be secure in another. In many cases
though, users of cloud services don't know where their information is held. Currently in the process of
trying to harmonise the data laws of its member states, the EU favours very strict protection of privacy,
while in America laws such as the US Patriot Act invest government and other agencies with virtually
limitless powers to access information including that belonging to companies.
UK-based electronics distributor ACAL is using NetSuite OneWorld for its CRM. Simon Rush, IT
manager at ACAL, has needed to ensure that ACAL had immediate access to all of its data should its
contract with NetSuite be terminated for any reason, so that the information could be quickly relocated.
Part of this included knowing in which jurisdiction the data is held. "We had to make sure that, as a
company, our data was correctly and legally held."
European concerns about about US privacy laws led to creation of the US Safe Harbor Privacy
Principles, which are intended to provide European companies with a degree of insulation from US
laws. James Blake from e-mail management SaaS provider Mimecast suspects that these powers are
being abused. "Counter terrorism legislation is increasingly being used to gain access to data for other
reasons," he warns.
Mimecast provides a comprehensive e-mail management service in the cloud for over 25,000
customers, including 40% of the top legal firms in the UK .
Customers benefit from advanced encryption that only they are able to decode, ensuring that Mimecast
acts only as the custodian, rather than the controller of the data, offering companies concerned about
privacy another layer of protection. Mimecast also gives customers the option of having their data
stored in different jurisdictions.
For John Tyreman, IT manager for outsourced business services provider Liberata, flexibility over
jurisdiction was a key factor in his choosing Mimecast to help the company meet its obligations to store
and manage e-mails from 2500 or so staff spread across 20 countries. The company is one of the UK 's
leading outsourcing providers for the Public Sector, Life Pensions and Investments and Corporate
Pensions leading. "Storing our data in the US would have been a major concern," Tyreman says.

Best practice for companies in the cloud
Inquire about exception monitoring systems
Be vigilant around updates and making sure that staff don't suddenly gain access privileges they're not
supposed to.
Ask where the data is kept and inquire as to the details of data protection laws in the relevant
jurisdictions.
Seek an independent security audit of the host
Find out which third parties the company deals with and whether they are able to access your data
Be careful to develop good policies around passwords; how they are created, protected and changed.
Look into availability guarantees and penalties.
Find out whether the cloud provider will accommodate your own security policies
Cloud computing pros and cons for regulated data
Cloud computing legal issues: Developing cloud computing contracts
Jose Granado on securing cloud computing, data management
Cloud Controls Matrix
The Daily Cloud repository for March
Top five must-read cloud computing blogs
Private cloud computing security issues
Top five service-oriented cloud computing articles in 2011
Private cloud computing models alleviate some cloud security issues
Panel discusses cloud computing security issues































The Top 5 Security Risks of Cloud Computing

http://blogs.cisco.com/smallbusiness/the-top-5-security-risks-of-cloud-computing/

Evaluate potential providers based on their responses to these key concerns.
More and more, small businesses are moving to cloud computing, signing up with private providers
that make sophisticated applications more affordable as well as setting up their own accounts with
public social media sites like Facebook. The trend is confirmed by Microsoft in its global SMB Cloud
Adoption Study 2011, which found that 49 percent of small businesses expect to sign up for at least one
cloud service in the next three years.
Private and public clouds function in the same way: Applications are hosted on a server and accessed
over the Internet. Whether you're using a Software as a Service (SaaS) version of customer relationship
management (CRM) software, creating offsite backups of your company data, or setting up a social
media marketing page, you're trusting a third-party company with information about your business and,
most likely, your customers.
Although cloud computing can offer small businesses significant cost-saving benefits, namely
pay-as-you-go access to sophisticated software and powerful hardware, the service does come with
certain security risks. When evaluating potential providers of cloud-based services, you should keep
these top five security concerns in mind.
1. Secure data transfer. All of the traffic travelling between your network and whatever service you're
accessing in the cloud must traverse the Internet. Make sure your data is always travelling on a secure
channel; only connect your browser to the provider via a URL that begins with https. Also, your data
should always be encrypted and authenticated using industry-standard protocols, such as IPsec (Internet
Protocol Security), that have been developed specifically for protecting Internet traffic (a short sketch of
this check in code follows this list).
2. Secure software interfaces. The Cloud Security Alliance (CSA)
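For point 1 above, the "https only, verify the channel" advice can be checked in application code before any data leaves your network. The sketch below uses the Python requests library against a placeholder URL; it is a generic illustration, not any particular provider's API.

```python
import requests

# Placeholder endpoint; any real cloud API URL should begin with https.
API_URL = "https://api.example-cloud-provider.com/v1/files"

if not API_URL.startswith("https://"):
    raise ValueError("Refusing to send data over an unencrypted channel")

# verify=True (the default) makes requests validate the provider's TLS
# certificate, so traffic is encrypted and sent to an authenticated endpoint.
response = requests.get(API_URL, timeout=10, verify=True)
response.raise_for_status()
```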
http://www.thecloudcomputing.org/2012/

"Change we are leading" is the theme of CLOUD 2012. Cloud Computing has become a scalable
services consumption and delivery platform in the field of Services Computing. The technical
foundations of Cloud Computing include Service-Oriented Architecture (SOA) and Virtualizations of
hardware and software. The goal of Cloud Computing is to share resources among the cloud service
consumers, cloud partners, and cloud vendors in the cloud value chain. The resource sharing at various
levels results in various cloud offerings such as infrastructure cloud (e.g. hardware, IT infrastructure
management), software cloud (e.g. SaaS focusing on middleware as a service, or traditional CRM as a
service), application cloud (e.g. Application as a Service, UML modeling tools as a service, social
network as a service), and business cloud (e.g. business process as a service). Extended versions of
selected research track papers will be invited for potential publication in the IEEE Transactions on
Services Computing (TSC), International Journal of Web Services Research (JWSR), and International
Journal of Business Process Integration and Management (IJBPIM). Both TSC and JWSR are
indexed by SCI and EI [Link]. CLOUD proceedings are EI indexed. According to Thomson
Scientific, JWSR is listed in the 2008 Journal Citation Report with an Impact Factor of 1.200. The
journal ranks #47 of 99 in the Computer Science, Information Systems category and #37 of 86 in the
Computer Science, Software Engineering category.
Under the umbrella of the IEEE 2012 World Congress on Services (SERVICES 2012), CLOUD 2012
will co-locate with the following service-oriented sister conferences: the 19th IEEE 2012 International
Conference on Web Services (ICWS 2012), the 9th IEEE 2012 International Conference on Services
Computing (SCC 2012), the 1st IEEE International Conference on Mobile Services (MS 2012), and the
1st IEEE International Conference on Services Economics (SE 2012). The five co-located theme topic
conferences will all center around "services," with each focusing on a different aspect
(cloud-based services, web-based services, business services, mobile services, and the economics of
services).
To discuss this emerging enabling technology of the modern services industry, CLOUD 2012 invites
you to join the largest academic conference exploring modern services and software sciences in the
field of Services Computing, which has been formally promoted by the IEEE Computer Society since 2003.
From a technology foundation perspective, Services Computing has become the default discipline in the
modern services industry.
The 2012 IEEE Fifth International Conference on Cloud Computing (CLOUD 2012) is the theme topic
conference for modeling, developing, publishing, monitoring, managing, and delivering XaaS (everything
as a service) in the context of various types of cloud environments.
The 2012 IEEE 19th International Conference on Web Services (ICWS 2012) is the theme topic
conference for Web-based services, featuring data-centric services modeling, development,
publishing, discovery, composition, testing, adaptation, and delivery, Web services technologies and
standards, and service-oriented science.
The 2012 IEEE Ninth International Conference on Services Computing (SCC 2012) is the theme topic
conference for services lifecycle management, enterprise modeling, business consulting, solution
creation, services orchestration, services optimization, services management, services marketing,
business process integration and management.
The 2012 IEEE 1st International Conference on Mobile Services (MS 2012) is the theme topic
conference for the development, publication, discovery, orchestration, invocation, testing, delivery,
and certification of mobile applications and services.
The 2012 IEEE 1st International Conference on Services Economics (SE 2012) is the theme topic
conference for economic and enterprise transformation aspects of the utility-oriented services paradigm.


















What To Expect Next From Cloud Computing Companies
March 6th, 2012
Cloud computing emerged from virtualization, which allowed more cost-effective and efficient
computing solutions and changed both how computing is done and how it works. Since computing needs
are ever expanding, new developments from cloud computing companies are expected, with newly
improved products and services. In recent years, security has been one of the biggest concerns amongst
cloud computing consumers, but through collaboration and development by different cloud computing
companies, sandboxing is now the most widely accepted security technology for the multi-tenant cloud.
This safety measure separates each tenant's data behind a security cloak that can only be accessed over
the internet by the owner and the users they allow. Security breaches still pose a threat, but sandboxing
technology minimizes them, giving tenants more acceptable ways of protecting their data and
information.
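The "security cloak" described above is easiest to picture as logical isolation enforced in code: each tenant's records are kept in a separate partition, and every read is checked against the requesting tenant. The class below is a deliberately simplified, assumed illustration of that pattern, not a description of any vendor's sandboxing product.

```python
class MultiTenantStore:
    """Toy multi-tenant store: data is partitioned per tenant and access is
    checked on every read, mimicking the isolation sandboxing aims for."""

    def __init__(self):
        self._data = {}  # tenant_id -> {key: value}

    def put(self, tenant_id: str, key: str, value: str) -> None:
        self._data.setdefault(tenant_id, {})[key] = value

    def get(self, requesting_tenant: str, owner_tenant: str, key: str) -> str:
        # A tenant may only read from its own partition.
        if requesting_tenant != owner_tenant:
            raise PermissionError("tenant is not allowed to access this data")
        return self._data[owner_tenant][key]

store = MultiTenantStore()
store.put("bank-a", "report", "confidential figures")
print(store.get("bank-a", "bank-a", "report"))   # allowed
# store.get("bank-b", "bank-a", "report")        # would raise PermissionError
```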

Another development worth mentioning is the set of trends amongst cloud computing companies. These
trends include lower fees for IaaS, where Amazon currently holds the top spot among cloud computing
companies. Although most consumers migrated to cloud computing for its "access anywhere, from any
device" innovations, some are still after the savings they'll make in the long run. Most vendors competing
in the cloud computing business are aware of what their competitors are best at and where they are
weakest. This drives the business to expand and develop new technologies to keep competitors at bay,
but only enough to hold on to the piece of the pie they already have. The concerns consumers face are
still the main priority of these cloud computing companies: security, service level agreements, consumer
privacy, and compliance with local and international law.
Businesses and individuals who have already migrated to cloud computing still encounter issues, both
real and perceived; these concerns are broken down into several categories, which helps in determining
the root cause. Although it is true that most of these concerns are customer-created (due to lack of
information and knowledge), they are still the number one priority of cloud computing companies, and
resolutions are being developed for customer satisfaction.
Google, the number one provider of free cloud computing solutions for individuals and businesses,
continuously develops a more user-friendly interface to improve its services. Mobility is the number one
requirement today: people are on the go and need to access files, programs, data and applications
wherever they are. This feature is still the anchor of the ship; it lets the job get done on time, without
delays and issues. It was the number one requirement of businesses and individuals before they migrated
to cloud computing, as it is the premise on which cloud computing was developed in the first place.
The biggest concern amongst cloud computing companies is the limited wireless bandwidth for
smartphones, tablets and all other devices that use wireless technology. As of today, wireless devices
outnumber the available wireless bandwidth. Since cloud computing is about mobility, this issue may be
the roadblock for cloud computing. But, as usual, when these kinds of issues arise a solution will follow;
for now, let's see what the cloud computing companies are cooking and who cooks the best dish.







How To Pick The Best Cloud Computing Company
March 1st, 2012
There are hundreds if not thousands of cloud computing companies out there, so if you're planning to
migrate from conventional computing to cloud computing, there are factors you need to consider. First
things first: if you own a business and already have IT experts in your company, it is best to bring these
plans to their attention. Some applications and programs are too complex to run in the cloud; rather than
forcing them there, you may need local servers installed in your office for those programs to run
smoothly. There are other things you need to consider here: pricing, availability, support, service levels
and other technical issues your business may encounter.

The first thing a business leader should consider is pricing: will you save more money if you enter an
agreement with a pay-as-you-go vendor? How many employees does your business have? Would all of
them require access to the same applications and software? Would it be better to have that software
installed locally on a server instead of in the cloud? These things, among others, should be thought
through before entering a contract with any cloud computing company. The next thing to consider is the
type of cloud computing model your business requires. Does your business need multiple applications to
run smoothly? Do you need to keep a large volume of data and files? These aspects of your business
should be listed and brought to the attention of the vendor so they can suggest the best cloud computing
model for your business.
Types of cloud computing models
Several cloud computing companies do not offer all types of cloud computing models. Today, there are
three cloud computing models to choose from. To give you a brief idea of what they are, I will list them
all down.
Platform as a Service (PaaS): operating systems, databases and other applications and programs are all
outsourced to a vendor, so none of these needs to be managed locally any longer.
Software as a Service (SaaS): this model allows consumers or cloud tenants to access programs and
applications in the cloud. These applications are not installed locally on your company's server; they are
installed elsewhere and delivered for local access. A business no longer needs a licence from software
vendors for the applications it uses.
Infrastructure as a Service (IaaS): this cloud computing model allows consumers to use servers and
storage provided by vendors, covering storage, networking equipment and other infrastructure your
business may need now and in the future (a generic sketch of how such infrastructure can be requested
through a vendor's API follows this list).
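As referenced in the IaaS description above, the sketch below shows the general shape of asking an IaaS vendor for a server through a REST API. The endpoint, token and fields are hypothetical placeholders; real vendors such as Amazon expose their own SDKs and parameters.

```python
import requests

# Hypothetical IaaS endpoint and credentials; every real vendor defines its own API.
IAAS_API = "https://iaas.example-vendor.com/v1/servers"
TOKEN = "replace-with-your-api-token"

# With IaaS you describe the infrastructure you need and the vendor runs it for you.
server_spec = {"name": "web-01", "cpus": 2, "memory_gb": 4, "disk_gb": 80}

response = requests.post(
    IAAS_API,
    json=server_spec,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print("Provisioned server:", response.json())
```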
In considering a cloud computing vendor, there are other factors a business leader needs to know.
Availability of the services is crucial: since cloud computing can only be accessed over a network or the
internet, the setup should be maintained so that a backup internet connection is always available. Know
your vendor's products; several cloud computing companies have a range of products that may help your
business grow further. Consider migration and transition as well: if a business is new, it is best that the
transition is done as soon as possible, with flexibility in terms of product and service availability.
How Cloud Computing Companies Can Easily Scam You
December 13th, 2011
Cloud computing may be a new term to some, but those who don't know about cloud computing
companies are living without knowledge of the modern world. Modern science has given the world a new
look, and everything has changed dramatically. Cloud computing companies are the newest buzz in the
IT sector. Cloud computing, as offered by cloud companies, may be defined as a way of accessing
different data such as documents, applications, music files, pictures and video files from anyplace around
the world. Earlier, people used memory cards or portable hard disks to store data and carried them around
to access it. Nowadays, you are free from the worry of carrying data with you: you can store any kind of
data in the cloud and access it anytime from anywhere. Because of these advantages, people are adopting
this brilliant system. It is also very helpful in business because it cuts overall business costs; all you need
is an internet connection.

Nowadays, the question is: how do we become familiar with the best cloud computing companies? There
are several things to take into account when choosing the best cloud computing company. A cloud
computing provider offering committed hosting services ought to be easy to understand. In any case, here
are some factors to help you choose the best cloud computing company. Before choosing, it is essential to
understand the service level agreement (SLA). Then you can take its customer support into account;
cloud computing suppliers have to provide sufficient customer support, and the details of that support
should be listed in the service level agreement. The billing system is another factor in choosing the best
cloud computing provider: by getting billing instructions you will know the total charge and how billing
works. Otherwise you may be scammed.
There are several excellent cloud computing companies providing brilliant facilities. Google, Akamai and
VMware are three high-class cloud computing companies in the world. GoGrid in San Francisco is
another excellent cloud computing company, and Microsoft in Redmond, RightScale in Santa Barbara,
Rackspace in San Antonio and NetSuite in San Mateo are also excellent cloud computing companies.
With such a large number of cloud computing companies, it is very important to check a provider's
background, customer service record, security, finances, as well as software requirements. These factors
will help you decide which cloud computing company is best and which will be suitable for your
organization or business-related job.
Finding the Best Cloud Computing Company for You
December 12th, 2011
Cloud computing is a new kind if storage technology, by which you can share software, data or
documents to computers as well as other devices on demand. On the other hand, it can be said that
cloud computing is one kind of form of accessing data including documents, applications, music files,
pictures, video files, and so on from anywhere around the world without carrying these in memory
cards, flash cards or hard disks. Using Cloud computing companies is very handy for any work
including business and business like works.

In your business, you have to share business related information with your partners. Again, you have to
represents some documents at any meetings with your target audiences. For performing these tasks, you
need to carry the data, documents and so forth with you. But if you are using cloud computing
companies, you dont need to carry all these things with you. You can freely move to one place to
another place. Only you have a net connection with your device.

You can easily attach all infrastructure and applications to the cloud. To access any item, you have to
dial into your cloud. It doesnt require installing on each computer. Flexibility is another benefit of
cloud computing. You can store data according to your needs and can access it anytime from anywhere.
Cloud computing is very economical to any user. For it you have to buy necessary infrastructure,
support devices and net connection. To store the additional data and information of company is a
constant pain for IT department of any recognized company. Cloud computing is very useful to any
company because there is no need to purchase additional device for storing data, documents,
information etc. It saves the money along with energy. Really, cloud computing is useful for small
business to large business.

Today, many cloud computing companies are easily available to serve anyone. But, Google, VMware,
and Akamai are three top cloud computing companies around the world. GoGrid in San Francisco is
another company that provides cloud computing services. Microsoft in Redmond , Rackspace in San
Antonio , NetSuite in San Mateo , and RightScale in Santa Barbara are high class company to provide
cloud computing facilities. Apart from these companies, there are many companies that are always
ready to offer cloud computing facilities. You can choose any company to enjoy the brilliant cloud
counting.
Why The Cloud Computing Market Is Always Growing
Nowadays, cloud computing service is one kind of cost-saving technique offered by the cloud
computing companies. It is mainly helpful to store files online. A lot of business companies dont know
the use of brilliant cloud computing technique and they have no cost saving technique in their hands.
Cloud computing is the excellent way to save the money in any business policy and best medium to
focus the objectives of the company.
Mainly, cloud computing is one kind of model to use storage space online. Many data storage modes
are available now but all of them dont allow to you to use more space. When you need to use more
space online, you have to pay additional fee for it. But some cloud computing companies allow using
enough storage space for anyone. There is no need to pay extra fee; only you have to pay for used space
that is allowed to you. Thus, clouding computing services eliminating the problem by allowing more
storage space on various sites and it can be used by several users. A lot of cloud computing companies
are offering this service. For small software company embracing cloud computing will be best.

However, cloud computing is the effective way to minimize the cost as well as to maximize the
efficiency of a company. This service reduces the cost in several ways and it is most economical than
any fixed size storage mode. On the other hand, if there is cloud computing service in your company,
you dont need to pay additional money to your technical workers needed to keep an eye on services in
the fixed space. By adopting this service in your company you can minimize the compulsion of
centralizing the storage. Really, cloud computing service acts as a feasible option for saving cost for
any Software as a Service (SaaS) companies.
The service is also flexible, since usage can be increased or decreased as needed. Cloud computing is
widely considered a new and remarkable architecture and a notable service in the IT sector. Numerous
cloud computing companies provide it; if you want to use it for your company, you can sign up online
for a small fee, typically paid monthly. In short, cloud computing has become an excellent way to
avoid the extra cost of purchasing additional online storage, and the cloud computing market is
therefore growing day by day.
Most Popular Cloud Computing Providers
December 10th, 2011
There are a number of cloud computing companies on the market now, but it is relatively difficult to
choose the best one for your business. Knowing which providers are excellent is important, because the
right one will be able to fulfill your requirements. While researching, you should check each provider
against a set of criteria. Here are some basic requirements for identifying excellent and popular cloud
computing providers:
Reputation and Reliability
The reputation and reliability of a cloud computing company are essential indicators of its excellence.
To assess them, it is important to know how long the company has been in the industry, as well as who
its clients and partners are. It is also worth consulting those partners and clients directly. In this way the
reputation and reliability of the company can be measured and evaluated.
Suitability
If your business runs well in a provider's cloud environment, that cloud computing company can be
considered a good fit for your business. A company that offers a no-obligation free trial should be
regarded as more suitable, since the trial lets you see whether your business runs well in its cloud
environment and how the company works before you make a long-term commitment.

Support along with Service Level Agreements
Support and service level agreements play an important role in making a provider popular. A cloud
provider's support commitment can be judged from how quickly it actually responds and resolves
issues; if there is a large gap between its commitments and its actual speed, it is not a good cloud
provider. If you visit the office of a cloud computing company, you should ask to see its support
department.
Safety of the Cloud
An excellent company will ensure the security of the environment, including both the business
processes and the systems. A company that offers safe and sound infrastructure at several levels is more
suitable than others. Good cloud computing companies also provide stronger security for the data
center that hosts the business, ensuring a safe environment overall.
A cloud computing provider will be excellent and popular when the aforementioned requirements
are satisfied.
http://www.utdallas.edu/~hamlen/hamlen-ijisp10.pdf


http://www.computerweekly.com/news/2240089111/Top-five-cloud-computing-security-issues

The Top 5 Security Risks of Cloud Computing

http://blogs.cisco.com/smallbusiness/the-top-5-security-risks-of-cloud-computing/

Evaluate potential providers based on their responses to these key concerns.
More and more, small businesses are moving to cloud computing, signing up with private providers
that make sophisticated applications more affordable as well as setting up their own accounts with
public social media sites like Facebook. The trend is confirmed by Microsoft in its global SMB Cloud
Adoption Study 2011, which found that 49 percent of small businesses expect to sign up for at least one
cloud service in the next three years.
Private and public clouds function in the same way: applications are hosted on a server and accessed
over the Internet. Whether you're using a Software as a Service (SaaS) version of customer relationship
management (CRM) software, creating offsite backups of your company data, or setting up a social
media marketing page, you're trusting a third-party company with information about your business and,
most likely, your customers.
Although cloud computing can offer small businesses significant cost-saving benefits, namely pay-as-
you-go access to sophisticated software and powerful hardware, the service does come with certain
security risks. When evaluating potential providers of cloud-based services, you should keep these top
five security concerns in mind.
1. Secure data transfer. All of the traffic travelling between your network and whatever service you're
accessing in the cloud must traverse the Internet. Make sure your data is always travelling on a secure
channel; only connect your browser to the provider via a URL that begins with https. Also, your data
should always be encrypted and authenticated using industry standard protocols, such as IPsec (Internet
Protocol Security), that have been developed specifically for protecting Internet traffic.
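As an illustration, a client-side check of this kind can be sketched in a few lines of Java using the standard HttpsURLConnection class from the JDK; the provider URL below is purely hypothetical and error handling is kept minimal.

import java.io.IOException;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class SecureTransferCheck {

    // Opens a connection to the cloud provider only if the URL uses HTTPS,
    // so that all traffic is protected by TLS on the wire.
    static HttpsURLConnection openSecureConnection(String providerUrl) throws IOException {
        URL url = new URL(providerUrl);
        if (!"https".equalsIgnoreCase(url.getProtocol())) {
            throw new IOException("Refusing insecure (non-HTTPS) endpoint: " + providerUrl);
        }
        HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();
        connection.connect();
        // The negotiated cipher suite can be logged for auditing purposes.
        System.out.println("Cipher suite: " + connection.getCipherSuite());
        return connection;
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical endpoint; replace with your provider's actual URL.
        openSecureConnection("https://backup.example-cloud-provider.com/api/status");
    }
}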
2. Secure software interface

Elasticity in Cloud Computing
Abstract
One of the yet unresolved challenges of cloud computing is the problem of making
an application elastic, which consists in making it adjust automatically to variations
in load without the intervention of a human administrator and without the need to
change its code.
In this project we first identified several design issues that have to be addressed
when making an application elastic: defining the granularity of the elastic components
in the application, handling their interconnections and controlling the elasticity at
execution time. We studied the approach of autonomic computing and proposed an
architecture of an elastic application manager with a well defined separation between
general concepts and application-specific parts.
We used a load injection application (Clif) as a use case. We studied the ar-
chitecture of the application and the Fractal component model it is based on and
implemented an elastic manager. In an experiment with a performance evaluation of
a web application we have shown that the approach is feasible and working.
Keywords: Cloud computing, autonomic computing, elasticity, Fractal, Clif


Contents
I Introduction 1
1 Context of Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 Cloud Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
3 Application Elasticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
4 Autonomic Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
5 Subject of Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
II State of the Art 5
1 Commercial Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2 Research Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
III Elastic Application Management 13
1 Problem Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.1 Component Granularity . . . . . . . . . . . . . . . . . . . . . . . 14
2.2 Component Elasticity . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3 Interconnections . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4 Elasticity Control . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3 Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.1 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.2 Application Map . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.3 Application-specific Part . . . . . . . . . . . . . . . . . . . . . . . 20
4 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
IV Use Case 27
1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2 Load Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3 Clif . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.1 Presentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.2 Component Model . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.3 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4 Elastic Clif . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.1 Specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.2 Application Architecture . . . . . . . . . . . . . . . . . . . . . . . 31
4.3 Execution Environment . . . . . . . . . . . . . . . . . . . . . . . 31
4.4 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.5 Elastic Observer . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.6 Stopping the Application . . . . . . . . . . . . . . . . . . . . . . 35
V Evaluation 37
1 Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3 Observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
VI Conclusion 41
Chapter I
Introduction
1 Context of Work
The work presented in this report was realized as part of the project SARDES1.
SARDES is a project and a research team of the LIG laboratory located at INRIA
Grenoble-Rhône-Alpes.
The research domain of SARDES is distributed systems - systems composed of
spatially distributed machines interconnected by a network. The main research
interest is the construction of MultiScale Open Systems - distributed systems
that can scale from resource-constrained embedded systems to cloud-based systems.
Such systems are known for being highly complex, dynamic and heterogeneous.
It is often impossible for an administrator to keep track of their current state and
do their administration efficiently. The aim is to develop self-managed software in-
frastructures which would be able to operate autonomously without the need for an
external administrator.
Within SARDES, the subject of work was to provide a solution to manage the
scalability of the applications deployed in cloud environment through a particular
technique called elasticity. The work was done in collaboration with Orange Labs
(Bruno Dillenseger).
2 Cloud Computing
Cloud computing platforms have received a great deal of attention in the business
world in recent years. The main motivation for companies to consider transferring
their existing systems or creating new ones on top of a cloud platform is the flexibility
they promise to provide and their pricing model.
The fundamental difference between a classical approach with a fixed infrastruc-
ture and cloud computing can be illustrated on an example of a startup company
launching a new service on the web.
With the fixed infrastructure approach, the company first has to estimate the
amount of hardware it will need to provide the service with a required level of quality
to the estimated number of customers. Then it has to acquire the hardware (by buying
or by renting), install the application and start providing the service. However as the
number of clients is often difficult to predict and changes significantly in time (e.g.
1. SARDES: System Architecture for Reflective Distributed Environments
Figure I.1 Comparison of classical scalability and elasticity approaches: In the
first case the system has been enlarged to a capacity corresponding to the expected
maximum load. For most of the time its capacity is not used, while at the peak
the load is higher than the expected value. In the second case, the system reacts
dynamically to the change in the load, adapting as needed and leaving just a small
part of its capacity unused.
a publication of an article about the company on a highly visited news server can
multiply the number of visitors in a short time period), the company risks that it
will either buy more hardware than necessary and pay unnecessary costs, or that the
hardware will not be sufficient, in which case the quality of the service will be low
and will discourage potential clients.
Cloud computing claims to provide a solution to this problem: Instead of buying
a hardware and building its own infrastructure, the company rents capacity from a
cloud provider and only pays for the time it is actually used. The cloud computing
provider runs huge data centers with hundreds of servers and is thus able to pay lower
prices for the hardware and lower operating costs. According to a study by Armbrust
et al. [8], the cost savings can be by a factor of 5 to 7. Instead of paying a high fixed
cost of installation and subsequent operating costs, the company would only pay the
operating costs.
Another advantage of cloud computing is the elasticity it can provide to
applications. In a huge data center, new resources can be allocated and assigned to
clients in a short time. The startup company could flexibly allocate new resources
when the number of visitors augments and later release these resources when they
are not needed anymore. This way it could prevent the deterioration in the quality
of service and save money on not paying for non-used infrastructure.
3 Application Elasticity
The variations in load pose a specific challenge on applications in distributed en-
vironments. In the previous section we presented an example with a web application,
Figure I.2 Self-configuration control loop
however such variations can happen in virtually any kind of application. A tradi-
tional solution to this problem has been to ensure scalability - the ability of the
system to be enlarged to a size which is expected to accommodate future growth.
The disadvantage of such an approach is that the size of the system has to be esti-
mated in advance. If the growth in the load does not correspond to the estimation,
the assigned capacity is not used effectively. Another disadvantage is that scaling up
implies extending the physical resources, and it may be difficult to scale down after
a scale up.
A possible solution to the problem is to ensure elasticity. Elasticity aims to solve
the problem in an opposite direction - instead of setting the target physical size of
the system in advance, the system dynamically reacts to actual load by adding new
virtual resources. When the load on a component increases over a given limit, a new
instance of the component is added to accommodate the growth. If on the other hand
the load decreases, an unnecessary instance can be removed. Figure I.1 illustrates the
difference between the two approaches.
4 Autonomic Computing
The presented problem falls into the category of problems of autonomic computing
[14]. In general, autonomic computing can be explained as an effort to develop
self-managed complex software systems that would have the following characteristics:
Self-configuration - automatic configuration of components
Self-healing - automatic discovery and correction of faults
Self-optimization - automatic provisioning of resources
Self-protection - automatic identification and protection from attacks
The problem of application elasticity is thus a self-optimization problem.
Autonomic computing makes use of autonomic managers which implement feed-
back control loops. These loops regulate and optimize the behavior of the managed
system. Figure I.2 presents such a loop in a self-optimization manager. The self-
optimization manager is a component external to the managed system. It consists of
a sensor which periodically checks the actual state of the system, a decision module
which decides whether an action has to be taken and an actuator which modifies the
system. The manager does not have detailed knowledge of the system it only works
with a simplified model.
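The following minimal Java sketch illustrates the shape of such a self-optimization control loop; the Sensor, Decision, Actuator and AutonomicManager names are purely illustrative and do not correspond to any existing framework.

// Illustrative skeleton of a self-optimization control loop:
// a sensor observes the managed system, a decision module compares the
// observation with the desired state, and an actuator applies the change.
interface Sensor<S> {
    S observe();                       // read the current state of the managed system
}

interface Decision<S, A> {
    A decide(S observedState);         // choose an action, or null for "do nothing"
}

interface Actuator<A> {
    void apply(A action);              // modify the managed system
}

class AutonomicManager<S, A> implements Runnable {
    private final Sensor<S> sensor;
    private final Decision<S, A> decision;
    private final Actuator<A> actuator;
    private final long periodMillis;

    AutonomicManager(Sensor<S> sensor, Decision<S, A> decision,
                     Actuator<A> actuator, long periodMillis) {
        this.sensor = sensor;
        this.decision = decision;
        this.actuator = actuator;
        this.periodMillis = periodMillis;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            A action = decision.decide(sensor.observe());
            if (action != null) {
                actuator.apply(action);   // only act when the decision module requires it
            }
            try {
                Thread.sleep(periodMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}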
5 Subject of Work
Our main research objective was to provide a general solution for the management
of application elasticity in the context of cloud environment. Particularly, we aimed
to find a solution which would:
Provide reactive ways of scaling up and down
Minimize the impact of implementation of elasticity on the way the applications
are programmed
In order to show that the concept is practically feasible, the goal was to implement
an elastic manager, which would be executed as a separate application and make an
existing application elastic without modifying its source code. We used the Clif load
injector as a use case.
Chapter II
State of the Art
This chapter first overviews several existing cloud computing platforms and de-
scribes their approach to application elasticity. In the second part, several research
projects dealing with the issue are presented. The third part summarizes our obser-
vations.
1 Commercial Platforms
Existing commercial platforms can be divided into three classes according to the
layer at which the platform interfaces with the application (see Figure II.1):
Figure II.1 Cloud computing layers
Infrastructure as a Service (IaaS) - the service provider provides basic infras-
tructure, typically just empty virtual machines. A client has to install his own
operating system and applications.
Platform as a Service (PaaS) - the service provider provides a platform on
which clients' applications have to be developed and executed. The client does
not care about the administration of the underlying hardware and operating
system; he only has to provide applications.
Software as a Service (SaaS) - the whole stack is provided by the service
provider. A client is not requested to do any administrative tasks and only
pays for usage.
As we can see the services in the lower layers require the clients to handle more
administrative tasks, while giving them more freedom in the choice of software and
programming models to use. Services in the higher levels make administration easier,
but restrict the variety of applications that can be run.
We have selected three commercial cloud computing platforms to illustrate dif-
ferent approaches to cloud computing and their ability to provide elastic services. In
the subsequent sections, we always first provide a general overview of a platform and
then evaluate its suitability for elastic applications.
Amazon EC2
Amazon EC2 [1] is the only infrastructure as a service in the comparison. A client
uploads a virtual image of his system and runs it on Amazon servers. The client either
provides the whole virtual machine with the operating system and applications or
chooses one of the prepared images.
For each instance, the client chooses the type (which defines the computation ca-
pacity in number of CPUs, data storage capacity etc.) and one of the three purchasing
options: On-demand instances are flexible and are paid by the number of hours they
ran. Reserved instances guarantee capacity during all the time. Spot instances allow
the client to set the maximum value he is willing to pay per hour. Depending on the
current usage of the cloud, the price per hour changes. When the price is lower than
the threshold, the instance is executed; when it gets higher, it is terminated.
Amazon EC2 provides an auto-scaling mechanism for application instances. It
works in the following way: The user defines lower and upper thresholds on CPU,
memory or network usage of the application instance and a time period (in the number
of minutes) over which the value is evaluated. If a threshold is exceeded, the number
of instances is adjusted (new instance started or a running instance terminated). The
time of reaction is at least one minute plus the time needed to start a new instance.
An Elastic Load Balancer is provided at the TCP layer to distribute the load over
the running instances.
Amazon EC2 provides two options for storing data: Elastic Block Store is an
unformatted block device with capacity from 1 GB to 1 TB. It can be used as a block
device for virtual images or be mounted as a standard block device to store data files.
Amazon Simple Storage Service (S3) is a scalable data object storage. User can store
data objects of size from 1 byte to 5 GB identified by a unique key and retrieve them
afterwards. It provides REST and SOAP based interfaces.
There are two database options: SimpleDB is a scalable non-relational data store.
It stores items described as attribute-value pairs organized in sets called domains.
It provides a simple API to modify the attributes and values and to retrieve data
using a Select command. The Select only works on one domain. Relational Database
Service (RDS) is a MySQL 5.1 compatible database. For the database instance, the
client chooses the instance type (sets the computation capacity and storage capacity).
RDS can create Read replicas of a database instance.
Amazon EC2 is an IaaS service, thus it enables the client to run applications
based on an arbitrary platform with any specific settings. Applications can be scaled
automatically, the unit of scalability is the whole instance and the reaction time is in
the order of minutes.
Microsoft Windows Azure
Azure [5] is a platform as a service offering from Microsoft. It is based on already
established Microsoft products - the Windows operating system, the .NET platform
and the SQL Server - and aims to enable easy transfer of existing applications to the
cloud.
The cloud provides the operating system, runtime environment and distributed
database. The client is only required to upload and configure its application. The
application is called a service and consists of one or several roles. Each role can be
run in one or several instances. There are two types of roles: the web role is optimized
for web application programming supported by IIS 7 and ASP.NET, while the worker
role is intended for general application development and might offer background
processing for a web role.
Services are developed, compiled and packaged using Windows Azure Tools, which
is an extension to Microsoft's standard development tools (Visual Studio or Visual
Web Developer). A developer can use any programming language supported by the
.NET platform and use the .NET Framework 3.5 library. The service package is
accompanied by a configuration file, which specifies the roles, the number of instances
of each role to run and the type of hardware configuration.
It is possible to execute several instances of one role these instances are auto-
matically grouped into upgrade domains. It is possible to change configuration of in-
stances and restart the instances by upgrade domains (i.e. the operation of start/stop
affects the whole domain). This can be done on-the-fly as long as there are no new
roles added to the application. A client can carry out these changes either via a web
interface or a REST API. There is a diagnostic API which a client can use to get
runtime information about the instances (number of requests, processing time, cpu,
disk, memory, network usage,...). However there is no tool to modify the instance
settings automatically. It is only possible to set up e-mail or instant messaging noti-
fications and make the changes manually using the REST API. The billing window on
running time of an instance is one hour.
Azure storage services can store binary large objects, queues or tables and provide
access via a REST API. Microsoft also offers SQL Azure. It is a relational database
based on the SQL Server and provides the same interface, so it is possible to use
the whole set of SQL Server features (tables, views, procedures, triggers, ...). The
database is replicated in the data center and data recovery in case of failure is assured
by the provider. The size of one database is limited to 10 GB, a client account can
contain several databases. Microsoft encourages the clients to use scale-out on the
databases - horizontal scaling by dividing data into several databases and accessing
them in parallel. This way it can better distribute the database workload in its
datacenters; however, it is a complex issue to implement on the client's side,
as the whole logic of increasing and reducing the number of databases, distribution
of data etc. has to be handled by the application.
Azure is based on .NET platform so it is possible for the developer to use the
whole range of existing libraries and development tools. It does not provide any
built-in support for automatic scaling of applications. It is not possible to change the
number of running instances on-the-fly. If the number of instances changes, the whole
upgrade domain has to be restarted.
Google App Engine
Google App Engine [4] is a platform as a service offering from Google. It claims
to be written as language-independent, however at the current time, only support for
Java and Python is provided. The applications are isolated from the running platform
and can only access it using the provided API. Java applications can only use a
limited subset of the JRE (and they are not allowed to create new threads), Python
applications can use frameworks supporting WSGI (such as Django, CherryPy, ...).
There are special APIs for the platform functions: URLfetch for HTTP access to
the outside world, mail, XMPP, image manipulation, memcache. Access to the file
system is read-only.
The App Engine is optimized for web applications - an application can be in-
voked by an HTTP/HTTPS call or as a cron job. There is no support for stand-alone
applications. Once invoked, the application has a maximum of 30 seconds to complete
its execution and return the response.
Java applications are deployed as servlets with a deployment descriptor XML
file. Python applications are deployed as a batch of source files with a configuration
in a YAML file. A client cannot select hardware for the application instance - all
applications run in a uniform environment and are regulated by quotas. There are two
types of quotas: billable quotas are set by the user depending on his budget and are
application-specific. Fixed quotas are set by the App Engine and cannot be exceeded
by any application in the cloud. These quotas cover CPU usage, bandwidth used,
number of e-mails sent and the amount of data in the data storage. Furthermore, there
are per-minute quotas on CPU time, number of requests and bandwidth. A client
cannot change the per-minute quota as it is defined by the App Engine. The number
and location of executed instances of the application is determined automatically;
it can be seen in the administration console, but cannot be changed by the user.
For storing the data, Google App Engine provides the Data Store - a non-
relational, schemaless database. It stores items described as property-value pairs
in tables. The entity corresponds to an object and the name of the table is the name
of the class. There is a simple query language GQL with a SELECT command which
only works on one table. There is no support for joining the tables, one-to-many and
many-to-many relations can be emulated using a ReferenceProperty mechanism. The
entity size is restricted to 1 MB, number of entities affected by a PUT and DELETE
commands is restricted to 500.
A blob store is able to store binary objects identified by unique keys of size up to
2 GB.
Google App Engine provides an easy administration of the application, as the
execution details are hidden from the user. It is not possible to implement a wide
variety of applications. Stand-alone applications are not supported, only a subset of
the JRE can be used, there is no other network connection with the outside world
than HTTP/HTTPS. The quota system is restrictive and does not allow for high
elasticity: the fixed quotas cannot be changed and the per-minute quotas prevent the
application from flexibly reacting to short periods of high demand.
2 Research Projects
We have selected several research projects which solve an issue similar to elasticity
or which could provide some inspiration in the way they handle the management of
a distributed application.
BOINC [7] is an infrastructure for public-service computing. The purpose is for
scientists to implement projects which require a large amount of computational resources
and for volunteers connected via the Internet to participate in their computation.
A project is divided into a set of workunits - each workunit represents the inputs to
a computation. It has parameters such as compute, memory and storage requirements
and a soft deadline for completion. A result is a set of output files for a given workunit.
A client registers for a project and connects to the project server. Here it downloads
a workunit, computes individually the result and sends it back to the server.
Because the clients are not reliable, can connect or disconnect at any time and may
even provide invalid results, BOINC introduces the concept of redundant computing
- the same unit is sent to several clients and the results they provide are compared
to verify that they are correct.
BOINC is an established platform with more than 300 000 users, which proves
that such a system is feasible at a large scale; however, the scope of applications is by
definition limited to problems whose input can easily be divided into small parts and
processed independently.
ASKALON [13] is a system used to develop and port scientific applications as
workflows in the Austrian Grid project. A user composes Grid workflow applications
at a high-level of abstraction using an XML-based language (AGWL). The AGWL
representation of a workflow is then given to the middleware services (run-time sys-
tem) for scheduling and reliable execution. In AGWL, a workflow application is com-
posed from atomic units of work called activities interconnected through control flow
and data flow dependencies. Activities are represented at two abstract levels: activity
types and activity deployments. An activity type is a simplified abstract description
of functions or semantics of an activity, whereas an activity deployment refers to an
executable or a deployed Web service and describes how they can be accessed and
executed on the grid. The control flow constructs include sequences, directed acyclic
graphs, for, while, do-while loops and parallel activities. Basic data flow is specified
by connecting input and output ports between activities.
ASKALON middleware provides the following set of services: The resource man-
ager is responsible for negotiation, reservation and allocation of resources, as well as
automatic deployment of services required to execute grid applications. The enactment
engine service targets reliable and fault-tolerant execution of workflows through tech-
niques such as checkpointing, migration, restart, retry and replication. Performance
analysis supports automatic instrumentation and bottleneck detection (e.g. excessive
synchronization, communication, load imbalance, inefficiency, non-scalability). The
performance prediction focuses on estimating execution times of workflow activities
through training phase and statistical methods. The scheduler is a service that de-
termines effective mappings of single or multiple workflow applications onto the grid
using graph-based heuristics and optimization algorithms on top of the performance
prediction and resource manager services.
ASKALON solves the problem of deployment of workflow applications with a
known structure in a known environment. It uses a prediction system which is opti-
mized for scientific applications, where the behavior is predictable and benchmarking
runs can be executed at the beginning. The topology of the applications is fixed at
the beginning; there is no room for elasticity.
Demberel et al. [11] focuses on an application which uses server resources from
a shared computer infrastructure opportunistically. An external controller launches
application functions based on a knowledge of what resources are available from the
cloud, their cost, and their value to the application through time. The application
has to be able to adapt and use the assigned resources as efficiently as possible.
A special case of an application is covered - a server performance measurement
tool which runs a set of experiments on servers while setting different configuration
parameters and evaluates their performance. Based on the measurements of already
processed experiments it estimates the needs of the remaining experiments and uses
a greedy heuristics to schedule the ones that would most efficiently use the available
resources.
Such an approach only works with applications where it is easy to estimate the
future costs of tasks based on the observations of previously run similar tasks.
Lim et al. [15] addresses elastic control for multi-tier application services that
allocate and release resources in discrete units, such as virtual server instances of
predetermined sizes. It focuses on the elastic control of the storage tier, in which
adding or removing a storage node requires rebalancing stored data across the nodes.
The target environment is an elastic guest application hosted on server instances
obtained on a pay-as-you-go basis from a cloud provider. The application has defined
a Service Level Objective (SLO) that characterizes the target level of acceptable
performance, typically a maximum acceptable response time for a web application.
The purpose of the elasticity is to grow and shrink the set of active server instances as
needed to meet the SLO under the observed or predicted workload.
The controller process collects the inputs from sensors, analyzes them and drives
actuators, which control the application. The actuators operate in a discrete way, i.e.
they can only add or remove one whole instance of a resource. The storage tier is a
distributed service that runs on a group of service instances provisioned for storage. It
exports an API which allows a newly acquired instance to join the group of instances
or an arbitrary instance to leave a group. When the number of instances changes, the
storage engine redistributes (rebalances) the data.
The horizontal scale controller is responsible for growing or shrinking the number
of storage nodes. When the average CPU utilization on instances exceeds a maximum
threshold, a new instance is added. When the average CPU utilization gets lower than
a minimum threshold, an instance is removed. This way the average CPU utilization
is kept in the limits which assure efficient usage of the resources. The horizontal scale
controller is responsible for controlling the data transfers to rebalance the storage
after adding/removing an instance. It calculates the amount of bandwidth allocated
for the rebalancing operation as a tradeoff between a low bandwidth (which causes
the operation to run unacceptably long) and a high bandwidth (which consumes
application resources and increases its response time).
The proposal seems to be a feasible solution for storages which are able to rebalance
in a reasonably short time. If the storage is big and the operation takes a longer
time, the system might not be able to react fast enough to an increase in the number
of requests.
3 Summary
In this chapter we have presented several commercial cloud platforms. We have
seen that Amazon EC2 [1] provides a mechanism for automatic scaling of an applica-
tion. The granularity of such a scaling is set as a whole virtual machine. Amazon does
not provide any automatic reconfiguration or data redistribution among the copies
of the same machine. This has to be assured by the programmer.
Microsoft Windows Azure [5] can create several instances of one role and provides
an interface to do it remotely. Unlike Amazon, it does not provide a controller to
invoke the replication automatically. Nor does it provide any way to reconfigure the
application and redistribute data.
Google App Engine [4] does not allow any replication of application components.
The quota system is restrictive and does not allow for a high elasticity. The fixed
quotas cannot be changed and the per-minute quotas prevent the application from
flexibly reacting to short periods of high demand.
BOINC [7] and ASKALON [13] provide management systems for specific classes
of applications. BOINC handles applications which can be divided into small worku-
nits and processed independently to compose the final result. ASKALON supports
applications which can be described as a set of components with interconnecting data
flows. Both these systems use benchmarking to predict the behavior of application
components and available resources.
The same holds for Demberel et al. [11], where an external manager assigns ap-
plication components to resources based on the prediction of their needs.
Lim et al. [15] uses the concept of a Service Level Objective. Based on the current
performance, the management system decides when a component needs to be repli-
cated and handles application reconfiguration and data redistribution. The scope of
the paper is however limited to storage applications.

Chapter III
Elastic Application Management
The aim of the project was to study the management of the lifecycle of an elastic
application. In the first section of this chapter we state the problem. In the second
section we give a detailed analysis. In the third section we explain the design decisions
we made and the architecture of our solution. In the fourth section we describe how
the solution was implemented. In the fifth section an example application is presented.
1 Problem Definition
In the context of our study, an application can be virtually any software system
composed of several components. These components are defined at a certain level of
abstraction: single objects, running program instances, virtual machines or even a
set of physical machines.
The architecture of the application is known and can be precisely described.
The application is composed of components and their interconnections. The architecture
description defines in which order the components and interconnections should be
started. Some of the components can be elastic, which means that several instances of
the same component can be created and their number changed dynamically during the
execution of the application. Interconnections with other components should be able
to reconfigure themselves accordingly. Some of the components can be dynamic, which
means that they can be started/stopped during the execution of the application.
Elastic components are particular cases of dynamic components.
An elastic manager is a program which controls the whole lifecycle of the elastic
application. It has to assure the execution of the following phases of the application:
Starting - Create the components and interconnections in the order defined
by the application architecture
Execution - Observe the execution of the application and add or remove
instances of elastic components according to the actual load, start or stop
dynamic components
Stopping - Destroy the components and the interconnections
The elastic manager is composed of two parts:
Application-independent core - Implements the functions which are generic
for all elastic applications
Application-specific code - Implements parts of the management which are
specific for the application, such as creation of the components, configuration
and reconfiguration of specific types of interconnections, etc.
The aim of this research was to identify the requirements on such an applica-
tion manager, evaluate different levels of abstraction at which it could operate and
the consequences of this choice on the amount of code which could be implemented
as application-independent and the code which would have to remain application-
specific.
The expected output was a proposition of an architecture of such a manager and
a demonstration of the feasibility of the concept with an example implementation.
2 Analysis
In this section we analyze several issues in the conception of an elastic manager
and present different design choices which have to be taken into consideration. Namely
we will present the choice of the component granularity, issues in making a component
elastic, different ways of interconnecting the elastic components and the control of
the elastic application.
2.1 Component Granularity
An important notion in the problem description is the concept of a component.
Following the concept which is used in commercial cloud computing and which was
presented in Section 1, the components could be defined at one of the following layers:
Application - A component would correspond to a set of object instances in
a programming language
Platform - A component would be an instance of a running application
Infrastructure - A component would be a whole virtual machine
Handling the elasticity at each of the levels of abstraction would have several
advantages and disadvantages.
Application layer would provide the most detailed description of the application
and allow optimizing the performance at the level of the smallest components. How-
ever, the complexity of such a description would easily become very high as it would
require defining precisely which components should be elastic and the replication
mechanism for each of the components and interconnections. Also such a solution
would be restricted to applications running on one specific application platform.
The platform layer would operate with a higher-level description of the application - the
model would become less complex and easier to define for a developer. However, even
in such an approach some requirements on the way the application is implemented would
remain. In order for a manager to be able to operate and reconfigure the components,
it would still have to follow the restrictions of the software platform used.
The infrastructure layer would give the most freedom to the developer. The appli-
cation components would be whole virtual machines running arbitrary software
platforms. Such a solution on the other hand does not allow for a very detailed
description, as the virtual machines are quite large execution units which cannot be
added or removed as fast as single applications or even single object instances.
Figure III.1 - Three tier web application architecture
As we can see the advantages and disadvantages of the different choices follow
the same pattern as in the cloud computing platforms. There is no single answer to
the question which decision is the best as each of them might be suitable for different
type of application.
2.2 Component Elasticity
With component elasticity we understand the problem of introducing a new in-
stance of an existing component to a running application. If the application is to be
elastic, it should be possible to insert or remove such an instance without affecting
the rest of the application, that is, without the need to stop all the other components.
A component of an application defined at any layer consists of two distinct parts:
the execution code and the internal state. We suppose that the execution code for
each of the components is available and does not change during the execution of the
application. This is however not the case with the internal state. With respect to the
internal state, two instances of the same component can be either independent or
they might need to share at least part of their internal state.
Independent instances are easier to manage. It is sufficient to start a new instance
without even notifying the existing ones. An example of such an instance is a web
server which only serves static content or generates dynamic pages (and does not
support HTTP sessions) in a three tier web application architecture (see Figure III.1).
Dependent instances are much more difficult to manage. First it has to be deter-
mined which part of the internal state should be shared and which is instance-specific.
In some cases it is easy to determine. For instance if the web servers in the above
mentioned architecture had to support HTTP sessions, the only shared state informa-
tion would be the data associated with sessions. The other state variables associated
with the generation of dynamic pages would remain with the instances. However,
if we consider an arbitrary application component, such a distinction might not be
obvious.
Once the shared state is determined, the sharing mechanism has to be imple-
mented. There exists a variety of mechanisms from a shared memory or message
passing interfaces to more complex ones. As can be seen in our example, in some
cases a simple solution might work (a shared memory would be a feasible way of
sharing session-related data between webservers), however in other cases, much more
complex mechanism would have to be implemented (as in case of a replicated database
in our example).
There is not one best way of implementing component elasticity and the choice
of the mechanism will always depend on the specific application needs.
2.3 Interconnections
In order for the application to work, the application components need to exchange
information. In our abstract concept, any communication channel between two com-
ponents is called interconnection. Such an interconnection can be realized using an
arbitrary mechanism, such as direct function call, shared memory, message passing
interface, RPC, HTTP protocol etc.
Let us consider a successfully deployed application composed of components and
their interconnections. When introducing new instances of an existing component,
the already established connections have to be reconfigured in such a way that:
the communication between existing components is not affected
no data is lost in the application
new instance of the component starts receiving messages in the same way as
other instances of the component and gets integrated to the application in as
short a time as possible
While these requirements can be fulfilled by a proper implementation of the re-
configuration mechanism, there is one issue which remains conceptual and requires
cooperation of the application developer. It is the problem of the cardinality of in-
terconnections. An interconnection might be of one of the following types:
1:n
Figure III.2 1:n connection
This is a connection between one component and several instances of the other
component (Figure III.2). In order to establish such a connection, the communication
interface has to support such a configuration and a semantic of the communication
between the static component A and the elastic component B has to be defined.
One possible semantic is that a message from component A will be delivered to both
instances B1 and B2 (i.e. in case of updating configuration of two application servers).
Another semantic is that the message is delivered only to one of the two instances
(in a similar way as by a load balancer). This behavior might also be different for
different messages. It is not possible to decide which mechanism to use without a
detailed knowledge of the specific application.
m:n
Figure III.3 - m:n connection
The establishment of communication channels between components in case of m:n
connection (Figure III.3) is even more complex as it might not depend only on the
knowledge of a sending component but on the state of the whole system. Figure III.4
shows some of the many configurations an interconnection between two components,
each in 4 instances, can be established. Each of them might be valid in some case
in some application and the decision again needs to be made by the application
developer.
Figure III.4 Different ways of establishment of communication channels in an m:n
connection
2.4 Elasticity Control
As stated in Section 1 an elastic manager is a program which controls the exe-
cution of the elastic application. Such a manager operates from outside of the appli-
cation - the manager knows the architecture of the application and can manipulate
its components and connections, while the application is not aware of the presence
of the manager.
As we already stated, the elastic manager has three tasks: start the application,
control its execution and stop the application. In this document we focus on the
control part.
To control an elastic application means to:
determine when to increase or decrease the number of instances of an elastic
component
realize the operation
As we have already presented in Chapter I, the main motivation for introducing
elasticity is to improve the performance of an application by increasing the capacity of
saturated components. The question of determining when a component is saturated
and one more instance needs to be added cannot be answered in the same way for
all applications. The definition of saturation depends on the specific component and the
specific application. As an example, it might be feasible to use CPU usage as the metric
to evaluate saturation of a web server: as the number of requests per second increases,
the CPU usage increases until at a certain point the server becomes saturated and a
new instance should be added. In some other application, a limit on the number of
threads executed in parallel can be imposed, etc.
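As an illustration, one possible saturation rule of this kind can be expressed as a small Java decision function; the threshold values and the ThresholdPolicy name are purely illustrative and would have to be tuned for the specific application.

// One possible saturation rule: compare the average CPU utilization of the
// instances of an elastic component against an upper and a lower threshold,
// while respecting the minimal and maximal number of instances.
class ThresholdPolicy {
    private final double upperCpuThreshold;   // e.g. 0.80 -> add an instance
    private final double lowerCpuThreshold;   // e.g. 0.30 -> remove an instance
    private final int minInstances;
    private final int maxInstances;

    ThresholdPolicy(double upper, double lower, int min, int max) {
        this.upperCpuThreshold = upper;
        this.lowerCpuThreshold = lower;
        this.minInstances = min;
        this.maxInstances = max;
    }

    /** Returns +1 to add an instance, -1 to remove one, 0 to do nothing. */
    int decide(double averageCpuUsage, int currentInstances) {
        if (averageCpuUsage > upperCpuThreshold && currentInstances < maxInstances) {
            return +1;
        }
        if (averageCpuUsage < lowerCpuThreshold && currentInstances > minInstances) {
            return -1;
        }
        return 0;
    }
}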
The operation of creating or removing an instance of a component has to be
implemented according to the layer of abstraction the components correspond to
(see Section 2.1). There is a significant difference between the time and resources
needed to create a new component at the different layers. Creating a new object in
memory might only take a few milliseconds, while starting a new virtual machine can
take several minutes. This also has to be taken into account when implementing the
elastic controller.
Again, in this case we have identified some issues which are application-specific
and for which there is no universal solution.
3 Design
3.1 Architecture
In Section 2 we have analyzed the problem and identified the boundaries between
the part of the system which can be used for an arbitrary elastic application and the
part which needs to remain application-specific. We have seen that the application-
specific part has to handle several relatively complex features, while the general part
only handles the essential abstract notions of components and connections. We have
also presented in the Introduction (Section 4) the notion of the self-configuration
control loop.
These observations could be directly used to design an elastic manager. In or-
der to make the usage of the manager more convenient, we have decided to use a
modular design and support several additional features. Firstly, we wanted to make
the management of elasticity independent of the process of starting the application,
that is, to make it possible to start an application and attach a controller of elasticity
later if needed. Secondly, we wanted to separate the part responsible for adding and
removing components from the decision-taking part to make it possible to implement
several decision logics for the same application.
Figure III.5 Components of the elastic manager
Our final design is composed of four interconnected servers (Figure III.5):
Starter - Creates components and connections of the application
Configurator - Stores the application configuration
Controller - Creates dynamic components and changes the number of in-
stances of elastic components
Observer - Observes the runtime state of the application
In order to start the application, the configurator server has to be running. The
starter server then reads the application description, creates the components and
connections, transmits the information about the application architecture to the con-
figurator and terminates.
To make the application elastic, first a controller server has to be started. It
connects to the configurator and retrieves the application architecture. Second, an
observer server connects to the controller, observes the runtime state of the appli-
cation and asks the controller to adjust the components when needed.
The described architecture is an implementation of the self-control loop, where
the sensor and decision parts of the loop are assured by the observer server, while the
actuator part is assured by the controller. The system model (the current application
configuration) is stored at the controller and retrieved by the observer. (Figure III.6)
3.2 Application Map
In order to start the application, the Starter component needs a description of the
architecture. This description in our design is provided in the form of an application
map - a definition of components and their interconnections. In Section 1 we have
already established three different kinds of components. In the design stage of the
development of the elastic manager, we specify the components in more detail.
The description of a component provided by an application developer contains
the following information:
Figure III.6 Self-configuration control loop within the elastic manager
Id - a unique component id
Type - type of the component, one of the following:
Static - the component can only be started at the beginning and destroyed
at the end of the application lifetime
Dynamic - the component can be started/stopped during the execution of
the application
Elastic - the component can exist in several instances (initial, minimal and
maximal numbers can be set) and the number of instances can change during
the execution of the application
Startup flag - determines whether the component will be started on
application startup
Stage - the number of the startup stage at which the component will be created
The description of a connection contains the following information:
Id - a unique connection id
Type - an application-specific type of the connection
ComponentA - id of the source component it interconnects
ComponentB - id of the target component it interconnects
As we can see the application is described at a high level of abstraction. The
application description can be extended with other information as required by the
application-specific part of the implementation (see the next subsection), namely the
type of the connection will always be defined by the needs of the application.
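To make the description more concrete, the component and connection descriptions could be represented as plain Java objects along the following lines; the ComponentDesc and ConnectionDesc class names are illustrative, and only the listed attributes come from the description above.

// Sketch of the application map as plain Java objects. The field names follow
// the attributes described above; the class names are illustrative.
enum ComponentType { STATIC, DYNAMIC, ELASTIC }

class ComponentDesc {
    String id;                 // unique component id
    ComponentType type;        // static, dynamic or elastic
    boolean startup;           // started at application startup?
    int stage;                 // startup stage at which the component is created
    int minInstances;          // only meaningful for elastic components
    int maxInstances;
    int initialInstances;
}

class ConnectionDesc {
    String id;                 // unique connection id
    String type;               // application-specific connection type
    String componentA;         // id of the source component
    String componentB;         // id of the target component
}

class AppMapSketch {
    java.util.Map<String, ComponentDesc> components = new java.util.HashMap<>();
    java.util.Map<String, ConnectionDesc> connections = new java.util.HashMap<>();
}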
3.3 Application-specific Part
In order to make the elastic manager operational with a given application, the
application developer has to provide an implementation of the following features:
Starter and Controller servers - the following functions:
startComponent(component) - create instance(s) of a component as specified
in the application description
stopComponent(component) - destroy instance(s) of a component
reconfigureComponent(component) - adjust the number of instances of an
elastic component by creating new ones or removing existing ones
bindConnection(connection) - connect components with a connection as spec-
ified in the application description
unbindConnection(connection) - disconnect components by removing a con-
nection
reconfigureConnection(connection) - adjust a connection which ends in an
elastic component to the modified number of component instances
Observer server - a control loop which will periodically check the status of the
application (the sensor part), decide which action to take and send a request
to the controller (the decision part)
Configurator server does not require any application-specific code.
It should be noted that the specification of the functions is given at a high level of
abstraction. It does not specify any particular behavior with respect to different types
of components and connections. It is up to the developer to define how the imple-
mentation will handle different choices in designing an elastic application presented
in Section 2.
Particularly, the presented architecture does not impose any level of component
granularity and it is a developer's decision whether the startComponent function
will actually start a virtual machine, create an object instance or do something else.
The architecture does not specify any concrete way to handle elastic components
and different cardinalities of interconnections between them. When a number of in-
stances of an elastic component is requested to change, a reconfigureComponent
function is first called. This function obtains a reference to an object describing
the component - its type, configuration parameters and references to all existing
instances. The reconfigureComponent function will create the new instance (or
remove an existing one) and notify and reconfigure the other instances if needed.
Once this is done, the reconfigureConnection function is called for each con-
nection going from or to the reconfigured component. The function obtains as an
argument a reference to an object describing the connection - its type, configura-
tion parameters and source and target component descriptions. Based on the needs of
the specific application, the function will establish actual connections between com-
ponents instances and handle the 1:n and m:n connections. This function can also
be used to execute on components functions that have to be executed after other
components are created and the connection established.
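As an illustration of this flow, a hedged sketch is shown below of how the general part might drive the two functions when the instance count of an elastic component changes. The AppMap interface and the way connections are looked up are assumptions for this example (the descriptor types are those sketched in the previous subsection).

// Hypothetical read-only view of the application map, used only by these sketches.
interface AppMap {
    java.util.List<ComponentDesc> components();
    java.util.List<ConnectionDesc> connections();
    int maxStage();  // highest stage number appearing in the map
}

// Sketch: change the number of instances of an elastic component, then adapt
// every connection whose source or target is that component.
class ReconfigurationDriver {
    static void changeInstances(ComponentDesc component, AppMap map, AppSpecificController impl) {
        // 1. Application-specific code creates or removes an instance and
        //    notifies/reconfigures the already running instances if needed.
        impl.reconfigureComponent(component);
        // 2. Adapt every affected connection so that 1:n and m:n cardinalities stay consistent.
        for (ConnectionDesc connection : map.connections()) {
            if (connection.componentA.equals(component.id) || connection.componentB.equals(component.id)) {
                impl.reconfigureConnection(connection);
            }
        }
    }
}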
4 Implementation
The elastic manager presented in this chapter was implemented in Java. The general part of the elastic manager constitutes the package org.ow2.ea, which contains all the classes needed to implement an elastic manager for a specific application.
The application developer is expected to inherit some classes from the org.ow2.ea
package and to implement the application-specific parts. (See Figure III.8 for a class
diagram of the org.ow2.ea package).
The four components of the elastic manager are distinct classes with their own
main functions. They can therefore be executed as stand-alone applications on dif-
ferent machines. They use a protocol based on Java sockets for communication. The
following functions can be invoked by other components on their interfaces:
Starter
start(String appMapFilename) - Start the application described in the given file and send the map to the Configurator
Configurator
putMap(AppMap map) - Store the application map
getMap() - Get the stored application map
Controller
getMap() - Get the application map
startComponent(String id) - Start a dynamic component
stopComponent(String id) - Stop a dynamic component
incComponent(String id) - Increase the number of instances of an elastic component
decComponent(String id) - Decrease the number of instances of an elastic component
In the current implementation, the application map is read from a Java Properties
file. Components and connections are identified by a unique String id.
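As a hedged illustration of how such a Properties-based map might be read (the key layout follows the example map in the next section; the class name and the fields printed here are chosen for this example only):

import java.io.FileReader;
import java.util.Properties;

// Minimal sketch: load the application map file and list the declared components.
public class AppMapReader {
    public static void main(String[] args) throws Exception {
        Properties p = new Properties();
        p.load(new FileReader(args[0]));   // e.g. "appmap.properties"
        int nrComponents = Integer.parseInt(p.getProperty("appmap.nrComponents"));
        for (int i = 0; i < nrComponents; i++) {
            String id = p.getProperty("appmap.component." + i + ".id");
            String type = p.getProperty("appmap.component." + i + ".type");
            System.out.println("component " + id + " (" + type + ")");
        }
    }
}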
5 Example
In this section we show how an elastic manager for a simple two-tier web architecture would be implemented. The application consists of a database and web servers. We suppose that the database is able to handle a load far greater than any expected scenario, so there is no need to replicate the database. On the other hand, the web servers can easily become overloaded when the number of requests increases. It is possible to create several identical web servers and distribute the load among them using a load balancer. The architecture is depicted in Figure III.7.
We suppose that the application is running in a cloud and that:
The components are stored in separate virtual machines
It is possible to create several instances of the virtual machine with the web
server
It is possible to obtain a reference and connect to a newly created virtual
machine
There is no need to share state between web servers
The load balancer is accessible from outside of the cloud and its address is
known to users
In terms of our architecture, the application is composed of three components - two static components (LoadBalancer and Database) and one elastic component (WebServer). There are two connections - one between the database and the web server(s) (DatabaseWebServer) and one between the load balancer and the web server(s) (LoadBalancerWebServer).
Figure III.7 - Two-tier web application architecture
The application map for such an application would look like the following (in the form of a Java properties file):
appmap.nrComponents = 3
appmap.nrConnections = 2
appmap.component.0.id = LoadBalancer
appmap.component.0.type = static
appmap.component.0.vm.name = load-balancer
appmap.component.0.startup = true
appmap.component.0.stage = 3
appmap.component.1.id = WebServer
appmap.component.1.type = elastic
# the minimal number of instances
appmap.component.1.instances.min = 1
# the maximal number of instances
appmap.component.1.instances.max = 10
# the initial number of instances
appmap.component.1.instances.def = 1
appmap.component.1.vm.name = web-server
appmap.component.1.startup = true
appmap.component.1.stage = 1
appmap.component.2.id = Database
appmap.component.2.type = static
appmap.component.2.vm.name = database
appmap.component.2.startup = true
appmap.component.2.stage = 0
appmap.connection.0.id = DatabaseWebServer
appmap.connection.0.type = database-webserver
appmap.connection.0.componentA = Database
appmap.connection.0.componentB = WebServer
appmap.connection.0.stage = 2
appmap.connection.1.id = LoadBalancerWebServer
appmap.connection.1.type = load-balancer-web-server
appmap.connection.1.componentA = LoadBalancer
appmap.connection.1.componentB = WebServer
appmap.connection.1.stage = 4
As we can see, each component and connection is assigned the number of the stage at which it will be created. To start the application, the Starter will execute the following functions:
1. startComponent(Database) - to start a database virtual machine
2. startComponent(WebServer) - to start one web-server virtual machine (one is the default number of instances of the web server)
3. bindConnection(DatabaseWebServer) - to wait until the database and web servers are started and connect the web server to the database
4. startComponent(LoadBalancer) - to start a load-balancer virtual machine
5. bindConnection(LoadBalancerWebServer) - to wait until the load balancer is started and connect the load balancer to the web server
After the last step, the application is ready and starts accepting incoming con-
nections. The Starter server sends the application map to the Configurator server.
If desired, the Controller server for the application can be started. It will retrieve
the application map from the Configurator and wait for incoming connections from
an Observer.
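The startup sequence above simply follows the stage numbers assigned in the application map. As a minimal sketch (reusing the hypothetical AppMap and descriptor types from Section 3.3), the Starter's loop might look as follows:

// Illustrative sketch: start components and bind connections in increasing
// order of their stage numbers, as defined in the application map.
class StagedStarter {
    static void startApplication(AppMap map, AppSpecificController impl) {
        for (int stage = 0; stage <= map.maxStage(); stage++) {
            for (ComponentDesc c : map.components()) {
                if (c.startup && c.stage == stage) {
                    impl.startComponent(c);        // e.g. stage 0: Database
                }
            }
            for (ConnectionDesc conn : map.connections()) {
                if (conn.stage == stage) {
                    impl.bindConnection(conn);     // e.g. stage 2: DatabaseWebServer
                }
            }
        }
    }
}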
The Observer, once started, will connect to the Controller, get the application
map and connect to the existing web servers and load balancer. It will periodically
check the load on the servers and if the load reaches a certain limit, it will call the
incComponent function of the Controller.
To increase the number of instances of the WebServer component, the Controller will execute the following functions:
1. reconfigureComponent(WebServer) - to create one more web-server virtual machine
2. reconfigureConnection(DatabaseWebServer) - to wait until the new virtual machine is created and connect the new web server to the database
3. reconfigureConnection(LoadBalancerWebServer) - to connect the new web server to the load balancer and reconfigure the load balancer
The processes of decreasing the number of instances and of stopping the application are the reverse of the processes described above.
Figure III.8 - Class diagram of the org.ow2.ea package - the part of the elastic manager which is common to all applications

Chapter IV
Use Case
This chapter presents the implementation of an elastic application manager for
a load injection application Clif [2]. The first section explains why this use case was
selected. The second section introduces the problem of load testing. The third section
presents the architecture of the Clif application and the fourth section describes the
elastic application manager for this application.
1 Motivation
In the previous chapter we have showed that in case of some application and
some design choices the problem of making an application elastic can become very
complex. We were therefore looking for an application which would make the process
less complicated, while still allow us to show the main features and the feasibility of
the approach.
In cooperation with our industrial partner Orange Labs we decided to use the load injection application Clif as a use case. Clif has several advantages for our purposes. It is written in a component-oriented framework, so the identification and separation of components is already done. Furthermore, there is no complex communication between the components which need to be elastic. And finally, the performance of such an application can easily be measured for further analysis.
2 Load Testing
The idea of load testing is to check that the behavior of a given system under
test (SUT) conforms to specification. A common approach is to send requests to the
SUT, wait for replies and measure response times as they are experienced by the user
or a client system.
Traffic generators are often referred to as load injectors. Several load injectors might be used at the same time to generate a heavy workload. The generated workload may emulate the traffic of a number of real users or clients through so-called virtual users.
Probes are used to obtain accurate measurements of performance at the injectors
and the SUT. These measurements include characteristics such as CPU or memory
consumption and can be used for tuning both the SUT and the injectors.
A supervisor is in charge of controlling and monitoring the distributed set of
load injectors and probes. Figure IV.1 shows a scheme of a load testing platform.
3 Clif
3.1 Presentation
Clif is a load injection framework which has been jointly developed by France Telecom and INRIA in the context of the Java Middleware Open Benchmarking Initiative, an initiative dedicated to benchmarking and performance issues in the context of the ObjectWeb open source community. The idea was to provide a generic, scalable and user-friendly platform for load injection and performance reporting [12].
The Clif platform provides several types of load injectors for generating traffic, supporting common protocols such as HTTP, FTP, SIP, etc. For measuring resource usage, several probes are implemented for processor and memory consumption, network traffic, etc. Clif provides tools for supervising running tests, as well as analysis tools. It can be controlled via a command line interface or one of several graphical interfaces (including an Eclipse plug-in).
Clif is an open source application and can be extended by implementing new injectors and probes in the Java language.
3.2 Component Model
Clif is based on Fractal. Fractal is a component model which was also developed by the ObjectWeb community [3]. One of the motivations for this decision was to evaluate Fractal's support for distributed applications and its claimed flexibility [12].
The main goals of the Fractal component model are to implement, deploy and manage (i.e. monitor and dynamically reconfigure) complex software systems. The main features of Fractal are:
Composite components - to provide a uniform view of applications at various abstraction levels
Shared components - to model resources
Introspection capabilities - to monitor a running system
Configuration and reconfiguration capabilities - to deploy and dynamically reconfigure an application
Another goal was to be applicable to many kinds of software, from embedded software to application servers and information systems [9].
The Fractal component model is defined as an extensible system of relations between well-defined concepts and corresponding APIs that Fractal components may or may not implement. This set of specifications is organized as increasing levels of control.
At the lowest level, a Fractal component is a runtime entity that does not provide
any control capability to other components and is therefore like an object.
At the next level, a component provides a standard interface that other compo-
nents can use to discover all its external interfaces (external introspection).
At the last level, the component allows other components not only to discover,
but also to modify its content, i.e. what is inside the component. In the Fractal model,
this content is made of other Fractal components, called its subcomponents, bound
together through bindings. [10]
Component discovery is assured by using the Fractal registry. The registry is a standalone application running at a network location whose address is known to all components of the application. When a Fractal component is created, it can register itself in the registry using a unique string id. Other components can later use the id to retrieve a reference to the component.
With Fractal it is possible to deploy components on another physical machine. Two special Fractal servers have to be running at known locations - the registry and the code server. The process works as follows: first, the target machine creates an empty Fractal server and registers it in the registry. The source machine connects to the registry and retrieves a reference to the server. Then it requests the server on the target machine to deploy the component. The target Fractal server connects to the code server, downloads the component bytecode and resource files, creates the component and registers it in the registry.
3.3 Architecture
The architecture of Clif is based on the architecture presented in Section 2. It
relies on five component types (Figure IV.2):
Load injector components
Probe components
One supervisor component
One storage component to store all measures and test plan definitions
One or several analysis components that get measures from the storage com-
ponent and provide performance analysis and reporting facilities
Load injectors and probes are autonomous in their activity and are controlled by the supervisor. They produce data (measures, lifecycle events, alarms, ...) that are retrieved by the storage component at the end of the test. Monitoring is achieved in the following way: each load injector and probe maintains moving statistical values about its activity. A supervisor can request the values when needed using the component's DataCollector interface.
In Clif, due to their similarity, the load injectors and probes are considered a single component type - the blade type. Blades contain arbitrary computation capabilities that can be controlled and monitored by the supervisor component. Furthermore, they conform to a well-specified lifecycle. A blade component is first deployed, then initialized and started, and then possibly suspended and resumed. At the end of its activity, the blade can be either aborted (in case of error), completed (in case of a successful execution) or stopped (from outside).
In the context of load testing, the deployed blades are first initialized so that they become ready to start as soon as they are requested to. This leaves enough time to prefetch libraries (i.e. Java classes), get data sets from files, etc. When all the blades are ready, the supervisor requests them all to start their activity. The activity may terminate by itself, either because the workload scenario (respectively the scheduled observation duration) has completed or because it has failed, or it may be terminated from outside by a stop request. The supervisor then asks the storage component to collect the results from the blades.
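For illustration only, the blade lifecycle described above can be captured as a small state type; the enum and the transition check below are a sketch based on the states named in the text, not Clif's actual Java types.

enum BladeState {
    DEPLOYED, INITIALIZED, RUNNING, SUSPENDED,
    ABORTED,    // terminated because of an error
    COMPLETED,  // terminated after a successful execution
    STOPPED     // terminated from outside
}

final class BladeLifecycle {
    // Returns true if the transition is one of those described in the text.
    static boolean isAllowed(BladeState from, BladeState to) {
        switch (from) {
            case DEPLOYED:    return to == BladeState.INITIALIZED;
            case INITIALIZED: return to == BladeState.RUNNING;
            case RUNNING:     return to == BladeState.SUSPENDED || to == BladeState.ABORTED
                                  || to == BladeState.COMPLETED || to == BladeState.STOPPED;
            case SUSPENDED:   return to == BladeState.RUNNING || to == BladeState.STOPPED;
            default:          return false;  // terminal states
        }
    }
}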
4 Elastic Clif
4.1 Specification
In our use case we focused on measuring the maximal throughput of a web appli-
cation, i.e. the task was to find out the maximal number of requests a web application
is able to handle before it becomes saturated and its response times deteriorate sig-
nificantly. To do so, it is necessary to use a load injector which generates traffic towards the application - the system under test (SUT). The traffic injector consumes significant amounts of physical resources of the machine it is running on, and it is often necessary to use several injectors running on different machines.
mine the result in advance and set the number of injectors accordingly. A possible
solution to this problem might be to use an elastic approach to gradually increase
the number of injectors until the server is saturated.
The details of performance evaluation of web applications were out of the scope of our project and we did not aim to find an exact method for such an evaluation. Instead, we focused on the challenge of making the Clif application elastic. The aim of our work was therefore to develop an elastic manager for Clif which would be able to adjust the number of injectors during the execution of a test upon a request from an elastic observer.
4.2 Application Architecture
In order to make an application elastic, one has to find out which components the application is composed of, what their interconnections are and which components should be made elastic. With Clif this is easy to determine, as the application itself is already component-based and the Fractal component model allows the components to be distributed to several locations and handles the communication.
As we have seen in Section 3.3, the Clif application is composed of three main components: the supervisor, the data storage and the load injector. Of these components, only the load injector has to be elastic.
The supervisor component is accompanied by the data files needed to execute a particular test - namely a .cls file describing the probes and injectors and a file defining the HTTP injection scenario.
In addition to these components, two servers of the Fractal component model have to be running - a registry, which the components use to discover other components, and a code server, which is used to distribute the bytecode of the components to different machines.
Another important design decision concerns the platform and component granularity. In the case of Clif, one could envisage a Platform as a Service (PaaS) platform which would provide support for running single Fractal components. However, as no such platform exists at this time, we opted for an Infrastructure as a Service (IaaS) approach and distribute the Clif components to several standalone virtual machines.
We did not find a particular reason for separating the supervisor and the data storage onto different machines. With respect to elasticity, our application is therefore composed of two components:
clif-supervisor (static) - contains the supervisor and data storage components, the Fractal registry and the code server
clif-injector (elastic) - contains the load injector
For the needs of the application it is sufficient to define just one interconnection between the components - clif-supervisor-injector. The default number of instances of the elastic component is set to one.
4.3 Execution Environment
We used a set of physical machines connected to a local network to simulate an IaaS cloud platform. To create such a platform, two essential components are needed - a hypervisor to launch virtual machines and a convenient interface for applications to control the instances of the virtual machines.
4.3.1 Hypervisor
Nowadays, there are several hypervisors available. As the particular choice of the hypervisor did not have an effect on our experiments, we did not need to do any further analysis and used Xen, which had already been used in the SARDES team. Xen is a hypervisor for the IA-32, x86-64, Itanium and ARM architectures. According to its authors, it is an open source industry standard for virtualization [6]. Xen systems have a structure with the Xen hypervisor running as the lowest and most privileged layer. Above this layer run one or more guest operating systems, which the hypervisor schedules across the physical CPUs. The first guest operating system, called dom0, boots automatically and has special management privileges. The system administrator can log into dom0 to manage any other guest operating systems.
Xen virtual machines are stored on the hard drive of the physical machine and consist of several files - a configuration file, which contains parameters of the machine (such as its name, network address, etc.) and the locations of other files, which contain the images of the virtual hard drives of the machine. A user can start a machine by simply entering the following command on the console of the dom0 system:
xm create <configuration_file>
Once the machine is started, the user can connect to the console of the machine
and control it in an interactive way. In the cloud computing context, the machines
are expected to start and run autonomously without the need for any action from
the administrator. They would therefore contain all the required initialization in their
startup scripts.
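As an illustration of the kind of configuration file mentioned above, a minimal Xen domain configuration is sketched below; the name, paths and values are purely hypothetical and do not correspond to the machines actually used in this work.

# hypothetical Xen domain configuration (values for illustration only)
name       = "web-server"
memory     = 256
vcpus      = 1
disk       = ['file:/var/xen/images/web-server.img,xvda,w']
vif        = ['bridge=xenbr0']
bootloader = "/usr/bin/pygrub"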
A pure Xen-based platform has two limitations for the needs of our simulated IaaS. Firstly, it is not possible to start several instances of the same machine (i.e. to use the same configuration file and disk image to create two instances), and secondly, it is not possible to pass arguments to the machine upon startup.
4.3.2 Cloud Interface
To overcome these limitations, we decided to develop a Simple Cloud Interface (Sclint). Sclint is a server application running on the dom0 system. It accepts connections on a predefined port and uses a Java-socket based protocol to communicate with clients.
When started, the Sclint server reads a configuration file with a list of instance types and instances which are available on the machine. An instance type is an entity defined by a unique String id and consists of several instances. An instance corresponds to one Xen virtual machine.
A client can request Sclint to start a new instance. It does so by sending a Create command with two parameters - the instance type id and the arguments. The Sclint server then starts one of the instances of the requested instance type which is not yet running. When the instance is started, it can connect to the Sclint server and obtain the arguments it was started with.
The Sclint implementation also provides a simple interactive console based client.
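As a hedged sketch of what a programmatic client might look like (the host, port and the exact line-based message format of the Create command are assumptions; the wire format is not specified here):

import java.io.PrintWriter;
import java.net.Socket;

// Illustrative Sclint client: asks the server to start one instance of a given
// instance type and passes it a startup argument.
public class SclintClientExample {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("dom0.example.local", 4444);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            // Hypothetical message layout: command, instance type id, arguments.
            out.println("Create clif-injector-server clif-name=injector-2");
        }
    }
}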
4.3.3 Elastic Manager
The elastic manager servers run on one or more separate machines. The elastic starter can be located outside the cloud and only needs to be able to connect to the Sclint interface and to the running elastic configurator.
The elastic controller and elastic observer have to be located in the cloud, as they need a direct connection not only to the Sclint interface, but also to the running application components.
4.4 Implementation
Chapter III presented the general architecture of an elastic application manager
and gave several guidelines on implementing an elastic manager for a specific appli-
cation. It was stated that the application developer is supposed to inherit several
classes from the org.ow2.ea package and implement several functions. Furthermore
the startup behavior of the components (virtual machines) has to be defined.
This example also shows that the notion of interconnection in our system does not necessarily correspond to a physical interconnection. Instead, it can be used to carry out operations that can only be done once the two components are fully initialized, or to move part of an operation to an external entity.
4.4.1 Machine Startup
Defining the startup behavior of the machines is straightforward from the de-
scription in Section 4.2. Each of the machines has to initialize the components it
contains.
clif-supervisor - Upon startup, the supervisor machine starts the two servers needed to run a Fractal application (the registry and the code server). For the sake of simplicity, we assigned a fixed IP address to the machine to make sure that the servers will always be accessible at the same address. When the registry is ready, the supervisor and data collector components are created. The test plan (.cls) file is loaded into memory and the components are registered in the registry. From then on, the supervisor machine waits until all the probes from the test plan are deployed on servers by an external entity. Once this happens, it starts the execution of the test.
clif-injector-server - This machine is supposed to contain the clif-injector. However, in Clif the injectors are defined in the test plan (.cls) file and deployed once the test is started; during the machine startup this file is not available. Instead, the startup script will create an empty Fractal server and connect to the Sclint interface to obtain the startup arguments. It will register the server in the registry under the name obtained as the value of the clif-name argument.
4.4.2 Elastic Starter
The elastic starter works in three steps - it creates the clif-supervisor machine, the clif-injector-server machine and the clif-supervisor-injector-server interconnection.
1. To create the clif-supervisor component, the starter will request the Sclint inter-
face to start one instance of type clif-supervisor. It will then periodically check the
address of the Fractal registry until the registry appears and the supervisor compo-
nent is registered.
2. The starter will request the Sclint interface to create one instance of type clif-
injector-server and pass an argument clif-name as defined in the application map.
When the new Fractal server appears in the registry, the machine is considered
started.
3. The starter will obtain from the registry a reference to the supervisor component
and get the test plan. It will read the test plan and deploy the blades on the Fractal
servers accordingly. Then it will bind the new components to the supervisor and data
collector components.
It should be noted that after step 3, control of the application execution goes back to the supervisor component, which starts the test execution.
At this moment, the test is running according to the test plan. The starter sends the application map to the configurator and terminates.
4.4.3 Elastic Controller
The only operation supported by the elastic controller is increasing the number of instances of the clif-injector-server component upon request from an observer. This operation consists of two steps - creating the new component and reconfiguring its connection with the supervisor.
1. As explained in Section 4.3.2, the Sclint interface supports creating several instances of the same instance type. The only problem in the case of our application is that the registry requires servers to be registered with a unique name. The elastic controller uses the following convention for assigning the value of the clif-name argument: for the first instance, the value is passed as defined in the application map; for the second instance, -2 is appended at the end of the name, for the third instance -3, etc. (a brief sketch of this convention is shown after step 2). As before, the component is considered started when the new server appears in the registry.
2. The controller connects to the supervisor and obtains the test plan. It checks for the blades that had been deployed on the first instance. For each of these blades, a new instance has to be created and deployed on the new server. As the blade names also have to be unique, the controller uses the same convention as for the servers to name the blades. It deploys the blades, binds them to the supervisor and data collector and checks the state of the already existing blades. It changes the state of the new blades to match the state of the existing ones (i.e. if the existing blades are in the RUNNING state, it will initialize and start the new blades too).
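A minimal sketch of the naming convention from step 1 (the method name is hypothetical):

// The first instance keeps the base name; later ones get -2, -3, ... appended.
static String instanceName(String baseName, int instanceNumber) {
    return (instanceNumber == 1) ? baseName : baseName + "-" + instanceNumber;
}
// Example: instanceName("injector", 1) -> "injector"
//          instanceName("injector", 3) -> "injector-3"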
The operation of adding new blades does not take the internal state of the existing blades into account, and the existing blades are not notified of the existence of the new ones. This is a design decision motivated by the fact that in the case of Clif no such synchronization is needed.
4.5 Elastic Observer
The task of the elastic observer is to monitor the execution of the test and adjust the number of instances of elastic components accordingly. As performance testing was not in the scope of our project, we did not implement an automatic control loop for a particular test. Instead, we implemented a simple scenario where the number of elastic instances is increased at predefined time intervals.
To collect intermediate data from the probes, the observer runs a separate thread which periodically connects to the supervisor and obtains the list of blades. It then connects to the DataCollector interface of the blades and obtains the available values. These data could be used by an automatic control loop to make decisions. In our implementation, they are stored in .csv files for further analysis.
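The following is a minimal sketch of such a scenario; the two callbacks stand in for the real calls to the Controller and to the blades' DataCollector interface, and the interval and injector count are assumptions matching the experiment in Chapter V.

import java.io.FileWriter;
import java.io.PrintWriter;
import java.util.function.Supplier;

// Illustrative observer scenario: increase the number of injectors at fixed
// intervals and dump sampled probe values to a CSV file in between.
public class SimpleObserverScenario {
    static void run(Runnable incInjector, Supplier<String> sampleProbes,
                    int maxInjectors, long intervalMs) throws Exception {
        try (PrintWriter csv = new PrintWriter(new FileWriter("observer-data.csv"))) {
            for (int injectors = 1; injectors <= maxInjectors; injectors++) {
                // Sample the probe values a few times during the interval.
                for (int i = 0; i < 10; i++) {
                    csv.println(System.currentTimeMillis() + ";" + injectors + ";" + sampleProbes.get());
                    Thread.sleep(intervalMs / 10);
                }
                if (injectors < maxInjectors) {
                    incInjector.run();   // e.g. controller.incComponent("clif-injector-server")
                }
            }
        }
    }
}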
4.6 Stopping the Application
The observer is responsible for issuing the command to stop the application. It sends a command to the controller, which has to stop the connection and then the components.
In the connection stopping phase, the controller connects to the supervisor and sends stop and collect commands, which cause the supervisor to collect the data from the probes and save them on disk on the supervisor machine.
To stop the components, the controller calls Sclint to destroy the running instances.
The data obtained by the collect operation can be found in files on the supervisor machine. The data obtained by the observer are located in files on the machine running the observer.

Chapter V
Evaluation
In this chapter we present the results obtained when measuring the maximum throughput of a web application. In the first section we explain the test configuration, in the second section the obtained results are presented, and in the third section we summarize our observations.
1 Configuration
As the example web application for our experiment we used MyStore. MyStore is an application developed at France Telecom for experiments with performance testing. It simulates the behaviour of a typical online shop (listing items, adding to cart, etc.). MyStore is written in Java and runs on the JOnAS application server. It is distributed as a sample program with the deployment tool JaDOrT. All these applications are part of an open source stack for enterprise computing developed by the ObjectWeb consortium (nowadays OW2). The system was installed in a virtual machine in the cloud.
We used a configuration with a maximum of five injectors. Each injector generates traffic corresponding to 20 virtual users per second. As we already stated in Chapter IV, we did not implement an automatic control loop. Instead, we started with one injector and added a new injector five minutes after the previously inserted one was initialized.
Apart from the injectors, measurements were also obtained from two probes (CPU and RAM) deployed directly on the machine with the web application.
We used two physical machines for our experiments. The injectors were running in virtual machines on physical machine scloud02, the web server in a virtual machine on physical machine scloud01. The physical machines were dual-core machines with an Intel Core Duo 1.66 GHz processor, 2 GB RAM, Linux 2.6.32 and Xen 3.0 installed. The virtual machines were each assigned 256 MB RAM and were running Linux 2.6.18 and Java Virtual Machine version 1.6.0. The physical machines were connected via a 10 Gb switch in a separate network isolated from other network traffic.
Figure V.1 - Average response time of the server. Vertical lines show times when a new injector was inserted.
2 Results
The most important measurement was the response time of the server. Figure V.1 shows the values obtained. It can be seen that the response time remains relatively low as long as at most three injectors are present. After inserting the fourth injector, the response time increases dramatically to a level which is not acceptable for most users (around 1000 ms instead of 30 ms at the beginning).
Figure V.2 explains what happens in the server. When one injector is present, the CPU usage is constant at a level of around 19 %. After inserting the second injector, it doubles and starts increasing slowly and linearly. When the third injector is added, the usage again increases significantly and the linear increase becomes faster. This is caused by resource contention as the number of users becomes too high. After inserting the fourth injector, the CPU usage reaches 100 %.
Figure V.3 shows a similar increase in the usage of RAM. In this case, after inserting the fourth injector, the usage remains at 93 %.
The results show that the system under test starts having serious problems when more than 60 virtual users per second are present. When the number of virtual users reaches 80, the system is saturated and the response time is no longer acceptable.
3 Observations
The experiment shows that elasticity of injectors is a feasible approach to performance evaluation. The tested system was running on a not very powerful machine, so the number of injectors needed was low. However, the approach is not limited in size and could be used on a much larger infrastructure with several hundred injectors.
A drawback of the current implementation is the time it takes to insert a new injector. When the controller is asked to insert a new instance of the injector component, it has to start a new virtual machine and wait until it is started and initialized. With our infrastructure, this operation took 85 seconds on average.
To shorten this time, it would be possible to modify the mechanism so that the controller keeps several machines started in advance and only initializes and starts the blades when requested. This way the reaction time could be lowered to a few seconds.
39
Figure V.2 - CPU usage of the server. Vertical lines show times when a new injector was inserted.
Figure V.3 - RAM usage of the server. Vertical lines show times when a new injector was inserted.
Chapter VI
Conclusion
Cloud computing has become increasingly popular in the business world in recent years. Several industrial providers run large-scale cloud computing platforms offering different services at the infrastructure, platform and application levels. One of the problems that is not yet fully resolved is the problem of making applications elastic, that is, making them automatically adjust to variations in load without the need for intervention by a human administrator and without the need to change the code of existing applications.
In the context of autonomic computing, which is an effort to develop self-managed complex software systems, the problem can be stated as a self-optimization problem. Autonomic computing suggests a solution using autonomic managers - external applications which continuously observe the performance of the application and reconfigure it to better handle actual needs.
In this project, we first analyzed the challenges faced when implementing elastic control of an arbitrary application. We identified several design choices that have to be made when making an application elastic: the choice of component granularity, determining which components are elastic, handling the interconnections of elastic components and the actual control of elasticity. We then presented an architecture of an elastic manager with a well-defined separation between the general concepts and the application-specific parts.
To show the feasibility of the approach, we selected the load injection application Clif. We studied the Fractal component model used for the development of the application, analyzed its architecture and identified the elastic components. In an experiment with a web application we showed that the approach works and can be used for the development of elastic controllers for real-world applications.
There are two possible directions for future work on the topic: one would be to continue with the Clif experiment and implement and evaluate a fully automatic control loop which would be able to control larger-scale experiments. The other direction would be to move the control from the infrastructure layer to the platform layer and develop an elastic manager which would work directly with Fractal components.

Bibliography
[1] Amazon EC2, May 2011. URL http://aws.amazon.com/ec2/.
[2] Clif, May 2011. URL http://clif.ow2.org/.
[3] Fractal, May 2011. URL http://fractal.ow2.org/.
[4] Google App Engine, May 2011. URL http://code.google.com/appengine/.
[5] Microsoft Windows Azure, May 2011. URL http://www.microsoft.com/
windowsazure/.
[6] Xen, May 2011. URL http://www.xen.org/.
[7] D.P. Anderson. BOINC: a system for Public-Resource computing and storage. In Fifth IEEE/ACM International Workshop on Grid Computing, pages 4-10, Pittsburgh, PA, USA, 2004.
[8] Michael Armbrust, Armando Fox, Rean Griffith, Anthony D Joseph, Randy H
Katz, Andrew Konwinski, Gunho Lee, David A Patterson, Ariel Rabkin, and
Matei Zaharia. Above the clouds: A berkeley view of cloud computing. 2009.
[9] Eric Bruneton, Thierry Coupaye, Matthieu Leclercq, Vivien Quéma, and Jean-Bernard Stefani. The FRACTAL component model and its support in Java. Software: Practice and Experience, 36(11-12):1257-1284, September 2006.
[10] Eric Bruneton, Thierry Coupaye, and Jean-Bernard Stefani. The fractal compo-
nent model. February 2004. URL http://fractal.ow2.org/specification/
index.html.
[11] Azbayar Demberel, Jeff Chase, and Shivnath Babu. Reflective control for an
elastic cloud application: an automated experiment workbench. In Proceedings
of the 2009 conference on Hot topics in cloud computing, pages 88, San Diego,
California, 2009. USENIX Association.
[12] Bruno Dillenseger. CLIF, a framework based on Fractal for flexible, distributed load testing. Annals of Telecommunications, 64(1-2):101-120, 2008.
[13] T. Fahringer, R. Prodan, Rubing Duan, F. Nerieri, S. Podlipnig, Jun Qin, M. Siddiqui, Hong-Linh Truong, A. Villazon, and M. Wieczorek. ASKALON: a grid application development and computing environment. In Grid Computing, IEEE/ACM International Workshop on, pages 122-131, Los Alamitos, CA, USA, 2005. IEEE Computer Society.
[14] J.O. Kephart and D.M. Chess. The vision of autonomic computing. Computer, 36(1):41-50, 2003.
[15] Harold C. Lim, Shivnath Babu, and Jeffrey S. Chase. Automated control for elastic storage. In Proceedings of the 7th International Conference on Autonomic Computing - ICAC '10, page 1, Washington, DC, USA, 2010.



GRID COMPUTING
http://dlib.cs.odu.edu/WhatIsTheGrid.pdf

What is the Grid? A Three Point Checklist
Ian Foster
Argonne National Laboratory & University of Chicago
foster@mcs.anl.gov
July 20, 2002
The recent explosion of commercial and scientific interest in the Grid makes it timely to
revisit the question: What is the Grid, anyway? I propose here a three-point checklist for
determining whether a system is a Grid. I also discuss the critical role that standards must
play in defining the Grid.
The Need for a Clear Definition
Grids have moved from the obscurely academic to the highly popular. We read about
Compute Grids, Data Grids, Science Grids, Access Grids, Knowledge Grids, Bio Grids,
Sensor Grids, Cluster Grids, Campus Grids, Tera Grids, and Commodity Grids. The
skeptic can be forgiven for wondering if there is more to the Grid than, as one wag put it, a funding concept - and, as industry becomes involved, a marketing slogan. If by deploying a scheduler on my local area network I create a Cluster Grid, then doesn't my Network File System deployment over that same network provide me with a Storage Grid? Indeed, isn't my workstation, coupling as it does processor, memory, disk, and network card, a PC Grid? Is there any computer system that isn't a Grid?
Ultimately the Grid must be evaluated in terms of the applications, business value, and
scientific results that it delivers, not its architecture. Nevertheless, the questions above
must be answered if Grid computing is to obtain the credibility and focus that it needs to
grow and prosper. In this and other respects, our situation is similar to that of the Internet
in the early 1990s. Back then, vendors were claiming that private networks such as SNA
and DECNET were part of the Internet, and others were claiming that every local area
network was a form of Internet. This confused situation was only clarified when the
Internet Protocol (IP) became widely adopted for both wide area and local area networks.
Early Definitions
Back in 1998, Carl Kesselman and I attempted a definition in the book The Grid:
Blueprint for a New Computing Infrastructure. We wrote:
A computational grid is a hardware and software infrastructure
that provides dependable, consistent, pervasive, and inexpensive
access to high-end computational capabilities.
Of course, in writing these words we were not the first to talk about on-demand access to
computing, data, and services. For example, in 1969 Len Kleinrock suggested presciently, if prematurely:
We will probably see the spread of computer utilities, which, like present electric and telephone utilities, will service individual homes and offices across the country.
In a subsequent article, The Anatomy of the Grid, co-authored with Steve Tuecke in
2000, we refined the definition to address social and policy issues, stating that Grid
computing is concerned with coordinated resource sharing and problem solving in
dynamic, multi-institutional virtual organizations. The key concept is the ability to
negotiate resource-sharing arrangements among a set of participating parties (providers
and consumers) and then to use the resulting resource pool for some purpose. We noted:
The sharing that we are concerned with is not primarily file
exchange but rather direct access to computers, software, data, and
other resources, as is required by a range of collaborative problemsolving
and resource-brokering strategies emerging in industry,
science, and engineering. This sharing is, necessarily, highly
controlled, with resource providers and consumers defining clearly
and carefully just what is shared, who is allowed to share, and the
conditions under which sharing occurs. A set of individuals and/or
institutions defined by such sharing rules form what we call a
virtual organization.
We also spoke to the importance of standard protocols as a means of enabling
interoperability and common infrastructure.
A Grid Checklist
I suggest that the essence of the definitions above can be captured in a simple checklist,
according to which a Grid is a system that:
1) coordinates resources that are not subject to centralized control
(A Grid integrates and coordinates resources and users that live within different control domains - for example, the user's desktop vs. central computing; different administrative units of the same company; or different companies; and addresses the issues of security, policy, payment, membership, and so forth that arise in these settings. Otherwise, we are dealing with a local management system.)
2) using standard, open, general-purpose protocols and interfaces
(A Grid is built from multi-purpose protocols and interfaces that
address such fundamental issues as authentication, authorization,
resource discovery, and resource access. As I discuss further
below, it is important that these protocols and interfaces be
standard and open. Otherwise, we are dealing with an application-specific system.)
3) to deliver nontrivial qualities of service. (A Grid allows its
constituent resources to be used in a coordinated fashion to deliver
various qualities of service, relating for example to response time,
throughput, availability, and security, and/or co-allocation of
multiple resource types to meet complex user demands, so that the
utility of the combined system is significantly greater than that of
the sum of its parts.)
Of course, the checklist still leaves room for reasonable debate, concerning for example what is meant by centralized control, standard, open, general-purpose protocols, and qualities of service. I speak to these issues below. But first let's try the checklist on a few candidate Grids.
First, let's consider systems that, according to my checklist, do not qualify as Grids. A cluster management system such as Sun's Sun Grid Engine, Platform's Load Sharing Facility, or Veridian's Portable Batch System can, when installed on a parallel computer or local area network, deliver quality of service guarantees and thus constitute a powerful Grid resource. However, such a system is not a Grid itself, due to its centralized control of the hosts that it manages: it has complete knowledge of system state and user requests,
and complete control over individual components. At a different scale, the Web is not
(yet) a Grid: its open, general-purpose protocols support access to distributed resources
but not the coordinated use of those resources to deliver interesting qualities of service.
On the other hand, deployments of multi-site schedulers such as Platform's MultiCluster can reasonably be called (first-generation) Grids - as can distributed computing systems provided by Condor, Entropia, and United Devices, which harness idle desktops; peer-to-peer systems such as Gnutella, which support file sharing among participating peers; and a federated deployment of the Storage Resource Broker, which supports distributed access to data resources. While arguably the protocols used in these systems are too specialized to meet criterion #2 (and are not, for the most part, open or standard), each does integrate distributed resources in the absence of centralized control, and delivers interesting qualities of service, albeit in narrow domains.
The three criteria apply most clearly to the various large-scale Grid deployments being
undertaken within the scientific community, such as the distributed data processing
system being deployed internationally by Data Grid projects (GriPhyN, PPDG, EU
DataGrid, iVDGL, DataTAG), NASA's Information Power Grid, the Distributed ASCI
Supercomputer (DAS-2) system that links clusters at five Dutch universities, the DOE
Science Grid and DISCOM Grid that link systems at DOE laboratories, and the TeraGrid
being constructed to link major U.S. academic sites. Each of these systems integrates
resources from multiple institutions, each with their own policies and mechanisms; uses
open, general-purpose (Globus Toolkit) protocols to negotiate and manage sharing; and
addresses multiple quality of service dimensions, including security, reliability, and
performance.
The Grid: The Need for InterGrid Protocols
My checklist speaks to what it means to be a Grid, yet the title of this article asks what
is the Grid. This is an important distinction. The Grid vision requires protocols (and
interfaces and policies) that are not only open and general-purpose but also standard. It is
standards that allow us to establish resource-sharing arrangements dynamically with any
interested party and thus to create something more than a plethora of balkanized,
incompatible, non-interoperable distributed systems. Standards are also important as a
means of enabling general-purpose services and tools.
In my view, the definition of standard InterGrid protocols is the single most critical
problem facing the Grid community today. Fortunately, we are making good progress.
On the standards side, we have the increasingly effective Global Grid Forum. On the
practical side, six years of experience and refinement have produced a widely used de
facto standard, the open source Globus Toolkit. And now, within the Global Grid Forum
we have major efforts underway to define the Open Grid Services Architecture (OGSA),
which modernizes and extends Globus Toolkit protocols to address emerging new
requirements, while also embracing Web services. Companies such as IBM, Microsoft,
Platform, Sun, Avaki, Entropia, and United Devices have all expressed strong support for
OGSA. I hope that in the near future, we will be able to state that for an entity to be part
of the Grid it must implement OGSA InterGrid protocols, just as to be part of the Internet
an entity must speak IP (among other things). Both open source and commercial products
will interoperate effectively in this heterogeneous, multi-vendor Grid world, thus
providing the pervasive infrastructure that will enable successful Grid applications.
Thanks for reading this far. I expect to be writing further columns for Grid Today, so
please feel free to contact me if there are issues that you would like to see raised in this
forum.


http://cisjournal.org/journalofcomputing/archive/vol2no12/vol2no12_9.pdf

VOL. 2, NO. 12, December 2011 ISSN 2079-8407
Journal of Emerging Trends in Computing and Information Sciences
2009-2011 CIS Journal. All rights reserved.
http://www.cisjournal.org
Comparative Study of Scalability and Availability in
Cloud and Utility Computing
1 Farrukh Shahzad Ahmed, 2 Ammad Aslam, 3 Shahbaz Ahmed, 4 M. Abdul Qadoos Bilal
1 Netsolace Information Technology PVT (LTD), Islamabad
2 White Wings PVT (LTD), Islamabad
3,4 Department of Computer Science & Software Engineering, International Islamic University, Islamabad
{1 fshahzad11@gmail.com, 2 amaa08@student.bth.se, 3 shahbaz.ahmed@iiu.edu.pk}
ABSTRACT
The Cloud computing and Utility computing paradigms are two resource-sharing architectures. The vivid and multi-institutional nature of these environments instigates different challenges in the context of availability and scalability. In this report we discuss the normal architecture of Cloud and Utility computing, followed by the crucial areas of availability and scalability. To address these problems we propose a new controlling and scheduling mechanism, the Optimized Scheduler Authenticator and Controller (OSAC) [1]. Qualitative and quantitative research strategies are used to emphasise these areas. An experiment is conducted to get a comparative view of availability in a real context. Interviews are used as a means of data collection for scalability issues.
Keywords: Scalability, availability, cloud computing, utility computing.
1. INTRODUCTION
Extensive study of both paradigms reveals that they rely on the same underlying infrastructure, inherited from Grid computing. One can be differentiated from the other on the basis of its implementation. In an attempt to address scalability and availability in these two paradigms, we propose the Optimized Scheduler Authenticator and Controller (OSAC). A general overview of these areas is given in this paper, while the low-level implementation details of OSAC and the communication mechanisms between different OSACs will be addressed in future work.
The ever-increasing use and popularity of the Internet has turned it into a distributed computing platform. This shift has been persuading companies worldwide to outsource their computing resources, business processes, business applications, and data storage and maintenance, to get the benefit of up-to-date IT technologies and to focus on their core business competencies [2] in order to survive and compete. This survival competition is the key driving factor in evolving the Internet into a distributed computing platform. Diverse support, the ability to scale from small networks with a few devices up to a global scale, and support for wireless technology are some of the intriguing features of distributed networks for companies. The support for new devices will increase in the future [3].
Cloud and Utility computing are envisioned as next-generation computing platforms [4][5]. They extend traditional distributed computing by providing large-scale sharing of storage and computation resources [1]. Grid computing is defined as "a system that uses open, general purpose protocols to federate distributed resources and to deliver better-than-best-effort qualities of service" [6]. Utility computing is defined as "a collection of technologies and business practices that enables computing to be delivered seamlessly and reliably across multiple computers" [7].
The idea of a Cloud is a system which has loose boundaries and is able to interact and merge with other such systems. There is no precise and comprehensive definition of cloud computing yet available. The notion is that the applications run somewhere in the Cloud, which users are least concerned about. The notions of both paradigms still overlap. Cloud computing relates to the underlying architecture in which services are designed, which may equally apply to Utility services [8]. A couple of definitions of Cloud computing are the following. According to Gartner, Cloud computing is "a style of computing where massively scalable IT-related capabilities are provided as a service across the Internet to multiple external customers" [9]. According to Aaron Weiss, "powerful services and applications are being integrated and packaged on the Web in what the industry now calls cloud computing" [10].
As mentioned earlier, Cloud and Utility computing are comparatively new and evolving areas in the IT industry and there is a lot to be done. The foremost concerns are scalability and availability. The purpose of this paper is to study the following issues in the Cloud and Utility computing paradigms:
1. Availability
2. Scalability
2. BACKGROUND AND RELATED WORK
Cloud computing addresses both platform and application [11], whereas Utility computing is the combination of computing resources offered as a metered service. It is governed by a service level agreement (SLA) between the user and the service provider, like any other physical public utility [5]. The underlying architecture differentiates the two computing architectures from each other. Both are Service Oriented Architectures (SOA) where services (combinations of hardware and/or software) are delivered on demand. Utility computing delivers application infrastructure resources [8] in correlation to the business. On the contrary, in Cloud computing applications are developed and deployed in a way that allows them to run in a virtualized environment, dynamically allocating and sharing resources, which lets them grow, shrink and self-heal. Cloud computing is characterised by this dynamic behaviour. James Governor, an analyst for RedMonk who has been an IBM and Microsoft corporate watcher for 8 years, argued that machines in a Cloud architecture are not visible [12] to the users. These resource pools in a Cloud architecture can be located anywhere in the world [13]. Contrary to Utility computing, which offers Software as a Service (SaaS) (examples are Salesforce, Gmail, Gliffy), Cloud computing also offers Platform as a Service (PaaS) (examples are Mosso, Google App Engine, Rails One) [1], where applications run on a virtual operating system. Another innovation in Cloud computing which differentiates it from Utility computing is Infrastructure as a Service (IaaS). Companies like Joyent, Amazon Web Services, Nirvanix etc. are offering the whole computing infrastructure as a service, from online development platforms to normal computing requirements. Systems based on the Cloud computing framework can interact with each other, sharing and pooling resources for greater efficiency over a large deployment such as an enterprise [14]. The complex multi-tier nature of enterprise applications makes them challenging to deploy on a Cloud framework. As a result, dynamically adjusting the resources of an application not only has to take into account the local resource demands at the node where a component of that application is hosted, but also the resource demands of all the other application-related components on other nodes [8].
Related Work
To the best of our knowledge, no experimental studies have been conducted that compare and evaluate the availability of resources between the two different architectures. Furthermore, no comparison has been made between two computing service providers in terms of scalability.
Optimized Scheduler Authenticator and Controller
(OSAC)
In enterprise applications, different amounts of resources are required at different tiers [15]. Despite the fact that resources are available, data centres are often not fully utilized for this very reason. The role of OSAC is to allocate resources intelligently in a way that they are fully utilized, considering the performance and requirement parameters. A generic architecture of OSAC is given in Figure 1 [1].
Figure 1: Generic Architecture of Optimized
Scheduler Authenticator and Controller
Working of OSAC
The Optimized Scheduler Authenticator and Controller (OSAC) is comprised of two modules: the first is the Optimized Scheduler and Authenticator (OSA) and the second is termed the Optimized Controller (OC). OSAC runs on the virtual network layer, the core of Cloud and Utility computing infrastructures. Users connect to the network via the OSA, which besides authenticating the user also schedules and allocates resources to the user and its processes. In other words, the OSA schedules users' resource requests. One level further down, the OSA has two sub-modules - the Optimized Scheduler and the Authenticator. The user is authenticated by the Authenticator, while the Optimized Scheduler schedules the resources for the authenticated users. The token request for the required resource is passed to the OC, which in turn provides the list of available resources to the OSA. The OSA then allocates the requested resource to the user on the basis of the best possible performance. The parameters for calculating performance are the latency, network traffic and bandwidth required by the user.
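As a hedged sketch of how such a performance-based choice might look (the paper does not give a scoring formula, so the Resource fields and the weights below are assumptions made only for illustration):

import java.util.Comparator;
import java.util.List;

// Illustrative sketch: pick the best resource from the list provided by the OC,
// scoring it by latency, current network traffic and available bandwidth.
class Resource {
    String id;
    double latencyMs;       // lower is better
    double trafficLoad;     // current load, 0..1, lower is better
    double bandwidthMbps;   // higher is better
}

class OptimizedSchedulerSketch {
    static double score(Resource r, double requiredBandwidthMbps) {
        if (r.bandwidthMbps < requiredBandwidthMbps) return Double.NEGATIVE_INFINITY;
        return -1.0 * r.latencyMs - 100.0 * r.trafficLoad + 0.1 * r.bandwidthMbps;  // assumed weights
    }

    static Resource allocate(List<Resource> available, double requiredBandwidthMbps) {
        return available.stream()
                .max(Comparator.comparingDouble(r -> score(r, requiredBandwidthMbps)))
                .orElse(null);   // nothing available
    }
}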
Figure 2: Architecture of Optimized Scheduler and
Controller
Whenever any resource becomes unavailable or any node goes down, OSA, without interrupting the user, checks the availability of the resource in the resource pool through OC. OC provides the current updated list of available resources to OSA, which in turn reallocates the resource to the user. As soon as the user is finished with the resource and the resource is free, OSA returns it to the resource pool by notifying OC. An interesting feature of OSAC is that as soon as OC gets an update about any free resource from the OSAs, it notifies the OSAs of the updated resource pool list, which they use to recalculate the allocation of that resource for the users currently using the specific resource. The resource is then reallocated to the user without interrupting its processing.
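A minimal sketch of this OC/OSA interaction is given below, assuming simple in-memory data structures; the class and method names are illustrative assumptions, not the paper's actual implementation.

# Sketch of OC notifying OSA instances about the updated resource pool (illustrative).
class OptimizedController:
    def __init__(self, free_resources):
        self.pool = set(free_resources)   # currently free resources
        self.osas = []                    # registered OSA instances

    def register(self, osa):
        self.osas.append(osa)

    def release(self, resource):
        # A resource was freed or recovered; update the pool and notify all OSAs.
        self.pool.add(resource)
        for osa in self.osas:
            osa.on_pool_update(sorted(self.pool))

class OSA:
    def __init__(self, oc):
        self.oc = oc
        self.allocations = {}             # user -> resource (None if its resource failed)
        oc.register(self)

    def on_pool_update(self, free_resources):
        # Transparently reallocate a free resource to any user whose resource went down.
        for user, res in self.allocations.items():
            if res is None and free_resources:
                self.allocations[user] = free_resources.pop(0)

oc = OptimizedController(["r3"])
osa = OSA(oc)
osa.allocations["user1"] = None           # user1's resource has gone down
oc.release("r4")                          # OC announces a freed resource
print(osa.allocations["user1"])           # user1 is reallocated without interruption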
Availability
The availability of a system or component is the fraction of time it is available, and it describes the system's behaviour. It is defined as "the degree to which a system or component is operational and accessible when required for use" [16]. Availability is calculated as:
Availability = Uptime / (Uptime + Downtime) = MTBF / (MTBF + MTTR)
where Mean Time To Recover/repair (MTTR) is the average time it takes to recover, and Mean Time Between Failures (MTBF) is the average time between failures.
If MTBF is much greater than MTTR, then Availability ≈ 1 - MTTR / MTBF.
A system with 0.99 availability has 1 - 0.99 = 0.01 probability of failure. Availability measures give the reliability of a system, which is defined as: "Reliability of a system is the probability, over a given period of time, that the system will correctly deliver the services as expected by the user" [16].
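A short worked example of these formulas, with assumed figures:

# Worked example of the availability formula (values are illustrative assumptions).
mtbf = 1000.0   # hours between failures
mttr = 10.0     # hours to recover

availability = mtbf / (mtbf + mttr)           # exact form: about 0.9901
approximation = 1 - mttr / mtbf               # approximation when MTBF >> MTTR: 0.99
probability_of_failure = 1 - availability     # about 0.0099

print(availability, approximation, probability_of_failure)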
Table 1: Uptime and Maximum Downtime

Uptime         Uptime       Downtime per Year   Downtime per Week
Seven nines    99.99999%    3.15 s              60.5 ms
Six nines      99.9999%     31.5 s              0.605 s
Five nines     99.999%      5 min 35 s          6.05 s
Four nines     99.99%       52 min 33 s         1.01 min
Three nines    99.9%        8 hrs 46 min        10.1 min
Two nines      99.0%        87 hrs 36 min       1.68 hrs
One nine       90.0%        36 days 12 hrs      16.8 hrs
Availability, reliability and performance are different terminologies, yet they are closely linked to each other. The goal is always to achieve the best throughput from the resources, provided they are reliable and available when needed. The nines of availability (shown in Table 1) best describe in which category a system lies. Achieving high availability is always expensive, but for critical systems, such as life-critical airplane computers and defence systems, seven nines availability is required, whereas telecom, navigation, banking and ATM systems can rely on six nines. For office systems and messaging systems, three and four nines availability, respectively, serve the purpose.
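As a rough check of Table 1, the permissible downtime for a given availability can be computed directly; a minimal sketch follows (assuming a 365-day year, so results may differ slightly from the table due to rounding conventions):

# Permissible downtime for a given availability level (illustrative).
def downtime_hours(availability, period_hours):
    return (1 - availability) * period_hours

YEAR_HOURS = 365 * 24
WEEK_HOURS = 7 * 24

print(downtime_hours(0.999, YEAR_HOURS))        # three nines: 8.76 hours per year
print(downtime_hours(0.999, WEEK_HOURS) * 60)   # about 10.1 minutes per week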
New enterprise data centres are now being planned with Cloud and Utility computing architectures. All hardware resources, and in some implementations applications as well (for example, Google App Engine), are pooled into a common shared infrastructure, and users share these resources on an on-demand basis that changes over time [17]. Requests for resources are scheduled by OSA and controlled by OC, as explained above.
3. EXPERIMENT PLANNING
We have conducted an experiment to find and compare the availability of the Amazon Elastic Compute Cloud (EC2) architecture with the OSAC architecture. As the experiment is not conducted in real time, we have made the following assumptions [1]:
1. A normal and controlled working environment.
2. The values are not real and are assumed for the experiment.
3. No extra functionality is added to the system.
4. The systems requesting resource allocation are at the same level of performance.
5. All systems are requesting the same type of resources.
6. All systems are in operational mode from the same time.
Hypothesis Testing
In our experiment we have used both a null hypothesis and an alternative hypothesis, defined as follows:
H0: The probability of failure in the OSAC architecture will be higher than in the Amazon EC2 architecture.
H1: The availability of resources in the OSAC architecture will be higher than in the Amazon EC2 architecture.
Variables Selection
Both dependent and independent variables are used in the experiment. Details are as follows:
Independent Variables
Mean Time to Recover (MTTR):
Mean Time to Recover is the average time taken by a device to recover from any type of failure.
Uptime:
Uptime defines how long the system has been running or up, for example 10 days, 30 days, etc.
Downtime:
Downtime determines how long a system's resource(s) is not available for access.
Dependent Variables
Mean Time Between Failures (MTBF):
MTBF measures the average time between failures occurring in the system. It is calculated as:
MTBF = (number of systems x time period) / number of failures during that time period.
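For example, a minimal sketch of this calculation with assumed figures (the number of failures is an assumption, not an experimental result):

# Illustrative MTBF calculation using the formula above.
systems = 10
period_hours = 15 * 24 + 6    # 15 days and 6 hours, the length of the experiment
failures = 5                  # assumed number of failures observed

mtbf = (systems * period_hours) / failures
print(mtbf)                   # 732.0 hours between failures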
Availability:
Availability depends on the uptime and downtime and defines how often the resources are available.
Instrumentation
To conduct the experiment, the required resources are prepared and installed on the machines on which the test is to be performed.
All systems are connected to a high-speed broadband internet connection.
The Amazon EC2 and OSAC services are available on all systems.
For measuring the independent variables, system clock software is installed on the systems. It is used to gather statistical data on the time of failure, uptime and downtime. Besides the independent variables, the dependent variables are also measured with the help of software, to get the whole picture.
Experiment Design
The experiment is based upon three design principles.
Randomized Design: As this experiment is based upon a randomized design, each system has been given access to request resources from either type of service, whether from OSAC or Amazon EC2. A system that is using a service of one of the architectures may ask for a service from the other architecture; for example, a system that is using the Amazon EC2 service may request to utilize the service of OSAC.
Balancing: To ensure balance in the experiment, equal types of resources are accessed by the systems (objects) from both services. The broadband connection is the same, and the systems used have the same specifications.
Blocking: As the experiment takes place in an open environment, i.e. over the internet, every computer can access resources from these services. An ID is allotted to each system and is verified before accessing the resources; systems that do not have any ID are blocked.
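A minimal sketch of the randomization and blocking logic described above (system IDs and service names are illustrative assumptions):

# Randomized assignment of requests plus ID-based blocking (illustrative).
import random

SERVICES = ["Amazon EC2", "OSAC"]
REGISTERED_IDS = {"sys-01", "sys-02", "sys-03"}

def request_resource(system_id):
    # Blocking: only systems holding a known ID may access the services.
    if system_id not in REGISTERED_IDS:
        return None   # blocked
    # Randomization: each request may be directed to either architecture.
    return random.choice(SERVICES)

print(request_resource("sys-01"))    # one of the two services
print(request_resource("unknown"))   # None (blocked)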
Experiment Operation
Preparation
Data is gathered through automated software. This minimizes physical user interaction and human error, resulting in more precise results. The software uses different algorithms for randomly accessing the resources and the different services.
Execution
The total length of the experiment was 15 days and 6 hours. All the systems accessed the services as intended, and no system resulted in failure.
The average time periods and results were calculated by special software installed on each system. It was also validated from the log files on the servers that all the systems accessed the resources, which ensures that
balancing, randomization and blocking took place in the intended way.
Validity Evaluation
Like other scientific experiments, our experiment also has some risks associated with its validity. This section covers these risks. The validity of this experiment is assessed using the validity evaluation framework proposed by Wohlin [18].
Internal Validity
Internal validity is the approximate accuracy of inferences concerning cause-effect or causal relationships [19]. The experiment is conducted on a number of computer systems that constitute a single group, therefore posing single-group risks rather than multiple-group risks.
History Risk:
All the systems have the same level of specification. They all belong to the same level of effectiveness and history, so there is no risk involved in this experiment related to selection history.
Maturation Risk:
This risk concerns our study on account of the different requests made by the systems for different resources.
Testing Risk:
It is not a valid risk for our study because we are not concerned with how pre-test conditions relate to the post-tests.
Instrumentation Risks:
As we are using different types of software and services, there is a chance that they produce inaccurate results. This is a major risk in our experiment, and all the outcomes may become invalid because of it.
Mortality Risk:
All the systems have the same level of functionality and the same devices and software installed on them. Therefore, this risk again does not concern this specific experiment.
Regression Risk:
It is also not a valid risk, because machines are participating here and the selection is not based upon their previous functionality.
External Validity
External validity is the degree to which the conclusions of the experiment would hold for other persons in other places and at other times [20]. In our experiment we are using the systems as objects. Their specifications are the same, but in general terms we are unaware of how other systems with different specifications, bandwidth and work environments would behave.
The other risk concerns the limited scope of our test. We are comparing OSAC only with Amazon EC2; there are many other architectures available in the market that might produce better results compared with our proposed architecture.
Another risk lies in the system software that is installed. This software directs which operations are to be performed, but in actual practice, with human interference, our results may or may not hold their validity.
Construct Validity
Interaction of Different Treatments:
This threat is involved in our experiment because the systems used in the experiment may concurrently be involved in other programs designed to have similar effects at the same time.
Restricted Generalizability Across Constructs:
Although the same software is installed on all machines, there is a possibility that one piece of software does not perform the resource allocation requests to the services as well, ending up with a false construct validity.
Confounding Constructs and Levels of Constructs:
The threat is that, for the sequence of system requests used here, OSAC performs better compared to Amazon EC2; but when used in industry, where the request sequence is different, this architecture may not perform better.
Conclusion Validity
Low Statistical Power:
This threat is present because some systems may not take part in the experiment, e.g. due to system errors and hardware failures.
Reliability of Measures:
We are measuring availability with a specific and standard formula, so there is a risk involved in terms of the reliability of the outcomes.
Data Analysis
Data analysis is used to illustrate the fundamental features of the data in an experiment. With the help of graphical analysis, it provides simple summaries about the sample and the measures [21]. The procedure used for data analysis is the one proposed by Wohlin [18], based on the following steps: descriptive statistics, data set reduction and hypothesis testing.
Descriptive Statistics:
Descriptive statistics are used to illustrate the fundamental features of the data in an experiment.
In Table 2, we show how much availability a system can provide using the current architecture of Utility and Cloud computing. We test the system with different sets of values and find that the probability of failure is very low when the system provides higher availability, but in most cases the values do not satisfy the required availability.
Table 2: System Availability Analysis under the Current Structure

Availability   Probability of Failure
0.99           0.01
0.89           0.11
0.91           0.09
0.92           0.08
0.88           0.12
0.99           0.01
0.95           0.05
0.902          0.011
0.935          0.065
0.938          0.062
0.951          0.049
Figure 2: System Availability Analysis of
Current Structure (EC2)
Data Set Reduction
When we compare this architecture with our proposed OSAC by applying the same set of values, it is found that there are comparatively fewer occurrences of failure.
After obtaining the data with the descriptive statistics approach, researchers use data set reduction to present the data. This is due to outliers present in the dataset that may influence the results [18]. To produce better results for the experiment, we identified an outlier in our dataset that shows unusual behaviour compared with the rest of the dataset. This unusual value, caused by the power failure of a system, is identified with the help of a scatter plot and excluded. If we included this value, it would change the whole scenario of the results.
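A minimal sketch of this reduction step, using a simple two-standard-deviation rule instead of a scatter plot (the sample values and the threshold are assumptions for illustration):

# Excluding an unusual value (e.g. a reading distorted by a power failure) before analysis.
samples = [0.99, 0.95, 0.96, 0.92, 0.93, 0.10]   # 0.10 is the assumed outlier

mean = sum(samples) / len(samples)
std = (sum((x - mean) ** 2 for x in samples) / len(samples)) ** 0.5

# Keep only values within two standard deviations of the mean.
reduced = [x for x in samples if abs(x - mean) <= 2 * std]
print(reduced)   # the 0.10 outlier is removed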
Hypothesis Testing
We have performed hypothesis testing on the null hypothesis, that is, H0: the probability of failure in the OSAC architecture will be higher than in the Amazon EC2 architecture. The null hypothesis testing is based upon samples taken from a statistical distribution. A sample is selected in order to reject the null hypothesis [18]. The factor used here is availability and, as discussed above, the experiment type is single group, single treatment. Therefore, we used the Chi-square method because it deals with frequencies of data [18]. Table 2 shows the frequencies as they were obtained during the analysis of the statistical distribution. We used an ordinal measurement scale, because this method is selected as a non-parametric test based on the design type. The results of the experiment are collected by executing the test in order to draw the conclusion.
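A minimal sketch of such a Chi-square test on failure frequencies, using SciPy; the observed counts and the even-split reference distribution are assumptions for illustration, not the experiment's actual frequencies:

# Chi-square test on failure frequencies (illustrative data).
from scipy.stats import chisquare

observed = [4, 12]                      # assumed failure counts: [OSAC, Amazon EC2]
expected = [sum(observed) / 2] * 2      # even split used as the reference distribution

stat, p_value = chisquare(observed, f_exp=expected)
print(stat, p_value)                    # a small p-value indicates the frequencies differ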
4. DISCUSSION
After executing the experiment and gathering the results, we find that the OSAC architecture is the better architecture in terms of the availability of resources. The results reject the null hypothesis and support the alternative hypothesis. The results show that the current architecture does not provide high availability. Figure 3 shows that the current system is not appropriate to use and that the chances lean more towards the unavailability of resources. OSAC provides comparatively higher system availability.
Table 3: System Availability Analysis of OSAC

Availability   Probability of Failure
0.99           0.01
0.95           0.05
0.96           0.04
0.92           0.08
0.93           0.07
0.963          0.037
0.968          0.032
0.971          0.029
0.915          0.085
0.968          0.032
0.981          0.019
Scalability
The notion of scalability is "how well the solution to some problem will work when the size of the problem increases" [22]. Many different vendors, for example Hewlett-Packard, Sun Microsystems, IBM, etc., are offering Cloud and Utility computing services. Scalability has become an important aspect of the infrastructure. Different vendors provide different types of services on Cloud and Utility computing, and they have different levels of scalability along with different definitions of it, as there is no definition that describes scalability in universal terms. A system is called unscalable "if the additional cost of coping with a given increase in traffic or size is excessive, or that it cannot cope at this increased level at all" [23].
Figure 3: Probability of Failure in Current and OSAC
Architecture
Qualitative Research Strategy
The qualitative research is devised to start with data collection, which is carried out through interviews and observation. There are two different types of action performed in the data collection process. One is data elicitation and the other is data recording, for interpretation of the data in textual form that is used afterwards for data analysis.
Data Collection
As there are many different companies offering these services, we have chosen two different companies, IBM and Hewlett-Packard (HP). Data collection is done by conducting semi-structured interviews with the users (objects), and observations are made on the basis of the interviews.
Interviews are used for data collection because they are a simple, reliable and powerful approach to getting accurate and precise information from the customers of the service. In the structured part of the interview, the questions are predefined for scalability testing. The interviewees are free to mention any other issue that is not in the interview question list, or any other problem that is not related to scalability.
The interviews are conducted with users that have experience with both companies in terms of scalability. The interviews are video-recorded so that the host does not waste time taking notes on what a customer says and can instead spend more time obtaining accurate answers from the subject. As scalability is largely a marketing concern, the interview questions that are most relevant to marketing managers are prioritized as compared to those for other stakeholders such as developers or designers.
Data Analysis Procedure
Data is gathered through videos. The videos are then converted into textual form. The text contains not only the speech during the interview but also expressions such as tone of voice, feelings and body language. All expressions are captured so that we can analyze the data in an accurate way and produce accurate research results. The results are also distributed in a presentable format to the different stakeholders of the company, i.e. designers and developers, so that they can analyze what they require from the computing service. The analysis produces the result that IBM computing services provide better scalable services to the user as compared to HP.
Validity of the Study
A possible risk is that the subject may misunderstand a question while answering it. For example, Cloud computing and Utility computing are based upon Grid computing, and a person might answer the question in the context of Grid computing. The solution to this threat is that the host provides an example along with the asked question, so that the subject is not confused and answers the questions in the required manner. The other threat comes from worthless data. We are analyzing all the data in textual form. This data will also involve raw data which is useless in the qualitative research; the involvement of this worthless data in the research may lead us towards inaccurate results.
Expected Outcome
By implementing its IT infrastructure on the Cloud or Utility computing paradigm, a company can gain an increase in its revenue, as both paradigms are scalable to a great extent and offer to accommodate a practically unlimited number of users. The major expected outcome of the study is to find out which company is providing the better computing service in terms of scalability. As discussed above, scalability is used mainly as an advertisement source. Therefore, the main stakeholders in our research are the marketing managers.
The results are provided to the marketing departments of both companies so that they can analyze what their strengths and weaknesses are. It also helps them to design their business strategy, both in terms of internal and external appraisals.
5. CONCLUSION
Using shared resources is a cost-effective approach. Cloud and Utility computing provide an infrastructure where users can use these shared resources. Both paradigms offer services on demand. In Cloud computing, resources are allocated, de-allocated, configured and reconfigured dynamically within pre-defined rules set by the service provider; in contrast, Utility computing denotes a separation between service provider and consumer, with the provision of having the desired set of rules defined by the user. After comparing two aspects of the two paradigms, scalability and availability, we proposed our own controller structure, OSAC. OSAC is a generic controller and aims to produce better throughput than the current implementation. The experiment showed that the availability of resources is better with the OSAC architecture. There remains considerable room for future researchers to address these three issues fully, as well as other related concerns.
REFERENCES
[1] A. A. Nauman, A. Aslam, B. Garapati, "Cloud Computing Versus Utility Computing: A Comparative Study of Availability, Scalability and Security Aspects of the Two Paradigms," Department of Computer Science, Blekinge Institute of Technology, Ronneby, Sweden.
[2] M. J. Buco, R. N. Chang, L. Z. Luan, C. Ward, J. L. Wolf, P. S. Yu, "Utility computing SLA management based upon business objectives," IBM Systems Journal, vol. 43, no. 1, 2004.
[3] M. Milenkovic, S. H. Robinson, R. C. Knauerhase, D. Barkai, S. Garg, V. Tewari, T. A. Anderson, M. Bowman, "Toward Internet Distributed Computing," vol. 36, issue 5, pp. 38-46, May 2003.
[4] R. Buyya, D. Abramson, J. Giddy, "A Case for Economy Grid Architecture for Service Oriented Grid Computing," Proceedings of the 10th Heterogeneous Computing Workshop (HCW 2001), April 23-27, 2001.
[5] C. S. Yeo, M. D. Assuncao, J. Yu, A. Sulistio, S. Venugopal, M. Placek, R. Buyya, "Utility Computing and Global Grids," Grid Computing and Distributed Systems (GRIDS) Laboratory, Department of Computer Science and Software Engineering, The University of Melbourne, VIC 3010, Australia.
[6] I. Foster, "What is the Grid? A Three Point Checklist," available at http://www-fp.mcs.anl.gov/~foster/Articles/WhatIsTheGrid.pdf, [accessed 8 May 2009].
[7] J. W. Ross, G. Westerman, "Preparing for utility computing: The role of IT architecture and relationship management," IBM Systems Journal, vol. 43, no. 1, 2004.
[8] G. Perry, "How Cloud & Utility Computing Are Different," http://gigaom.com/2008/02/28/how-cloud-utility-computing-are-different/, [accessed 12 May 2009].
[9] L. Dignan, "Behind the Myths of Cloud Computing," http://seekingalpha.com/article/71589-behind-the-myths-of-cloud-computing, [accessed 16 April 2009].
[10] A. Weiss, "Computing in the Clouds," vol. 11, no. 4, ACM, New York, NY, USA, 2007.
[11] G. Boss, P. Malladi, D. Quan, L. Legregni, H. Hall, "Cloud Computing," IBM Corporation, 2007.
[12] J. Governor, "15 Ways to Tell It's Not Cloud Computing," http://redmonk.com/jgovernor/2008/03/13/15-ways-to-tell-its-not-cloud-computing/, [accessed 9 May 2009].
[13] 3tera, "Cloudware - Cloud Computing without Compromise," available at http://3tera.com/Cloud-computing/, [accessed 19 May 2009].
[14] T. Eilam et al., "Using a utility computing framework to develop utility systems," IBM Systems Journal, vol. 43, no. 1, 2004.
[15] P. Padala, X. Zhu, M. Uysal, Z. Wang, S. Singhal, A. Merchant, K. Salem, "Adaptive Control of Virtualized Resources in Utility Computing Environments," ACM SIGOPS Operating Systems Review, vol. 41, no. 3, June 2007.
[16] V. Srinivasan, "The Embedded Quality Framework - A Strategic Approach to Managing Quality of Web Software Products," Brassring Inc., Waltham, MA 02453, U.S.A.
[17] S. Graupner, J. Pruyne, S. Singhal, "Making the utility data center a power station for the enterprise grid," Technical Report HPL-2003-53, Hewlett Packard Laboratories, March 2003.
[18] C. Wohlin, P. Runeson, M. Höst, M. C. Ohlsson, B. Regnell, A. Wesslén, Experimentation in Software Engineering: An Introduction, Kluwer Academic Publishers.
[19] Web Center for Social Research Methods, "Internal Validity," http://www.socialresearchmethods.net/kb/intval.php, [accessed 10 May 2009].
[20] Web Center for Social Research Methods, "External Validity," http://www.socialresearchmethods.net/kb/external.php, [accessed 10 May 2009].
[21] Web Center for Social Research Methods, "Descriptive Statistics," http://www.socialresearchmethods.net/kb/statdesc.php, [accessed 10 May 2009].
[22] Dictionary.com, http://dictionary.reference.com, [accessed 10 May 2009].
[23] A. B. Bondi, "Characteristics of scalability and their impact on performance," in Proceedings of the 2nd International Workshop on Software and Performance, Ottawa, Canada, ACM Press, New York, NY, USA, 2000, pp. 195-203.
http://www.researchgate.net/post/What_are_the_differences_between_grid_computing_and_cloud_computing

Irshad Ahmad, Yala Islamic University
Grid and Cloud are two terms used in computing to refer to two types of resource sharing techniques where
multiple computing devices and usually the Internet are involved.

Grid computing
Grid computing is a form of distributed computing where a virtual computing system is assembled from many loosely connected computing devices to perform a large computing task. They are loosely connected because they can be from multiple administrative realms, coupled to effectively combine computing resources to reach a common goal. The goal is typically a single problem, usually a scientific or technical problem, that requires a large amount of processing to be performed on a huge data set.

Cloud Computing
Cloud computing refers to any computing services provided by hosted systems over the Internet. The service provided can be an infrastructure, platform or software service. The salient feature of cloud computing is that the service is fully managed by the service provider, and the user needs only minimal facilities, like a personal computer and the Internet, to utilize the service. Because the service providers host the services, the services are presented to the users in a simple way, where they do not need to understand how the services are provided.

Grid computing is a form of distributed system where many loosely connected computers are combined with the aim of supplying computing resources to reach a common goal.

Cloud computing is any computing service managed and provided by a service provider over the Internet.
