(A step-by-step installation guide for OpenStack Essex, an open source cloud operating system, on Ubuntu 12.04)
June 2012
Contact
ViSolve, Inc. 4010, Moorpark Avenue, #205 San Jose, California 95117 (408) 666 4320 cloud@visolve.com www.visolve.com
Software Development / Support Lab: # 1, Rukmani Nagar, Ramanathapuram, Coimbatore - 641 045, TN. INDIA.
Table of Contents
1 Introduction
2 OpenStack Open Source Cloud
  2.1 OpenStack Overview
  2.2 Why OpenStack?
  2.3 OpenStack Components
    2.3.1 OpenStack Compute Infrastructure (Nova)
      2.3.1.1 Components of OpenStack Compute
    2.3.2 OpenStack Imaging Service (Glance)
    2.3.3 OpenStack Identity Service (Keystone)
      2.3.3.1 Components of Identity Service
    2.3.4 OpenStack Administrative Web-Interface (Horizon)
    2.3.5 OpenStack Storage Infrastructure (Swift)
      2.3.5.1 Components of Swift
  2.4 OpenStack Architecture
3 OpenStack Installation
  3.1 OS Installation
  3.2 Network Configuration
  3.3 Database Installation
    3.3.1 Creating Databases
  3.4 Keystone Installation
    3.4.1 Installing and Configuring Keystone
    3.4.2 Creating Tenants
    3.4.3 Creating Users
    3.4.4 Creating Roles
    3.4.5 Listing Tenants, Users and Roles
    3.4.6 Adding Roles to Users in Tenants
    3.4.7 Creating Services
    3.4.8 Creating Endpoints
    3.4.9 Testing Keystone
  3.5 Glance Installation
    3.5.1 Glance Configuration
    3.5.2 Testing Glance
  3.6 Nova Installation
    3.6.1 Nova Configuration
    3.6.2 Testing Nova
  3.7 Dashboard Installation
  3.8 Uploading Linux Image
4 Icinga Open Source Monitoring
  4.1 Configuring Icinga Server
    4.1.1 Pre-requisites
    4.1.2 Required Packages
    4.1.3 Icinga Installation and Configuration
    4.1.4 Installing Nagios plug-in for monitoring
    4.1.5 Installation of NRPE (Nagios Remote Plug-in Executor)
  4.2 Configuring the Virtual Machines for Monitoring
    4.2.1 Installation of Nagios Plug-in for monitoring
    4.2.2 Installation of NRPE (Nagios Remote Plug-in Executor)
  4.3 Configuring Virtual Machines on Icinga Server
Copyright 2012 ViSolve Inc. All rights reserved.
1 Introduction
Cloud computing has enabled efficient use of computing, storage and network resources, and has drastically reduced total cost of ownership. Open source cloud solutions have driven the cost down much further, creating the opportunity to deliver functionally improved IT services to the business and to respond faster to market needs. Corporations around the world are migrating their business to open source clouds, the leading one being OpenStack, the industry-standard open source cloud operating system. This document provides step-by-step instructions for installing OpenStack so that organizations can deploy and manage their own cloud. Also covered in detail are the installation steps for Icinga, an open source monitoring tool, to monitor the cloud data center.
2.3.1.1 Components of OpenStack Compute
2.3.1.1.1 API Server (nova-api)
The API server provides an interface for the outside world to interact with the cloud infrastructure, and it is the only component the outside world uses to manage the infrastructure. Management is done through web service calls using the EC2 API. The API server, in turn, communicates with the relevant components of the cloud infrastructure through the message queue. As an alternative to the EC2 API, OpenStack also provides a native API called the "OpenStack API".

2.3.1.1.2 Message Queue (RabbitMQ Server)
OpenStack components communicate with one another asynchronously through the message queue using AMQP (Advanced Message Queuing Protocol).

2.3.1.1.3 Compute Worker (nova-compute)
Compute workers manage the instance life cycle. They receive instance life cycle requests via the message queue and carry out the operations.

2.3.1.1.4 Network Controller (nova-network)
The Network Controller deals with the network configuration of host machines. It performs operations such as allocating IP addresses, configuring VLANs for projects, implementing security groups and configuring networks for compute nodes.

2.3.1.1.5 Volume Worker (nova-volume)
Volume workers manage LVM-based instance volumes. They perform volume-related functions such as creation, deletion, attaching a volume to an instance, and detaching a volume from an instance. Volumes provide persistent storage for instances, since the root partition is non-persistent and any changes made to it are lost when an instance is terminated. When a volume is detached from an instance, or when the instance it is attached to is terminated, it retains the data stored on it. This data can be accessed by reattaching the volume to the same instance or by attaching it to another instance.

2.3.1.1.6 Scheduler (nova-scheduler)
The scheduler maps nova-API calls to the appropriate OpenStack components. It runs as a daemon named nova-schedule and picks a compute server from the pool of available resources according to the scheduling algorithm in place.
2.3.3.1.1 Endpoints
Every OpenStack service (Nova, Swift, Glance) runs on a dedicated port and on a dedicated URL (host); these are called endpoints.

2.3.3.1.2 Regions
A region defines a dedicated physical location inside a data center. In a typical cloud setup, most if not all services are distributed across data centers or servers, which are also called regions.

2.3.3.1.3 User
A user is an account that authenticates with Keystone and can be assigned to tenants with particular roles.

2.3.3.1.4 Service
Each component that connects to or is administered via Keystone can be called a service. For example, Glance is a Keystone service.

2.3.3.1.5 Role
In order to restrict what a particular user can do inside the cloud infrastructure, it is important to have a role associated with the user.

2.3.3.1.6 Tenant
A tenant is a project that groups the service endpoints and the roles associated with the users who are members of it.
2.3.5.1 Components of Swift
- Swift Account
- Swift Container
- Swift Object
- Swift Proxy
- The Ring
3 OpenStack Installation
3.1 OS Installation
Install the 64-bit version of Ubuntu Server 12.04, keeping the following configurations in mind.
1. During installation, select only openssh-server in the package selection menu.
2. To run nova-volume on this server, you must have a dedicated partition. So, choose the manual partitioning scheme while installing Ubuntu Server and create a dedicated partition with adequate space for this purpose. Also ensure that the partition type is set to Linux LVM.
3. Update the machine using the following commands.
   a. # apt-get update
   b. # apt-get upgrade
4. Install bridge-utils:
   a. # apt-get install bridge-utils
2. Create the root password for MySQL. The password used in this guide is "openstack".
3. Change the bind address from 127.0.0.1 to 0.0.0.0 in /etc/mysql/my.cnf so that the line reads:
bind-address = 0.0.0.0
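The same edit can be made non-interactively with sed. A minimal sketch, demonstrated here on a temporary copy so the result can be previewed safely (the real file is /etc/mysql/my.cnf):

```shell
# Work on a temporary stand-in for /etc/mysql/my.cnf.
CNF=$(mktemp)
printf '[mysqld]\nbind-address = 127.0.0.1\n' > "$CNF"

# Rewrite the bind-address line in place; apply the same sed to the real file.
sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' "$CNF"

grep '^bind-address' "$CNF"   # bind-address = 0.0.0.0
rm -f "$CNF"
```

After editing the real file, restart MySQL (service mysql restart) so the new bind address takes effect.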
2. Open /etc/keystone/keystone.conf and change the admin_token = ADMIN line so that it looks like the following:
admin_token = admin
3. Since MySQL database is used to store keystone configuration, replace the following line in /etc/keystone/keystone.conf
connection = sqlite:////var/lib/keystone/keystone.db
with
connection = mysql://keystoneuser:keystonepasswd@<server IP>/keystone
4. Restart Keystone
# service keystone restart
6. Export the environment variables that are required while working with OpenStack.

# export SERVICE_ENDPOINT="http://localhost:35357/v2.0"
# export SERVICE_TOKEN=admin

7. You can also add these variables to ~/.bashrc so that you do not have to export them every time.
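A sketch of persisting the two variables, assuming Bash and the default ~/.bashrc (adjust the endpoint host if Keystone runs on another machine):

```shell
# Append the exports to ~/.bashrc so every new shell picks them up.
cat >> ~/.bashrc <<'EOF'
export SERVICE_ENDPOINT="http://localhost:35357/v2.0"
export SERVICE_TOKEN=admin
EOF

# Load them into the current shell immediately.
. ~/.bashrc
echo "$SERVICE_TOKEN"
```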
2. List Roles
# keystone role-list
+----------------------------------+--------+
|                id                |  name  |
+----------------------------------+--------+
| a5119d9a0ca44a5e8e13253119aa13ba | admin  |
| d09ba199438548538712da783c2ded5b | Member |
+----------------------------------+--------+
3. List Users
# keystone user-list
+----------------------------------+---------+-------+--------+
|                id                | enabled | email |  name  |
+----------------------------------+---------+-------+--------+
| 5cbdf67853584b699be0e09943d194ba |   True  |  None | glance |
| 9318eb193d2f4a2c9a9169fc532dcac7 |   True  |  None | admin  |
| b16b2b99cf4d4cb6916611455de8585b |   True  |  None |  nova  |
+----------------------------------+---------+-------+--------+
Note: The values of the 'id' column will be required later when we associate a role with a user in a particular tenant.
Note: The required 'id' values can be obtained from the commands keystone user-list, keystone tenant-list and keystone role-list.
1. Add the 'admin' role to the user 'admin' of the tenant 'admin'.
# keystone user-role-add --user b3de3aeec2544f0f90b9cbfe8b8b7acd --role 2bbe305ad531434991d4281aaaebb700 --tenant_id 7f95ae9617cd496888bc412efdceabfd
2. Add a role of 'admin' to the users 'nova' and 'glance' of the tenant 'service'.
# keystone user-role-add --user ce8cd56ca8824f5d845ba6ed015e9494 --role 2bbe305ad531434991d4281aaaebb700 --tenant_id c7970080576646c6959ee35970cf3199
# keystone user-role-add --user 518b51ea133c4facadae42c328d6b77b --role 2bbe305ad531434991d4281aaaebb700 --tenant_id c7970080576646c6959ee35970cf3199
3. The 'Member' role is used by Horizon. So add the 'Member' role accordingly.
# keystone user-role-add --user b3de3aeec2544f0f90b9cbfe8b8b7acd --role d983800dd6d54ee3a1b1eb9f2ae3291f --tenant_id 7f95ae9617cd496888bc412efdceabfd
Note: Replace the id appropriately as listed by keystone user-list, keystone role-list, and keystone tenant-list.
2. Some of the services that we create are nova-compute, nova-volume, glance, swift, keystone and ec2.
# keystone service-create --name nova --type compute --description 'OpenStack Compute Service'
# keystone service-create --name volume --type volume --description 'OpenStack Volume Service'
# keystone service-create --name glance --type image --description 'OpenStack Image Service'
# keystone service-create --name keystone --type identity --description 'OpenStack Identity Service'
# keystone service-create --name ec2 --type ec2 --description 'EC2 Service'
3. Each of the services that have been created above will be identified with a unique id which can be obtained from the following command:
# keystone service-list
+----------------------------------+----------+----------+----------------------------+
|                id                |   name   |   type   |        description         |
+----------------------------------+----------+----------+----------------------------+
| 040910d0ebbb4b60a30b470dfe729370 |  volume  |  volume  |  OpenStack Volume Service  |
| 1bbe94159fb14f09925f075abb046b2d |   ec2    |   ec2    |        EC2 Service         |
| 2ac838cec5974afabc6aab8d537dcdb6 |  glance  |  image   |  OpenStack Image Service   |
| 6d6603460f1c4d6b9874b3d313ba71f4 |   nova   | compute  | OpenStack Compute Service  |
| 97f17ae143184d8597f4d34746c3c58c | keystone | identity | OpenStack Identity Service |
+----------------------------------+----------+----------+----------------------------+
Note: The 'id' will be used in defining the endpoint for that service.
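As a hedged sketch of the Essex-era endpoint syntax, an endpoint for the nova service could be created as below. The service id is a placeholder taken from the `keystone service-list` output above (yours will differ), and the command is shown as a dry run; remove the leading `echo` to execute it against a live Keystone.

```shell
# Id of the nova service from `keystone service-list` (placeholder -- yours will differ).
SERVICE_ID=6d6603460f1c4d6b9874b3d313ba71f4
IP="<server IP>"   # placeholder, as elsewhere in this guide

# Dry run: print the endpoint-create command instead of executing it.
echo keystone endpoint-create \
    --region RegionOne \
    --service_id "$SERVICE_ID" \
    --publicurl "http://$IP:8774/v1.1/%(tenant_id)s" \
    --adminurl "http://$IP:8774/v1.1/%(tenant_id)s" \
    --internalurl "http://$IP:8774/v1.1/%(tenant_id)s"
```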
# curl -d '{"auth": {"tenantName": "adminTenant", "passwordCredentials":{"username": "adminUser", "password": "secretword"}}}' -H "Content-type: application/json" http://<server IP>:35357/v2.0/tokens | python -m json.tool
If your tests have passed, and you are getting the token returned as you expected, you are officially on your way to having an OpenStack cloud!
2. These values have to be modified to match the configurations made earlier. The admin_tenant_name will be 'service', the admin_user 'glance' and the admin_password 'glance'. After editing, the lines should read as follows:
admin_tenant_name = service
admin_user = glance
admin_password = glance
3. Now open /etc/glance/glance-registry-paste.ini and make similar changes at the end of the file.
admin_tenant_name = service
admin_user = glance
admin_password = glance
4. Open the file /etc/glance/glance-registry.conf and edit the line which contains the option "sql_connection =" to this:
sql_connection = mysql://glanceuser:glancepasswd@<server IP>/glance
5. In order to tell glance to use keystone for authentication, add the following lines at the end of the file.
[paste_deploy]
flavor = keystone
6. Open /etc/glance/glance-api.conf and add the following lines at the end of the document.
[paste_deploy]
flavor = keystone
# restart glance-api
# restart glance-registry
The above commands will not return any output. With Glance configured properly and using Keystone as the authentication mechanism, we can now upload images to Glance.
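A first upload makes a quick end-to-end test. A sketch using the Essex-era `glance add` syntax, where the image file name and metadata are placeholder assumptions; shown as a dry run (remove the `echo` wrappers to execute):

```shell
# Placeholder image file; any small qcow2 or raw image will do.
IMG=ubuntu-12.04-server.img

# Dry run: print the upload and listing commands.
echo "glance add name=test-image is_public=true container_format=bare disk_format=qcow2 < $IMG"
echo "glance index"
```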
--nova_url=http://<server IP>:8774/v1.1/
--routing_source_ip=<server IP>
--glance_api_servers=<server IP>:9292
--image_service=nova.image.glance.GlanceImageService
--sql_connection=mysql://novauser:novapasswd@<server IP>/nova
--ec2_url=http://<server IP>:8773/services/Cloud
--keystone_ec2_url=http://<server IP>:5000/v2.0/ec2tokens
--api_paste_config=/etc/nova/api-paste.ini
# vnc specific configuration
--novnc_enabled=true
--novncproxy_base_url=http://<server IP>:6080/vnc_auto.html
--vncserver_proxyclient_address=<server IP>
--vncserver_listen=<server IP>
# network specific settings
--network_manager=nova.network.manager.FlatDHCPManager
--public_interface=eth0
--flat_interface=eth0
--flat_network_bridge=br100
--fixed_range=192.168.4.xx/27
--floating_range=172.16.1.xx/24
--network_size=32
--flat_network_dhcp_start=192.168.4.xx
--flat_injected=False
--force_dhcp_release
--iscsi_helper=tgtadm
2. Create nova-volume using the following steps.
   a. List the available partitions:

# fdisk -l
First sector (325793792-976773167, default 325793792):
Using default value 325793792
Last sector, +sectors or +size{K,M,G} (325793792-976773167, default 976773167): +100G
Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): L
Hex code (type L to list codes): 8e
Changed system type of partition 3 to 8e (Linux LVM)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
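nova-volume expects the LVM partition to belong to a volume group named nova-volumes. A sketch, assuming the partition created above came up as /dev/sda3 (substitute your device); shown as a dry run so it can be previewed before touching the disk (remove the `echo` wrappers to apply):

```shell
DEV=/dev/sda3   # assumed device name for the Linux LVM partition created above

# Dry run: print the LVM commands that create the nova-volumes volume group.
echo pvcreate "$DEV"
echo vgcreate nova-volumes "$DEV"
```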
3. Change the ownership of the /etc/nova folder and permissions for /etc/nova/nova.conf
# chown -R nova:nova /etc/nova
# chmod 644 /etc/nova/nova.conf
4. Open /etc/nova/api-paste.ini and at the end of the file, edit the following lines:
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
These values have to be modified to match the configurations made earlier. The admin_tenant_name will be 'service', the admin_user 'nova' and the admin_password 'nova'. After editing, the lines should read as follows:
admin_tenant_name = service
admin_user = nova
admin_password = nova
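Once the configuration files are in place, the nova database is typically synced and all nova services restarted before testing. A sketch, where the service names are the Ubuntu 12.04 package defaults; shown as a dry run (remove the `echo` wrappers to apply):

```shell
# Dry run: print each command instead of executing it.
echo nova-manage db sync
for svc in nova-api nova-network nova-scheduler nova-compute nova-volume; do
    echo service "$svc" restart
done
```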
2. If all your services are in an enabled state and everything is running, you are ready to issue your first command to your cloud.
3. The following three nova commands will give you clear feedback on whether your cloud is responding to your API calls.
# nova list
+----+------+--------+----------+
| ID | Name | Status | Networks |
+----+------+--------+----------+
+----+------+--------+----------+
# nova image-list
+----+--------------------------------------+--------+
| ID | Name                                 | Status |
+----+--------------------------------------+--------+
+----+--------------------------------------+--------+

# nova flavor-list
+----+-----------+-----------+------+----------+-------+------------+----------+
| ID | Name      | Memory_MB | Swap | Local_GB | VCPUs | RXTX_Quota | RXTX_Cap |
+----+-----------+-----------+------+----------+-------+------------+----------+
| 1  | m1.tiny   | 512       | 0    | 0        | 1     | 0          | 0        |
| 2  | m1.small  | 2048      | 0    | 20       | 1     | 0          | 0        |
| 3  | m1.medium | 4096      | 0    | 40       | 2     | 0          | 0        |
| 4  | m1.large  | 8192      | 0    | 80       | 4     | 0          | 0        |
| 5  | m1.xlarge | 16384     | 0    | 160      | 8     | 0          | 0        |
+----+-----------+-----------+------+----------+-------+------------+----------+
3. Open a browser and enter the IP address of the OpenStack server. You should see the OpenStack login prompt. Log in with username admin and password admin.
2. Verify if the image has been uploaded by issuing the following command.
# glance index
[OR]
# nova image-list
4.1.1 Pre-requisites
Before you proceed with installing and configuring Icinga, make sure the following packages are installed on the machine on which the Icinga server will be configured.
# apt-get install apache2
# apt-get install build-essential
# apt-get install libgd2-xpm-dev libjpeg62 libjpeg62-dev libpng12 libpng12-dev
# apt-get install snmp libsnmp5-dev openssl libssl-dev
Note: Sometimes package names change between releases of the same distribution. If you get a message that one of the packages cannot be found, use the search option of your package manager to find the new name.
5. Run the Icinga configure script and compile the Icinga source code.

# ./configure --prefix=/opt/icinga --with-icinga-user=daemon --with-icinga-group=daemon --with-httpd-conf=/opt/lampp/etc
# make all
Note: Make sure there are no errors while compiling. In case there are errors, install the required packages and recompile.
6. Install the binaries, init script and sample configuration files, and set permissions on the external command directories.
# make install
# make install-init
# make install-config
# make install-commandmode
# make install-webconf
8. Create an Admin account for logging into the Icinga Web Interface.
# cd /opt/lampp/bin/
# ./htpasswd -c /opt/icinga/etc/htpasswd.users icingaadmin
New password:
Re-type new password:
Adding password for user icingaadmin
Note: If you need to change the login details later, use the same command.
9. Start Apache.
# cd /opt/lampp/
# ./lampp start apache
XAMPP: Starting Apache with SSL (and PHP5)...

10. Check that Apache is working by opening the following URL in a browser: http://<ServerIP>
11. Now we need to start Icinga, but first we need to check whether Icinga has been compiled properly and all the configurations are in place.
# cd /opt/icinga/
12. Before starting Icinga, verify the configuration:
# /opt/icinga/bin/icinga -v /opt/icinga/etc/icinga.cfg
If everything is OK and there are no serious problems, the following message will be displayed.
Total Warnings: 0 Total Errors: 0
14. Set the appropriate permission for the Icinga directories mentioned below.
# chmod 777 /opt/
# chmod 777 /opt/icinga/
# chmod 777 /opt/icinga/var/
# chmod 777 /opt/icinga/var/rw/
# chmod 777 /opt/icinga/var/rw/icinga.cmd
15. You should now be able to access the Icinga Web Interface at the URL below. You will be prompted for the username (icingaadmin) and password specified earlier. http://<serverIP>/icinga
2. Create a Nagios user and extract the Nagios plug-in source code tarball.
# useradd nagios
# tar -zxvf nagios-plugins-1.4.15.tar.gz
# cd nagios-plugins-1.4.15
3. Compile and install the plug-ins by changing the installation directory to /opt/icinga/
# ./configure --prefix=/opt/icinga/ --with-nagios-user=daemon --with-nagios-group=daemon
# make
# make install
2. Download the Nagios plug-ins.

# wget http://prdownloads.sourceforge.net/sourceforge/nagiosplug/nagios-plugins-1.4.15.tar.gz

3. Extract the Nagios plug-in source code tarball.
# tar -zxvf nagios-plugins-1.4.15.tar.gz
# cd nagios-plugins-1.4.15
4. Compile and install the plug-ins by changing the installation directory to /opt/icinga/
# ./configure --prefix=/opt/icinga/ --with-nagios-user=daemon --with-nagios-group=daemon
# make
# make install
# chown -R daemon:daemon /opt/icinga/
service nrpe
{
        flags           = REUSE
        socket_type     = stream
        port            = 5666
        wait            = no
        user            = nagios
        group           = nagios
        server          = /opt/icinga/bin/nrpe
        server_args     = -c /opt/icinga/etc/nrpe.cfg --inetd
        log_on_failure  += USERID
        disable         = no
        only_from       = 127.0.0.1,<ServerIP>
}
5. Check if the following private service commands are defined in the nrpe.cfg file.
command[check_users]=/opt/icinga/libexec/check_users -w 5 -c 10 command[check_load]=/opt/icinga/libexec/check_load -w 15,10,5 -c 30,25,20 command[check_disk]=/opt/icinga/libexec/check_disk -w 20% -c 10% -p /dev/sda1 command[check_zombie_procs]=/opt/icinga/libexec/check_procs -w 5 -c 10 -s Z command[check_total_procs]=/opt/icinga/libexec/check_procs -w 150 -c 200 command[check_swap]=/opt/icinga/libexec/check_swap -w 20% -c 10% command[check_memory]=/opt/icinga/libexec/check_mem.pl -u -w 80 -c 90
Note: Any plug-in that is used in the command lines must reside on the machine that this daemon runs on, and the definitions must match the argument format the plug-ins expect. The commands above assume that the plug-ins are installed in the /opt/icinga/libexec directory.
6. Run NRPE as a service by adding the following line to /etc/services.
# vim /etc/services
nrpe    5666/tcp
7. Restart the xinetd service and check whether NRPE has started
# /etc/init.d/xinetd restart
# netstat -a | grep nrpe
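From the Icinga server you can confirm that the NRPE daemon on each VM answers before wiring it into the configuration. A sketch, where the VM addresses are placeholders; shown as a dry run (remove the `echo` to execute; a bare check_nrpe call prints the NRPE version when the daemon answers):

```shell
# Placeholder addresses of the VMs monitored by this Icinga server.
for vm in 192.168.4.10 192.168.4.11; do
    # Dry run: print the check command for each VM.
    echo /opt/icinga/libexec/check_nrpe -H "$vm"
done
```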
Example: machine1.cfg
# Define a service to "ping" the local machine
define service{
        use                     local-service
        host_name               machine1
        service_description     PING
        check_command           check_ping!100.0,20%!500.0,60%
        }

# Define a service to check the disk space of the root partition
define service{
        use                     local-service
        host_name               machine1
        service_description     Root Partition
        check_command           check_nrpe!check_disk
        }

# Define a service to check the number of currently logged in users
define service{
        use                     local-service
        host_name               machine1
        service_description     Current Users
        check_command           check_local_users!20!50
        }

# Define a service to check the number of currently running processes
define service{
        use                     local-service
        host_name               machine1
        service_description     Total Processes
        check_command           check_nrpe!check_total_procs
        }

# Define a service to check the load on the local machine
define service{
        use                     local-service
        host_name               machine1
        service_description     Current Load
        check_command           check_local_load!5.0,4.0,3.0!10.0,6.0,4.0
        }

# Define a service to check the swap usage
define service{
        use                     local-service
        host_name               machine1
        service_description     Swap Usage
        check_command           check_nrpe!check_swap
        }

# Define a service to check SSH on the local machine
define service{
        use                     local-service
        host_name               machine1
        service_description     SSH
        check_command           check_ssh
        notifications_enabled   0
        }

# Define a service to check HTTP
define service{
        use                     local-service
        host_name               machine1
        service_description     HTTP
        check_command           check_http
        notifications_enabled   0
        }
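The service definitions above all reference the host machine1, so machine1.cfg must also contain a matching host definition before Icinga will load it. A minimal sketch, assuming the stock linux-server template from the sample configuration (the address is a placeholder for the VM's IP):

```cfg
define host{
        use                     linux-server
        host_name               machine1
        alias                   machine1
        address                 192.168.4.xx
        }
```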
Note: Refer to the localhost.cfg file present in the same location.
1. Add the following lines to the commands.cfg file present in the same location.
# check_nrpe definition
define command{
        command_name    check_nrpe
        command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
        }
2. Add the configuration file path of the monitoring machines in the icinga.cfg configuration file.
# vim icinga.cfg
cfg_file=/opt/icinga/etc/objects/machine1.cfg
3. Restart Icinga for the configuration changes to take effect. Kill the existing Icinga process
# killall icinga [OR] kill -9 PID
Run the below command to verify if the Icinga configurations specified are correct.
# /opt/icinga/bin/icinga -v /opt/icinga/etc/icinga.cfg
If there are no errors displayed in the verification step, start Icinga using the below command.
# /opt/icinga/bin/icinga -d /opt/icinga/etc/icinga.cfg # ps -ef|grep icinga
5 Conclusion
The detailed installation instructions in this document should have helped you install OpenStack to deploy and manage your cloud. The installation and integration of open source Icinga with OpenStack will help you monitor your critical cloud data center to ensure service availability and business continuity. ViSolve provides cloud deployment, customization, management and monitoring as a service. As part of this service, ViSolve can provide commercial support for the installation and deployment of OpenStack and Icinga. For several years ViSolve has been helping SMEs and Fortune 100 companies deploy cloud as part of their corporate strategy.
About ViSolve
ViSolve is a leading contributor to open source. For over a decade, ViSolve has been advocating and promoting open source technology as the solution for future IT needs. ViSolve has worked on several mission-critical projects for worldwide enterprise customers and has been providing service and support with a focus on leading-edge open source technologies. For years ViSolve has been deploying, managing and monitoring clouds using open source OpenStack and Xen Cloud Platform (XCP), as well as proprietary solutions like VMware vCloud and HP Cloud Service Automation (CSA). Our partnerships with leading system vendors and global distributors in provisioning cloud infrastructure have made us intimately familiar with the internals of open source and proprietary cloud solutions. We understand the challenges, complexity and intricacies of live implementations, and the best practices to follow for a successful deployment. Deploy and manage a cloud for free in our demo environment at http://cloud.visolve.com. Feel free to send your feedback to cloud@visolve.com.

For more information:
Visit: www.visolve.com
Write to: cloud@visolve.com
Call: (408) 666 4320