System Prerequisites
1. A system running a supported OS (e.g., Mac OS X, Ubuntu Linux 12.04)
2. 8GB or more of system memory (RAM) is strongly recommended.
3. A user account with rights to install software on the system, if required
4. A working internet connection to download software and packages, on an unblocked
network.
$ mkdir labs
$ cd labs
7. Follow the documented instructions in the README.md of the repo
https://github.com/corefile/allin1-kilo to install the two VMs you need for the lab
environment. The router node provides network connectivity for the OpenStack
environment, mediating between the OpenStack environment and your local host.
8. After the virtual machines have started, you can check the status of the VMs using the
following command from the home directory of each VM (where the VM’s Vagrantfile
resides):
$ vagrant status
$ vagrant status
Current machine states:
9. To log into a virtual machine in the lab environment, cd to the home directory of the VM,
where the Vagrantfile resides. Then execute the following command:
$ vagrant ssh
10. To suspend or halt VMs in the lab environment, use the following commands:
$ vagrant suspend
$ vagrant halt
$ vagrant up
11. To destroy the VM, in order to switch to a different environment or reinitialize, use the
following command (but not right now):
$ vagrant destroy
12. As suggested in the project README, the vagrant sandbox feature provided by the
sahara plugin can be a useful alternative to starting over completely from scratch.
$ vagrant sandbox off # exits sandbox mode
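The typical sandbox workflow, per the sahara plugin's documented commands, looks like this:

```shell
$ vagrant sandbox on        # enter sandbox mode; subsequent VM changes are provisional
$ vagrant sandbox commit    # keep the changes made since entering sandbox mode
$ vagrant sandbox rollback  # discard changes and return to the last committed state
```

Rolling back restores the VM to its state when the sandbox was started or last committed, which is much faster than destroying and reprovisioning.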
13. With the lab VM(s) running, launch a web browser and load the following URL to view
the Horizon dashboard:
http://192.168.50.21/dashboard/
15. Look around! You are in the project view for the project ‘tenant1’. What pages do you
see? Do you know what they are for?
Note that the second script that you executed in the allin1-kilo node (runme2-vagrant.sh)
set up a sample project, user, and networking topology. It also started two tiny VMs in
the ‘tenant1’ project. Once this operation has completed, proving that the environment
can boot VMs, you may wish to suspend or delete these test VMs to conserve system
resources.
16. If you want to learn more about the RDO OpenStack distribution used for this course,
visit https://www.rdoproject.org/Main_Page. The OpenStack deployment was performed
with Packstack, which uses Puppet as an underlying technology.
http://192.168.50.21/dashboard/
http://192.168.50.21/dashboard/project/instances/
3. Click on the Launch Instance button and launch a new instance with the following
required settings:
Name: test1
Flavor: m1.nano
Instance count: 1
Boot source: Boot from image
Image Name: cirros
Hit the ‘Launch’ button to request the VM, and watch the Instances listing page update
as the VM is provisioned. This may take a little while, depending on your system.
4. When the instance is running, click on the instance name in the listing to bring up a
details page.
5. From the details page, click on the Log tab to view the instance’s console log. If you see
the CirrOS login prompt, then the instance has finished its OS boot cycle.
6. Click on the Console tab to launch a VNC console to the instance. When the console is
running in-browser, click into the window to log in. The CirrOS image is configured with
the following default login:
User: cirros
Password: cubswin:)
7. When you have logged in, confirm that the instance can connect out from the cloud.
NOTE: This operation will fail if your host computer is connecting to the internet via the
EMC corporate network, which drops ICMP traffic. Using an unfiltered network
connection should yield the expected behavior.
$ ping www.emc.com
$ vagrant ssh
9. When logged in, source the Keystone credentials for user1. Cat the contents of that file
to see what environment variables are being set.
$ source ./keystonerc_user1
10. Now use the Nova CLI to check the state of your instance:
$ nova list
11. When using the CLI, you can use the ‘--debug’ flag to see the exact REST request and
response stream to the OpenStack API. Try running the previous command with that
flag:
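Combining the flag with the previous command:

```shell
$ nova --debug list
```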
12. From the output of the list command, make a note of the instance’s ID. Become
superuser and change directories to the Nova log directory:
$ sudo su
# cd /var/log/nova
13. Use the instance’s ID to filter log entries for different services, e.g.
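A simple way to filter is with grep, substituting the ID you noted (a sketch; service log names may vary by deployment):

```shell
# grep <instance_id> /var/log/nova/nova-api.log
# grep <instance_id> /var/log/nova/nova-compute.log
```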
14. Exit the root shell, and terminate the current instance in debug mode:
# exit
$ nova --debug delete test1
15. In the debug mode output, examine the final HTTP DELETE request to the Nova API
endpoint. In the HTTP response, note that the API set a new header, x-compute-
request-id. Make a note of the value of this header, and use it to filter the Nova
logfiles:
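For example, again with grep (substitute the header value you noted):

```shell
# grep <x_compute_request_id_value> /var/log/nova/*.log
```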
Directed Practice
1. In module 2, we went through the provisioning process for a VM. What do you imagine
the process is for deleting a VM as you just did?
1. With the lab environment running, launch a web browser and load the following URL to
view the Horizon dashboard. If you are already logged in as ‘user1’, log out before
proceeding.
http://192.168.50.21/dashboard/
2. Log in to Horizon using the default admin user that was provisioned when the cloud was
deployed:
Username: admin
Password: admin
Note that in addition to the regular Project sidebar navigation tab, the ‘admin’ user sees
two additional sets of views: Admin and Identity.
3. Walk through each of the Admin views. Try exercising your power as an admin by
deleting the ‘m1.nano’ VM flavor. We’ll be recreating a similar flavor very soon!
4. Walk through each of the Identity views. Try creating a new user ‘user2’ for the ‘tenant1’
tenant. Log out and log in as ‘user2’. Is anything different from when you were logged in
earlier as ‘user1’?
5. Log in as the admin again, and disable the ‘user1’ user. What happens when you log out
and attempt to log in as ‘user1’? If you log in as ‘user2’, can you re-enable ‘user1’?
Directed Practice
1. For each admin screen, what OpenStack service do you think is providing the displayed
data and managing the displayed resources?
2. How would you check the details of the existing router in the ‘tenant1’ tenant, as an
admin?
3. What do you think the purpose is for the ‘services’ project? What are the users that are
members of that project?
4. RDO provides a useful command openstack-status to check the health of available
OpenStack services. With your environment set as the admin user, try running this
command and examine the output. (To learn how to log in as the cloud admin, read
ahead to steps 2 and 3 in the Module 4 exercises below.) Is there anything amiss?
$ vagrant ssh
2. Copy the admin credentials into the current directory and change ownership:
$ sudo cp /root/keystonerc_admin .
$ sudo chown vagrant ./keystonerc_admin
$ source ./keystonerc_admin
4. View the current list of OpenStack projects (aka tenants, in some documentation):
7. Try creating a new project in the cloud with the openstack CLI. Note that OpenStack
automatically assigns id numbers to tenants on creation.
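A likely form of the command (the ‘training’ project name matches later steps):

```shell
$ openstack project create training
```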
8. Try creating a new user ‘stacker’ with the openstack CLI, associated with the new
‘training’ tenant:
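For example (the password training123 matches the RC file used in later exercises):

```shell
$ openstack user create --project training --password training123 stacker
```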
9. Add the admin role to your new user, which will enable the user to administer the entire
cloud.
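The role assignment would look like this, mirroring the role-add syntax used later in the Cinder exercises:

```shell
$ openstack role add --project training --user stacker admin
```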
10. Now test your new user by using it to log into Horizon. You should be able to see
resources across the entire cloud and manage users and projects. Enjoy that feeling of
power for the moment, and then come to your senses and log out. Remove the admin
role from your user:
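The removal is symmetric to the earlier role assignment:

```shell
$ openstack role remove --project training --user stacker admin
```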
11. Log in to your Horizon as ‘stacker’ and confirm what views you have access to.
12. What we really want is for ‘stacker’ to be able to manage resources in both the ‘tenant1’
project as well as its default project ‘training’. To do this, add the ‘_member_’ role to the
user for the ‘tenant1’ project:
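A likely form of the command:

```shell
$ openstack role add --project tenant1 --user stacker _member_
```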
13. Log in to Horizon as ‘stacker’ to verify that you can now manage the ‘training’ and
‘tenant1’ projects.
3. View the consolidated catalog of services, with all associated endpoint URLs:
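The unified CLI provides this view directly (the same command is used later with the downloaded RC file):

```shell
$ openstack catalog list
```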
Directed Practice
1. Try logging into Horizon as the admin tenant and working through the same general
exercises: create a new project. Update the roles on the project to add your ‘stacker’
user as a member. Try logging in as ‘stacker’ and see how your view changes.
2. Find where the Keystone catalog is echoed in Horizon.
3. Log in as the ‘stacker’ user and download your project RC file for the ‘training’ project
via the Compute->Access&Security page. These will be the credentials that you will use
for a number of the following exercises. As noted in the lecture, Keystone can also
authenticate a user using EC2-style credentials.
4. Extra credit: Try installing the OpenStack python-*client tools locally, so that you don’t
need to SSH into the allinone node to run commands. On a Mac, this could be as easy
as
$ sudo -H easy_install pip # assuming pip is not installed
$ sudo -H pip install python-openstackclient
See the documentation online for CLI installation. If you encounter an exception when
running a CLI after installation, such as ‘Exception("Versioning for this project requires
either an sdist"…’, then you may need to upgrade the following Python package:
5. To try out your new command line tools, source the RC file you downloaded from
Horizon and fire away:
$ source training-openrc.sh
$ openstack catalog list
$ vagrant ssh
$ source ./keystonerc_admin
$ nova flavor-list
4. Add a custom VM flavor ‘m1.nano’, so that you can spin up more than one instance
without QEMU coming up short on available RAM. The CirrOS image only requires
64MB RAM and 1 vCPU to run, so we’ll use that.
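A plausible invocation, giving the flavor 64MB RAM, no ephemeral disk, and 1 vCPU (argument order is name, ID, RAM in MB, disk in GB, vCPUs):

```shell
$ nova flavor-create m1.nano auto 64 0 1
```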
5. If you haven’t already, create a local RC file for your ‘stacker’ user, using information
from the credentials you downloaded from Horizon, but specifying the original ‘tenant1’
tenant. Remember your ‘stacker’ user can access both this project and its default
‘training’ project, but we need to tell OpenStack which one we are examining:
$ cat ./keystonerc_stacker
export OS_USERNAME=stacker
export OS_TENANT_NAME=tenant1
export OS_PASSWORD=training123
export OS_AUTH_URL=http://192.168.50.21:5000/v2.0/
export OS_REGION_NAME=RegionOne
export PS1='[\u@\h \W(stacker)]\$ '
$ source ./keystonerc_stacker
7. View the current list of available images, using the nova CLI. Note the name of the
Cirros test image.
$ nova image-list
8. Create a new keypair ‘stackerkey’ for VM access. This is really more relevant for fuller
Linux images, but note that the output should be captured immediately if you want to use
the keypair later:
$ nova keypair-list
10. Check the available networks in this project to associate instances with. Note the ID
value of the ‘int’ network.
$ nova network-list
11. Create a new instance using the ‘m1.nano’ flavor and the Cirros test image, referenced
by name or id from step 7. above, and associate a vNIC with the ID of the ‘int’ network:
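A plausible boot command, substituting the network ID noted in step 10 (the instance name testvm matches later steps):

```shell
$ nova boot --flavor m1.nano --image cirros --nic net-id=<int_net_id> testvm
```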
12. Confirm that your VM has successfully booted by checking the VM’s console log. Note
that the console log cannot be retrieved until the VM shows as ‘Running’ in Nova:
$ nova list
$ nova console-log testvm
13. You can also explore the VM via Horizon, including logging into the server via a virtual
console. Log in as the ‘stacker’ user and examine the ‘tenant1’ project’s network
topology. You should see your instance associated with the ‘int’ network.
http://192.168.50.21/dashboard/project/network_topology/
14. While in Horizon, navigate to the VNC console for your instance and try logging in. See if
you can ping out to the internet. (Remember, pings won’t work if your local host is
connected to the internet via the EMC corporate network.)
$ ping www.emc.com
15. Returning to the CLI, check for an available floating IP pool list, and allocate one to your
project:
$ nova floating-ip-pool-list
$ nova floating-ip-create ext
17. Confirm the state of your VM’s fixed and floating IP addresses via the show command:
18. In another window, log into the vagrant router node. Try pinging the VM via its floating
(public) IP address:
$ ping <floating_IP_addr>
19. If that did not succeed, check the security group(s) associated with your VM:
20. In Horizon, navigate to the Compute | Access & Security view and check the rules in the
security group associated with your VM, and compare with the rules in the ‘sec1’ security
group. Add the ‘sec1’ security group to your VM:
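If you prefer the CLI for this step, the equivalent is likely:

```shell
$ nova add-secgroup testvm sec1
```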
21. Return to the window in the router node and attempt to ping the public IP of the VM
again.
$ ping <floating_IP_addr>
22. From the router node, SSH into the ‘testvm’ instance directly, and create a memento of
your visit. This can be any file on the filesystem. An example:
$ ssh cirros@192.168.80.21
cirros@192.168.80.21's password:
$ echo "Kilroy was here!" > testdata.txt
$ cat testdata.txt
Kilroy was here!
23. Now, log out of the instance on the router node, and return attention to your CLI window.
Make an image out of the running instance:
25. Boot a new VM from your saved image, specifying the ‘int’ network and the ‘sec1’
security group this time:
$ nova list
$ nova floating-ip-list
$ nova floating-ip-associate newvm <IP_addr>
27. Return to your router node window and ssh to the new instance as the ‘cirros’ user. Do
you find the data that you had created in the original VM?
$ ssh cirros@<floating-IP-addr>
$ less testdata.txt
1. Source the admin RC file and view the current list of existing VM instances:
$ source ./keystonerc_admin
$ nova list
View VMs across all tenants with the ‘--all-tenants’ flag. This flag is commonly supported by OpenStack clients:
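For example:

```shell
$ nova list --all-tenants
```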
$ nova hypervisor-list
$ nova hypervisor-stats
3. Show hypervisor details, for a specific compute node. We only have one:
$ nova hypervisor-show 1
9. You can now delete your instance, unless you wish to keep it around for further practice.
Directed Practice
1. Log into Horizon as the ‘stacker’ user and examine again the security rules associated
with the security group ‘sec1’. How might we edit this security group so that SSH is
permitted, but no other inbound traffic?
2. Does this new set of security group rules work as expected if you spin up a VM and
associate its vNIC with this security group?
3. Try launching two instances on the ‘int’ network in the ‘sec1’ security group, A and B. Try
logging into instance A on the console. Can you SSH to instance B? Can you ping
instance B? Why not?
4. Identify a set of circumstances where the ability to create images from running servers is
of particular use.
Module 6 Lab: Glance
Glance is the code name for the OpenStack Image service. This service stores base images for
virtual machines. The lab environment comes equipped with a test Cirros image already
available in Glance. You can manage images in Glance via Horizon, or via the glance CLI, or
via the unified openstack CLI. Certain Glance functions are also proxied through the nova
CLI, such as listing available images.
$ vagrant ssh
2. Source the ‘stacker’ Keystone credentials into your environment, created in your
Keystone practice.
$ source ./keystonerc_stacker
3. View the current list of available images, using the glance CLI:
$ glance image-list
4. (Alternate method) View the current list of available images, using the nova CLI:
$ nova image-list
5. To create a new image, we could use a local file or a URL. Let’s grab a fresh copy of the
CirrOS image from the internet:
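One way to do this with the Kilo-era glance CLI; the CirrOS version and download URL below are an assumption, so adjust them as needed:

```shell
$ glance image-create --name my-cirros --disk-format qcow2 \
  --container-format bare \
  --copy-from http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
```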
$ glance image-list
$ glance image-show my-cirros
6. Log into Horizon as the stacker user. Check the list of available images and check if the
new ‘my-cirros’ image is visible from the ‘tenant1’ project, which is the default in our RC
file. Is the ‘my-cirros’ image visible from the ‘training’ project?
http://192.168.50.21/dashboard/project/images/
7. Now update the permissions on the new image to share it with the ‘training’ project. You
will need to retrieve the id of the training tenant. To find the ID for the ‘training’ tenant,
check the Keystone project list, as the cloud admin.
$ glance member-create my-cirros <training_tenant_id>
8. Reload the view of available images in Horizon for the ‘training’ project. The ‘my-cirros’
image should now be available. You can also check the member list of the image or a
tenant from the CLI. To find the image ID, check the Glance image listing.
1. The API policy for Glance, like a lot of other OpenStack services, is managed by a
policy.json file in the Glance service configuration directory (/etc/glance). We are
going to temporarily change Glance’s policy to restrict the ability to create images to
admins only. Log in to the allin1 node:
$ sudo su
# cd /etc/glance
# vi policy.json
4. Find the line for the add_image action and restrict it to users with an admin role:
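The edited line would look something like this (standard oslo.policy syntax):

```json
"add_image": "role:admin",
```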
5. Save the file and return to the shell. Restart the glance-api service to pick up the policy
change:
6. Now source the OpenStack credentials for the ‘stacker’ user. Try to create a new image
as before. What happens?
7. Now source the OpenStack credentials for the ‘admin’ user. Try to create the new
image:
8. Now that you have seen policy in action, let’s revert the change to the policy file so that it
doesn’t get in the way later. Open up the policy.json file in your favorite text editor:
# vi policy.json
9. Find the line for the add_image action and remove the role restriction:
10. Save the file and return to the shell. Restart the glance-api service to pick up the policy
change:
Directed Practice
1. Try logging into Horizon as both the admin and the ‘stacker’ tenant and create new
images. We recommend sticking with CirrOS so that you are not trying to work with
enormous files.
2. As the admin tenant, make your new image public. Confirm that you can view the image
from both the ‘tenant1’ project and the ‘training’ project.
3. When you create a new image as the ‘stacker’ user, can you make the image public? If
not, why might that be?
4. Glance also supports a ‘protected’ flag on images, to guard against accidental deletion.
As the ‘stacker’ user, try setting this flag on one of your images, then try to delete it. Can
an admin delete the image? (hint: glance help image-update)
In order to complete the Cinder exercises, you need to have at least one VM running. If you do
not have a VM running, create one as follows:
1. Log into the OpenStack allinone VM
$ vagrant ssh
$ source ./keystonerc_stacker
3. View the current list of available images, using the nova CLI, and verify you have a
CirrOS image available:
$ nova image-list
4. Create a new instance using the m1.nano flavor and the CirrOS test image:
$ nova floating-ip-list
1. Make sure you are logged into the allinone VM with your OpenStack credentials
configured:
$ source ./keystonerc_stacker
2. List the current volumes in the system, using the Cinder CLI:
$ cinder list
3. Create a new volume of 1GB and then check its status on the volume list. Make a note
of the volume’s id value.
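For example, using the volume name referenced in later steps (--display-name is the Kilo-era cinder flag):

```shell
$ cinder create --display-name mytestvol 1
```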
$ cinder list
4. Attach your volume to one of your running VMs, e.g. testvm. Note that if you are using
the CirrOS image, the volume will be mounted to the next available device (e.g.
/dev/vdb) instead of the device specified in the volume-attach command.
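A plausible attach command, substituting the volume ID you noted:

```shell
$ nova volume-attach testvm <volume_id> /dev/vdb
```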
5. Log into testvm and create a filesystem on the attached volume. You can use SSH from
the router node or the VNC console to log in. The command in this and following steps
assume use of the CirrOS image. If you are using a different image, replace commands
with the equivalents in your local OS:
$ ssh cirros@192.168.80.21
cirros@192.168.80.21's password:
$ cat /proc/partitions
$ sudo su
# mkfs.ext3 /dev/vdb
# mkdir -p /mnt/testvol
# mount /dev/vdb /mnt/testvol
# echo "Here is some test data" > /mnt/testvol/testdata.txt
# exit
7. Unmount the volume from the VM and turn back to the OpenStack allinone window:
8. Detach the volume from testvm and create a snapshot of the volume:
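A sketch of the two commands; the snapshot name mytestsnap is a hypothetical example:

```shell
$ nova volume-detach testvm <volume_id>
$ cinder snapshot-create --display-name mytestsnap <volume_id>
```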
9. While the volume is available, create a backup of the volume too. (Note: In the lab
environment, we are using Swift as the object storage back end for Cinder backups. In
order to successfully create and use a backup, your user needs to have the
SwiftOperator role assigned in your current project. If you get a permissions error on
backup creation, your user is likely missing this role. To fix, you can execute as admin
‘openstack role add --project <your_project> --user <your_user>
SwiftOperator’).
$ cinder backup-create <volume_id> --display-name mytestbackup
10. Check the status of the snapshot and backup you just created, and make sure that they
are both available:
$ cinder snapshot-list
$ cinder backup-list
12. Create a new VM on the ‘int’ network and allocate it a new floating IP:
14. Log into the VM and mount the filesystem from the attached device. Check for the test
data you created earlier.
$ ssh cirros@192.168.80.21
cirros@192.168.80.21's password:
$ cat /proc/partitions
$ sudo su
# mkdir -p /mnt/testvol
# mount /dev/vdb /mnt/testvol
# cat /mnt/testvol/testdata.txt
# rm /mnt/testvol/testdata.txt
# echo "All new data has been created now" > /mnt/testvol/newdata.txt
# exit
16. Unmount the ‘mytestvol’ volume and return to the OpenStack allin1:
17. Look up your volume snapshot and use it to create a new volume ‘snapvol’
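A likely form of the command, substituting the snapshot ID from the earlier listing:

```shell
$ cinder create --snapshot-id <snapshot_id> --display-name snapvol 1
```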
18. Detach the current volume ‘mytestvol’ from ‘newvm’ and attach the volume ‘snapvol’:
19. Log into ‘newvm’ and mount the volume. Verify that the original test data is still present:
$ ssh cirros@192.168.80.21
cirros@192.168.80.21's password:
$ cat /proc/partitions
20. Now unmount the volume ‘snapvol’ and return to the OpenStack allin1:
21. Restore the ‘mytestvol’ volume state from the backup you made earlier:
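A sketch of the restore, targeting the original volume by ID:

```shell
$ cinder backup-restore <backup_id> --volume-id <volume_id>
```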
23. Log into ‘newvm’ and mount the volume. Verify that the original test data is still present:
$ ssh cirros@192.168.80.21
cirros@192.168.80.21's password:
$ cat /proc/partitions
1. View the current list of available images that you can use to make a bootable image.
These steps assume use of a Cirros image, but any general image will work. Note the id
of the image you wish to use.
$ nova image-list
2. Create a bootable volume from the image by creating a 1GB volume specifying a source
image ID:
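For example, naming the volume to match the listing in the next step:

```shell
$ cinder create --image-id <image_id> --display-name mybootvol 1
```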
3. Monitor the progress of volume creation, and then confirm that the newly available
volume has the bootable flag set to true. Note the id of the new volume:
$ cinder list
$ cinder show <mybootvol_id>
4. Now launch a new VM instance using the bootable volume. Note that the volume is
mapped to the first block device (vda):
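A plausible boot command using the legacy block-device-mapping syntax; the trailing 0 means the volume is not deleted on instance termination, which matters for the reuse in step 8:

```shell
$ nova boot --flavor m1.nano \
  --block-device-mapping vda=<volume_id>:::0 mybootvoltest
```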
5. Check the status of the launched VM, and the volume. The VM should be running, and
the volume should show that it is attached to the VM:
$ nova list
$ cinder list
6. Test the VM by assigning it a public IP and pinging and/or logging in via the VNC
console. Try creating some test data on the volume.
$ ssh cirros@<IP_addr>
7. Now delete the mybootvoltest VM. Check the VM and volume lists:
8. Create a new VM instance, referencing the same bootable volume. Once it has booted,
log in either via the VNC console or by assigning a floating IP. Do you see the data you
created previously?
$ nova list
$ ssh cirros@<IP_addr>
$ cat kilroy.txt
Kilroy was here!
Directed Practice
1. Try logging into Horizon as the ‘stacker’ user and practice creating and managing
volumes. Can you perform the same operations that you did from the CLI?
2. You should see your bootable volume in the list of volumes. Can you attach another VM
to it? How would you make it into an image?
3. The OpenStack documentation illustrates a number of different operations on volumes,
including associating persistent and ephemeral volumes with an instance on boot.
Review these examples, and experiment in your cloud.
$ vagrant ssh
2. Edit the ‘stacker’ Keystone credentials so that you are selecting the ‘training’ tenant for
your work.
$ source ./keystonerc_stacker
export OS_USERNAME=stacker
export OS_TENANT_NAME=training # << this is the line to edit!
export OS_PASSWORD=training123
export OS_AUTH_URL=http://192.168.50.21:5000/v2.0/
export OS_REGION_NAME=RegionOne
export PS1='[\u@\h \W(stacker)]\$ '
$ source ./keystonerc_stacker
4. List the available networks in the ‘training’ project. You should see the shared, public
‘ext’ network.
$ neutron net-list
5. Create a private network, ‘private’, in the project and confirm its status:
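A likely form of the command:

```shell
$ neutron net-create private
```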
$ neutron net-list
6. Create the subnet for the ‘private’ network. You could use any IP range, but let’s pick
one that already exists in the ‘tenant1’ project:
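A sketch; the subnet name ‘subint’ matches the Directed Practice below, and the CIDR is left for you to choose per the step:

```shell
$ neutron subnet-create --name subint private <CIDR>
```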
$ neutron subnet-list
$ neutron router-list
8. Set the provider network as the gateway for the tenant router:
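A likely form of the command, using the router and external-network names that appear elsewhere in these exercises:

```shell
$ neutron router-gateway-set tenant-router ext
```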
10. Log into Horizon as the ‘stacker’ user to view the network topology that you have
created:
http://192.168.50.21/dashboard/project/network_topology/
11. Also take a look at the list of defined Routers in Horizon, and examine the details of the
tenant-router:
http://192.168.50.21/dashboard/project/routers/
1. Review the available security groups and security group rules for the ‘training’ project:
$ neutron security-group-list
$ neutron security-group-rule-list
2. Create a new security group ‘sec1’. Note the default security group rules associated with
the new security group. If you have questions, look at the visualization of the security
group rules provided by Horizon.
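A likely form of the creation command:

```shell
$ neutron security-group-create sec1
```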
3. Create two new security group rules to enable inbound ICMP and SSH traffic:
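A sketch of the two rules, one for ICMP and one for TCP port 22:

```shell
$ neutron security-group-rule-create --direction ingress --protocol icmp sec1
$ neutron security-group-rule-create --direction ingress --protocol tcp \
  --port-range-min 22 --port-range-max 22 sec1
```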
4. Check the available floating IPs assigned to the ‘training’ project. If you do not have a
floating IP available, allocate one:
$ neutron floatingip-list
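Allocation from the shared ‘ext’ network would look like:

```shell
$ neutron floatingip-create ext
```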
5. Boot a VM with a NIC on the ‘private’ network, then assign a floating IP. Check ping and
SSH connectivity to the instance from the router node, after you are sure the instance
has completed boot.
$ nova boot --flavor m1.nano --image cirros --nic net-id=$(neutron net-list | grep -w private | awk '{print $2}') testvm
$ nova list
$ ping <floating-ip>
$ ssh cirros@<floating-ip>
6. Now associate the VM with the ‘sec1’ security group. Check ping and SSH connectivity
from the router node. Has behavior changed?
$ ping <floating-ip>
$ ssh cirros@<floating-ip>
Directed Practice
1. Note that the subnet used for the ‘subint’ subnet is the same as one of the demo subnets
created in the ‘tenant1’ tenant when you installed your environment. If you create a VM
or already have a VM on that subnet, can you ping its private IP from the ‘testvm’
instance? Assign a floating IP to the instance? Is it now pingable from ‘testvm’?
2. Try creating a second private network in the ‘training’ tenant, called ‘protected’. Add a
subnet of your choice to that network. How would you create a VM with interfaces on
both the ‘private’ and ‘protected’ networks?
Swift is the default OpenStack object storage service. The lab environment contains a very
compressed deployment of Swift, storing 1 replica only, in order to demonstrate Swift API
functions.
Using Swift
$ vagrant ssh
$ source ./keystonerc_admin
3. Let’s make sure that both your ‘user1’ and ‘stacker’ users have the right role to use Swift
on the projects that they are members of. The key role is ‘SwiftOperator’. You can check
the roles assigned in a particular project via the CLI or in Horizon (logged in as ‘admin’).
4. Now, load the RC file for either the ‘user1’ user or the ‘stacker’ user:
$ source ./keystonerc_stacker
5. Use the swift CLI to check the status of the current account:
$ swift stat
This command, with no arguments, will return the top-level information about the Swift
account owned by the current OpenStack project.
8. Create a small test file and upload it to your new container ‘files’:
$ echo "This is some test data." >> testdata.txt
$ swift upload files testdata.txt
9. Verify that the file has been uploaded into the container:
10. Now delete the test file from your local directory, and then retrieve a copy from Swift:
$ rm testdata.txt
$ swift download files testdata.txt
$ less testdata.txt
11. Now delete the test file from the Swift container:
12. Let’s force Swift to do some chunking. This is a little artificial, but illustrates a mechanism
that Swift uses automatically if a user attempts to upload a file that is larger than Swift’s
storage limit (5GB). First, let’s grab an image file onto the controller node:
$ wget https://upload.wikimedia.org/wikipedia/commons/3/3e/EMC_Corporation_logo.svg
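To force chunking, upload the file with an artificially small segment size of 1024 bytes (the -S flag sets the segment size):

```shell
$ swift upload -S 1024 files EMC_Corporation_logo.svg
```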
13. Now, let’s peer under the covers of how Swift is storing the data. Log into Horizon as the
user you have been working with and navigate to the Object Store -> Containers view for
the relevant project. You will see a new container created called ‘files_segments’. This is
generated by Swift automatically when chunking a data stream. You can explore this
container to find all of the individual 1K chunks that the proxy server split the file into
during upload.
14. Return to the command line and try downloading the file into a new directory. The proxy
server transparently reassembles the chunks and streams back the original object.
$ mkdir temp
$ cd temp
$ swift download files EMC_Corporation_logo.svg
$ vagrant ssh
2. Source the user credentials into your environment that you were using in the previous
exercise, e.g.
$ source ./keystonerc_stacker
3. Create a new container ‘img’ and examine the metadata on it. Note particularly that there
are parameters ‘Read ACL’ and ‘Write ACL’
4. Upload an image file to this container. You could do this via Horizon or the command
line, e.g.
$ wget https://upload.wikimedia.org/wikipedia/commons/3/3e/EMC_Corporation_logo.svg
$ swift list img
5. Set the ‘read’ ACL on this container to allow any domain to access its objects:
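Swift's referrer ACL syntax allows this with the ‘.r:*’ token:

```shell
$ swift post -r '.r:*' img
```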
6. Use the swift stat command to validate the Read ACL on the container:
7. Look up the public URL for the Swift service using the keystone CLI:
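With the Kilo-era keystone CLI, filtering the catalog by service type should work:

```shell
$ keystone catalog --service object-store
```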
8. In a web browser window (e.g. Google Chrome or Mozilla Firefox), enter the public URL
for the file in the ‘img’ container and download the file. The public URL is the
concatenation of
a. The publicURL from the keystone catalog for the current account
b. The container name
c. The object name
For example, the following URL is for the object EMC_Corporation_logo.svg in the
container ‘img’ in the specified tenant:
http://192.168.50.21:8080/v1/AUTH_d0d8cf2dd3e84966be33942bb4d0958c/img/EMC_Corporation_logo.svg
10. Try reloading the URL for the file from your browser window. The retrieval will now fail
with an authorization error. The public/private read status on a container is also reflected
in the Containers display in Horizon.
Directed Practice
1. In the compressed lab environment, the allinone node is running account, container, and
object services collocated. You can explore the directory structure that Swift is using to
store data by becoming root and exploring the /srv/node/swiftloopback/
directory:
$ sudo su -
# cd /srv/node/swiftloopback
# ls
accounts containers lost+found objects tmp
2. If you follow directories down the accounts/ and containers/ paths, you will
eventually end up in directories storing the SQLite files corresponding to active Swift
account and container storage locations.
# pwd
/srv/node/swiftloopback/containers/141933/7b6/8a9b593b3c8a50c6179592d4d
9e6f7b6
# ls
8a9b593b3c8a50c6179592d4d9e6f7b6.db
8a9b593b3c8a50c6179592d4d9e6f7b6.db.pending
3. If you follow directories down the objects/ path, you will eventually find directories
containing the objects, stored as .data files. You can see the object’s metadata by
getting the ‘swift.metadata’ extended attribute on the file:
# pwd
/srv/node/swiftloopback/objects/16353/457/0ff8510edae272d4799f7d046ca51
457
# ls
1435180238.24558.data
4. The keys to creating and navigating these filesystems are provided by the rings. What
would happen if the rings were lost or had to be completely rebuilt?
Ceilometer is the OpenStack metering service, and serves as a central collection and publishing
system for a range of usage and performance measurements from OpenStack services and
components.
Using Ceilometer
$ vagrant ssh
$ source ./keystonerc_admin
$ ceilometer meter-list
4. Now examine the available samples for all meters of the ‘image’ type. These are
datapoints on the existence of images in the system at particular times.
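A sketch, filtering the sample list by meter name:

```shell
$ ceilometer sample-list -m image
```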
5. Look at aggregated statistics for the ‘image’ meters. The stated period is the time
interval for aggregation of the data points and the duration is the overall period covered
by the query.
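For example, with and without an explicit aggregation period:

```shell
$ ceilometer statistics -m image
$ ceilometer statistics -m image -p 600   # aggregate in 10-minute periods
```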
7. To see this in action, spin up a new VM as the ‘stacker’ user, and make a note of the
instance’s ID.
8. Now, become the admin user and look for meters on that resource. After 10 minutes or
so from instance creation, you should see the ‘cpu_util’ meter appear.
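A sketch of the query, filtering on the instance ID you noted (the placeholder must be replaced with the actual UUID):

```shell
$ ceilometer meter-list -q resource_id=<instance-id>
```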
9. If you keep your VM running for a long period of time, you can examine statistics of its
CPU utilization, e.g. hourly
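For example, hourly aggregates for the instance's CPU utilization (the instance ID placeholder must be replaced with the actual UUID):

```shell
$ ceilometer statistics -m cpu_util -q resource_id=<instance-id> -p 3600
```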
$ vagrant ssh
$ source ./keystonerc_stacker
3. If you have an instance already running in the ‘training’ project, then make a note of the
instance’s ID value. Otherwise, create a new instance:
4. Now, create an alarm for that instance, which will log an alert in the event that the
‘cpu_util’ meter for the instance crosses a threshold value (70%) for three consecutive
evaluation periods of 10 minutes. Note that other actions are possible, such as calling a
webhook URL. By default, alarms are evaluated every minute. Logged alarms will be
captured in the logfile /var/log/ceilometer/alarm-notifier.log
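The alarm described above can be created along these lines. A sketch; the alarm name is arbitrary and the instance ID placeholder must be replaced:

```shell
$ ceilometer alarm-threshold-create \
    --name cpu-high \
    --meter-name cpu_util \
    --threshold 70 \
    --comparison-operator gt \
    --statistic avg \
    --period 600 \
    --evaluation-periods 3 \
    --alarm-action 'log://' \
    --query resource_id=<instance-id>
```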
$ ceilometer alarm-list
7. You can view the history of the alarm’s state changes via the CLI. You can also
check the alarm-notifier log to see if any default notifications have been generated:
8. After the alarm has been running for a while, you can disable it from the command line.
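The history and disable operations in the two steps above map to commands like the following (a sketch; use the alarm ID shown by alarm-list):

```shell
# Show state transitions and rule changes for the alarm
$ ceilometer alarm-history -a <alarm-id>
# Disable the alarm
$ ceilometer alarm-update -a <alarm-id> --enabled False
```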
Directed Practice
1. The default storage back end for meter, alarm, and event data is MongoDB, which is
used in this deployment. You can browse the datastore if you first install the MongoDB
client as root:
2. You can then connect the mongo CLI to the Ceilometer data store:
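Concretely, steps 1 and 2 might look like this on the RDO-based allinone node. A sketch: the database name ‘ceilometer’ and collection ‘meter’ are the Ceilometer defaults, and mongod may be bound to the node's management IP rather than 127.0.0.1:

```shell
# yum install -y mongodb            # installs the mongo shell client
# mongo --host 127.0.0.1 ceilometer
> show collections
> db.meter.find({counter_name: "cpu_util"}).limit(1).pretty()
```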
4. There are circumstances where direct analysis of the data in MongoDB may be
preferable to the statistical aggregates exposed in the Ceilometer API, e.g. calculation of
standard deviation of cpu_util data for a particular tenant. JS programming of MongoDB
map-reduce is beyond the scope of this exercise, but adepts of Mongo are encouraged
to explore.
Using Heat
1. Log into the OpenStack allinone VM:
$ vagrant ssh
$ source ./keystonerc_admin
3. Create a new tenant and user to use for testing stack deployments:
4. Grant the new ‘hotuser’ all roles for managing and using Heat templates in its default
project
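With the Kilo-era clients, steps 3 and 4 might look like the following. A sketch: the project name ‘heater’ matches the project referenced later in this lab, the password is a placeholder, and the role names (_member_, heat_stack_owner) are the RDO defaults, so check the role list in your deployment:

```shell
$ openstack project create heater
$ openstack user create hotuser --password hotpass --project heater
$ openstack role add --user hotuser --project heater _member_
$ openstack role add --user hotuser --project heater heat_stack_owner
```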
5. While still admin, check the status of the Heat component services
$ heat service-list
6. Now, log into Horizon as the ‘hotuser’ and create a default compute keypair called
‘default-key’.
7. Go down to the Orchestration view and click on ‘Resource Types’. This will list all
available resource names in the current OpenStack environment which could be
referenced and used in a HOT template. This is the same data as displayed in the CLI
command
$ heat resource-type-list
description: >
  HOT template to create a new neutron network plus a router to the
  public network, and for deploying two servers into the new network.
  The template also assigns floating IP addresses to each server so
  they are routable from the public network.
parameters:
  key_name:
    type: string
    description: Name of keypair to assign to servers
  image:
    type: string
    description: Name of image to use for servers
  flavor:
    type: string
    description: Flavor to use for servers
  public_net:
    type: string
    description: >
      ID or name of public network for which floating IP addresses will
      be allocated
  private_net_name:
    type: string
    description: Name of private network to be created
  private_net_cidr:
    type: string
    description: Private network address (CIDR notation)
  private_net_gateway:
    type: string
    description: Private network gateway address
  private_net_pool_start:
    type: string
    description: Start of private network IP address allocation pool
  private_net_pool_end:
    type: string
    description: End of private network IP address allocation pool
…
# hot-environment.txt
parameters:
  key_name: default-key
  image: cirros
  flavor: m1.nano
  public_net: ext
  private_net_name: private
  private_net_cidr: 10.10.10.0/16
  private_net_gateway: 10.10.10.1
  private_net_pool_start: 10.10.10.2
  private_net_pool_end: 10.10.10.255
9. Now, try instantiating the stack by going to the Stacks view and clicking on the ‘Launch
Stack’ button. Specify the file locations for the sample HOT template and environment
file. Name the new stack ‘test-stack’, and input the ‘hotuser’ password where asked.
The build may take some time to complete. Horizon will poll the Heat service and update
the stack status as it proceeds with its lifecycle.
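The same launch can be performed from the CLI as the ‘hotuser’. A sketch; the filenames assume you saved the template and environment file shown above:

```shell
$ heat stack-create test-stack \
    -f hot-template.yaml \
    -e hot-environment.txt
$ heat stack-list
$ heat event-list test-stack
```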
10. If the stack build completes successfully, examine the resources now provisioned in the
‘heater’ project and compare them with the specifications in the template’s .yaml file. If you
see a bunch of question marks when you view the topology of your stack, you have
likely encountered a known bug in the RDO distribution of Kilo. To fix it, you need to
add one line to the Apache config file for Horizon and restart the service, e.g.
$ sudo vi /etc/httpd/conf.d/15-horizon_vhost.conf
…
WSGIProcessGroup dashboard
WSGIScriptAlias /dashboard "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi"
Alias /static/dashboard /usr/share/openstack-dashboard/static/dashboard
</VirtualHost>
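After editing the vhost file, restart Apache so the change takes effect (assuming the systemd-based RDO image used in this lab):

```shell
$ sudo systemctl restart httpd
```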
11. When you are satisfied, delete the stack using the ‘Delete Stack’ button.
Directed Practice
1. What IaaS compute, network, and storage resources would you need to run a simple
three-tier application within a project? Try sketching out the resource relationships to
identify all the resource types you require. Are all of these resource types available in
Heat?
2. Extra credit: put together a Heat template to deploy your stack. Bear in mind that the all-
in-one node is very constrained in terms of compute resources.