
OpenStack Bootcamp Exercises

Class 3, November 16-20, 2015


V1.023

sean.williford@emc.com

System Prerequisites
1. A system running a supported OS (e.g. Mac OS X, Ubuntu Linux 12.04)
2. System memory (RAM) of 8GB or more strongly recommended.
3. Users with rights to install software on the system, if required
4. A working internet connection to download software and packages, on an unblocked
network.

Local Lab Environment


This course will utilize a compressed virtual lab environment to exercise OpenStack. The
default environment we will be working with consists of two VirtualBox VMs. One VM runs
an all-in-one deployment of OpenStack Kilo, based on the RDO distribution running on
CentOS 7. The other node is a simple router node that provides external connectivity for
instances in the cloud.

Note: Beware of Cisco AnyConnect! If you attempt to connect to the VPN via AnyConnect after
starting your environment, it will stomp on the routes that are set up for the VirtualBox
nodes and will likely not restore them.

Module 1 Lab: Deploy a Local Lab Environment


1. Validate the system requirements documented at https://github.com/corefile/allin1-kilo
2. If not present, download and install VirtualBox
https://www.virtualbox.org/wiki/Downloads. Follow the instructions on the VirtualBox
installer.
3. If not present, download and install Vagrant
(https://www.vagrantup.com/downloads.html)
4. Install the reload and sahara Vagrant plugins

$ sudo -H vagrant plugin install vagrant-reload


$ sudo -H vagrant plugin install sahara

5. If not present, download and install Git (http://git-scm.com/downloads).


6. Create a local directory for the lab environment and cd to this directory, e.g.

$ mkdir labs
$ cd labs

7. Follow the documented instructions in the README.md of the repo
https://github.com/corefile/allin1-kilo to install the two VMs you need for the lab
environment. The router node provides network connectivity for the OpenStack
environment, mediating between the OpenStack environment and your local host.

Router node: https://github.com/corefile/router


All in one Kilo node: https://github.com/corefile/allin1-kilo

8. After the virtual machines have started, you can check the status of the VMs using the
following command from the home directory of each VM (where the VM’s Vagrantfile
resides):

$ vagrant status
Current machine states:

allinone-Kilo running (virtualbox)

9. To log into a virtual machine in the lab environment, cd to the home directory of the VM,
where the Vagrantfile resides. Then execute the following command:

$ vagrant ssh

10. To suspend or halt VMs in the lab environment, use the following commands:

$ vagrant suspend
$ vagrant halt

Use the ‘vagrant up’ command to restart a suspended or stopped VM:

$ vagrant up

11. To destroy the VM, in order to switch to a different environment or reinitialize, use the
following command (but not right now):

$ vagrant destroy

12. As suggested in the project README, the vagrant sandbox feature provided by the
sahara plugin can be a useful alternative to starting over completely from scratch.

$ vagrant sandbox on # enables sandbox mode



$ vagrant sandbox commit # updates snapshot with current state

$ vagrant sandbox rollback # restores VM state to last committed state

$ vagrant sandbox off # exits sandbox mode

13. With the lab VM(s) running, launch a web browser and load the following URL to view
the Horizon dashboard:

http://192.168.50.21/dashboard/

14. Log in using the following credentials:

User name: user1


Password: user1

15. Look around! You are in the project view for the project ‘tenant1’. What pages do you
see? Do you know what they are for?

Note that the second script that you executed in the allin1-kilo node (runme2-vagrant.sh)
set up a sample project, user, and networking topology. It also started two tiny VMs in
the ‘tenant1’ project. Once this operation has completed, proving that the environment
can boot VMs, you may wish to suspend or delete these test VMs to conserve system
resources; one way to do this from the CLI is sketched below.
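
A minimal sketch, assuming you have SSH'd into the allinone node and sourced credentials
for a user in ‘tenant1’ (covered in Module 2); the instance names are whatever the script
created, so list them first:

$ nova list                      # note the names of the two test instances
$ nova suspend <instance-name>   # or: nova delete <instance-name>

The same thing can be done from the Instances page in Horizon via each instance's Actions menu.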

16. If you want to learn more about the RDO OpenStack distribution used for this course,
visit the site https://www.rdoproject.org/Main_Page. The OpenStack deployment was
performed with Packstack, which uses Puppet as an underlying technology.

Module 2 Lab: Fundamentals

1. Log in to the Horizon dashboard as user1/user1:

http://192.168.50.21/dashboard/

2. Navigate to the Compute->Instances page:

http://192.168.50.21/dashboard/project/instances/

3. Click on the Launch Instance button and launch a new instance with the following
required settings:
Name: test1
Flavor: m1.nano
Instance count: 1
Boot source: Boot from image

Image Name: cirros

Networking: NIC1 on int network

Hit the ‘Launch’ button to request the VM, and watch the Instances listing page update
as the VM is provisioned. This may take a little while, depending on your system.

4. When the instance is running, click on the instance name in the listing to bring up a
details page.

5. From the details page, click on the Log tab to view the instance’s console log. If you see
the CirrOS login prompt, then the instance has finished its OS boot cycle.

6. Click on the Console tab to launch a VNC console to the instance. When the console is
running in-browser, click into the window to log in. The CirrOS image is configured with
the following default login:

User: cirros
Password: cubswin:)

7. When you have logged in, confirm that the instance can connect out from the cloud.
NOTE: This operation will fail if your host computer is connecting to the internet via the
EMC corporate network, which drops ICMP traffic. Using an unfiltered network
connection should yield the expected behavior.

$ ping www.emc.com

8. Now, log in to the allinone node from your system:

$ vagrant ssh

9. When logged in, source the Keystone credentials for user1. Cat the contents of that file
to see what environment variables are being set.

$ source ./keystonerc_user1
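
For reference, the file is likely to resemble the following; these values are illustrative
(they are set by the lab's install scripts) and may differ in your environment:

export OS_USERNAME=user1
export OS_TENANT_NAME=tenant1
export OS_PASSWORD=user1
export OS_AUTH_URL=http://192.168.50.21:5000/v2.0/
export OS_REGION_NAME=RegionOne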

10. Now use the Nova CLI to check the state of your instance:

$ nova list

11. When using the CLI, you can use the ‘--debug’ flag to see the exact REST request and
response stream to the OpenStack API. Try running the previous command with that
flag:

$ nova --debug list

12. From the output of the list command, make a note of the instance’s ID. Become
superuser and change directories to the Nova log directory:

$ sudo su
# cd /var/log/nova

13. Use the instance’s ID to filter log entries for different services, e.g.

# grep <instanceID> nova-api.log


# grep <instanceID> nova-compute.log

14. Exit the root shell, and terminate the current instance in debug mode:

# exit
$ nova --debug delete test1

15. In the debug mode output, examine the final HTTP DELETE request to the Nova API
endpoint. In the HTTP response, note that the API set a new x-compute-request-id
header. Make a note of the value of this header, and use it to filter the Nova
logfiles:

$ sudo grep <request-id> /var/log/nova/nova-api.log


$ sudo grep <request-id> /var/log/nova/nova-compute.log

Directed Practice

1. In module 2, we went through the provisioning process for a VM. What do you imagine
the process is for deleting a VM as you just did?

Module 3 Lab: Horizon


Horizon is the OpenStack web portal for cloud administrators and tenants. You have already
logged into this interface as a project user, and now you will examine the interface as a cloud
administrator.

1. With the lab environment running, launch a web browser and load the following URL to
view the Horizon dashboard. If you are already logged in as ‘user1’, log out before
proceeding.

http://192.168.50.21/dashboard/

2. Log in to Horizon using the default admin user that was provisioned when the cloud was
deployed:

Username: admin
Password: admin

Note that in addition to the regular Project sidebar navigation tab, the ‘admin’ user sees
two additional sets of views: Admin and Identity.

3. Walk through each of the Admin views. Try exercising your power as an admin by
deleting the ‘m1.nano’ VM flavor. We’ll be recreating a similar flavor very soon!

4. Walk through each of the Identity views. Try creating a new user ‘user2’ for the ‘tenant1’
tenant. Log out and log in as ‘user2’. Is anything different from when you were logged in
earlier as ‘user1’?

5. Log in as the admin again, and disable the ‘user1’ user. What happens when you log out
and attempt to log in as ‘user1’? If you log in as ‘user2’, can you re-enable ‘user1’?

Directed Practice

1. For each admin screen, what OpenStack service do you think is providing the displayed
data and managing the displayed resources?
2. How would you check the details of the existing router in the ‘tenant1’ tenant, as an
admin?
3. What do you think the purpose is for the ‘services’ project? What are the users that are
members of that project?
4. RDO provides a useful command openstack-status to check the health of available
OpenStack services. With your environment set as the admin user, try running this
command and examine the output. (To learn how to log in as the cloud admin, read
ahead to steps 2 and 3 in the Module 4 exercises below.) Is there anything amiss?

Module 4 Lab: Keystone


Keystone is the code name for the OpenStack Identity service. This service manages all cloud
tenants (aka projects), individual users and their roles, and also provides a central directory of
endpoints for cloud services. Horizon uses Keystone implicitly for authentication and
authorization, but you can also use Keystone explicitly via the openstack CLI. There is still a
legacy keystone CLI, but the Keystone project team has deprecated this tool in favor of the
unified OpenStackClient project.
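
As a quick sanity check that the CLI can reach Keystone, you can ask it to issue a token
directly once credentials are sourced (see steps 1-3 below); this is a minimal example of
using Keystone explicitly via the unified client:

$ openstack token issue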

Tenants, Users, and Roles

1. Log into the OpenStack allinone VM:

$ vagrant ssh

2. Copy the admin credentials into the current directory and change ownership:

$ sudo cp /root/keystonerc_admin .
$ sudo chown vagrant ./keystonerc_admin

3. Source the admin Keystone credentials into your environment:

$ source ./keystonerc_admin

4. View the current list of OpenStack projects (aka tenants, in some documentation):

$ openstack project list

5. View the current list of individual OpenStack users:

$ openstack user list

6. View the list of currently defined roles.

$ openstack role list

7. Try creating a new project in the cloud with the openstack CLI. Note that OpenStack
automatically assigns id numbers to tenants on creation.

$ openstack project create training

8. Try creating a new user ‘stacker’ with the openstack CLI, associated with the new
‘training’ tenant:

$ openstack user create --project training --password training123 \
    --email your_name@training.local stacker

9. Add the admin role to your new user, which will enable the user to administer the entire
cloud.

$ openstack role add --project training --user stacker admin

10. Now test your new user by using it to log into Horizon. You should be able to see
resources across the entire cloud and manage users and projects. Enjoy that feeling of
power for the moment, and then come to your senses and log out. Remove the admin
role from your user:

$ openstack role remove --project training --user stacker admin

11. Log in to Horizon as ‘stacker’ and confirm which views you have access to.

12. What we really want is for ‘stacker’ to be able to manage resources in both the ‘tenant1’
project and its default project ‘training’. To do this, add the ‘_member_’ role to the
user for the ‘tenant1’ project:

$ openstack role add --project tenant1 --user stacker _member_

13. Log in to Horizon as ‘stacker’ to verify that you can now manage both the ‘training’ and
‘tenant1’ projects.

Services and Endpoints

1. View the list of services available in the OpenStack environment:

$ openstack service list

2. View the list of API endpoints available in the OpenStack environment:

$ openstack endpoint list

3. View the consolidated catalog of services, with all associated endpoint URLs:

$ openstack catalog list

4. View details on any particular entry in the service catalog:

$ openstack catalog show <service-name>

Directed Practice

1. Try logging into Horizon as the admin tenant and working through the same general
exercises: create a new project. Update the roles on the project to add your ‘stacker’
user as a member. Try logging in as ‘stacker’ and see how your view changes.
2. Find where the Keystone catalog is echoed in Horizon.
3. Log in as the ‘stacker’ user and download your project RC file for the ‘training’ project via
the Compute->Access&Security page. These will be the credentials that you will use for a
number of the following exercises. As noted in the lecture, Keystone can also authenticate
a user with EC2-style credentials.
4. Extra credit: Try installing the OpenStack python-*client tools locally, so that you don’t
need to SSH into the allinone node to run commands. On a Mac, this could be as easy
as

$ sudo -H easy_install pip # assuming pip is not installed
$ sudo -H pip install python-openstackclient

See the documentation online for CLI installation. If you encounter an exception when
running a CLI after installation, a la “Exception("Versioning for this project requires either
an sdist"…”, then you may need to upgrade the following Python package:

$ sudo -H pip install --upgrade distribute

5. To try out your new command line tools, source the RC file you downloaded from
Horizon and fire away:

$ source training-openrc.sh
$ openstack catalog list

Module 5 Lab: Nova


Nova is the code name of the OpenStack Compute service, which provides virtual machines on
demand to tenants. You can interact with Nova via the Horizon dashboard, the dedicated nova
CLI, or the unified openstack CLI.

Using the Nova CLI

1. Log into the OpenStack allinone VM

$ vagrant ssh

2. Source the admin Keystone credentials into your environment

$ source ./keystonerc_admin

3. List available VM flavors (aka VM types):

$ nova flavor-list

4. Add a custom VM flavor ‘m1.nano’, so that you can spin up more than one instance
without QEMU coming up short on available RAM. The CirrOS image only requires
64MB RAM and 1 vCPU to run, so we’ll use that.

$ nova flavor-create m1.nano 6 64 1 1


$ nova flavor-list

5. If you haven’t already, create a local RC file for your ‘stacker’ user, using information
from the credentials you downloaded from Horizon, but specifying the original ‘tenant1’
tenant. Remember your ‘stacker’ user can access both this project and its default
‘training’ project, but we need to tell OpenStack which one we are examining:

$ cat ./keystonerc_stacker
export OS_USERNAME=stacker
export OS_TENANT_NAME=tenant1
export OS_PASSWORD=training123
export OS_AUTH_URL=http://192.168.50.21:5000/v2.0/
export OS_REGION_NAME=RegionOne
export PS1='[\u@\h \W(stacker)]\$ '

6. Source the ‘stacker’ Keystone credentials into your environment

$ source ./keystonerc_stacker

7. View the current list of available images, using the nova CLI. Note the name of the
Cirros test image.

$ nova image-list

8. Create a new keypair ‘stacker_key’ for VM access. This is really more relevant for fuller
Linux images, but note that the output should be captured immediately if you want to use
the keypair later:

$ nova keypair-add stacker_key > stacker_key.pem
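
If you plan to use this key later with ssh -i, tighten its file permissions right away;
ssh will refuse to use a private key file that is readable by other users:

$ chmod 600 stacker_key.pem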

9. View keypairs defined in the system:

$ nova keypair-list

10. Check the available networks in this project to associate instances with. Note the ID
value of the ‘int’ network.

$ nova network-list

11. Create a new instance using the ‘m1.nano’ flavor and the Cirros test image, referenced
by name or id from step 7 above, and associate a vNIC with the ID of the ‘int’ network:

$ nova boot --flavor m1.nano --image cirros --key-name stacker_key \
    --nic net-id=<int-network-ID> testvm

12. Confirm that your VM has successfully booted by checking the VM’s console log. Note
that the console log cannot be retrieved until the VM shows as ‘Running’ in Nova:

$ nova list
$ nova console-log testvm

13. You can also explore the VM via Horizon, including logging into the server via a virtual
console. Log in as the ‘stacker’ user and examine the ‘tenant1’ project’s network
topology. You should see your instance associated with the ‘int’ network.

http://192.168.50.21/dashboard/project/network_topology/

14. While in Horizon, navigate to the VNC console for your instance and try logging in. See if
you can ping out to the internet. (Remember, pings won’t work if your local host is
connected to the internet via the EMC corporate network.)

$ ping www.emc.com

15. Returning to the CLI, check for an available floating IP pool list, and allocate one to your
project:

$ nova floating-ip-pool-list
$ nova floating-ip-create ext

16. Associate the free floating IP with the VM:

$ nova floating-ip-associate testvm <IP_addr>

17. Confirm the state of your VM’s fixed and floating IP addresses via the show command:

$ nova show testvm | grep int

18. In another window, log into the vagrant router node. Try pinging the VM via its floating
(public) IP address:

$ ping <floating_IP_addr>

19. If that did not succeed, check the security group(s) associated with your VM:

$ nova show testvm | grep security_groups

20. In Horizon, navigate to the Compute | Access & Security view and check the rules in the
security group associated with your VM, and compare with the rules in the ‘sec1’ security
group. Add the ‘sec1’ security group to your VM:

$ nova add-secgroup testvm sec1

21. Return to the window in the router node and attempt to ping the public IP of the VM
again.

$ ping <floating_IP_addr>

22. From the router node, SSH into the ‘testvm’ instance directly, and create a memento of
your visit. This can be any file on the filesystem. An example (end the input to cat with
Ctrl-D):

$ ssh cirros@192.168.80.21
cirros@192.168.80.21's password:

$ cat > testdata.txt
Kilroy was here!

$ cat testdata.txt
Kilroy was here!

23. Now, log out of the instance on the router node, and return attention to your CLI window.
Make an image out of the running instance:

$ nova image-create --show --poll testvm testvm-snap


$ nova image-list

24. Terminate your testvm now:

$ nova delete testvm


$ nova list

25. Boot a new VM from your saved image, specifying the ‘int’ network and the ‘sec1’
security group this time:

$ nova boot --flavor m1.nano --image testvm-snap --key-name stacker_key \
    --nic net-id=$(neutron net-list | grep -w int | awk '{print $2}') \
    --security-groups sec1 newvm

$ nova list

26. Assign a floating IP to the instance, as before:

$ nova floating-ip-list
$ nova floating-ip-associate newvm <IP_addr>

27. Return to your router node window and ssh to the new instance as the ‘cirros’ user. Do
you find the data that you had created in the original VM?

$ ssh cirros@<floating-IP-addr>
$ less testdata.txt

Checking Compute Activity via the Nova CLI

1. Source the admin RC file and view the current list of existing VM instances:

$ source ./keystonerc_admin
$ nova list

View all tenant VMs. This flag is commonly supported by OpenStack clients:

$ nova list --all-tenants

2. Show hypervisor statistics:

$ nova hypervisor-list

$ nova hypervisor-stats

3. Show hypervisor details, for a specific compute node. We only have one:

$ nova hypervisor-show 1

4. You can now delete your instance, unless you wish to keep it around for further practice.

$ nova delete newvm


$ nova list --all-tenants

Directed Practice

1. Log into Horizon as the ‘stacker’ user and examine again the security rules associated
with the security group ‘sec1’. How might we edit this security group so that SSH is
permitted, but no other inbound traffic? (One possible approach is sketched after this list.)
2. Does this new set of security group rules work as expected if you spin up a VM and
associate its vNIC with this security group?
3. Try launching two instances on the ‘int’ network in the ‘sec1’ security group, A and B. Try
logging into instance A on the console. Can you SSH to instance B? Can you ping
instance B? Why not?
4. Identify a set of circumstances where the ability to create images from running servers is
of particular use.
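
One possible approach to question 1, sketched on the assumption that ‘sec1’ currently
contains both ICMP and SSH ingress rules: find the ICMP rule's ID and delete it, leaving
only the SSH (tcp/22) rule.

$ neutron security-group-rule-list | grep sec1    # note the ID of the icmp ingress rule
$ neutron security-group-rule-delete <icmp-rule-id>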

Module 6 Lab: Glance
Glance is the code name for the OpenStack Image service. This service stores base images for
virtual machines. The lab environment comes equipped with a test Cirros image already
available in Glance. You can manage images in Glance via Horizon, or via the glance CLI, or
via the unified openstack CLI. Certain Glance functions are also proxied through the nova
CLI, such as listing available images.
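
As a quick illustration of the unified client, the following listing should show the same
images as the glance and nova commands used below (run it after sourcing credentials in
step 2):

$ openstack image list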

1. Log into the OpenStack allinone VM

$ vagrant ssh

2. Source the ‘stacker’ Keystone credentials into your environment, created in your
Keystone practice.

$ source ./keystonerc_stacker

3. View the current list of available images, using the glance CLI:

$ glance image-list

4. (Alternate method) View the current list of available images, using the nova CLI:

$ nova image-list

5. To create a new image, we could use a local file or a URL. Let’s grab a fresh copy of the
CirrOS image from the internet:

$ glance image-create --name my-cirros --disk-format qcow2 --container-format bare \
    --copy-from http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img --progress

$ glance image-list
$ glance image-show my-cirros

6. Log into Horizon as the stacker user. Check the list of available images and check if the
new ‘my-cirros’ image is visible from the ‘tenant1’ project, which is the default in our RC
file. Is the ‘my-cirros’ image visible from the ‘training’ project?

http://192.168.50.21/dashboard/project/images/

7. Now update the permissions on the new image to share it with the ‘training’ project. You
will need to retrieve the id of the training tenant. To find the ID for the ‘training’ tenant,
check the Keystone project list, as the cloud admin.

$ glance member-create my-cirros <training_tenant_id>

8. Reload the view of available images in Horizon for the ‘training’ project. The ‘my-cirros’
image should now be available. You can also check the member list of an image or a
tenant from the CLI. To find the image ID, check the Glance image listing.

$ glance member-list --image-id <image_id>


$ glance member-list --tenant-id <tenant_id>

Managing Service Policy

1. The API policy for Glance, like that of many other OpenStack services, is managed by a
policy.json file in the Glance service configuration directory (/etc/glance). We are
going to temporarily change Glance’s policy to restrict the ability to create images to
admins only. Log in to the allin1 node:

$ vagrant ssh

2. Become root and navigate to the configuration directory for Glance:

$ sudo su
# cd /etc/glance

3. Edit the policy.json file in your favorite text editor:

# vi policy.json

4. Find the line for the add_image action and restrict it to users with an admin role:

"add_image": "", #original version

"add_image": "role:admin", #after the edit

5. Save the file and return to the shell. Restart the glance-api service to pick up the policy
change:

# systemctl restart openstack-glance-api.service

6. Now source the OpenStack credentials for the ‘stacker’ user. Try to create a new image
as before. What happens?

# glance image-create --name my-cirros2 --disk-format qcow2 --container-format bare \
    --copy-from http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img --progress

7. Now source the OpenStack credentials for the ‘admin’ user. Try to create the new
image:

# glance image-create --name my-cirros2 --disk-format qcow2 --container-format bare \
    --copy-from http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img --progress

8. Now that you have seen policy in action, let’s revert the change to the policy file so that it
doesn’t get in the way later. Open up the policy.json file in your favorite text editor:

# vi policy.json

9. Find the line for the add_image action and remove the role restriction:

"add_image": "role:admin", #original version

"add_image": "", #after the edit

10. Save the file and return to the shell. Restart the glance-api service to pick up the policy
change:

# systemctl restart openstack-glance-api.service

Directed Practice

1. Try logging into Horizon as both the admin and the ‘stacker’ user and create new
images. We recommend sticking with CirrOS so that you are not trying to work with
enormous files.
2. As the admin tenant, make your new image public. Confirm that you can view the image
from both the ‘tenant1’ project and the ‘training’ project.
3. When you create a new image as the ‘stacker’ user, can you make the image public? If
not, why might that be?
4. Glance also supports a ‘protected’ flag on images, to guard against accidental deletion.
As the ‘stacker’ user, try setting this flag on one of your images, then try to delete it. Can
an admin delete the image? (hint: glance help image-update)

Module 7 Lab: Cinder


Cinder is the code name of the block storage service for OpenStack VMs. Using this service,
users can provision and associate persistent volumes for VMs, on demand. Volumes can be
managed via Horizon, or using the cinder CLI, or the openstack CLI.

In order to complete the Cinder exercises, you need to have at least one VM running. If you do
not have a VM running, create one as follows:

1. Log into the OpenStack allinone VM

$ vagrant ssh

2. Source the ‘stacker’ Keystone credentials into your environment

$ source ./keystonerc_stacker

3. View the current list of available images, using the nova CLI, and verify you have a
CirrOS image available:

$ nova image-list

4. Create a new instance using the m1.nano flavor and a CirrOS-based image (the
‘testvm-snap’ snapshot created in the Nova lab, or the base ‘cirros’ image if that
snapshot is no longer present):

$ nova boot --flavor m1.nano --image testvm-snap --key-name stacker_key \
    --nic net-id=$(neutron net-list | grep -w int | awk '{print $2}') \
    --security-groups sec1 testvm

5. Confirm that the current tenant has an available floating IP:

$ nova floating-ip-list

6. Associate a free floating IP with the VM:

$ nova floating-ip-associate testvm <IP_addr>

Managing and Using Volumes with Cinder

1. Make sure you are logged into the allinone VM with your OpenStack credentials
configured:

$ source ./keystonerc_stacker

2. List the current volumes in the system, using the Cinder CLI:

$ cinder list

3. Create a new volume of 1GB and then check its status on the volume list. Make a note
of the volume’s id value.

$ cinder create 1 --display-name mytestvol

$ cinder list

4. Attach your volume to one of your running VMs, e.g. testvm. Note that if you are using
the CirrOS image, the volume will be attached as the next available device (e.g.
/dev/vdb) instead of the device specified in the volume-attach command.

$ nova volume-attach testvm <volume_id> /dev/vdb

5. Log into testvm and create a filesystem on the attached volume. You can use SSH from
the router node or the VNC console to log in. The commands in this and the following
steps assume use of the CirrOS image. If you are using a different image, replace them
with the equivalents in your local OS:

$ ssh cirros@192.168.80.21
cirros@192.168.80.21's password:
$ cat /proc/partitions

$ sudo mkfs.ext3 /dev/vdb


$ sudo mkdir -p /mnt/testvol
$ sudo mount -t ext3 /dev/vdb /mnt/testvol

6. Write data to the attached volume:

$ sudo su
$ echo "Here is some test data" > /mnt/testvol/testdata.txt
$ exit

7. Unmount the volume from the VM and turn back to the OpenStack allinone window:

$ sudo umount /mnt/testvol


$ exit

8. Detach the volume from testvm and create a snapshot of the volume:

$ nova volume-detach testvm <volume_id>


$ cinder snapshot-create <volume_id> --display-name mytestsnapshot

9. While the volume is available, create a backup of the volume too. (Note: In the lab
environment, we are using Swift as the object storage back end for Cinder backups. In
order to successfully create and use a backup, your user needs to have the
SwiftOperator role assigned in your current project. If you get a permissions error on
backup creation, your user is likely missing this role. To fix, you can execute as admin
‘openstack role add --project <your_project> --user <your_user>
SwiftOperator’).

$ cinder backup-create <volume_id> --display-name mytestbackup

10. Check the status of the snapshot and backup you just created, and make sure that they
are both available:

$ cinder snapshot-list
$ cinder backup-list

11. Terminate your testvm:

$ nova delete testvm

12. Create a new VM on the ‘int’ network and allocate it a new floating IP:

$ nova boot --flavor m1.nano --image testvm-snap --key-name stacker_key \
    --nic net-id=$(neutron net-list | grep -w int | awk '{print $2}') \
    --security-groups sec1 newvm

$ nova floating-ip-associate newvm <IP_addr>

13. Attach the existing volume to the new VM:

$ nova volume-attach newvm <volume_id> /dev/vdb

14. Log into the VM and mount the filesystem from the attached device. Check for the test
data you created earlier.

$ ssh cirros@192.168.80.21
cirros@192.168.80.21's password:
$ cat /proc/partitions

$ sudo mkdir -p /mnt/testvol


$ sudo mount -t ext3 /dev/vdb /mnt/testvol
$ sudo less /mnt/testvol/testdata.txt

15. Update the test data with new content:

$ sudo su
$ rm /mnt/testvol/testdata.txt
$ echo "All new data has been created now" > /mnt/testvol/newdata.txt
$ exit

16. Unmount the ‘mytestvol’ volume and return to the OpenStack allin1:

$ sudo umount /mnt/testvol


$ exit

17. Look up your volume snapshot and use it to create a new volume ‘snapvol’

$ cinder create 1 --snapshot-id <snapshot_id> --display-name snapvol

18. Detach the current volume ‘mytestvol’ from ‘newvm’ and attach the volume ‘snapvol’:

$ nova volume-detach newvm <mytestvol_volume_id>


$ nova volume-attach newvm <snapvol_volume_id> /dev/vdb

19. Log into ‘newvm’ and mount the volume. Verify that the original test data is still present:

$ ssh cirros@192.168.80.21
cirros@192.168.80.21's password:
$ cat /proc/partitions

$ sudo mount -t ext3 /dev/vdb /mnt/testvol


$ sudo less /mnt/testvol/testdata.txt

20. Now unmount the volume ‘snapvol’ and return to the OpenStack allin1:

$ sudo umount /mnt/testvol


$ exit

21. Restore the ‘mytestvol’ volume state from the backup you made earlier:

$ cinder backup-restore --volume-id <mytestvol_volume_id> mytestbackup

22. Attach ‘mytestvol’ to the ‘newvm’ instance

$ nova volume-attach newvm <mytestvol_volume_id> /dev/vdb

23. Log into ‘newvm’ and mount the volume. Verify that the original test data is still present:

$ ssh cirros@192.168.80.21
cirros@192.168.80.21's password:
$ cat /proc/partitions

$ sudo mount -t ext3 /dev/vdb /mnt/testvol


$ sudo less /mnt/testvol/testdata.txt

Working with Bootable Cinder Volumes

1. View the current list of available images that you can use to make a bootable volume.
These steps assume use of a Cirros image, but any general image will work. Note the id
of the image you wish to use.

$ nova image-list

2. Create a bootable volume from the image by creating a 1GB volume specifying a source
image ID:

$ cinder create --image-id <image_id> --display-name mybootvol 1

3. Monitor the progress of volume creation, and then confirm that the newly available
volume has the bootable flag set to true. Note the id of the new volume:

$ cinder list
$ cinder show <mybootvol_id>

4. Now launch a new VM instance using the bootable volume. Note that the volume is
mapped to the first block device (vda):

$ nova boot --flavor m1.nano \
    --block-device source=volume,id=<VOLUME-ID>,dest=volume,size=1,shutdown=preserve,bootindex=0 \
    --key-name stacker_key \
    --nic net-id=$(neutron net-list | grep -w int | awk '{print $2}') \
    --security-groups sec1 mybootvoltest

5. Check the status of the launched VM, and the volume. The VM should be running, and
the volume should show that it is attached to the VM:

$ nova list
$ cinder list

6. Test the VM by assigning it a public IP and pinging and/or logging in via the VNC
console. Try creating some test data on the volume.

$ nova floating-ip-associate mybootvoltest <IP_addr>

$ ssh cirros@<IP_addr>

$ cat > kilroy.txt
Kilroy was here!

7. Now delete the mybootvoltest VM. Check the VM and volume lists:

$ nova delete mybootvoltest


$ nova list
$ cinder list

8. Create a new VM instance, referencing the same bootable volume. Once it has booted,
log in either via the VNC console or by assigning a floating IP. Do you see the data you
created previously?

$ nova boot --flavor m1.nano \
    --block-device source=volume,id=<VOLUME-ID>,dest=volume,size=1,shutdown=preserve,bootindex=0 \
    --key-name stacker_key \
    --nic net-id=$(neutron net-list | grep -w int | awk '{print $2}') \
    --security-groups sec1 mybootvoltest2

$ nova list

$ nova floating-ip-associate mybootvoltest2 <IP_addr>

$ ssh cirros@<IP_addr>

$ cat kilroy.txt
Kilroy was here!

Directed Practice

1. Try logging into Horizon as the ‘stacker’ user and practice creating and managing
volumes. Can you perform the same operations that you did from the CLI?
2. You should see your bootable volume in the list of volumes. Can you attach another VM
to it? How would you make it into an image? (A possible CLI approach is sketched after
this list.)
3. The OpenStack documentation illustrates a number of different operations on volumes,
including associating persistent and ephemeral volumes with an instance on boot.
Review these examples, and experiment in your cloud.
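
For question 2, one possible CLI approach is the cinder upload-to-image command, which
copies a volume's contents into a new Glance image; a minimal sketch, with an illustrative
image name, assuming the volume is in the ‘available’ (detached) state:

$ cinder upload-to-image <mybootvol_id> mybootvol-image
$ glance image-list    # the new image appears once the upload completes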

Module 8 Lab: Neutron


Neutron is the code name for the OpenStack networking project, which provides configurable
networking services for virtual machines. As with other services, Neutron can be managed via
Horizon, a specific neutron CLI, or (to some degree) the unified openstack client.

Neutron Networking Practice

1. Log into the OpenStack allinone VM

$ vagrant ssh

2. Edit the ‘stacker’ Keystone credentials so that you are selecting the ‘training’ tenant for
your work.

$ vi ./keystonerc_stacker

export OS_USERNAME=stacker
export OS_TENANT_NAME=training # << this is the line to edit!
export OS_PASSWORD=training123
export OS_AUTH_URL=http://192.168.50.21:5000/v2.0/
export OS_REGION_NAME=RegionOne
export PS1='[\u@\h \W(stacker)]\$ '

3. Source the ‘stacker’ Keystone credentials into your environment

$ source ./keystonerc_stacker

4. List the available networks in the ‘training’ project. You should see the shared, public
‘ext’ network.

$ neutron net-list

5. Create a private network in the project, named ‘private’, and confirm its status:

$ neutron net-create private

$ neutron net-list

6. Create the subnet for the ‘private’ network. You could use any IP range, but let’s pick
one that already exists in the ‘tenant1’ project:

$ neutron subnet-create --gateway=2.0.0.254 --name=subint private 2.0.0.0/24 \
    --enable-dhcp --dns-nameserver=8.8.8.8

$ neutron subnet-list

7. Now create a virtual router:

$ neutron router-create router1

$ neutron router-list

8. Set the provider network as the gateway for the tenant router:

$ neutron router-gateway-set router1 ext

9. Associate the ‘subint’ subnet with the tenant router:

$ neutron router-interface-add router1 subint

10. Log into Horizon as the ‘stacker’ user to view the network topology that you have
created:

http://192.168.50.21/dashboard/project/network_topology/

11. Also take a look at the list of defined Routers in Horizon, and examine the details of the
tenant router ‘router1’:

http://192.168.50.21/dashboard/project/routers/

Configuring Security Groups and Floating IPs for Nova

1. Review the available security groups and security group rules for the ‘training’ project:

$ neutron security-group-list

$ neutron security-group-rule-list

2. Create a new security group ‘sec1’. Note the default security group rules associated with
the new security group. If you have questions, look at the visualization of the security
group rules provided by Horizon.

$ neutron security-group-create sec1

$ neutron security-group-show sec1

3. Create two new security group rules to enable inbound ICMP and SSH traffic:

$ neutron security-group-rule-create --direction ingress --ethertype IPv4 \
    --protocol icmp sec1

$ neutron security-group-rule-create --direction ingress --ethertype IPv4 \
    --protocol tcp --port-range-min 22 --port-range-max 22 sec1

$ neutron security-group-show sec1 # compare with Horizon view

4. Check the available floating IPs assigned to the ‘training’ project. If you do not have a
floating IP available, allocate one:

$ neutron floatingip-list

$ neutron floatingip-create ext

5. Boot a VM with a NIC on the ‘private’ network, then assign a floating IP. Check ping and
SSH connectivity to the instance from the router node, after you are sure the instance
has completed boot.

$ nova boot --flavor m1.nano --image cirros \
    --nic net-id=$(neutron net-list | grep -w private | awk '{print $2}') testvm

$ nova list

$ nova console-log testvm

$ nova floating-ip-associate testvm <floating-ip>

<from router node>

$ ping <floating-ip>
$ ssh cirros@<floating-ip>

6. Now associate the VM with the ‘sec1’ security group. Check ping and SSH connectivity
from the router node. Has behavior changed?

$ nova add-secgroup testvm sec1

$ nova show testvm

<from router node>

$ ping <floating-ip>
$ ssh cirros@<floating-ip>

Directed Practice

1. Note that the subnet used for the ‘subint’ subnet is the same as one of the demo subnets
created in the ‘tenant1’ tenant when you installed your environment. If you create a VM
or already have a VM on that subnet, can you ping its private IP from the ‘testvm’
instance? If you assign a floating IP to that instance, is it now pingable from ‘testvm’?
2. Try creating a second private network in the ‘training’ tenant, called ‘protected’. Add a
subnet of your choice to that network. How would you create a VM with interfaces on
both the ‘private’ and ‘protected’ networks? (One possible boot command is sketched
below.)
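
A minimal sketch for question 2, assuming the ‘protected’ network and its subnet already
exist: nova boot accepts the --nic option more than once, giving the instance a port on
each listed network. The instance name ‘dualnic-test’ is just illustrative.

$ nova boot --flavor m1.nano --image cirros \
    --nic net-id=$(neutron net-list | grep -w private | awk '{print $2}') \
    --nic net-id=$(neutron net-list | grep -w protected | awk '{print $2}') \
    dualnic-test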

Module 9 Lab: Swift

Swift is the default OpenStack object storage service. The lab environment contains a very
compressed deployment of Swift, storing 1 replica only, in order to demonstrate Swift API
functions.

Using Swift

1. Log into the OpenStack allinone VM:

$ vagrant ssh

2. Source the admin Keystone credentials into your environment:

$ source ./keystonerc_admin

3. Let’s make sure that both your ‘user1’ and ‘stacker’ users have the right role to use Swift
on the projects that they are members of. The key role is ‘SwiftOperator’. You can check
the roles assigned in a particular project via the CLI or in Horizon (logged in as ‘admin’).

$ openstack role list --user user1 --project tenant1


$ openstack role list --user stacker --project training

$ openstack role add --project tenant1 --user user1 SwiftOperator


$ openstack role add --project training --user stacker SwiftOperator

$ openstack role list --user user1 --project tenant1


$ openstack role list --user stacker --project training

4. Now, load the RC file for either the ‘user1’ user or the ‘stacker’ user:

$ source ./keystonerc_stacker

5. Use the swift CLI to check the status of the current account:

$ swift stat

This command, with no arguments, will return the top-level information about the Swift
account owned by the current OpenStack project.

6. Create a new container using the swift CLI:

$ swift post files

7. Verify that the container has been created:

$ swift stat # has the container count incremented?

$ swift list # is the container in the list of the account’s containers?

8. Create a small test file and upload it to your new container ‘files’:

$ echo "This is some test data." >> testdata.txt
$ swift upload files testdata.txt

9. Verify that the file has been uploaded into the container:

$ swift stat files


$ swift list files

10. Now delete the test file from your local directory, and then retrieve a copy from Swift:

$ rm testdata.txt
$ swift download files testdata.txt
$ less testdata.txt

11. Now delete the test file from the Swift container:

$ swift delete files testdata.txt


$ swift stat files

12. Let’s force Swift to do some chunking. This is a little artificial, but illustrates a mechanism
that Swift uses automatically if a user attempts to upload a file that is larger than Swift’s
storage limit (5GB). First, let’s grab an image file onto the controller node:

$ wget https://upload.wikimedia.org/wikipedia/commons/3/3e/EMC_Corporation_logo.svg

$ swift upload files -S 1024 EMC_Corporation_logo.svg

$ swift list files

13. Now, let’s peer under the covers of how Swift is storing the data. Log into Horizon as the
user you have been working with and navigate to the Object Store -> Containers view for
the relevant project. You will see a new container created called ‘files_segments’. This is
generated by Swift automatically when chunking a data stream. You can explore this
container to find all of the individual 1K chunks that the proxy server split the file into
during upload.
14. Return to the command line and try downloading the file into a new directory. The proxy
server transparently reassembles the chunks and streams back the original object.

$ mkdir temp
$ cd temp
$ swift download files EMC_Corporation_logo.svg

Managing ACLs in Swift

1. Log into the OpenStack allinone VM:

$ vagrant ssh

2. Source the user credentials into your environment that you were using in the previous
exercise, e.g.

$ source ./keystonerc_stacker

3. Create a new container ‘img’ and examine the metadata on it. Note particularly that there
are ‘Read ACL’ and ‘Write ACL’ parameters:

$ swift post img

$ swift stat img

4. Upload an image file to this container. You could do this via Horizon or the command
line, e.g.

$ wget https://upload.wikimedia.org/wikipedia/commons/3/3e/EMC_Corporation_logo.svg

$ swift upload img EMC_Corporation_logo.svg

$ swift list img

5. Set the ‘read’ ACL on this container to allow any domain to access its objects:

$ swift post -r '.r:*' img

6. Use the swift stat command to validate the Read ACL on the container:

$ swift stat img

7. Look up the public URL for the Swift service using the openstack CLI:

$ openstack catalog show swift

8. In a web browser window (e.g. Google Chrome or Mozilla Firefox), enter the public URL
for the image file in the ‘img’ container and download the file. The public URL is the
concatenation of

a. The publicURL from the keystone catalog for the current account
b. The container name
c. The object name

For example, the following URL is for the object EMC_Corporation_logo.svg in the
container img in the specified tenant:

http://192.168.50.21:8080/v1/AUTH_d0d8cf2dd3e84966be33942bb4d0958c/img/EMC_Corporation_logo.svg

9. Now revoke the general read permission from the container:

$ swift post -r '' img


$ swift stat img

10. Try reloading the URL for the file from your browser window. The retrieval will now fail
with an authorization error. The public/private read status on a container is also reflected
in the Containers display in Horizon.

Directed Practice

1. In the compressed lab environment, the allinone node is running account, container, and
object services collocated. You can explore the directory structure that Swift is using to
store data by becoming root and exploring the /srv/node/swiftloopback/
directory:

$ sudo su -

# cd /srv/node/swiftloopback
# ls
accounts containers lost+found objects tmp

2. If you follow directories down the accounts/ and containers/ paths, you will
eventually end up in directories storing the SQLite files corresponding to active Swift
account and container storage locations.

# pwd
/srv/node/swiftloopback/containers/141933/7b6/8a9b593b3c8a50c6179592d4d
9e6f7b6
# ls
8a9b593b3c8a50c6179592d4d9e6f7b6.db
8a9b593b3c8a50c6179592d4d9e6f7b6.db.pending

3. If you follow directories down the objects/ path, you will eventually find directories
containing the objects, stored as .data files. You can see the object’s metadata by
getting the ‘swift.metadata’ extended attribute on the file:

# pwd
/srv/node/swiftloopback/objects/16353/457/0ff8510edae272d4799f7d046ca51
457

# ls
1435180238.24558.data

# attr -g swift.metadata 1435180238.24558.data


Attribute "swift.metadata" had a 254 byte value for
1435180238.24558.data:
?}q(UContent-
LengthqU7898UnameqU:/AUTH_d0d8cf2dd3e84966be33942bb4d0958c/files/amqp-
logo.pngqUETagqU bd0ea8d912b3821137e744d0536dff5fqU
amqp-logo.pngU X-TimestampqU1435180238.24558U-Object-
Meta-Orig-FilenameU
Content-Typeq U image/pngq
u.

4. The keys to creating and navigating these filesystems are provided by the rings. What
would happen if the rings were lost or had to be completely rebuilt?
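
To get a sense of what the rings contain, you can inspect them with the swift-ring-builder
tool; a quick sketch, assuming the builder files were left in /etc/swift, as Packstack
installs typically do:

$ sudo swift-ring-builder /etc/swift/object.builder   # prints partition count, replicas, and the device list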

Module 10 Lab: Ceilometer

Ceilometer is the OpenStack metering service, and serves as a central collection and publishing
system for a range of usage and performance measurements from OpenStack services and
components.

Using Ceilometer

1. Log into the OpenStack allinone VM:

$ vagrant ssh

2. Source the admin credentials into your environment.

$ source ./keystonerc_admin

3. List all the available meters in your OpenStack environment:

$ ceilometer meter-list

4. Now examine the available samples for all meters of the ‘image’ type. These are
datapoints on the existence of images in the system at particular times.

$ ceilometer sample-list -m image

5. Look at aggregated statistics for the ‘image’ meters. The stated period is the time
interval for aggregation of the data points and the duration is the overall period covered
by the query.

$ ceilometer statistics -m image

6. Now, take a look at the default pipeline definitions in /etc/ceilometer/pipeline.yaml.
Note that there are pipelines defined for various sources, but focus on the ‘cpu’ source,
which sends samples to the ‘cpu_sink’ on a 600-second interval. In the definition of that
sink, note that the ‘cpu’ meter is used to derive a new ‘cpu_util’ meter, using a
‘rate_of_change’ transformer.

7. To see this in action, spin up a new VM as the ‘stacker’ user, and make a note of the
instance’s ID.

$ nova boot --flavor m1.nano --image cirros \
    --nic net-id=$(neutron net-list | grep -w private | awk '{print $2}') testvm

8. Now, become the admin user and look for meters on that resource. After 10 minutes or
so from instance creation, you should see the ‘cpu_util’ meter appear.

# ceilometer meter-list --query resource_id=<instance-id>

9. If you keep your VM running for a long period of time, you can examine statistics of its
CPU utilization, e.g. hourly

# ceilometer statistics -m cpu_util -p 3600 --query resource_id=<instance-id>

Using Ceilometer Alarms

1. Log into the OpenStack allinone VM:

$ vagrant ssh

2. Source the ‘stacker’ credentials into your environment.

$ source ./keystonerc_stacker

3. If you have an instance already running in the ‘training’ project, then make a note of the
instance’s ID value. Otherwise, create a new instance:

$ nova boot --flavor m1.nano --image cirros --nic net-id=$(neutron net-


list | grep -w private | awk '{print $2}') testvm

$ nova show testvm

4. Now, create an alarm for that instance, which will log an alert in the event that the
‘cpu_util’ meter for the instance crosses a threshold value (70%) for three consecutive
evaluation periods of 10 minutes. Note that other actions are possible, such as calling a
webhook URL. By default, alarms are evaluated every minute. Logged alarms will be
captured in the logfile /var/log/ceilometer/alarm-notifier.log

$ ceilometer alarm-threshold-create --name cpu_high \
    --description 'instance running hot' --meter-name cpu_util --threshold 70.0 \
    --comparison-operator gt --statistic avg --period 600 --evaluation-periods 3 \
    --alarm-action 'log://' --query resource_id=<INSTANCE_ID>

5. Check the status of your alarms via the command

$ ceilometer alarm-list

6. Update the alarm’s threshold to a smaller value via a command like:

$ ceilometer alarm-update --threshold 15 <ALARM_ID>

7. You can view the history of the alarm’s state changes via the following command. You
can also check the alarm-notifier log to see if any default notifications have been
generated:

$ ceilometer alarm-history <ALARM_ID>


$ sudo tail /var/log/ceilometer/alarm-notifier.log

8. After the alarm has been running for a while, you can disable it via:

$ ceilometer alarm-update --enabled FALSE <ALARM_ID>

9. The alarm can also be deleted permanently, if desired:

$ ceilometer alarm-delete <ALARM_ID>

Directed Practice

1. The default storage back end for meter, alarm, and event data is MongoDB, which is
used in this deployment. You can browse the datastore if you first install the MongoDB
client as root:

$ sudo yum install -y mongodb

2. You can then connect the mongo CLI to the ceilometer data store

$ mongo --host 192.168.50.21 ceilometer

3. The top level collections show the overall storage structure:

> show collections
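
To peek at a single stored sample without staying in the interactive shell, a one-liner like
the following works, assuming the sample collection is named ‘meter’ as in the default
Ceilometer schema:

$ mongo --host 192.168.50.21 ceilometer --eval 'printjson(db.meter.findOne())'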

4. There are circumstances where direct analysis of the data in MongoDB may be
preferable to the statistical aggregates exposed in the Ceilometer API, e.g. calculation of
standard deviation of cpu_util data for a particular tenant. JS programming of MongoDB
map-reduce is beyond the scope of this exercise, but adepts of Mongo are encouraged
to explore.

Module 11 Lab: Heat


Heat is the orchestration service for OpenStack, which can be used to deploy stacks of
resources based on parameterized template files. These resources include networks, routers,
instances, volumes, and containers.

Using Heat

1. Log into the OpenStack allinone VM:

$ vagrant ssh

2. Source the admin credentials into your environment.

$ source ./keystonerc_admin

3. Create a new tenant and user to use for testing stack deployments:

$ openstack project create heater

$ openstack user create --project heater --password hot hotuser

4. Grant the new ‘hotuser’ all roles for managing and using Heat templates in its default
project

$ openstack role add --project heater --user hotuser heat_stack_owner

$ openstack role add --project heater --user hotuser heat_stack_user

5. While still admin, check the status of the Heat component services

$ heat service-list

6. Now, log into Horizon as the ‘hotuser’ and create a default compute keypair called
‘default-key’.

7. Go down to the Orchestration view and click on ‘Resource Types’. This will list all
available resource names in the current OpenStack environment which could be
referenced and used in a HOT template. This is the same data as displayed in the CLI
command

$ heat resource-type-list

8. Take a look at the sample HOT template, servers_in_new_neutron_net.yaml.
Note the declaration of parameters in the initial part of the template, which could be
manually entered at instantiation of a stack, or filled in via a specified environment file.
Examine the sample environment file, hot-environment.txt, for comparison.

heat_template_version: 2013-05-23

description: >
  HOT template to create a new neutron network plus a router to the public
  network, and for deploying two servers into the new network. The template also
  assigns floating IP addresses to each server so they are routable from the
  public network.

parameters:
  key_name:
    type: string
    description: Name of keypair to assign to servers
  image:
    type: string
    description: Name of image to use for servers
  flavor:
    type: string
    description: Flavor to use for servers
  public_net:
    type: string
    description: >
      ID or name of public network for which floating IP addresses will
      be allocated
  private_net_name:
    type: string
    description: Name of private network to be created
  private_net_cidr:
    type: string
    description: Private network address (CIDR notation)
  private_net_gateway:
    type: string
    description: Private network gateway address
  private_net_pool_start:
    type: string
    description: Start of private network IP address allocation pool
  private_net_pool_end:
    type: string
    description: End of private network IP address allocation pool


# hot-environment.txt

parameters:
  key_name: default-key
  image: cirros
  flavor: m1.nano
  public_net: ext
  private_net_name: private
  private_net_cidr: 10.10.10.0/16
  private_net_gateway: 10.10.10.1
  private_net_pool_start: 10.10.10.2
  private_net_pool_end: 10.10.10.255

9. Now, try instantiating the stack by going to the Stacks view and clicking on the ‘Launch
Stack’ button. Specify the file locations for the sample HOT template and environment
file. Name the new stack ‘test-stack’, and input the ‘hotuser’ password where asked.

The build may take some time to complete. Horizon will poll the Heat service and update
the stack status as it proceeds with its lifecycle.

10. If the stack build completes successfully, examine the resources now provisioned in the
‘heater’ project and compare with the specifications in the template’s .yaml file. If you
see a bunch of question marks when you view the topology of your stack, then you have
likely encountered a known bug in the RDO distribution of Kilo. To fix this, you need to
add one line to the Apache config file for Horizon so that the relevant block matches the
snippet below, and then restart the service, e.g.

$ sudo vi /etc/httpd/conf.d/15-horizon_vhost.conf

WSGIProcessGroup dashboard
WSGIScriptAlias /dashboard "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi"
Alias /static/dashboard /usr/share/openstack-dashboard/static/dashboard
</VirtualHost>

$ sudo systemctl restart httpd

11. When you are satisfied, delete the stack using the ‘Delete Stack’ button.

Directed Practice

1. What IaaS compute, network, and storage resources would you need to run a simple
three-tier application within a project? Try sketching out the resource relationships to
identify all the resource types you require. Are all of these resource types available in
Heat?
2. Extra credit: put together a Heat template to deploy your stack. Bear in mind that the all-
in-one node is very constrained in terms of compute resources.
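
If you want a starting point for the extra-credit template, here is a minimal sketch that
boots a single CirrOS server; it assumes the ‘private’ network, the ‘m1.nano’ flavor, and
the ‘default-key’ keypair from earlier exercises exist in the target project, and the
server name is illustrative.

heat_template_version: 2013-05-23

description: Minimal single-server stack (sketch)

resources:
  server1:
    type: OS::Nova::Server
    properties:
      name: heat-sketch-server
      image: cirros
      flavor: m1.nano
      key_name: default-key
      networks:
        - network: private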

