Deploying Red Hat® Enterprise

Virtualization (RHEV) for Servers

Version 1.0
November 2009
Deploying Red Hat® Enterprise Virtualization (RHEV) for Servers
Copyright © 2009 by Red Hat, Inc.

1801 Varsity Drive


Raleigh NC 27606-2072 USA
Phone: +1 919 754 3700
Phone: 888 733 4281
Fax: +1 919 754 3701
PO Box 13588
Research Triangle Park NC 27709 USA

"Red Hat," Red Hat Enterprise Linux, the Red Hat "Shadowman" logo, and the products listed are
trademarks or registered trademarks of Red Hat, Inc. in the United States and other countries. Linux is
a registered trademark of Linus Torvalds.

Microsoft, Windows, Windows Server, and SQL Server are registered trademarks of Microsoft
Corporation.

All other trademarks referenced herein are the property of their respective owners.

© 2009 by Red Hat, Inc. This material may be distributed only subject to the terms and conditions set
forth in the Open Publication License, V1.0 or later (the latest version is presently available at
http://www.opencontent.org/openpub/).

The information contained herein is subject to change without notice. Red Hat, Inc. shall not be liable
for technical or editorial errors or omissions contained herein.

Distribution of modified versions of this document is prohibited without the explicit permission of the
copyright holder.

Distribution of the work or derivative of the work in any standard (paper) book form for commercial
purposes is prohibited unless prior permission is obtained from the copyright holder.

The GPG fingerprint of the security@redhat.com key is:


CA 20 86 86 2B D6 9D FC 65 F6 EC C4 21 91 80 CD DB 42 A6 0E

2 | www.redhat.com
Table of Contents

1 Executive Summary..........................................................................................7

2 Red Hat Enterprise Virtualization - Overview...................................................8


2.1 Red Hat Enterprise Virtualization - Portfolio.....................................................................8
2.2 Kernel-based Virtual Machine (KVM)..................................................11
2.2.1 Traditional Hypervisor Model....................................................................................11
2.2.2 Linux as a Hypervisor...............................................................................................12
2.2.3 A Minimal System.....................................................................................................12
2.2.4 KVM Summary.........................................................................................................12

3 Environment ..................................................................................................13

4 Red Hat Enterprise Virtualization Environment..............................................14

5 Install and Configure Red Hat Enterprise Virtualization Manager.................15


5.1 Microsoft Components and Applications........................................................................15
5.2 Helper Applications........................................................................................................16
5.3 Active Directory Domain................................................................................................16
5.4 Windows Update............................................................................................................16
5.5 Red Hat Enterprise Virtualization Manager Application.................................................17

6 Hosts...............................................................................................................21
6.1 Install and Configure the Standalone Red Hat Enterprise Virtualization Hypervisor......21
6.1.1 Media.......................................................................................................................22
6.1.1.1 CD/DVD..............................................................................................................22
6.1.1.2 USB Drive...........................................................................................................22
6.1.1.3 PXE.....................................................................................................................22
6.1.2 Kernel Options..........................................................................................................22
6.1.3 Configuration............................................................................................................23
6.1.3.1 Configure storage partitions................................................................................23
6.1.3.2 Configure authentication.....................................................................................24
6.1.3.3 Set the hostname................................................................................................24
6.1.3.4 Networking setup................................................................................................24
6.1.3.5 Configure RHEV Host.........................................................................................24

6.1.3.6 Install locally and reboot.....................................................................................25
6.1.4 Approve....................................................................................................................25
6.2 Configuring KVM on Red Hat Enterprise Linux 5.4........................................................26
6.3 Power Management.......................................................................................................27

7 Activating a Data Center.................................................................................28

8 Populating the ISO Library.............................................................................30

9 Creating a Virtual Machine.............................................................................31


9.1 Red Hat Enterprise Linux 5.4 Installed from a PXE Server............................................31
9.2 Windows Server 2003 from Uploaded ISO....................................................................37

10 Resource Management................................................................................40
10.1 Configuring a Data Center...........................................................................................40
10.1.1 Create the Data Center..........................................................................................40
10.1.2 Create a Cluster.....................................................................................................42
10.1.2.1 Cluster Policy Setting........................................................................................42
10.1.3 Relocating an Existing Host ..................................................................................44
10.1.4 Create New Storage Pool.......................................................................................44
10.1.4.1 Fibre Channel...................................................................................................44
10.1.4.2 iSCSI.................................................................................................................45
10.1.5 Attach Storage Domain..........................................................................................46
10.2 Creating Additional Networks.......................................................................................47
10.2.1 Data Center Logical Network.................................................................................47
10.2.2 Cluster....................................................................................................................48
10.2.3 Hosts......................................................................................................................49
10.2.4 Virtual Machine.......................................................................................................50
10.2.5 Operating System Configuration............................................................................50
10.3 Additional Data Storage...............................................................................................50

11 Managing Virtual Machines..........................................................................51


11.1 Definition......................................................................................................................51
11.2 State............................................................................................................................51
11.2.1 Starting Virtual Machines.......................................................................................51
11.2.1.1 Run...................................................................................................................51
11.2.1.2 Run Once..........................................................................................................51
11.2.2 Shutting Down Virtual Machines............................................................................51
11.2.2.1 Guest................................................................................................................51

11.2.2.2 Shutdown..........................................................................................................51
11.2.2.3 Stop..................................................................................................................52
11.2.2.4 Suspend............................................................................................................52
11.2.3 Rebooting Virtual Machines...................................................................................52
11.3 Migration......................................................................................................................52
11.3.1 User Controlled......................................................................................................52
11.3.2 Cluster Policy.........................................................................................................53
11.3.3 Maintenance...........................................................................................................53
11.4 Snapshots....................................................................................................................53
11.4.1 Create.....................................................................................................................53
11.4.2 Preview...................................................................................................................54
11.4.3 Commit...................................................................................................................54
11.4.4 Undo.......................................................................................................................54
11.4.5 Delete.....................................................................................................................54
11.5 Templates....................................................................................................................54
11.5.1 Preparation.............................................................................................................55
11.5.1.1 Red Hat Enterprise Linux..................................................................................55
11.5.1.2 Windows...........................................................................................................56
11.5.2 Creating Templates................................................................................................57
11.5.3 Using Templates....................................................................................................58
11.5.4 Copy Templates.....................................................................................................58
11.6 Resources....................................................................................................................58
11.6.1 Removable Media..................................................................................................58
11.6.2 Virtual Disks...........................................................................................................59
11.6.2.1 Moving the Storage Domain.............................................................................59
11.6.2.2 Edit....................................................................................................................59
11.6.2.3 Add...................................................................................................................60
11.6.2.4 Remove............................................................................................................61
11.6.3 Network Interfaces.................................................................................................61
11.6.3.1 Edit....................................................................................................................61
11.6.3.2 Add...................................................................................................................61
11.6.3.3 Remove............................................................................................................61

12 Managing Hosts............................................................................................62
12.1 Maintenance Mode......................................................................................................62
12.2 Power Management.....................................................................................................62
12.3 Manual Fence..............................................................................................................62
12.4 Remove.......................................................................................................................63
12.5 Upgrade.......................................................................................................................63

12.5.1 Red Hat Enterprise Virtualization Hypervisor Host.................................................63
12.5.2 Red Hat Enterprise Linux Host...............................................................................63

13 Red Hat Enterprise Virtualization Manager Navigation Shortcuts...............64

14 Conclusion....................................................................................................66

Appendix A: iptables............................................................................................67

Appendix B: USB flash drives.............................................................................68

Appendix C: Configuring an NFS Server.............................................69

Appendix D: Red Hat Enterprise Linux iSCSI Target..........................................73

1 Executive Summary
Industry-leading technology, enterprise-grade performance, scalability and security at the
lowest price point, and the widest ecosystem of hardware vendors make Red Hat Enterprise
Virtualization the clear platform choice for companies looking to protect their current
investments in virtualization technology and extend into the cloud-based future.

Red Hat Enterprise Virtualization for Servers consists of the following two components:
• Red Hat Enterprise Virtualization Manager for Servers: A feature-rich server
virtualization management system that provides advanced capabilities for hosts and
guests.
• Red Hat Enterprise Virtualization Hypervisor: A modern hypervisor based on KVM
which can be deployed either as a standalone bare metal hypervisor (included with
Red Hat Enterprise Virtualization for Servers), or as Red Hat Enterprise Linux 5.4 and
later (purchased separately) installed as a hypervisor host.

Red Hat Enterprise Virtualization for Servers offers:


• Comprehensive management functionality, including Live Migration, High Availability,
Power Manager, System Scheduler, and more.
• Unmatched consolidation ratios and performance.
• Lowest cost of ownership among enterprise virtualization platforms.
• Windows and Linux guest support.
• Full compatibility with Red Hat Enterprise Linux and the complete ecosystem of Red
Hat partners for virtualization of mission-critical applications.

Red Hat Enterprise Virtualization Manager for Servers allows enterprises to centrally manage
their entire virtual environment – virtual data centers, clusters, hosts, guest virtual servers,
networking and storage. Via advanced functionality like High Availability, Live Migration, and
System Scheduler, Red Hat Enterprise Virtualization helps enterprises significantly reduce
their IT operational and capital expenses.

This paper leads the reader through the steps of deploying Red Hat Enterprise Virtualization
Manager for Servers and provides an overview of the most common activities of the day-to-
day management of the environment.

2 Red Hat Enterprise Virtualization -
Overview
2.1 Red Hat Enterprise Virtualization - Portfolio
Server virtualization offers tremendous benefits for enterprise IT organizations – server
consolidation, hardware abstraction, and internal clouds deliver a high degree of operational
efficiency. However, today server virtualization is not used pervasively in the production
enterprise data center. Some of the barriers preventing wide-spread adoption of existing
proprietary virtualization solutions are performance, scalability, security, cost, and ecosystem
challenges.
The Red Hat Enterprise Virtualization portfolio is an end-to-end virtualization solution, with
use cases for both servers and desktops, designed to overcome these challenges, enable
pervasive data center virtualization, and unlock unprecedented capital and operational
efficiency. The Red Hat Enterprise Virtualization portfolio builds upon the Red Hat Enterprise
Linux platform that is trusted by thousands of organizations on millions of systems around the
world for their most mission-critical workloads. Combined with KVM, the latest generation of
virtualization technology, Red Hat Enterprise Virtualization delivers a secure, robust
virtualization platform with unmatched performance and scalability for Red Hat Enterprise
Linux and Windows guests.
Red Hat Enterprise Virtualization consists of the following server-focused products:
1. Red Hat Enterprise Virtualization Manager (RHEV-M) for Servers: A feature-rich server
virtualization management system that provides advanced management capabilities for
hosts and guests, including high availability, live migration, storage management,
system scheduler, and more.

2. A modern hypervisor based on KVM (Kernel-based Virtual Machine), which can be
   deployed either as:

   • Red Hat Enterprise Virtualization Hypervisor (RHEV-H): A standalone, small
     footprint, high performance, secure hypervisor based on the Red Hat Enterprise
     Linux kernel.

   Or

   • Red Hat Enterprise Linux 5.4: The latest Red Hat Enterprise Linux platform
     release that integrates KVM hypervisor technology, allowing customers to
     increase their operational and capital efficiency by leveraging the same hosts to
     run both native Red Hat Enterprise Linux applications and virtual machines
     running supported guest operating systems.

2.2 Kernel-based Virtual Machine (KVM)
A hypervisor, also called a virtual machine monitor (VMM), is a computer software platform that
allows multiple (“guest”) operating systems to run concurrently on a host computer. The guest
virtual machines interact with the hypervisor which translates guest I/O and memory requests
into corresponding requests for resources on the host computer.
Running fully virtualized guests, i.e., guests with unmodified operating systems, used to
require complex hypervisors and incurred a performance penalty for the emulation and
translation of I/O and memory requests.
Over the last few years chip vendors Intel and AMD have been steadily adding CPU features
that offer hardware enhancements to support virtualization. Most notable are:
1. First-generation hardware assisted virtualization: Removes the requirement for the
   hypervisor to scan and rewrite privileged kernel instructions, using Intel VT
   (Virtualization Technology) and AMD's SVM (Secure Virtual Machine) technology.

2. Second-generation hardware assisted virtualization: Offloads virtual-to-physical
   memory address translation to the CPU/chipset, using Intel EPT (Extended Page
   Tables) and AMD RVI (Rapid Virtualization Indexing) technology. This provides a
   significant reduction in memory address translation overhead in virtualized
   environments.

3. Third-generation hardware assisted virtualization: Allows PCI I/O devices to be
   attached directly to virtual machines, using Intel VT-d (Virtualization Technology
   for Directed I/O) and AMD IOMMU. This generation also includes SR-IOV (Single
   Root I/O Virtualization), which allows special PCI devices to be split into multiple
   virtual devices. These features provide significant improvement in guest I/O
   performance.
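Whether a given host provides these extensions can be verified from the CPU flags that Linux exposes. The sketch below is a minimal shell helper, not part of RHEV itself; it takes the cpuinfo path as an argument so it can also be run against a saved copy, and looks for the vmx (Intel VT) or svm (AMD SVM) flag advertised by first-generation support.

```shell
# Report whether a cpuinfo-style file advertises hardware virtualization.
# vmx = Intel VT, svm = AMD SVM; KVM requires one of the two.
check_virt_flags() {
    # $1: path to a cpuinfo-style file (normally /proc/cpuinfo)
    if grep -qE '^flags.* (vmx|svm)( |$)' "$1"; then
        echo "hardware virtualization supported"
    else
        echo "no vmx/svm flag found"
    fi
}
```

On a prospective host this would be invoked as `check_virt_flags /proc/cpuinfo`; if neither flag is present, KVM cannot run fully virtualized guests on that machine.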

The great interest in virtualization has led to the creation of several different hypervisors.
However, many of these pre-date hardware-assisted virtualization and are therefore somewhat
complex pieces of software. With the advent of the above hardware extensions, writing a
hypervisor has become significantly easier and it is now possible to enjoy the benefits of
virtualization while leveraging existing open source achievements to date.
Kernel-based Virtual Machine (KVM) turns Linux into a hypervisor. Red Hat Enterprise Linux
5.4 provides the first commercial-strength implementation of KVM, which is developed as part
of the upstream Linux community.
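A quick sanity check on such a host is to confirm that the KVM kernel modules are loaded. The helper below is a small sketch that parses lsmod-style output passed in as text, purely to keep the example self-contained; the module names kvm, kvm_intel, and kvm_amd are those shipped with the RHEL 5.4 KVM stack.

```shell
# Return success if lsmod-style output shows a loaded KVM module.
# RHEL 5.4 loads kvm plus the vendor module kvm_intel or kvm_amd.
kvm_loaded() {
    # $1: text in the format produced by `lsmod`
    echo "$1" | grep -qE '^kvm(_intel|_amd)? '
}
```

On a live host this would typically be used as `kvm_loaded "$(lsmod)" && echo "KVM modules loaded"`.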

2.2.1 Traditional Hypervisor Model


The traditional hypervisor model consists of a software layer that multiplexes the hardware
among several guest operating systems. The hypervisor performs basic scheduling and
memory management, and typically delegates management and I/O functions to a special,
privileged guest.
Today's hardware, however, is becoming increasingly complex. The so-called “basic”
scheduling operations have to take into account multiple hardware threads on a core, multiple
cores on a socket, and multiple sockets on a system. Similarly, on-chip memory controllers
require that memory management take into account the Non-Uniform Memory Access (NUMA)
characteristics of a system. While great effort is invested into adding these capabilities to
hypervisors, we already have a mature scheduler and memory management system that
handles these issues very well – the Linux kernel.

2.2.2 Linux as a Hypervisor


By adding virtualization capabilities to a standard Linux kernel, we can enjoy all the
fine-tuning work that has gone (and is going) into the kernel, and bring that benefit into a
virtualized environment. Under this model, every virtual machine is a regular Linux process
scheduled by the standard Linux scheduler. Its memory is allocated by the Linux memory
allocator, with its knowledge of NUMA and integration into the scheduler.
By integrating into the kernel, the KVM 'hypervisor' automatically tracks the latest hardware
and scalability features without additional effort.

2.2.3 A Minimal System


One of the advantages of the traditional hypervisor model is that it is a minimal system,
consisting of only a few hundred thousand lines of code. However, this view does not take
into account the privileged guest. This guest has access to all system memory, either through
hypercalls or by programming the DMA hardware. A failure of the privileged guest is not
recoverable, as the hypervisor is unable to restart it.
A KVM-based system's privilege footprint is truly minimal: only the host kernel plus a few
thousand lines of the kernel mode driver have unlimited hardware access.

2.2.4 KVM Summary


Leveraging new silicon capabilities, the KVM model introduces an approach to virtualization
that is fully aligned with the Linux architecture and all of its latest achievements. Furthermore,
integrating the hypervisor capabilities into a host Linux kernel as a loadable module simplifies
management and improves performance in virtualized environments, while minimizing impact
on existing systems.
An important feature of any Red Hat Enterprise Linux update is that kernel and user APIs are
unchanged, so that Red Hat Enterprise Linux 5 applications are not required to be re-built or
re-certified. This extends to virtualized environments: with a fully integrated hypervisor, the
application binary interface (ABI) consistency offered by Red Hat Enterprise Linux means that
applications certified to run on Red Hat Enterprise Linux on physical machines are also
certified when run in virtual machines. With this, the portfolio of thousands of certified
applications for Red Hat Enterprise Linux applies to both environments.

3 Environment
The following table lists the major components of the testbed.

System - Tintin         Generic Tower
                        Intel x86 Family 6 Model 23 @ 2.33 GHz
                        4 GB RAM

System - Monet          HP DL580 G5
                        Quad Socket, Quad Core (16 cores total)
                        Intel Xeon X7350 @ 2.93 GHz
                        64 GB RAM
                        Gigabit Ethernet
                        4G FC HBA

System - Degas          HP DL580 G5
                        Quad Socket, Quad Core (16 cores total)
                        Intel Xeon X7350 @ 2.93 GHz
                        64 GB RAM
                        Gigabit Ethernet
                        4G FC HBA

System - Renoir         HP DL585 G2
                        Red Hat Enterprise Linux 5.2 (2.6.18-92.1.6.el5xen)
                        Quad Socket, Dual Core (8 cores total)
                        AMD Opteron 8222 SE @ 3.0 GHz
                        72 GB RAM
                        Gigabit Ethernet
                        4G FC HBA

Storage Array           MSA2212fc
                        24 x 146 GB 15K drives

Storage Interconnect    HP StorageWorks 4/16 SAN Switch

4 Red Hat Enterprise Virtualization
Environment
Red Hat Enterprise Virtualization for Servers consists of the RHEV Manager, used to control
the environment, and hosts. The hosts consist of servers that have been deployed with the
KVM hypervisor. The hypervisor can be deployed as either a standalone configuration or
integrated with a system installed with Red Hat Enterprise Linux 5.4.
A host is a physical server that provides the CPU, memory, and connectivity to storage and
networks used by the virtual machines (VMs). The local storage of a standalone host is used
only for holding the RHEV Hypervisor.
A cluster is a group of hosts of similar architecture. The requirement of similar architecture
allows a virtual machine to be migrated from host to host in the cluster without having to shut
down and restart the virtual machine. A cluster consists of one or more hosts, but a host can
only be a member of one cluster.
A data center is a collection of one or more clusters that have resources in common.
Resources that have been allocated to a data center can be used only by the hosts belonging
to that data center. The resources relate to storage and networks.
All hosts have a network interface assigned to the logical network named rhevm. This network
is used for the communications between the hypervisor and the manager. Additionally, for the
configuration in this paper, this network also serves as the public-facing network. Additional
logical networks are created on the data center and applied to one or more clusters. To
become operational, a host attaches an interface to the logical network. While the actual
physical network can span data centers, a logical network can only be used by the
clusters and hosts of the data center in which it was created.
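On a Red Hat Enterprise Linux host, a logical network such as rhevm is realized as a bridge with the physical interface enslaved to it. The network scripts below sketch the resulting shape; RHEV normally generates these itself when a host is added, and the interface name (eth0), DHCP addressing, and output directory here are illustrative assumptions only.

```shell
# Sketch of the network scripts behind the rhevm logical network.
# NETDIR would be /etc/sysconfig/network-scripts on a real host.
NETDIR=${NETDIR:-.}

# The bridge carries the IP configuration for the host.
cat > "$NETDIR/ifcfg-rhevm" <<'EOF'
DEVICE=rhevm
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
EOF

# The physical interface holds no address; it is enslaved to the bridge.
cat > "$NETDIR/ifcfg-eth0" <<'EOF'
DEVICE=eth0
BRIDGE=rhevm
ONBOOT=yes
EOF
```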
Storage is divided into two categories. One type is the storage used to contain CD and DVD
ISO images and floppy disk images that can be used to install the VMs. An ISO library can
only be used by one data center. NFS is the only supported type of ISO library storage for
Red Hat Enterprise Virtualization for Servers v2.1.
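Because the ISO library must be NFS, a minimal export on the serving host can be sketched as follows. The directory path and export options are example assumptions, not values mandated by RHEV; Appendix C covers NFS server configuration in detail.

```shell
# Hypothetical NFS export backing a RHEV ISO library.
# The path and options below are example choices only.
ISO_DIR=/exports/rhev-iso
EXPORT_LINE="$ISO_DIR *(rw,sync,no_root_squash)"

# On the NFS server, as root:
# mkdir -p "$ISO_DIR"
# echo "$EXPORT_LINE" >> /etc/exports
# exportfs -ra    # re-read /etc/exports and re-export all entries
```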
Data, the second type of storage, is used for disk images of the virtual machines, snapshots,
and storage for templates. The first storage of this type attached to a data center is identified
as type Data (Master). Any secondary data storage is identified as type Data. Again, any
storage is dedicated to only one data center. This storage can be NFS, iSCSI, or Fibre
Channel; however, all the data storage for a data center must be of the same type.

5 Install and Configure Red Hat Enterprise
Virtualization Manager
This release of the Red Hat Enterprise Virtualization Manager is hosted on the 32-bit version
of Microsoft Windows Server 2003. After installing the Windows 2003 operating system, the
following steps were performed:
• Install/Verify various Microsoft components and applications
• Optionally install helper applications
• Optionally join an Active Directory Domain
• Apply all appropriate updates
• Install and configure RHEV Management software

5.1 Microsoft Components and Applications


After the user has installed Windows Server 2003 and applied updates, the Service Pack 2
installation should be verified. When the General tab of the System Properties window is
displayed, confirm that Service Pack 2 is listed in the System section.
The Red Hat Enterprise Virtualization Manager software utilizes .NET. Verify that the correct
version and service pack (.NET Framework 3.5 Service Pack 1) are present on the computer.
This should be listed in the Change or Remove Programs tab of the Add or Remove
Programs window of the Control Panel.
Several specific sub-components of the Application Server must be in place. View the
Details... of the Application Server in the Windows Components Wizard. Verify that the
following are checked:
• Application Server Console
• ASP.NET
• Enable network COM+ access
• Enable network DTC access
• Internet Information Services (IIS)

If any of these were not previously checked, the OS installs the component, which may require
access to the installation media.

Additionally, listed under Windows Components is Internet Explorer Enhanced Security
Configuration. Disable this option by clearing any existing check in the box in front of it.

5.2 Helper Applications


Red Hat recommends that Windows PowerShell 1.0 be installed. If this is not present on the
system, the appropriate version for the system can be obtained by searching the Microsoft
web site. If PowerShell is installed on the system, a command window can be opened by
typing 'powershell' in the Run... dialog box of the Start menu.

5.3 Active Directory Domain


System and user authentication can be local or through the use of an Active Directory
Domain. If there is an existing domain, an administrator can join using the Computer Name
tab of the System Properties window. Another option would be to configure the system which
runs the RHEV Manager software as a domain controller.

5.4 Windows Update


Prior to installing the RHEV Management software, Windows Update was repeated until there
were no more applicable updates. Additionally, automatic updates were scheduled.

5.5 Red Hat Enterprise Virtualization Manager
Application
The installation program must be available to the server. While an ISO image containing the
needed software can be downloaded using the download software link, the following
procedure will reliably find the software components. From Red Hat Network using an
account with the RHEV for Servers entitlement, select the Red Hat Enterprise Virtualization
Manager Channel filter in the Channels tab. Expand the Red Hat Enterprise Virtualization
entry and select the appropriate architecture for the product to be installed.

Select the Downloads link near the top of the page. Select the Windows Installer to download
the RHEV Manager installation program. While at this page, also download Guest Tools ISO,
VirtIO Drivers VFD, and VirtIO Drivers ISO.

Execute the installation program, e.g. rhevm-2.1-37677.exe. After the initial screen, accept
the End User License Agreement. When the feature checklist screen is displayed, verify that
all features have been selected.

Next, the administrator chooses either to use an existing SQL Server 2005 database or to install the
express version locally. After selecting to install SQLEXPRESS, a strong password must be
supplied for the 'sa' user. The destination folder for the install may be changed but was left
as is. The destination web site for the portal must be chosen next; again, the defaults were
used. On the next screen, specify whether to use Domain or local authentication. If local is
used, provide the user name and password for an account belonging to the Administrators
group.
On the next screen, input the organization name and computer name for use in the generation
of certificates. While the option to change the net console port is presented, it was left as is.
Proceeding past the Review screen begins the actual installation, which consists of installing
several items. The installation process prompts the administrator to install OpenSSL,
which provides secure connectivity to the Red Hat Enterprise Virtualization Hypervisor and
Red Hat Enterprise Linux as well as other systems. pywin32 is installed on the server. If selected, as in
this case, SQLEXPRESS is installed. The RHEV Manager is then installed with no further
interaction; pressing the Finish button completes the installation.
Verify the install by starting RHEV Manager. From the Start menu, select All Programs => Red
Hat => RHEV Manager => RHEVManager. The certificate is installed during the first portal
access. At the Login screen, enter the User Name and Password for the RHEV administrator
that were entered during the installation; a successful login presents the administration portal.

6 Hosts
Red Hat Enterprise Virtualization for Servers requires at least one host. A host can be
either the standalone RHEV Hypervisor or a Red Hat Enterprise Linux 5.4 system running the
KVM hypervisor. A multi-host environment can be homogeneous or mixed with both types of
hosts.
The remainder of this section describes the procedures to install each type of host.

6.1 Install and Configure the Standalone Red Hat Enterprise Virtualization Hypervisor
The hypervisor software is downloaded through Red Hat Network. It is
provided as a downloadable rpm package or as part of a zip file.
The zip file can be downloaded by filtering on the Red Hat Enterprise Virtualization Hypervisor channel.
Expand the matched channel and select the architecture link. Select the Download
link near the top of the page. Selecting the RHEV-H image downloads the zip file, which contains
the ISO image and several manifests. The following command extracts the contents of the zip
file.
# unzip rhev-hypervisor-5.4-2.1.1.el5rhev.zip
Archive: rhev-hypervisor-5.4-2.1.1.el5rhev.zip
inflating: rhev-hypervisor.iso
inflating: manifests/rpm-manifest.txt
inflating: manifests/srpm-manifest.txt
inflating: manifests/dir-manifest.txt
inflating: manifests/file-manifest-post.txt
inflating: manifests/dir-manifest-post.txt
inflating: manifests/rpm-manifest-post.txt
inflating: manifests/srpm-manifest-post.txt
inflating: manifests/file-manifest.txt
extracting: SHA1SUM
#
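The zip also includes a SHA1SUM manifest, so the download can be verified before use. The following sketch creates a stand-in file so it can run anywhere; on a real system only the final command is needed, run from the directory where the zip was extracted:

```shell
# Stand-ins so the sketch is self-contained; the real zip provides
# rhev-hypervisor.iso and SHA1SUM in the extraction directory.
cd "$(mktemp -d)"
echo "demo image" > rhev-hypervisor.iso
sha1sum rhev-hypervisor.iso > SHA1SUM

# The actual verification step: every file listed in SHA1SUM must report OK.
sha1sum -c SHA1SUM
# prints: rhev-hypervisor.iso: OK
```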

The rpm can be downloaded by searching for 'rhev-hypervisor' using the packages option of
the search field near the top of all Red Hat Network pages. Select the link for rhev-hypervisor
on the page presented after performing the search. On the next page, which lists the
Packages by Name, select the link for the correct package. Towards the bottom of the
Package Search - Details page is a Download Package link. This package contains an ISO
image file, a few utilities for using the ISO, and manifest files. When installed, the ISO image
can be found at /usr/share/rhev-hypervisor/rhev-hypervisor.iso. The following command
installs the package after it has been downloaded.
# rpm -ivh rhev-hypervisor-5.4-2.1.1.el5rhev.noarch.rpm
Preparing... ###########################################
[100%]
1:rhev-hypervisor ###########################################
[100%]
#

The ISO file contains a live image of the RHEV Hypervisor. A live image may be booted and
run as provided or installed to persistent storage. By installing to a hard drive, the
recommended method, specific configuration parameters are saved for subsequent boots.

6.1.1 Media
There are three primary methods for using the ISO image.
• Burn a CD/DVD
• Create a bootable image on a USB drive
• PXE boot image – used for booting a system over the network

6.1.1.1 CD/DVD
Burn the disk using a tool such as growisofs. The disk then can be used to boot the server
which in turn runs the RHEV Hypervisor.

6.1.1.2 USB Drive


The utility to create the bootable USB drive is installed with the rpm; the tools are not included
if only the zip file was downloaded.
Extracting the ISO image to a USB drive requires that the administrator identify the device
name of the drive. After connecting the drive to a USB port, review the output of dmesg or
the contents of /var/log/messages to identify the appropriate device.
Appendix B contains a simple script which can be used to identify connected USB devices.
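The same idea as the Appendix B script can be sketched directly against sysfs (the /sys/block layout is standard; the loop simply prints nothing when no USB disk is attached):

```shell
# Report block devices whose kernel device path runs through a USB bus.
for dev in /sys/block/sd*; do
    [ -e "$dev" ] || continue                 # no sd* devices at all
    if readlink -f "$dev/device" | grep -q '/usb'; then
        echo "USB disk: /dev/${dev##*/}"
    fi
done
```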
The following command is used to configure the device and requires confirmation to overwrite
existing data on the device.
# livecd-iso-to-disk --format --reset-mbr /usr/share/rhev-hypervisor/rhev-
hypervisor.iso /dev/sdi1

Once the Live USB has been created, connect it to the server to run the RHEV Hypervisor
and reboot.

6.1.1.3 PXE
The following command generates a subdirectory named tftpboot which contains files for use
on a PXE server.
# livecd-iso-to-pxeboot /usr/share/rhev-hypervisor/rhev-hypervisor.iso

The files must be copied to the appropriate place on a configured PXE server.
Downloading and installing the rhev-hypervisor-pxe package, in addition to the rhev-
hypervisor package, performs the extraction and places the results in the /usr/share/rhev-
hypervisor directory.
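Done by hand, the copy can be sketched as below. The tftp root (/var/lib/tftpboot is a common default) and the generated file names (vmlinuz0 and initrd0.img are typical livecd-tools output) are assumptions; confirm both against the local PXE server and the generated tftpboot directory. Scratch directories are used here so the sketch is harmless to run:

```shell
cd "$(mktemp -d)"
# Stand-in for the directory produced by livecd-iso-to-pxeboot.
mkdir -p tftpboot
touch tftpboot/vmlinuz0 tftpboot/initrd0.img

# The actual step: copy the generated files into the tftp root
# (substitute /var/lib/tftpboot or the local equivalent for TFTPROOT).
TFTPROOT=$(mktemp -d)
cp -r tftpboot/. "$TFTPROOT"/
ls "$TFTPROOT"
```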

6.1.2 Kernel Options


Upon boot from any of the above media, the system pauses, allowing the user to input
various options to configure the hypervisor. Typically, no options are required; the boot
continues after a short delay, or pressing 'Enter' proceeds immediately.

6.1.3 Configuration
As the system completes the boot, the user is presented with the Configuration Setup menu.
This menu can be reached on previously configured systems by supplying “firstboot” as a
kernel parameter.

To configure the system used in this example, the following were selected:
• Configure storage partitions
• Configure authentication
• Set the hostname
• Networking Setup
• Configure RHEV Host
• Install locally and reboot

6.1.3.1 Configure storage partitions


Selecting Configure storage partitions presents a sub-menu with the following choices:
• Configure
• Review
• Commit configuration
• Return to the Hypervisor Configuration Menu

Configure provides options to select the location for the installation disk and adjust the sizes
of the partitions onto which the Hypervisor can be installed. Selecting Configure presents a
list of discovered devices followed by selectable options. The options consist of a selection for
each discovered device, Multipath, Abort, and Manual selection. Manual selection allows the
user to enter a device that discovery may have been unable to detect. Multipath allows the
user to specify devices to be controlled by the multipath daemon. The first device,
/dev/cciss/c0d0, was selected and the default sizes of the partitions (boot, swap, root, config,
logging, and data) were used.
Review was selected, which displayed the names and sizes of the partitions. After selecting
Commit configuration, a warning that existing data on the local device will be destroyed is
displayed and requires user confirmation.

6.1.3.2 Configure authentication


Selecting Configure authentication presents a menu with the following options:
• Set the administrator password
• Toggle SSH password authentication
• Return to the Hypervisor Configuration Menu

Selecting the Set the administrator password option prompts for the root user's password and
confirmation. Selecting Toggle SSH password authentication prompts the user to answer "y"
or "n" to enable or disable remote access to the RHEV Hypervisor using ssh.

6.1.3.3 Set the hostname


Selecting Set the hostname prompts the user for the name to be used for the Hypervisor.

6.1.3.4 Networking setup


Selecting the Networking setup option presents a list of network devices, a menu with DNS,
NTP, Abort, and Save And Return To Menu options, and a selection for each NIC. Select the
NIC used to connect to the RHEV Manager. The user is informed that the adapter is to be
configured into a bridge. The user must select whether or not they want the NIC to blink for 10
seconds to help identify the adapter. The user can then select to enable 802.1Q VLAN
support on the host. The next prompt presents options to select IPv4 support of Static, DHCP,
No IPv4 configuration, or to Abort the network configuration. The system in this configuration
example uses DHCP; therefore, this option was selected. The user is asked to confirm the
configuration is correct, with an option to abort. After selecting 'y', the menu is re-displayed.
Because DHCP handles DNS and NTP, the Save And Return To Menu option was selected.
However, if a static address is used or DHCP configuration does not configure DNS and NTP,
these options must be selected and configured for proper operation.
As the system's networks are restarted, verify that the newly configured bridge is started,
indicated by OK after bringing up the bridged interface. The bridge for eth0 is
named breth0.
During the install, configure only the network that communicates with the RHEV Manager
system. Additional networks can be configured later using the RHEV Manager. Any
addresses used are expected to be fully resolvable. The systems used in this paper receive
their DNS configuration as part of their DHCP registration; otherwise, the DNS and NTP
menu options could have been selected and saved.

6.1.3.5 Configure RHEV Host


Upon selecting Configure RHEV host, the user is asked for two host names or network
addresses, one for the RHEV management server and the second for the NetConsole server.
NetConsole allows console messages to be recorded by remote systems. The RHEV
Manager and NetConsole server are usually the same system. It is recommended to use host
names to confirm proper name resolution. While the port for each can be specified, the ports
are not required for standard configurations and are best left unspecified.

6.1.3.6 Install locally and reboot


After selecting Install locally and reboot, the administrator can add any desired kernel boot
options. No additional boot options were required or supplied. After confirmation, the
hypervisor is installed to the permanent storage and the system reboots.
While the system is rebooting, remove the media that was used to install the hypervisor.

6.1.4 Approve
Once the RHEV Hypervisor system has rebooted, visit the Hosts tab in the RHEV Manager.
The system should be listed with a status of Pending Approval. Pressing the right mouse
button on the entry containing the system produces a menu of options. Select the Approve
option in this menu, or press the Approve button after selecting the system. This causes the
Edit&Approve Host window to appear. The default values were accepted by pressing the
Approve button. The status changes to Installing and then to Up when complete.

6.2 Configuring KVM on Red Hat Enterprise Linux 5.4
It is assumed that a system that has been installed with Red Hat Enterprise Linux 5.4 is
properly configured, including IP name resolution, SELinux, and firewall. If your Red Hat
Enterprise Linux system is firewall-enabled with iptables, specific ports require opening for
proper functionality. See Appendix A for details relating to the firewall settings.
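Appendix A remains the authoritative list of ports. Purely as an illustration of the rule format, an entry in /etc/sysconfig/iptables opening one such port (54321/tcp, the VDSM management port, shown here as an assumed example) looks like:

```
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 54321 -j ACCEPT
```

The chain name RH-Firewall-1-INPUT matches the default Red Hat Enterprise Linux 5 firewall layout; after editing, `service iptables restart` reloads the rules.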
To use a Red Hat Enterprise Linux 5.4 system as a host with RHEV, the system requires
access to the appropriate software. Subscribe to the RHEL Virtualization and Red Hat
Enterprise Virt Management Agent channels via Red Hat Network to provide access to the
necessary software.
When adding hosts to the RHEV Manager, be aware that unless the data center has a cluster
whose CPU type, as denoted by CPU Name, matches the system's, the add fails. The first
system added automatically sets the CPU type of the Default cluster.
Because the system being added has a different CPU type (64-bit AMD with NX) than that of
the other hosts (64-bit Intel with NX), a new cluster, amd, was created in the Default data
center so the node could be successfully added.
In the Hosts tab, select the New button or the New option from the right mouse button menu.
This presents the New Host dialog box. The Name, Address, and Root Password were
supplied in addition to selecting amd in the Host Cluster pull down. Optionally, the Enable
Power Management information can be entered. Selecting OK starts the install which takes
several minutes and includes a reboot of the system.

6.3 Power Management
Power Management allows the RHEV Manager to control the power of the hosts. The
Manager may do this to maintain consistency in a cluster. Power Management requires that
the server support remote management. RHEV currently supports the following forms of
remote managers: Dell DRAC, HP iLO, IBM RSA, IBM Blade, or IPMI LAN. Power
management commands are issued through a 'proxy host', another member of the data
center; if only a single host exists in the data center, power management commands cannot
be issued. The address of the selected power management device must be accessible via
the configured networks of the hosts.
While Power management could have been enabled when the system was approved, it is
being shown here as a separate step. Select the host and then either select the Edit button or
the right mouse button Edit option to present the Edit Host window. Select the Enable Power
Management checkbox. The Address, User Name, Password and Type fields were
appropriately populated. Pressing the Test button confirms that the process is communicating
correctly.

7 Activating a Data Center
Although a host has been added to the RHEV Manager, it cannot run a VM in its current state.
First, a data center must be active. When each host was added, it was added to the default
data center, named Default, and to the default cluster, also named Default (or, in the case of
the Red Hat Enterprise Linux host, to the amd cluster). In the Data Centers tab,
notice the red downward-pointing arrow and the Status field indicating the state of the data
center. By selecting the Default data center and then the Guide Me button, the actions
required before the data center can be activated are listed as links to assist in performing
each action.

The main Storage tab has buttons for creating the storage domains. However, the buttons in
the New Data Center – Guide Me dialog box provide a convenient method of performing the
steps required to support the activation of the data center.
The Configure Storage button is used for creating storage of type Data. Recall that Data
storage can be FC, NFS, or iSCSI based. By default, the Default data center uses NFS,
which is configured in this example. Section 10 demonstrates configuring iSCSI and FC
based data centers. Appendix C provides an example of exporting an NFS mount point from
a Red Hat Enterprise Linux system. In the following example, /rhev-data/defaultStore is
mounted from node renoir.lab.bos.redhat.com. On renoir, the owner and group of this
directory were set to 36, which correspond to the vdsm user and kvm group. Pressing the
Configure Storage button displays the New Storage dialog box. Verify that Type is set to NFS
and supply a Name and Export path; renoir.lab.bos.redhat.com:/rhev-data/defaultStore was
used in this case.
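The server-side preparation described above can be sketched as follows. The export options shown are typical rather than taken from Appendix C, and scratch paths are substituted so the sketch can be run harmlessly; on the real server the directory is /rhev-data/defaultStore and the export line belongs in /etc/exports:

```shell
# Scratch stand-ins; on the real NFS server DIR is /rhev-data/defaultStore
# and EXPORTS is /etc/exports.
DIR=$(mktemp -d)/rhev-data/defaultStore
EXPORTS=$(mktemp)

mkdir -p "$DIR"
chown 36:36 "$DIR" 2>/dev/null || true   # uid/gid 36: vdsm user, kvm group
echo "$DIR *(rw)" >> "$EXPORTS"          # export options are typical, not authoritative
cat "$EXPORTS"
# On the real server, re-read /etc/exports with:  exportfs -ra
```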
Selecting the Attach Storage button in the re-displayed New Data Center – Guide Me dialog
box produces the Attach Storage dialog box. The storage domain just created, defaultStore, is
listed. Select the corresponding checkbox and press OK. The data center has been
configured enough to change status to Up.
With the data center active, configuring an ISO Library allows installation from ISO images
and access to floppy images. Follow a similar procedure using the Configure ISO Library and
Attach ISO Library buttons to create and attach the defaultISOs library. With only optional
steps listed in the Guide Me dialog box, select Configure Later to dismiss the dialog box. The
ISO Library is attached but listed as Inactive. Select the ISO Library, either in the Storage tab
of the data center's Details pane or in the main level Storage tab. Select the Activate button to
complete the storage configuration for this data center.

Each data center requires that a host serve as the Storage Pool Manager (SPM),
indicated in the SpmStatus field of the Hosts tab.

8 Populating the ISO Library
The ISO Library is used to present CD, DVD and floppy images to a guest for reading. An
administrator may populate the library with images that can be used for installing operating
systems or other software.
The Guest Tools and VirtIO driver files that were downloaded when the RHEV Manager
installer was downloaded are recommended software to be available in the ISO Library. On
the RHEV Manager system, select Start => All Programs => Red Hat => RHEV Manager =>
ISO Uploader. In the Red Hat Virtualization ISO Uploader window, press the Add button and
select the downloaded files. The Data Center can be selected using the pull down. After
providing a password, pressing the Upload button initializes the library and places the software
where it can be mounted by Virtual Machines.

9 Creating a Virtual Machine
With the data center activated, the environment is ready to create Virtual Machines to run
guest operating systems. This section provides two examples:
• Red Hat Enterprise Linux 5.4 installed from a PXE server
• Windows 2003 installed from an uploaded ISO image

The initial steps to install a virtual machine are similar:


• Create a New Server virtual machine
• Define the Network Interface for the virtual machine
• Define Storage for the virtual machine
• Use Run Once to load the OS

Using previously created templates reduces the steps required; see Section 12.5.

9.1 Red Hat Enterprise Linux 5.4 Installed from a PXE Server
In the Virtual Machines tab of the RHEV Manager interface, select the New Server button.
This presents the New Virtual Machine dialog box, with the fields described in Table 1. While
several fields were left at their default values, a Name and a Description were provided, the
Operating System was set to 'Red Hat Enterprise Linux 5.x', Memory Size was set to '4096',
'4' was selected in the Number of CPU cores field, and the Highly Available option
was selected. When all options have been entered as desired, press OK.

• Template (pull down): Blank (default) or any templates saved to this data center. Blank
  allows installation from media.
• Name (text field): Must be completed.
• Description (text field): Optional.
• Host Cluster (pull down): Clusters of the chosen data center.
• Default Host (pull down): Auto Assign and all members of the chosen cluster. Auto Assign
  chooses the host on which to place the guest; this can be used to create a VM on a cluster
  with no current hosts.
• Memory Size (text field): Either GB or MB can be specified; if not specified, MB is assumed.
• Number of CPU cores (pull down): The available options are configurable using the
  Configuration Tool.
• Operating System (pull down): Unassigned, Red Hat Enterprise Linux 5.x, Red Hat
  Enterprise Linux 4.x, Red Hat Enterprise Linux 3.x, Other Linux, Windows XP, Windows
  2008, Windows 2003, Windows 2003 x64, or Other. Unassigned is a placeholder for later
  selection; Other is selected when attempting an OS not listed.
• Highly available (checkbox): When selected, the VM attempts to restart after a failure.
• Domain (text field): Only when Windows is selected for OS; the name of the Active
  Directory Domain that the guest can join. Optional.
• Time Zone (pull down): Only when Windows is selected for OS.
Table 1: New Virtual Machine options

The VM is listed in the Virtual Machines tab; however, the New Virtual Machine – Guide Me
window prompts the user through the steps required to finish configuring the guest. The New
Virtual Machine – Guide Me window lists two required actions and provides a button for each:
Configure Network Interfaces and Configure Virtual Disks. Selecting the Configure Network
Interfaces button displays the New Network Interface window.
The Virtual Machine uses the rhevm network, which provides public access. Section 11.6.3
demonstrates configuring additional networks. In the New Network Interface dialog box
(explained in Table 2), the defaults for Name ('eth0') and Network ('rhevm') were used. The
Type was switched to 'Red Hat VirtIO'. Selecting OK completes the addition of this virtual
network interface.
• Name (text field): Defaults to the next incrementally available name.
• Network (pull down): List of Logical Networks for the data center. See Section 11.2 on
  configuring logical networks.
• Type (pull down):
  • e1000: Emulates Intel's PRO/1000 family.
  • Red Hat VirtIO: Virtual hardware emulation developed for improved performance and/or
    features in virtualized guests. Red Hat Enterprise Linux 4.8 and 5.3 and later provide this
    driver as part of their initial install; it can be downloaded for earlier versions.
  • Dual mode rtl8139,VirtIO: Only available for Windows XP; allows a guest to come up
    using a standard emulation, then switch to VirtIO once the driver has been supplied.
  • rtl8139: Emulates Realtek's RTL8139/810x Family Fast Ethernet.
Table 2: New Network Interface options

To add a Virtual Disk for guest use, select the Configure Virtual Disks button in the New
Virtual Machine - Guide Me window, which is re-displayed after the network has been
configured. The following settings were input into the New Virtual Disk window (options
described in Table 3): '8' for Size (GB); 'System' for Disk type; 'VirtIO' for Interface. The OK
button was pressed and the disk was created. Select Configure Later in the New Virtual Machine
– Guide Me window.

• Size (GB) (text field)
• Storage Domain (pull down): List of Data Storage Domains in the guest's data center. Only
  available for the first disk.
• Disk type (pull down):
  • System: Sets Format to Preallocated and checks Wipe after delete.
  • Data: Sets Format to Preallocated and clears Wipe after delete.
• Interface (pull down):
  • IDE: Emulates Integrated Drive Electronics based drives.
  • VirtIO: VirtIO Block Device emulation; the supported driver is installed with Red Hat
    Enterprise Linux 4.8 and 5.3 and later, and is available for earlier versions.
• Format (pull down):
  • Thin Provisioned: Space is allocated as needed; the disk format is COW.
  • Preallocated: The entire requested space is allocated up front; the disk format is RAW.
• Wipe after delete (checkbox): Clears the disk image by overwriting it with zeroes prior to
  deallocation when the guest is removed.
• Is bootable (checkbox): Marks a disk as bootable. Required for the MBR of the guest OS.
Table 3: New Virtual Disk options

With the resources for the Virtual Machine configured, it must be booted in a state to install
the Operating System. This is accomplished using Run Once, either using the right mouse
button menu or the Run Once option of the Run button. The Run Virtual Machine(s) window
appears, explained in Table 4. For this install, Boot from Network(PXE) and VNC were
selected. Press OK. The status should quickly change from Down to Waiting for Launch to
Powering Up. Once the status displays Powering Up, start a console by selecting the Console
button (picture of a monitor) or selecting the right mouse button menu option Console: VNC.
The install should be completed like a standard machine install. If VirtIO was selected, the
install device is /dev/vda. When the install calls for a reboot, the Virtual Machine should be
shut down and then started with Run; otherwise it attempts to install from the PXE server
again. After First Boot prompts for several questions and reboots, local customizations may
be performed.
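Once the guest is running, the paravirtualized devices can be confirmed from inside it. The module and device names below assume VirtIO was selected for the disk and network; both commands simply print nothing on a machine without VirtIO hardware:

```shell
# Loaded VirtIO driver modules (virtio_blk, virtio_net, ...) and
# VirtIO disks (/dev/vdX); empty output outside a VirtIO guest.
lsmod 2>/dev/null | grep -i virtio || true
ls /sys/block 2>/dev/null | grep '^vd' || true
```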

• Attach Floppy: When checked, a floppy image is presented to the VM. See Section 9 for
  floppy images.
• Attach CD: When checked, the selected ISO image is presented to the VM. See Section 8
  for uploading ISOs.
• Boot Device:
  • Boot From Hard Disk: When selected, the VM boots from the OS installed on its virtual
    disk.
  • Boot From CD: Only available when Attach CD is checked; boots from the attached CD.
    Used for installs and Live CDs.
  • Boot From Network(PXE): When selected, the VM boots using the PXE protocol, allowing
    for network installation.
• Start in Pause Mode: When checked, the VM starts but is not scheduled for any CPU
  cycles.
• Reinitialize sysprep: For Windows systems; when selected, the answer file floppy is
  attached.
• Run Stateless: When checked, any changes on the guest OS are not saved. Can be used
  to test risky patches/upgrades, run a previewed snapshot, etc.
• Display Protocol:
  • VNC: Virtual Network Computing.
  • SPICE: Simple Protocol for Independent Computing Environments.
• Disable hardware acceleration: For hardware compatibility reasons, allows the disabling of
  Intel VT/AMD SVM, which has a significant impact on performance.
• Disable ACPI support: Also for hardware compatibility; allows the disabling of the Advanced
  Configuration and Power Interface (ACPI).
Table 4: Run Virtual Machine(s) options

9.2 Windows Server 2003 from Uploaded ISO
The start of the process of installing a Windows virtual machine is very similar. In the Virtual
Machines tab of the RHEV Manager portal, select the New Server button. This presents the
New Virtual Machine dialog box, whose options Table 1 describes. The Name and
Description were provided. The Operating System was set to 'Windows 2003'. '2048' was
input for Memory Size. '2' was selected for Number of CPU cores to specify the number of
vCPUs. The remaining fields were left at their default values. When all options had been
entered as desired, OK was pressed.

The New Virtual Machine – Guide Me window displays, listing the two actions
required to finish configuring the guest. Selecting the Configure Network Interfaces button
displays the New Network Interface window. In the New Network Interface window (fields
explained in Table 2), the defaults for Name ('eth0') and Network ('rhevm') were used. The
Type was set to 'Red Hat VirtIO', which requires the installation of the VirtIO driver; the other
driver options do not require additional drivers to be installed. Selecting OK completes the
addition of the virtual network.

To add the Virtual Disk, select the Configure Virtual Disks button in the re-displayed New
Virtual Machine - Guide Me window. The following settings were entered into the New Virtual
Disk window (options described in Table 3): '10' for Size (GB); 'System' for Disk type; 'VirtIO'
for Interface. The OK button was pressed and the disk was created. Select Configure Later in
the New Virtual Machine – Guide Me window.

The Virtual Machine's resources are configured. The next step is to boot using the installation
CD. Selecting Run Once, either from the right mouse button menu or from the Run button,
presents the Run Virtual Machine(s) window; Table 4 explains the fields. For this
install, the Attach CD checkbox was selected. The ISO image that corresponds to the first
Windows Server 2003 CD was chosen and the Boot From CD option was selected. If the user
opts to use a 'VirtIO' Interface for the virtual disk or network, then the Attach Floppy checkbox
should be selected with the 'virtio-drivers-1.0.0.vfd' that was uploaded in Section 8. Pressing
OK accepts the values as entered. As the status changes from Down to Waiting for Launch to
Powering Up, the console can be started by selecting the Console button (icon of a monitor)
or selecting the right mouse button menu option Console: VNC.

The installation should progress like a standard Windows install: initial interaction, copying
files, reboot, setup, and another reboot. If using a VirtIO disk, the driver loads automatically
from the virtual floppy. When logging into the system, the OS prompted for the second CD.
This was mounted by pressing the right mouse button, choosing the Change CD option, and
selecting the appropriate CD from the list presented. At this point the VM's Display Properties
were changed to allow a higher resolution, which the Console window resized to
accommodate. Right-clicking the title bar of the console window and checking the Show
Toolbar option provides, amongst other features, a button to send Ctrl+Alt+Del to the guest.
If the VirtIO interface was used, the network driver must be installed before applying the
relevant Windows Updates. The driver can be installed using various methods: it can be
loaded directly from either the VirtIO virtual floppy or the VirtIO driver ISO image, and it is
also installed as part of the guest tools installation.
The ISO Uploader process demonstrated in Section 8 included the image for the RHEV
guest tools. This should be mounted on the guest by selecting the guest in the RHEV
Manager Virtual Machines tab and then selecting Change CD in the right mouse button option
list. When Change CD is selected, a menu of uploaded ISO images and an Eject option is
presented. Select the previously uploaded RHEV tools ISO image, in the form of rhevm-
guest-tools-#.#_#####.iso. With the RHEV tools CD mounted, the RHEV-Application
Provisioning Tool install file can be found in the top-level directory. While the tools could have
been installed by selecting the RHEV-ToolsSetup executable, the Application Provisioning
Tool automatically installs or upgrades the tools when the VM is booted with a tools CD
mounted.

10 Resource Management
10.1 Configuring a Data Center
The following steps were followed to create and activate a new data center:
• Create the data center
• Create a cluster
• Add/move a host into the data center
• Add data store
• Fibre Channel
• iSCSI
• Attach storage domain

The procedure to add an ISO Library is not shown here because it was demonstrated as
part of Section 8. While a data center can make use of an ISO Library, it is not required
because guests can be installed using the network.

10.1.1 Create the Data Center


To create a new data center, select the New button in the Data Centers tab. Input a Name
and optionally a Description. All the Data Storage Domains for a data center must be of the
same type, which is selected among 'NFS', 'FCP', and 'iSCSI' in the Type pull down menu.
Pressing OK completes the data center creation.

While the data center now exists, several steps are required for it to become operational.
The New Data Center – Guide Me dialog box lists the steps to activate the data center. While
some steps use the Guide Me buttons, the user should be able to explore the interface to
discover other methods of performing the same actions.

10.1.2 Create a Cluster
The first button in the New Data Center – Guide Me dialog box is Configure Clusters. After
selection, the New Cluster dialog box displays. A Name must be specified, while a description
is optional. The Memory Over Commit option can be changed from None to either Server
Load or Desktop Load; the amount of over commit represented by each choice is
configurable using the Configuration Tool. The CPU Name must be compatible with the
capabilities of the systems placed into the cluster. In this case, one of the systems in
the Default cluster is moved into this new cluster. These systems use the 64-bit Intel with NX
setting; the other options are Intel Pentium III, 64-bit Intel, 64-bit AMD, and 64-bit AMD with
NX.

10.1.2.1 Cluster Policy Setting


The Guide Me dialog does not provide buttons for the next few steps; therefore, select Configure
Later. This step is optional, but provides settings for better utilization of the hosts in a cluster.
The cluster policy relates to load balancing. Note that for all settings, the host selected to run
new VMs is the host with the lowest CPU utilization in the cluster. The options are:

None
RHEV Manager performs no load balancing on running VMs. The user can use migration
to control which VMs run on which hosts.

Even Distribution
RHEV Manager migrates a virtual machine to the host with the lowest CPU utilization
when the current host's utilization has been over the set Maximum Service Level for the set
time. This process can repeat until the highly loaded host's utilization reports below the set
maximum.

Power Saving
In addition to migrating virtual machines from a host that is over the set Maximum Service
Level, RHEV Manager migrates the virtual machines off a host that has
reported utilization below the Minimum Service Level, to other hosts that are reporting
utilization below the Maximum Service Level. This allows power-saving features on the
evacuated hosts to conserve power. If utilization later increases over the maximum,
virtual machines are potentially migrated back to the previously evacuated host because it is
likely to have the lowest CPU utilization.
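The Even Distribution decision described above can be sketched as a short script. This is an illustration only, not RHEV Manager code; the host names, utilization figures, and the 75% Maximum Service Level are hypothetical.

```shell
#!/bin/bash
# Illustrative sketch of the Even Distribution check: a host over the
# Maximum Service Level sheds a VM to the cluster host with the lowest
# CPU utilization. All values below are hypothetical.
MAX_LEVEL=75                         # Maximum Service Level (%)
hosts="host1:82 host2:40 host3:55"   # host:cpu-utilization pairs

# Find the host with the lowest CPU utilization (the migration target).
lowest_host=""; lowest_util=101
for entry in $hosts; do
    util=${entry#*:}
    if [ "$util" -lt "$lowest_util" ]; then
        lowest_util=$util
        lowest_host=${entry%:*}
    fi
done

# Any host over the maximum migrates a VM toward that target.
for entry in $hosts; do
    host=${entry%:*}; util=${entry#*:}
    if [ "$util" -gt "$MAX_LEVEL" ] && [ "$host" != "$lowest_host" ]; then
        echo "$host is over ${MAX_LEVEL}%: migrate a VM to $lowest_host"
    fi
done
```

In the Power Saving case, the same sweep would additionally evacuate hosts reporting below the Minimum Service Level so they can be powered down.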

In the Cluster tab, select the cluster in which the load balance policy is to be set. In the
Details pane, select the Edit button in the Policy tab. Adjust the desired policy, service levels,
and interval. Select OK.

10.1.3 Relocating an Existing Host
The Guide Me, which was previously dismissed, would allow the adding of a new Red Hat
Enterprise Linux host, while the other options are to add a new RHEV-H or use an existing
host. The latter is explained in this section.
In the Host tab, select the system to be moved into the new cluster. Select the Maintenance
button or the right mouse button Maintenance option. Once the host has successfully
transitioned into Maintenance mode, select the Edit button or right mouse button option.
Select the appropriate Host Cluster and press Save. Select the Activate button or right mouse
button menu option to start the host in its new cluster.

10.1.4 Create New Storage Pool


While the context of this section is creating a storage pool for a new data center, the same
steps can be used to create additional storage pools for existing data centers.
While previous sections demonstrated the steps to add an NFS-based storage pool, this
section covers the two remaining supported storage protocols, Fibre Channel and iSCSI.
While the user could be guided through completing the activation of the data center by
pressing the Guide Me button, these examples present the alternative method.

10.1.4.1 Fibre Channel


Prior to interacting with RHEV-M to add the Fibre Channel based storage pool, it is assumed
that a LUN has been presented to all the hosts of the corresponding data center. It is a best
practice to limit access to this LUN to only the hosts of the data center. Secondly, the LUN
should be clean, i.e., have no data or metadata on it.
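A LUN that previously held a file system or storage-domain metadata can be cleaned by zeroing its first megabytes, where partition tables and metadata reside. The sketch below runs against a scratch file standing in for the device so it is safe to execute; on a real host the target would be the LUN's block device (a /dev/mapper path, for example), and the dd command is destructive.

```shell
# Scratch file standing in for the LUN; substitute the real block device
# (destructive!) on an actual host.
LUN=./scratch-lun
dd if=/dev/urandom of="$LUN" bs=1M count=4 2>/dev/null  # simulate stale data

# Zero the first 4 MB, where partition tables and metadata would live.
dd if=/dev/zero of="$LUN" bs=1M count=4 conv=notrunc 2>/dev/null

# Verify the region now reads back as all zeroes.
if cmp -s -n $((4*1024*1024)) "$LUN" /dev/zero; then clean=yes; else clean=no; fi
echo "first 4 MB clean: $clean"
```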
In the Storage tab, select New Storage either by its button or the right mouse button option,
which displays the New Storage dialog box. First, verify that a host from the data center with
which the storage pool is to be associated has been selected in the Config. Host field. Set the
Type to FCP. Supply a Name. If at first the LUN does not appear in the Discovered LUNs
area, either Cancel the New Storage window and try again, or temporarily select Use
Preconfigured Domain and then switch back to Build New Domain. Select the checkbox
associated with the LUN and press Add, which moves the LUN to the Selected LUNs area.
Select OK to complete the storage pool creation.

10.1.4.2 iSCSI
Similar to adding a Fibre Channel storage pool, an iSCSI target must be presented to the
hosts of the data center. Appendix D shows how to present a LUN from a Red Hat Enterprise
Linux system. Again, the LUN should be clean, i.e., have no data or metadata on it.
In the Storage tab, select New Storage either by its button or the right mouse button option,
which displays the New Storage dialog box. First, verify that a host from the data center with
which the storage pool is to be associated has been selected in the Config. Host field. Set the
Type to iSCSI. Supply a Name. Verify that the Build New Domain option is selected, then
press Connect to Target. The Connect to Targets dialog box displays; provide the IP Address
of the iSCSI target's server and, if any authentication is in use, configure it appropriately.
Then press Discover. The LUN appears in the Discovered Targets area. Log in using either
the Login to All button or the Login button next to the appropriate target; once logged in, the
buttons gray out. Select OK. With the New Storage window regaining focus, select the
checkbox associated with the LUN and press Add, which moves the LUN to the Selected
LUNs area. Select OK to complete the storage pool creation.

10.1.5 Attach Storage Domain
The final step in making the new data center active is attaching the storage pool to the data
center. Select the appropriate data center in the Data Centers tab, then select the Attach
Domain button in the Storage tab of the Details pane. This displays the Attach Storage dialog
box. Select the checkbox corresponding to the newly created storage pool, then select OK.
The Status of the data center changes to Contend and then to Up.

10.2 Creating Additional Networks
Additional networks may be required in the environment for various reasons, such as
establishing a DMZ, a private interconnect, or a storage network.
The steps required to create networks for guest usage are as follows:
• Create new logical network in the data center
• Attach Logical Network to cluster
• Optionally attach a host network interface to the logical network
• Assign network to virtual machine
• Configure network in virtual machine

10.2.1 Data Center Logical Network


In the Data Centers tab, select the data center to which the new logical network is to be
added. In the Details pane, select the New button on the Logical Networks tab. In the New
Logical Network dialog box, only the Name is required. Select OK after providing the desired
input.

10.2.2 Cluster
At the top level, select the Cluster tab. In the Details pane for a selected cluster, select the
Manage Networks button in the Logical Networks tab. In the Manage Networks window, the
data center's logical networks are listed, excluding rhevm. Select the networks to be
assigned to the cluster by checking the corresponding checkbox(es) and selecting OK. Each
selected network is listed with a Non Operational status, with instructions in the Todo column
indicating what steps are required for it to become operational.
Additionally, when multiple networks are configured for a cluster, selecting a network and
pressing the Set as Display button directs Spice and VNC display traffic to use the selected
interface.

10.2.3 Hosts
For each node of the cluster with which the Logical Network was associated, the network
interface must be configured. In the main Hosts tab, select a host in the cluster. In the
Network Interfaces tab of the Details pane, select the controller this system uses to connect
with the other hosts in the cluster on this logical network. Selecting the Edit button presents
the Edit Network Interface dialog box. Verify that the Network pull-down menu shows the
correct logical network name. If a network address is provided, this host has connectivity to
the other nodes on this network, but an address is not required for guest connectivity.
Pressing OK dismisses the dialog box and updates the list of Network Interfaces. If the results
are correct, press the Save Network Configuration button. Once this process has been
repeated for all hosts in the cluster, the cluster's status in the Details pane's Logical Networks
tab changes to Operational.

10.2.4 Virtual Machine
Adding a network to a virtual machine requires that the guest be shut down. In the Virtual
Machines tab, select the down guest to which a network is to be added. In the Network
Interfaces tab of the Details pane, select the New button. In the New Network Interface dialog
box, verify that the Network is set to the virtual network and that Type is set to Red Hat VirtIO.
Select OK.

10.2.5 Operating System Configuration


Upon booting the virtual machine, follow the guest OS's method for configuring the new
network interface, as would be performed for a non-virtualized system.
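For a Red Hat Enterprise Linux guest, for example, this amounts to adding an ifcfg file for the new VirtIO interface. The sketch below writes the file to the current directory for illustration; on a real guest it belongs in /etc/sysconfig/network-scripts/, and the device name and addresses are placeholders.

```shell
# Hypothetical static configuration for the new NIC (eth1 assumed).
# Real location: /etc/sysconfig/network-scripts/ifcfg-eth1
cat > ./ifcfg-eth1 <<'EOF'
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.100.10
NETMASK=255.255.255.0
ONBOOT=yes
EOF
```

Bringing the interface up with ifup eth1 (or restarting the network service) then completes the configuration.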

10.3 Additional Data Storage


Previous steps explained how to create master data storage pools. The same procedure can
be used to create secondary storage pools. Secondary storage pools provide additional
space for a data center; remember that all data storage in the same data center must be of
one type. The storage of existing virtual machines and templates can be moved or copied
from the master to secondary storage.

11 Managing Virtual Machines
11.1 Definition
When creating a new server, a user enters values to define the new virtual machine. The
majority of these values can be modified by editing the virtual machine. To edit, select the
guest in the Virtual Machines tab, then select either the Edit button or the right mouse button
option. This displays the Edit Virtual Machine dialog box. See Table 1 for an explanation of
the fields.
The Data Center and Template cannot be modified. The Storage Domain should be changed
using the Move option. The Memory Size, Number of CPU cores, Host Cluster, and Time
Zone can only be modified when the virtual machine is in the Down state. Host Cluster
options are limited to clusters in the same data center. The remaining options can be modified
whether the guest is up or down.

11.2 State
State refers to the starting and stopping of existing virtual machines.

11.2.1 Starting Virtual Machines


11.2.1.1 Run
Selecting the Run button or right mouse button menu option on a shut down virtual machine is
the virtual equivalent of powering up a physical machine.

Run is also used to resume a VM that has a paused or suspended status.

11.2.1.2 Run Once


While the Run option starts the selected VM, Run Once displays a window with several
options, detailed in Table 4, which allow a fine control over the VM. The options are only in
effect for the specified run and will not persist after the VM shuts down.

11.2.2 Shutting Down Virtual Machines


All of the methods to shut down close any connected consoles.

11.2.2.1 Guest
The preferred way to shut down a virtual machine is by using the guest's OS method of
shutting down.

11.2.2.2 Shutdown
The Shut down button (red square) and right mouse button menu option contact the guest to
attempt an orderly shutdown. A confirmation dialog box displays. Additionally, for a
Red Hat Enterprise Linux guest, a dialog box displays on the console requiring the user to
confirm the action. A second shutdown request at the RHEV Manager acts like a stop.

11.2.2.3 Stop
The right mouse button menu option of Stop immediately shuts down the system. This is
similar to removing power from the VM and could have adverse side effects.

11.2.2.4 Suspend
An additional method is to use the Suspend button (two vertical red rectangles) or right
mouse button menu option. Prior to shutting down, it saves the running state of the system
and changes the status to Suspend.

11.2.3 Rebooting Virtual Machines


The guest OS method is used to reboot the virtual machine. The console remains available
throughout the reboot.

11.3 Migration
RHEV performs live migrations, providing minimal downtime and maintaining application
state. A live migration keeps the guest domain active while its memory image is duplicated to
the destination host prior to stopping on the original host.

11.3.1 User Controlled


An operator can change the host on which a VM runs within a cluster by selecting either
the Migrate button or the right mouse button menu option. After selecting Migrate, the Migrate
Virtual Machine(s) dialog box allows the operator to select the destination host or
leave the default, in which case the host is selected automatically. After selecting OK, the
guest's status changes to Migrating From and the guest undergoes a live migration. Upon a
successful migration the status returns to Up and the Host field updates.

11.3.2 Cluster Policy
The RHEV Manager automatically migrates virtual machines depending on CPU utilization
when the Cluster Policy has been set to Even Distribution or Power Saving. For details on the
Cluster Policy, see Section 10.1.2.1.

11.3.3 Maintenance
One of the steps in preparing a host for maintenance mode is migrating any active VMs to
other hosts. Migrated VMs do not automatically migrate back once the previous host becomes
active, even if configured as the default host.

11.4 Snapshots
Snapshots allow an operator to preserve the image of a virtual machine at a point in time.
The actions available for a virtual machine's snapshots are as follows:
• create – save the image of a virtual machine
• preview – temporarily restore a previously saved image to a virtual machine
• commit – make a previewed snapshot permanent
• undo – revert from a previewed image to the VM's latest state
• delete – remove a saved snapshot
The creation of a snapshot converts the format of a disk to COW and the allocation to
Sparse, if not already in this configuration.
All snapshot actions can be performed only when the corresponding virtual machine is shut
down.
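The effect of Sparse allocation can be seen with any file system: blocks are allocated only when written, so the apparent size and the space actually consumed differ. The following illustration uses a plain file; it is not how RHEV manages its images, only a demonstration of the concept.

```shell
# Create a file with a 100 MB apparent size but no allocated data blocks.
img=./sparse-demo.img
truncate -s 100M "$img"
apparent=$(stat -c %s "$img")                                 # size the guest would see
allocated=$(( $(stat -c %b "$img") * $(stat -c %B "$img") ))  # space used on disk
echo "apparent: $apparent bytes, allocated: $allocated bytes"
rm -f "$img"
```

Writes to the file would allocate blocks on demand, growing the on-disk usage toward the apparent size.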

11.4.1 Create
To create a snapshot, select the desired guest in the Virtual Machines tab, then select the
Create button in the Snapshots tab of the Details pane. A Create Snapshot dialog box
appears and prompts the user to supply a Description. Press OK to create the snapshot. The
VM's image is locked during the creation.

11.4.2 Preview
The Snapshots tab for a VM displays a calendar on the left side of the Details pane. The days
on which snapshots were created are displayed in bold. When a date is selected, the
snapshots for that date are displayed to the right of the calendar. Once a snapshot is selected
from this list and the guest is shut down, the Preview button becomes active. Selecting the
Preview button applies the saved snapshot to the VM.
Only the disks associated with the guest at the time of the snapshot creation are changed.

11.4.3 Commit
Selecting the Commit button makes the previewed image the current image for the virtual
machine; any snapshots created after it are removed.

11.4.4 Undo
Selecting the Undo button reverts the guest from the previewed image to its most recent state.

11.4.5 Delete
The Delete button can be selected only when there is no snapshot in Preview. Select the date
and snapshot prior to pressing the Delete button.

11.5 Templates
Templates are used to quickly provision a virtual machine from a previously configured virtual
machine. Time is saved by not having to install the OS and applications from media or
configure the system for operation.

Templates can be changed in much the same way a virtual machine can. Edit changes the
default definition of a template. Network interfaces can be edited, added, or removed.
Templates can only be used within the same storage domain in which the source virtual
machine resides.

11.5.1 Preparation
Prior to creating a template, the system can be put into a state where it can be re-configured
at the next boot.

11.5.1.1 Red Hat Enterprise Linux


For Red Hat Enterprise Linux guests, sys-unconfig prepares the guest to be reconfigured
on the next boot and then shuts it down. Upon boot, the user is prompted for keyboard
selection, root password, network device and settings, timezone, authentication
configuration, and automatic service startup. Preparing the Red Hat guest for reconfiguration
is optional.
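In essence, sys-unconfig flags the root file system for reconfiguration and powers the system off; the firstboot machinery re-runs the setup screens when it sees the flag. A sketch of the equivalent steps, run against a scratch directory rather than the real root:

```shell
# Scratch directory standing in for the guest's root file system.
root=./guest-root
mkdir -p "$root"

# sys-unconfig's essential action: create the /.unconfigured flag file...
touch "$root/.unconfigured"

# ...and power the guest off (commented out here for obvious reasons).
# /sbin/halt -p

ls -A "$root"
```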

11.5.1.2 Windows
Sysprep is used to prepare a Windows system for reconfiguration. Different versions of
Windows may deviate in the Sysprep process. The procedure documented here was
successful for a Windows Server 2003 guest.
On the Windows server, a directory is required for extraction of the utilities; C:\sysprep was
created. Mount the installation media and locate Deploy.cab in \Support\Tools. Extract the
contents of the cab file to the previously created directory. Execute sysprep.exe in the same
directory. Set the checkbox adjacent to Don't reset grace period for activation, verify the
Shutdown Mode is set to Shut down, and press the Reseal button. The guest shuts down
when complete.

11.5.2 Creating Templates
Templates can only be created from virtual machines that are down.
Select the system to be copied into a template in the Virtual Machines tab. Select
either the Make Template button or the right mouse button menu option. The New Template
dialog box appears. A Name must be supplied, while a Description is optional. The Number of
CPU cores, Memory Size, and Operating System are adjustable. The Host Cluster can be any
cluster within the data center. Selecting OK locks the image while it is copied into a
template.

11.5.3 Using Templates
Templates are used when creating virtual machines. In Section 9, the Blank template
provided on all storage domains was used. The Blank template requires the use of installation
media, which is not required when using templates created from previously installed guests.

11.5.4 Copy Templates


A template must be in the same storage domain as the virtual machine to be created. An
existing template can be copied to other storage domains within the data center. In the
Templates tab, select the template to be copied. Select Copy using either the button or right
mouse button option.

11.6 Resources
The resources available to a virtual server are:
• removable media
• virtual disks
• network interfaces

11.6.1 Removable Media


Beyond the use of Run Once to mount a floppy or CD/DVD at the boot of a guest, the RHEV
Manager provides a method to change CD/DVDs on the fly. When a virtual machine has been
selected, the right mouse button menu includes a Change CD option. Selecting this presents
all the uploaded ISO images as well as the option to eject any connected CD/DVD. A selected

CD/DVD is immediately presented to the corresponding guest.

11.6.2 Virtual Disks


A user can perform the following actions related to virtual disks:
• Change the storage domain of the existing disks
• Edit an existing disk
• Add additional disks
• Remove existing disks
All disks of a virtual machine belong to the same storage domain.

11.6.2.1 Moving the Storage Domain


The storage for a virtual machine can be moved to other storage domains within the data
center. The virtual machine must be shut down. In the Virtual Machines tab, select the guest
whose storage is to be moved. Select Move using either the button or the right mouse button
option. This displays the Move Virtual Machine dialog box, which lists the other domains in
the data center. Select the desired target and press OK.

11.6.2.2 Edit
Not all the settings of an existing disk can be changed; only Interface, Wipe after delete, and
Is bootable can be modified, and only when the guest is shut down. To make a change, select
the guest in the Virtual Machines tab. Select the disk to change and then press the Edit button
in the Virtual Disks tab of the Details pane. This displays the Edit Virtual Disk dialog box.
After applying any desired changes, select OK.

11.6.2.3 Add
A new virtual disk can be added to a down guest by selecting the New button in the Virtual
Disks tab in a guest's Detail pane. The options of the New Virtual Disk dialog box were
described in Table 3. Upon boot of the guest, the new disk can be configured for use.

11.6.2.4 Remove
Prior to shutting down a guest to remove a virtual disk, verify the disk is no longer in use by
the guest's OS. After the virtual machine has shut down, select the disk to delete and press
the Remove button in the Virtual Disks tab in the Details pane. A confirmation dialog box
appears. Complete the removal by selecting OK.

11.6.3 Network Interfaces


Changes to a guest's network interfaces should be preceded or followed by the
corresponding changes in the guest's OS.
Any changes to network interfaces can only be performed when the guest is down.

11.6.3.1 Edit
To make a change to an existing network interface, select the guest in the Virtual Machines
tab. Select the interface to change and then press the Edit button in the Network Interfaces
tab of the Details pane. This displays the Edit Network Interface dialog box. Any of the values
can be changed. After applying the desired settings, select OK.

11.6.3.2 Add
A new virtual network interface can be added to a down guest by selecting the New button in
the Network Interfaces tab of a guest's Details pane. The options in the New Network
Interface dialog box were described in Table 2. After setting the fields as desired, select OK.

11.6.3.3 Remove
After the guest has shut down, select the interface to be deleted and then press the Remove
button in the Network Interfaces tab of the Details pane. A confirmation dialog box appears;
complete the removal by selecting OK.

12 Managing Hosts
The Host tab provides several controls for managing a host machine, whether it is a RHEV-H
or Red Hat Enterprise Linux system. The Approve button was used in Section 6.1 to allow a
RHEV-H host to be managed by this RHEV Manager. The New button was used in Section
6.2 to add a Red Hat Enterprise Linux host. The Edit button was previously used to enable
power management and transition an existing host to another cluster. Editing a host also
allows changes to the name, address, and communication port.
Additional capabilities include:
• Maintenance Mode
• Power Management states
• Fencing

12.1 Maintenance Mode


As its name suggests, placing a host into maintenance mode prepares the host for further
changes such as removal or a change of cluster. One of the steps of a host transitioning to
maintenance is the migration of any active virtual machines to other available cluster
members. If the host cannot migrate the virtual machines, it remains with a status of
Preparing for Maintenance. An operator can shut down active VMs to allow the host to finish
the transition. Another step is freeing the host from being the SPM of a data center.
Placing a host into Maintenance can be achieved by selecting the host in the Host tab and
either pressing the Maintenance button or selecting the right mouse button option. The
transition is complete when the host displays a status of Maintenance.

12.2 Power Management


The Power Management button of the Hosts tab has three options, which use the power
management enabled for a host:
• Restart – host is powered off, then powered on
• Start – host is issued a power on command
• Stop – host is issued a power off command

The restart and stop options can only be issued to a host in Maintenance. The start command
can only be issued to a host that is shut down.

12.3 Manual Fence


When the RHEV Manager is unable to determine the status of a host, that host is typically
fenced, i.e., not allowed to interact with other members of the cluster – typically done by a
power cycle – in order to free any resources or guests allocated to that host so they can be
moved to other hosts. The RHEV Manager uses the Power Management enabled on a host
to fence the host. If a host does not have functional power management, user intervention is
required through the Manual Fence button. If a host's status displays Non Operational, which
can happen with a rhevm network failure, the host should be shut down or restarted. If the
resources have not been recovered, select the node in the Hosts tab and press the Manual
Fence button. A dialog box forces a confirmation and informs the operator that the host
should have been shut down or restarted. Selecting OK frees the resources used by the host.

12.4 Remove
A host in Maintenance can be removed from the RHEV-M by selecting the host and choosing
either the Remove button or right mouse button option.
A RHEV Hypervisor host that has been removed can be added again by approving it after a
reboot. A Red Hat Enterprise Linux node can be added the same way it was initially, using the
New button in the Hosts tab.

12.5 Upgrade
When the RHEV Manager identifies that a new version is applicable to a host, the General tab
in the Details pane for a host with a status of Up has an Alert with the message "A new RHEV-
Hypervisor version is available. Upgrade". The Upgrade link is not selectable. When a host is
in Maintenance, an Alert stating "Host is in maintenance mode, you can Activate it by
pressing the Activate button. If you wish to upgrade or reinstall it click here" is displayed,
independent of whether a newer hypervisor is available or not.

12.5.1 Red Hat Enterprise Virtualization Hypervisor Host


Prior to upgrading a RHEV-H host, the ISO image for the new version must be placed in
the correct location on the RHEV Manager. To download it from Red Hat Network, log in to
rhn.redhat.com. Choose the Channels tab near the top of the page. In the Filter by Product
Channel pull-down, select Red Hat Enterprise Virtualization Hypervisor and press the Filter
button. Expand the Red Hat Enterprise Virtualization channel when listed and select the
x86_64 link next to Red Hat Enterprise Virtualization Hypervisor 5. Select the Downloads link
beneath the page title. Select the RHEV-H Image link to download a zipped folder. Extract the
rhev-hypervisor.iso file into My Computer => C: => Program Files => RedHat =>
RHEVManager => Service => RHEV-H Installer. It is recommended to rename the file to
include version information for future reference.
With the file in place, transition the host to Maintenance mode. Select the here link in the
Alert section of the General tab for the host. This displays the Install Host dialog box. In the
RHEV-H ISO Name pull-down, select the proper version. After pressing the OK button, the
host state cycles through Installing, Reboot, Non Responsive, and Up.

12.5.2 Red Hat Enterprise Linux Host


A Red Hat Enterprise Linux host uses its RHN subscription to obtain the software for
upgrading. While the RHEV Manager can upgrade just the hypervisor software, it is
recommended to upgrade all the software on the system at the same time. Start the upgrade
by transitioning the host to Maintenance. With the host in maintenance, log into the host and
upgrade using yum upgrade or the Software Updater user interface. After all software has
been updated, and the system rebooted if needed, Activate the system to bring it out of
maintenance mode.

13 Red Hat Enterprise Virtualization
Manager Navigation Shortcuts
The Search bar provides an easy method of identifying targeted information. The auto-
completion feature guides a user in the construction of a valid query. The list of potential
completions appears as part of the search combo box as the query is input.

The general form of any query follows the syntax:


result-type: [{criteria}] [sortby sort_spec]
Result-type identifies the object type that is displayed for items matching the query.
Following are the options related to servers (case insensitive, pluralization optional):
• Vms
• Hosts
• Templates
• Events
• Clusters
• Datacenter
• Storage
The optional criteria consist of one or more comparisons that can be joined with
'and' or 'or'. Each comparison has the following syntax:
[<object-type>.]<property> <operator> <value>
object-type: optional, can be any of the possible result-types
property: object-type dependent characteristic

operator: possible choices are <, >, <=, >=, =, or !=
value: string, integer, enumerated type, or date - wildcards allowed for strings
The optional sort_spec identifies a property used to sort the results. 'asc' or 'desc' can
be used to determine the order.
Some examples of search queries:
• DataCenter: Clusters.initialized = true sortby type
List data centers that contain initialized clusters, sorted by the data center type
• Events : severity > normal and time > yesterday sortby severity
List events with a severity higher than normal (warning, error) which have occurred
since yesterday sorted by severity
• Vms: status != down and status != up
List virtual machines which are in transition, neither up nor down
• Hosts : Vms.memory >= 4096 and cpu_model = “Intel*” sortby cluster desc
List Intel-based hosts that have active guests with 4 GB or more of memory, sorted by
cluster in descending order
All queries can be saved as bookmarks, which allows quick access for future use. To create a
bookmark from the current search query, press either the New button in the Bookmarks pane
or the star after the search bar. This displays the New Bookmark dialog box, in which a Name
must be provided; the current Search string can be modified before saving by pressing
OK.
A saved bookmark can be recalled by selecting the desired query in the Bookmarks pane.

14 Conclusion
This paper has demonstrated the ease of performing the following activities using Red Hat
Enterprise Virtualization for Servers.
• Installing Red Hat Enterprise Virtualization Manager for Servers
• Installing a Red Hat Enterprise Virtualization Hypervisor host
• Configuring a Red Hat Enterprise Linux system with KVM as part of the Red Hat
Enterprise Virtualization environment
• Making OS images available for installation
• Creating Linux and Windows guests
• Managing
   • Resources
   • Guests
   • Hosts
The reader should now be prepared to deploy their own Red Hat Enterprise Virtualization
environment.

Appendix A: iptables
A Red Hat Enterprise Linux host with an active firewall requires access to specific ports.
Some of the access is required for RHEV functionality; additional access is required by the
guests, depending on their needs. This paper does not address all potential uses of a
guest and the corresponding firewall configuration. The configuration detailed here provides
the access required for RHEV functionality.
Access may already be allowed for ICMP and SSH; however, the two commands below
open each, respectively.
# iptables -I RH-Firewall-1-INPUT -p icmp -m icmp --icmp-type any -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport
22 -j ACCEPT

The following command opens port 54321, the default port for communication with the RHEV
Manager.
# iptables -I RH-Firewall-1-INPUT -p tcp --dport 54321 -m state --state NEW -
j ACCEPT

This command opens a wide range of ports that can be used for the consoles of virtual
machines.
# iptables -I RH-Firewall-1-INPUT -p tcp -m multiport --dports 5634:6166 -j
ACCEPT

The command below allows DHCP functionality for the guests through the hosts.
# iptables -I RH-Firewall-1-INPUT -p udp -m state --state NEW --dport 67:68 -
j ACCEPT

This command opens the ports used for migrating guests between hosts.
# iptables -I RH-Firewall-1-INPUT -p tcp --dport 49152:49216 -j ACCEPT
This last command saves the iptables configuration so the changes made persist through
reboots.
# service iptables save
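Once saved, the presence of the RHEV rules can be double-checked without re-running iptables. The following sketch is not part of the procedure above; the `check_saved` helper name is hypothetical, and it assumes the rules were saved to /etc/sysconfig/iptables as shown.

```shell
#!/bin/sh
# Hypothetical helper: verify that rules for the given ports appear in a
# saved iptables rule file (first argument), reporting any that are missing.
check_saved() {
    saved=$1; shift
    missing=0
    for p in "$@"; do
        # "dports?" matches both the --dport and multiport --dports forms
        grep -Eq -- "--dports? $p" "$saved" || { echo "missing rule for $p"; missing=1; }
    done
    return $missing
}
```

For example, `check_saved /etc/sysconfig/iptables 22 54321 5634:6166 49152:49216` exits non-zero if any of the rules created above did not persist.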
Appendix B: USB flash drives
The following script can be used to identify USB-based removable media attached to a system. This can be helpful when determining the drive on which to burn a RHEV-H Live image.
#!/bin/bash
devices=$(hal-find-by-property --key "storage.bus" --string "usb")
for dev in $devices
do
    available=$(hal-get-property --udi "$dev" --key "storage.removable.media_available")
    if [ "$available" = "true" ] ; then
        vendor=$(hal-get-property --udi "$dev" --key "info.vendor")
        prod=$(hal-get-property --udi "$dev" --key "info.product")

        partitions=$(hal-find-by-property --key "info.parent" --string "$dev")

        for a in $partitions
        do
            drive=$(hal-get-property --udi "$a" --key "block.device")
            size=$(hal-get-property --udi "$a" --key "volume.size")
            mb=$((size/1048576))
            mnt=$(hal-get-property --udi "$a" --key "volume.mount_point")
            fs=$(hal-get-property --udi "$a" --key "volume.fstype")
            echo -e "$drive \t $vendor $prod \t $mb MB \t $fs \t $mnt"
        done
    fi
done
The command outputs as much of the following as it is able to determine: device name, manufacturer info, size, file system type, and mount point.
# sh /pub/spr/usbs.sh
/dev/sde1 Kingston DataTraveler G2 7657 MB ext2 /media/disk-1
/dev/sdd Ut165 USB2FlashStorage 964 MB vfat /media/disk
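The same information can be approximated without HAL by reading sysfs directly. The sketch below is not from the paper; the `list_removable` function name is hypothetical, and the directory argument exists only so the function can be exercised against a test tree (it defaults to /sys/block, where the kernel publishes `removable` and `size` — in 512-byte sectors — for each block device).

```shell
#!/bin/bash
# Hypothetical sketch: list removable block devices using sysfs attributes.
list_removable() {
    sysfs=${1:-/sys/block}
    for dev in "$sysfs"/*; do
        [ -f "$dev/removable" ] || continue
        if [ "$(cat "$dev/removable")" = "1" ]; then
            sectors=$(cat "$dev/size")        # size in 512-byte sectors
            mb=$((sectors * 512 / 1048576))
            echo "/dev/${dev##*/} ${mb} MB"
        fi
    done
}
```

Running `list_removable` on a system with a USB flash drive attached prints a line such as `/dev/sde 7657 MB`.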
Appendix C: Configuring an NFS Server
NFS is used for the ISO library and is one of the options for storing virtual machine data. This
example configures the exports for both on a system using SELinux and iptables.
The first step in configuring the NFS server is to identify the storage space to be shared.
While a sub-directory of an existing file system can be used, this example shows the creation
of a new file system. The device /dev/cciss/c0d2 is used.
# mkfs -t ext3 /dev/cciss/c0d2
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
8962048 inodes, 17913240 blocks
895662 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
547 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424
Writing inode tables: done


Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
#
The following line was added to /etc/fstab to cause the storage to mount at each boot with the
proper options.
/dev/cciss/c0d2 /rhev-data ext3 noatime,defaults 0 0
The following commands create a mount point for the exported storage and mount the file system.
# mkdir /rhev-data
# mount /dev/cciss/c0d2 /rhev-data
The ports used by several of the NFS daemons typically vary from boot to boot. They are
pinned to specific ports by uncommenting the following lines in /etc/sysconfig/nfs.
RQUOTAD_PORT=875
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662
STATD_OUTGOING_PORT=2020
Verify that NFS has been started on the server.
# ps -ef | grep nfs
root 8419 8208 0 11:04 pts/7 00:00:00 grep nfs
#
There are no NFS daemons listed in the process table, so start them for this boot. Next,
configure the daemons to start on subsequent boots.
# service nfslock restart
# service nfs start
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS daemon: [ OK ]
Starting NFS mountd: [ OK ]
# chkconfig nfs on
The ports for the portmapper and nfs daemons are well known and defined. Verify that the
remaining daemons are using the ports that were defined.
# rpcinfo -p
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100011 1 udp 875 rquotad
100011 2 udp 875 rquotad
100011 1 tcp 875 rquotad
100011 2 tcp 875 rquotad
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100021 1 udp 32769 nlockmgr
100021 3 udp 32769 nlockmgr
100021 4 udp 32769 nlockmgr
100021 1 tcp 32803 nlockmgr
100021 3 tcp 32803 nlockmgr
100021 4 tcp 32803 nlockmgr
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100005 1 udp 892 mountd
100005 1 tcp 892 mountd
100005 2 udp 892 mountd
100005 2 tcp 892 mountd
100005 3 udp 892 mountd
100005 3 tcp 892 mountd
100024 1 udp 662 status
100024 1 tcp 662 status
#
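The comparison against the values set in /etc/sysconfig/nfs can also be automated. This is a sketch, not part of the paper's procedure; the `check_port` function name is hypothetical. It relies only on the column layout of `rpcinfo -p` shown above (program, version, protocol, port, service).

```shell
#!/bin/sh
# Hypothetical sketch: given `rpcinfo -p` output on stdin, exit non-zero if
# the named service ($1) is registered on any port other than the expected
# one ($2). Column 4 is the port, column 5 the service name.
check_port() {
    awk -v s="$1" -v w="$2" '$5 == s && $4 != w { bad = 1 } END { exit bad }'
}
```

For example, `rpcinfo -p | check_port mountd 892` succeeds only if every mountd registration uses port 892.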
The operator must open the ports the daemons are using to allow the systems that require the
NFS mount through the firewall. The commands below open all the ports in use and save the
changes; however, some ports may have been opened in previous configuration steps, so the
user may not need to issue every command listed.
# iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 111 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 2049 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 875 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 32769 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 892 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 662 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 2020 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 111 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 2049 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 875 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 32803 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 892 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 662 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 2020 -j ACCEPT
# service iptables save
Saving firewall rules to /etc/sysconfig/iptables: [ OK ]
#
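The commands above follow a single pattern, so they can also be generated with a short loop. This is a sketch rather than part of the original procedure; the `nfs_rules` function name is hypothetical, and the rules are printed for review rather than executed (pipe the output to `sh` to apply them, then save as above).

```shell
#!/bin/sh
# Hypothetical sketch: emit the NFS firewall rules instead of typing each one.
nfs_rules() {
    for proto in udp tcp; do
        for p in 111 2049 875 892 662 2020; do
            echo "iptables -I RH-Firewall-1-INPUT -p $proto -m $proto --dport $p -j ACCEPT"
        done
    done
    # lockd listens on different ports for UDP and TCP
    echo "iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 32769 -j ACCEPT"
    echo "iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 32803 -j ACCEPT"
}
```

Review the output with `nfs_rules`, then apply it with `nfs_rules | sh`.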
Specify which systems can access the NFS share by adding entries to /etc/exports for the
host and guests. Only the systems listed have access, and that access is read-write.
/rhev-data monet.lab.bos.redhat.com(rw)
/rhev-data degas.lab.bos.redhat.com(rw)
Export the directories.
# exportfs -a

The following commands create separate areas for the ISO Library and the Storage Domain.
Ownership is changed so that the vdsm user and kvm group become the owners of each
directory.
# mkdir /rhev-data/defaultISOs
# mkdir /rhev-data/defaultStore
# chown 36:36 /rhev-data/defaultISOs /rhev-data/defaultStore
The following commands restart various NFS related daemons and list the details of exported
mounts.
# service nfslock restart
Stopping NFS locking: [ OK ]
Stopping NFS statd: [ OK ]
Starting NFS statd: [ OK ]
# service nfs restart
Shutting down NFS mountd: [ OK ]
Shutting down NFS daemon: [ OK ]
Shutting down NFS quotas: [ OK ]
Shutting down NFS services: [ OK ]
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS daemon: [ OK ]
Starting NFS mountd: [ OK ]
# showmount -e
Export list for renoir.lab.bos.redhat.com:
/rhev-data degas.lab.bos.redhat.com,monet.lab.bos.redhat.com
#
Appendix D: Red Hat Enterprise Linux iSCSI Target
iSCSI is one of the protocol options for guest data storage. With a few commands, available
space on a Red Hat Enterprise Linux system can be exported for use by RHEV hosts.
The scsi-target-utils package is required on the system providing the storage. This package is
part of the cluster-storage channel.
Start the target daemon.
# service tgtd start
Starting SCSI target daemon: [ OK ]
# chkconfig tgtd on
Define a target name in the target configuration file, /etc/tgt/targets.conf. The following lines
present /dev/cciss/c0d3 using the name iqn.2009-09.com.redhat.bos.lab.renoir.
<target iqn.2009-09.com.redhat.bos.lab.renoir>
backing-store /dev/cciss/c0d3
</target>
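Typos in the target name are easy to make, so it can be worth sanity-checking the string before writing it to targets.conf. The following sketch is not from the paper; the `valid_iqn` function name is hypothetical, and the pattern reflects the common IQN form `iqn.YYYY-MM.reversed-domain[:identifier]` used above.

```shell
#!/bin/sh
# Hypothetical sketch: check that a string follows the usual IQN naming shape.
valid_iqn() {
    echo "$1" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+(:[^ ]+)?$'
}
```

For example, `valid_iqn iqn.2009-09.com.redhat.bos.lab.renoir && echo OK` prints OK for the name used in this appendix.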
If the serving system has an active firewall, the port used by iSCSI should be opened.
# iptables -I RH-Firewall-1-INPUT -p tcp --dport 3260 -m state --state NEW -j
ACCEPT
# service iptables save
The following command can be used to confirm the target's availability first on the serving
system, then on the hosts.
# iscsiadm --mode discovery --type sendtargets --portal renoir
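Each line of the discovery output has the form "portal,tpgt target-name". When scripting against several targets, the names can be extracted with a one-line filter; this is a sketch, and the `targets_from_discovery` function name is hypothetical.

```shell
#!/bin/sh
# Hypothetical sketch: print only the target names (second whitespace-separated
# field) from `iscsiadm --mode discovery` output read on stdin.
targets_from_discovery() {
    awk '{ print $2 }'
}
```

For example: `iscsiadm --mode discovery --type sendtargets --portal renoir | targets_from_discovery`.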
If errors relating to the initiator name are observed, confirm that the iSCSI initiator
utilities are installed and that the daemon is running.
# yum list iscsi-initiator-utils
Loaded plugins: rhnplugin, security
Installed Packages
iscsi-initiator-utils.x86_64 6.2.0.871-0.10.el5 installed
# service iscsid status
iscsid (pid 5712) is running...
#
Verify the initiator file is in place.
# ls /etc/iscsi/initiatorname.iscsi
ls: /etc/iscsi/initiatorname.iscsi: No such file or directory
#
If the initiator file is missing, create the file and then restart the daemon.
# echo InitiatorName=$(iscsi-iname) > /etc/iscsi/initiatorname.iscsi
# service iscsid restart
Stopping iSCSI daemon: [ OK ]
Turning off network shutdown.
Starting iSCSI daemon: [ OK ]
#