
VMware Infrastructure 3 deployment

Contents

Executive summary
  Audience
  This white paper
ESX Server pre-deployment
  Compatibility and support
    HP ProLiant servers
    HP StorageWorks SAN
    HP I/O devices
  Server configuration options
    IOAPIC table setting
    Hyper-Threading Technology
    Node interleaving
  Platform specific considerations
    HP ProLiant servers with AMD Opteron processors
  SAN configuration
    Configuring HBAs
    Configuring clustering support
    Supported SAN topologies
    MSA configuration notes
    EVA configuration notes
    XP configuration notes
    Supported guest operating systems
    Boot From SAN
Deploying VMware ESX Server 3.0
  Installation methods
    HP Integrated Lights-Out
    Scripted installation
  Installation considerations
    ATA/IDE and SATA drives
    SAN
    Boot From SAN
    Disk partitioning
Post-installation tasks
  HP Insight Management agents
    Obtaining IM agents for ESX Server
    Installing and configuring IM agents
    Silent installation
    Re-configuring the IM agents
  ESX Server configuration
Virtual Machine deployment
  Creating new VMs
    Using standard media
    Network deployment
    Using RDP
  Migrating VMs from ESX Server 2.x
  VMware Tools
    Using RDP to install VMware Tools
Using HP Systems Insight Manager
  Host management
    SNMP settings
    Trust relationship
  VM management
    Configuring HP SIM
    Windows VMs
    Linux VMs
  Troubleshooting
For more information

Executive summary
This white paper provides guidance on the deployment of a VMware Infrastructure environment based on HP server, storage, and management products. The following key technology components are deployed:

- HP ProLiant servers
- HP ProLiant Essentials software
- HP StorageWorks Storage Area Network (SAN) products
- VMware ESX Server 3.x
- VMware VirtualCenter 2.x

This white paper is not designed to replace documentation supplied with individual solution components; rather, it is intended to serve as an additional resource to aid the IT professionals responsible for deploying a VMware environment. This is the second in a series of documents on the planning, deployment, and operation of an Adaptive Enterprise based on VMware Infrastructure and HP server, storage, and management technologies. The remaining documents in this series can be found at http://www.hp.com/go/vmware.

Audience
This white paper is intended for solutions architects, IT planners, and engineers involved in the deployment of virtualization solutions. The reader should be familiar with networking in a heterogeneous environment, understand and interact with virtualized infrastructures on an on-going basis, and have a working knowledge of ESX Server and HP ProLiant Essentials software products.

This white paper


This white paper provides information on the following topics:

- Compatibility and support: identifying and configuring HP ProLiant server platforms that are certified for VMware ESX Server 3.0
- SAN configuration: configuring supported HP StorageWorks SAN arrays for connectivity with ESX Server systems
- Deploying VMware ESX Server 3.0: advanced methods for deploying ESX Server on HP ProLiant servers
- HP Insight Management agents: installing HP management tools on an ESX Server system
- Virtual Machine deployment: using conventional and advanced methods to deploy a virtual machine (VM)
- VMware Tools: deploying management tools into a guest operating system
- Using HP Systems Insight Manager: using HP Systems Insight Manager (HP SIM) to provide VM visibility

ESX Server pre-deployment


This section contains configuration steps that should be performed before you deploy VMware ESX Server.

Compatibility and support


This section details HP servers, storage, and I/O devices that have been tested and are supported by HP for ESX Server 3.0.

HP ProLiant servers
For the most up-to-date list of supported platforms and important configuration notes, refer to the support matrix at http://h18004.www1.hp.com/products/servers/software/vmware/hpvmwarecert.html.

HP StorageWorks SAN
The following HP StorageWorks SAN array systems have been certified with VMware ESX Server 3.0. For the most up-to-date list of supported arrays and important configuration notes, refer to the Storage / SAN Compatibility Guide for ESX Server 3.0 at http://www.vmware.com/pdf/vi3_san_guide.pdf.

- HP StorageWorks 1500cs Modular Smart Array (MSA1500)
- HP StorageWorks 1000 Modular Smart Array (MSA1000)
- HP StorageWorks 4000 Enterprise Virtual Array (EVA4000)
- HP StorageWorks 6000 Enterprise Virtual Array (EVA6000)
- HP StorageWorks 8000 Enterprise Virtual Array (EVA8000)
- HP StorageWorks XP128 Disk Array (XP128)
- HP StorageWorks XP1024 Disk Array (XP1024)

HP I/O devices
For the most up-to-date list of supported devices and important configuration notes, refer to the HP ProLiant option support matrix at http://h18004.www1.hp.com/products/servers/software/vmware/hpvmware-options-matrix.html.

Server configuration options


This section provides information on configuring the IOAPIC table, Intel Hyper-Threading Technology, and node interleaving for AMD Opteron-based systems. All of these options can be configured in the ROM-Based Setup Utility (RBSU). To access the RBSU, press F9 when prompted during the Power-On Self-Test (POST).

IOAPIC table setting
The IOAPIC (Input/Output Advanced Programmable Interrupt Controller) controls the flow of interrupt requests in a multi-processor system. It also affects the mapping of IRQs to interrupt-driven subsystems such as PCI or ISA devices. Full IOAPIC table support should be enabled for all HP ProLiant servers running VMware ESX Server. This option can be found in the Advanced Options menu of the RBSU and is enabled by default in current-generation ProLiant servers.

Note: Previous-generation ProLiant servers may refer to this option as MPS Table Mode.

Hyper-Threading Technology
Hyper-Threading Technology is an embedded Intel processor technology that allows the operating system to view a single CPU as two logical units, so the processor can manage multiple tasks generated by different applications. Hyper-Threading is supported by ESX Server. To enable or disable Hyper-Threading at the system level, select Processor Hyper-Threading from the Advanced Options menu in the RBSU.

Node interleaving
To optimize performance over a wide variety of applications, the AMD Opteron processor supports two different types of memory access: Non-Uniform Memory Access (NUMA) and Uniform Memory Access (UMA), or node interleaving. NUMA is enabled by default on Opteron-based ProLiant servers. To place the server in UMA mode, enable Node Interleaving from the Advanced Options menu in the RBSU. Additional details about memory access and configuration for Opteron-based ProLiant servers are provided in the next section.

Platform specific considerations


The following section contains configuration details and considerations specific to various ProLiant server lines.

HP ProLiant servers with AMD Opteron processors
As mentioned above, the Opteron processor supports two different types of memory access: non-uniform memory access (NUMA) and sufficiently uniform memory access (SUMA), or node interleaving. A node consists of the processor cores, the embedded memory controller, and the attached DIMMs. The total memory attached to all the processors is divided into 4096-byte segments. With linear addressing (NUMA), consecutive 4096-byte segments reside on the same node; with node interleaving (SUMA), consecutive 4096-byte segments reside on different, adjacent nodes.

Linear memory addressing (NUMA) defines the memory starting at address 0 on node 0 and assigns sequential addresses until the total memory on node 0 is exhausted; the memory on node 1 then continues with the next sequential address, and so on until the process is complete. Node interleaving (SUMA) breaks memory into 4KB addressable entities: on a four-node system, addresses 0 through 4095 go to node 0, addresses 4096 through 8191 to node 1, addresses 8192 through 12287 to node 2, and addresses 12288 through 16383 to node 3. Address 16384 is then assigned to node 0 again, and the process continues until all memory has been assigned in this fashion.

ESX Server currently offers NUMA support for Opteron-based systems and implements several optimizations designed to enhance virtual machine performance on NUMA systems. However, some virtual machine workloads may not benefit from these optimizations. For example, virtual machines that have more virtual processors than the number of processor cores available on a single hardware node cannot be managed automatically. Virtual machines that are not managed automatically by the NUMA scheduler still run correctly; they simply do not benefit from ESX Server's NUMA optimizations. In this case, performance may be improved by activating node interleaving.

For best performance, HP recommends configuring each node with an equal amount of RAM. Additionally, each ProLiant server may have its own rules and guidelines for configuring memory; please see the QuickSpecs for each platform, available at http://h18000.www1.hp.com/products/quickspecs/ProductBulletin.html. For more information on ESX Server and NUMA technology, refer to VMware Knowledge Base article 1570.

SAN configuration
This section contains important information for both server and SAN administrators to use when configuring ESX Server hosts for SAN connectivity.

Configuring HBAs
This section contains information on obtaining the World-Wide Port Names (WWPNs) from your Fibre Channel HBAs and configuring them for clustering support.

Obtaining the World-Wide Port Name (WWPN)
In order to configure an HP StorageWorks SAN, you will need to know the WWPN for each HBA you intend to connect to the SAN. Follow the instructions for your HBA model in Table 1 to obtain the WWPN, and write it down for later use.
Table 1. Obtaining WWPNs for HBAs

HBA      How to obtain WWPN
QLogic   The WWPN can be found by entering the QLogic Fast!UTIL utility during server POST. Select the appropriate host adapter (if more than one is present), then go to Configuration Settings -> Host Adapter Settings and look for Adapter Port Name.
Emulex   The WWPN can be found by entering the Emulex BIOS Utility during server POST. Select the appropriate host adapter (if more than one is present). The WWPN will be displayed at the top of the screen.

Configuring clustering support
Clustering your virtual machines across ESX Server hosts requires shared disks. To configure an HBA for clustering support, follow the instructions for your HBA in Table 2.

Table 2. Configuring clustering support

HBA      Configuring clustering support
QLogic   Enter the QLogic Fast!UTIL utility during server POST, then select the desired HBA. Select Configuration Settings -> Advanced Adapter Settings; ensure that the following settings are configured:
         - Enable LIP Reset is set to No
         - Enable LIP Full Login is set to Yes
         - Enable Target Reset is set to Yes
Emulex   N/A

Supported SAN topologies
All HP StorageWorks SANs are supported in both single-fabric and multi-fabric environments. Direct connect is not supported, except when using the HP StorageWorks MSA SAN Switch 2/8 (which is a true Fibre Channel switch and thus does not represent a true direct connect architecture). For more information on specific SAN topologies, refer to the HP SAN Design Reference Guide.

IMPORTANT: For high availability, multi-path capability is provided natively by ESX Server. Do not attempt to install other multipath software such as HP Secure Path or HP Auto Path.
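From the ESX Server service console, you can confirm that the native multipathing sees the expected paths to each LUN with the esxcfg-mpath utility included in ESX Server 3.0; a minimal sketch (device names in the output will vary by configuration):

%> esxcfg-mpath -l    # list each LUN, its paths, and the active multipathing policy

Each SAN LUN should report one path per fabric, with a policy such as Most Recently Used managed natively by ESX Server.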

MSA configuration notes
Before configuring an MSA1000 or MSA1500 SAN with an ESX Server host, HP recommends upgrading the array controller to an appropriate firmware version, as specified in Table 3.
Table 3. Array controller firmware levels

Array     Minimum  Recommended
MSA1000   4.48     4.48
MSA1500   4.98     5.02

For more information on upgrading the firmware and configuring the MSA, see http://h18006.www1.hp.com/storage/arraysystems.html. For proper communication between the MSA and an ESX Server host, set the connection profile for each HBA port to Linux. Follow these steps:
1. Determine the World Wide Port Name (WWPN) for each HBA port to be connected to the array.
2. Set the profile for each HBA port using either the Command Line Interface (CLI) or the Array Configuration Utility (ACU).

CLI

Provide a unique name for the connection and set the profile by typing the following command:

CLI> add connection <unique_name> wwpn=<wwpn> profile=Linux

To verify that each connection has been correctly set, type the following command:

CLI> show connections

For each connection, verify that the profile is set to Linux and that its status is Online. If there are any problems, refer to the MSA1000/MSA1500 documentation for troubleshooting guidelines.

ACU
1. Enable Selective Storage Presentation (SSP).
2. Review the list of HBAs connected to the array.
3. Assign a unique name to each connection and select Linux from the drop-down list as the desired profile.
4. Enable access to the LUNs you wish to present to the ESX Server systems.

EVA configuration notes
Before configuring an EVA SAN with an ESX Server host, HP recommends upgrading the array controller to the appropriate firmware version, as specified in Table 4.
Table 4. Array controller firmware levels

Array                      Minimum    Recommended
EVA3000 (HSV100)           VCS 3.028  VCS 4.004
EVA5000 (HSV110)           VCS 3.028  VCS 4.004
EVA4000/EVA6000 (HSV200)   XCS 5.031  XCS 5.100
EVA8000 (HSV210)           XCS 5.031  XCS 5.100

For more information on upgrading the firmware of an EVA array and on configuring the EVA, see http://h18006.www1.hp.com/storage/arraysystems.html. When adding an ESX Server system as a new host, Command View EVA may not populate all the HBAs in the WWPN drop-down list; if this occurs, the WWPN can be entered manually. The connection type must be set according to Table 5.

Table 5. Array controller connection type

Array                        Firmware   Connection Type
EVA3000, EVA5000             VCS 3.028  Custom: 000000002200282E
EVA3000, EVA5000             VCS 4.004  VMware
EVA4000, EVA6000, EVA8000    XCS 5.031  Custom: 00000000220008BC
EVA4000, EVA6000, EVA8000    XCS 5.100  VMware

Note: Red Hat Advanced Server 2.1, Red Hat Enterprise Linux 3, and SuSE Linux Enterprise Server 8 guest VM support requires using the "vmxlsilogic" SCSI emulation.

XP configuration notes
Before configuring an XP SAN with an ESX Server host, HP recommends upgrading the array controller to an appropriate firmware version, as specified in Table 6.
Table 6. Array controller firmware levels

Array           Minimum          Recommended
XP128, XP1024   21.14.18.00/00   21.14.18.00/00

For more information on upgrading the firmware of an XP array and on configuring this array, see http://h18006.www1.hp.com/storage/arraysystems.html. The host mode for all XP arrays should be set to 0x0C.

Supported guest operating systems
The following guest operating systems are supported with HP StorageWorks SAN arrays and VMware ESX Server:

Table 7. Supported guest operating systems

Microsoft Windows            SuSE Linux            Red Hat
Windows 2000 SP3 and SP4     SLES 8 SP3            RHEL 2.1 U6 and U7
Windows 2003 base and SP1    SLES 9 SP1 and SP2    RHEL 3 U4 and U5
                                                   RHEL 4 U2

Note: RHEL 2.1, RHEL 3, and SLES 8 support requires using the "vmxlsilogic" SCSI emulation.

Boot From SAN
Enabling Boot From SAN on an HP ProLiant server is a two-stage process: enabling and configuring the QLogic BIOS, and configuring the server's host boot order in the RBSU. Perform the following steps:

Configuring the BIOS
1. While the server is booting, press Ctrl+Q to enter Fast!UTIL.
2. From the Select Host Adapter menu, choose the adapter you want to boot from, then press Enter.
3. In the Fast!UTIL Options menu, choose Configuration Settings, then press Enter.
4. In the Configuration Settings menu, choose Host Adapter Settings, then press Enter.
5. In the Host Adapter Settings menu, change the Host Adapter BIOS setting to Enabled by pressing Enter.
6. Press ESC to go back to the Configuration Settings menu. Choose Selectable Boot Settings, then press Enter.
7. In the Selectable Boot Settings menu, enable the Selectable Boot option, then move the cursor to the Primary Boot Port Name, LUN: field, then press Enter.
8. In the Select Fibre Channel Device menu, choose the device to boot from, then press Enter.
9. In the Select LUN menu, choose the supported LUN.
10. Save the changes by pressing ESC twice.

Configuring the host boot order

1. While the system is booting, press F9 to start the RBSU.
2. Choose Boot Controller Order.
3. Select the primary HBA (that is, the HBA dedicated to your SAN or presented to your LUN) and move it to Controller Order 1.
4. Disable the Smart Array controller.
5. Press F10 to save your configuration and exit the utility.


Deploying VMware ESX Server 3.0


Installation methods
VMware ESX Server 3 includes both a graphical and a text-mode installer. The graphical installer is the default and recommended method for installation. When booting from the VMware installation media, you will be presented with a boot prompt at system startup; press Enter to start the graphical installer, or type esx text at the boot prompt to use the text-mode installer.

HP Integrated Lights-Out
Integrated Lights-Out (iLO) is a web-based remote management technology available on HP ProLiant servers. iLO offers complete control of the server, as if you were physically standing in front of it. The iLO Virtual Media feature offers a number of options for booting a remote machine in order to install ESX Server.
Table 8. Options for booting a remote machine

How?                                               Where?
Using a standard 1.44-MB floppy diskette           On a client machine
Using a CD-ROM                                     On a client machine
Using an image of the floppy diskette or CD-ROM    From anywhere on the network

ESX Server supports installation via iLO Virtual Media using either the physical installation CD or an ISO image (as prescribed by the ISO 9660 standard) from a client machine or the network. Virtual Media requires an iLO/iLO2 Advanced license. For more information about iLO, refer to www.hp.com/servers/ilo.

Scripted installation
Once ESX Server has been deployed on an HP ProLiant server, IT staff can use this system to automate further deployments. This is particularly useful when deploying ESX Server instances on a number of similarly configured servers. See the Installation and Upgrade Guide from VMware for more information on creating a scripted installation.
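The ESX Server 3.0 installer is based on the Anaconda kickstart mechanism, so a scripted installation is driven by a kickstart-style configuration file. The excerpt below is only an illustrative sketch with placeholder values (the server name, addresses, passwords, and drive name are invented); see the Installation and Upgrade Guide for the full set of supported and ESX-specific directives:

# ks.cfg excerpt (placeholder values; not a complete file)
url --url http://deployserver/esx30          # hypothetical network installation source
rootpw changeme                              # set a real root password
network --bootproto static --ip 192.168.1.50 --netmask 255.255.255.0 --gateway 192.168.1.1
clearpart --all --drives=cciss/c0d0          # WARNING: erases the named internal drive
part /boot --fstype ext3 --size 100
part / --fstype ext3 --size 2560
part swap --size 544

The partition sizes shown mirror the minima recommended in the Disk partitioning section later in this paper.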

Installation considerations
This section highlights some additional items that you may wish to consider before beginning your deployment.

ATA/IDE and SATA drives
VMware supports booting from ATA/IDE and SATA devices; however, you cannot create a VMFS volume on these devices. An ESX Server host must have SCSI/SAS storage, NAS, or a SAN on which to store and run virtual machines.
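Before creating VMFS storage, it can be useful to confirm which devices the VMkernel actually sees. A brief sketch using the esxcfg utilities shipped with ESX Server 3.0, run from the service console:

%> esxcfg-vmhbadevs       # map vmhba adapter names to service console devices
%> esxcfg-vmhbadevs -m    # show devices carrying VMFS volumes and their mappings

ATA/IDE and SATA boot disks will appear here, but VMFS volumes can only be created on the SCSI/SAS, NAS, or SAN storage listed.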



SAN
Before installation, you should zone and mask all SAN LUNs away from your server except those needed during installation, including shared LUNs with existing VMFS partitions. This helps prevent accidental deletion of critical VMs and data. After installation, you may then unmask the shared LUNs. The maximum number of LUNs supported by the ESX Server installer is 128, and the maximum for ESX Server itself is 255; keep these maxima in mind when configuring and presenting LUNs.

Boot From SAN
ESX Server does not support booting from a shared LUN. Each ESX Server host should have its own boot volume, and this volume should be masked away from all other systems.

Disk partitioning
Table 9 shows how the ESX Server host's storage should be partitioned. The sizes provided are recommended minima, and optional partitions are noted.
Table 9. Default storage configuration and partitioning for a VMFS volume on internal drives

Partition name    Format   Size     Description
/boot             ext3     100MB    The boot partition stores files required to boot ESX Server.
/                 ext3     2560MB   Called the root partition, this contains the ESX Server operating system and Web Center files. Allocate an additional 512MB if you plan to use this server for scripted installations.
NA                swap     544MB    The swap partition is used by the service console and tools like the HP Insight Management agents.
vmkcore           vmkcore  100MB    This partition serves as a repository for VMkernel core dump files in the event of a VMkernel core dump.
VMFS              VMFS-3   1200MB+  The VMFS file system for the storage of virtual machine disk files. Must be large enough to hold your VM disks.
/home (optional)  ext3     512MB    Storage for individual users.
/tmp (optional)   ext3     1024MB   Partition used for temporary storage.
/var (optional)   ext3     1024MB   Partition used for log file storage. HP recommends creating a /var partition to prevent unchecked log file growth from causing service interruptions.


Post-installation tasks
HP Insight Management agents
HP Insight Management (IM) agents provide server management capabilities for ESX Server installed on supported server platforms. This section describes how to obtain, install, and configure the agents for a particular server environment. Obtaining IM agents for ESX Server The latest IM agents are available from the HP website. Follow these steps to download the agents:
1. Go to http://www.hp.com/servers/swdrivers.
2. Under Option 2: Locate by category, select the appropriate version of VMware ESX Server from the Operating system drop-down list.
3. From the Category drop-down list, select Software - System Management.
4. Click Locate software to obtain a download link.
5. Download the compressed tar file directly to the ESX Server system.
6. Unpack the archive with the command:

%> tar zxvf hpmgmt-<version>-vmware.tgz

Note: Opening the tar file on a Windows server may corrupt files in the archive.

The contents of the archive are unpacked in the hpmgmt/<version>/ directory. Table 10 lists the packages included in the archive and their functions.
Table 10. Descriptions of packages included with the download

Package              Description
hpasm                Provides server and storage management capabilities.
hprsm                Provides rack and iLO management capabilities.
cmanic               Gathers critical HP ProLiant NIC hardware and software information to help IT administrators manage and troubleshoot their systems.
hpsmh                The System Management Homepage provides a consolidated view for single-server management that highlights tightly-integrated management functionality (such as performance, fault, security, diagnostic, configuration, and software change management).
expat                Package dependency for hpsmh. This package is not installed on ESX Server 3.0.
ucd-snmp-cmaX        Contains the UCD-SNMP protocol and cmaX extensions. This package is not installed on ESX Server 3.0.
ucd-snmp-cmaX-utils  Contains tools for requesting or setting information from SNMP agents, tools for generating and handling SNMP traps, a version of the netstat command that uses SNMP, and a Tk/Perl MIB browser. This package is not installed on ESX Server 3.0.


Installing and configuring IM agents
After the tar file has been downloaded and unpacked, view the included README file for important installation and configuration information. Before starting the installation, you should consider or have available the following information:

SNMP settings
During the installation, you will be asked to supply community strings (both read-only and read-write) for SNMP communication with the localhost and with a management station such as HP Systems Insight Manager. The settings you provide are written to the SNMP configuration file, /etc/snmp/snmpd.conf. You must specify a read-write community string for the localhost; this community string is used by the agents to write data to the SNMP Management Information Base (MIB) tree. If you are using HP SIM or other management software, see Central Management Server below.

Central Management Server
If using a central management system such as HP Systems Insight Manager, you will need to provide the IP address or DNS name of the management server during the installation. Enter the management server's IP address along with the community string that matches the settings in your management server. When using HP SIM, you only need to allow read-only access to the CMS.

Firewall configuration
VMware ESX Server 3.0 uses a firewall to restrict network communications to and from the ESX Server host to essential services only. For full functionality of the health and management agents, the following ports must be opened:
Table 11. Ports

Port  Protocol  ESX Firewall  Service  Description
161   tcp/udp   snmp          snmpd    SNMP traffic
162   tcp/udp   snmp          snmpd    SNMP traps
2381  tcp       https         N/A      HP System Management Homepage
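If you prefer to open these ports yourself rather than let the installer do it (see below), the esxcfg-firewall utility in ESX Server 3.0 can be used from the service console; a sketch (the rule name for port 2381 is arbitrary):

%> esxcfg-firewall -e snmpd               # enable the predefined SNMP service (ports 161/162)
%> esxcfg-firewall -o 2381,tcp,in,hpsmh   # open inbound tcp/2381 for the System Management Homepage
%> esxcfg-firewall -q                     # query the resulting firewall configuration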

During the installation, you will be given the option to have the installer configure the ESX Server firewall for you. For more information about the HP health and management agents, please see the Managing ProLiant Servers with Linux HOWTO at ftp://ftp.compaq.com/pub/products/servers/Linux/linux.pdf. Although this HOWTO was written for enterprise Linux systems, much of the information it contains is also applicable to VMware ESX Server environments.

To begin the installation, log in as root and run the following command:

%> ./installvm<version>.sh --install

The installation script performs some basic checks and guides you through the installation process. After the installation has completed, you may wish to configure the System Management Homepage (SMH). To start the SMH configuration wizard, run the following command:

%> /usr/local/hp/hpSMHSetup.pl

For detailed information on configuring SMH, refer to the System Management Homepage Installation Guide.
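For reference, the SNMP choices made during installation are written to /etc/snmp/snmpd.conf as standard UCD-SNMP directives along these lines (the community strings and CMS address below are placeholders; the exact entries the installer writes may differ):

rwcommunity private 127.0.0.1     # read-write access for the local IM agents only
rocommunity public 192.168.1.10   # read-only access for the HP SIM CMS
trapsink 192.168.1.10 public      # send SNMP traps to the CMS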


Silent installation
The installation script may also be run in silent mode, installing and configuring the IM agents based on settings contained in an input file, without user interaction. To automate the installation of the agents, create an input file using the hpmgmt.conf.example file from the download package as a template. Information on the available options is given in the example file; at a minimum, you should configure local read-write community access for SNMP.

To automate the configuration of the System Management Homepage, place a copy of the SMH configuration file (smhpd.xml) with the desired settings into the same directory as the agent installation script. It is recommended to use a file from a pre-existing installation rather than edit the file by hand; the smhpd.xml file can be found in /opt/hp/hpsmh/conf/. During a silent installation, the installer checks for the presence of this file. If found, it is used to configure SMH; otherwise, SMH is configured with the default options.

When you are ready to begin the installation, log in as root and run the following command:

%> ./installvm<version>.sh --silent --inputfile input.file

The installation process starts immediately; you are not prompted for confirmation. However, if necessary information is missing from the configuration file, you may be prompted for it during the installation.

Re-configuring the IM agents
To change the configuration of the agents after the installation is complete, log in as root and run the following command:

%> service hpasm reconfigure

This command stops the agents and reruns the interactive configuration wizard. After reconfiguring the agents, you must restart the SNMP service.
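As a minimal sketch of the whole silent workflow, assuming the example file ships at the top of the unpacked archive (paths may differ by agent version):

%> cp hpmgmt/<version>/hpmgmt.conf.example input.file
%> vi input.file                         # at minimum, set the local read-write community string
%> cp /opt/hp/hpsmh/conf/smhpd.xml .     # optional: reuse SMH settings from an existing installation
%> ./installvm<version>.sh --silent --inputfile input.file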

ESX Server configuration


After installing ESX Server, you will need to configure the host's networking, storage, and security settings. Please see the Virtual Infrastructure: Server Configuration Guide for complete details on configuring your ESX Server host.


Virtual Machine deployment


The provisioning and deployment of a virtual machine (VM) is, in many ways, similar to the provisioning and deployment of a physical server. Servers are first configured with the desired hardware (such as CPUs, memory, disks, and NICs), then provisioned with an operating system, most likely via physical media such as a CD-ROM or DVD, or over the network. Likewise, a VM is created with a specific virtual hardware configuration; however, in addition to the more conventional methods of server provisioning, VMs offer some unique options. This section discusses conventional and advanced methods for deploying VMs.

Creating new VMs


Many of the deployment options used for physical servers are also available to virtual machines. Some of the more widely-used options (the use of standard media, a network deployment, and the use of HP ProLiant Essentials Rapid Deployment Pack (RDP)) are discussed below.

Using standard media
The most basic method of installing an operating system is by using the physical install media, most likely a CD-ROM or DVD or, perhaps, a floppy diskette. A VM's CD-ROM drive can be mapped directly to the CD-ROM drive of the host machine, permitting a very simple guest operating system installation. The VM's CD-ROM drive can also be mapped to an image file on the host or the host's network. With ESX Server 3.0 and the Virtual Infrastructure Client, you can now also use images on the client or the client's network. By creating a repository for CD-ROM images on the network or SAN, you can maintain a central location for images to be shared by all ESX Server machines, eliminating the need to locate and swap CD-ROMs between hosts.

VMs can also access CD-ROMs and image files via the HP ProLiant server host's iLO Virtual Media feature. When connecting a VM's CD-ROM to the iLO Virtual CD, take note of the following:

- Use the special device /dev/scd0 rather than the standard /dev/cdrom. It is NOT necessary to first mount the device in the ESX service console.
- When using iLO Virtual Floppy, the device is typically /dev/sda; however, if a SAN or some other SCSI device is attached, this may not be the case. To verify which device is attached to the Virtual Floppy, run the dmesg command in the service console after connecting the floppy in the iLO interface. Look for lines similar to the following example:

scsi3 : SCSI emulation for USB Mass Storage devices
  Vendor: HP  Model: Virtual Floppy  Rev: 0.01
  Type: Direct-Access  ANSI SCSI revision: 02
VMWARE: Unique Device attached as scsi disk sde at scsi3, channel 0, id 0, lun 0
Attached scsi removable disk sde at scsi3, channel 0, id 0, lun 0

Note that the fourth and fifth lines of this example show the Virtual Floppy attached to /dev/sde.

Network deployment
Many operating systems now support some method of installation over the network (for example, Microsoft Remote Installation Service). This scenario is usually accomplished by remote booting with a Pre-Boot eXecution Environment (PXE) ROM or by using special boot media containing network support. PXE boot is supported by ESX Server VMs.


A VM with no guest operating system installed attempts to boot from devices (hard disk, CD-ROM drive, floppy drive, network adapter) in the order in which these devices appear in the boot sequence specified in the VM's BIOS. As a result, if you plan to use PXE boot capability, HP recommends placing the network adapter at the top of the boot order. To achieve this, press F2 when the VM first boots to enter the VM's BIOS, then update the boot order.

Note: The PXE boot image must contain drivers for the Universal Network Device Interface (UNDI) or the VM's virtual network adapter to support network connectivity.

Using RDP
RDP includes predefined jobs for deploying an operating system to a VM. However, before using one of these jobs, you must perform some additional steps, as described below. First, you must set the PXE NIC to appear first in the VM's boot order. To achieve this, perform the following steps:

1. Power on the VM.
2. Press F2 during POST to enter the VM's BIOS configuration utility.
3. From the Boot menu, select Network Boot, then press the + (plus) key until the PXE NIC is first in the boot order.

Next, allow the VM to PXE boot and connect to the Deployment Server. Once connected, the VM is displayed under New Computers in the Deployment Server console as shown in Figure 1.

Figure 1. The VM is displayed in the Deployment Server console

The default deployment scripts will use the console name as the system name. You should consider renaming the VM with a name that complies with the requirements of the operating system to be deployed, or modify the deployment job to use or create a valid system name. The deployment job may now be run on the VM. To customize the deployment of the operating system on the VM, use the same procedures as you would for a physical server. For example, if installing a Windows operating system, you must create an unattend.txt file that is customized for the specific VM, then configure the deployment job to use your custom unattend.txt file, as in the fragment below.
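As an illustrative fragment only (the computer name and organization values are invented), the ComputerName entry is typically what must be made valid per VM:

; unattend.txt excerpt (placeholder values)
[UserData]
    ComputerName = VMGUEST01     ; must meet Windows naming requirements
    FullName = "IT Department"
    OrgName = "Example Corp"

[Unattended]
    UnattendMode = FullUnattended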


Migrating VMs from ESX Server 2.x


Virtual Machines created with ESX Server 2.x can be migrated to ESX Server 3.0 hosts. Follow the procedures in the VMware Installation and Upgrade Guide for migrating your ESX 2.x virtual machines.

VMware Tools
This section provides guidelines for deploying VMware Tools into a Windows guest operating system.

IMPORTANT: It is very important to install VMware Tools in the guest operating system. While the guest operating system can run without VMware Tools, significant functionality and convenience would be lost.

VMware Tools is a suite of utilities that can enhance the performance of the VM's guest operating system and improve VM management. Features include:

- The VMware Tools service for Windows (or vmware-guestd on Linux guests)
- A set of VMware device drivers, including an SVGA display driver, the advanced networking driver for some guest operating systems, the BusLogic SCSI driver for some guest operating systems, the memory control driver for efficient memory allocation between virtual machines, the sync driver to quiesce I/O for Consolidated Backup, and the VMware mouse driver
- The VMware Tools control panel, which allows IT staff to modify settings, shrink virtual disks, and connect and disconnect virtual devices
- A set of scripts that helps automate guest operating system operations; the scripts run when the VM's power state changes
- A component that supports copying and pasting text between the guest and managed host operating systems

For more information on installing VMware Tools, refer to the Basic System Administration Guide. See below for guidelines on using RDP to install VMware Tools.

Using RDP to install VMware Tools
RDP can be used to automate the installation of VMware Tools into a Windows guest OS. Perform the following steps:

Note: To fully automate the installation of VMware Tools, the DriverSigningPolicy for the system or domain must be set to Ignore. For more information on setting DriverSigningPolicy in Windows, refer to Microsoft KB article 298503. The policy may be restored to its original setting when the installation is complete.


1. To copy VMware Tools files to the Deployment Server, first use the Virtual Infrastructure Client to attach to an existing VM's console.
2. Connect the VM's virtual CD-ROM to the VMware Tools ISO image at /usr/lib/vmware/isoimages/windows.iso.
3. Copy the contents of the CD-ROM to VMwareTools, a newly-created directory under <deployment_server>\lib\software on the Deployment Server.
4. Create a new Distribute Software task in a new or existing job.
5. Configure the task to support the silent installation of all VMware Tools components, as shown in Figure 2.

Figure 2. Configuring a task for the silent install of all VMware Tools components
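Behind the task shown in Figure 2, the VMware Tools installer is invoked with its silent switches. A commonly used command line (verify against your VMware Tools version) is:

setup.exe /S /v "/qn"

Here /S silences the InstallShield wrapper and /v passes the quiet switch /qn to the embedded MSI package; as noted below, the VM reboots when the installation completes.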

Note: When scheduling this task, be aware that the VM reboots when the installation is complete.


Using HP Systems Insight Manager


Host management
This section provides details on configuring HP Systems Insight Manager (HP SIM) to manage your HP ProLiant servers running ESX Server 3.0.

SNMP settings
Before discovering your host, you should install and configure the HP Insight Management agents according to the instructions above. Verify that you have provided read-only SNMP access to the CMS and that the community string matches one provided to HP SIM. To check the community strings, go to Options -> Protocol Settings -> Global Protocol Settings.

Figure 3. Checking community strings

You can provide up to 8 global community strings. If you need more, you can provide additional strings for each host after discovery.


Note: If you use public for a community string, HP recommends making it the lowest priority.

During discovery, HP SIM attempts to use the first community string in the list. If it is unable to establish communication with the host, it tries the second string in the list, and so on. Once HP SIM can establish a connection with a particular community string, no additional strings are tried, and that string is used for all future SNMP communication (unless explicitly configured via the host's protocol settings page). Once you have verified that your SNMP configuration is correct, you can then perform device discovery of your host.

Trust relationship
You may wish to establish a trust relationship between the HP SIM CMS and your ESX Server host. This enables single sign-on (SSO) management between the CMS and the host and permits remote task execution. Trust can be established by Name or by Certificates and is configured through the System Management Homepage. To export the CMS certificate, go to Options -> Security -> Certificates -> Server Certificate and use the Export feature. This certificate can be saved, then uploaded to your ESX Server host. Refer to the System Management Homepage Installation Guide for further details.

VM management
Since its hardware is virtual rather than physical, it is unnecessary to install IM agents on a VM. However, the visibility of VMs is valuable to IT staff, answering questions such as: Is the VM up or down? How much free disk space is available to the VM?

For basic status monitoring, HP SIM does not require additional software to be installed on the VM; HP SIM can poll status by issuing a standard ping and waiting for a response. However, there are some states where, although the VM responds to a ping, an application or major parts of the operating system may have crashed. Thus, it is desirable to implement more advanced status capabilities through the WBEM/WMI and SNMP industry-standard management protocols.

Configuring HP SIM
Since WMI/WBEM is a secured protocol, HP SIM needs a valid user name/password combination to be able to authenticate to the managed systems. These credentials can be specified on a global or individual system basis. More information about the proper credentials to use is provided in the operating system specific sections below. To set a global username and password, set the appropriate Default WBEM settings, as shown in Figure 4, under Options -> Protocol Settings -> Global Protocol Settings.


Figure 4. Setting a global username and password

After ensuring that the Enable WBEM checkbox is checked, specify the appropriate user name and password so that HP SIM can access WBEM/WMI data. Since there are five different settings, you can specify different logins for different groups of servers. To specify the WBEM credentials on an individual basis, select the server you wish to configure, then select Options -> Protocol Settings -> System Protocol Settings, as shown in Figure 5.

Figure 5. Setting an individual username and password

After ensuring that Use values specified below is selected, enter the individual username and password.

SNMP has a rudimentary security capability through an agreed-upon pass phrase (known as a community string) used by both sender and receiver. SNMP community strings can be specified on a global or individual system basis. Details on configuring SNMP community strings in HP SIM are provided in the Host management section above.

Windows VMs
Microsoft Windows operating systems come with an SNMP stack. Follow the instructions for installing and configuring SNMP provided by your operating system.


In addition to community string security, Windows Server 2003 provides a host allow/deny feature for controlling access to SNMP data. By default, the SNMP service is set up to respond only to requests from localhost and to ignore all others, including requests from the HP SIM server. There are two options for enabling these requests:

- Add the DNS name or IP address of the HP SIM CMS to the list of addresses from which the SNMP service accepts packets, or
- Change the selection to Accept packets from any host, as shown in Figure 6.

Figure 6. Configuring SNMP service to accept requests from any host

WMI is provided and installed by default with Windows 2000 and later operating systems. A WMI stack is available for Windows NT 4.0 via download from Microsoft: visit http://www.microsoft.com/downloads and search on the keywords "WMI core". Since WMI uses secure communications, a valid OS account login/password combination must be supplied to HP SIM. This user does not need special privileges, so an administrator account is not required. HP recommends using a single account across all VMs for WMI communication. If the configuration is correct, your VMs should be displayed (after discovery) in HP SIM as in the example shown in Figure 7. Note that the Product Name is shown as VMware Virtual Platform.


Figure 7. An example of correctly-configured VMs in HP SIM

If any information is missing or unknown, click on the particular VM and verify that the Management Protocols settings include WBEM and SNMP. If not, verify your settings as described above or see the Troubleshooting section below for additional information.

Linux VMs
Most Linux distributions also provide an SNMP stack, although it may not be installed by default. Refer to your distribution's instructions for installing SNMP. After SNMP has been installed, you will need to provide read-only access to your HP SIM CMS by adding the following line to your snmpd configuration file:

rocommunity community_string_from_CMS IP_of_CMS

For additional information on configuring snmpd security settings, see the snmpd.conf manual page. Once SNMP is configured, HP SIM will discover your Linux VM with a Product Name of Linux Server. This is different from your Windows VMs, which are discovered as VMware Virtual Platform. If you prefer, you can change the way HP SIM discovers your Linux VMs to match that of your Windows VMs: you will need to create a new Managed Type in HP SIM and change the SNMP SysObjectID on your Linux VM. Refer to the documentation for your SNMP stack and Systems Insight Manager.

Note: In order to change the SysObjectID, you must use the net-snmp stack. The ucd-snmp stack does not support this feature.
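For example, on a VM whose snmpd configuration lives at /etc/snmp/snmpd.conf (the community string and CMS address below are placeholders):

%> echo 'rocommunity hpsim_ro 192.168.1.10' >> /etc/snmp/snmpd.conf
%> service snmpd restart    # re-read the configuration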

In order to use WBEM to manage your Linux VMs, you will need to install a WBEM Common Information Model Object Manager (CIMOM). There are two open source projects available for download that provide CIMOMs: OpenWBEM and Open Pegasus. Recent Linux distributions are beginning to include a CIMOM package; however, for best results, HP recommends downloading and compiling Open Pegasus from source (use version 2.5 or later). Follow the instructions provided by each project for installing and configuring a user account for access by the CMS.

Troubleshooting
This section describes some common problems you may encounter while discovering your ESX Server hosts and VMs and provides some tips on how to resolve them.


My ESX Server hosts do not display the ProLiant server model, or report as unknown or Linux Server.

- Verify the HP Insight Management agents have been installed and are running.
- Verify your ESX firewall is configured to permit SNMP traffic (ports 161 and 162).
- Open a browser to the System Management Homepage (https://hostname:2381). If you are unable to bring up SMH or if any information is missing, restart the management agents.
- Navigate to the System Page for the host in HP SIM. Expand the Product Description area and verify that SNMP is listed as one of the Management Protocols.
- Navigate to the host's System Protocol Settings page and verify that the SNMP community string matches the one you configured on your host. If not, override the global setting and supply the community string you wish to use.
- Re-run Identify Systems from the Options menu.

My Virtual Machine reports as unknown or unmanaged.

- Verify your SNMP and WBEM configuration in the Global Protocol Settings.
- Navigate to the device's System Page. Expand the Product Description area and verify SNMP and WBEM are listed in the Management Protocols section.
- Verify that your network permits traffic on port 5989.
- Restart the WMI Mapper service on the CMS host.
- Re-run Identify Systems from the Options menu.
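Where the steps above call for restarting the agents, the following service console commands are a reasonable sketch (service names as installed by the HP agent packages; verify with chkconfig --list):

%> service hpasm restart    # restart the HP Insight Management agents
%> service snmpd restart    # restart the SNMP daemon
%> service hpsmhd restart   # restart the System Management Homepage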


For more information


Resource description                      Web address

HP website                                www.hp.com
HP ProLiant servers                       www.hp.com/go/proliant
HP ProLiant Server Management Software    www.hp.com/go/hpsim
HP StorageWorks                           www.hp.com/go/storageworks
VMware server virtualization              www.hp.com/go/vmware
VMware website                            www.vmware.com
VMware Infrastructure                     www.vmware.com/products/vi/
VMware Infrastructure Documentation       www.vmware.com/support/pubs/vi_pubs.html
VMware Knowledge Base                     www.vmware.com/kb
VMware and HP Alliance                    www.vmware.com/hp

© 2006 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Intel is a trademark or registered trademark of Intel Corporation or its subsidiaries in the United States and other countries. AMD Opteron is a trademark of Advanced Micro Devices, Inc. Linux is a U.S. registered trademark of Linus Torvalds. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation.

6/2006
