IBM PowerVM
Getting Started Guide
Step-by-step virtualization configuration from scratch to the first partition

IVM, HMC, and SDMC examples provided

Advanced configurations included
ibm.com/redbooks
Redpaper
4815edno.fm
International Technical Support Organization

IBM PowerVM Getting Started Guide

March 2012
REDP-4815-00
Note: Before using this information and the product it supports, read the information in Notices on page v.
First Edition (March 2012)

This edition applies to IBM Virtual I/O Server versions 2.2.0 and 2.2.1, IBM Systems Director Management Console version 6.7.4.0, and IBM Hardware Management Console. This document was created or updated on January 6, 2012.
Copyright International Business Machines Corporation 2012. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
4815TOC.fm
Contents

Notices
Trademarks
Preface
The team who wrote this paper
Now you can become a published author, too!
Comments welcome
Stay connected to IBM Redbooks

Chapter 1. Introduction to PowerVM
1.1 Overview
1.2 Planning
1.3 Terminology differences
1.4 Prerequisites

Chapter 2. Setting up using Integrated Virtualization Manager
2.1 Single VIOS setup using IVM
2.1.1 Install VIOS
2.1.2 Create partition for client OS
2.1.3 Configure VIOS for client network
2.1.4 Configure VIOS for client storage
2.1.5 Install client OS
2.2 Setting up a dual VIOS with IVM
2.3 Setting up NPIV Fibre Channel with IVM

Chapter 3. Setting up using Hardware Management Console
3.1 Single VIOS setup using HMC
3.1.1 Create VIOS partition profile
3.1.2 Install VIOS
3.1.3 Create client OS logical partition profile
3.1.4 Configure VIOS partition
3.2 Dual VIOS setup using HMC
3.2.1 Create dual VIOS partition profiles
3.2.2 Install VIOS
3.2.3 Create client OS logical partition profile
3.2.4 Configure VIOS partitions for dual setup
3.3 Setup virtual Fibre Channel using HMC
3.4 Additional client partitions
3.5 Summary

Chapter 4. Setting up using the SDMC
4.1 Guided dual VIOS setup using the SDMC
4.1.1 Create the Virtual Servers for VIOS1 and VIOS2
4.1.2 Install VIOS1 and VIOS2
4.1.3 Configure the TCP/IP stack in VIOS1 and VIOS2
4.1.4 Create the SEA failover configuration using the SDMC
4.1.5 Configure storage devices
4.1.6 Create Virtual Server for client OS
4.1.7 Install client OS
4.1.8 Configure virtual Fibre Channel adapters using the SDMC
4.2 Single VIOS setup using the SDMC
4.2.1 Create VIOS Virtual Server
4.2.2 Install VIOS
4.2.3 Create Virtual Server for client OS
4.2.4 Configure Virtual I/O Server
4.2.5 Install client OS
4.3 Dual VIOS setup using the SDMC
4.3.1 Create second VIOS Virtual Server
4.3.2 Install second VIOS using NIM
4.3.3 Configure second VIOS
4.4 Setup virtual Fibre Channel using the SDMC
4.4.1 Configure client Virtual Server for NPIV
4.4.2 Configure Virtual I/O Server for NPIV
4.4.3 Configure second VIOS for NPIV

Chapter 5. Advanced Configuration
5.1 Adapter ID numbering scheme
5.2 Partition numbering
5.3 VIOS partition and system redundancy
5.4 Advanced VIOS network setup
5.4.1 Using IEEE 802.3ad Link Aggregation
5.4.2 Using IEEE 802.1Q VLAN tagging
5.4.3 Multiple SEA configuration on VIOS
5.4.4 General network considerations
5.5 Advanced storage connectivity
5.6 Shared processor pools
5.7 Live Partition Mobility
5.8 Active Memory Sharing
5.9 Active Memory Deduplication
5.10 Shared storage pools

Related publications
IBM Redbooks
Online resources
Help from IBM
4815spec.fm
Notices
This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml. The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
Active Memory AIX BladeCenter GPFS IBM POWER Hypervisor Power Systems POWER6 POWER7 PowerHA PowerVM Power POWER Redbooks Redpaper Redbooks (logo) System i System p5 System Storage
The following terms are trademarks of other companies: Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.
4815pref.fm
Preface
IBM PowerVM virtualization technology is a combination of hardware and software that supports and manages virtual environments on POWER5, POWER5+, POWER6, and POWER7-based systems. Available on IBM Power Systems and on IBM BladeCenter servers as optional Editions, and supported by the IBM AIX, IBM i, and Linux operating systems, this set of comprehensive systems technologies and services is designed to enable you to aggregate and manage resources using a consolidated, logical view. Deploying PowerVM virtualization on IBM Power Systems offers you the following benefits:

- Lower energy costs through server consolidation
- Reduced cost of your existing infrastructure
- Better management of the growth, complexity, and risk of your infrastructure

This IBM Redpaper publication is intended as a quick start guide to help you install and configure a complete PowerVM virtualization solution on IBM Power Systems, using the Virtual I/O Server (VIOS) together with the Integrated Virtualization Manager (IVM), the Hardware Management Console (HMC), or the Systems Director Management Console (SDMC). The paper is targeted at new customers who need instructions on how to install, configure, and bring up a new server in a virtualized environment in a quick and easy way.
expertise include PowerVM, Power Systems, AIX, Data Protection, IBM System Storage and Storage Area Network. The project that produced this publication was managed by: Scott Vetter is a Certified Executive Project Manager at the International Technical Support Organization, Austin Center. He has enjoyed 24 years of rich and diverse experience working for IBM in a variety of challenging roles. His latest efforts are directed at providing world-class Power Systems Redbooks, white papers, and workshop collateral. Thanks to the following people for their contributions to this project: Don S. Spangler, Brian King, Ann Lund, Linda Robinson, Alfred Schwab, Richard M. Conway, David Bennini IBM US Nicolas Guerin IBM France
Comments welcome
Your comments are important to us! We want our papers to be as helpful as possible. Send us your comments about this paper or other IBM Redbooks publications in one of the following ways:

- Use the online Contact us review Redbooks form found at: ibm.com/redbooks
- Send your comments in an email to: redbooks@us.ibm.com
- Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. HYTD, Mail Station P099, 2455 South Road, Poughkeepsie, NY 12601-5400
4815ch01.fm
Chapter 1.
Introduction to PowerVM
Businesses are turning to IBM PowerVM virtualization to consolidate multiple workloads onto fewer systems, increase server utilization, and reduce cost. PowerVM provides a secure and scalable virtualization environment for AIX, IBM i, and Linux applications, built upon the advanced reliability, availability, and serviceability features and the leading performance of the Power Systems platform. This publication is intended for customers new to virtualization who are looking for a quick guide to get their virtualized servers up and running without delving into too many details of the architecture. It guides you through the basic installation and configuration of each technology involved with PowerVM, one chapter at a time: IVM, HMC, and SDMC. Chapter 1 provides a short overview of the key PowerVM concepts, a planning model, and best practices to follow. Chapters 2, 3, and 4 describe, in a step-by-step manner, how to configure your system using the Integrated Virtualization Manager (IVM), the Hardware Management Console (HMC), and the Systems Director Management Console (SDMC), respectively. The three chapters are logically independent; you can read them in any order. Chapter 5 offers advanced tips and pointers on where to go once you have completed the initial setup detailed here.
1.1 Overview
Chapters 2, 3, and 4 are step-by-step installation and configuration guides in a cookbook style. You will find similar steps to accomplish the same task: starting from a factory-fresh machine, you will be able to install and configure virtual machines using an HMC, IVM, or SDMC, and have a fully functional logical partition (LPAR). Note: The term logical partition, or LPAR, is used as a generic term in this document. Other terms used include guest partition, partition, and Virtual Server. All these terms refer to virtualized guest servers running their own operating systems.
Note: This document is not intended to deal with IBM BladeCenter. All three managing systems are intended to manage virtualization on IBM Power Systems. Table 1-1 shows how they differ:
Table 1-1 Virtualization manager features. IVM, HMC, and SDMC are compared on the following criteria: included in PowerVM, manage Power blades, manage more than one server, hardware monitoring, service agent call home, graphical interface, requires separate server to run on, runs on virtualized environments, advanced PowerVM features, high-end servers, low-end and midrange servers, supported server families (POWER5/POWER5+, POWER6/POWER6+, and POWER7), and redundant setup.
These are the basic steps you will find in all three chapters; they vary in order and complexity from one managing system to another:

1. Fresh out of the box: this paper guides you through the installation and configuration from scratch. You can factory reset your machine if you wish; no previous configuration is needed.
Important: Remember to perform a backup of all of your data before a factory reset.
2. Depending on the case, install one or two Virtual I/O Servers (VIOS). Redundant VIOS is only supported by the HMC and SDMC.
3. Configure network and storage. This procedure may require information provided by your network or storage administrator.
4. Create the client LPAR.

By the end of each chapter you will have a fully functional PowerVM solution with one LPAR ready to be used.
1.2 Planning
In this publication we use three sets of machines to describe the installation and configuration process with the three different managing systems. Figure 1-1 presents the model, machine type, and managing system used. This guide suggests you do some planning before starting to configure your environment, including:

- Check firmware levels on the Power server and the HMC or SDMC before you start.
- Decide whether you will use Logical Volume Mirroring (LVM) in AIX LPARs or Multipath I/O (MPIO). The examples in this paper use MPIO.
- Make sure your Fibre Channel switches and adapters are N_Port ID Virtualization (NPIV) capable.
- Make sure your network is properly configured.
- Check the firewall rules on the HMC or SDMC.
- Plan how much processor and memory you will assign to the VIOS for best performance.
- It is important to plan the VIOS virtual adapter slot numbering scheme. This publication uses the scheme shown in Figure 1-2. The SDMC offers automatic handling of slot allocation.
- Plan for two Virtual I/O Servers (VIOS). We recommend the dual VIOS architecture for serviceability and scalability.

Note: The dual VIOS architecture is only available when using the HMC or SDMC as managers. You cannot use dual VIOS with IVM.
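As a hedged illustration of the level checks in the planning list above, the versions can be queried from the HMC and VIOS command lines. The command names are standard on these platforms, but the managed-system name is a placeholder and output formats vary by release:

```shell
# On the HMC (restricted shell): show the HMC code level.
lshmc -V

# On an installed VIOS (padmin shell): show the VIOS level.
ioslevel

# On the HMC: show managed-system firmware (Licensed Internal Code) levels.
# Replace <managed-system> with the name reported by lssyscfg -r sys.
lslic -m <managed-system> -t sys
```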
Multipath I/O is a fault-tolerance and performance-enhancement technique in which there is more than one path between the CPU in a computer system and its storage devices, through the buses, controllers, switches, and bridge devices connecting them. To virtualize Fibre Channel adapters, PowerVM uses a subset of the Fibre Channel standard called N_Port ID Virtualization (NPIV).
The dual VIOS setup offers serviceability to a PowerVM environment on the managed system. It also provides added redundancy and load balancing of client network and storage. The mechanisms involved in setting up a dual VIOS configuration use Shared Ethernet Adapter (SEA) failover for network and MPIO via shared drives on the VIOS partitions for client storage. Other mechanisms can be employed, but SEA failover for networks and MPIO for storage require less configuration on the client partitions. SEA failover and MPIO allow for serviceability as well as redundancy and load balancing with the VIOS partitions. One VIOS can act as primary for networks and as standby for storage, while the other VIOS acts as standby for networks and primary for storage. The flexibility afforded by a dual VIOS setup caters to a wide range of client requirements.
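The SEA failover side of this configuration can be sketched with the VIOS mkvdev command. The device names (ent0, ent2, ent3) and the default VLAN ID below are assumptions for illustration; substitute the adapters reported by lsdev on your own VIOS:

```shell
# On each VIOS (padmin shell): bridge the physical adapter ent0 to the
# virtual trunk adapter ent2, using ent3 as the SEA failover control channel.
# ha_mode=auto enables automatic primary/backup failover between the VIOSes.
mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 \
       -attr ha_mode=auto ctl_chan=ent3

# On a client partition, the MPIO paths through both VIOSes can be listed with:
lspath
```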
A Shared Ethernet Adapter is a VIOS component that bridges a physical Ethernet adapter and one or more virtual Ethernet adapters. For more information please refer to the online IBM documentation about Shared Ethernet Adapters - http://ibmurl.hursley.ibm.com/2F2F
Figure 1-2 VIOS virtual adapter slot numbering scheme. VIOS1 (LPAR ID 1) and VIOS2 (LPAR ID 2) use server adapter IDs 101/102 for VirtServer1 (LPAR ID 10), 111/112 for VirtServer2 (LPAR ID 11), and so on through XX1/XX2 for VirtServerXX (LPAR ID 12); each virtual server uses client adapter IDs 11 and 12 toward VIOS1 and 21 and 22 toward VIOS2.
Figure 1-2 is represented in Table 1-2 for VIOS1, which describes the relationship between the VIOS server adapter IDs and the virtual servers' client adapter IDs. Similarly, Table 1-3 for VIOS2 describes the adapter ID allocation and its relationship to the virtual servers' client adapter IDs.
Table 1-2 VIOS1 adapter ID allocation

Virtual Adapter | Server Adapter ID | VLAN ID | Server Adapter Slot | Client Partition/Virtual Server | Client Adapter ID | Client Adapter Slot
Virtual Ethernet | 2 (used default allocation) | 1 (used default allocation) | C2 | All virtual servers | N/A | N/A
Virtual Ethernet (a) | 3 (used default allocation) | 99 (default for SDMC only) | C3 | N/A | N/A | N/A
Virtual Ethernet (b) | N/A | 1 | N/A | All virtual servers | 2 | C2
Virtual VSCSI | 101 | N/A | C101 | VirtServer1 | 11 | C11
Virtual Fibre | 102 | N/A | C102 | VirtServer1 | 12 | C12
Virtual VSCSI | 111 | N/A | C111 | VirtServer2 | 11 | C11
Virtual Fibre | 112 | N/A | C112 | VirtServer2 | 12 | C12

a. This virtual Ethernet adapter is to be used as the control channel adapter (SEA failover adapter).
b. This client virtual Ethernet adapter is not actually associated with a VIOS server adapter. The VLAN ID configured on the adapter is the link to the SEA adapter configuration.

Table 1-3 VIOS2 adapter ID allocation

Virtual Adapter | Server Adapter ID | VLAN ID | Server Adapter Slot | Client Partition/Virtual Server | Client Adapter ID | Client Adapter Slot
Virtual Ethernet | 2 (used default allocation) | 1 (used default allocation) | C2 | All virtual servers | N/A | N/A
Virtual Ethernet (a) | 3 (used default allocation) | 99 (default for SDMC only) | C3 | N/A | N/A | N/A
Virtual VSCSI | 101 | N/A | C101 | VirtServer1 | 21 | C21
Virtual Fibre | 102 | N/A | C102 | VirtServer1 | 22 | C22
Virtual VSCSI | 111 | N/A | C111 | VirtServer2 | 21 | C21
Virtual Fibre | 112 | N/A | C112 | VirtServer2 | 22 | C22

a. This virtual Ethernet adapter is to be used as the control channel adapter (SEA failover adapter).
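The numbering rule behind these allocations can be sketched as a small shell function. This helper is illustrative only, not part of VIOS or the HMC; it assumes the convention used in this publication, where the server adapter ID is the client LPAR ID times 10 plus a per-adapter digit (1 for virtual SCSI, 2 for virtual Fibre Channel):

```shell
# Compute the VIOS server adapter ID for a given client LPAR.
# $1 = client LPAR ID (e.g. 10 for VirtServer1), $2 = 1 for vSCSI, 2 for vFC.
server_adapter_id() {
  echo $(( $1 * 10 + $2 ))
}

server_adapter_id 10 1   # VirtServer1 vSCSI -> 101
server_adapter_id 11 2   # VirtServer2 vFC   -> 112
```

A consistent scheme like this makes it easy to tell, from the adapter ID alone, which client partition and adapter type a VIOS server adapter belongs to.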
Table 1-4 Power Systems and x86 terms

Power term | x86 term or concept | Definition
managed system | server or system | A physical server that contains physical processors, memory, and I/O resources that is often virtualized into virtual servers, which are also known as client logical partitions.
management partition | management operating system, VMware Service Console, or KVM host partition | The logical partition that controls all of the physical I/O resources on the server and provides the user interface from which to manage all of the client logical partitions within the server. In this case, the logical partition in which IVM is installed.
virtual server or logical partition | virtual machine | The collection of virtual or physical processor, memory, and I/O resources defined to run the client operating system and its workload.
Power Hypervisor with VIOS | x86 hypervisor | The underlying software that, together with VIOS, enables the sharing of physical I/O resources between client logical partitions within the server. In IVM environments, the terms Virtual I/O Server and Integrated Virtualization Manager are sometimes used interchangeably.
1.4 Prerequisites
There are some prerequisites you should verify in order to get as close to an ideal scenario as possible. Check that:

- Your HMC or SDMC (the hardware or the virtual appliance) is configured, up, and running.
- Your HMC or SDMC is connected to the new server's HMC port. We suggest either a private network or a direct cable connection.
- TCP port 657 is open between the HMC/SDMC and the virtual server, to enable dynamic logical partitioning functionality.
- You have IP addresses properly assigned for the HMC and SDMC.
- The Power server is ready to power on.
- All your equipment is connected to 802.3ad-capable network switches with link aggregation enabled. Refer to Chapter 5, Advanced Configuration, for more details.
- Fibre Channel fabrics are redundant. Refer to Chapter 5, Advanced Configuration, for more details.
- Ethernet network switches are redundant.
- SAN storage for virtual servers (logical partitions) is ready to be provisioned.
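One quick way to verify the TCP port 657 prerequisite is a connection probe with netcat. This is a hedged sketch: the host name is a placeholder, and it assumes nc is available on the machine you are testing from (it is not part of the HMC or SDMC tooling):

```shell
# Probe TCP port 657 (RMC, used for dynamic LPAR operations) on a partition.
# Replace <partition-host> with the host name or IP address of the LPAR.
nc -z -w 5 <partition-host> 657 && echo "port 657 reachable"
```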
4815ch02.fm
Chapter 2.

Setting up using Integrated Virtualization Manager
8. Select the language if necessary.
9. Enter the Service Processor password for the admin user account. The default password is admin. If the default password does not work and you do not have the admin password, you will have to contact hardware support to walk through signing on with the CE profile.
10. Insert the VIOS installation media in the CD/DVD drive.
11. To boot from the CD/DVD drive: select Boot Options (5), Install/Boot Device (1), CD/DVD (3), List All Devices (9), and choose the right CD/DVD device from the list (probably the last device at the bottom of the list). Select the media type from the list.
12. Select Normal Mode Boot and exit from the SMS menu.
13. Select the console number and press Enter.
14. Select the preferred language.
15. When prompted with the Installation and Maintenance menu, select option 1 to start with default settings. As other screens are presented, select the default options each time.
16. A progress screen shows Approximate % Complete and Elapsed Time. This installation should take between 15 minutes and an hour to complete.

When installation is complete, sign on and accept the license agreements:

17. Sign on as padmin and, when prompted, change the password to something secure.
18. If prompted to accept the license agreement or the software maintenance agreement, accept these agreements and continue.
19. After receiving the $ prompt, use the license -accept command to accept the license agreement.

To attach VIOS to the external Ethernet and configure TCP/IP, follow these steps:

20. Use the lsdev -vpd | grep ent command to list all Ethernet adapter ports. For our installation, we plugged the Ethernet cable into the top port of the 4-port Ethernet card in slot C4 of the CEC. Our lsdev listing is shown in Example 2-2 below.
Example 2-2 Output from lsdev -vpd | grep ent command
In Example 2-2 above, the top port (T1) of the Ethernet card in slot 4 (C4) of the CEC drawer (P1, serial number DNWKGPB) is assigned to ent4.
21. Use the cfgassist command and select VIOS TCP/IP Configuration. Then select the appropriate en# interface related to the adapter port chosen in the previous steps. In our case it is interface en4, related to adapter port ent4.
Note: Each ent# has an associated en# and et# (where # is the same number). So in our example, ent4, en4, and et4 all relate to the same Ethernet port on the card. Always use the en# entry for assigning TCP/IP addresses.
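If you prefer the command line over the cfgassist menus, the same TCP/IP settings can be applied with the VIOS mktcpip command. This is a sketch: the IP address is the example value from this section, while the host name, netmask, and gateway are assumed placeholders that will differ on your network:

```shell
# Configure TCP/IP on the en4 interface chosen above (padmin shell).
# Host name, netmask, and gateway here are illustrative values only.
mktcpip -hostname vios1 -inetaddr 172.16.22.10 -interface en4 \
        -netmask 255.255.252.0 -gateway 172.16.20.1
```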
22. On the VIOS TCP/IP Configuration screen, enter TCP/IP configuration values for the VIOS connectivity as shown in Example 2-3:
Example 2-3 - VIOS TCP/IP Configuration Screen
VIOS TCP/IP Configuration

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                  [Entry Fields]
* Hostname
* Internet ADDRESS (dotted decimal)
  Network MASK (dotted decimal)
* Network INTERFACE
  Default Gateway (dotted decimal)
  NAMESERVER Internet ADDRESS (dotted decimal)
  DOMAIN Name
  Cable Type
Initializing the Ethernet may take a few minutes to complete.
23. Attempt to ping the Internet address set for VIOS from your PC (172.16.22.10 in the example above).
24. Open a browser on your PC and attempt to connect to the URL https://internet-address (https://172.16.22.10 in the example above).
Check Point: Do not proceed until you can get the browser to connect. Not all Windows browsers work with IVM. We suggest Microsoft Internet Explorer version 8 or earlier, or Firefox 3.6 or earlier. You must also enable pop-ups in the browser.
Note: From this point forward, all configuration will be done from the browser window.
25. Using a browser, sign on to VIOS using the padmin profile and the password set earlier.
26. Check for updates to VIOS by clicking Updates in the Service Management section of the left-hand panel.
VIOS is now installed and ready for client partitions. At this time, VIOS owns all the hardware in the server and can either supply virtual adapters to the various client partitions, or give up control of a hardware adapter or port for assignment to a client partition.
2. Select the next number for the partition ID, give the partition a name, and select the operating system environment for the partition. Click Next.
3. Enter an appropriate amount of memory for the partition. Click Next.
4. Enter the amount of processing power for the partition and select shared or dedicated processor mode. Click Next.
Note: The amount of memory and processing power to assign to the client partition depends on the available resources in the server and on the anticipated workload of the client partition.
5. If installing AIX or Linux, unselect the Host Ethernet Adapter ports on the top half of the screen. Use the pull-down menu on the first Ethernet adapter for the partition and select which virtual Ethernet this adapter will be on. In most cases the default values will suffice. Do not worry about the warning message that the Ethernet is not bridged at this time; we will bridge these ports later as we assign them to a partition. Click Next.
6. Click None for the storage assignment at this time; we will go back to add disks later. Click Next.
Note: If the storage selection panel is blank at this point, you may be using a browser that is not supported by the IVM. Try another browser or an earlier version of your browser.
7. If installing AIX or Linux, skip the virtual Fibre Channel connections and click Next.
8. Confirm that one virtual optical device (CD/DVD) is selected at this time. Click Next.
9. The final summary screen shows the settings for this partition. If nothing needs to be corrected, click Finish and the VIOS will finish creating the partition environment.
8. Click the Apply button. Repeat these steps to bridge and configure additional virtual Ethernet IDs if needed.
10. Select the appropriate storage pool to get the storage from, and select the amount of storage to assign to the virtual disk. Be careful to change MB (megabytes) to GB (gigabytes) as appropriate for your size selection.
Caution: Sizing the disk depends on the operating system you want to install and the anticipated workload. For instance, the first virtual disk must meet the minimum disk size for the operating system being loaded. Consult the IBM Information Center for specific disk size requirements.
Note: You may assign this single virtual disk to your partition at this time, or create all the virtual disks and assign them to the partition at one time later.
11. Click OK when finished. The first virtual disk then shows up in the Virtual Disk screen. Repeat steps 7 through 11 to create additional virtual disks for your partition at this time.
The final step is to assign the virtual disks to the logical partition:
12. Click View/Modify Virtual Storage in the left-hand panel.
13. Click the Virtual Disk tab, and check all the virtual disk units to be assigned.
14. Click the Modify partition assignment button.
15. Select the partition from the pull-down menu and click OK to finish.
You are now finished assigning virtual storage to the client partition and can install the client operating system as described in section 2.1.5, Install Client OS on page 15.
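The IVM storage panels drive standard VIOS commands underneath, so the same virtual disk can also be created and mapped from the VIOS command line. The following is a sketch under assumptions: the storage pool name (rootvg), the backing device name (vdisk_cl1), the 20G size, and the vhost0 adapter are all illustrative values, not taken from this example.

```
$ lssp                                           # list the available storage pools
$ mkbdsp -sp rootvg 20G -bd vdisk_cl1 -vadapter vhost0
```

The mkbdsp command both carves the backing device out of the pool and maps it to the client's virtual SCSI server adapter in one step.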
Here is an example of installing IBM i from the physical media. You will need IBM System i Access for Windows installed on a PC to configure the LAN console for accessing the partition during installation of IBM i. See the IBM Information Center for detailed information on installing and configuring the LAN console.
To begin the install, assign the physical CD/DVD drive to the partition:
1. Click View/Modify Virtual Storage in the left-hand panel.
2. Click the Optical/Tape tab in the right-hand panel.
3. If necessary, expand the Physical Optical Devices section.
4. Select the cd0 device and click the Modify partition assignment button.
5. Select the partition from the pull-down menu and click OK. The physical CD/DVD drive in the server now belongs to that partition.
Next, we select the IPL type for the IBM i partition and verify other partition settings:
6. Click View/Modify Partitions in the left-hand panel. In the right-hand panel, check the partition, open the More Tasks pull-down menu, and click Properties.
7. Change the IPL type to D (IPL from CD/DVD) and change the keylock position to Manual.
8. Place the I_Base_01 CD in the CD/DVD drive of the server. Click OK at the bottom of the screen.
9. Select the partition again and use the Activate button to start the partition IPL.
Progress Note: In the case of IBM i, if the partition gets to the C600-4031 reference code, the partition is operating normally and looking for the LAN console session. If the IBM i partition reaches reference code A600-5008, the partition was unsuccessful in contacting the console session and you will need to troubleshoot the LAN console connectivity. Make sure you bridged the proper VLAN ports and that the LAN console PC is on the same subnet as the bridged Ethernet port.
Once you reach the language selection screen on the console, the installation of IBM i proceeds the same as installing on a stand-alone server.
Continue with Dedicated Service Tools functions to add the disk to the ASP and load the operating system.
At this point you have installed and configured the VIOS and at least one client partition. The following sections expand on this basic installation with more advanced features.
page 15) or creating virtual Fibre Channel adapters and assigning the virtual adapter to the partition.
Best Practices: Use the internal disks for the installation of the VIOS, mirroring the rootvg volumes. Use external SAN storage for the installation of client operating systems. This positions the client partitions for use of partition mobility later.
To configure N_Port ID Virtualization (NPIV) attached storage, we must create the virtual Fibre Channel adapters to generate the worldwide port names that allow the configuration and assignment of the storage. To configure the virtual Fibre Channel adapters:
1. Click View/Modify Partitions in the left-hand panel.
2. Select the partition with the check box and then click Properties from the More Tasks pull-down menu.
3. Click the Storage tab.
4. Expand the Virtual Fibre Channel section.
5. If an interface is not shown, click Add to create the first interface. Select the first interface listed (shown as Automatically Generated) and select the proper physical port from the pull-down menu.
6. Click OK to complete the generation of worldwide port names for this interface.
7. Return to the partition storage properties (steps 1, 2, and 3 above) to display the worldwide port names. Record these numbers for configuring the Fibre Channel attached storage.
Once the operating system is installed and the NPIV attached storage is provisioned, the storage is assigned directly to the partition's operating system; the VIOS has no knowledge of the storage. Use the normal procedures for adding newly attached storage to the operating system (AIX, IBM i, or Linux).
Now that you have finished installing with the Integrated Virtualization Manager (IVM), you can increase the RAS of the configuration using the advanced topics in Chapter 5, Advanced Configuration on page 75.
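On the VIOS command line, the equivalent of the GUI steps above is to map a virtual Fibre Channel server adapter to a physical NPIV-capable port and then read back the generated WWPNs. This is a sketch under assumptions: the device names vfchost0 and fcs1 are illustrative; your adapter names will depend on the order in which the virtual adapters were created.

```
$ lsdev -vpd | grep vfchost              # find the virtual Fibre Channel server adapter
$ vfcmap -vadapter vfchost0 -fcp fcs1    # bind it to a physical NPIV-capable port
$ lsmap -npiv -vadapter vfchost0         # display the mapping and the generated WWPNs
```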
Chapter 3. Setting up using Hardware Management Console
Figure 3-1 HMC window displaying the system in the work panel
4. Click the button that appears at the end of your managed system to open the popup menu and click Configuration → Create Logical Partition → VIO Server. Figure 3-1 shows the popup menus to create the VIOS partition.
5. In the Create Partition window, specify your partition's name, and click Next.
   Partition Name: VIOS1
6. In the Partition Profile window, enter your profile name and click Next.
   Profile Name: Normal
7. In the Processors window, ensure Shared is selected and click Next.
8. In the Processor Settings window, enter:
   Desired processing units: 0.2
   Maximum processing units: 10
   Desired virtual processors: 2
   Desired maximum processors: 10
   Select the Uncapped checkbox.
   Update the Weight setting to 192.
Note: The processor settings allow for the lowest utilization setting for the VIOS of 0.2 (Desired processing units) but scalable up to 2 processing units (Desired virtual processors) if necessary. The higher weighting gives the VIOS priority over the other logical partitions. This is detailed in depth in IBM PowerVM Virtualization Introduction and Configuration, SG24-7940.
9. In the Memory Settings window, enter:
   Minimum Memory: 1GB
   Desired Memory: 4GB
   Maximum Memory: 8GB
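The same partition profile can be created non-interactively with the HMC mksyscfg command. This is a sketch only: <managed-system> is a placeholder for your managed system name, and min_proc_units, min_procs, and the memory minimums in MB are assumed values mirroring the GUI steps above rather than values prescribed by this guide.

```
$ mksyscfg -r lpar -m <managed-system> -i "name=VIOS1,profile_name=Normal, \
  lpar_env=vioserver,proc_mode=shared,min_proc_units=0.1,desired_proc_units=0.2, \
  max_proc_units=10,min_procs=1,desired_procs=2,max_procs=10, \
  sharing_mode=uncap,uncap_weight=192,min_mem=1024,desired_mem=4096,max_mem=8192"
```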
10. In the I/O window, select the checkboxes of:
   - The RAID or SAS controller where the internal disks are attached (disk controllers for the VIOS internal drives).
   - The Ethernet adapter (newer adapters are described as PCI-to-PCI Bridge) that has been cabled to the network.
   - The Fibre Channel adapter attached to the SAN fabric.
   Click Add as desired and click Next. Figure 3-2 shows the adapters selected.
11. In the Virtual Adapters window, update the Maximum virtual adapters setting.
   Maximum virtual adapters: 1000
Note: There is flexibility for you to plan your own adapter numbering scheme. The Maximum virtual adapters setting needs to be set in the Virtual Adapters window to allow for your numbering scheme. The maximum setting is 65535, but the higher the setting, the more memory the managed system reserves to manage the adapters.
12. To create a virtual Ethernet adapter for Ethernet bridging, in the Virtual Adapters window as shown in Figure 3-3 on page 23:
   a. Click Actions → Create Virtual Adapter → Ethernet Adapter.
   b. In the Create Virtual Ethernet Adapter window, select the Use this adapter for Ethernet bridging checkbox and click OK.
   c. The virtual Ethernet adapter is created and appears in the Virtual Adapters window.
Information: When creating the virtual Ethernet adapter we accepted the default settings for Adapter ID, Port VLAN ID, and Ethernet Bridging Priority (Trunk Priority). These settings are customizable for a range of planning designs or standards.
13. For a single VIOS partition setup, skip to step 14. For a dual VIOS partition setup, continue here to create a virtual Ethernet adapter for SEA failover, in the Virtual Adapters window:
   a. Click Actions → Create Virtual Adapter → Ethernet Adapter.
   b. In the Create Virtual Ethernet Adapter window, update:
      Port Virtual Ethernet: 99
   c. Click OK. The virtual Ethernet adapter is created and appears in the Virtual Adapters window.
14. To create the virtual SCSI adapter, in the Virtual Adapters window:
   a. Click Actions → Create Virtual Adapter → SCSI Adapter.
   b. In the next window, select the Only selected client partition can connect checkbox. Update the following fields:
      Adapter: 101
      Client partition: 10
      Client adapter ID: 11
      Click OK to accept the settings.
   c. The virtual SCSI adapter is created and appears in the Virtual Adapters window.
Information: For the client partition we are beginning at partition ID 10 (reserving partition IDs 2-9 for future VIOS or infrastructure servers). For the adapter ID we chose 101 as a numbering scheme to denote the partition and virtual device #1. As for the client adapter ID, 11 is chosen as the first disk adapter for the client partition.
15. The Virtual Adapters window appears with the virtual adapters you created, as shown in Figure 3-4. Click Next.
Figure 3-4 Virtual Adapters window with virtual Ethernet and virtual SCSI adapter defined.
16. For the remaining windows, click Next until you reach the Profile Summary window.
17. On the Profile Summary window, verify your settings and click Finish.
18. Click your managed system to view the VIOS partition profile you created.
At this point, you have completed the creation of a logical partition (virtual server) for the VIOS installation.
Operations → Activate → Profile
4. In the Activate Logical Partition window, click Advanced to open the advanced options window.
5. In the advanced options window, select SMS from the Boot mode drop-down menu, and click OK.
6. Back in the Activate Logical Partition window, click OK to activate the VIOS partition.
7. Open a terminal window to the VIOS partition and observe the VIOS partition being booted into the SMS Main Menu.
8. Section 2.1.1, Install VIOS on page 10, details the installation via DVD. Follow step 10 on page 11 to step 19 on page 11.
The VIOS is ready to be configured for client network and storage service. For client storage, the worldwide port number (WWPN) can be extracted from the Fibre Channel adapter interface and given to the SAN administrator for zoning. The command to extract the WWPN is as follows:
lsdev -dev fcs0 -vpd | grep "Network Address"
Example 3-1 shows the WWPN for Fibre Channel adapter port fcs0. To obtain the WWPN for fcs1, run the command but replace fcs0 with fcs1.
Example 3-1 WWPN of fcs0 Fibre Channel adapter port
$ lsdev -dev fcs0 -vpd | grep "Network Address"
Network Address.............10000000C99FC3F6
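If you need the bare WWPN for scripting (for example, to paste into a zoning request), the label and dot leader can be stripped off. The following runs in any POSIX shell; it parses a captured copy of the Example 3-1 output rather than querying a live VIOS.

```shell
# Parse a captured line of lsdev output; the dots after the label are literal.
line='Network Address.............10000000C99FC3F6'
wwpn=$(printf '%s\n' "$line" | sed 's/^Network Address\.*//')
echo "$wwpn"    # prints 10000000C99FC3F6
```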
To use the installios feature on the HMC you need TCP/IP details for the VIOS partition:
- The VIOS TCP/IP address.
- The subnet mask of the VIOS TCP/IP network.
- The VIOS network gateway TCP/IP address.
12. Enter auto for the VIOS adapter duplex.
13. Enter no to not configure the TCP/IP address on the VIOS after installation.
14. Select the open TCP/IP address of the HMC.
Note: At least two adapters are shown with TCP/IP addresses, where one address is for the HMC open network and the other is the private network to your system's Flexible Service Processor (FSP) port.
15. After the HMC retrieves the Ethernet adapter details based on the VIOS partition profile configuration, select the Ethernet adapter port that was cabled in section 3.1, Single VIOS setup using HMC on page 20.
16. Press Enter to accept en_US as the language and locale defaults.
Note: Alternatively, if en_US is not your default language and locale, enter the language and locale you regularly use.
17. A window should appear with the details you selected. Press Enter to proceed using the details you have provided.
18. Review the License Agreement details. At the end of the License Agreement window, enter Y to accept.
19. If the installation media spans multiple DVDs you will be prompted to change DVDs and to enter c to continue. Using the details that you provided, the HMC uploads the software from the installation media to a local file system within the HMC. Network Install Manager On Linux (NIMOL) features on the HMC are used to network boot the VIOS partition and network install the VIOS software.
20. Open a terminal window to the VIOS partition.
21. After the VIOS installation is completed and the VIOS partition boots up with a login prompt, enter the padmin user ID to log in.
22. When prompted, change the password to something secure.
23. Enter a to accept the VIOS software maintenance terms and conditions.
24. Enter the license -accept command to accept the VIOS license agreement.
25. To list the physical Fibre Channel adapters on the VIOS, enter the lsnports command. Example 3-2 shows the Fibre Channel adapter ports configured on VIOS1.
As we explained in section 3.1, Single VIOS setup using HMC on page 20, the first port (T1) is planned for virtual SCSI and the second port (T2) is planned for virtual Fibre Channel which is explained later in this chapter.
Example 3-2 Fibre Channel adapter port listing on VIOS1
$ lsnports
name  physloc                     fabric tports aports swwpns awwpns
fcs0  U5802.001.0087356-P1-C2-T1  1      64     64     2048   2046
fcs1  U5802.001.0087356-P1-C2-T2  1      64     64     2048   2048
26. For client storage, the worldwide port number (WWPN) can be extracted from the Fibre Channel adapter interface and given to the SAN administrator for zoning. The command to extract the WWPN is as follows:
lsdev -dev fcsX -vpd | grep "Network Address"
Example 3-3 shows the WWPN for Fibre Channel adapter port fcs0. To obtain the WWPN for fcs1, run the command but replace fcs0 with fcs1.
Example 3-3 WWPN for fcs0 Fibre Channel adapter port
$ lsdev -dev fcs0 -vpd | grep "Network Address"
Network Address.............10000000C99FC3F6
The VIOS is ready to be configured for client network and storage service.
If a NIM server is not available and you wish to use NIM to build a PowerVM environment on your system, do the following:
1. Build the VIOS partition using either DVD or installios.
2. Build the first client partition as an AIX NIM server.
3. If you plan to build a second VIOS partition, build the second VIOS using NIM.
4. Deploy any Linux or AIX client partitions using NIM.
4. In the Partition Profile window, enter your Profile name and click Next.
5. In the Processors window, ensure the Shared option is selected and click Next.
6. In the Processor Settings window, enter:
   Desired processing units: 0.4
   Maximum processing units: 10
   Desired virtual processors: 4
   Desired maximum processors: 10
   Select the Uncapped checkbox.
7. In the Memory Settings window, enter:
   Minimum Memory: 1GB
   Desired Memory: 16GB
   Maximum Memory: 24GB
8. In the I/O window, click Next.
9. In the Virtual Adapters window, update the Maximum virtual adapters setting:
   Maximum virtual adapters: 50
10. To create virtual Ethernet adapters, in the Virtual Adapters window:
   a. Click Actions → Create Virtual Adapter → Ethernet Adapter.
   b. In the Create Virtual Ethernet Adapter window, click OK.
   c. The virtual Ethernet adapter is created and appears in the Virtual Adapters window.
11. To create the virtual SCSI adapter, in the Virtual Adapters window:
   a. Click Actions → Create Virtual Adapter → SCSI Adapter.
   b. In the Create Virtual SCSI Adapter window, select the Only selected client partition can connect checkbox. Update the following:
      Adapter: 11
      Server partition: 1
      Server adapter ID: 101
      Click OK to accept the settings.
   c. The virtual SCSI adapter is created and appears in the Virtual Adapters window.
12. For a single VIOS setup, skip to step 13. For a dual VIOS setup, create an additional virtual SCSI adapter to map to the VIOS2 virtual server SCSI adapter:
   a. Click Actions → Create Virtual Adapter → SCSI Adapter.
   b. In the next window, select the Only selected client partition can connect checkbox. Update the following:
      Adapter: 21
      Server partition: 2
      Server adapter ID: 101
      Click OK to accept the settings.
   c. The virtual SCSI adapter is created and appears in the Virtual Adapters window.
13. The Virtual Adapters window appears with the virtual adapters you created. Click Next.
14. For the remaining windows, click Next until you reach the Profile Summary window.
15. On the Profile Summary window, click Finish.
16. Click your managed system to view the partition profile you created.
2. To list the Ethernet devices configured on the VIOS, showing the logical name relationship to the physical device details, run lsdev -vpd | grep ent; see Example 3-4.
Example 3-4 Listing of VIOS Ethernet devices
$ lsdev -vpd | grep ent
ent4  U8233.E8B.061AB2P-V1-C2-T1  Virtual I/O Ethernet Adapter (l-lan)
ent0  U78A0.001.DNWHZS4-P1-C2-T1  4-Port 10/100/1000 Base-TX PCI-Express
ent1  U78A0.001.DNWHZS4-P1-C2-T2  4-Port 10/100/1000 Base-TX PCI-Express
ent2  U78A0.001.DNWHZS4-P1-C2-T3  4-Port 10/100/1000 Base-TX PCI-Express
ent3  U78A0.001.DNWHZS4-P1-C2-T4  4-Port 10/100/1000 Base-TX PCI-Express
In Example 3-4, ent0 (U78A0.001.DNWHZS4-P1-C2-T1) is the cabled physical Ethernet adapter port. The U78A0.001.DNWHZS4-P1-C2 Ethernet adapter is the adapter selected in Figure 3-2 on page 22. Adapter ent4 (U8233.E8B.061AB2P-V1-C2-T1) is the virtual Ethernet adapter shown in Figure 3-4 on page 24.
Note: For the virtual Ethernet adapter U8233.E8B.061AB2P-V1-C2-T1, the V in V1 indicates it is a virtual adapter, and C2 indicates it is the slot with adapter ID 2 as shown in step 15 on page 24.
3. Create the SEA adapter, which bridges the physical adapter and the virtual adapter, where:
   - ent0 is the physical adapter found in step 2.
   - ent4 is the virtual adapter found in step 2.
   - 1 is the Port VLAN ID of ent4, where we accepted the default Port VLAN ID allocation.
   mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1
   In Example 3-5, the SEA virtual network devices are created:
   - ent5 is an Ethernet network adapter device.
   - en5 is a standard Ethernet network interface where TCP/IP addresses are assigned.
   - et5 is an IEEE 802.3 Ethernet network interface.
Example 3-5 Create SEA interface
$ mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1
ent5 Available
en5
et5
4. Configure the TCP/IP connection for the VIOS with details provided by the network administrator. Example 3-6 is a sample provided for this exercise.
Example 3-6 Sample network parameters
The following details are provided:
network ip address: 172.16.22.15
network subnet:     255.255.252.0
network gateway:    172.16.20.1
mktcpip -hostname vios1 -interface en5 -inetaddr 172.16.22.15 -netmask 255.255.252.0 -gateway 172.16.20.1
Or via cfgassist:
a. Enter cfgassist on the command line.
b. Select the VIOS TCP/IP Configuration menu item.
c. Select en5, which is the SEA interface created in step 3, and press Enter.
d. Enter the TCP/IP details from Example 3-6.
Note: Interface en5 is the SEA adapter created in step 3 on page 29. Alternatively, an additional virtual adapter may be created for the VIOS remote connection, or another physical adapter may be used (it will need to be cabled) for the TCP/IP remote connection. TCP and UDP port 657 must be open between the HMC and the VIOS. This is a requirement for DLPAR (using the RMC protocol).
6. Update the Fibre Channel adapter SCSI protocol device attributes listed in step 5 to enable dynamic tracking and fast failover; enter:
chdev -dev fscsi0 -attr dyntrk=yes fc_err_recov=fast_fail
chdev -dev fscsi1 -attr dyntrk=yes fc_err_recov=fast_fail
Note: If the Fibre Channel adapter SCSI protocol device is busy, append the flag -perm to the command to update the VIOS database only. The attributes are not applied to the device until the VIOS is rebooted. For example:
chdev -dev fscsi0 -attr dyntrk=yes fc_err_recov=fast_fail -perm
7. To configure the disks on the VIOS, enter cfgdev.
8. To list the disks on the VIOS partition and show the type of each disk, enter lsdev -type disk. In Example 3-8, VIOS1 lists 2 internal SAS disks and 6 DS4800 disks.
Example 3-8 List disks with their types on VIOS1
$ lsdev -type disk
name    status     description
hdisk0  Available  SAS Disk Drive
hdisk1  Available  SAS Disk Drive
hdisk2  Available  MPIO DS4800 Disk
hdisk3  Available  MPIO DS4800 Disk
hdisk4  Available  MPIO DS4800 Disk
hdisk5  Available  MPIO DS4800 Disk
hdisk6  Available  MPIO DS4800 Disk
hdisk7  Available  MPIO DS4800 Disk
9. To confirm the SAN LUN ID on VIOS1, enter lsdev -dev hdiskX -attr | grep -i -E "reserve|unique_id" for each of the disks listed in step 8 on page 30 until the correct disk is found with the LUN ID provided by the SAN administrator. Example 3-9 shows the hdisk that the SAN administrator had assigned. Note also that the SCSI reserve policy has been set to single_path; this setting will need to be updated to remove the SCSI reserve locks. The LUN ID is embedded in the unique_id string for hdisk6, beginning with the 6th character.
Example 3-9 Disk attributes of hdisk6
$ lsdev -dev hdisk6 -attr | grep -E "unique_id|reserve"
reserve_policy  single_path                                                Reserve Policy            True
unique_id       3E213600A0B8000114632000092784EC50F0B0F1815 FAStT03IBMfcp  Unique device identifier  False
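The "beginning with the 6th character" rule can be applied mechanically when matching many disks against a SAN administrator's LUN list. The following worked example runs in any POSIX shell against a captured copy of the hdisk6 unique_id string from Example 3-9.

```shell
# The LUN ID starts at character 6 of the unique_id value; the first five
# characters are a length/format prefix.
unique_id='3E213600A0B8000114632000092784EC50F0B0F1815'
lun_id=$(printf '%s' "$unique_id" | cut -c6-)
echo "$lun_id"    # prints 600A0B8000114632000092784EC50F0B0F1815
```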
Information: Disks using EMC PowerPath, IBM SDDPCM, and IBM SDD drivers also have their LUN IDs embedded in the unique_id string. Use their supplied commands to show the LUN IDs in a more readable format; refer to their respective manuals to obtain the disks complete with LUN IDs. EMC disks appear with hdiskpowerX notation and SDD disks appear with vpathX notation. Use these disk notations with the lsdev command sequence instead of hdisk. Other disk subsystems may use different fields to set their SCSI reserve locks; use the lsdev command sequence without the pipe to grep, that is, lsdev -dev sampledisk -attr.
10. To deactivate the SCSI reserve lock on the disk, in this case hdisk6, enter:
chdev -dev hdisk6 -attr reserve_policy=no_policy
Note: Ignore this step if the disks are using SDDPCM and SDD drivers, as the SCSI reserve locks are already deactivated. For EMC disks and disks using native MPIO it is necessary to deactivate the SCSI reserve locks. The SCSI reserve lock attribute differs among disk subsystems. The IBM System Storage SCSI reserve lock attribute is reserve_policy, as displayed in Example 3-9; the attribute on an EMC disk subsystem is reserve_lock. If you are unsure of the allowable value to use to deactivate the SCSI reserve lock, the following command provides a list of allowable values, in this case:
lsdev -dev hdisk6 -range reserve_policy
11. To determine the virtual adapter name of the virtual SCSI adapter created in step 14 on page 23, enter:
lsdev -vpd | grep "Virtual SCSI"
In Example 3-10, the virtual SCSI adapter with server adapter ID C101 is vhost0, which is used in the next step.
Example 3-10 List of virtual SCSI devices
$ lsdev -vpd | grep "Virtual SCSI"
vhost0  U8233.E8B.061AB2P-V1-C101  Virtual SCSI Server Adapter
12. The MPIO setup is used to map whole LUNs to client OS partitions. To map hdisk6 to CLIENT1, enter:
mkvdev -vdev hdisk6 -vadapter vhost0
where:
   - hdisk6 is the disk found in step 9 on page 31.
   - vhost0 is the virtual server SCSI adapter with adapter ID 101 created for CLIENT1, found in step 11 on page 31.
In Example 3-11, the Virtual Target Device (VTD) vtscsi0 is created.
Example 3-11 Create disk mapping to client partition $ mkvdev -vdev hdisk6 -vadapter vhost0 vtscsi0 Available
13. To check the devices mapped to vhost0, enter:
lsmap -vadapter vhost0
In Example 3-12, the vhost0 virtual SCSI adapter shows one disk mapped, where hdisk6 is mapped to the vtscsi0 device.
Example 3-12 vhost0 disk mapping
$ lsmap -vadapter vhost0
SVSA            Physloc                          Client Partition ID
--------------- -------------------------------- -------------------
vhost0          U8233.E8B.061AB2P-V1-C101        0x0000000a

VTD             vtscsi0
Status          Available
LUN             0x8100000000000000
Backing device  hdisk6
Physloc         U5802.001.0087356-P1-C2-T1-W202200A0B811A662-L5000000000000
Mirrored        false
Note: Each VIOS is configured with a server virtual SCSI adapter for the client partition. If you are planning to update from a single VIOS setup to a dual VIOS setup, DLPAR operations can be utilized to avoid interfering with the operations of a deployed VIOS or client partition.
The dual VIOS setup expands on the single VIOS setup. The following steps for creating a dual VIOS setup use the adapter ID allocations in Table 1-2 on page 5 for VIOS1 and Table 1-3 on page 6 for VIOS2. If you have completed the single VIOS setup and are looking to change to a dual VIOS setup, keep the adapter IDs as consistent as possible. You may use your own adapter ID scheme or the default adapter ID allocations.
2. To save the DLPAR updates to VIOS1 to its profile, click the button that appears at the end of your client partition to open the popup menu and click Configuration → Save Current Configuration.
Note: DLPAR relies on the RMC connectivity between the HMC and VIOS1. If DLPAR fails, use steps a through i on page 35 as a reference to create the virtual Ethernet adapter for SEA failover.
3. In the new window, Save Partition Configuration, click OK. Go to step 5 to create the VIOS2 partition profile.
4. To create the VIOS1 partition profile, follow the steps in section 3.1.1, Create VIOS Partition Profile on page 20.
5. To create the VIOS2 partition profile, follow the steps in section 3.1.1, Create VIOS Partition Profile on page 20.
Important changes for VIOS2:
   - The priority of the virtual Ethernet adapters used for bridging must be different. By default VIOS1 is created with a priority of 1. For VIOS2, when the virtual Ethernet adapter used for bridging is being created in step 12 on page 22, set the priority to 2.
   - For VIOS1 and VIOS2, ensure the virtual Ethernet adapters used for SEA failover in step 13 on page 23 are created with the same Port VLAN ID. This is essential for inter-VIOS communication.
   - For VIOS2, ensure the virtual SCSI adapter in step 14 on page 23 is created with a different client adapter ID than VIOS1:
      Adapter: 101
      Client partition: 10
      Client adapter ID: 22
Important: For virtual Ethernet adapters to be used as SEA failover adapters:
   - The Port Virtual Ethernet ID (also known as the VLAN ID) must be consistent for VIOS1 and VIOS2.
   - The Port Virtual Ethernet ID must not be a known VLAN ID on the network.
   - The virtual Ethernet adapter must not be configured for IEEE 802.1Q.
   - The virtual Ethernet adapter must not be bridged to a physical adapter.
6. VIOS1 and VIOS2 should appear in your system server listing. Their partition profiles are ready to use for the installation process.
Alternatively, if the client partition profile already exists and you wish to configure an additional virtual SCSI adapter, this can be done in two ways:
Add the virtual SCSI adapter via DLPAR and then save the current configuration, overwriting the current profile (the client partition must be running and have RMC connectivity to the HMC):
a. Select your client partition checkbox.
b. Click the button that appears at the end of your client partition to open the popup menu and click Dynamic Logical Partitioning → Virtual Adapters.
c. Click Actions → Create Virtual Adapter → SCSI Adapter.
d. In the Create Virtual SCSI Adapter window, select the Only selected client partition can connect checkbox. Update the following:
   Adapter: 22
   Server partition: 2
   Server adapter ID: 101
e. Click OK to accept the settings.
f. Click OK to dynamically add the virtual SCSI adapter.
g. Click the button that appears at the end of your client partition to open the popup menu and click Configuration → Save Current Configuration.
h. In the new window, Save Partition Configuration, click OK.
i. Click Yes to confirm the save.
Update the client partition profile to add the additional virtual SCSI adapter; then shut down the client partition (if it is running) and activate the client partition.
Note: Shutting down the client partition and then activating it causes the client partition to re-read its profile. A partition reboot does not re-read the partition profile.
a. Click the button that appears at the end of your client partition to open the popup menu and click Configuration → Manage Profiles.
b. Click the profile to update.
c. Click the Virtual Adapters tab.
d. Click Actions → Create Virtual Adapter → SCSI Adapter.
e. In the next window, select the Only selected client partition can connect checkbox. Update the following:
   Adapter: 22
   Server partition: 2
   Server adapter ID: 101
f. Click OK to accept the settings.
g. Click OK to save the profile.
h. Run the shutdown command on the client partition.
i. After the client partition appears with the Not Activated state, activate the client partition.
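The DLPAR add in the first method can also be driven from the HMC command line with chhwres. This is a sketch only: <managed-system> is a placeholder, and the partition name CLIENT1 is assumed; the slot and remote-slot values mirror the GUI steps above.

```
$ chhwres -r virtualio -m <managed-system> -o a -p CLIENT1 --rsubtype scsi \
  -s 22 -a "adapter_type=client,remote_lpar_name=VIOS2,remote_slot_num=101"
```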
4. To query which ent device is the SEA adapter, enter lsdev -type sea. Example 3-14 shows the output for VIOS1.
Example 3-14 List the SEA adapters configured on VIOS1
$ lsdev -type sea
name  status     description
ent5  Available  Shared Ethernet Adapter
5. To update the SEA adapter to add SEA failover functionality, enter:
chdev -dev ent5 -attr ctl_chan=ent6 ha_mode=auto
where:
   - ent5 is the SEA adapter in step 4.
   - ent6 is the SEA failover virtual Ethernet adapter in Example 3-13.
6. To configure the VIOS2 partition, continue with step 7.
7. Log on to the VIOS2 terminal window.
8. To list the Ethernet devices configured on the VIOS, showing the logical name relationship to the physical device details, run lsdev -vpd | grep ent; see Example 3-15.
Example 3-15 Listing of Ethernet devices on the VIOS
$ lsdev -vpd | grep ent
ent5  U8233.E8B.061AB2P-V2-C3-T1  Virtual I/O Ethernet Adapter (l-lan)
ent4  U8233.E8B.061AB2P-V2-C2-T1  Virtual I/O Ethernet Adapter (l-lan)
ent0  U78A0.001.DNWHZS4-P1-C3-T1  4-Port 10/100/1000 Base-TX PCI-Express
ent1  U78A0.001.DNWHZS4-P1-C3-T2  4-Port 10/100/1000 Base-TX PCI-Express
ent2  U78A0.001.DNWHZS4-P1-C3-T3  4-Port 10/100/1000 Base-TX PCI-Express
ent3  U78A0.001.DNWHZS4-P1-C3-T4  4-Port 10/100/1000 Base-TX PCI-Express
In Example 3-15, ent0 (U78A0.001.DNWHZS4-P1-C3-T1) is the cabled physical Ethernet adapter port. The U78A0.001.DNWHZS4-P1-C3 Ethernet adapter is the adapter selected in Figure 3-2 on page 22. Adapter ent4 (U8233.E8B.061AB2P-V2-C2-T1) is the virtual Ethernet adapter shown in Figure 3-5 on page 33. Adapter ent5 (U8233.E8B.061AB2P-V2-C3-T1) is the virtual Ethernet adapter also shown in Figure 3-5 on page 33.
9. Create the SEA adapter, which bridges the physical adapter and the virtual adapter:
mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1 -attr ctl_chan=ent5 ha_mode=auto
where:
ent0 is the physical adapter found in step 8.
ent4 is the bridging virtual adapter found in step 8.
1 is the Port VLAN ID of ent4.
ent5 is the SEA failover virtual adapter found in step 8.
In Example 3-16, the SEA virtual network devices are created:
ent6 is an Ethernet network adapter device.
en6 is a standard Ethernet network interface where TCP/IP addresses are assigned.
et6 is an IEEE 802.3 Ethernet network interface.
Example 3-16 Create SEA interface
$ mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1 -attr ctl_chan=ent5 ha_mode=auto
ent6 Available
en6
et6
10.Configure the TCP/IP connection for the VIOS with details provided by the network administrator. Example 3-17 is a sample provided for this exercise.
Example 3-17 Sample network parameters
The following details are provided:
network ip address: 172.16.22.15
network subnet: 255.255.252.0
network gateway: 172.16.20.1
mktcpip -hostname vios1 -interface en6 -inetaddr 172.16.22.15 -netmask 255.255.252.0 -gateway 172.16.20.1
or via cfgassist:
a. cfgassist → VIOS TCP/IP Configuration
b. Select en6, the SEA interface created in step 9, and press Enter.
c. Enter the TCP/IP details from Example 3-17.
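Before committing the mktcpip values, it can help to verify that the gateway actually lies inside the network defined by the address and netmask. This is a quick, VIOS-independent sanity check using POSIX shell arithmetic and the values from Example 3-17; it is illustrative helper logic, not a VIOS command:

```shell
# Values from Example 3-17
ip="172.16.22.15"; gw="172.16.20.1"; mask="255.255.252.0"
# Split each dotted quad into octets
IFS=. read -r i1 i2 i3 i4 <<EOF
$ip
EOF
IFS=. read -r g1 g2 g3 g4 <<EOF
$gw
EOF
IFS=. read -r m1 m2 m3 m4 <<EOF
$mask
EOF
# AND each octet with the mask octet to get the network address
ip_net="$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
gw_net="$((g1 & m1)).$((g2 & m2)).$((g3 & m3)).$((g4 & m4))"
[ "$ip_net" = "$gw_net" ] && echo "gateway is on the local network $ip_net"
```

With a 255.255.252.0 mask, both 172.16.22.15 and 172.16.20.1 reduce to the 172.16.20.0 network, so the gateway is reachable from the VIOS interface.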
fscsi0  Available  FC SCSI I/O Controller Protocol Device
fscsi1  Available  FC SCSI I/O Controller Protocol Device
12.To update the Fibre Channel adapter SCSI protocol device attributes listed in the previous step to enable dynamic tracking and fast failover, enter:
chdev -dev fscsi0 -attr dyntrk=yes fc_err_recov=fast_fail
chdev -dev fscsi1 -attr dyntrk=yes fc_err_recov=fast_fail
Note: If the Fibre Channel adapter SCSI protocol device is busy, append the -perm flag to the command to update the VIOS database only. The attributes are not applied to the device until the VIOS is rebooted. For example:
chdev -dev fscsi0 -attr dyntrk=yes fc_err_recov=fast_fail -perm
13.To configure the disks on the VIOS, enter cfgdev.
14.To list the disks on the VIOS partition and show the type of each disk, enter lsdev -type disk. In Example 3-19, VIOS1 lists 2 internal SAS disks and 6 DS4800 disks.
Example 3-19 List disks with their type on VIOS1
$ lsdev -type disk
name    status     description
hdisk0  Available  SAS Disk Drive
hdisk1  Available  SAS Disk Drive
hdisk2  Available  MPIO DS4800 Disk
hdisk3  Available  MPIO DS4800 Disk
hdisk4  Available  MPIO DS4800 Disk
hdisk5  Available  MPIO DS4800 Disk
hdisk6  Available  MPIO DS4800 Disk
hdisk7  Available  MPIO DS4800 Disk
15.To confirm the SAN LUN ID on VIOS1, enter lsdev -dev hdiskX -attr | grep -i -E "reserve|unique_id" for each of the disks listed in step 14 until you find the disk with the LUN ID provided by the SAN administrator. Example 3-20 shows the hdisk that the SAN administrator assigned. Note also that the SCSI reserve policy is set to single_path; this setting needs to be updated so that no SCSI reserve locks are held. The LUN ID is embedded in the unique_id string for hdisk6.
Example 3-20 Disk attributes of hdisk6
$ lsdev -dev hdisk6 -attr | grep -E "unique_id|reserve"
reserve_policy  single_path                                                 Reserve Policy            True
unique_id       3E213600A0B8000114632000092784EC50F0B0F1815 FAStT03IBMfcp  Unique device identifier  False
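The LUN ID from the SAN administrator is usually handed over in colon-separated form, while the unique_id string embeds it as a bare uppercase hex run. Stripping the colons and uppercasing makes the match mechanical; a sketch using the values from Example 3-20 (the colon-separated form shown here is an assumed rendering of that LUN ID):

```shell
# LUN ID in the colon-separated form typically given by a SAN administrator
lun_id="60:0a:0b:80:00:11:46:32:00:00:92:78:4e:c5:0f:0b"
# unique_id string reported for hdisk6 in Example 3-20
unique_id="3E213600A0B8000114632000092784EC50F0B0F1815 FAStT03IBMfcp"
# Normalize: drop the colons, uppercase the hex digits
needle=$(printf '%s' "$lun_id" | tr -d ':' | tr 'abcdef' 'ABCDEF')
match=no
case "$unique_id" in *"$needle"*) match=yes ;; esac
echo "hdisk6 matches LUN: $match"
```

The same comparison works for any hdisk: substitute the unique_id value reported by lsdev for the disk being checked.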
Information: Disks using EMC PowerPath, IBM SDDPCM, and IBM SDD drivers also have their LUN IDs embedded in the unique_id string. Use their supplied commands to show the LUN IDs in a more readable format; refer to their respective manuals. EMC disks appear with hdiskpowerX notation and SDD disks appear with vpathX notation. Use those disk notations with the lsdev command sequence instead of hdisk. Other disk subsystems may use different fields to set their SCSI reserve locks. Use the lsdev command sequence without the pipe to grep, that is, lsdev -dev sampledisk -attr.
16.To deactivate the SCSI reserve lock on the disk (in this case hdisk6), enter:
chdev -dev hdisk6 -attr reserve_policy=no_policy
Note: Skip this step if the disks are using SDDPCM or SDD drivers because their SCSI reserve locks are already deactivated. For EMC disks and disks using native MPIO, it is necessary to deactivate the SCSI reserve locks. The SCSI reserve lock attribute differs among disk subsystems. The IBM System Storage SCSI reserve lock attribute is reserve_policy, as displayed in Example 3-20. The attribute on the EMC disk subsystem is reserve_lock. If you are unsure of the allowable value to use to deactivate the SCSI reserve lock, the following command provides a list of allowable values:
lsdev -dev hdisk6 -range reserve_policy
17.To determine the device name of the virtual SCSI adapter created in step 14 on page 23, run:
lsdev -vpd | grep "Virtual SCSI"
In Example 3-21, the virtual SCSI adapter with server adapter ID C101 is vhost0, which is used in the next step.
Example 3-21 List of virtual SCSI devices
$ lsdev -vpd | grep "Virtual SCSI"
vhost0  U8233.E8B.061AB2P-V1-C101  Virtual SCSI Server Adapter
18.The MPIO setup is used to map whole LUNs to client OS partitions. To map hdisk6 to CLIENT1, enter:
mkvdev -vdev hdisk6 -vadapter vhost0
where:
hdisk6 is the disk found in step 15.
vhost0 is the virtual server SCSI adapter found in step 17.
In Example 3-22, the Virtual Target Device (VTD) vtscsi0 is created.
Example 3-22 Create disk mapping to client partition
$ mkvdev -vdev hdisk6 -vadapter vhost0
vtscsi0 Available
19.To check the devices mapped to vhost0, enter:
lsmap -vadapter vhost0
In Example 3-23, the vhost0 virtual SCSI adapter shows one disk mapped: hdisk6 is mapped to the vtscsi0 VTD device.
Example 3-23 Disk mapping for vhost0
$ lsmap -vadapter vhost0
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0          U8233.E8B.061AB2P-V1-C101                    0x0000000a

VTD             vtscsi0
Status          Available
LUN             0x8100000000000000
Backing device  hdisk6
Physloc         U5802.001.0087356-P1-C2-T1-W202200A0B811A662-L5000000000000
Mirrored        false
20.Repeat step 7 on page 36 through step 19 on page 39 to configure the VIOS2 partition. For step 7, ensure that you log on to the VIOS2 terminal window.
3. Click Actions → Create Virtual Adapter → Fibre Channel Adapter.
4. In the Create Virtual Fibre Channel Adapter window, enter the following:
Adapter: 102
Client partition: 10
Client adapter ID: 12
Click OK to accept the settings.
5. Click OK to dynamically add the virtual Fibre Channel adapter.
6. Click the button which appears at the end of your client partition to open the popup menu and click Configuration → Save Current Configuration, as shown in Figure 3-7 on page 42.
7. In the Save Partition Configuration window, click OK.
8. Click Yes to confirm the save, overwriting the existing profile.
14.Click the button which appears at the end of your client partition to open the popup menu and click Configuration → Save Current Configuration.
15.In the Save Partition Configuration window, click OK.
16.Click Yes to confirm the save.
19.To configure the virtual Fibre Channel adapter added via DLPAR in step 4 on page 41, enter the cfgdev command.
20.To list the virtual Fibre Channel adapters, enter:
lsdev -vpd | grep vfchost
In Example 3-25 on page 44, one virtual Fibre Channel adapter, vfchost0, is listed with an adapter slot ID of 102 (C102); it was created in step 4 on page 41.
Example 3-25 List of virtual Fibre Channel adapters on the VIOS
$ lsdev -vpd | grep vfchost
vfchost0  U8233.E8B.061AB2P-V1-C102  Virtual FC Server Adapter
21.To map the client virtual Fibre Channel adapter to the physical Fibre Channel adapter port zoned for NPIV, enter:
vfcmap -vadapter vfchost0 -fcp fcs1
Note: You can map multiple client virtual Fibre Channel adapters to a physical Fibre Channel adapter port. Up to 64 client virtual Fibre Channel adapters can be active at one time per physical Fibre Channel adapter port.
22.To verify the virtual Fibre Channel mapping for vfchost0, enter:
lsmap -vadapter vfchost0 -npiv
or, to list all virtual Fibre Channel mappings on the VIOS, enter:
lsmap -all -npiv
23.If you have a dual VIOS setup, repeat step 1 on page 40 through step 21 for VIOS2. Ensure that the client partition adapter IDs are unique.
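The 64-adapter limit in the note above applies per physical port, so when many clients share a port it is worth counting the mappings. The sketch below tallies a captured mapping list by physical port; the two sample lines only mimic the name, location-code, and physical-port fields of lsmap -all -npiv output and are not its exact layout:

```shell
# Captured mapping lines: vfchost name, location code, physical port (illustrative)
sample='vfchost0 U8233.E8B.061AB2P-V1-C102 fcs1
vfchost1 U8233.E8B.061AB2P-V1-C103 fcs1'
# Count client virtual Fibre Channel adapters per physical port
counts=$(printf '%s\n' "$sample" | awk '{n[$NF]++} END {for (p in n) print p, n[p]}')
echo "$counts"
```

Each per-port count should stay at or below the 64 active client adapters that a physical Fibre Channel port supports.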
3.5 Summary
In this chapter you created either a single or dual VIOS environment for client partitions. For setting up the virtual network, we used default options for the virtual Ethernet adapter IDs and VLAN IDs; for virtual SCSI and virtual Fibre Channel, we used specific adapter IDs. The use of specific adapter IDs or default adapter IDs is entirely your choice. Chapter 5, Advanced Configuration on page 75 discusses the advantages of using specific adapter IDs under your own adapter ID numbering scheme. That chapter also discusses other advanced configurations, such as setting up redundant adapters for your network and storage.
4815ch04.fm
Chapter 4. Setting up using the SDMC
Virtual Server is the SDMC terminology for a virtualized guest server running its own operating system.
Follow these steps to create the Virtual Server for VIOS1:
1. On the Name screen, enter the following values and then click Next:
Virtual Server name: VIOS1 (you can enter any name you want)
Environment: VIOS (from the pull-down menu)
Leave the default values for the other fields.
2. On the Memory screen, select the Dedicated checkbox for Memory Mode (if present) and enter an appropriate amount of memory in the Assigned memory field. Use 4 GB of memory. Click Next.
Note: The amount of memory your VIOS needs depends on which VIOS functions you will use. We recommend starting with 4 GB of memory and periodically monitoring the memory usage on the VIOS.
3. On the Processor screen, select Shared for Processing Mode. In the Assigned Processors field, enter 1 for a single shared processor. Click Next.
Note: We recommend starting with 1 shared processor and periodically monitoring the CPU usage on the VIOS.
4. On the Ethernet screen, expand the Virtual Ethernet part. By default the wizard creates two virtual Ethernet adapters. The first virtual Ethernet adapter uses adapter ID 2 and VLAN ID 1. The second virtual Ethernet adapter uses adapter ID 3 and VLAN ID 99; it is used for the control channel between two VIOSes in dual VIOS configurations and for the Shared Ethernet Adapter failover configuration. More details about the control channel and dual VIOS configuration for virtual Ethernet are in the IBM PowerVM Virtualization Introduction and Configuration, SG24-7940 publication.
Select the checkbox to the left of the first adapter (ID 2) and then click Edit. In the Virtual Ethernet - Modify Adapter screen, select the Use this adapter for Ethernet bridging checkbox and enter 1 in the Priority field. The explanation of the priorities can be found in the IBM PowerVM Virtualization Introduction and Configuration, SG24-7940 publication. Click OK to confirm the changes to the first Ethernet adapter. Confirm the Ethernet screen using the Next button.
5. Skip the Virtual Storage Adapter screen using the Next button. As client Virtual Servers are added and assigned storage, the console automatically creates the virtual SCSI or virtual Fibre Channel server adapters.
6. On the Physical I/O Adapters screen, select the checkboxes to the left of the Location Code and Description of the needed adapters. These are the physical adapters and controllers used later to virtualize devices to the Virtual Server for the client OS. To be able to use all the functions described in this publication, you need to select:
One SAS or SCSI disk controller (controller for internal disk drives)
One Ethernet adapter (for connection to the LAN)
One Fibre Channel adapter (for connection to the SAN and a virtual Fibre Channel configuration)
In our case we selected these physical adapters:
U78A0.001.DNWKF81-P1-T9    RAID Controller
U5802.001.RCH8497-P1-C7    Quad 10/100/1000 Base-TX PCI-Express Adapter
U5802.001.RCH8497-P1-C3    Fibre Channel Serial Bus
The RAID Controller selected also has the physical CD/DVD drive connected.
Recommendation: We recommend that you check the physical location codes of your adapters to be sure you use the correct adapters for your Virtual Server. Sometimes the description can be misleading (for example, the PCI-to-PCI bridge can be the Ethernet adapter).
Confirm the Physical I/O Adapters screen using the Next button.
7. Verify the information on the Summary screen and confirm the creation of the Virtual Server using the Finish button.
To create the Virtual Server for VIOS2, follow the same steps and change these values:
The name for the Virtual Server in step 1 to VIOS2 (make sure that you choose Environment: VIOS).
In step 4, change the priority of the virtual Ethernet adapter to 2. The Use this adapter for Ethernet bridging checkbox must also be selected.
Select different adapters and a different controller in step 6.
We selected these adapters:
U5802.001.RCH8497-P1-C2    PCI-E SAS Controller
U5802.001.RCH8497-P1-C6    Quad 10/100/1000 Base-TX PCI-Express Adapter
U5802.001.RCH8497-P1-C5    Fibre Channel Serial Bus
A more detailed description of the possible options in the Virtual Server creation wizard is in the Redbook IBM Systems Director Management Console: Introduction and Overview, SG24-7860.
Install VIOS using the IBM Network Installation Manager (NIM). To install VIOS1, follow the steps in chapter 4.2.2, Install VIOS on page 57. To install VIOS2, follow the steps in chapter 4.3.2, Install second VIOS using NIM on page 64. You can also use the installios command from the SDMC command line to install both VIOS1 and VIOS2.
Select the correct Ethernet port from the listed ports, that is, the port to be used for the LAN connection that has the Ethernet cable plugged in. The interface device name of this physical adapter port is used in the next step (in our case it is en0).
2. Use the cfgassist command and select VIOS TCP/IP Configuration. Then select the appropriate interface device name from the previous step.
3. On the VIOS TCP/IP Configuration screen, enter the TCP/IP configuration values for VIOS connectivity. For these values, consult your network administrator. See Example 2-3 on page 12 for sample TCP/IP configuration values. After entering the needed values for the TCP/IP configuration, press Enter. You should see the output Command: OK. Then press F10 (or press the ESC and 0 sequence).
To configure TCP/IP on VIOS2, follow the same steps on the VIOS2 console, properly changing the IP configuration in step 3.
Note: From this point on you can use ssh to connect to VIOS1 and VIOS2.
The SDMC automatically creates the SEA adapters on both VIOS1 and VIOS2. The SDMC also configures the control channel as a part of this step. The virtual Ethernet adapter with the highest VLAN ID is used for the SEA control channel.
3. You can confirm the created SEAs on the Virtual Network Management screen. You should see two created SEA adapters, each with a different priority, as shown in Figure 4-3.
Figure 4-3 View the Shared Ethernet Adapter from the SDMC
administrator. Once the storage administrator provisions the needed SAN LUN, map this SAN LUN over the virtual SCSI adapter to the Virtual Server for the client OS using the SDMC. To attach VIOS1 to a SAN and configure storage, follow these steps on the VIOS console:
1. To find the Fibre Channel adapters owned by VIOS1, enter the lsdev -vpd | grep fcs command. The number of Fibre Channel adapters can vary. You will receive a list similar to this:
fcs0  U5802.001.RCH8497-P1-C3-T1  8Gb PCI Express Dual Port FC Adapter
fcs1  U5802.001.RCH8497-P1-C3-T2  8Gb PCI Express Dual Port FC Adapter
In our case we used the Fibre Channel port fcs0 for LUN masking the SAN LUNs for the installation device of the client operating system.
2. Find the World Wide Port Name (WWPN) address for the fcs0 device using the lsdev -dev fcs0 -vpd | grep Address command. Your output will be similar to this:
Network Address.............10000000C9E3AB56
3. Repeat steps 1 and 2 on VIOS2.
4. Provide the location codes and the WWPN addresses from the previous steps for both VIOS1 and VIOS2 to your storage administrator, who will provision the necessary SAN LUN. The storage administrator should configure the LUN masking so that VIOS1 and VIOS2 both see the same SAN LUN, and should also give you the SAN LUN ID of the disk for the client OS installation. For this exercise, the SAN administrator allocated a disk with LUN ID 60:0a:0b:80:00:11:46:32:00:00:92:75:4e:c5:0e:78 and size 25 GB.
5. After the storage administrator provisions the storage, run the cfgdev command on the VIOS1 and VIOS2 command lines to discover the new devices.
Before the SAN LUN can be virtualized and provisioned to the Virtual Server for the client OS, you need to change the behavior of locking the SCSI reservation of the physical disk (here, the SAN LUN). You don't want the VIOS to lock the SCSI reservation (to be prepared for a dual VIOS configuration). You need to do the following steps on both VIOS1 and VIOS2.
To change the behavior of locking SCSI reservations, follow these steps:
1. Log on to the VIOS console and list the physical disks attached to your VIOS using the lsdev -type disk command, as shown in Example 4-2.
Example 4-2 Listing of physical disk devices on VIOS
$ lsdev -type disk
hdisk0  Available  SAS Disk Drive
hdisk1  Available  SAS Disk Drive
hdisk2  Available  SAS Disk Drive
hdisk3  Available  SAS Disk Drive
hdisk4  Available  IBM MPIO DS4800 Disk
hdisk5  Available  IBM MPIO DS4800 Disk
hdisk6  Available  IBM MPIO DS4800 Disk
hdisk7  Available  IBM MPIO DS4800 Disk
hdisk8  Available  IBM MPIO DS4800 Disk
hdisk9  Available  IBM MPIO DS4800 Disk
In the output you can see that there are four internal disks (SAS hdisk0 to hdisk3) and six external disks from the IBM DS4800 (MPIO hdisk4 to hdisk9).
2. To confirm the SAN LUN ID on the VIOS, execute the lsdev -dev hdisk4 -attr | grep unique_id command. Example 4-3 on page 52 shows the output with the highlighted LUN ID.
Example 4-3 unique_id attribute of hdisk4 (the LUN ID is embedded in the unique_id string, which ends with FAStT03IBMfcp PCM)
Repeat the above command to find the physical disk with the correct LUN ID that you received from the storage administrator. The device name for the provisioned external disk used in the next steps is hdisk4.
3. To change the behavior of locking SCSI reservations, use the chdev -dev hdisk4 -attr reserve_policy=no_reserve command.
You need to do the previous steps on both VIOS1 and VIOS2. Make a note of the correct device names of the SAN LUN on both VIOS1 and VIOS2 and use these device names in the next chapter to virtualize them to the Virtual Server for the client OS.
Note: You may find that the names of the devices on VIOS1 are not the same as on VIOS2. The reason for this may be that VIOS2 has a different number of internal disks.
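Because the hdisk numbering can differ between the two VIOSes, the unique_id is the stable key for matching the SAN LUN across them. A sketch with illustrative name/unique_id pairs (the hex value is the LUN ID from step 4 with its colons removed; the device names are examples):

```shell
# "name unique_id-fragment" pairs as collected from each VIOS (illustrative)
vios1="hdisk4 600A0B8000114632000092754EC50E78"
vios2="hdisk6 600A0B8000114632000092754EC50E78"
# Pair the device names when the unique_id fragments match
result=$(awk -v v1="$vios1" -v v2="$vios2" 'BEGIN {
  split(v1, a); split(v2, b)
  if (a[2] == b[2]) print a[1] "<->" b[1]
}')
echo "same LUN on both VIOSes: $result"
```

In practice you would extract each pair from the lsdev -dev hdiskX -attr output on the respective VIOS before comparing.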
Note: If you use a storage subsystem from a different vendor, the reserve_policy attribute can have a different name. For example, if you use EMC PowerPath drivers to connect LUNs from an EMC storage subsystem, you need to use the reserve_lock attribute and the value no instead.
Recommendation: We recommend that you make the disk configuration of both VIOS1 and VIOS2 the same. This makes management of a dual VIOS configuration easier and less prone to administrator mistakes.
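The WWPN found in step 2 is printed as a bare 16-digit hex string, while storage administrators often expect the colon-separated form for zoning and LUN masking. A small reformatting sketch using the address shown in step 2:

```shell
# WWPN as reported by `lsdev -dev fcs0 -vpd` (from step 2)
wwpn="10000000C9E3AB56"
# Insert a colon after every two hex digits, then drop the trailing colon
pretty=$(printf '%s\n' "$wwpn" | sed 's/../&:/g; s/:$//')
echo "$pretty"
```

Either form identifies the same port; use whichever convention your SAN tooling expects.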
4. The Ethernet screen wizard by default creates two virtual Ethernet adapters. Only the first virtual Ethernet adapter (with VLAN ID 1) will be used for network connectivity. Select the checkbox to the left of the second adapter (ID 3) and then click Delete.
5. If the Storage selection screen appears, select the Yes, Automatically manage the virtual storage adapters for this Virtual Server option. You can provision the Virtual Disks, Physical Volumes, or virtualize the Fibre Channel adapters here. Select the checkbox to the left of the Physical Volumes. Click Next.
Note: The virtual Fibre Channel adapters will be configured in the next chapter.
6. In the Physical Volumes part of the screen, select the physical disk to virtualize to the Virtual Server for the client OS. These are the same disks on which you changed the SCSI reservation policy in chapter 4.1.5, Configure storage devices on page 50. You can also check the Physical Location Code column to find the correct physical disk.
Important: Make sure you select the appropriate physical disk on both VIOS1 and VIOS2.
7. On the Optical devices screen, in the Physical Optical Devices tab, select the checkbox to the left of cd0. This virtualizes the physical DVD drive to the Virtual Server for the client OS. Confirm the Optical devices screen using the Next button.
8. On the Physical I/O Adapters screen, don't select any physical I/O adapters. The client OS will be installed on the disk connected using the virtual SCSI adapter, and all other devices are virtualized.
9. Verify the information on the Summary screen and confirm the creation of the Virtual Server using the Finish button.
To create virtual Fibre Channel adapters for the Virtual Server for the client OS, log in to the SDMC environment and, from the Home page, locate the host which contains the VirtServer1 Virtual Server. Click the host name to open the Resource Explorer window. Check the checkbox to the left of the Virtual Server (VirtServer1), then click Actions → System Configuration → Manage Virtual Server.
To create virtual Fibre Channel adapters for the Virtual Server for the client OS, follow these steps:
1. From the left menu, click Storage Devices.
2. In the Fibre Channel part, click Add.
3. The Add Fibre Channel screen shows the physical Fibre Channel adapters that support N_Port ID Virtualization (NPIV). Select the physical Fibre Channel adapter that you want to virtualize to the Virtual Server for the client OS. In our case we selected the physical Fibre Channel adapter with device name fcs1 on both VIOS1 and VIOS2. Click OK.
Note: The physical Fibre Channel adapter with device name fcs0 was already used in chapter 4.1.5, Configure storage devices on page 50 to provision the SAN LUN.
4. Click Apply.
Now it is necessary to update the configuration profiles of VirtServer1, VIOS1, and VIOS2. To update the profile on the VirtServer1 Virtual Server, log on to the SDMC environment and follow these steps:
1. From the Home page, locate the host which contains the VirtServer1 Virtual Server and click the name of the host to open the Resource Explorer window.
2. Check the checkbox to the left of the Virtual Server VirtServer1, then click Actions → System Configuration → Save Current Configuration.
3. Select the Overwrite existing profile checkbox and select the OriginalProfile profile.
4. Confirm the screen using the OK button.
5. Confirm the Save Profile screen using the Yes button.
Repeat these steps for VIOS1 and VIOS2 to update their configuration profiles.
At this point you have a running Virtual Server with these virtualized configurations:
One virtual CPU from the Shared Processor Pool (this can be adjusted dynamically to meet your needs)
4 GB of memory (this can be adjusted dynamically to meet your needs)
One virtual Ethernet adapter with high-availability failover mode
Two virtual SCSI adapters for the operating system disk; this disk uses two paths, one to VIOS1 and a second to VIOS2
Two virtual Fibre Channel adapters, typically for connecting the SAN LUNs for data; each virtual Fibre Channel adapter is provided by a separate VIOS
Example 4-4 shows the devices from the Virtual Server running the AIX operating system.
Example 4-4 List of the virtual devices from AIX
# lsdev -Cc adapter
ent0    Available        Virtual I/O Ethernet Adapter (l-lan)
fcs0    Available C5-T1  Virtual Fibre Channel Client Adapter
fcs1    Available C6-T1  Virtual Fibre Channel Client Adapter
vsa0    Available        LPAR Virtual Serial Adapter
vscsi0  Available        Virtual SCSI Client Adapter
vscsi1  Available        Virtual SCSI Client Adapter
# lsdev -Cc disk
hdisk0  Available        Virtual SCSI Disk Drive
# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1
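The lspath output in Example 4-4 is what confirms the dual VIOS redundancy: the OS disk must show one Enabled path per VIOS. A sketch that checks a captured lspath listing (here the two lines from Example 4-4):

```shell
# Captured `lspath` output from the AIX client (Example 4-4)
paths='Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1'
# Flag the disk as degraded if any path is not Enabled
state=$(printf '%s\n' "$paths" | awk '$1 != "Enabled" {bad=1} END {print (bad ? "degraded" : "all paths enabled")}')
echo "hdisk0: $state"
```

If one path reports Failed or Missing after a VIOS outage, the disk still works over the surviving path, but the redundancy is gone until the path is recovered.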
Follow these steps to create the Virtual Server for VIOS1:
1. On the Name screen, enter the following values and then click Next:
Virtual Server name: VIOS1 (you can enter any name you want)
Virtual server ID: 1
We follow the naming convention from chapter 1.2, Planning on page 3 (the default is the next partition number available).
Environment: VIOS (from the pull-down menu)
2. On the Memory screen, select the Dedicated checkbox for Memory Mode (if present) and enter an appropriate amount of memory in the Assigned memory field. Use 4 GB of memory. Click Next.
Note: The amount of memory your VIOS needs depends on which VIOS functions you will use. We recommend starting with 4 GB of memory and periodically monitoring the memory usage on the VIOS.
3. On the Processor screen, select Shared for Processing Mode. In the Assigned Processors field, enter 1 for a single shared processor (from the Shared Processor Pool DefaultPool(0)). Click Next.
Note: In the background, the value of Assigned Uncapped Processing Units is 0.1 by default. We recommend starting with 1 shared processor and periodically monitoring the CPU usage on the VIOS.
4. On the Ethernet screen, expand the Virtual Ethernet part. By default the wizard creates two virtual Ethernet adapters. The first virtual Ethernet adapter uses adapter ID 2 and VLAN ID 1. The second virtual Ethernet adapter uses adapter ID 3 and VLAN ID 99; it is used for the control channel between two VIOSes in a dual VIOS configuration and is not really used in a single VIOS configuration. More details about the control channel and dual VIOS configuration for virtual Ethernet can be found in the IBM PowerVM Virtualization Introduction and Configuration, SG24-7940 publication.
5. Select the checkbox to the left of the first adapter (ID 2) and then click Edit. In the Virtual Ethernet - Modify Adapter screen, select the Use this adapter for Ethernet bridging checkbox and enter 1 in the Priority field.
The explanation of the priorities can be found in the IBM PowerVM Virtualization Introduction and Configuration, SG24-7940 publication.
Click OK to confirm the changes to the first Ethernet adapter. Confirm the Ethernet screen using the Next button.
6. On the Virtual Storage Adapter screen, enter 200 in the Maximum number of virtual adapters field. Then click Create Adapter and enter the following values:
Adapter ID: 101
Adapter type: SCSI
Connecting Virtual Server ID: 10
Connecting adapter ID: 11
Numbers for the virtual adapters and the Virtual Server ID come from the naming convention in chapter 1.2, Planning on page 3. Confirm the Create Virtual Adapter screen using OK and click Next.
7. On the Physical I/O Adapters screen, select the checkboxes to the left of the Location Code and Description of the needed adapters. To be able to use all the functions described in this publication, you need to select:
One SAS or SCSI disk controller (controller for internal disk drives)
One Ethernet adapter (for connection to the LAN)
One Fibre Channel adapter (for connection to the SAN and a virtual Fibre Channel configuration)
Note: If at any time the busy icon saying Working hangs, try to click another tab and then come back to the previous window.
In our case we selected these physical adapters:
U78A0.001.DNWKF81-P1-T9    RAID Controller
U5802.001.RCH8497-P1-C7    Quad 10/100/1000 Base-TX PCI-Express Adapter
U5802.001.RCH8497-P1-C3    Fibre Channel Serial Bus
The RAID Controller selected also has the physical CD/DVD drive connected.
Recommendation: We recommend that you check the physical location codes of your adapters to be sure you use the correct adapters for your Virtual Server. Sometimes the description can be misleading (for example, the PCI-to-PCI bridge can be the Ethernet adapter).
Confirm the Physical I/O Adapters screen using the Next button.
8. Verify the information on the Summary screen and confirm the creation of the Virtual Server using the Finish button.
A more detailed description of the possible options in the Virtual Server creation wizard is in the Redbook IBM Systems Director Management Console: Introduction and Overview, SG24-7860.
You can install VIOS using the IBM Network Installation Manager (NIM); in this chapter, however, we use the VIOS DVD media to install VIOS. Before you install VIOS, insert the VIOS installation media into the system's CD/DVD drive. To install VIOS into the previously created Virtual Server, first log on to the SDMC. From the Home page, locate the host on which the Virtual Server for VIOS was created and click its name. Select the checkbox to the left of the Virtual Server name you created (VIOS1), then click Actions → Operations → Activate → Profile. On the Activate Virtual Server: VIOS1 screen, click Advanced. Change Boot mode to SMS and click OK. Select the Open a terminal window or console session checkbox and click OK. The terminal console for VIOS1 opens. Enter your SDMC user ID and password to open the console. After the terminal console window opens, follow these steps to install VIOS:
1. If presented with options to set this as the active console, press the key indicated on the screen.
2. Enter Select Boot Options (5), Select Install/Boot Device (1), and List all Devices (7). Find the CD-ROM device in the list. You may need to scroll down using the N key. Record the number of the device and press Enter.
3. Select Normal Mode Boot (2) and Yes (1) to exit from the SMS menu.
4. Select the console number and press Enter.
5. Select the preferred language. To select English, press Enter.
6. When prompted with the Installation and Maintenance menu, select the Change/Show Installation Settings and Install (2) option to open the installation settings screen.
7. Use the Disk(s) where you want to install (1) option to select the target installation device (the target installation device is marked with >>>). Usually this is the first physical disk device, so you can leave the default value. Use the Previous Menu (99) option.
8. Use the Select Edition (5) option to choose the correct PowerVM edition.
9. Start the installation using the Install with the settings listed above (0) option.
A progress screen shows Approximate % Complete and Elapsed Time. If the installation process asks you to insert volume 2, do so and press Enter. The installation should take between 15 minutes and an hour to complete. When VIOS1 first comes up, use the padmin username to log in. VIOS asks you to change the password and accept the software maintenance terms. After you change the password, accept the license by entering the license -accept command.
2. On the Memory screen, select the Dedicated option for Memory Mode (if present) and enter an appropriate amount of memory for this Virtual Server in the Assigned memory field. Then click Next.
3. On the Processor screen, select Shared for Processing Mode. In the Assigned Processors field, enter 1 (you can change this value to reflect your needs). Click Next.
4. The Ethernet screen wizard by default creates two virtual Ethernet adapters. Only the first virtual Ethernet adapter (with VLAN ID 1) will be used for network connectivity. Select the checkbox to the left of the second adapter (ID 3) and then click Delete.
5. If the Storage selection screen appears, select the No, I want to manage the virtual storage adapters for this Virtual Server option.
6. On the Virtual Storage Adapter screen, enter 30 in the Maximum number of virtual adapters field. Then click Create Adapter and enter the following values:
Adapter ID: 11
Adapter type: SCSI
Connecting Virtual Server ID: VIOS (1)
Connecting adapter ID: 101
Confirm Create Virtual Adapter screen using the OK button. Click Next. 7. On the Physical I/O Adapters screen dont select any physical I/O adapters. Client OS will be installed on the disk connected using the virtual SCSI adapter. The virtual Fibre Channel adapters are added in chapter 4.4, Setup virtual Fibre Channel using the SDMC on page 70. Click Next. 8. Verify information on the Summary screen and confirm creation of the Virtual Server using the Finish button.
The listed ports in our case are the four ports (ent0 through ent3) of a 4-Port 10/100/1000 Base-TX PCI-Express Adapter. Select the Ethernet port that is to be used for the LAN connection and has the Ethernet cable plugged in. In our case it is the device ent0. This physical adapter port device name is used in the next steps as the value for the -sea attribute.
5. To find the device name for the virtual Ethernet adapter port with adapter ID 2, use the command: lsdev -vpd | grep ent | grep Virtual | grep C2
The output should look like this:
ent4 U8233.E8B.10F5D0P-V1-C2-T1 Virtual I/O Ethernet Adapter
Explanation: The value C2 used in the previous command is related to adapter ID 2 of the virtual Ethernet adapter created in chapter 4.2.1, Create VIOS Virtual Server on page 55. You can also find this ID and the slot number in Table 1-2 on page 5. The device name from the output is the virtual Ethernet adapter port; in our case it is the device ent4. This virtual adapter port device name is used in the next step as the value for the -vadapter and -default attributes. The virtual Ethernet adapter found in this step should use VLAN ID 1. Confirm the VLAN ID using the entstat -all ent4 | grep "Port VLAN ID" command. VLAN ID 1 is confirmed by the output: Port VLAN ID: 1
6. Create a virtual bridge (called a Shared Ethernet Adapter in VIOS terminology) using the mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 199 command as shown in Example 4-6.
Example 4-6 Creating SEA on VIOS
$ mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 199
main: 86 Recived SEA events bytes 164
ent6 Available
en6
et6
The created adapter is called a Shared Ethernet Adapter (SEA). In our case the SEA device name is ent6. Make a note of the name of the created device. This SEA device name is needed in the first part of chapter Configure second VIOS for client network on page 66 for changing the attributes of the SEA on VIOS1.
Note: The Shared Ethernet Adapter is actually bridging a virtual and a physical network using VIOS. The Shared Ethernet Adapter functionality is usually referred to as SEA.
7. Run cfgassist and select VIOS TCP/IP Configuration. Then select the appropriate interface device name from the previous step. In our case, select en6.
8. On the VIOS TCP/IP Configuration screen, enter the TCP/IP configuration values for VIOS connectivity. For these values, consult your network administrator. See Example 2-3 on page 12 for an example of TCP/IP configuration values. After entering the needed values for the TCP/IP configuration, press Enter. You should see the output Command: OK. Then press F10 (or press the ESC and 0 sequence).
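As an alternative to the cfgassist menus, the same TCP/IP configuration can be applied in one step from the VIOS command line with the mktcpip command. The following is only a sketch: the host name and address values are placeholders, so substitute the values supplied by your network administrator:

$ mktcpip -hostname vios1 -inetaddr 172.16.20.11 -interface en6 -start -netmask 255.255.252.0 -gateway 172.16.20.1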
Note: From this point on you can use ssh to connect to VIOS1.
2. Find the World Wide Port Name (WWPN) address for the fcs0 device using the lsdev -dev fcs0 -vpd | grep Address command. Your output will be similar to this: Network Address.............10000000C9E3AB56
3. Provide the location code and the WWPN address from the previous steps to your storage administrator. At this time your storage administrator will provision the necessary SAN LUN. The storage administrator should also give you the SAN LUN ID of the disk for the client OS installation. For this exercise, the SAN administrator has allocated a disk with LUN ID 60:0a:0b:80:00:11:46:32:00:00:92:75:4e:c5:0e:78 and size 25 GB.
4. After the storage administrator provisions the storage, run the cfgdev command to find any new devices.
5. To list the physical disks attached to your VIOS, run the lsdev -type disk command as shown in Example 4-7.
Example 4-7 Listing of physical disk devices on VIOS
$ lsdev -type disk
hdisk0 Available SAS Disk Drive
hdisk1 Available SAS Disk Drive
hdisk2 Available SAS Disk Drive
hdisk3 Available SAS Disk Drive
hdisk4 Available IBM MPIO DS4800
hdisk5 Available IBM MPIO DS4800
hdisk6 Available IBM MPIO DS4800
hdisk7 Available IBM MPIO DS4800
hdisk8 Available IBM MPIO DS4800
hdisk9 Available IBM MPIO DS4800
In the output you can see four internal disks (SAS hdisk0 to hdisk3) and six external disks from the IBM DS4800 (MPIO hdisk4 to hdisk9). 6. To confirm the SAN LUN ID on VIOS, execute the lsdev -dev hdisk4 -attr | grep unique_id command. Example 4-8 shows the output with highlighted LUN ID.
Example 4-8 Listing disk LUN ID on VIOS
unique_id 3E213600A0B8000114632000092754EC50E780F1815 FAStT03IBMfcp PCM
If you have more disk devices, repeat the above command to find the physical disk with the correct LUN ID that you received from the storage administrator. The device name for the external disk used in the next steps is hdisk4.
7. If you don't plan to use a dual VIOS configuration, skip this step. In this step you change the behavior of locking the SCSI reservation on the physical disk device. You don't want VIOS to lock the SCSI reservation (to be prepared for a dual VIOS configuration). To change the behavior of locking SCSI reservations, use the chdev -dev hdisk4 -attr reserve_policy=no_reserve command.
Note: If you use a storage subsystem from a different vendor, the reserve_policy attribute can have a different name. For example, if you use EMC PowerPath drivers to connect LUNs from an EMC storage subsystem, you need to use the reserve_lock attribute and the value no instead.
Now map the physical disk to the virtual SCSI adapter. This mapping can be done using the SDMC interface or the VIOS command line. Here we use the SDMC to create the mapping of the physical disk to the virtual SCSI adapter. Creation of this mapping using the VIOS command line is shown in chapter Configure second VIOS for client storage on page 68. To create the mapping:
8. Log on to the SDMC.
9. From the Home page, locate the host that contains the VIOS1 Virtual Server. Check the checkbox to the left of the host containing VIOS1. Then click Actions → System Configuration → Virtual Resources → Virtual Storage Management.
10. In VIOS/SSP, select VIOS1 and click Query.
11. Click Physical Volumes.
12. Select the checkbox to the left of the hdisk4 physical disk. This device name was found in the previous steps.
13. Click Modify assignment.
14. From the New virtual server assignment, select VirtServer1(10) and confirm using the OK button.
15. Click Physical Volumes.
16. Select the checkbox to the left of the cd0 physical DVD drive.
17. Click Modify assignment.
18.From the New virtual server assignment select VirtServer1(10) and confirm using the OK button. 19.Click Close.
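Before continuing, you can verify from the VIOS1 command line that the reservation policy change took effect. The following sketch queries only the reserve_policy attribute; after the chdev command the expected value is no_reserve:

$ lsdev -dev hdisk4 -attr reserve_policy
value

no_reserve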
An installation of a client operating system is not within the scope of this publication. To install a client operating system follow instructions provided by the installation guide for your operating system.
Click OK to confirm the changes to the first Ethernet adapter. Confirm the Ethernet screen using the Next button.
6. On the Virtual Storage Adapter screen, enter 200 in the Maximum number of virtual adapters field. Then click Create Adapter and enter the following values:
   Adapter ID:                    101
   Adapter type:                  SCSI
   Connecting Virtual Server ID:  10 (you can select VirtServer1 (10))
   Connecting adapter ID:         21
Confirm the Create Virtual Adapter screen using the OK button. Click Next.
7. On the Physical I/O Adapters screen, select the checkboxes to the left of the Location Code and Description of the needed adapters. To be able to use all the functionality described in this publication you need to select:
   One SAS or SCSI disk controller (controller for internal disk drives)
   One Ethernet adapter (Ethernet adapter for connection to the LAN)
   One Fibre Channel adapter (Fibre Channel adapter for connection to the SAN)
In our case we selected these physical adapters:
   U5802.001.RCH8497-P1-C2   PCI-E SAS Controller
   U5802.001.RCH8497-P1-C6   Quad 10/100/1000 Base-TX PCI-Express Adapter
   U5802.001.RCH8497-P1-C5   Fibre Channel Serial Bus
Confirm the Physical I/O Adapters screen using the Next button. 8. Verify information on the Summary screen and confirm the creation of the Virtual Server using the Finish button. A more detailed description of possible options can be found in the publication IBM Systems Director Management Console: Introduction and Overview, SG24-7860.
The second VIOS (VIOS2) must be added to the NIM environment as a standalone machine. Then initialize the VIOS2 installation from NIM using the prepared NIM standalone machine and NIM resources.
Important: On the Install the Base Operating System on the Standalone Clients screen, be sure that the Remain NIM client after install attribute is set to NO. This tells NIM not to set up the TCP/IP configuration on the newly installed VIOS, so you can create an SEA in this VIOS.
At this point your NIM environment should have all the needed resources prepared, and an installation of the second VIOS should be initialized from NIM. For a detailed description of how to prepare NIM to install VIOS, refer to the NIM installation and backup of the VIOS document available at:
https://www-304.ibm.com/support/docview.wss?uid=isg3T1011386#4
To install VIOS2 into the Virtual Server created in 4.3.1, Create second VIOS Virtual Server on page 63, log on to the SDMC. From the SDMC Home page, locate the host on which the Virtual Server for VIOS2 was created and click its name. Select the checkbox to the left of the Virtual Server name VIOS2, then click Actions → Operations → Activate → Profile. On the Activate Virtual Server: VIOS2 screen, click Advanced. Change Boot mode to SMS and click OK. Select the Open a terminal window or console session checkbox and click OK. The terminal console for VIOS2 opens. Enter your SDMC user ID and password to open the terminal console and then follow these steps to install the second VIOS:
1. If presented with options to set this as the active console, press the key indicated on the screen.
2. Select Setup Remote IPL (Initial Program Load) (2).
3. Select the number of the port that is connected to the Ethernet switch and subnet used during installation. In our case, select Port 1 (option 3):
3. Port 1 - IBM 4 PORT PCIe 10/10 U5802.001.RCH8497-P1-C6-T1 00145ee726a4
4. Select IPv4 - Address Format 123.231.111.222 (1), BOOTP (1), and IP Parameters (1).
Enter the TCP/IP configuration parameters. We used these parameters:
1. Client IP Address    [172.16.22.13]
2. Server IP Address    [172.16.20.40]
3. Gateway IP Address   [172.16.20.1]
4. Subnet Mask          [255.255.252.0]
The Server IP Address is the TCP/IP address of your NIM server.
5. Press the ESC key, then select Ping Test (3) and Execute Ping Test (1). You should receive this message:
.-----------------.
| Ping Success.   |
`-----------------'
6. Press any key to continue, then press the ESC key five times to go to the Main Menu. From the Main Menu, enter Select Boot Options (5), Select Install/Boot Device (1), Network (6), and BOOTP (1).
7. In the Select Device screen, select the number of the port that is connected to the switch and subnet used during installation. In our case we selected Port 1 (option 3):
3. - Port 1 - IBM 4 PORT PCIe 10/100/1000 Base-TX Adapter
(loc=U5802.001.RCH8497-P1-C6-T1 ) 8. Select Normal Mode Boot (2) and Yes (1) to leave the SMS menu and start the installation. 9. When prompted to define the System Console type a 1 and press Enter. The number you need to type may be different for your installation. 10.To confirm English during install press Enter. 11.When prompted with the Installation and Maintenance menu, select the Change/Show Installation Settings and Install (2) option to open the installation settings screen. 12.Use the Disk(s) where you want to install (1) option to select the target installation device. Usually this is the first physical disk device, so you can leave the default. After you select the target installation device use the Continue with choices indicated above (0) option to go back to the main menu. 13.Use the Select Edition (5) option to choose the PowerVM edition. 14.Start the installation using the Install with the settings listed above (0) option. A progress screen shows Approximate % Complete and Elapsed Time. This installation should take between 15 minutes and an hour to complete. When VIOS2 first comes up, use the padmin username to log in. VIOS asks you to change the password and accept software maintenance terms. After you change the password and agree to the license enter the license -accept command.
3. Change the attributes of the SEA adapter on VIOS1 using the chdev -dev ent6 -attr ha_mode=auto ctl_chan=ent5 command. In this command the -dev attribute contains the SEA device name from chapter Configure VIOS for client network on page 59. You can confirm the attributes of the SEA adapter on VIOS1 using the lsdev -dev ent6 -attr command.
In the second part of this chapter, configure the virtual Ethernet bridge (known as the SEA) on the second VIOS (VIOS2) and also configure the management TCP/IP address for the second VIOS. Follow these steps in the VIOS2 console.
Important: Make sure you are logged on to the second VIOS, in our case VIOS2.
1. To find the device names for the physical Ethernet adapter ports, use the lsdev -vpd | grep ent | grep -v Virtual command as shown in Example 4-5 on page 59.
Example 4-9 Listing of physical Ethernet adapter ports on VIOS
$ lsdev -vpd | grep ent | grep -v Virtual
  Model Implementation: Multiple Processor, PCI bus
ent0 U5802.001.RCH8497-P1-C6-T1 4-Port 10/100/1000
ent1 U5802.001.RCH8497-P1-C6-T2 4-Port 10/100/1000
ent2 U5802.001.RCH8497-P1-C6-T3 4-Port 10/100/1000
ent3 U5802.001.RCH8497-P1-C6-T4 4-Port 10/100/1000
Select one of the listed ports that is used for a LAN connection and has an Ethernet cable plugged in. In our case it is the device ent0. This physical adapter port device name is used in the next steps as the value for the -sea attribute.
2. To find the device name for the virtual port, use the lsdev -vpd | grep ent | grep C2 command. The output is:
ent4 U8233.E8B.10F5D0P-V2-C2-T1 Virtual I/O Ethernet Adapter (l-lan)
Explanation: The value C2 used in the previous command is related to the ID of the adapter created in chapter 4.3.1, Create second VIOS Virtual Server on page 63. The device name from the output is the virtual Ethernet adapter port; in our case it is the device ent4. This virtual adapter port device name is used in the next step as the value for the -vadapter and -default attributes. The virtual port device name found in this step should use VLAN ID 1. Confirm the VLAN ID using the entstat -all ent4 | grep "Port VLAN ID" command. VLAN ID 1 is confirmed by the output: Port VLAN ID: 1
3. Find the device name for the virtual port that functions as a control channel in the output of the lsdev -vpd | grep ent | grep C3 command:
ent5 U8233.E8B.10F5D0P-V2-C3-T1 Virtual I/O Ethernet Adapter (l-lan)
This is the second virtual Ethernet adapter with adapter ID 3 that was created by default in chapter 4.3.1, Create second VIOS Virtual Server on page 63. This device name is used in the next step in the ctl_chan attribute. In our case it is the device name ent5. The virtual Ethernet adapter found in this step should use VLAN ID 99. Confirm the VLAN ID using the entstat -all ent5 | grep "Port VLAN ID" command. VLAN ID 99 is confirmed by the output: Port VLAN ID: 99
Chapter 4. Setting up using the SDMC
67
4815ch04.fm
4. Create a virtual bridge (called Shared Ethernet Adapter in VIOS terminology) using the mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 199 -attr ha_mode=auto ctl_chan=ent5 command as shown in Example 4-10.
Example 4-10 Creation of SEA on second VIOS
$ mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 199 -attr ha_mode=auto ctl_chan=ent5
main: 86 Recived SEA events bytes 164
ent6 Available
en6
et6
Make a note of the name of the created SEA adapter and interface. In our case the device name of the interface is en6.
Important: A mismatched SEA and SEA failover configuration could cause broadcast storms to occur on the network and affect network stability. See the details in IBM PowerVM Virtualization Introduction and Configuration, SG24-7940.
5. Run the cfgassist command and select VIOS TCP/IP Configuration. Then select the appropriate interface device name from the previous step. In our case we selected en6.
6. On the VIOS TCP/IP Configuration screen, enter the TCP/IP configuration values for VIOS2 connectivity. For these values, consult your network administrator. See Example 2-3 on page 12 for an example of TCP/IP configuration values. After entering the needed values for the TCP/IP configuration, press Enter. You should see the output Command: OK. Then press F10 (or press the ESC and 0 sequence).
Note: From this point on you can use ssh to connect to VIOS2.
From the Ethernet point of view, the Virtual Server for the client OS is already prepared for the dual VIOS configuration. There is no need to make any changes to the Virtual Server for the client OS.
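With the SEA failover pair in place, you can check which VIOS is currently bridging client traffic. The statistics that entstat reports for a SEA include a State field; on the primary VIOS we would expect output similar to PRIMARY, and on the standby BACKUP (a sketch, run on each VIOS):

$ entstat -all ent6 | grep "State"
    State: PRIMARY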
To add the second virtual SCSI adapter to your Virtual Server VirtServer1, follow these steps:
1. Change the Virtual Server for the client OS dynamically using these steps:
a. Log in to the SDMC environment and from the Home page, locate the host that contains the VirtServer1 Virtual Server. This is the same host used in 4.2.3, Create Virtual Server for client OS on page 58. Click the host name to open the Resource Explorer window. Check the checkbox to the left of the Virtual Server (VirtServer1), then click Actions → System Configuration → Manage Virtual Server.
b. Click Storage Adapters in the left menu.
c. Click Add to open the Create Virtual Storage Adapter window and enter the following:
   Adapter ID:                  21
   Adapter type:                SCSI
   Connecting virtual server:   VIOS2(2)
   Connecting adapter ID:       101
Confirm the window with the OK button.
d. Click Apply to dynamically add the virtual SCSI adapter to the Virtual Server.
2. Now it is necessary to update the configuration profile of VirtServer1. To update the profile of the VirtServer1 Virtual Server:
a. Log on to the SDMC environment.
b. From the Home page, locate the host that contains the VirtServer1 Virtual Server and click the name of the host to open the Resource Explorer window. Check the checkbox to the left of the Virtual Server VirtServer1.
c. Click Actions → System Configuration → Save Current Configuration.
d. Select the Overwrite existing profile checkbox and select the OriginalProfile profile.
e. Confirm the screen using the OK button.
f. Confirm the Save Profile screen using the Yes button.
Now configure the second VIOS (VIOS2) to provision the disk to the Virtual Server for the client OS. To attach the second VIOS to a SAN and configure the storage, follow these steps in the VIOS2 console:
1. Provide the Fibre Channel card location codes and their World Wide Port Name (WWPN) addresses to your storage administrator. The steps to find location codes and WWPN addresses are described in chapter Configure VIOS for client storage on page 61. At this time your storage administrator should provide you the same SAN LUN (and its LUN ID) that was provisioned and used in chapter Configure VIOS for client storage on page 61.
2. After the storage administrator completes the provisioning, run the cfgdev command to find new devices.
3. To list the physical disks attached to your VIOS, run the lsdev -type disk command. Example 4-11 shows our system's output from the lsdev command.
Example 4-11 Listing physical disks on VIOS
$ lsdev -type disk
hdisk0  Available SAS Disk Drive
hdisk1  Available SAS Disk Drive
hdisk2  Available SAS Disk Drive
hdisk3  Available SAS Disk Drive
hdisk4  Available SAS Disk Drive
hdisk5  Available SAS Disk Drive
hdisk6  Available IBM MPIO DS4800
hdisk7  Available IBM MPIO DS4800
hdisk8  Available IBM MPIO DS4800
hdisk9  Available IBM MPIO DS4800
hdisk10 Available IBM MPIO DS4800
hdisk11 Available IBM MPIO DS4800
In the output you can see six internal disks and six external disks from the IBM DS4800 storage subsystem. Make sure that you find the correct physical disk device names as explained in chapter Configure VIOS for client storage on page 61. In our case the physical disk with LUN ID 60:0a:0b:80:00:11:46:32:00:00:92:75:4e:c5:0e:78 has the device name hdisk6. This device name will be used in the next steps.
Note: The names of the devices on VIOS1 are not the same as on VIOS2. The reason is that VIOS2 has more internal disks, so the external disk gets a higher device number.
Recommendation: We recommend you make the disk configuration of VIOS1 and VIOS2 the same. This makes management of a dual VIOS configuration easier and less prone to administrator mistakes.
4. To change the behavior of locking SCSI reservations, use the chdev -dev hdisk6 -attr reserve_policy=no_reserve command.
5. To find the device name for the virtual adapter connected to the Virtual Server for the client OS, run the lsdev -vpd | grep vhost | grep C101 command. C101 is the slot number from Table 1-3 on page 6. In our case the output is:
vhost0 U8233.E8B.10F5D0P-V2-C101 Virtual SCSI Server Adapter
The device name for the virtual adapter will be used in the next step. In our case it is vhost0.
6. Map the external disk to the Virtual Server for the client OS using the mkvdev -vdev hdisk6 -vadapter vhost0 command.
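The disk mapping can then be verified with the lsmap command. The following output is only a sketch: the virtual target device name (vtscsi0), the LUN number, the location code, and the partition ID shown are illustrative and can differ on your system, but the Backing device line should show hdisk6:

$ lsmap -vadapter vhost0
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0          U8233.E8B.10F5D0P-V2-C101                    0x0000000a

VTD                   vtscsi0
Status                Available
LUN                   0x8100000000000000
Backing device        hdisk6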
Log in to the SDMC environment and from the Home page, locate the host that contains the VirtServer1 Virtual Server. This is the same host used in 4.2.3, Create Virtual Server for client OS on page 58. Click the name of the host to open the Resource Explorer window. Check the checkbox to the left of the VirtServer1 Virtual Server name, then click Actions → System Configuration → Manage Virtual Server. Follow these steps to add virtual Fibre Channel adapters to the VirtServer1 Virtual Server:
1. Click Storage Adapters in the left menu.
2. Click Add to open the Create Virtual Storage Adapter window and enter the following:
   Adapter ID:                  12
   Adapter type:                Fibre Channel
   Connecting virtual server:   VIOS1(1)
   Connecting adapter ID:       102
Confirm the window with the Add button.
3. Click Add to open the Create Virtual Storage Adapter window and enter the following:
   Adapter ID:                  22
   Adapter type:                Fibre Channel
   Connecting virtual server:   VIOS2(2)
   Connecting adapter ID:       102
4. Confirm the window with the Add button.
5. Click Apply to dynamically add the virtual Fibre Channel adapters to VirtServer1.
Now it is necessary to update the configuration profile of the VirtServer1 Virtual Server. To update the profile of the VirtServer1 Virtual Server, log on to the SDMC environment and:
1. From the Home page, locate the host that contains the VirtServer1 Virtual Server and click the name of the host to open the Resource Explorer window.
2. Check the checkbox to the left of the Virtual Server VirtServer1, then click Actions → System Configuration → Save Current Configuration.
3. Select the Overwrite existing profile checkbox and select the OriginalProfile profile.
4. Confirm the screen using the OK button.
5. Confirm the Save Profile screen using the Yes button.
Confirm the window with the Add button. 3. Click Apply to dynamically add the virtual Fibre Channel adapter to VIOS1. 4. On the VIOS1 command line run the cfgdev command to check for newly added devices. 5. On VIOS1 run the lsdev -type adapter | grep "Virtual FC" command to list virtual Fibre Channel adapters as shown in Example 4-12.
Example 4-12 Listing virtual Fibre Channel adapters on VIOS1
$ lsdev -type adapter | grep "Virtual FC"
vfchost0 Available Virtual FC Server Adapter
The device name vfchost0 is used in the next steps as the -vadapter attribute.
6. List the physical Fibre Channel ports and NPIV attributes using the lsnports command as shown in Example 4-13.
Example 4-13 Listing NPIV capable Fibre Channel ports on VIOS1
NPIV capable ports have the number 1 in the fabric column. For Fibre Channel virtualization, select the physical port with the device name fcs1; this device name will be used in the next steps to create the mapping. The physical port fcs0 was used for SAN LUN masking in chapter Configure VIOS for client storage on page 61.
7. Create the virtual Fibre Channel adapter to physical Fibre Channel adapter mapping. This mapping can be done using the SDMC interface or the VIOS command line. Here we use the SDMC to create the mapping between the virtual Fibre Channel adapter and the physical Fibre Channel adapter. Creation of this mapping using the VIOS command line is shown in chapter 4.4.3, Configure second VIOS for NPIV on page 73. To create the mapping, log on to the SDMC and:
8. From the Home page, locate the host that contains the VIOS1 Virtual Server. Check the checkbox to the left of the host containing VIOS1.
9. Click Actions → System Configuration → Virtual Resources → Virtual Storage Management.
10. In the VIOS/SSP optionbox, select VIOS1 and click Query.
11. Click Virtual Fibre Channel.
12. Select the checkbox to the left of the fcs1 physical Fibre Channel port. This device name was found in the previous steps.
13. Click Modify virtual server connections.
14. Select the checkbox to the left of the VirtServer1 Virtual Server name.
15. Click OK.
Now it is necessary to update the configuration profile of the VIOS1 Virtual Server. To update the profile of the VIOS1 Virtual Server, log on to the SDMC environment. From the Home page, locate the host that contains the VIOS1 Virtual Server and:
1. Click the name of the host to open the Resource Explorer window.
2. Check the checkbox to the left of the Virtual Server VIOS1, then click Actions → System Configuration → Save Current Configuration.
3. Select the Overwrite existing profile checkbox and select the OriginalProfile profile.
4. Confirm the screen using the OK button.
5. Confirm the Save Profile screen using the Yes button.
Confirm the window with the Add button.
3. Click Apply to dynamically add the virtual Fibre Channel adapter to VIOS2.
4. On the VIOS2 command line, run the cfgdev command to check for newly added devices.
5. On VIOS2, run the lsdev -type adapter | grep "Virtual FC" command to list the virtual Fibre Channel adapters as shown in Example 4-14.
Example 4-14 Listing virtual Fibre Channel adapters on VIOS2
$ lsdev -type adapter | grep "Virtual FC"
vfchost0 Available Virtual FC Server Adapter
The device name vfchost0 is used in the next steps as the -vadapter attribute.
6. List the physical Fibre Channel ports and NPIV attributes using the lsnports command as shown in Example 4-15.
Example 4-15 Listing NPIV capable Fibre Channel ports on VIOS2
NPIV capable ports have the number 1 in the fabric column. For Fibre Channel virtualization, select the physical port with the device name fcs1. The physical port fcs0 was used for SAN LUN masking in chapter Configure second VIOS for client storage on page 68.
7. Create the Fibre Channel virtualization using the vfcmap -vadapter vfchost0 -fcp fcs1 command.
8. Verify the virtual Fibre Channel mapping using the lsmap -all -npiv command as shown in Example 4-16 on page 74. The status of the virtual Fibre Channel adapter should be LOGGED_IN.
Note: Make sure that the client OS in Virtual Server VirtServer1 has checked for new devices after adding the devices in chapter 4.4.1, Configure client Virtual Server for NPIV on page 70. In AIX, use the cfgmgr command to check for newly added devices.
Example 4-16 Listing virtual Fibre Channel mapping on VIOS
$ lsmap -all -npiv
Name          Physloc                            ClntID ClntName       ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost0      U8233.E8B.10F5D0P-V2-C102          10     VirtServer1    AIX

Status:LOGGED_IN
FC name:fcs1                    FC loc code:U5802.001.RCH8497-P1-C5-T2
Ports logged in:1
Flags:a<LOGGED_IN,STRIP_MERGE>
VFC client name:fcs1
Now it is necessary to update the configuration profile of the VIOS2 Virtual Server. To update the profile of the VIOS2 Virtual Server, log on to the SDMC environment and:
1. From the Home page, locate the host that contains the VIOS2 Virtual Server and click the name of the host to open the Resource Explorer window.
2. Check the checkbox to the left of the Virtual Server VIOS2, then click Actions → System Configuration → Save Current Configuration.
3. Select the Overwrite existing profile checkbox and select the OriginalProfile profile.
4. Confirm the screen using the OK button.
5. Confirm the Save Profile screen using the Yes button.
4815ch05.fm
Chapter 5.
Advanced Configuration
This chapter describes additional configurations to a dual VIOS setup. The advanced setup also addresses performance concerns over the single and dual VIOS setup as well as detailing other advanced configuration practices.
(Table: virtual adapter numbering plan for the advanced dual-VIOS configuration — virtual Ethernet, virtual SCSI, and virtual Fibre Channel adapters with their server adapter IDs and slots, for example C10, C11, C22, C23, C101, C103, C105, and C111, and the corresponding client adapter IDs and slots, for example C10, C11, C21, C23, and C25.)
(Table, continued: virtual Fibre Channel adapters 113 and 115 and further virtual Ethernet adapters, with additional server VLANs and server adapter slots C113, C115, C10, C11, and C121 through C123, and client adapter slots C10, C11, C21, C23, and C25.)
(Figure: SEA failover configuration — VIOS 1 (primary, priority=1) bridges with SEA ent7 over interface en7 and link aggregation ent6 of the physical ports ent0 through ent3; VIOS 2 (standby, priority=2) provides the backup SEA. The client partition connects through virtual adapter ent4 and the virtual adapters ent5 (PVID 1) on each VIOS, with a control channel through the Hypervisor and an uplink with VLAN=1 to Ethernet switch 1.)
Figure 5-2 MPIO setup where each VIOS partition is connected to one SAN switch.
Each VIOS partition can have its Fibre Channel adapter ports connected to different SAN switches, as illustrated in Figure 5-3.
Figure 5-3 MPIO setup where the VIOS partitions are connected to the two SAN switches.
The two approaches have their benefits and drawbacks, as highlighted in Table 5-2:

Table 5-2 Fibre Channel cabling scenarios

SAN switch failure
- Each VIOS partition connected to one SAN switch (Figure 5-2 on page 80): VIOS1 is unavailable for storage; the LUNs are accessible via VIOS2. VIOS1 is affected, VIOS2 is unaffected.
- Each VIOS partition connected to two SAN switches (Figure 5-3): Storage is available through both VIOS partitions, but VIOS1 and VIOS2 are both impacted and may lose connectivity to the SAN.

Cabling issues
- Each VIOS partition connected to one SAN switch: Easier to pinpoint cabling problems, because all connections on VIOS1 go to SAN switch 1 and all connections on VIOS2 go to SAN switch 2.
- Each VIOS partition connected to two SAN switches: Harder to manage cable issues, because VIOS1 and VIOS2 have connections to both SAN switch 1 and SAN switch 2.
Memory Sharing, since the savings can be used either to lower memory overcommitment levels or to create room to increase the logical partitions' memory footprints. Memory Deduplication is the topic of Redpaper REDP-4827.
4815bibl.fm
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this paper.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topics in this document. Note that some publications referenced in this list might be available in softcopy only.
   IBM Systems Director Management Console: Introduction and Overview, SG24-7860
   IBM PowerVM Virtualization Introduction and Configuration, SG24-7940
   Integrated Virtualization Manager on IBM System p5, REDP-4061
You can search for, view, download, or order these documents and other Redbooks, Redpapers, Web Docs, drafts, and additional materials at the following website:
ibm.com/redbooks
Online resources
These websites are also relevant as further information sources:
   PowerVM QuickStart by William Favorite <wfavorite@tablespace.net>
   http://www.tablespace.net/quicksheet/powervm-quickstart.html
   IBM i Information Center
   http://publib.boulder.ibm.com/eserver/ibm.html
   NIM installation and backup of the VIO server
   https://www-304.ibm.com/support/docview.wss?uid=isg3T1011386#4
Back cover
IBM PowerVM
Getting Started Guide
Step by step virtualization configuration from scratch to the first partition IVM, HMC, and SDMC examples provided Advanced configurations included
IBM PowerVM virtualization technology is a combination of hardware and software that supports and manages virtual environments on POWER5, POWER5+, POWER6, and POWER7-based systems. Available on IBM Power Systems and IBM BladeCenter servers as optional Editions, and supported by the AIX, IBM i, and Linux operating systems, this set of comprehensive systems technologies and services is designed to enable you to aggregate and manage resources using a consolidated, logical view. Deploying PowerVM virtualization on IBM Power Systems offers you the following benefits:
   Lower energy costs through server consolidation
   Reduced cost of your existing infrastructure
   Better management of the growth, complexity, and risk of your infrastructure
This IBM Redpaper publication is a quick start guide that helps you install and configure a complete PowerVM virtualization solution on IBM Power Systems using the Integrated Virtualization Manager (IVM), the Hardware Management Console (HMC), the Virtual I/O Server (VIOS), or the Systems Director Management Console (SDMC). The paper is targeted at new customers who need initial instructions on how to install, configure, and bring up the whole system in an easy and quick way.
Redpaper
INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION