
HP BladeSystem Solutions I: Planning and Deployment
CSG21230SG10411

HP Training

Student guide

© Copyright 2004 Hewlett-Packard Development Company, L.P.


The information contained herein is subject to change without notice. The only warranties for HP
products and services are set forth in the express warranty statements accompanying such
products and services. Nothing herein should be construed as constituting an additional warranty.
HP shall not be liable for technical or editorial errors or omissions contained herein.
This is an HP copyrighted work that may not be reproduced without the written permission of HP.
You may not use these materials to deliver training to any person outside of your organization
without the written permission of HP.
Intel, Pentium, Xeon, and Itanium are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
Windows, MS Windows, and Windows NT are US registered trademarks of Microsoft
Corporation.
UNIX is a registered trademark of The Open Group.
VHDM is a registered trademark of Teradyne Inc.
Printed in USA
HP BladeSystem Solutions I Planning and Deployment
Student Handout 1
November 2004
HP Restricted

Contents

Course Overview ............................................................................... 1

Module 1 - Introducing the HP BladeSystem Portfolio .......................... 11
    Define server blade and server blade enclosure
    Describe the HP BladeSystem product line
    Identify the deployment and management tools available for the HP BladeSystem solutions
    Discuss the benefits of server blades and HP BladeSystem solutions TCO

Module 1 Lab: Classroom Setup and Configuration .............................. 39
    Describe the classroom arrangement
    Verify the initial classroom configuration
    Explain the hardware layout
    Identify the blade server assignments for each student group
    Validate the HP StorageWorks Modular Smart Array 1000 (MSA1000) configuration
    Describe the HP ProLiant BL p-Class GbE2 Interconnect Switch configuration

Module 2 - ProLiant BL p-Class Server Blades and Infrastructure ............. 47
    Discuss the HP ProLiant BL p-Class system anatomy
    Describe the server blade enclosure
    Compare the differences among the ProLiant BL20p G2, BL20p G3, BL30p, and BL40p server blades
    List and describe the server blade options
    Network interconnectivity
    Storage connectivity
    Power infrastructure
    Design the ProLiant BL p-Class power infrastructure

Module 3 - Site Planning and Infrastructure Design ........................... 95
    Plan a deployment site for HP BladeSystem solutions
    Plan a target data center environment
    Design the power infrastructure for ProLiant BL p-Class servers

Module 3 Lab 1: Using the HP ProLiant BL p-Class Sizing Utility ............ 111
    Access the ProLiant BL p-Class Sizing Utility
    Use the sizing utility graphical user interface (GUI)
    Configure the blade enclosures
    Configure the server blades
    Configure the rack-centralized power subsystem
    Obtain the equipment list summary
    Reset the sizing utility
    Determine the maximum rack density

Module 3 Lab 2: Setting Up and Configuring a p-Class Blade System ......... 129
    Identify the HP BladeSystem components
    Install the power supplies in the power enclosure
    Install the interconnects
    Cable and power on the system

Module 4 - ProLiant BL p-Class Network Connectivity Options ................ 155
    Discuss general networking concepts
        VLAN
        STP
        Port trunking, load balancing, and teaming
    Discuss ProLiant BL p-Class server blade signal routing
    Identify the available HP BladeSystem interconnect options
    Choose the appropriate interconnect options for HP BladeSystem servers
    Describe GbE Interconnect Switch best practices

Module 4 Lab 1: Configuring the HP ProLiant BL GbE2 Interconnect Switch ... 203
    Set up and cable the HP ProLiant BL GbE2 Interconnect Switch
    Access the switch console interface
    Set a static IP address for the switch management interface
    Manipulate the switch configuration files and firmware images
    Access the GbE2 Interconnect Switch with a web browser
    Configure port trunking

Module 4 Lab 2: Configuring VLANs and STP with the HP ProLiant BL GbE2 Interconnect Switch ... 219
    Verify connectivity between server blades on two separate switches and a single connection between the switches
    Add Virtual Local Area Network (VLAN) connectivity between servers on separate switches
    Apply the basic concepts of the Spanning Tree Protocol (STP) by using VLANs in conjunction with STP

Module 4 Lab 3: Accessing and Configuring iLO .............................. 235
    Access and configure the integrated Lights-Out (iLO) of your server blade
    Upgrade the HP BladeSystem firmware

Module 5 - Deploying ProLiant BL p-Class Server Blades ..................... 255
    Deploy an HP BladeSystem server using RDP, iLO, and Systems Insight Manager
    Prepare a deployment server
    Use RDP and iLO to manage an HP BladeSystem solution

Module 5 Lab 1: Preparing the Deployment Server ............................ 273
    Install the Altiris Deployment Solution 6.1 on a deployment server running Microsoft Windows Server 2003
    Install the HP ProLiant Integration Module for the Deployment Solution 1.60
    Complete the HP ProLiant Essentials Rapid Deployment Pack (RDP) predeployment configuration:
        Configure the Preboot eXecution Environment (PXE) to process new computers automatically
        Synchronize the console name with the Microsoft Windows name
        Preconfigure the Deployment Server agent
        Preconfigure the Insight Web Agent in the HP ProLiant Support Pack (PSP) for Windows

Module 5 Lab 2: Creating a Windows Server 2003 Reference Server ........... 303
    Connect the server blade to an HP StorageWorks Modular Smart Array 1000 (MSA1000)
    Deploy a scripted Microsoft Windows Server 2003 installation to a Preboot eXecution Environment (PXE)-enabled server blade
    Install the Altiris eXpress Deployment Server Agent on the reference server blade
    Remotely access the server blade
    Capture the reference server blade hardware configuration and disk image

Module 5 Lab 3: Creating a Red Hat Enterprise Linux AS 3 Reference Server ... 335
    Add Linux jobs to the Deployment Server Console
    Erase the target server
    Deploy a scripted Linux installation
    Configure the HP StorageWorks Modular Smart Array 1000 (MSA1000)
    Capture the reference server blade hardware configuration and disk image

Module 5 Lab 4: Deploying Windows Server 2003 Using Disk Imaging .......... 365
    Deploy Windows Server 2003 using the hardware configuration files and the disk image previously created
    Configure and demonstrate the rip-and-replace functionality

Module 5 Lab 5: Deploying Red Hat Enterprise Linux AS 3 Using Disk Imaging ... 379
    Deploy Red Hat Enterprise Linux AS 3 using the hardware configuration files and the disk image previously created
    Configure and demonstrate the rip-and-replace functionality

Module 6 - ProLiant BL p-Class Storage Connectivity Options ................ 393
    Identify the storage solutions supported by the HP BladeSystem
    Describe HP BladeSystem SAN support
    Explain how to connect an HP ProLiant BL p-Class server to an HP SAN
    Discuss the process of booting from a SAN

Module 6 Lab 1: Booting Windows Server 2003 from a SAN ..................... 417
    Connect the server blade to an HP StorageWorks Modular Smart Array 1000 (MSA1000)
    Disable the integrated array controller and change the boot order
    Configure the QLogic Host Bus Adapters (HBAs)
    Modify the deployment job to support a SAN boot
    Install the operating system

Module 6 Lab 2: Booting Red Hat Enterprise Linux AS 3 from a SAN ........... 447
    Connect the server blade to an HP StorageWorks Modular Smart Array 1000 (MSA1000)
    Disable the integrated array controller and change the boot order
    Configure the QLogic Host Bus Adapters (HBAs)
    Modify the deployment job to support a SAN boot
    Install the operating system

Module 7 - ProLiant BL p-Class Server Blade Management ..................... 477
    Identify functions and components of Systems Insight Manager
    Discuss how OVO for Windows provides management services for HP BladeSystems
    Explain how Systems Insight Manager integrates with OVO for Windows to manage HP BladeSystems
    Describe how to manage HP BladeSystems using iLO technology

Module 7 Lab: HP SIM 4.1 Installation and Discovery ........................ 499
    Verify the HP Systems Insight Manager (HP SIM) 4.1 hardware and software requirements
    Install and configure HP SIM 4.1 on a Microsoft Windows Server 2003 system
    Navigate the HP SIM 4.1 home page
    Run the first device discovery

Module 8 - ProLiant BL p-Class Service and Troubleshooting ................. 527
    Use the ProLiant BL p-Class Diagnostic Station to communicate with an HP BladeSystem solution
    Discuss the service and troubleshooting procedures for HP BladeSystems
    List the HP warranty and support options for HP BladeSystem servers

Appendix - ProLiant BL p-Class Server Blade Enclosure Installation Guide ... 567

HP BladeSystem Solutions I Planning and


Deployment

Course Overview

HP BladeSystem
Solutions I Planning
and Deployment
Course Overview

HP Restricted
2004 Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice

Rev. 4.41

HP BladeSystem Solutions I Planning and


Deployment

Course Overview

Introduction

Identify HP BladeSystem server blades


Deploy HP BladeSystem server blades
Connect HP BladeSystem server blades to a network
Connect HP BladeSystem server blades to storage devices
Manage HP BladeSystem server blades
Troubleshoot common HP BladeSystem server blade
problems

Rev. 4.41

HP Restricted

Overview 2

Introduction
The HP BladeSystem portfolio includes a full range of solutions to optimize server, network,
and storage use. HP BladeSystem solutions are built on HP ProLiant BL
p-Class server blades, which provide higher density and less cabling than rack-mounted
servers. ProLiant BL p-Class server blades also offer simple deployment solutions and efficient
management tools.
Because the high-density ProLiant BL p-Class server blades offer several features specific to
their design, there are special considerations in their implementation and maintenance. In this
course, you will learn how to:
Identify HP BladeSystem server blades
Deploy HP BladeSystem server blades
Connect HP BladeSystem server blades to a network
Connect HP BladeSystem server blades to storage devices
Manage HP BladeSystem server blades
Troubleshoot common HP BladeSystem server blade problems


Prerequisites
This course is a Master Accredited Systems Engineer (ASE) level course. It is designed for
people who have the required certifications or equivalent knowledge and experience. The
student guide and labs designed for this course, combined with other information you receive
from HP, will help you prepare for the Master ASE exam.
Prerequisite certifications
The following certifications or equivalent knowledge and experience are required before taking
this class:
Microsoft Certified Systems Engineer (MCSE) for Windows 2000/Windows Server 2003 or
Red Hat/SuSE Linux certification
HP Accredited Systems Engineer (ASE) certification
Prerequisite training
In addition to these certifications, each student is required to have completed the following
training or have the equivalent knowledge and experience:
HP StorageWorks Full-Line Technical Web-Based Training (WBT)
Microsoft Windows 2000 Integration and Performance course
ProLiant Essentials Rapid Deployment Pack (RDP) Advanced Technical Training or
Deploying Linux on HP ProLiant servers using RDP Linux Edition
Installing and Using HP Systems Insight Manager (optional but recommended)
Important! This course builds on knowledge gained in the prerequisite certifications and
training. If a student does not meet the prerequisites, this course can be extremely difficult or
impossible to complete. The course is written, and will be taught, as if the prerequisites have
been met by all students.


Course objectives (1 of 3)
After completing this course, you should be able to:
Identify the major components of an HP BladeSystem solution
Describe the HP ProLiant BL server blade line strategy
Identify the HP BladeSystem families
Discuss the deployment and management tools available for HP BladeSystem solutions
Compare the significant differences among the HP ProLiant BL20p G2, BL30p, and BL40p
server blades
Identify the ProLiant BL p-Class system components
Describe technologies and concepts that are unique to the ProLiant BL p-Class server blade
system


Course objectives (2 of 3)
Discuss the architecture of the ProLiant BL p-Class system components and the benefits
they provide
List the server blade options and describe how they are used to enhance performance in the
ProLiant BL p-Class system
Plan a deployment site for HP BladeSystem servers
Plan the deployment of a target environment
Design the power infrastructure of HP ProLiant BL p-Class server blades
Deploy HP ProLiant BladeSystem servers using
HP ProLiant Essentials RDP
HP integrated Lights-Out (iLO)
HP Systems Insight Manager
Prepare a deployment server and deploy HP BladeSystem servers
Discuss general networking concepts
VLAN
Spanning Tree Protocol (STP)
Port trunking and teaming
Choose the appropriate interconnect options for HP BladeSystem servers
Explain iLO port aggregation in HP BladeSystem servers


Course objectives (3 of 3)

Identify the storage solutions supported by the HP BladeSystem


Describe HP BladeSystem storage area network (SAN) support
Explain how to connect a ProLiant BL p-Class server to an HP SAN
Discuss the process of booting from a SAN
Identify functions and components of Systems Insight Manager
Discuss how HP OpenView Operations (OVO) for Windows provides management
services for HP BladeSystems
Explain how Systems Insight Manager integrates with OVO for Windows and iLO
technology to manage HP BladeSystems
Use the ProLiant BL p-Class Diagnostic Station to communicate with an HP BladeSystem
solution
Discuss the service and troubleshooting procedures for HP BladeSystems
List the HP warranty and support options for HP BladeSystem servers

Other information
Course modules
This course includes the following modules:
Module 1 Introducing the HP BladeSystem Portfolio
Module 2 ProLiant BL p-Class Server Blades and Infrastructure
Module 3 Site Planning and Infrastructure Design
Module 4 ProLiant BL p-Class Network Connectivity Options
Module 5 Deploying ProLiant BL p-Class Server Blades
Module 6 ProLiant BL p-Class Storage Connectivity Options
Module 7 ProLiant BL p-Class Server Blade Management
Module 8 ProLiant BL p-Class Service and Troubleshooting
Classroom facilities
The instructor will give you detailed information concerning:
Location of restrooms and smoking areas
Class hours
Class start time
Scheduled breaks
Class stop time


Classroom guidelines
Use the following guidelines when attending this class:
Do not interfere with other students' learning.
Be on time for class.
Turn all mobile phones and pagers to off or the silent setting.
Be professional in your speech and actions.
Do not change or modify lab equipment, passwords, or software configurations.
Do not smoke in the classroom.
Important! You may be removed from the classroom and not allowed to return if you fail to
follow the classroom guidelines.


Module 1 - Introducing the HP BladeSystem Portfolio


Objectives
After completing this module, you should be able to:
Define server blade and server blade enclosure
Describe the HP BladeSystem product line
Identify the deployment and management tools available for the HP BladeSystem solutions
Discuss the benefits of server blades and the total cost of ownership (TCO) of HP
BladeSystem solutions


Introducing server blades


As the need for more powerful and efficient computing systems grows, businesses are
deploying more servers for edge-of-the-network applications such as:
Web serving
Media streaming
Load balancing
Caching
Firewall protection
However, adding servers increases operating costs, consumes more power and space, and
increases the complexity of system administration. Existing resources can become inadequate.
Server blades enable enterprises to accommodate growth and use resources more efficiently.
The HP BladeSystem portfolio specifically addresses the needs of space-constrained
enterprises and service providers for:
Increased density
Rapid deployment and server software provisioning
Remote manageability
Industry-standard compatibility


What is a server blade?


A server blade is an independently functioning server with all the necessary server components
integrated on a single board, including:
Processors
Memory
Network adapters
Optional hard drives
A server blade, however, lacks the internal power supply and external connections found on
traditional servers.
The modular, hot-pluggable architecture of the server blade offers increased density over
traditional rack-mounted servers as well as increased adaptability, scalability, and
manageability.


What is a server blade enclosure?


A server blade enclosure houses multiple blades in a compact, precabled chassis, with eight
slots for up to 16 server blades. This configuration enables the blades to share common
resources such as power supplies, cooling fans, and interconnects.
The rack-mountable enclosures are easily installed in standard 22U, 36U, and 42U racks with
spring-loaded rack rails and thumbscrews.
Note: Do not use the term server to refer to server blade enclosures or to the enclosure and
server blades collectively. The term server, or more correctly, server blade, is used for the
individual blades only. The server blade enclosure is the chassis that houses the server blades
and other components. The term system refers to all the components collectively.


HP BladeSystem product line


The HP BladeSystem blade portfolio ranges from maximum density, single-processor blade PC
solutions to high-performance ProLiant symmetric multiprocessing (SMP) blades for mid-tier
and back-end applications.
HP plans to expand this portfolio to include:
Desktop blades
Workstation blades
HP BladeSystem servers feature HP industry-leading technologies such as:
Tool-free mechanical designs
Hot-pluggable components
Remote management through integrated Lights-Out (iLO) (BL p-Class only)


HP BladeSystem portfolio
The HP BladeSystem server portfolio includes two distinct families of server blades:
Blade PC solutions - Designed for power and space efficiency
    Optimized architecture for office productivity applications
    Integrated management and deployment tools for scale-out environments
    System-level redundancy
    ProLiant C-GbE Interconnect (with four 10/100/1000Mb/s Gigabit Ethernet RJ-45 uplinks for up to 40-to-1 network cable reduction)
    Model: HP bc1000 blade PC
ProLiant server blades - High-performance, high-availability server blades designed for multitiered data center architectures
    Intelligent fault-resilient power, redundant NICs, integrated RAID, and optional hot-pluggable SCSI drives
    Note: The BL30p has optional internal boot drives.
    Remote management using iLO Advanced with built-in graphic console and virtual media
    System of choice for dynamic web hosting and media streaming
    Current models:
        ProLiant BL20p Generation 2 (G2) and Generation 3 (G3)
        ProLiant BL30p
        ProLiant BL40p


Blade PC solutions
HP blade PC solutions are computing solutions that centralize desktop compute and storage
resources into easily managed, highly secure data centers. They also offer users the
convenience and familiarity of a traditional desktop environment.
The HP blade PC solutions are single-processor blades used in three-tiered Consolidated Client
Infrastructure (CCI) solutions, which feature:
A compute tier with racks of HP bc1000 blade PCs on the back end
An access tier using thin clients on the front end
A resource tier made up of a storage pool, network printers, application servers, and other
networked resources
Blade PC solutions are covered by End User Workplace Solutions from HP Services. These
services can help you simplify the provisioning, support, and management of any access device
or printer. These services also provide users with secure access to corporate information, email,
Internet, and printer services, anywhere, anytime.
Note: The HP blade PC solutions are currently available in North America only.


HP bc1000 blade PC
The HP bc1000 blade PC offers the following features:
Form factor - A 3U form factor maximizes density.
Processor - A 1.0GHz Transmeta Efficeon TM8000 processor offers thermal efficiencies designed to minimize power and cooling requirements, which lowers TCO.
Memory - 512MB DDR SDRAM PC2700 at 333MHz with 1024KB L2 cache memory, expandable to 1GB, offers good performance for mainstream applications.
Network controllers - Dual 10/100 integrated NICs help maximize throughput efficiencies.
40GB hard drive - Ample storage is available for accessing and working with data.
Warranty - A three-year limited hardware warranty is standard on the blade, one year on the drive.
Currently, the HP bc1000 blade PC is only available with the Windows XP Embedded Service Pack (SP) 1 operating system installed.
Benefits
Provides extreme density - up to 20 blade PCs per enclosure
Minimizes power and cooling requirements
Offers good performance
Maximizes network throughput
Provides ample storage


HP Consolidated Client Infrastructure


The HP CCI is a complete solution for customers seeking to:
Lower their desktop costs
Regain valuable desktop real estate
Secure user data
Standardize their desktop solution across the enterprise
CCI enables users on thin clients to connect to and work from application servers, such as the
HP bc1000 blade PC.
The CCI architecture incorporates the following components:
Client tier - Thin client devices connect to a remote blade PC using the Microsoft Remote Desktop Protocol. The HP bc1000 blade PCs host Microsoft Windows XP desktops for the client tier.
Dynamic Allocation Engine (DAE) - The DAE ensures that users reach an available device. This layer is transparent to the user.
Compute tier - The operating system and applications are installed on the internal drive of the bc1000 blade PC. The systems administrator performs management tasks such as updating the device and the applications and redeploying the desktops remotely on the blades.
Data resources tier - Data in the CCI is written remotely to a storage area network (SAN) or network attached storage (NAS). Users on the client tier save their work to the central location and retrieve it through a local network, virtual private network (VPN), or browser.


CCI features and benefits


CCI simplifies management and deployment of user desktops because:
User data is managed and retrieved from a central location.
Data backup can be planned and scheduled to ensure the integrity of the data and to
minimize or eliminate data loss.
In addition to significantly reducing the frequency of data loss and corruption, CCI delivers the
following benefits:
Lower TCO
Simplified virus isolation and eradication
Smoother and lower network traffic between data centers and users
Easier consolidation of other enterprise IT components, including:
File servers
Application servers
Database servers
Storage systems
For enterprise users, CCI provides connectivity from any location at any time, has a shorter
startup cycle, and delivers data-center levels of availability.


Thin clients
The HP Thin Client Series uses streamlined components to reduce hardware and software duplication across a network. Both Windows CE.NET and Windows XP Embedded operating systems are supported.
The HP portfolio of thin clients includes:
HP thin client t5700
HP thin client t5515
HP thin client t5500
HP thin client t5300
You can read the complete specifications at:
http://h18004.www1.hp.com/products/thinclients/index_t5000.html


ProLiant BL p-Class server blades


The HP ProLiant portfolio includes these server blades:
BL20p G2 and G3
High-performance 2P blade designed with enterprise availability
Mid-tier applications server blade
Ideal for dynamic web/application service provider (ASP) hosting, computational cluster nodes, terminal server farms, and AV/media streaming
BL30p
Optimized for compute density and external storage solutions
Mid-tier applications server blade
Ideal for compute density and external storage solutions
BL40p
High-performance 4P blade designed for mission-critical applications
Back-end server blade
Ideal for database servers, mail/messaging servers, and high-availability clusters


Ideal customer profiles


The HP BladeSystem architecture is designed to protect customer investments in two ways:
Provide longevity of the server blades and the interconnect infrastructure
Enable installation of server blades in standard racks along with legacy servers and storage
The HP ProLiant server blades target large corporate and service provider accounts with the
following requirements:
Two-processor server blades
Dynamic web and ASP hosting
Computational clusters
Terminal server farms
Audio/visual media streaming
Four-processor server blades
Database servers
Mail/messaging servers
High-availability cluster nodes


HP deployment and management tools


Deployment and management tools enable you to quickly and easily implement and configure
software on multiple server blades. Deployment and management tools include:
HP ProLiant Essentials Rapid Deployment Pack (RDP)
HP SmartStart Scripting Toolkit
HP Systems Insight Manager
HP OpenView


RDP
RDP is a complete solution for ProLiant servers that automates the process of deploying and
provisioning server software, enabling companies to quickly and easily adapt to changing
business demands. RDP combines an off-the-shelf version of Altiris Deployment Solution with
the ProLiant Integration Module.
The ProLiant Integration Module consists of software optimizations for ProLiant servers,
including:
SmartStart Scripting Toolkit
Configuration jobs for industry-standard operating systems
Sample unattended files
ProLiant Support Packs (PSPs) that include software drivers, management agents, and
important documentation
Deployment Solution provides a choice between a Windows-based or a web-based
management console. Both consoles have an intuitive user interface, making deployment of a
server or multiple servers easy and consistent. You can deploy servers through the imaging
feature or by scripting using the SmartStart Scripting Toolkit.
RDP is available in Linux and Windows editions. The Windows edition is hosted on a
Windows server and is intended for heterogeneous environments deploying both Windows and
Linux systems. The Linux edition is hosted on a Linux server and is intended for a
homogeneous Linux environment deploying only Linux systems.


SmartStart Scripting Toolkit


The SmartStart Scripting Toolkit uses a combination of DOS utilities and batch files for
configuring and deploying servers in a customized, predictable, and unattended manner. These
utilities duplicate the configuration of a source server on target servers with minimum user
interaction.
The SmartStart Scripting Toolkit is an excellent solution for large server deployments. It
requires advanced user knowledge to configure and detailed scripting knowledge to maintain
installation batch files for multiple server types.
Features and functions of the SmartStart Scripting Toolkit include:
Is delivered using a web download or on an RDP CD
Can be set up by advanced users and deployed by beginners
SmartStart 7.10 supports all ProLiant DL, ML, and BL servers and selected legacy server
models
Performs an unattended installation
Can perform remote installations with the Virtual Floppy feature of Remote Insight Lights-Out Edition (RILOE)
Important! SmartStart 7.0 and earlier does not support HP BladeSystem servers. All relevant
software drivers, agents, and utilities for HP BladeSystem servers are packaged on the ProLiant
Essentials RDP CD.
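
A minimal sketch of that capture-and-replay pattern follows. Every function and field name here is hypothetical, invented for illustration only; the actual toolkit implements the pattern with DOS utilities driven by batch files, not Python.

    # Hypothetical sketch of the capture/replay deployment pattern the
    # toolkit automates. None of these names are real toolkit utilities;
    # the toolkit itself uses DOS utilities and batch files.
    import json
    from pathlib import Path

    def capture_configuration(source: str) -> dict:
        """Stand-in for capturing the hardware configuration of a source server."""
        return {"server": source, "array": "RAID 1", "boot_order": ["PXE", "disk"]}

    def replay_configuration(target: str, config: dict) -> None:
        """Stand-in for applying the captured settings to a target server
        and starting an unattended operating system installation."""
        print(f"{target}: applying array setting {config['array']}")
        print(f"{target}: setting boot order {config['boot_order']}")
        print(f"{target}: starting unattended installation")

    # Capture once from the reference (source) server...
    store = Path("source-config.json")
    store.write_text(json.dumps(capture_configuration("reference-blade")))

    # ...then replay on each target with minimum user interaction.
    for blade in ("blade-01", "blade-02", "blade-03"):
        replay_configuration(blade, json.loads(store.read_text()))

The point of the pattern is that the source configuration is captured once and stored, so target installations become repeatable and predictable rather than hand-configured.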


Systems Insight Manager


Systems Insight Manager provides all of the capabilities required to manage HP server
platforms running standard operating systems (Windows, Linux, and UNIX) in a single tool:
Fault management - Identifies prefailure conditions to detect problems before they result in downtime.
Change management - Provides electronic updates through the HP website. The only complete system software maintenance tool in the industry, Systems Insight Manager teams with an agent running on each server to upload upgrades and monitor software baselines.
Asset and inventory reporting - Eliminates the need for a physical inventory by providing reports using 100 data collection parameters such as processor, memory, installed software, and serial numbers.
Remote control - Access to servers is always available from the Systems Insight Manager home page, even if the server is in blue screen or kernel error conditions.
Role-based security - Grants the system administrator control over which users can perform select management operations on certain devices. This is different from Microsoft Windows security.
Tool definitions - Enables you to integrate applications or scripts into the interface.
Visualization and support - Includes blade, enclosure, and rack views.
The Systems Insight Manager program is available for download from the HP website:
http://h18013.www1.hp.com/products/servers/management/hpsim/download.html
Additional HP management tasks can be added for a fee; the framework also supports third-party plug-ins.


Systems Insight Manager (continued)


Systems Insight Manager architectural layers include:
Plug-in tasks - Extend the software functionality. Add plug-in tasks based on your environment and budget. Plug-ins for rapid deployment, performance management, partition management, and workload management enable system administrators to choose the software required to deliver required management of their hardware.
Adaptive control functions - Enable an IT infrastructure to flex dynamically and automatically to meet changing IT demands. This layer uses the tasks layer and the automation engine to deliver an adaptive infrastructure.
Service provisioning - Enables you to select a group of unconfigured servers and deploy an image and configuration to them.
Automatic Service Recovery - Automatically migrates the preexisting configuration to a spare server to maintain or recover service if a server should fail.
Dynamic Resource Scaling - Automatically adds or deletes server or storage resources for a given service based on predefined policies.
Virtual or physical view - Handles resource abstraction as servers are provisioned and reprovisioned for various tasks. You can manage virtual machines running on servers and associate them to physical servers.


HP OpenView
OpenView products ensure smooth system operation by managing the availability and
performance of critical services across the enterprise. Products in the OpenView software suite
include:
Network Node Manager (NNM) - Designed for all sizes of networks requiring discovery, graphical layout, and advanced management of network equipment, sophisticated root-cause analysis, and distributed management for large networks spanning multiple departments.
OpenView Operations (OVO) for Windows - Provides comprehensive event management, proactive performance monitoring, and automated alerting, reporting, and graphing for heterogeneous systems, middleware, and applications.
OVO for UNIX (Network Node Manager and Service Navigator) - Provides a distributed, large-scale management solution that monitors, controls, and reports the health of IT environments.
GlancePlus - Allows easy examination of system activities, identifies and resolves performance bottlenecks, and tunes the system for more efficient operation.
GlancePlus Pak (GlancePlus and Performance Agent) - Offers the diagnostic capabilities of GlancePlus and the logging, alarming, and collection capabilities of Performance Agent.
Performance Manager - Monitors, analyzes, and forecasts resource utilization for distributed and multi-vendor environments. Performance Manager uses data collected from the Performance Agent and other sources to isolate bottlenecks and maximize resource uptime.
Smart Plug-Ins (SPIs) - Extend the management capabilities of OpenView to ensure optimum performance and uptime for specific applications. SPIs include BEA WebLogic Server, IBM WebSphere, Microsoft .NET, Active Directory, Exchange and SQL Server, Oracle, PeopleSoft, and Sun Java System Application Server 7.


OpenView working together with Systems Insight Manager


OpenView and Systems Insight Manager work together to enable IT professionals to
proactively manage their entire computing infrastructure with unsurpassed control. This
integration allows seamless cooperation of operators and administrators.
Systems Insight Manager provides configuration management capabilities and detailed
information regarding:
Property pages of devices with serial numbers
Processor types
Amount of memory
Physical and logical disk information
Temperature information of specific components
OpenView provides end-to-end, multivendor systems, network, application, and service-centric
management, focused on managing and optimizing services over IT, voice, and application and
data infrastructures.
OpenView products are used by operators, IT managers, and help desk specialists to ensure
smooth operations by managing the availability and performance of critical services across the
whole enterprise.
Systems Insight Manager sends information about HP platforms to OpenView and enables you
to drill down to more detailed hardware-level information sources to resolve specific hardware
events and isolate root causes.


Why choose HP BladeSystems?


HP BladeSystem infrastructures offer a highly flexible and scalable environment that
maximizes IT staff efficiency and processes while dramatically reducing total cost of
ownership. HP BladeSystems are:
Innovative - HP was first to market with blades
Comprehensive - A portfolio of deployment tools and management solutions unmatched in the market
Collaborative - Industry-leading partnerships
Economical - The densest dual-processor blades on the market
Flexible - A full range of interconnect options
Scalable - Built with future expansion in mind


HP BladeSystem benefits
Data center space savings - Server blade systems reduce required data center space 14% to 24%.
Lower connectivity costs and simplified cabling - Up to 25% of a system administrator's time is spent in cable management, and cable failures are a primary cause of downtime. Server blade systems are wired once and reconfigured through virtual LAN (VLAN) software configuration tools.
Fewer spare parts - Server blade architectures are designed for shared storage, so all user-changeable data should be on NAS and SANs. Server blades run operating systems and applications only and are managed by software deployment tools, so there are fewer errors during operating system, patch, and application maintenance.
Reduced installation, upgrade, and maintenance time - HP BladeSystems are installed once, then reconfigured with software tools as necessary. Adding and reconfiguring server blades, network ports, cables, and disk capacity takes minutes instead of days.
Higher system availability - Server blades are fully redundant with dual VLAN switches per blade enclosure, redundant shared power systems across all blades in a rack, backplane data paths (Ethernet and Fibre Channel SAN), local disks (RAID 1), and fans. In addition, rip-and-replace server maintenance (through the enclosure slot and using software deployment tools such as RDP) decreases downtime.
Improved data center efficiency - Blade systems are a catalyst to improving data center ratios (devices managed per administrator) and reduce the need to touch every device in the data center.
Remote access for centralized management - You can centralize management of multiple data centers and merge separate management domains (servers, network, and storage).
Automated deployment and provisioning - Blade systems provide optional redundant SAN connectivity.


TCO improvements across the lifecycle


Server blades support the standardization of hardware and software building blocks to lower
TCO throughout the product lifecycle. The stages of the lifecycle include:
Acquisition - Adding blades is less expensive than adding rack-mounted servers. Upgrade and replacement blades are also less costly.
Planning - Blades require upfront planning, but pay off quickly in terms of savings for installing new and managing redeployed blades.
Deployment and provisioning - Hardware deployment is simple: you plug the blades into an enclosure. Software deployment can be automated, using images or scripts.
Maintenance - Fewer service errors occur because of the more streamlined modular architecture. Because there are fewer components per server, the system is more highly available.
Upgrades and replacements - With software provisioning tools, upgrades and replacements require no recabling.
Reprovisioning - Reprovisioning is a drag-and-drop event, saving time and resources and maximizing hardware utilization.


HP server blades reduce hard costs


The initial costs of a ProLiant BL p-Class solution are lower than the costs of a 1U server
because server blades require a lower investment in network and keyboard/video/mouse
(KVM) switches, especially in environments with a GbE network and SAN connectivity.
ProLiant BL p-Class blades simplify cabling, which lowers connectivity costs. Total
networking costs are reduced because the system administrator spends less time in cable
management and recovering from cable failures.
Because ProLiant BL p-Class blades provide maximum density, the footprint is reduced 14% to 24% (a back-of-the-envelope density comparison follows this list).
ProLiant BL p-Class blades speed up installation, provisioning, and maintenance, making it
easy to add and reconfigure servers, network ports, cables, and disk capacity.
ProLiant BL p-Class blades have fewer spare parts, which simplifies daily management.
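
To see where the footprint numbers come from, consider a rough density comparison. The sketch below assumes a 42U rack, a 6U blade enclosure holding up to 16 server blades, and two 3U power enclosures; the enclosure heights and rack budget are assumptions for illustration only, and the ProLiant BL p-Class Sizing Utility covered in Module 3 is the authoritative tool for supported configurations.

    # Back-of-the-envelope rack density comparison (illustrative only).
    # Assumed values: 42U rack, 6U per blade enclosure, 16 blades per
    # enclosure, two 3U power enclosures. Use the ProLiant BL p-Class
    # Sizing Utility for supported configurations.

    RACK_U = 42

    def one_u_servers(rack_u: int = RACK_U) -> int:
        """Traditional scenario: each rack unit holds one 1U server."""
        return rack_u

    def p_class_blades(rack_u: int = RACK_U, enclosure_u: int = 6,
                       power_u: int = 3, power_enclosures: int = 2,
                       blades_per_enclosure: int = 16) -> int:
        """Blade scenario: reserve space for power, fill the rest with enclosures."""
        usable = rack_u - power_enclosures * power_u
        return (usable // enclosure_u) * blades_per_enclosure

    print(f"1U servers per rack: {one_u_servers()}")   # 42
    print(f"Blades per rack:     {p_class_blades()}")  # 96 with these assumptions

Under these assumptions a single rack holds more than twice as many blade servers as 1U servers; real supported densities depend on power and cooling limits, which is exactly what the sizing utility models.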

HP BladeSystem TCO Calculator

The HP BladeSystem TCO Calculator:
Is a spreadsheet-based model that creates a three-year TCO (based on a net present value [NPV]) for server blades and a comparative value for 1U rack-mounted servers (for example, a ProLiant DL360)
Uses customer-specific data (such as labor rates, pricing, and power costs) combined with rack configuration rules to create a specific answer for each customer
Creates what-if scenarios to aid in the decision-making process
Is revised monthly as variables change and as additional functionality is added
Is available from http://presales.hp.com/iss/blades/ under Sales Aides > Business Case for BladeSystems > TCO White Papers and Tools

An example three-year calculation for eight servers, reconstructed from the calculator output:

3-Year TCO Calculation - Summary (Number of Servers Compared: 8)

                              BL20p G2 Scenario    1U Server      BL20p G2 Savings
TCO / NPV                     $44,661              $(115,473)     $160,133
3-year TCO per server         $5,583               $(14,434)      $20,017
Acquisition Cost              $(79,926)            $(97,212)      $17,286
Installation Cost             $(2,333)             $(8,653)       $6,320
Yearly Operational Value      $53,176              $(4,000)       $57,176

Details - 1U Server Scenario
Acquisition Costs (Year 0)
    Server Acquisition cost: $44,528 (from Step 4; includes FC HBA cost)
    Infrastructure Acquisition Costs: $52,684 (from Step 4)
Installation Costs (Year 0)
    Racking costs: $5,600 (= rackable items x time to rack one x labor rate)
    Additional Power-related installation: $2,000 (= from Step 1 input value)
    Cabling Costs: $1,053 (= number of cables x time to install one x labor rate)
Maintenance/Upgrade Costs (Years 1-3)
    $4,000 per year (= # of servers x # of events per year x time to remove and install x labor rate)
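
To make the NPV arithmetic concrete, here is a minimal sketch of the three-year comparison using the figures from the example summary above. The cash-flow layout (year-0 acquisition and installation, a constant operational value in years 1 through 3) and the 12% discount rate are assumptions for illustration; the calculator itself works from customer-specific inputs and configuration rules.

    # Minimal sketch of a three-year TCO/NPV comparison. The 12% discount
    # rate and the cash-flow layout are assumptions for illustration; the
    # input figures come from the example summary above.

    def npv(rate: float, cash_flows: list[float]) -> float:
        """Net present value; cash_flows[0] is year 0 and is not discounted."""
        return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

    def three_year_tco(acquisition: float, installation: float,
                       yearly_operational: float, rate: float = 0.12) -> float:
        flows = [acquisition + installation] + [yearly_operational] * 3
        return npv(rate, flows)

    blade   = three_year_tco(-79_926, -2_333, 53_176)   # BL20p G2 scenario
    rack_1u = three_year_tco(-97_212, -8_653, -4_000)   # 1U server scenario
    print(f"BL20p G2 scenario:  {blade:12,.0f}")
    print(f"1U server scenario: {rack_1u:12,.0f}")
    print(f"Blade savings:      {blade - rack_1u:12,.0f}")

With these inputs the results land close to the summary figures; the small differences come from the assumed discount rate.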


Learning check
1. Define server blade.
_________________________________________________________________________
_________________________________________________________________________
__________________________________________________________
2. Define server blade enclosure.
_________________________________________________________________________
_________________________________________________________________________
__________________________________________________________
3. The HP BladeSystem _______________ portfolio offers high availability server blades
designed for multitiered data center architectures.
4. The HP BladeSystem _______________ portfolio only offers single-processor server
blades.
5. How does the HP BladeSystem lower TCO during acquisition?
_________________________________________________________________________
_________________________________________________________________________
_____________________________________________________________
6. Match the deployment and management tool with its feature.
   a. RDP                            ___ Performs unattended installation
   b. SmartStart Scripting Toolkit   ___ Automates deployment
   c. Systems Insight Manager        ___ Identifies prefailure conditions
   d. HP OpenView                    ___ Manages the availability and performance of
                                         critical services across the enterprise


Module 1 Lab: Classroom Setup and Configuration

Objectives
After completing this lab, you should be able to:

Describe the classroom arrangement

Verify the initial classroom configuration

Explain the hardware layout

Identify the blade server assignments for each student group

Validate the HP StorageWorks Modular Smart Array 1000 (MSA1000) configuration

Describe the HP ProLiant BL p-Class GbE2 Interconnect Switch configuration

Introduction
This lab is provided as a reference point for validating the classroom setup and
configuration. Typically, most of the configuration work described in this lab has
already been performed by your instructor or the support staff.

Important
Do not proceed without first consulting your instructor or carefully reviewing a
configuration deviation document. Your instructor or the configuration
deviation document will explain what configuration steps, if any, must yet be
performed.

Regardless of the initial configuration state, you must become familiar with the
classroom setup and configuration. Depending on the initial configuration state,
perform one of the following:
1. If you are to complete certain configuration steps described in this lab and identified by your instructor or the configuration deviation document, perform them as instructed. When done, review this entire lab to become familiar with the classroom setup and configuration.

2. If no configuration steps are to be completed, review this entire lab to become familiar with the classroom setup and configuration.

When finished, proceed with the next lab as instructed.


Exercise 1 Classroom arrangement


The classroom consists of several student stations, each accommodating one or more students. A typical classroom configuration has six such stations, one for every two students, for a total of 12 students per classroom.

Student stations
Six student stations should be available. Each student station consists of:

A deployment/management server, such as a ProLiant DL360 Generation 2 (G2), with:

At least 512MB (preferably 1GB) of memory

Two processors (preferable)

Two 100Mb/s network interface adapters

Embedded Smart Array 5i+ controller

Two mirrored 18.2GB drives

Each student station server has Microsoft Windows Server 2003 Enterprise
Edition installed and hosts these roles:

Primary domain controller with Active Directory

Domain Name System (DNS)

Dynamic Host Configuration Protocol (DHCP) server

Rapid Deployment Pack (RDP) server

HP Systems Insight Manager and Central Management Server

Each deployment/management server has the required class software located in the C:\ClassFiles folder. Your instructor may provide you with additional software on CDs.

HP BladeSystem
One HP BladeSystem is required in the classroom, with configuration as follows:

A ProLiant BL p-Class blade enclosure with six ProLiant BL20p G2 or G3 server blades.

Note
The ProLiant BL20p G2 or G3 is required for SAN connectivity.


Single processor

256MB or more memory

Two internal 18.2GB or larger hot-pluggable drives

Fibre Channel mezzanine adapter



One single-phase ProLiant BL p-Class power enclosure and two ProLiant BL p-Class single-phase power supplies.

One ProLiant BL p-Class GbE2 Interconnect Switch Kit (two GbE2 Interconnect Switches) with one ProLiant BL p-Class GbE2 Storage Connectivity Kit (two Fibre Channel storage connectivity modules).

One MSA1000 storage solution with the embedded HP StorageWorks SAN Switch 2/8 and at least five 18.2GB or larger drives.

A 22U rack (not required, but highly recommended) to house the HP BladeSystem components.

Six ProLiant BL p-Class diagnostic adapters.

Important
It is mandatory to have 240VAC 30A power to each ProLiant BL p-Class power enclosure.


Exercise 2 Initial classroom configuration


Only the deployment/management server must be configured beforehand; the
server blades and components will be configured and deployed during the class.
The deployment/management server must have Microsoft Windows Server 2003
Enterprise Edition preinstalled and configured as follows:


Computer name: DS1

Computer domain: class

IP address/subnet mask:
192.168.0.1/255.255.255.0 (NIC 1, data/Preboot eXecution Environment [PXE] network)
192.168.1.100/255.255.255.0 (NIC 2, integrated Lights-Out [iLO] network)

Active Directory domain name: class.local

Active Directory down-level domain name: class

Class files copied to the C:\ClassFiles folder.


Exercise 3 Hardware layout


[Figure: six student stations connected to a blade rack (one blade enclosure with six ProLiant BL20p G2s) through the data/PXE network (192.168.0.x) and the iLO network (192.168.1.x), with a SAN connection to the MSA1000]

This figure shows the overall classroom configuration. At the center, a blade rack
consists of a single-phase power enclosure with two single-phase power supplies
installed in bays 1 and 2. One blade server enclosure contains two GbE2
Interconnect Switches and six ProLiant BL20p G2 server blades. Each server
blade is connected to the shared MSA1000 using the embedded StorageWorks
SAN Switch 2/8.
Six student stations are connected to the GbE2 interconnect switches. NIC 1 of
each student server is connected to the left (side A) GbE2 Interconnect Switch,
providing data and the PXE network. NIC 2 of each student server is connected to
the right (side B) GbE2 interconnect switch, providing the iLO network.


Exercise 4 Blade server assignments


[Figure: each student station mapped to its blade enclosure bay; the MSA1000 with embedded SAN Switch 2/8 is shared by all server blades]

Each student group has a dedicated ProLiant BL20p G2/G3 server blade in the
corresponding bay as the student group number. For example, student group 4 has
a dedicated server blade in bay 4.


Exercise 5 MSA1000 configuration

LUN 6 (blade 6)
LUN 5 (blade 5)
LUN 4 (blade 4)
LUN 3 (blade 3)
LUN 2 (blade 2)
LUN 1 (blade 1)

The MSA1000 has at least five 18.2GB disk drives configured with RAID 5. Six
logical drives (LUNs) were created and assigned to their respective server blades.
Access to the individual LUNs is controlled by Selective Storage Presentation
(SSP).
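
The SSP arrangement can be pictured as a simple access map: each LUN is presented only to the HBA of its matching server blade. The Python sketch below is illustrative only; SSP itself is configured on the MSA1000 (for example, through its CLI or the Array Configuration Utility), not through code like this, and the blade names are placeholders.

# Illustrative model of the classroom SSP assignments (names are placeholders).
ssp_map = {lun: f"blade-{lun}" for lun in range(1, 7)}   # LUN n -> blade n

def can_access(blade, lun):
    # True if SSP presents this LUN to this blade's HBA.
    return ssp_map.get(lun) == blade

assert can_access("blade-4", 4) and not can_access("blade-4", 3)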


Exercise 6 GbE2 Interconnect Switch configuration
Each GbE2 interconnect switch is configured identically, as shown in the
following table. Virtual Local Area Networks (VLANs) are used to logically group
the GbE2 ports to provide isolated access from the student stations to their
respective server blades.
VLAN number   GbE2 ports               Deployment server   Server blade
1 (default)   13, 14, 15, 16, 17, 18   N/A                 N/A
11            1, 2, 19                 1                   1
12            3, 4, 20                 2                   2
13            5, 6, 21                 3                   3
14            7, 8, 22                 4                   4
15            9, 10, 23                5                   5
16            11, 12, 24               6                   6

Ports 17 and 18 are the crosslink ports. Ports 13, 14, 15, and 16 belong to server
blades in bays 7 and 8, which are empty. All these ports are left in the default
VLAN 1.
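
Because each student pair must reach only its own server blade, it is worth checking that no port lands in two VLANs and that all 24 ports are assigned. The short Python sketch below encodes the table above and verifies that isolation; it is a planning aid, not GbE2 switch configuration syntax.

# Encode the classroom VLAN plan and verify the port groups do not overlap.
vlans = {
    1:  [13, 14, 15, 16, 17, 18],    # default: crosslinks plus empty bays 7/8
    11: [1, 2, 19],  12: [3, 4, 20],  13: [5, 6, 21],
    14: [7, 8, 22],  15: [9, 10, 23], 16: [11, 12, 24],
}

all_ports = [port for ports in vlans.values() for port in ports]
assert len(all_ports) == len(set(all_ports)), "a port is in two VLANs"
assert sorted(all_ports) == list(range(1, 25)), "all 24 ports accounted for"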


ProLiant BL p-Class
Server Blades and
Infrastructure
Module 2



Objectives
Discuss the HP ProLiant BL p-Class system anatomy
Describe the server blade enclosure
Compare the differences among the ProLiant BL20p G2,
BL20p G3, BL30p, and BL40p server blades
List and describe the server blade options
Network interconnectivity
Storage connectivity
Power infrastructure

Design the ProLiant BL p-Class power infrastructure


Objectives
After completing this module, you should be able to:
Discuss the HP ProLiant BL p-Class system anatomy
Describe the server blade enclosure
Compare the differences among the ProLiant BL20p Generation 2 (G2), BL20p G3,
BL30p, and BL40p server blades
List and describe the ProLiant BL p-Class server blade options such as the:
Network interconnectivity options
Storage connectivity options
Power infrastructure
Design the ProLiant BL p-Class power infrastructure


ProLiant BL p-Class anatomy


Server blades
Server blade sleeve
Server blade enclosure

Interconnect options
Power infrastructure

ProLiant BL p-Class anatomy


The ProLiant BL p-Class system includes the following components:
Server blades
Server blade sleeve - Required for ProLiant BL30p server blades only
Server blade enclosure
Signal backplane
Power backplane
Management module
Interconnect options
Interconnect switches
RJ-45 Patch Panels
Power infrastructure
Power enclosure with power supplies
Mini bus bar or scalable bus bar
2-to-1 Power Enclosure Connector Kit


ProLiant BL p-Class server blades

BL20p G2    BL20p G3    BL30p    BL40p

ProLiant BL p-Class server blades


The ProLiant BL p-Class server blades portfolio consists of:
ProLiant BL20p G2
ProLiant BL20p G3
ProLiant BL30p
ProLiant BL40p


HP BladeSystem Solutions I Planning and Deployment

ProLiant BL p-Class Server Blades and Infrastructure

ProLiant BL20p G2
Increases the capabilities of the original
two-processor server blade
533MHz front-side bus
2MB or 1MB L3 cache or 512KB L2 cache
Smart Array 5i
Interconnects
Three NICs, two SANs, and one iLO

Local I/O port for USB, video, network, and


serial access

Compatible with the original server blade


enclosure
Targets the small database and application
server markets


ProLiant BL20p G2
The ProLiant BL20p G2 is a high-performance 2P server blade designed for enterprise
availability. As a mid-tier applications server blade, it increases the capabilities of the original
two-processor server blade.
The ProLiant BL20p G2 features a 533MHz front-side bus with one or two of the following
Intel Xeon processors:
3.2GHz with 2MB or 1MB Level 3 (L3) cache
3.06GHz with 512KB L2 cache or 1MB L3 cache
2.8GHz with 1MB L3 cache or 512KB L2 cache
The integrated Smart Array 5i drive controller offers Ultra3 performance and optional battery-backed write cache (BBWC). Standard interconnects include three NICs, two SAN ports, and one local I/O port on the front of the server blade that supports USB, video, network, and serial access.
The ProLiant BL20p G2 is compatible with the original server blade enclosure.
With the addition of SAN connectivity, the ProLiant BL20p G2 moves into the small database
and application server markets but maintains its role as a leading web-hosting,
e-commerce, streaming media, and messaging server.


ProLiant BL20p G3
Processors
Faster 3.2GHz processors
1MB cache

Smart Array 6i
Optional 128MB BBWC

Interconnects
Four NICs, two SANs, and one iLO
Compatible with the original server blade enclosure


ProLiant BL20p G3
The next generation of the BL20p server blade is a dual-processor-capable server blade that
mounts in either the original or the enhanced server blade enclosure. Important components
include:
Processors - The ProLiant BL20p G3 features an approximately 10% processor performance boost over the BL20p G2. It can be configured with one or two 3.2, 3.4, or 3.6GHz Intel Xeon processors (all with Hyper-Threading and EM64T technology and the Intel E7520 chipset) and up to 8GB of PC2-3200 DDR2 SDRAM memory. It has 1MB of cache memory on the processor chip.
Smart Array 6i - The ProLiant BL20p G3 ships with a Smart Array 6i controller and has an optional, fully transportable 128MB BBWC to protect data from system, hard boot, and power failures.
Interconnects - The ProLiant BL20p G3 includes four integrated Gigabit NICs, dual Fibre Channel host bus adapters (HBAs), and one iLO port.


BL20p G2 or G3 front panel LEDs


1. Unit identification LED
2. Health LED
3. NIC 1 LED
4. NIC 2 LED
5. Hard drive activity LED
6. Power on/standby LED


BL20p G2 or G3 front panel LEDs


Six LEDs on the front of the ProLiant BL20p G2 or G3 server blade indicate server status.
1. Unit identification LED
Blue - Flagged
Blue flashing - Management mode
Off - No remote management
2. Health LED
Green - Normal status
Flashing - Booting
Amber - Degraded status
Red - Critical status
3. NIC 1 LED
Green - Linked to network
Green flashing - Network activity
Off - No activity
4. NIC 2 LED
5. Hard drive activity LED
6. Power on/standby LED
Green - On
Amber - Standby (power available)
Off - Unit off


Internal view BL20p G2


Internal view BL20p G2


1. Smart Array 5i controller
2. Standard NIC module (shown) or Fibre Channel Mezzanine Card
3. DC filter module
4. DC to DC power converter
5. DIMM slots (four)
6. Battery
7. Processor power module slot 1
8. System Maintenance Switch (SW4)
9. System Switch (SW3)
10. System Configuration Switch (SW1)
11. Processor socket 1 (populated)
12. Fan connectors
13. SCSI backplane board connector
14. Processor socket 2
15. Processor power module slot 2
16. Optional BBWC Enabler


ProLiant BL30p
Features and benefits
Optimized for compute density and external
storage solutions
Double the density of ProLiant BL20p series
Dual-processor capable
Fibre Channel SAN support
Single aggregated iLO port

Requires the BL30p blade sleeve and enhanced server blade enclosure
Power infrastructure
Enables reuse of existing hardware
Supports all interconnect options
Split power distribution


ProLiant BL30p
The ProLiant BL30p server blade is a mid-tier applications server blade that is optimized for
compute density and external storage solutions. It has double the density of the ProLiant
BL20p series and is capable of supporting dual processors. It also offers Fibre Channel SAN
support and a single aggregated iLO port, which enable the reuse of existing interconnects and
power infrastructure options.
Note: The ProLiant BL30p server blade does not offer hot-pluggable drives or a Smart Array
controller.
The ProLiant BL30p server blade sleeve is required for the ProLiant BL30p server blade only.
Each sleeve holds two ProLiant BL30p server blades and requires an enhanced server blade
enclosure.
The power infrastructure of the ProLiant BL30p has a split power distribution design that is
fully compatible with existing hardware and interconnect options. HP also offers the optional
Dual Power Input Kit, which enables you to attach two power enclosures to a mini bus bar.
New balcony card form factor
The ProLiant BL30p features a similar Fibre Channel solution as the ProLiant BL20p G2 or
G3, but with a new balcony card that stacks on the mezzanine card. This new card:
Prevents waste of materials in upgrade situations and saves on cost
Improves installation and accessibility
Enables future standardization among all server blades


BL30p front panel LEDs and buttons


1. Unit identification LED
2. Health LED
3. NIC 1 LED
4. NIC 2 LED
5. Hard drive activity LED
6. Power LED and button


BL30p front panel LEDs and buttons


1. Unit identification LED
Blue - Flagged
Blue flashing - Active remote management
Off - No active remote management
2. Health LED
Green - Normal status
Flashing - Booting
Amber - Degraded status
Red - Critical status
3. NIC 1 LED
Green - Linked to network
Off - No activity
4. NIC 2 LED
Green flashing - Network activity
Off - No activity
5. Hard drive activity LED
Green flashing - Activity
Off - No activity
6. Power LED and button
Green - On
Amber - Standby (power available)
Off - Unit off


Internal view BL30p


Internal view BL30p


1. Power button and LED board cable connector
2. Fan assembly connectors (two)
3. System maintenance switch (SW1)
4. Processor socket 2
5. Adapter card connectors (two)
6. Battery
7. Power converter module
8. DIMM slots (two)
9. Processor socket 1 (populated)
10. Hard drive cable connector
11. Fan assembly
12. Hard drive cage


ProLiant BL40p
Drives blade technology through
the data center
Is suited to ERP and CRM
databases


ProLiant BL40p
The ProLiant BL40p builds on the success of the ProLiant BL20p by adding processors,
memory, hard drives, and I/O option slots.
Designed for mission-critical applications, the ProLiant BL40p drives blade technology
through the data center. In addition to front-end and application server functionalities, the
ProLiant BL40p provides a high-performance, back-end solution that is suited to large
databases such as enterprise resource planning (ERP) and customer relationship management
(CRM).


BL40p front panel LEDs and buttons


1. Unit identification LED
2. Internal Health LED
3. External Health LED
4. NIC 1 LED
5. NIC 2 LED
6. NIC 3 LED
7. NIC 4 LED
8. NIC 5 LED
9. Power LED and button


BL40p front panel LEDs and buttons


1. Unit identification LED
Blue - Flagged
Blue flashing - Management mode
Off - No active remote management
2. Internal Health LED
Green - Normal when server blade is powered on
Off - Normal when server blade is in standby mode
Amber - System degraded
Red - System critical
3. External Health LED
Green - Normal when server blade is powered on
Off - Normal when server blade is in standby mode
Amber - Redundant fan failed
Red - Critical fan failure
4. NIC 1 LED
Green - Linked to network
Green flashing - Network activity
Off - No activity
5. NIC 2 LED
6. NIC 3 LED
7. NIC 4 LED
8. NIC 5 LED
9. Power LED and button
Green - On
Amber - Standby (power available)
Off - Unit off


Internal view BL40p


Internal view BL40p


1. Processor socket 3
2. LED/power switch board connector
3. Processor power module (PPM) slot 3
4. SCSI Inter-IC (I2C) cable connector
5. PPM slot 4
6. Processor socket 4
7. System battery
8. SCSI backplane fan connector
9. PCI-X mezzanine board connector
10. PCI-X mezzanine board
11. 64-bit/100MHz PCI-X slot 2
12. 64-bit/100MHz PCI-X slot 1
13. PCI-X mezzanine power module
14. BBWC enabler
15. Ethernet pass-through board
16. RJ-45 connectors
17. NIC I/O board
18. Smart Array 5i Plus memory module
19. System board
20. DC power converter connector
21. DIMM slots 1 through 6
22. System board power module
23. Channel B SCSI connector
24. System maintenance switch (SW3)
25. Channel A SCSI connector
26. Processor socket 1
27. SCSI power connector
28. PPM slot 1
29. System ID switch (SW1)
30. Processor fan connector
31. PPM slot 2
32. Processor socket 2
33. LED/power switch board


Comparing ProLiant BL p-Class server blades (1 of 2)

Features               BL20p G2                BL20p G3               BL30p                 BL40p
Processors             Xeon 3.2GHz, 3.06GHz,   Xeon 3.6GHz, 3.4GHz,   Xeon 3.2GHz;          Xeon MP 1.5/2.0GHz;
                       or 2.80GHz;             or 3.2GHz              533MHz bus            400MHz bus
                       533MHz bus
Number of processors   Maximum of 2            Maximum of 2           Maximum of 2          Maximum of 2
Standard/max RAM       512MB / 8GB maximum     1024MB / 8GB maximum   1024MB / 4GB maximum  512MB / 12GB maximum
Array controller       Integrated Smart        Integrated Smart       Integrated            Integrated Smart
                       Array 5i Plus           Array 6i               ServerWorks chipset   Array 5i Plus
NIC                    Three NICs,             Four NICs,             Two NICs,             Five NICs,
                       one iLO NIC             one iLO NIC            one iLO NIC           one iLO NIC
Drive bays             Two hot-plug SCSI       Two hot-plug SCSI      Two ATA drive bays    Four hot-plug SCSI
                       drive bays              drive bays                                   drive bays

Comparing ProLiant BL p-Class server blades (1 of 2)


This table compares the features of the ProLiant BL20p G2, BL20p G3, BL30p, and BL40p servers, including:
Processors - Intel Xeon models and number of processors
Standard and maximum memory - Error checking and correcting (ECC) double data rate (DDR)
Array controller - Integrated HP Smart Array or ServerWorks chipset
NICs - Number and type
Drive bays - SCSI or ATA


Comparing ProLiant BL p-Class server blades (2 of 2)

Features               BL20p G2           BL20p G3           BL30p              BL40p
Slots                  No slots           No slots           No slots           Two PCI-X slots
Chassis                6U form factor     6U form factor     3U form factor     6U form factor,
                                                                                four slots wide
Server management      iLO Advanced       iLO Advanced       iLO Advanced       iLO Advanced
Power                  Rack-centralized   Rack-centralized   Rack-centralized   Rack-centralized
Fibre Channel
storage connectivity   Yes                Yes                Yes                Yes

Comparing ProLiant BL p-Class server blades (2 of 2)


This table compares the features of the ProLiant BL20p G2, BL20p G3, BL30p, and BL40p servers, including:
Slots - Number and type
Chassis - Form factor
Server management - iLO Advanced
Power - Rack-centralized external hot-pluggable power
Fibre Channel storage connectivity - Optional


Server blade enclosure overview

Power backplane
Management module
Signal backplane

Server blade enclosure overview


The ProLiant BL p-Class server blade enclosure has 10 bays. The two outside bays are for the interconnect options: either the RJ-45 Patch Panel 2 or the GbE2 Interconnect Switch. The eight interior bays can house two ProLiant BL40p server blades, eight ProLiant BL20p G2 or G3 server blades, or 16 half-height BL30p server blades (or combinations of the three models). The power backplane, the management module, and the signal backplane are identified on the server blade enclosure pictured in the slide graphic.
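
The mixing rule amounts to a simple bay budget: a BL40p spans four bays, a BL20p takes one bay, and two half-height BL30p blades share one bay in a sleeve. The Python sketch below is an informal check of that arithmetic under those assumed widths, not an HP configuration tool.

# Bay budget for one p-Class server blade enclosure (8 server bays).
# Assumed widths: BL40p = 4 bays, BL20p = 1 bay, BL30p = 0.5 bay (two per sleeve).
BAY_WIDTH = {"BL40p": 4.0, "BL20p": 1.0, "BL30p": 0.5}

def fits(blades):
    # blades: dict of model -> count; True if the mix fits in 8 bays.
    return sum(BAY_WIDTH[model] * count for model, count in blades.items()) <= 8

assert fits({"BL40p": 2})                             # two BL40p fill the enclosure
assert fits({"BL20p": 8})                             # eight BL20p G2/G3
assert fits({"BL30p": 16})                            # sixteen half-height BL30p
assert fits({"BL40p": 1, "BL20p": 2, "BL30p": 4})     # one mixed example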


Enhanced server blade enclosure


Backward-compatible with all
infrastructure pieces
Upgrade kits are available
Components
New power and signal backplane
New server blade management
module
Enhanced server enclosure

Key changes
iLO port aggregation
Power infrastructure


Enhanced server blade enclosure


The ProLiant BL20p G3 and BL30p server blades require the enhanced server blade enclosure.
The enhanced enclosure is fully backward-compatible so it supports all current servers in the
ProLiant BL20p, BL30p, and BL40p series. The enclosure also supports all current
interconnect options. Upgrade kits are available for the original server blade enclosures.
Components of the enhanced server blade enclosure include:
New power and signal backplane
New server blade management module
Key changes
Integrated Lights-Out (iLO) port aggregation - All server blade iLO network connections are accessible through a single aggregated iLO port on the server enclosure management module.
Power infrastructure - Enclosure DC power is split between the A and B rails and not shared at the power backplane.
The upgraded enclosures are offered with and without HP ProLiant Essentials Rapid
Deployment Pack (RDP) licenses.
Note: The common iLO port on the rear of the signal backplane indicates if the enclosure is a
new model or if the enhanced backplane enclosure option kit is installed.
Eventually the original server blade enclosure will be retired and the enhanced server blade
enclosure will become the standard.


Server blade enclosure upgrade


Power down the server
blade enclosure
Slide out all server blades
and interconnects 1 inch
Before installing the new
components, remove the:
Management module
Signal backplane
Power backplane


Server blade enclosure upgrade


The enclosure upgrade kit enables a field upgrade of the original server blade enclosure to the
enhanced server blade enclosure. The kit contains three pieces of hardware:
Power backplane
Signal backplane
Server blade management module
Because the brackets are attached with thumbscrews, the upgrade process requires no tools.
After you have access to the backplane, the upgrade can be completed quickly.
Before installing the new components
1. Power down the server blade enclosure.
2. Slide out all server blades and interconnects 1 inch.
3. Remove the:
Management module.
Signal backplane.
Power backplane.
Install the enhanced components
1. Install a new management module with a single iLO port, signal backplane, and power
backplane.
2. Reseat the server blades and interconnects and power on the server blade enclosure.
3. Update the firmware in the power enclosure.


Server blade connectivity and power options


Network interconnectivity
Patch panels
Gigabit Ethernet switches

Storage connectivity
Local
Network

Power infrastructure
Power supplies
Bus bars and bus boxes


Server blade connectivity and power options


Ethernet network signals are routed through the ProLiant BL p-Class system components for network interconnectivity. The ProLiant BL20p G2 and BL40p server blades support network-attached storage (NAS) through embedded NICs. In addition, the ProLiant BL20p G2 and BL40p server blades have the optional ability to quickly access enterprise data in Fibre Channel SANs.
Network interconnectivity - Five interconnect options, including patch panels and Gigabit Ethernet switches
Storage connectivity - Local and network storage options
Power infrastructure - A unique, rack-centralized power subsystem including power supplies and bus bars to provide redundant, scalable power to all server blades in a rack


Network interconnectivity options


RJ-45 Patch Panel
RJ-45 Patch Panel 2
GbE Interconnect Switch
F-GbE (fiber uplinks)
C-GbE (copper uplinks)

GbE2 Interconnect Switch


F-GbE2 (fiber uplinks)
C-GbE2 (copper uplinks)


Network interconnectivity options


The ProLiant BL p-Class line offers these interconnect options:
ProLiant BL p-Class RJ-45 Patch Panel with 32 RJ-45 connectors - Each of the four network controllers on the server blades maps directly to one of the 32 (16 on each side) RJ-45 ports on the back of the enclosure.
ProLiant BL p-Class RJ-45 Patch Panel 2 - The Fibre Channel pass-through is accessible only when using the ProLiant BL20p G2 or G3 and BL30p server blades.
GbE Interconnect Switch
F-GbE Interconnect Switch (fiber) Kit with four LC 1000SX and four RJ-45 10Base-T/100Base-TX uplink connectors - This switch is identical to the copper switch except for the fiber-based cable media in the interconnect module.
C-GbE Interconnect Switch (copper) Kit with four RJ-45 10Base-T/100Base-TX/1000Base-T and four RJ-45 10Base-T/100Base-TX/1000Base-T uplink connectors - Each network controller on the server matches to a 10/100 port on a switch. Each switch is an industry-standard Layer 2 virtual LAN (VLAN) switch, providing a pair of Gigabit Ethernet uplink ports and a pair of 10/100 Ethernet ports.
GbE2 Interconnect Switch
F-GbE2 switch with fiber uplinks - This high-performance switch consolidates NIC speeds to Gigabit Ethernet speeds. It provides Layer 3 through 7 and 10GbE expandability. It enables cable consolidation and Fibre Channel pass-through.
C-GbE2 switch with copper uplinks - This switch is identical to the fiber switch except for the copper-based cable media in the interconnect module.
The F-GbE and C-GbE Interconnect Kits contain two GbE interconnect switches, which reduce the number of required server-networking ports from 32 to one. The F-GbE Interconnect Kit includes two Dual TSX Interconnect Modules; each provides two 10Base-T/100Base-TX copper-based uplinks with RJ-45 connectors and two 1000Base-SX short-haul fiber uplinks with LC connectors. The C-GbE Interconnect Kit includes two QuadT Interconnect Modules; each provides two 10Base-T/100Base-TX and two 10Base-T/100Base-TX/1000Base-T copper-based RJ-45 uplinks.


Storage connectivity options


Local disk
Available for all ProLiant BL p-Class servers

Networked storage
Integrate into data centers using industry-standard
and highly available technologies

NAS and SAN fusion


StorageWorks NAS, iSCSI, SAN
Third-party SAN connectivity
Support for Fibre Channel connectivity


Storage connectivity options


ProLiant BL p-Class servers provide industry-standard options for accessing storage. These
options enable instant storage connectivity to the storage environment, protect existing storage
infrastructure investments, and provide paths for future storage growth.
Local disk
Local disk storage is available for all ProLiant BL p-Class servers. Optional disk drives offer HP server blade customers the flexibility of storing operating system and data files on industry-standard, locally attached disk drives. HP local storage solutions are available in varying capacities and fault tolerance levels.
Note: The ProLiant BL30p servers ship with no disk drives. Up to two internal ATA disk
drives can be added as an option. Data on these drives can be protected with software
mirroring. BIOS Enhanced RAID is a feature that allows a ProLiant BL30p server blade
running Red Hat Linux to boot from a secondary disk drive when the bootable partition on the
primary disk drive becomes unavailable or out of sync with the mirror on the secondary drive.
However, this feature is currently supported under Red Hat Linux only.
Networked storage
ProLiant BL p-Class servers are designed to seamlessly integrate into data centers using industry-standard, secure, and highly available technologies. Networked storage is pervasive in IT data centers and is accessible using the ProLiant BL p-Class Fibre Channel option card and NICs.
NAS and SAN fusion
The ProLiant BL p-Class server blades are optimized for HP StorageWorks arrays and can also
attach to some third-party SANs. In addition, the server blades can integrate with fused NAS
and SAN configurations to work seamlessly in file and block environments.


NAS and SAN fusion (continued)


With the fusion of SAN and NAS, you can design a storage architecture that incorporates
application, database, and file serving functionality.
HP StorageWorks NAS, iSCSI, and SAN - Accessing storage over Ethernet IP networks allows instant access to StorageWorks NAS products. StorageWorks arrays include Enterprise Virtual Array (EVA), Modular Smart Array (MSA), and XP storage. You can access SAN devices using the StorageWorks IP Storage Router 2122-2. You can access StorageWorks storage arrays and Fibre Channel tape libraries using the optional dual-port Fibre Channel card installed on the server blade.
Third-party SAN connectivity - ProLiant BL p-Class servers incorporate industry-standard components and technologies to ensure interoperability with third-party components. These designs are validated by extensive compatibility testing and supported through industry-wide, cross-vendor cooperative support networks. Third-party SAN vendors participating in HP server blade certifications include EMC, HDS, and IBM.
The ProLiant BL20p G2, G3, and BL40p server blades support redundant Fibre Channel SAN
connectivity:
SAN connectivity on the ProLiant BL20p G2 or G3 server blade - The ProLiant BL20p G2 and G3 include a dual-port Fibre Channel mezzanine card (2Gb) that provides Fibre Channel capability. When using the ProLiant BL20p G2 or G3 Fibre Channel option, the RJ-45 Patch Panel 2 or GbE2 Interconnect Switch is required.
SAN connectivity on the ProLiant BL40p server blade - The ProLiant BL40p server blade has two PCI-X slots (64-bit/100MHz) that enable Fibre Channel SAN connectivity through the use of host bus adapters (HBAs).
The Fibre Channel mezzanine card includes an HBA routed to opposite interconnect bays for
redundant SAN connectivity with the following storage solutions:
StorageWorks MSA1000
StorageWorks Modular Array (MA) 8000, Enterprise Modular Array (EMA) 12000, and
EMA16000 (HSG80)
StorageWorks EVA V2
StorageWorks EVA3000
StorageWorks XP48, XP128, XP512, and XP1024
EMC Symmetrix and CLARiiON


Power infrastructure
Rack-centralized power subsystem provides redundant,
scalable power to all server blades
Key benefits
Eliminates the cost and cables of PDUs
Provides redundant power for current and future generation
ProLiant BL p-Class server blades

Main components
Power input
Power enclosure
Hot-pluggable power supplies

Investment protection
Same components in the enhanced server blade enclosure
Advantage over IBM

Two UPS options



Power infrastructure
The ProLiant BL p-Class system uses a unique, rack-centralized power subsystem that provides
redundant, scalable power to all server blades in a rack. The two key benefits of this design are
that it:
Eliminates the cost and cables of power distribution units (PDUs)
Provides redundant power for current and future generation ProLiant BL p-Class server
blades
The main components of the ProLiant BL p-Class power subsystem are:
Power input
Single-phase - 208VAC to 260VAC
Three-phase - 208VAC to 260VAC; supports more server blades than single-phase
Direct current (DC) - -48VDC
Power enclosure
Holds the power supplies
Ships in both single-phase and three-phase power enclosure models
Offers AC input redundancy
Hot-pluggable power supplies - Convert AC input to -48VDC power for all server blades in the rack
Note: The power supplies are front-accessible hot-plug units and can be installed in various redundant configurations.
Halving the blade form factor with the ProLiant BL30p essentially doubled the power requirements for a given amount of space. The introduction of the HP enhanced server blade enclosure was driven primarily by this need for power.


Power infrastructure (continued)


The newly designed server blade enclosure offers investment protection because you can reuse
the current power supplies and power enclosures. This compatibility is an advantage over IBM
because IBM frequently changes its power supplies, forcing customers to replace most of their
infrastructure. The HP strategy allows a quick, easy, and cost-effective upgrade.
To further protect the data center, HP strongly recommends establishing DC redundancy and
surge protection through an uninterruptible power supply (UPS). HP currently offers two UPSs
for BladeSystem customers:
The R12000XR is available for single-phase power customers.
The PowerWare 9305 three-phase UPS is available for three-phase customers.
Note: For more information, visit: http://www.eoptions.biz


Server blade enclosure


DC power distribution

A side DC
B side DC

Server blade enclosure DC power distribution


To accommodate the higher power load, the enhanced server blade enclosure is split in half
across the backplane between Side A and Side B, with three power supplies providing
-48VDC to the Side A bus bar and three other power supplies feeding the Side B bus bar. Side
A powers bays 1 through 5 and Side B powers bays 6 through 10.


Power enclosure
AC power redundancy


Power enclosure AC power redundancy


For AC power redundancy with the enhanced server blade enclosure, you must have a second
power enclosure. As illustrated in the graphic, each side (A and B) needs two AC inputs for
full redundancy.
Power supply redundancy is possible within a single power enclosure, depending on the
number of server blades installed and how they are configured.


Single and three-phase powering

North America
Single-phase (left) - NEMA L6-30P, 208V, 30A, 60Hz
Three-phase (right) - NEMA L15-30P, 250V, 30A, 3-phase, 60Hz

International
Single-phase (left) - BLUE 32A IEC-309, 220-240V, 30A, 50Hz
Three-phase (right) - RED 32A IEC-309, 380-415V, 20A, 3-phase, 50Hz

Single and three-phase powering

[Diagram: the three phases of 220VAC from the step-down transformer secondary]

Single-phase powering
Power is generally transported over high-voltage transmission lines from a power company substation to a local step-down transformer on or near the building that houses the server blade infrastructure.
The secondary windings of the step-down transformer (the side that feeds the building) are wound in three separate windings (called the three phases), each producing about 220VAC phase to phase. Power considerations include:
Phase to ground produces 110VAC and phase to phase produces 220VAC. One of these single 220VAC phases is used to power the single-phase enclosure.
Side A of the enclosure should be on a different phase than Side B.
A single-phase plug has three pins. Two carry the phase-to-phase 220VAC; the third pin is used for grounding and if the equipment also requires 110VAC phase to ground.
Three-phase powering
With three-phase power, all three of the separate phases of 220VAC are used. Three-phase plugs
have more than three pins. The extra pins are used for the additional phases.
Only three connectors plus the ground in the plug are needed for three-phase power. The three
phases are wired across each of the power connectors and the negative side is ground. The
international plugs often contain more pins, but operate in the same manner.
Important! One of the connectors (the keyed one) is the ground connector; the plugs must be
keyed so that the connectors align properly.
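
A worked comparison makes the capacity difference concrete. Assuming a 208V, 30A feed (the North American plugs described above), the apparent power available per feed is:

P_{1\phi} = V \times I = 208\,\mathrm{V} \times 30\,\mathrm{A} \approx 6.2\,\mathrm{kVA}

P_{3\phi} = \sqrt{3} \times V_{LL} \times I = \sqrt{3} \times 208\,\mathrm{V} \times 30\,\mathrm{A} \approx 10.8\,\mathrm{kVA}

At the same current rating, a three-phase feed therefore delivers roughly 73% more power, which is one reason only three-phase input supports the densest blade configurations.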


Three-phase power enclosure (front)

Fits the standard and enhanced server blade enclosures: full width, 3U height
Converts 200/240VAC to -48VDC
Accessibly located at the bottom or center of the rack
Supports power supply and AC input redundancy
Includes an intelligent power management module
Supports up to six hot-pluggable power supplies

Three-phase power enclosure (front)


To maximize the lifecycle of the HP enhanced server blade enclosure, three-phase power is
required. Only three-phase power can support the maximum number of double-dense blades in
all configurations.
The three-phase power enclosure holds a maximum of six hot-plug power supplies. One power
enclosure can support up to one-half a rack of ProLiant BL p-Class server blades redundantly.
Two power enclosures can support a full 42U rack of ProLiant BL p-Class server blades
redundantly.
The three-phase power enclosure:
Fits the standard and enhanced server blade enclosures: full width, 3U height
Converts 200/240VAC to -48VDC
Is accessibly located at the bottom or center of the rack
Supports power supply and AC input redundancy
Includes an intelligent power management module

Rev. 4.41

75

HP BladeSystem Solutions I Planning and Deployment

ProLiant BL p-Class Server Blades and Infrastructure

Single-phase power enclosure (front)

Fits the standard and enhanced server blade enclosures: full width, 3U height
Converts 200/240VAC to -48VDC
Accessibly located at the bottom or center of the rack
Supports up to four hot-pluggable power supplies
Recommended for small configurations

Single-phase power enclosure (front)


The single-phase power enclosure supports up to four hot-pluggable power supplies and is
recommended for small configurations. In addition, the single-phase power enclosure:
Fits the standard and enhanced server blade enclosures: full width, 3U height
Converts 200 to 240VAC to -48VDC
Accessibly located at the bottom or center of the rack
The single-phase power enclosure can only fully support one enclosure of double-dense blades.

Rev. 4.41

76

HP BladeSystem Solutions I Planning and Deployment

ProLiant BL p-Class Server Blades and Infrastructure

Power enclosure (rear)


B Side
A Side

Power enclosure (rear)


The single-phase and three-phase power enclosures share the same input and output structure.


Power management module

Monitors the operation of the power supplies and power enclosure


Regulates the voltages to match all voltages in other power enclosures
Stores system faults and other data
Reports thermal, power, and protection events by communicating with the
iLO ASIC
Communicates server blade location, power supply budget, and status
information

Connects to the enclosures above and below it with RJ-45 cables


Provides a power enclosure service port

Power management module


The power management module, mounted on the rear of the power enclosure, performs the
following functions:
Monitors the operation of the power supplies and power enclosure
Regulates the voltages to match all voltages in other power enclosures
Stores system faults and other data that can be viewed
Reports thermal, power, and protection events by communicating with the iLO
application-specific integrated circuit (ASIC)
Communicates server blade location, power supply budget, and status information
Connects to the enclosures above and below it with RJ-45 cables
Provides a power enclosure service port for troubleshooting and diagnostics


Dual Power Input Kit


Two power enclosures can be connected to a pair of mini
bus bars
Contains
Two dual-power boxes
Installation instructions


Dual Power Input Kit


Deploying a full 42U rack of ProLiant BL p-Class blades requires using two pairs of mini bus
bars. In addition, when using a ProLiant BL p-Class server blade enclosure with enhanced
backplane components, you need two power enclosures for power redundancy.
The dual power boxes provided in the Dual Power Input Kit enable two power enclosures
(instead of one) to be connected to a pair of mini bus bars, providing additional power
redundancy.
The Dual Power Input Kit contains:
Two dual-power boxes
Installation instructions


Load balancing and grounding cables

If more than one power enclosure is installed in a rack, install a load balancing cable between them
Attach the grounding cable to the infrastructure to make a solid positive ground (facility DC power only)

Load balancing and grounding cables


If more than one power enclosure is installed in a rack, a load balancing cable must be installed
to connect the power enclosures. Similar to an RJ-45 cable, this cable balances the power
requirements between the power enclosures.
If the rack is not grounded properly, voltage drops may occur. The lack of ground may cause
operational or noise problems. The grounding cable must be attached from the power
enclosures to the facility infrastructure to make a solid positive ground. This satisfies an
enclosure-to-enclosure grounding requirement in facility DC power environments.


Power distribution options


Power distribution options


In the ProLiant BL p-Class power subsystem, -48VDC power (from the power enclosures or facility -48VDC power) is distributed from the power supplies in the power enclosures to the server blade enclosures through one of three power distribution options:
Scalable bus bar - Supports up to five server blade enclosures and two power enclosures
Mini bus bar - Supports up to three server blade enclosures and one power enclosure
Power bus box - Supports a single server blade enclosure
The bus bars are attached to rack rails using hinges that enable the bus bars to swing open from the center, providing easy rear access to the server blade enclosure, network cables, and management modules. Two mini bus bar configurations can be stacked to fill a standard 42U rack.
The power enclosures, bus bars, and cabling are right/left redundant. The right side, or Side A, is a mirror image of the left side, or Side B, when facing the rear of the rack. Each side has its own AC feed and DC distribution path. Half of the power supplies in the 3U power enclosure provide power through the Side B bus bar and the other power supplies in the power enclosure provide power through the Side A bus bar.
These power distribution options offer:
Serviceability - Circuit breakers on the bus bars enable you to shut off the power to individual server blade enclosures for safe serviceability.
Flexibility - A variety of deployment sizes are available, from single enclosure test or evaluation configurations through full-rack blade deployments.
Cable reduction - One of the most important advantages of server blades is the reduced amount of cabling necessary. One pair of mini bus bars can consolidate the power for up to 48 servers into four output cables from the power enclosure.


Scalable bus bars


Supports one or two power
enclosures
Supports up to five server blade
enclosures
Allows for future growth and
flexibility


Scalable bus bars


The scalable bus bar supports one or two 3U power enclosures and up to five 6U server blade
enclosures. This bus bar solution enables future growth and flexibility in two ways:
Customers can deploy this solution initially with less than the maximum supportable
configuration, and then add ProLiant BL p-Class server blades and server blade enclosures
as their computing needs grow.
Customers can mount other devices (such as switching hardware) in the same rack above
the 6U server blade enclosures.


Mini bus bars


Each pair of mini bus bars
supports
One power enclosure
Three server blade enclosures

A full 42U rack of ProLiant BL


p-Class server blades requires
Two pairs of mini bus bars
Two power enclosures
A Dual Power Input Kit

Rev. 4.41

HP Restricted

2 83

Mini bus bars


The mini bus bars ship in pairs; each pair supports one 3U power enclosure and up to three 6U
server blade enclosures. This solution offers flexibility because other devices can be mounted
above it in a rack.
By adding a Dual Power Input Kit plus a second power enclosure, you can install a second pair
of mini bus bars above the lower set to fill a 42U rack.


Scalable and mini bus bar examples


Scalable and mini bus bar examples


This slide shows how scalable and mini bus bars are implemented in a rack of server blades.


Power bus boxes

Occupy 9U of rack space
Distribute -48VDC power from one power enclosure to one server blade enclosure
Ideal for evaluating a single rack of blades
A built-in circuit breaker fully protects the server blade enclosure

1. DC input cables
2. DC power out to couplers on blade enclosures
3. Circuit breaker

Power bus boxes


Occupying 9U of rack space, the power bus box distributes direct current (-48VDC) power from one power enclosure to one server blade enclosure.
Power bus boxes are installed when the blade system is fed directly from facility power to the
backplane or when the backplane is energized through bus bars. Power bus boxes are available
in different configurations, depending on the number of server blade enclosures supported and
the available input power.
Power bus boxes are ideal for evaluating a single rack of blades. The integrated circuit breaker
fully protects the server blade enclosure. DC circuit breakers enable you to shut off the power
to individual server blade enclosures for safe physical access without interrupting the operation
of other blade enclosures.
Each server blade bay is individually fused to protect the backplanes and avoid disrupting other
server blades in the enclosure. These fuses are self-resetting.
-48VDC building power
Power bus boxes can be used when powering directly from a -48VDC facility feed. Some
facilities have -48VDC power generally distributed throughout the building from large copper
bus bars and a DC cable arrangement. This -48VDC system also supports emergency lighting
and emergency communications equipment.
The hot side of the powering is the negative side and the positive side is grounded. The
advantage of negative powering is that the polarity inhibits cable corrosion from electrolysis,
especially in outside cables. In addition, battery power acts as a large voltage filter that
eliminates damaging high-voltage pulses. Should there be a loss of facility power, the
equipment and lighting will still operate (sometimes for hours, depending on loading) on the
battery power.


Power choices and double density

HP recommends three-phase power for the ProLiant BL30p

Single enclosure powering (nonredundant) enables the highest density:
Up to 48 blades with mini bus bars
Up to 54 blades with scalable bus bars
Up to 96 blades with stacked mini bus bars

Dual power enclosures provide redundant power and lower densities:
Up to 48 blades with mini bus bars
Up to 80 blades with stacked mini or scalable bus bars

Power choices and double density


Single enclosure powering enables the highest density:
Up to 48 blades with mini bus bars
Up to 54 blades with scalable bus bars
Up to 96 blades with stacked mini bus bars
Note: A single power enclosure offers no AC input redundancy.
Dual power enclosures in the enhanced server blade enclosure provide redundant power and
lower densities:
Up to 48 blades with mini bus bars
Up to 80 blades with stacked mini or scalable bus bars
A fully redundant power configuration consists of two mini bus bars installed on a rack
containing four power enclosures and five server blade enclosures. This additional power
provision can safely accommodate up to 80 ProLiant BL30p server blades and 10 interconnect
switches.
Three-phase power supports more server blades and interconnect switches than single-phase
power. Because of the high processor density (up to 160 processors in one rack) of the ProLiant
BL30p server blades, HP strongly recommends three-phase dual power enclosures with mini
bus bars.


Configurations using double-dense blades


Nonredundant power configurations

Bus bar    Server blades   Height   Redundancy
Mini       48              21U      None
Mini       96              42U      None
Scalable   54              27U      None
Scalable   80              36U      None

Redundant power configurations

Bus bar    Server blades   Height   Redundancy
Mini       48              24U      3 + 3 P/S and AC
Mini       80              42U      3 + 3 P/S and AC
Scalable   48              24U      3 + 3 P/S and AC
Scalable   80              36U      5 + 1 P/S, no AC

Configurations using double-dense blades


This table lists the advantages double-dense ProLiant BL30p server blades can offer a data
center customer. The most significant advantage is in acquisition costs. When customers buy
48 double-dense server blades, they buy three fewer power enclosures and one less mini bus
bar than if they buy other blades.
Customers also save on costs associated with maintaining the servers in a data center. As
shown in the table, double-dense 48-server configurations can occupy shorter racks, and
consequently are closer to the cooling floor vents. The reduced height of the cabinets also
allows better air management in the data center.
HP does not recommend deploying more than 48 ProLiant BL30p server blades in a single cabinet. At this density, one cabinet produces more than 300W per sq ft. This can present a cooling problem for data centers, most of which are designed to accommodate less than 100W per sq ft.
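
The rack heights in these tables follow from the enclosure sizes (6U per server blade enclosure, 3U per power enclosure) and, for double-dense blades, 16 BL30p blades per server blade enclosure. Reading the nonredundant mini bus bar rows under those assumptions:

48 \text{ blades} = 3 \times 16, \qquad 3 \times 6\,\mathrm{U} + 1 \times 3\,\mathrm{U} = 21\,\mathrm{U}

96 \text{ blades} = 6 \times 16, \qquad 6 \times 6\,\mathrm{U} + 2 \times 3\,\mathrm{U} = 42\,\mathrm{U}

The redundant 24U rows simply add a second 3U power enclosure to the 21U configuration.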


Nonredundant power delivery


Server blade enclosure
Enhanced server blade enclosure

Nonredundant power delivery


In the original server blade enclosure, redundancy was provided using six power supplies, three
of which could power all eight servers. The double-dense ProLiant BL30p server blades drove
the development of the enhanced server blade enclosure. In this enclosure, three power supplies
are required to power each side.


Redundant power delivery


Server blade enclosure
Enhanced server blade enclosure

Redundant power delivery


All ProLiant BL p-Class server enclosures support redundant A and B power feeds.
Mini bus bars
The enhanced server blade enclosure requires individual power to the A and B sides. As the
diagram on the slide shows, one enhanced server blade enclosure requires one power enclosure;
you must install two power enclosures for redundancy. HP offers a Dual Power Input Kit to add
a second power enclosure. A Power Enclosure Connectivity Kit is also available to upgrade a
mini bus bar system to redundantly supply more than three enhanced server blade enclosures.
Scalable bus bars
By design, a scalable bus bar supports two power enclosures and dual A and dual B feeds.
Thus, it provides full redundancy without modification.


Redundant power configurations


Assumes three-phase power, maximum blade configuration, and GbE2 interconnect switches


Redundant power configurations


The enhanced server blade enclosure does not allow 96 server blades in one cabinet with
redundancy. With mini bus bars, the extra power enclosures occupy 6U of rack space. When
scalable bus bars are used, the two power enclosures cannot supply sufficient power for full
redundancy at the maximum configuration.



Power zones in a full 42U rack


A full-rack 42U configuration with two pairs of mini bus bars requires two power zones.
To distinguish the two power zones, set all the power configuration switches on management
modules in the upper zone (zone 2) to the up position. The power configuration switches on the
management modules in the lower zone (zone 1) remain in the down (default) position.

Item          Description
Power zone 2  Zone 2 switches in the up (secondary) position
Power zone 1  Zone 1 switches in the down (default) position



Learning check
1. List the four ProLiant BL p-Class server blades discussed in this module.
___________________________________________________________________
___________________________________________________________________
2. You can install up to _____ hard drives in the ProLiant BL30p server blade.
a. Eight
b. Six
c. Four
d. Two
3. ProLiant BL20p G2 server blades cannot be mixed with ProLiant BL40p server blades in
the same enclosure.
True
False
4. The ProLiant BL30p _______________ the per-enclosure density of ProLiant BL20p
server blades.
a. Decreases
b. Doubles
c. Triples
d. Quadruples
5. What are two main types of interconnect options in the ProLiant BL p-Class line of server
blades?
__________________________________________________________________
__________________________________________________________________
6. When would you use a power bus box?
__________________________________________________________________


Learning check (continued)


7. Why is three-phase power recommended?
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
8. Describe the DC power scheme that is used by the enhanced server blade enclosure.
______________________________________________________________________
______________________________________________________________________
9. What must you use if there is more than one power enclosure installed in a rack?
______________________________________________________________________
10. Deploying a full 42U rack of ProLiant BL p-Class server blades requires using two pairs of
mini bus bars.
True
False
11. Single-phase power enclosures can only power up one side of a rack.
True
False


Site Planning and Infrastructure Design
Module 3


Objectives
After completing this module, you should be able to:
Plan a deployment site for HP BladeSystem solutions
Plan a target data center environment
Design the power infrastructure of HP ProLiant BL p-Class servers


Site planning
Begin planning early
Ensure environment meets specifications
Data centers should be modular
Use the Site Installation Preparation Utility
  Uses individual platform power calculators
  Calculates the impact of racks with varying loads

Site planning
The high-density ProLiant BL platform requires enterprise-level power and produces
enterprise-level heat loads, driving organizations to begin the planning process earlier in the
procurement cycle.
For maximum performance and availability from HP BladeSystem solutions, ensure that the
operating environment meets the required specifications for:
Floor strength
Space
Power
Electrical grounding
Temperature
Airflow
Given the trend toward increased density, data centers also should be designed for scalability
and upgrades. Plans for the data center should be based on a modular design that provides
sufficient headroom for increasing power and cooling needs. A modular design provides the
flexibility to scale capacity in the future when planned and unplanned changes become
necessary.
HP provides a Site Installation Preparation Utility to assist you in approximating the power and
heat load per rack for facilities planning. The Site Installation Preparation Utility is a Microsoft
Excel spreadsheet that uses individual platform power calculators and enables you to calculate
the full environmental impact of racks with varying configurations and loads.
Note: The Site Installation Preparation Utility can be downloaded from:
http://h18001.www1.hp.com/partners/microsoft/utilities/power.html


Comparing site requirements of server blades and traditional servers
Compare based on equal computational power
Traditional servers require greater floor area
Server blades offer
  More transactions per rack unit
  More measured transactions per watt of power consumed

Comparing site requirements of server blades and traditional servers


To compare a server blade with a traditional server, the systems must have equal computational
power. In general, the all-inclusive architecture of general-purpose servers requires more power
and space per system than server blades.
Example
The following server blade and traditional server have equal computational power and
therefore can be compared:
ProLiant DL320 G2 with an Intel Pentium 4 processor and 512KB cache running at
3.06GHz
ProLiant BL10e G2 with an Ultra-Low Voltage Pentium M processor with 1MB Level
2 (L2) cache running at 1GHz
When compared with general-purpose servers, ProLiant BL p-Class server blades deliver:
More transactions per unit (U) of space
More measured transactions per watt
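A minimal sketch of how such a comparison can be computed follows; the throughput, height, and wattage figures below are hypothetical placeholders, not HP measurements:

```python
# Compare two systems of equal computational power on the two metrics above:
# transactions per rack unit (U) and transactions per watt.
def per_u(tps: float, height_u: float) -> float:
    return tps / height_u

def per_watt(tps: float, watts: float) -> float:
    return tps / watts

tps = 1_000.0  # both systems are assumed to deliver equal throughput

# Hypothetical figures: a 1U traditional server drawing 250W versus a blade
# that amortizes to 0.75U of enclosure space and draws 60W.
print(f"Traditional: {per_u(tps, 1.0):.0f} tps/U, {per_watt(tps, 250.0):.1f} tps/W")
print(f"Blade:       {per_u(tps, 0.75):.0f} tps/U, {per_watt(tps, 60.0):.1f} tps/W")
```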



ProLiant BL p-Class power calculators


HP provides power calculators for ProLiant BL p-Class servers to:
Review the server loading and identify the number of power supplies required for
redundancy
Approximate the electrical and heat load per server to assist facilities planning
The power calculators are in Microsoft Excel format and allow you to enter values for the:
Line input voltage
Input phase
Number of processors, expansion cards, and hard drives
By changing the variables, you can find a configuration that suits your facilities and computing
requirements.
Note: You can download the ProLiant power calculators from:
http://h18001.www1.hp.com/partners/microsoft/utilities/power.html
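The following sketch illustrates the kind of arithmetic the calculators perform; every per-component wattage here is an invented placeholder, so use the actual HP calculators for real planning figures:

```python
# Minimal sketch of a power-calculator style estimate: sum per-component
# loads, then derive input current and heat load. All wattages are assumed.
WATTS = {"base_system": 60, "processor": 90, "memory_module": 10,
         "hard_drive": 12, "mezzanine_card": 15}

def server_watts(processors=2, memory_modules=4, drives=2, mezzanines=1):
    return (WATTS["base_system"]
            + processors * WATTS["processor"]
            + memory_modules * WATTS["memory_module"]
            + drives * WATTS["hard_drive"]
            + mezzanines * WATTS["mezzanine_card"])

w = server_watts()
line_voltage, power_factor = 208, 0.95    # inputs the real calculator asks for
amps = w / (line_voltage * power_factor)  # approximate input current
btu_per_hr = w * 3.412                    # 1W dissipates 3.412 BTU/hr of heat
print(f"{w} W, {amps:.2f} A at {line_voltage}V, {btu_per_hr:.0f} BTU/hr")
```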


Environmental planning
Power consumption and heat load requirements
High-line AC power consumption
  208V, three-phase
  120V to neutral
  Two 30-amp 208/230V circuits per rack (present)
  Two 50-amp circuits per rack (future)

Environmental planning
A common and economical method of supplying power to high-density data centers is to use a
208V three-phase system known as high-line AC power. HP BladeSystems require high-line
AC power with 208V between any two transformer windings, giving 120V to neutral.
In new data centers, HP recommends providing two 30-amp 208/230V three-phase circuits per
rack. Future high-density server environments might require up to two 50-amp circuits per
rack.
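As a worked example of what one such circuit delivers (the 80% continuous-load derating below is a common wiring practice assumed here, not a figure from this course):

```python
import math

# Usable capacity of one 30A, 208V three-phase circuit.
volts_line_to_line = 208
amps = 30
derate = 0.80  # assumed continuous-load derating, common practice

va = math.sqrt(3) * volts_line_to_line * amps  # ~10.8 kVA per circuit
usable = va * derate                           # ~8.6 kVA continuous
print(f"{va/1000:.1f} kVA raw, {usable/1000:.1f} kVA usable per circuit")
# Two such circuits per rack therefore supply roughly 17 kVA of
# continuous capacity.
```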


Planning the data center environment
Total physical space requirements for the data center
  Cooling footprint of all the racks
  Free space for aisles, ramps, and air distribution
Considerations for optimum placement of HVAC units
  Geometry of the room
  Heat load distribution of the equipment
Special factors affect airflow distribution
  Supply plenum static pressure
  Airflow blockages beneath raised floors
  Configurations that mix airflow in the data center

Planning the data center environment


The total physical space required for the data center includes the cooling footprint of all the
racks plus free space for aisles, ramps, and air distribution.
Typically, a width of at least two standard floor tiles is needed in the cold aisles between racks,
and a width of at least one unobstructed floor tile is needed in the hot aisles to facilitate cable
routing.
The geometry of the room and the heat load distribution of the equipment determine the best
placement of the HVAC units. HVAC units can be placed inside or outside the data center
walls. Consider placing liquid-cooled units outside the data center to avoid damage to electrical
equipment that could be caused by coolant leaks.
Place HVAC units perpendicular to the rows of equipment and aligned with the hot aisles.
Rooms that are long and narrow can be cooled effectively by placing HVAC units around the
perimeter. Large, square rooms might require HVAC units to be placed around the perimeter
and through the center of the room.
High-density data centers require special attention to factors that affect airflow distribution,
such as supply plenum static pressure, airflow blockages beneath raised floors, and
configurations that result in airflow mixing in the data center.
Important! To plan for system growth within the infrastructure, you should size a solution
based on the maximum number of server blades you plan to deploy into the racks.


HP recommendations
Use front-to-back ambient air for cooling
Front and rear rack doors must be adequately ventilated
Cover all gaps in the rack and open bays in the server blade enclosure with blanking panels
Observe spatial requirements when installing racks

HP recommendations
ProLiant BL p-Class server blades use front-to-back ambient air for cooling. Therefore, the
front rack door must be adequately ventilated to allow ambient room air to enter the cabinet,
and the rear door must be adequately ventilated to allow the warm air to escape from the
cabinet.
When any vertical space in the rack is not filled by server blades or rack components, the gaps
between the components cause changes in airflow through the rack and across the server
blades. Cover all gaps in the rack with blanking panels and all open bays in the server blade
enclosure with blanks to maintain proper airflow.
HP 10000 and Compaq 9000 Series racks provide proper server blade cooling from flow-through perforations in the front and rear doors that provide 65% open area for ventilation.
Total floor space
To enable servicing and adequate airflow, observe the following spatial requirements when
deciding where to install an HP, Compaq, Telco, or third-party rack:
Leave a minimum clearance of 63.5cm (25 inches) in front of the rack.
Leave a minimum clearance of 76.2cm (30 inches) in the back of the rack.
Leave a minimum clearance of 121.9cm (48 inches) from the back of the rack to the rear of
another rack or row of racks.
For more information, refer to the HP ProLiant BL System Common Procedures Guide and HP
ProLiant BL System Best Practices Guide available from:
http://www.hp.com/products/servers/proliant-bl/p-class/info
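The clearances above lend themselves to a simple layout check; a minimal sketch:

```python
# Validate a proposed rack placement against the minimum clearances listed
# above (values from the text, in centimeters).
MIN_FRONT_CM = 63.5          # 25 inches in front of the rack
MIN_REAR_CM = 76.2           # 30 inches behind the rack
MIN_BACK_TO_BACK_CM = 121.9  # 48 inches from rack back to the next row's rear

def clearances_ok(front_cm, rear_cm, back_to_back_cm):
    return (front_cm >= MIN_FRONT_CM
            and rear_cm >= MIN_REAR_CM
            and back_to_back_cm >= MIN_BACK_TO_BACK_CM)

# Example layout: 70cm front aisle, 80cm rear aisle, 130cm back to back.
print(clearances_ok(70, 80, 130))  # True
```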


Data center thermal modeling

[Slide diagram: thermal model of a 120-rack data center with raised-floor cooling and no return ducts, averaging 1.8kW per rack (power density 75W per sq ft), with HVAC units and PDUs around the perimeter and three 30kW racks highlighted]

Data center thermal modeling


In the last decade, processor thermal dissipation has increased by a factor of 10. This
increase in total heat, and to a greater extent the thermal density, has resulted in the need for
very low thermal resistance cooling solutions at the device and system levels.
System thermal dissipation, especially for high-end and midrange servers, has undergone a similar
increase. In high compute density data centers populated with rack-mounted servers, a standard
rack can draw 15kW of power. Server blades, with their increased density, further increase the
power consumption per square foot. For high power densities, considering energy balance
alone when sizing air conditioning capacity is not sufficient. It is also important to model the
airflow and temperature distribution to ensure proper inlet air temperature to systems.
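For the energy-balance half of the problem, converting rack load to air-conditioning capacity is straightforward; the sketch below applies the standard conversions (1W = 3.412 BTU/hr and 1 ton of cooling = 12,000 BTU/hr) to the rack loads mentioned in this module:

```python
# Convert rack heat load to air-conditioning capacity using the standard
# conversions 1 W = 3.412 BTU/hr and 1 ton of cooling = 12,000 BTU/hr.
def cooling_tons(rack_kw: float) -> float:
    return rack_kw * 1000 * 3.412 / 12_000

# 1.8kW average rack, 15kW dense rack, 30kW BL30p cabinet (from this module).
for kw in (1.8, 15, 30):
    print(f"{kw:>5.1f} kW -> {cooling_tons(kw):.2f} tons of cooling")
```

Energy balance alone, however, says nothing about where the cool air actually goes, which is why the airflow and temperature modeling described above is still required.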


Correcting cooling problems

[Slide diagram: Smart Cooling thermal models of Case Model CM75 (power density 75W per sq ft), comparing the base layout against the recommended layout on a temperature scale]

Correcting cooling problems


HP Labs has produced Smart Cooling, a tool for implementing server deployments in a data
center. In the base example, three racks of ProLiant BL30p server blades producing 30kW each
were added to a data center with an average density of 75W per sq ft.
Thermal modeling reveals that serious problems would result if the 30kW servers were added
directly to the data center using its original design. Furthermore, because the circled rack was
put near the aisle, hot air recirculated to the adjacent cool air aisle. Even though only three
server racks were added, this change could cause serious cooling issues for the data center.
The major advantage of thermal modeling is that customers can redesign their data centers
before they install server deployments. In the recommended arrangement, floor tiles were
rearranged, added, and subtracted in different places to produce a more even heat distribution.


Effects of intuitive data center management

[Slide diagrams, front views: a 5% open vent tile results in cooler inlet temperature; a 95% open vent tile results in hotter inlet air temperature]

Effects of intuitive data center management


Thermal modeling also corrects mistakes that might arise from taking an intuitive approach to
solving data center cooling problems. Thermal modeling can track the unusual properties of air
much better than human intuition, and can therefore prevent dangerous mistakes in
implementing changes to a data center.
The two pictures on the left compare the results of thermal models taken in HP Labs in Palo
Alto, California. HP researchers powered on the 10kW cabinets shown on the right to test the
thermal conditions they would produce. The racks generated pockets of hot air toward the top
edges of the cabinets at the end of the row. Intuitively, the researchers added a 95% open tile at
the end of the row so that the recirculation, which had produced the pockets of hot air, could be
stopped.
The tile made the problem worse, forming a cone of air that left the edges of the row too hot.
Thermal modeling revealed that a 5% open tile would produce better results. The researchers
found that the tile cooled the edges of the rows, cutting the dangerous recirculation.


Thermal modeling capabilities and best practices
Thermal modeling shows the impact of
  New machines
  AC failure
  AC maintenance shutdown
HP recommendations
  Perform room-level and local area energy and airflow balance
  Avoid local hot spots and high airflow demand
  Optimize hot and cold air separation
  Follow HP rack installation guidelines

Thermal modeling capabilities and best practices


In addition to demonstrating the impact of implementing new machines, thermal modeling can
reveal the consequences of unexpected and scheduled AC downtime. Understanding potential
problems and contingencies helps data center managers manage risks.
HP recommends that data center managers assess room-level and local area energy and airflow
balance in existing facilities. This information helps to map local hot spots and high airflow
demand. HP best practices direct equipment arrangement to optimize hot and cold air
separation. This approach achieves results better than if the data center managers strive to keep
the data center at a single temperature. Other basic cooling strategies include following HP
rack installation guidelines.
The top slide graphic shows the results of a simulated failure of one air conditioning unit in a
data center that averages 225W per sq ft. Within 35 seconds, equipment began to overheat to
the point of critical shutdown. The bottom graphic shows the effect of regional thermal
management, which uses a workload redistribution mechanism to move large compute loads
around the data center in the event of infrastructural problems such as a cooling or power
delivery failure. Regions can span sections of rows with a shared chilled air supply. This
technique generalizes the workload redistribution policy across rows.
HP offers thermal modeling services for power densities greater than 10kW per cabinet. HP
Professional Services can work directly with customers to optimize existing data centers for
more efficient cooling and energy consumption.


Thermal management at the processor level
Airflow is critical to successful cooling
Intake vents provide airflow over the heatsink to cool the fins
HP BladeSystems are designed to function uninterrupted at 95°F (35°C)

Thermal management at the processor level


Experts agree that data center designs must examine cooling from the data center level to the
server component level. Server components use a significant amount of power, which is
dissipated as heat in power supplies, integrated circuits, and processors. Airflow within the
system is critical to successful cooling.
The air flowing past the processor heatsink is warmed by contact with the fins. Each molecule
of air can carry a specific amount of heat. The goal is to pass as many cool molecules of air as
possible over the heatsink fins and then move the heated air out of the system. The size and
placement of intake vents must provide enough airflow over the heatsink to keep the fins cool.
It is important that air flows freely through the intake vents.
The equipment heat load determines the cooling requirements of HP BladeSystem solutions.
The effective cooling area includes the cooling footprint of the equipment.
All HP BladeSystems are designed to handle the maximum processor specifications, the
toughest applications, and environments up to 95°F (35°C) ambient air temperature, with a
safety margin to spare. If the heatsinks and flow paths are left as designed, overheating will not
occur.
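A standard sea-level approximation relates the airflow that must pass over the components to the heat they dissipate: Q(BTU/hr) = 1.08 × CFM × ΔT(°F), so CFM = 3.412 × W / (1.08 × ΔT). The example wattage and temperature rise below are hypothetical:

```python
# Airflow (in cubic feet per minute) needed to carry away a given heat load,
# using the standard sea-level approximation Q(BTU/hr) = 1.08 * CFM * dT(degF).
def cfm_required(watts: float, delta_t_f: float) -> float:
    return 3.412 * watts / (1.08 * delta_t_f)

# Hypothetical example: a 400W server allowed a 25degF air temperature rise.
print(f"{cfm_required(400, 25):.0f} CFM")  # ~51 CFM
```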


Learning check
1. Name five operating environment attributes that must meet specifications in site planning.
_________________________________________________________________________
_________________________________________________________________________
________________________________________________________________
2. What tool should you use to review the server loading and identify the number of power
supplies required for redundancy?
____________________________________________________________________
3. What should you do before every server blade deployment to ensure that the data center
will have proper heating and cooling?
____________________________________________________________________
4. What is the HP recommendation for data centers with power densities greater than 10kW
per cabinet?
____________________________________________________________________


Using the HP ProLiant BL p-Class Sizing Utility
Module 3 – Lab 1

Objectives
After completing this lab, you should be able to:

Access the ProLiant BL p-Class Sizing Utility

Use the sizing utility graphical user interface (GUI)

Configure the blade enclosures

Configure the server blades

Configure the rack-centralized power subsystem

Obtain the equipment list summary

Reset the sizing utility

Determine the maximum rack density

Requirements
To complete this lab, you need:

A computer with a Microsoft Windows operating system and Microsoft Excel installed

Access to the Internet or a copy of the HP ProLiant BL p-Class Sizing Utility

Overview
The ProLiant BL p-Class Sizing Utility is an Excel-based tool that reveals the
power load on a server. From this information, the sizing utility determines the
number of power supplies required for a given configuration. The sizing utility
also approximates the electrical and heat load per server for facilities planning. It
provides data on:


Power

Cooling

Weight

Configuration


Exercise 1 – Accessing the ProLiant BL p-Class Sizing Utility
The HP ProLiant BL p-Class Sizing Utility is posted on the HP website for
download or over-the-Internet use. To access it, perform these steps:
Note
It is likely that your instructor has downloaded this tool and made it available
on a network share. Consult with your instructor for the location of this tool
and access instructions.
1. Start a web browser and enter http://www.hp.com in the address field.
2. Click Servers, then HP BladeSystem Power Calculators. Click Open to open
   the file or save to download the file to your local computer.


Exercise 2 – Using the GUI
The following graphic represents the initial ProLiant BL p-Class Sizing Utility
screen. Before you begin using the tool, review the Instructions, Purpose, and
Important sections.

[Figure: Initial ProLiant BL p-Class Sizing Utility screen]


The buttons across the top of the sizing tool are as follows:
Configurator – Returns you to the initial (home) screen of the sizing utility.
Power Summary – Displays the power summary for the configured rack.
Equipment List – Displays the equipment list for the configured rack,
including the part descriptions, quantity, and part numbers.
Server Information – Displays information about the ProLiant BL p-Class
servers, server blade enclosures, interconnects, power enclosures, and power
supplies. Read this information before using the tool to familiarize yourself
with the different components and options.
Help – Displays help information and frequently asked questions (FAQs).
Enclosure x – Displays the configuration screen for a particular enclosure.
Note that enclosure 6 is the top-most enclosure, and enclosure 1 is the
bottom-most enclosure within the rack. Each enclosure configuration screen
contains a Clear Enclosure x button, which resets the enclosure configuration.
Rack & Power – Displays the configuration screen for power selection,
such as input voltage, A/C line input phases, A/C redundancy, and power
enclosure type.

Exercise 3 – Configuring the blade enclosures
The blade enclosure configuration consists of three steps:
Selecting an interconnect option
Choosing to configure blades individually or identically
Selecting the type of blade enclosure

Selecting an interconnect option
Each blade enclosure supports a variety of interconnect options:
RJ-45 patch panel
RJ-45 patch panel 2
Gigabit Ethernet (GbE) Interconnect Switch
  C-GbE – With copper uplinks (10/100/1000Base-T)
  F-GbE – With fiber uplinks (1000Base-SX)
GbE2 Interconnect Switch with or without the Fibre Channel storage connectivity
  C-GbE2 – With copper uplinks (10/100/1000Base-T)
  F-GbE2 – With fiber uplinks (1000Base-SX)
Note
Each interconnect kit contains two interconnects.
1. Click one of the Enclosure x buttons, for example Enclosure 6, to display the
   blade enclosure configuration section.
2. Click Select an Interconnect and choose the appropriate interconnect option
   for the blade enclosure. For this exercise, select C-GbE2 Interconnect Switch
   with Storage Connectivity.


3. Verify that the correct interconnect option displays on both sides of the blade
   enclosure, as shown in the following graphic.

Choosing to configure blades individually or identically
Next, you must choose whether the blade server enclosure will have differently or
identically configured server blades.
To configure all server blades identically, from the next drop-down menu, select
Make all Bays same as Bay 1.

Selecting the type of blade enclosure
Finally, select the type of blade enclosure from the next drop-down menu. Two
options are available:
Blade Enclosure – This is the first-generation blade enclosure, which
supports these server blades:
  BL20p
  BL20p G2
  BL40p
Blade Enclosure w/ Enhanced Backplanes – This is the new blade
enclosure, which supports these server blades:
  BL20p
  BL20p G2 and G3
  BL30p
  BL40p
The blade enclosure with the enhanced backplanes is backward compatible to
support server blades other than the BL30p. The BL30p server blades require the
ProLiant BL p-Class Server Blade Sleeve.
For this exercise, select Blade Enclosure w/ Enhanced Backplanes w/ 8RDP
licenses.


Exercise 4 – Configuring the server blades
On the initial utility screen, the configuration area for the server blades is
vertically divided into eight bays, numbered one through eight. Depending on your
previous selection, each bay can have a different server blade, or all bays can have
identical server blades.
This configuration area is also divided horizontally into a Top Blades section and
a Bottom Blades section. This division reflects the structure of the new server blade
enclosure (blade enclosure with enhanced backplanes) and supports the half-height
ProLiant BL30p server blades. Unless you select the enhanced enclosure and
ProLiant BL30p server blades, the Bottom Blades area is unusable.
The left column of the blade enclosure section provides a configuration legend for
server blade type, SKU number, processor configuration, disk drives, type of
mezzanine card, and memory configuration. A list of preconfigured SKUs can be
accessed by clicking the SKU # link. For reference, the SKU list and description is
provided in the following graphic.
The drop-down menus in each bay column are context-sensitive; their values
depend on your previous choices. For example, if you selected BL20p G2 as your
server blade, the SKU # menu lists SKUs with BL20p G2 server blades only.
Furthermore, the sizing utility will not allow you to enter an unsupported
configuration.

1. Make the following selections for the Bay 1 server blade:
   Type of Blade: BL20p G2
   SKU #: 10 (selecting a SKU prepopulates the processor option
   depending on what is contained within the SKU)
   # of Processors: 2
   Type of Processor: Xeon 3.06GHz 1MB
   HDD 146GB 10K: 2
   Type of Mezzanine: Fibre Channel
   Memory 2GB: 4
   Note the following:
   The Bottom Blades section is unusable because you have selected a
   full-height server blade (BL20p G2).
   The remaining bays are configured with the identical server blade
   configuration because you previously selected the Make all Bays same
   as Bay 1 option.
   Certain configuration options, such as # of PCI(s), are unavailable
   because the chosen server blade does not support such options.
2. Click the Clear Enclosure 6 button to clear enclosure 6 and configure the
   blade enclosures as listed in the following table. If the table does not specify
   a configuration option, make your own selection.

Enclosure 6 (top): Empty
Enclosure 5: C-GbE2 with storage connectivity; all blades are the same;
  enhanced blade enclosure with 8 ProLiant Essentials Rapid Deployment
  Pack (RDP) licenses
  Bay 1: BL30p, SKU number 4, two processors, two 60GB ATA drives,
  Fibre Channel adapter, two 1GB memory modules
Enclosure 4: C-GbE2; configure individual blades; enhanced blade enclosure
  Bay 1: BL20p, SKU number 2
  Bay 2: BL20p G2, SKU number 10
  Bay 3 top blade: BL30p, SKU number 2
  Bay 3 bottom blade: BL30p, SKU number 2
  Bay 4 top blade: BL30p, SKU number 4
  Bay 4 bottom blade: BL30p, SKU number 4
  Bay 5: BL40p, SKU number 6
Enclosure 3: C-GbE2 with storage connectivity; all blades are the same;
  enhanced blade enclosure
  Bay 1: BL20p G2, SKU number 14
Enclosure 2: F-GbE2; configure individual blades; Generation 1 blade
  enclosure with 8 RDP licenses
  Bay 1: BL40p, SKU number 2
  Bay 5: BL20p, SKU number 2
  Bay 6: BL20p G2, SKU number 5
  Bay 7: BL20p, SKU number 2
  Bay 8: BL20p G2, SKU number 6
Enclosure 1 (bottom): C-GbE; all blades are the same; enhanced blade enclosure
  Bay 1: BL20p, SKU number 2

Exercise 5 – Configuring the rack-centralized power subsystem
After configuring the blade enclosures and the server blades, you must determine
the appropriate power configuration, such as:

The line voltage and number of line phases

A/C redundancy

Type of power enclosure and bus bars

You can choose from two power enclosure models, depending on the number of
A/C line input phases at your facility:

The single-phase power enclosure holds a maximum of four hot-plug power
supplies.
The three-phase power enclosure holds a maximum of six hot-plug power
supplies.
Note
The ProLiant BL p-Class server enclosures also support DC power. If your
environment is using DC power, you do not need power enclosures or power
supplies, but instead you must obtain the Facility DC Power Connection kit
from HP.


All power-related options are available in the Rack & Power section, accessible
with the Rack & Power button near the top of the sizing utility screen.
1. Click the Rack & Power button near the top of the sizing utility screen.
2. At the Rack & Power section, you have the option of selecting:
   Type of rack to host the blade enclosures, server blades, and the power enclosures
   Input voltage
   A/C line input phases
   A/C redundancy
   Type of power enclosure
   Type of bus bars
   Power enclosure
   Power supply
   Depending on your selection, the sizing utility displays warning and error
   messages to reflect configuration pitfalls such as no power redundancy.
   The sizing utility also graphically displays the number of power enclosures
   and power supplies, as shown in the preceding graphic.
   Change the power selection options and observe the impact of your choices.


3. Click the Power Summary button at the top of the sizing utility screen to
   display power information for the server blade rack, including:
   Number of power supplies
   Number of power enclosures
   Total system input current
   Current per phase
   Total system input power
   Total system VA
   Total system BTUs per hour
   Total system leakage current per branch
   Total system inrush current per branch
   System weight

Exercise 6 – Obtaining the equipment list summary
To obtain the Bill of Materials (BOM), click the Equipment List button at the top
of the sizing utility screen to display part descriptions, quantities, and part numbers
for the server blade rack components.


Exercise 7 – Resetting the sizing utility
To reset the sizing utility, you must clear each of the blade enclosures. Use the
Enclosure x buttons at the top of the sizing utility and the Clear Enclosure x button
on each enclosure-specific screen.

Exercise 8 – Determining the maximum rack density
To determine the maximum rack density, complete these steps:
1. Reset the sizing utility, and configure the blade enclosures as listed in the
   following table. This configuration represents the maximum theoretical
   density using ProLiant BL30p server blades and six blade enclosures.

   Enclosures 6 (top) through 1 (bottom), identically configured:
   Enclosure configuration: C-GbE2 with storage connectivity; all blades are
   the same; enhanced blade enclosure with 8 RDP licenses
   Server blade configuration: Bay 1 – BL30p, two processors, 1GB of memory

2. Click the Rack & Power button near the top of the sizing utility screen.


3. Select the following options:
   AC Line Input Phases: Three Phase
   AC Redundancy: Redundant
   Power Enclosure with 6 Power Supplies
   Mini Bus Bar
4. What size of rack does this configuration require?
   ............................................................................................................................
   ............................................................................................................................
5. Explain why you came to this conclusion.
   ............................................................................................................................
   ............................................................................................................................
   ............................................................................................................................
   ............................................................................................................................
6. What must you do to resolve this situation and maximize a 42U rack? If
   necessary, review the ProLiant BL p-Class Server Blades and Infrastructure
   module, or discuss this situation with your instructor.
   ............................................................................................................................
   ............................................................................................................................
   ............................................................................................................................
   ............................................................................................................................
   ............................................................................................................................


7. Sketch what the 42U rack maximized with ProLiant BL30p server blades
   would look like, using redundant power and mini bus bars.

   [Sketch area: Maximum density 42U rack configuration]

8. Complete the following statements:
   The maximum number of ProLiant BL30p blades that fit into a 42U
   rack in a redundant power configuration is:
   ...................................................................................................................
   The number of blade enclosures required is:
   ...................................................................................................................
   The number of power supply enclosures required is:
   ...................................................................................................................
   The number of power supplies required is:
   ...................................................................................................................
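If you want to sanity-check your answers, the sketch below totals rack units from enclosure counts. It assumes 6U per server blade enclosure and 3U per power enclosure, consistent with the p-Class hardware discussed earlier in this course; adjust the heights if your configuration differs:

```python
# Total rack units consumed by a p-Class configuration, assuming 6U per
# server blade enclosure and 3U per power enclosure (assumed heights).
def rack_units(blade_enclosures: int, power_enclosures: int) -> int:
    return 6 * blade_enclosures + 3 * power_enclosures

# Try your proposed enclosure counts and compare the totals against 42U:
for blades, power in [(6, 4), (5, 4)]:
    print(f"{blades} blade + {power} power enclosures -> {rack_units(blades, power)}U")
```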


Setting Up and Configuring a p-Class Blade System
Module 3 – Lab 2

Objectives
After completing this lab, you should be able to:

Identify the HP BladeSystem components

Install the power supplies in the power enclosure

Install the interconnects

Cable and power on the system

Requirements
To complete this lab, you will need:


One ProLiant BL p-Class server blade enclosure with supported interconnect option

One single-phase power enclosure with two or more power supplies

One or more ProLiant server blades such as the ProLiant BL20p Generation 2
(G2) or later

One diagnostic cable

HP ProLiant p-Class Server Blade Enclosure Installation Guide


Introduction
The HP BladeSystem installation consists of these steps:
1. Identifying the system components
2. Installing the power enclosure and the server blade enclosure in the
   appropriate rack
3. Installing the power supplies in the power enclosure
4. Installing the interconnects and their respective modules
5. Installing the blade servers
6. Cabling the system
   a. Cabling the management modules
   b. Connecting the bus bars or power bus boxes to the enclosure
   c. Connecting the grounding cable
   d. Connecting the load-balancing cable
   e. Installing facility DC cables
   f. Installing cable brackets
   g. Connecting the iLOs
   h. Connecting the network cables to the interconnects
   i. Connecting the Fibre Channel storage
7. Connecting to the power source and powering up the system


Note
In the classroom environment, most, if not all, of the HP BladeSystem
installation has already been done. This lab focuses on selected installation and
configuration steps. For a complete set of instructions, refer to HP
documentation such as the HP ProLiant p-Class Server Blade Enclosure
Installation Guide.


Exercise 1 – Identifying the HP BladeSystem components
Identify each of the following components and write its description in the space
provided.

Component                    Description


Identifying the server blade enclosure components
The following figure represents the rear view of a server blade enclosure.

1. Which server blade enclosure is shown?
   a. Server blade enclosure
   b. Enhanced server blade enclosure
2. Which interconnect option is shown?
   a. RJ-45 Patch Panel 2
   b. GbE2 Interconnect Switch
3. Which NIC column represents NIC 1 (the default Preboot eXecution
   Environment [PXE]-enabled NIC) for the installed blade servers?
   a.
   b.
   c.
   d.
4. Which NIC column represents NIC 2 for the installed blade servers?
   a.
   b.
   c.
   d.
5. Which NIC column represents NIC 3 for the installed blade servers?
   a.
   b.
   c.
   d.
6. Which NIC column represents the iLO NICs for installed server blades?
   a.
   b.
   c.
   d.
7. Explain how the Fibre Channel signals are routed on the RJ-45 Patch Panel 2.
   ............................................................................................................................
   ............................................................................................................................
   ............................................................................................................................


The following figure represents the rear view of a different server blade enclosure.

1. Which server blade enclosure is shown?
   a. Server blade enclosure
   b. Enhanced server blade enclosure
2. Which interconnect option is shown?
   a. RJ-45 Patch Panel 2
   b. GbE2 Interconnect Switch
3. Match the following description with the correct callout number on the
   preceding graphic.
   a. External 10/100/1000BaseT Ethernet ports for Side A  ..........
   b. External 10/100/1000BaseT Ethernet ports for Side B  ..........
   c. Fibre Channel pass-through ports for Side A  ..........
   d. Fibre Channel pass-through ports for Side B  ..........
   e. Signal backplane  ..........
   f. Blade management module  ..........
   g. Power backplane  ..........


Identifying the GbE2 Interconnect Switch components
The following figure represents the front view of the GbE2 Interconnect Switch.

Match the following description with the correct component.
   a. Blade chassis latch and handle  ..........
   b. DB9 serial connector for access to the command line interface (CLI)
      and menu-driven console  ..........
   c. LED panel for link speed and activity status per port  ..........
   d. Power and management status LEDs  ..........
   e. Power and reset button  ..........
   f. RJ-45 connector link speed and activity LEDs  ..........
   g. Two local-access 10/100/1000BaseT Ethernet switch ports  ..........

Identifying the rear server blade enclosure and power enclosure components
The following figure represents the rear of a server blade enclosure and the power
enclosure, including the power management module and the server blade
management module.

1. Match the following description with the correct component.
   a. DC output power cable pairs for Bus A  ..........
   b. Server blade management link connectors  ..........
   c. Power management link connector to enclosure below  ..........
   d. Power management link connector to enclosure above  ..........
   e. Power enclosure  ..........
   f. Load-balancing signal cable connector  ..........
   g. Server blade management module  ..........
   h. Power management module service port  ..........
   i. DC power input connector for Bus B  ..........
   j. iLO port  ..........
   k. DC power input connector for Bus A  ..........
   l. Grounding cable screw  ..........
   m. Power enclosure AC circuit breakers (to hot-plug power supplies)
      for Bus B (left) and Bus A (right)  ..........
   n. Server blade enclosure  ..........
   o. Power management module  ..........
   p. DC output power cable pairs for Bus B  ..........
   q. Server blade management module service port  ..........
2. When is the grounding cable used?
   ............................................................................................................................
   ............................................................................................................................
3. Explain how the grounding cable is installed.
   ............................................................................................................................
   ............................................................................................................................
   ............................................................................................................................
4. When is it necessary to use the load-balancing signal cable and how is this
   cable used?
   ............................................................................................................................
   ............................................................................................................................
   ............................................................................................................................


The following figure represents the rear of a power enclosure.

5. Match the following description with the correct component.
   a. Power management module  ..........
   b. Grounding cable  ..........
   c. Circuit breakers for Sides A and B  ..........
   d. -48VDC output leads for Side B  ..........
   e. -48VDC output leads for Side A  ..........
   f. 208-250VAC input leads for Side B  ..........
   g. 208-250VAC input leads for Side A  ..........


Exercise 2 – Installing the power supplies in the power enclosure
Complete the following tasks:
1. Indicate where two power supplies would be placed for nonredundant,
   single-phase power to a server blade enclosure (not enhanced). Also indicate
   whether power blanking panels would be used and where.

   Power enclosure

2. Indicate where four power supplies would be placed for redundant,
   single-phase power to a server blade enclosure (not enhanced). Also indicate
   whether power blanking panels would be used and where.

   Power enclosure

3. Indicate where two power supplies would be placed for nonredundant,
   three-phase power to a server blade enclosure (not enhanced). Also indicate
   whether power blanking panels would be used and where.

   Power enclosure

4. Indicate where four power supplies would be placed for redundant,
   three-phase power to a server blade enclosure (not enhanced). Also indicate
   whether power blanking panels would be used and where.

   Power enclosure

5. Indicate where two power supplies would be placed for nonredundant,
   single-phase power to an enhanced server blade enclosure. Also indicate
   whether power blanking panels would be used and where.

   Power enclosure

6. Indicate where four power supplies would be placed for redundant,
   single-phase power to an enhanced server blade enclosure. Also indicate
   whether power blanking panels would be used and where.

   Power enclosure

7. Indicate where two power supplies would be placed for nonredundant,
   three-phase power to an enhanced server blade enclosure. Also indicate
   whether power blanking panels would be used and where.

   Power enclosure

8. Indicate where four power supplies would be placed for redundant,
   three-phase power to an enhanced server blade enclosure. Also indicate
   whether power blanking panels would be used and where.

   Power enclosure


Exercise 3 – Installing the interconnects
Each server blade enclosure requires a pair of interconnects to provide network
and external storage access. The leftmost and rightmost bays of each server blade
enclosure are the interconnect bays.

Using the RJ-45 Patch Panel 2
For configurations using the RJ-45 Patch Panel 2 interconnect, answer the
following questions:
1. Explain how the RJ-45 Patch Panel 2 connects the server blades to an
   external SAN-based storage device such as the HP StorageWorks Modular
   Smart Array 1000 (MSA1000).
   ............................................................................................................................
   ............................................................................................................................
   ............................................................................................................................
2. You are to connect a server blade enclosure with eight ProLiant BL20p G2
   server blades to external Ethernet switches and use all network interface
   controller ports and iLO ports. How many Ethernet cables will run from the
   server blade enclosure?
   ............................................................................................................................

Using the GbE2 Interconnect Switch
For configurations using the GbE2 Interconnect Switch, answer the following
questions:
1. What is required for the GbE2 Interconnect Switch to connect the server
   blades to an external SAN-based storage device such as the MSA1000?
   ............................................................................................................................
   ............................................................................................................................
   ............................................................................................................................


2. The following figure represents the GbE2 Interconnect Switch ports. Label
   each port and explain its functionality.

   [Figure: GbE2 Interconnect Switch with ports 1 through 24 (front panel and chassis rear) and the serial management port]


Exercise 4 – Cabling and powering the system
After all system hardware is installed, you can cable the components. Refer to the
HP ProLiant BL System Best Practices Guide on the documentation CD or to the
HP website at www.hp.com for HP recommendations on ordering cables.
To cable the system:
Cable the management modules.
Connect the bus bars or power bus boxes to the enclosures.
Connect the grounding cable.
Connect the load-balancing signal cable.
If you have facility DC power, install the facility DC cables.
Install cable brackets.
Connect the iLO.
Connect network cables to the interconnects.
Connect the Fibre Channel storage devices.
Connect to your AC or facility DC power sources and power up the system.

Cabling the management modules
The server blade management modules and power management modules are
cabled together in daisy-chain fashion to provide the management link. Each
management module has two management link connectors – one to connect to
enclosures above and one to connect to enclosures below. Cabling the
management modules enables the system to identify the rack topology for power
and data management.
Caution
Do not install NIC cabling or telephone cabling in the management link
connectors. These devices are not supported.
Management modules are used only for information management such as asset
tracking. Disconnecting the management module cabling does not affect system
operation.
On the server blade management module:
The upper management link connects to the upper enclosure.
The lower management link connects to the lower enclosure.


On the power management module:
The right management link connector connects to enclosures above the module.
The left management link connector connects to enclosures below the module.
Note
When configuring a full 42U rack solution with two pairs of mini bus bars, you
must configure two power zones. Refer to the HP ProLiant p-Class Server Blade
Enclosure Installation Guide for more information.
1. In the following figure, draw the correct management module cabling.


2. In the following figure, draw the correct management module cabling.
3. Explain the difference between the previous two server blade enclosures.
   ............................................................................................................................
   ............................................................................................................................
   ............................................................................................................................


Connecting the bus bars or power bus boxes to the enclosures
Refer to the section "Connecting the Bus Bars or Power Bus Boxes to the Server
Blade Enclosure" in the HP ProLiant p-Class Server Blade Enclosure Installation
Guide for explicit steps and figures.

Connecting the grounding cable


If you are using a facility DC power source, you must connect the grounding cable
to the server blade enclosures.
The grounding cable satisfies the enclosure-to-enclosure grounding requirements
in facility DC power environments. Each type of bus bar supports a different
number of enclosures; therefore, each facility DC cable option contains a
grounding cable to support the appropriate number of enclosures.
In the following figure, draw the correct ground cabling.


Connecting the load-balancing signal cable
The load-balancing signal cable enables two power enclosures in a scalable bus
bar configuration to balance their power output for the power load demand of the
system. If your configuration does not have power enclosures because it uses
facility DC power, omit this step.
Important
If the load-balancing signal cable is not installed, the management software
issues alerts.
1. In the following figure, draw the correct load-balancing cabling.


Connecting the iLO
The location of the iLO connections depends on which blade server enclosure
and interconnect option are used.
Blade server enclosure – iLO connections are routed through side B of the
blade server enclosure.
  If using the RJ-45 Patch Panel, iLO connections for each installed
  server blade are located in the right RJ-45 column of the patch panel
  side B.
  If using the GbE2 interconnects, the iLO connections are routed to the
  side B interconnect switch, and their exact external location depends on
  the GbE2 switch configuration.
Enhanced blade server enclosure – iLO connections for all installed
server blades are located on the server blade management module, regardless
of which interconnect option is used.

Complete these steps:
1. In the following figure, indicate the type of blade server enclosure and
   interconnect option used, and draw the correct cabling of iLO connections to
   the management network.
   ............................................................................................................................

   [Figure: Management network]

2. In the following figure, indicate the type of blade server enclosure and
   interconnect option used, and draw the correct cabling of iLO connections to
   the management network.
   ............................................................................................................................

   [Figure: Management network]

3. In the following figure, indicate the type of blade server enclosure and
   interconnect option used, and draw the correct cabling of iLO connections to
   the management network.
   ............................................................................................................................

   [Figure: Management network]


Connecting the Fibre Channel storage devices


Connect the Fibre Channel storage devices to the server blade enclosure
interconnect options.
If using the RJ-45 Patch Panel 2, each server blade Fibre Channel connection is
routed to the front of each RJ-45 Patch Panel 2. Connect the fiber cable to the
appropriate connection on the front of the patch panel, route the cable underneath
the connectors to the back of the server blade enclosure, and connect the other end
to either the appropriate Fibre Channel storage or a SAN switch.
If using the GbE2 Interconnect Switch, each server blade Fibre Channel
connection is routed to the back of each GbE2 Interconnect Switch. Connect the
fiber cable to the appropriate connection on the back of the interconnect switch,
and connect the other end to either the appropriate Fibre Channel storage or a SAN
switch.
Each server blade that supports Fibre Channel storage has two Fibre Channel ports
for redundancy. One port is routed to Side A interconnect, and the other is routed
to Side B interconnect. For storage path redundancy, connect both sides and install
the appropriate dual-storage-path software.

Connecting to the power sources and powering up


After cabling the system, connect to the power source and apply power to the
system.
To power up the system:

WARNING
Ensure that all power enclosure, bus bar, and power bus box circuit breakers
are locked in the off position before connecting any power components.

1. Connect to the power source. If using an AC power source, connect the
power enclosure AC power cords to the appropriate AC outlets. If using
facility DC power, install the facility DC cable kits and connect to the
facility DC power source.

2. Apply power to the facility power connection, if necessary.

3. If using AC power, unlock the circuit breakers on the power enclosure.
Toggle the switches to the on position to apply AC power to the hot-plug
power supplies. Then, lock the switches in the on position.

4. Ensure that the hot-plug power supply LEDs, power enclosure DC power LEDs,
and bus bar power LEDs are green.

Note
Refer to Appendix D, "LEDs, Buttons, and Switches," of the HP ProLiant
p-Class Server Blade Enclosure Installation Guide.

5. Unlock the circuit breaker switches on the bus bars or power bus boxes and
toggle the switches to the on position. This applies DC power to the server
blade enclosures.

6. Ensure that the server blade enclosure DC power LEDs are green.

7. Lock all the circuit breaker switches in the on position. This prevents
anyone from accidentally powering down the system.

Power is now applied to all system hardware.

L3.2 154

Rev. 4.41

HP BladeSystem Solutions I Planning and Deployment

Module 4
ProLiant BL p-Class Network Connectivity Options


Objectives
After completing this module, you should be able to:
Discuss general networking concepts, including:
Virtual LAN (VLAN)
Spanning Tree Protocol (STP)
Port trunking and load balancing
Discuss ProLiant BL p-Class server blade signal routing
Identify the available ProLiant BL p-Class interconnect options
Choose the appropriate interconnect options for HP BladeSystem servers
Describe GbE Interconnect Switch best practices


General networking concepts


ProLiant BL p-Class servers use
Interconnect switches or patch panels
Industry-standard switch technology for networks

IEEE standards
VLAN 802.1Q
STP 802.1D
Trunking 802.3ad (static mode only)


General networking concepts


ProLiant BL p-Class servers use either interconnect switches or patch panels to collect the NIC
signals from the servers and send them out to the network. Redundant features such as
redundant NIC capability provide high availability at the front end.
Industry-standard switch technology for networks
The NICs are on the server blades, but the network signals for each NIC are routed through the
signal backplane to the interconnect options, which plug into the outside bays of the server
blade enclosure, one on each side. The interconnect switches or patch panels provide pass-through of Ethernet network and storage signals to the external network infrastructure.
IEEE standards
The Institute of Electrical and Electronics Engineers, Inc. (IEEE) identifies multiple standards
for various switch technologies. It is important to know which version is supported by each
technology, especially when determining how the switch fits into an existing network or
network plan.
The key IEEE standards supported by ProLiant BL p-Class include:
VLAN 802.1Q
STP 802.1D
Trunking 802.3ad (static mode only)


VLANs
The GbE2 Interconnect Switch provides support for 255
VLANs
VLAN 1 enables communication between all server ports
and uplink ports on the interconnect switches
More VLANs means more GbE2 Interconnect Switch
processor utilization


VLANs
A virtual LAN (VLAN) is a network topology configured according to a logical scheme rather than
a physical layout. It logically segments the network into broadcast domains. It also conserves
bandwidth and improves security by limiting bandwidth to specific domains.
The ProLiant BL p-Class GbE2 Interconnect Switch supports a total of 255 IEEE 802.1Q VLANs.
Interconnect switches are shipped from the factory with all the ports set on VLAN 1, the default
VLAN. The default VLAN enables communication between all server ports and uplink ports on the
interconnect switches. Connectivity is provided to each server blade when it is inserted into the
enclosure and powered on.
The greater the number of VLANs, the greater the GbE2 Interconnect Switch processor utilization.
For maximum interconnect switch performance, be judicious when configuring the number of
VLANs. For example, you might want to isolate the server blade integrated Lights-Out (iLO) ports
from the rest of the NICs by assigning the iLO ports on each interconnect switch to their
own VLAN.
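For illustration, the sketch below shows what 802.1Q tagging does to an Ethernet frame: a 4-byte tag (TPID 0x8100 plus a priority field and the VLAN ID) is inserted between the source MAC address and the EtherType. This is a protocol-level sketch in Python, not switch firmware; the addresses and payload are invented for the example.

```python
import struct

TPID = 0x8100  # 802.1Q tag protocol identifier

def tag_frame(dst_mac: bytes, src_mac: bytes, vlan_id: int,
              priority: int, ethertype: int, payload: bytes) -> bytes:
    """Insert an 802.1Q tag between the source MAC and the EtherType."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be 1-4094")
    tci = (priority << 13) | vlan_id  # PCP (3 bits), DEI (1 bit, 0), VID (12 bits)
    return (dst_mac + src_mac + struct.pack("!HH", TPID, tci)
            + struct.pack("!H", ethertype) + payload)

# Hypothetical broadcast frame placed on VLAN 10:
frame = tag_frame(b"\xff" * 6, b"\x00\x0b\xcd\x01\x02\x03",
                  vlan_id=10, priority=0, ethertype=0x0800, payload=b"...")
```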


VLAN deployment on a blade system

Figure: Switch A and Switch B uplink to the LAN; ports 1 and 2 on each switch
connect down to Blade 1 and Blade 2.
The graphic illustrates a sample server blade configuration. Traffic to and from Blade 1 and
Blade 2 must be secure (the server blades must be independent of each other).
Example
Put Blade 1 ports in VLAN Yellow; put Blade 2 ports in VLAN Green. Put switch ports in both
VLANs. Configure the blade ports for the corresponding VLAN ID.
In this example, the switches will tag each Ethernet frame with the corresponding VLAN ID.
Because of this tag, a packet with the Green VLAN ID will be blocked from any Yellow
VLAN port. Therefore, clients in one VLAN cannot connect to their blades using telnet and
then hack into another blade on a different VLAN. The switch will not allow the traffic from a
blade in one Green VLAN onto the server blade port in the Yellow VLAN.
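A minimal sketch of the membership check behind this blocking behavior, assuming each port can be described simply by the set of VLANs it belongs to (the port and VLAN names below are hypothetical):

```python
# Hypothetical port-to-VLAN membership table for the example above.
port_vlans = {
    "blade1": {"yellow"},           # Blade 1 ports carry only VLAN Yellow
    "blade2": {"green"},            # Blade 2 ports carry only VLAN Green
    "uplink": {"yellow", "green"},  # switch uplink ports belong to both VLANs
}

def may_forward(frame_vlan: str, egress_port: str) -> bool:
    """A frame is forwarded out a port only if the port is in the frame's VLAN."""
    return frame_vlan in port_vlans[egress_port]

assert may_forward("green", "uplink")      # Green traffic may use the uplink
assert not may_forward("green", "blade1")  # ...but is blocked from Blade 1 ports
```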


STP

Provides path redundancy and eliminates loops


Blocks redundant paths (which become standby paths)
Configures the network to use the most efficient path
Sets up another active path if the primary path fails
Communicates between switches on the network using
BPDUs
One switch is elected as the root switch
The shortest distance to the root switch is calculated
A designated switch is selected
A port for each switch is selected
Ports included in the STP are selected


STP
Supported on most bridges and switches, STP is a reliable method for providing path
redundancy and eliminating loops in bridged networks. Loops form never-ending data paths
that result in excessive system overhead. STP enables you to block the links that form loops
between switches in a network.
When multiple data paths exist, STP forces the redundant paths into a standby (blocked) state.
STP configures the network so that a switch uses only the most efficient path. If that path fails,
STP automatically sets up another active path on the network to sustain network operations.
STP supports, preserves, and maintains the quality of bridged LAN or media access control
(MAC) service. If a link is lost or the topology has changed, STP requires only 30 to 60
seconds to detect the changes and reconfigure.
STP communicates between switches on the network using Bridge Protocol Data Units
(BPDUs). Each BPDU contains the following information:
The unique identifier of the switch that the transmitting switch currently believes is the root
switch
The path cost to the root from the transmitting port
The port identifier of the transmitting port
The communication between switches through BPDUs results in the following:
One switch is elected as the root switch.
The shortest distance to the root switch is calculated for the switch.
A designated switch is selected. This is the switch closest to the root switch through which
packets will be forwarded to the root.
A port for each switch is selected. This is the port providing the best path from the switch
to the root switch.
Ports included in the STP are selected.
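The root election can be made concrete with a short sketch: a bridge ID orders on priority first and MAC address second, and the numerically lowest bridge ID wins. The priorities and MAC addresses below are invented for the example.

```python
# Illustrative root-bridge election; not an implementation of the full protocol.
bridges = {
    "A": (0x01, "00:0b:cd:00:00:0a"),
    "B": (0x10, "00:0b:cd:00:00:0b"),
    "C": (0x10, "00:0b:cd:00:00:0c"),
}

def bridge_id(priority: int, mac: str) -> tuple:
    # Compare priority first, then MAC address, as STP orders bridge IDs.
    return (priority, bytes(int(octet, 16) for octet in mac.split(":")))

root = min(bridges, key=lambda name: bridge_id(*bridges[name]))
print(root)  # "A": the lowest priority (0x01) makes Switch A the root
```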


Enabling STP
Enabled on ProLiant BL p-Class switches
One spanning tree domain per interconnect switch is
supported
Multiple spanning trees provide multiple data paths
To enable multiple spanning trees
Block loops at the VLAN level instead of the port level
Allow for a separate spanning tree per VLAN
Comply with IEEE specification 802.1s


Enabling STP
STP can be enabled or disabled at the switch level. By default, STP is enabled on ProLiant BL
p-Class switches. Disabling it can negatively affect the function and performance of the
ProLiant BL p-Class switches as well as the switches and traffic on the rest of the network.
Only one spanning tree domain per interconnect switch is supported. You can configure ports
to participate in that spanning tree domain by enabling or disabling STP on a per port basis.
Multiple spanning trees
Multiple spanning tree groups (STGs) provide multiple data paths, which can be used for load
balancing and redundancy. You can enable independent links on two interconnect switches
using multiple STGs by configuring each path with a different VLAN and then assigning each
VLAN to a separate STG.
Each STG is independent and must be independently configured. Each STG sends its own
BPDUs.
The STG forms a loop-free topology that includes one or more VLANs. The switch supports 16
STGs running simultaneously. The default STG 1 may contain an unlimited number of
VLANs. All other STGs (2-16) may contain one VLAN each.
To enable multiple spanning trees on ProLiant BL p-Class switches:
Block loops at the VLAN level instead of the port level
Allow for a separate spanning tree for each VLAN
Comply with IEEE specification 802.1s (extension to 802.1D)
Important! The ProLiant BL p-Class GbE Interconnect Switch supports mono-STP. Multiple
spanning tree domains are not supported. This means the Spanning Tree Algorithm makes
calculations without considering the VLAN domains to which the ports belong. All ports that
have STP enabled fall under one STP domain.


Configuring STP

Figure: four switches; Switch A (bridge priority 01) is the root. Its BPDU
(root = 01A, cost = 0, transmitter = 01A) is rebroadcast by Switch B as
(root = 01A, cost = 1, transmitter = 10B); the redundant links from
Switches C and D to Switch B, and between C and D, are blocked.
The key configuration parameters of STP are normally set on switches upstream in the network
(top of the tree). The switches downstream must support STP and have it enabled.
With STP:
Set the Switch A bridge priority to 01 and Switch B bridge priority to 10.
Switch A broadcasts the BPDU packet.
Switches B, C, and D determine the lowest-cost path.
Because the path cost is 1 and the transmission of the packet came from the designated
root, the link is not blocked.
Switch B rebroadcasts the BPDU packet out of its remaining ports.
Switches C and D block their links to Switch B, which makes them standby links.
The link between Switch C and D is subsequently blocked following the rebroadcast of the
BPDU packet from those switches.
All that remains after the propagation of the BPDU packet are the primary links, which are the
direct links with the lowest cost path from switches B, C, and D to Switch A.
Important! Several customer advisories exist to help avoid configuration issues:
ProLiant BL C-GbE and F-GbE Interconnect Switches support mono-STP (IEEE 802.1D),
but multiple spanning tree domains (IEEE 802.1s) are not supported.
(PSD_EB020927_CW01)
ProLiant BL p-Class GbE2 Interconnect Switch STP configuration may cause ProLiant
Essentials Rapid Deployment Pack (RDP) jobs to generate PXE-E51 message.
(PSD_EB040310_CW04)
Third-party switches with STP disabled may prevent C-GbE Interconnect Switches from
identifying data loops. (PSD_EB021010_CW01)
For more information regarding support documentation for HP BladeSystems, refer to:
http://welcome.hp.com/country/us/en/support.html

STP settings

Figure: interconnect switch bridge configuration screen, showing the bridge
Priority and MAC Address fields.
Bridging and port settings can be configured globally on the interconnect switch.
The bridge ID is determined by the bridge priority, followed by the MAC address of the switch.
The switch on the network with the lowest bridge ID is the designated root, which is the switch
to which all broadcasts from lower switches are forwarded.
Any switches lower in the tree will receive global and per port STP parameter information
from the designated root. These parameters (Max Age, Hello Time, and Forward Delay) are
used to determine the most efficient path through the network to the designated root.
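As a rough worked example, with the classic 802.1D defaults (Hello Time 2s, Max Age 20s, Forward Delay 15s, assumed here for illustration; your network's configured values may differ), the worst-case reconvergence is on the order of Max Age plus two Forward Delay intervals:

```python
# Classic 802.1D default timers, assumed for illustration only.
hello_time, max_age, forward_delay = 2, 20, 15  # seconds

# A failed path must age out (Max Age), then the replacement port walks
# through the listening and learning states (one Forward Delay each).
worst_case = max_age + 2 * forward_delay
print(worst_case)  # 50 seconds, consistent with the 30 to 60 seconds cited earlier
```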


What can happen without STP?

Figure: Switch A and Switch B joined by Crosslink 1 and Crosslink 2, with
links down to the blade NICs and the Integrated Administrator.
With the crosslink trunk disabled and STP disabled:
No primary link is established between Switch A and Switch B.
A broadcast over crosslink 1 returns over crosslink 2.
A broadcast loop is created.
Links to the blades are saturated with traffic.
Connecting the ProLiant BL p-Class switch uplinks to the LAN without enabling STP can
cause a broadcast loop on the network. A broadcast loop causes the network segment to be
saturated with traffic, which can ultimately cause the segment to fail. If critical services such as
the Domain Name Service (DNS), email, and Dynamic Host Configuration Protocol (DHCP)
are on that segment, the entire corporate network can fail.


Port trunking and load balancing


Port trunking
Aggregates multiple ports into a single group
Shares the traffic load
Supports up to 12 trunk groups per switch
Allows up to 6 ports to act as a single logical trunk group
Provides a multiple of the bandwidth of a single link
Improves reliability

Load balancing
A port failure within the group causes the network traffic to be
directed to the remaining links in the group
Load balancing is maintained whenever a link in a trunk is lost
or returned to service


Port trunking and load balancing


Port trunking allows aggregation of multiple ports into a single group called a trunk. It enables up
to six ports with the same speed to be grouped together to act as a single logical trunk group and
supports up to 12 trunk groups per switch. Port trunking yields a bandwidth that is a multiple of the
bandwidth of a single link.
Example
Three Gigabit ports can be aggregated into one 3Gb/s trunk.
An algorithm automatically applies load balancing to the ports in the trunk. A port failure within
the group causes the network traffic to be directed to the remaining ports. Load balancing is
maintained whenever a link in a trunk is lost or returned to service. This provides flexible and
scalable bandwidth with resiliency and load sharing across the links.
Note: With 16 ProLiant BL30p blades in one enclosure, only 12 could run switch-assisted
load balancing teams because you can only configure 12 trunk groups per switch.
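The switch's actual load-balancing algorithm is not documented here; the sketch below is a generic illustration of hash-based distribution across trunk members, with flows rehashed onto the surviving links when one fails. The Link class and the XOR hash are hypothetical, not the GbE2 implementation.

```python
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    up: bool = True

def pick_link(src_mac: bytes, dst_mac: bytes, trunk: list) -> str:
    """Hash the last address bytes onto the active members of the trunk."""
    active = [link for link in trunk if link.up]
    if not active:
        raise RuntimeError("all trunk members are down")
    return active[(src_mac[-1] ^ dst_mac[-1]) % len(active)].name

trunk = [Link("1"), Link("2"), Link("3")]  # a three-port logical trunk
src, dst = b"\x00" * 5 + b"\x07", b"\x00" * 5 + b"\x01"
print(pick_link(src, dst, trunk))  # flow hashes onto one member link
trunk[0].up = False                # on a port failure, the same flow is
print(pick_link(src, dst, trunk))  # redirected to a remaining link
```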


Port trunking and NIC teaming


Downlinks from Switch A in ProLiant BL p-Class systems
can be trunked
NIC teaming
Network fault tolerance
Provides redundancy
Uses only one NIC

Transmit load balancing


Provides load balancing
Uses both NICs simultaneously

Switch-assisted load balancing


Two NICs for single virtual link
Only configured on NICs connected to Switch A


Port trunking and NIC teaming


The downlinks from Switch A in the ProLiant BL p-Class system that connect to the same
server blade can be trunked. The Switch A downlinks connect to Data 1 and Data 2 on each
server blade.
Switch B has no downlinks that can be trunked because they connect to the Data 3 NIC and the
iLO NIC on each blade. The iLO NIC cannot be used for data NIC teaming.
NIC teaming
With NIC teaming in ProLiant BL p-Class servers, you can use the following technologies:
Network fault tolerance: Provides redundancy through two switches and
separate uplinks, but only one NIC is used. The second NIC is used only when
the primary link is lost because of a failure of the primary NIC or its
switch or uplink.
Transmit load balancing: Can provide load balancing by allowing the server to
transmit on both NICs simultaneously. Transmit load balancing can also be
configured so that the NICs transmit to separate switches, but only one of
the two NICs can receive at a time.
Switch-assisted load balancing: The two NICs form a single virtual link,
which is similar to two trunked ports on a switch. Switch-assisted load
balancing can be configured only on the two data NICs that are connected to
Switch A and works with trunked downlinks from the switch.
Note: For network fault tolerance and transmit load balancing using the ProLiant BL20p, any
of the NICs can be in a team, with the exception of the iLO NIC. For switch-assisted load
balancing, only the NICs connected to Switch A can be teamed.
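As a rough sketch of how the three modes differ in which team members carry transmit traffic (a deliberate simplification, not the actual teaming driver):

```python
def transmitting_nics(mode: str, team: list) -> list:
    """Return the team members allowed to transmit in each teaming mode."""
    up = [nic for nic in team if nic["up"]]
    if mode == "NFT":            # network fault tolerance: one active, one standby
        return up[:1]
    if mode in ("TLB", "SLB"):   # both NICs transmit simultaneously; SLB also
        return up                # balances received traffic via the switch trunk
    raise ValueError(mode)

team = [{"name": "data1", "up": True}, {"name": "data2", "up": True}]
print([n["name"] for n in transmitting_nics("NFT", team)])  # ['data1']
team[0]["up"] = False  # primary NIC failure: the standby takes over
print([n["name"] for n in transmitting_nics("NFT", team)])  # ['data2']
```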


VLAN tagging and port trunking combined

Figure: three switches carrying VLAN 1 and VLAN 2 over tagged, trunked links;
frames a through f from both VLANs share the links between switches.
To create VLANs across the network, the GbE2 Interconnect Switch supports VLAN tagging.
Each switch port can be individually configured as tagged or untagged. Therefore, GbE2
Interconnect Switch VLANs can span switches that support the tagging methodology. The key
is to ensure that ports on both ends of the tagged link are assigned to the same VLANs.
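A trivial sketch of that consistency rule, treating each end of the tagged link as the set of VLAN IDs configured on its port (the values below are hypothetical):

```python
def tagged_link_consistent(end_a: set, end_b: set) -> bool:
    """Both ends of a tagged link must carry the same VLAN set."""
    return end_a == end_b

gbe2_uplink = {1, 10, 20}
core_switch_port = {1, 10}  # VLAN 20 is missing on the far end
print(tagged_link_consistent(gbe2_uplink, core_switch_port))  # False
```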


Combining VLAN tagging, port trunking, and STP

Figure: a Red VLAN and a Green VLAN spanning three switches over tagged
trunks; STP prevents loops on VLAN 2 without blocking VLAN 1 traffic.

VLAN tagging, port trunking, and STP combined


This graphic shows VLAN tagging, port trunking, and STP combined. It shows how this
combination:
Prevents loops
Blocks VLAN 2
Comparing dynamic and static port trunking
The GbE2 Interconnect Switch supports port trunking on ports connected to the same device.
However, port trunking can be implemented only in a static configuration on this switch.
Cisco supports the IEEE 802.3ad standard (port trunking) and EtherChannel, a Cisco
proprietary method of port trunking. Link aggregation control protocol (LACP) is an
enhancement over EtherChannel and other static port trunking methods. LACP dynamically
learns about the link status and decides which links to use for load balancing and failback in
case of link failure. As a result, IEEE 802.3ad with LACP is often called dynamic trunking. With
dynamic port trunking, two compatible devices can identify multiple ports, link them together,
and trunk the ports on both ends of the links.
The GbE2 Interconnect Switch supports 802.3ad without LACP, which is compatible with
EtherChannel. The GbE2 Interconnect Switch interoperates with both Fast EtherChannel,
providing link aggregation for Fast Ethernet (100Mb/s) ports, and Gigabit EtherChannel,
which aggregates Gigabit Ethernet (1000Mb/s) links.
Third-party switches meeting the 802.3ad standard and EtherChannel can create dynamic port
trunks using LACP.
Note: The GbE2 Interconnect Switches are compatible with static channeling on Cisco
switches.


Server blade enclosure signal routing

                                               8 BL20p blades in a   16 BL30p blades in an
                                               standard enclosure    enhanced enclosure
                                               (nondedicated iLO)    (dedicated iLO)
Number of data NICs per enclosure                      24                    32
Number of iLO NICs per enclosure                        8                    16
Total NIC signals per enclosure                        32                    48
Number of data NICs routed to the interconnects        24                    32
Number of iLOs routed to the interconnects              8                    n/a
Total NICs routed to the interconnects                 32                    32
Total NICs routed to the centralized iLO port          n/a                   16

Server blade enclosure signal routing


The ProLiant BL p-Class server blade enclosure contains a signal backplane for routing of
Ethernet signals from the server blade NICs to the interconnects in a redundant, highly
available architecture. The six interconnect options (four Ethernet switch options and two patch
panel options) enable you to choose how the Ethernet and Fibre Channel signals exit the server
blade enclosure.
The enhanced server blade enclosure supports network connectivity for the ProLiant BL30p
server. The management module in the enhanced server blade enclosure features a centralized
iLO port and a new signal backplane, which routes iLO signals to the management module
rather than to the interconnects. Only data NIC signals are routed through the interconnects.
The difference in iLO signal routing allows each enclosure to fulfill a specific set of iLO
management needs. The enhanced server blade enclosure is ideal for users who need a
simplified, centralized management point. The single 10/100T port can be included in a
network management VLAN for management of all server blades. The original enclosure is an
ideal solution for applications that require iLO management to reside in different VLANs or
subnets.
Note: The iLO NIC is a 10/100 NIC on all ProLiant BL p-Class servers.


iLO port aggregation


iLO signals rerouted
Failsafe features


iLO port aggregation


The original server blade enclosure routed all iLO signals to one of the interconnect modules,
either a switch or a patch panel. With the enhanced server blade enclosure, those iLO signals
(up to 16 with the ProLiant BL30p) are routed to a single port on the server blade management
module at the back of the enclosure. Each iLO signal retains its unique IP address, even though
it shares a port with other iLO signals.
If the management module were to fail completely, server blade operation would be
uninterrupted and the management module itself could be hot-swapped with a spare with no
server downtime. Remote iLO access would be unavailable until the management module was
replaced, but local iLO access through the front I/O port would still be possible.
In addition, the blades would still be accessible using other tools such as Microsoft Terminal
Services and Virtual Network Computing. The risk of a failure is extremely low and there are
easy work-arounds, so operation is not affected.


iLO and ProLiant BL p-Class blades


Server blade management module has a single iLO port
Provides 10/100Base-T connection speeds
Carries separate signals for all installed blades
Provides access to all server blade iLOs (BL20p, BL30p, and
BL40p)

Static IP Bay Configuration Utility enables each blade to obtain a
predefined IP address without relying on DHCP
Existing enclosures can be upgraded with new signal backplane, power
backplane, and management module


iLO and ProLiant BL p-Class blades


The server blade management module has a single 10/100Base-T iLO port that carries separate
signals for all installed blades (up to 16 in the case of the ProLiant BL30p). All server blade
iLOs (BL20p, BL30p, and BL40p) are accessible through this physical iLO NIC.
The Static IP Bay Configuration Utility (available as a Smart Component from HP) provides an
alternative to DHCP and iLO-by-iLO static IP assignment. This utility enables the iLO
management processor in each blade to obtain a predefined IP address without relying on
DHCP.
The system automatically reserves a block of 16 addresses starting with the first one set by the
user. Server blade iLOs are automatically assigned addresses from the reserved static pool
when they power on, even if DHCP is present. With the Static IP Bay Configuration Utility,
iLO is immediately accessible for server deployment using virtual media and other remote
administration functions.
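To make the address reservation concrete, the sketch below derives the block of 16 consecutive iLO addresses from a user-supplied starting address, using Python's standard ipaddress module. The starting address is invented, and the sequential bay-to-address mapping is an assumption of this illustration rather than documented utility behavior.

```python
import ipaddress

def reserved_ilo_addresses(first: str, count: int = 16) -> list:
    """Return the block of consecutive addresses reserved from `first`."""
    start = ipaddress.IPv4Address(first)
    return [str(start + bay) for bay in range(count)]

pool = reserved_ilo_addresses("192.168.1.32")  # hypothetical starting address
print(pool[0], pool[15])  # 192.168.1.32 192.168.1.47
```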
Management through iLO provides all the advantages of a server equipped with monitor,
keyboard, and mouse. If the management module should fail or if a server blade becomes
unavailable remotely, a diagnostic port mounted on the front of the server blade enables walk-up access. Another management feature includes an intelligent power button that verifies
available rack power before powering up a blade.
Note: Advanced iLO ships standard on all new ProLiant BL p-Class servers. For details, visit:
http://h18013.www1.hp.com/products/servers/management/iloadv/index.html
Existing enclosures can be upgraded with new signal backplane, power backplane, and
management module.


ProLiant BL20p G2 signal routing

Figure: signal routing for the standard network mezzanine card and the Fibre
Channel mezzanine card; the NC7781 10/100/1000T NICs, the iLO 10/100 NIC, and
the two Fibre Channel ports route through the backplane to Interconnect Bay A,
Interconnect Bay B, and the integrated iLO port.
This diagram indicates the layouts of the standard network mezzanine card and the Fibre Channel
mezzanine card for the ProLiant BL20p G2 server blade enclosure.
The four NICs are routed as follows:
Two NC7781 NICs are routed through the signal backplane to Interconnect Bay A, as viewed
from the front of the enclosure.
One NC7781 NIC is routed to Interconnect Bay B, as viewed from the front of the enclosure.
The iLO NIC is routed to Interconnect Bay B, unless connected to the new integrated iLO
port on the updated management module.
With the Fibre Channel mezzanine card installed, the NIC signal routing stays the same. One of
the two Fibre Channel ports is routed to each interconnect bay.


ProLiant BL20p G3 signal routing

Figure: signal routing for the ProLiant BL20p G3 with the Fibre Channel
mezzanine card; the four NC7781 10/100/1000T NICs, the iLO 10/100 NIC, and
the two Fibre Channel ports route to Interconnect Bay A, Interconnect Bay B,
and the integrated iLO port.
This diagram indicates the layouts of the standard NICs and the gigabit option for the BL20p
G3 server blade enclosure.


ProLiant BL30p signal routing

Figure: signal routing for the ProLiant BL30p with the standard network
mezzanine card and with the Fibre Channel mezzanine card; the two NC7781
10/100/1000T NICs, the iLO 10/100 NIC, and the two Fibre Channel ports route
to Interconnect Bay A, Interconnect Bay B, and the integrated iLO port.
This diagram indicates the layouts of the standard NICs and the gigabit option for the BL30p
server blade enclosure.
When you are using the standard network mezzanine card with the dual-port Fibre Channel
Adapter (2Gb) mezzanine card kit (mounted directly above the NIC card), you must use a GbE2
Interconnect Switch Kit (with the GbE2 Storage Connectivity Kit) to provide pass-through of the
Fibre Channel signals.
With the Fibre Channel mezzanine card installed, the NIC signal routing stays the same.
Note: When ProLiant BL30p servers are installed, all iLO signals from all servers in the enclosure
(no matter what model) are routed to the single integrated iLO port.
Each Fibre Channel port is routed to the interconnect bays.


ProLiant BL40p signal routing

Figure: signal routing for the ProLiant BL40p; the five NC7781 10/100/1000T
NICs and the iLO 10/100T NIC route to Interconnect Bay A, Interconnect Bay B,
and the integrated iLO port.
In a four-bay ProLiant BL40p server, three of the NC7781 NICs are routed through the signal
backplane to Interconnect Bay A, as viewed from the front of the enclosure.
Two of the NC7781 NICs are routed to the Interconnect Bay B, as viewed from the front of the
enclosure.
The iLO NIC is always routed to Interconnect Bay B (unless connected to the integrated iLO port
on the updated management module).
NIC assignments
NICs 1, 2, and 4 are routed to Switch A.
NICs 3, 5, and 6 (iLO) are routed to Switch B. The iLO NIC can be internally connected to the
integrated iLO port on the updated management module.


RJ-45 Patch Panels


Enable Ethernet LAN signals to pass through to
third-party LAN devices
Provide all network data connections for one server
blade enclosure
RJ-45 Patch Panel 2 provides 16 Fibre Channel front
panel ports


RJ-45 Patch Panels


The RJ-45 Patch Panel and RJ-45 Patch Panel 2 enable Ethernet LAN signals to pass through
to third-party LAN devices, giving you flexibility in choosing a network switch, hub, or router.
One pair of RJ-45 patch panels provides all network data connections for one server blade
enclosure. Each patch panel gathers the NIC connections from all installed server blades that
are routed to Side A or Side B of the server blade enclosure.
Both the RJ-45 Patch Panel and RJ-45 Patch Panel 2 pass all 32 Ethernet signals as separate
RJ-45 connections through two rear-mounted LAN interconnect modules per patch panel. Both
patch panels support a combination of ProLiant BL p-Class servers.
In addition to Ethernet signal pass-through, the RJ-45 Patch Panel 2 provides 16 Fibre Channel
front panel ports to support signal pass-through for up to eight ProLiant BL20p G2 servers with
two Fibre Channel ports each. The ProLiant BL40p server blade does not require Fibre
Channel signals to be routed to the interconnect bays.
The HP RJ-45 Patch Panel and RJ-45 Patch Panel 2 Kits each contain two patch panel
interconnects.


RJ-45 Patch Panel 2 connectors ProLiant BL20p

Figure: rear of the enhanced server blade enclosure; the Side A and Side B
patch panels expose connectors B1, B2, A1, and A2 for each of servers 1
through 8.
Each RJ-45 Patch Panel has two RJ-45 modules:
Ten-connector module
Six-connector module
Patch Panel/Patch Panel 2 Ethernet connections for ProLiant BL20p G2/G3 server blades are
outlined in the following table.
Item   Original enclosure    Enclosure with enhanced backplane
B1     iLO NIC, server 1     Not used by BL20p G2;
                             Data NIC 4, server 1 (BL20p G3 only)
B2     Data NIC, server 1    Data NIC, server 1
A1     Data NIC*, server 1   Data NIC*, server 1
A2     Data NIC, server 1    Data NIC, server 1

* indicates the default PXE NIC

Only one NIC at a time may be enabled for Preboot eXecution Environment (PXE). A NIC on
each server is pre-selected as the default PXE NIC. This results in all the PXE-enabled NICs
being routed to the same interconnect. However, you can use the ROM-Based Setup Utility
(RBSU) to designate any NIC to be the default PXE NIC. Thus, system availability can be
enhanced by selecting PXE-enabled NICs that are routed to different interconnect blades.


RJ-45 Patch Panel 2 connectors ProLiant BL30p

Figure: rear of the enhanced server blade enclosure; the Side A and Side B
patch panels expose connectors B1, B2, A1, and A2 for servers 1 through 16.
Patch Panel and Patch Panel 2 Ethernet connections for ProLiant BL30p series blades are
outlined in the following table.
Note: Enclosures with the enhanced backplane are required for ProLiant BL30p server blade
operation.
Item   Original enclosure   Enhanced server blade enclosure
B1     N/A                  Data NIC 2, server 1
B2     N/A                  Data NIC 2, server 9
A1     N/A                  Data NIC 1*, server 1
A2     N/A                  Data NIC 1*, server 9

* indicates the default PXE NIC

Only one NIC at a time may be enabled for PXE. A NIC on each server is pre-selected as the
default PXE NIC. This results in all the PXE-enabled NICs being routed to the same
interconnect. However, you can use the RBSU to designate any NIC to be the default PXE
NIC. Thus, system availability can be enhanced by selecting PXE-enabled NICs that are routed
to different interconnect blades.


RJ-45 Patch Panel 2 connectors ProLiant BL40p

Figure: rear of the enhanced server blade enclosure; the Side A and Side B
patch panels expose connectors B1, B2, B3, A1, A2, and A3 for servers 1 and 2.
Patch Panel/Patch Panel 2 Ethernet connections for the ProLiant BL40p series are outlined in the
following table.
Item     Original enclosure    Enclosure with enhanced backplane
B1       iLO NIC, server 1     Not used
B2, B3   Data NIC, server 1    Data NIC, server 1
A1       Data NIC*, server 1   Data NIC*, server 1
A2, A3   Data NIC, server 1    Data NIC, server 1

* indicates the default PXE NIC

Only one NIC at a time may be enabled for PXE. A NIC on each server is pre-selected as the
default PXE NIC. This results in all the PXE-enabled NICs being routed to the same interconnect.
However, you can use the RBSU to designate any NIC to be the default PXE NIC. Thus, system
availability can be enhanced by selecting PXE-enabled NICs that are routed to different
interconnect blades.


ProLiant GbE Interconnect Switches


High-performance, rapidly
deployable
Two Gigabit Ethernet crosslinks
provide redundant data paths
Each switch reduces up to 16
internal NIC ports to six external
Ethernet ports
Four 10/100/1000Base-T or
LC 1000Base-SX Gigabit Ethernet
external uplink ports
Two additional 10/100/1000Mb/s
ports


ProLiant GbE Interconnect Switches


Each ProLiant BL p-Class GbE Interconnect Switch is a high-performance, rapidly deployable
managed switch. Two hot-pluggable switches in an enclosure are cross-connected by two
Gigabit Ethernet links to provide redundant data paths for failover.
Each GbE Interconnect Switch reduces up to 16 internal server blade network NIC ports to six
external Ethernet ports:
Four 10/100/1000Base-T or LC 1000Base-SX Gigabit Ethernet external uplink ports on the
rear-mounted LAN interconnect module can be used for fast connection speeds and
flexibility.
Two additional 10/100/1000Mb/s ports on the front of the switch can be used for
maintenance or additional uplink ports.
The front panel ports are used for local switch access, port mirroring, or additional uplinks to
the network. Because each external Ethernet port can communicate to all the server blades, one
to 12 external ports (per enclosure) can be used to connect to the network infrastructure.
Layer 2 switching technology allows the switch to look into data packets and forward them
based on the destination MAC address. This feature reduces traffic congestion on the network
because packets, instead of being transmitted to all ports, are transmitted to the destination port
only.
You can manage the switches using a web or a serial interface.


ProLiant GbE Interconnect Switches (continued)


The ProLiant BL p-Class enclosure provides eight server blade bays, each supporting up to
four Ethernet NICs. Thus, a fully configured enclosure can have up to 32 Ethernet cables, and a
fully configured 42U rack can have up to 192 Ethernet cables.
The ProLiant GbE Interconnect Switches address the need for network cable reduction. Each
switch consolidates the NIC signals to 100Mb/s (Fast Ethernet) using the two Ethernet switches
in the enclosure.
ProLiant GbE Interconnect Switch Kits
You can order a single kit to supply all the components you need for one server blade
enclosure. The GbE Interconnect Kit includes two hot-swappable, fully managed Layer 2
Ethernet switches and two rear-mounted, four-port LAN interconnect modules.
The GbE Interconnect Kit is available in two options based on the LAN interconnect module
that supports the uplink port media:
C-GbE Interconnect Kit for copper-based networks Includes two QuadT
interconnect modules, each with two 10/100/1000T and two 10/100T ports, all with RJ-45
connectors.
F-GbE Interconnect Kit for fiber-based networks Includes two DualTSX
interconnect modules, each with two 1000SX ports with LC connectors and two 10/100T
ports with RJ-45 connectors.


Interconnect switches and ProLiant BL p-Class server blade infrastructure

Switches connected across the backplane
Redundancy for continued access in case of failures

Interconnect switches and ProLiant BL p-Class server blade infrastructure
The bays in the server blade enclosure are designed so that the server blades and interconnect
modules slide in and connect to the enclosure backplane for power and data connections,
including Fibre Channel connections for the ProLiant BL20p G2. The enclosure backplane
routes both Ethernet and Fibre Channel signals from the server blades to the interconnect bays
while completely isolating these signals from each other.
Note: The ProLiant BL40p server blade does not require Fibre Channel signals to be routed to
the interconnect bays.
The two GbE Interconnect Switches are connected across the enclosure backplane through a
pair of redundant Ethernet crosslinks bundled into a multiport EtherChannel compatible trunk.
These crosslinks permit communication between the switches for additional management
capability, fault tolerance, and cable reduction. As a result, any single uplink port may be used
to connect to all 32 NICs for a 32:1 network cable reduction.
The redundant architecture of the GbE Interconnect Switches enables you to configure the
Ethernet network for continued access to each server blade in case of the following failures:
Interconnect switch failure
Switch failure within the Ethernet network backbone
Server blade NIC failure
Server blade NIC-to-interconnect switch port failure or connection failure
Uplink port and uplink port connection or cable failure
Interconnect switch crosslink port or connection failure
Power or fan failure


ProLiant GbE2 Interconnect Switches


Gigabit Ethernet performance
Multiport 10Gb/s fabric
Advanced network features
Optional pass-through of
Fibre Channel storage signals
using the GbE2 Storage
Connectivity Kit
Offline testing using a
Diagnostic Station


ProLiant GbE2 Interconnect Switches


The GbE2 Interconnect Switch ships preconfigured for immediate use. Industry-standard
protocols provide compatibility with other widely used networking components and support for
255 IEEE 802.1Q VLANs for server grouping and isolation. The GbE2 Interconnect Switch
meets the IEEE 802.1D standard and is compatible with Cisco switches that are 802.1D
compliant.
STP is enabled by default on the GbE2 Interconnect Switch to ensure that any existing network
Layer 2 loops are blocked. Other features include:
All switch ports provide Gigabit Ethernet performance to support applications that require
NIC consolidation to 1000Mb/s. Each GbE2 Interconnect Switch provides 24Gb/s full
duplex external port (uplink) bandwidth per server blade enclosure.
A multiport 10Gb/s fabric is standard on each GbE2 Interconnect Switch, supporting the
future Layer 3 to 7 IP load balancing option and 10 Gigabit Ethernet uplink upgradeability.
The switching layer and the uplink bandwidth can be independently selected within a single
switch offering.
Advanced network features support STP per VLAN, 9k jumbo frames, RADIUS, redundant
syslog servers, and redundant operating system firmware images and configuration files in
memory.
The GbE2 Storage Connectivity Kit enables optional pass-through of ProLiant BL20p
G2/G3 and BL30p Fibre Channel storage signals. Therefore, both Ethernet signal
consolidation and Fibre Channel pass-through are now possible using a single interconnect.
Offline testing and configuration is supported using an HP Diagnostic Station.
Note: All ProLiant BL p-Class servers can be operated together in a single enclosure and all
NICs can be operated through the GbE2 Interconnect Switches.


ProLiant GbE2 Interconnect Switches (continued)


Redundant crosslinks
The two GbE2 Interconnect Switches are connected through a pair of redundant
10/100/1000Base-T Gigabit Ethernet crosslinks on ports 17 and 18. These two crosslinks are
bundled in a 2Gb/s Cisco EtherChannel compatible multiport trunk. The signals are routed as
Ethernet from switch to switch through individual CAT5e specified signal traces on the passive
backplane assembly of the enclosure. You also can:
Perform PXE server boots and access all iLO interfaces using external switch ports. You
can use a single switch uplink to perform all Ethernet management tasks.
Configure the BL p-Class system for advanced ProLiant Network Adapter Teaming
including switch fault tolerance.
Communicate with any server NIC from any switch uplink port. As a result, you can use
any single uplink port to communicate to all 32 NICs, providing additional system
redundancy. If uplinks on one switch or the connections to a switch fail, all NICs can be
accessed through the other switch.
10Base-T/100Base-TX/1000Base-T Gigabit Layer 2 switching technology provides up to a 32-to-1 (blocking) to 32-to-12 (blocking) reduction in the number of networking cables per
ProLiant BL p-Class server enclosure.
Note: With 32-to-1 (one interconnect switch uplink port), there might be some network
blockage when the enclosure is fully populated with BL20p or BL30p servers. There will be a
reduction or no blockage (depending on how the switch is configured) using all six (12 with
two switches) interconnect switch uplink ports.
ProLiant GbE2 Interconnect Kits
The GbE2 Interconnect Kit contains two hot-swappable, fully managed Layer 2 GbE2
Interconnect Switches and two LAN interconnect modules. The GbE2 Interconnect Kit is
available for copper-based (C-GbE2) and fiber-based (F-GbE2) networks. These kits are
identical with exception of the interconnect modules:
The C-GbE2 Interconnect Kit contains two QuadT2 interconnect modules, each with four
10/100/1000Base-T ports with RJ-45 connectors. Each GbE2 Interconnect Switch and
QuadT2 Interconnect Module provides four RJ-45 100Base-TX/1000Base-T uplink ports
on the rear of the switch. Two RJ-45 10Base-T/100Base-TX/1000Base-T ports located on
the front of the switch may also be used as uplink ports.
The F-GbE2 Interconnect Kit includes two QuadSX interconnect modules, each with four
1000SX ports with LC connectors. Each GbE2 Interconnect Switch and QuadSX
Interconnect Module provides one to four LC 1000Base-SX ports (located on the
removable interconnect module) and two RJ-45 10Base-T/100Base-TX/1000Base-T ports
(located on the front of the switch).
Important! The QuadSX fiber interconnect modules only support 1000Mb/Full and Auto
options for the Speed/Duplex fields for Gigabit uplink ports, but operate at 1000Mb/s when
either is selected. The QuadSX fiber interconnect module does not support 10Mb/s or
100Mb/s speeds.


Interconnect Switch Ethernet connections: Original enclosure for BL20p series

Figure: Ethernet connectivity within the original server blade enclosure for
eight BL20p servers. Downlink ports 1-16 carry the server data NICs and iLO
NICs to Switch A and Switch B, which are joined by crosslink ports 17 and 18;
interconnect modules A and B provide the uplink ports. An asterisk in the
diagram marks the default PXE NIC; you can use the ROM setup utility to make
any other data NIC PXE-enabled.

Uplink ports (rear interconnect modules)
Ports    C-GbE          F-GbE     C-GbE2          F-GbE2
19, 20   10/100T        10/100T   10/100/1000T    1000SX
21, 22   10/100/1000T   1000SX    10/100/1000T    1000SX

Front panel ports
Ports    C-GbE / F-GbE   C-GbE2 / F-GbE2
23, 24   10/100T         10/100/1000T

Downlink ports
Ports    C-GbE / F-GbE   C-GbE2 / F-GbE2
1 - 16   10/100          10/100/1000

Crosslink ports: 17, 18

Interconnect Switch Ethernet connections: Original enclosure for BL20p servers
The Ethernet connectivity diagram shows how the Ethernet ports are connected within the original
server blade enclosure for ProLiant BL20p servers.


Interconnect Switch Ethernet connections: Enhanced enclosure for BL20p series

Figure: Ethernet connectivity within the enhanced server blade enclosure for
eight BL20p servers. The iLO signals are routed to the management module
rather than to the interconnect switches; uplink ports 19-22, front panel
ports 23 and 24, downlink ports 1-16, and crosslink ports 17 and 18 are
otherwise as in the original enclosure.

Interconnect Switch Ethernet connections: Enhanced enclosure for BL20p series
The Ethernet connectivity diagram shows how the ports are connected within the ProLiant
BL20p series enhanced server blade enclosure. With the updated enclosure, iLO is routed from
the server blades to the iLO port on the enclosure management module, not to the interconnect
switches.
Note: iLO typically operates at 100Mb/s and is routed to the integrated iLO port on the
management module. The actual negotiated speed of any port is dependent upon the capability
of the device to which it is attached.


Interconnect Switch Ethernet connections: Enhanced enclosure for BL30p series

Figure: Ethernet connectivity within the enhanced server blade enclosure for
16 BL30p servers. Each server's two data NICs route through downlink ports
1-16 to Switch A and Switch B; iLO signals route to the management module.
Uplink ports 19-22, front panel ports 23 and 24, and crosslink ports 17 and
18 are as in the original enclosure.

Interconnect Switch Ethernet connections: Enhanced enclosure for BL30p series
The Ethernet connectivity diagram shows how the ports are connected within the ProLiant
BL30p series enhanced server enclosure.
On a heavily used 16 BL30p server system, using a single uplink port for all 32 NICs can cause
a traffic bottleneck. For optimum performance and redundancy, use the uplink port from both
GbE2 Interconnect Switches.


Interconnect Switch Ethernet connections: Original enclosure for BL40p series

Figure: Ethernet connectivity within the original server blade enclosure for
two BL40p servers; uplink ports 19-22, front panel ports 23 and 24, downlink
ports 1-16, and crosslink ports 17 and 18 are as in the BL20p original
enclosure.

Interconnect Switch Ethernet connections: Original enclosure for BL40p series
The Ethernet connectivity block diagram shows how the Ethernet ports are connected within the
ProLiant BL40p server blade enclosure.


Interconnect Switch Ethernet connections: Enhanced enclosure for BL40p series

Figure: Ethernet connectivity within the enhanced server blade enclosure for
two BL40p servers; iLO signals route to the management module. Uplink ports
19-22, front panel ports 23 and 24, downlink ports 1-16, and crosslink ports
17 and 18 are as in the original enclosure.

Interconnect Switch Ethernet connections: Enhanced enclosure for BL40p series
The Ethernet connectivity block diagram shows how the Ethernet ports are connected within the
enhanced ProLiant BL40p server enclosure.


GbE2 Interconnect Switch redundancy

Figure: rear of the enclosure; Fibre Channel signals exit the upper modules,
and LAN signals exit the lower modules.

Interconnect modules are inserted into the bottom left and bottom right
module bays on the rear of the enclosure.
The upper module bays are reserved for Fibre Channel options.

GbE2 Interconnect Switch redundancy


Two GbE2 Interconnect Switches in the ProLiant BL p-Class server blade enclosure provide switch
redundancy and redundant paths to the network ports on the server blades. You can configure the
network to enable continued access to each server blade in case of a component or link failure. Other
redundancy and failover features include:
Two GbE2 Interconnect Switches per ProLiant BL p-Class server blade enclosure.
Four QuadT2 or QuadSX Gigabit Ethernet uplink ports in the rear and two Gigabit Ethernet
copper uplink ports in the front, per GbE2 Interconnect Switch, for designing fully meshed
uplink paths to the network backbone. The uplink ports are inserted into the bottom left and
bottom right module bays on the rear of the enclosure. The upper module bays are reserved for
Fibre Channel options.
Server networking connections routed to each of the separate GbE2 Interconnect Switches for
redundant paths to tolerate a switch or port malfunction.
IEEE 802.1D and 802.1s STP support that eliminates potential problems caused by redundant
networking paths and provides for failover with a secondary path in case of primary path failure.
Using the updated enclosures, each pair of GbE2 Interconnect Switches consolidates up to 24
Ethernet controllers on eight BL20p G2 servers into Gigabit ports on the back of the system. The
iLO signals are integrated into one port on the management module.
The ProLiant BL30p server blades each have two NICs and one iLO controller. There can be up to
16 BL30p server blades per enclosure that require switch consolidation of 32 Ethernet controllers
onto one to eight Gigabit ports on the switches. All 16 iLO controllers are connected internally to
the integrated iLO port on the management module.
The ProLiant BL40p server has five 1Gb integrated NICs and one integrated 10/100 NIC for iLO,
for a total of 10 network cables (one enclosure accommodates only two BL40p servers) and two
internal iLO connections to the integrated iLO port on the management module.


GbE2 Storage Connectivity Kit

Available for use with the GbE2 Interconnect Kits for pass-through of BL20p G2,
BL20p G3/BL30p Fibre Channel signals

Includes two OctalFC interconnect modules with eight SFP transceiver slots


GbE2 Storage Connectivity Kit

The GbE2 Storage Connectivity Kit is available for use with the GbE2 Interconnect Kits for
pass-through of Fibre Channel signals from BL20p G2, BL20p G3, and BL30p servers. The
GbE2 Storage Connectivity Kit includes two OctalFC interconnect modules with eight small
form-factor pluggable (SFP) transceiver slots for insertion of the SFPs provided with the Fibre
Channel cards supported on the server blades.

OctalFC Interconnect Module Ports
Item    BL20p G2 and BL20p G3                   BL30p
1       Fibre Channel port for server blade 1   Fibre Channel port for server blades 1 and 9
2       Fibre Channel port for server blade 2   Fibre Channel port for server blades 2 and 10
3       Fibre Channel port for server blade 3   Fibre Channel port for server blades 3 and 11
4       Fibre Channel port for server blade 4   Fibre Channel port for server blades 4 and 12
5       Fibre Channel port for server blade 5   Fibre Channel port for server blades 5 and 13
6       Fibre Channel port for server blade 6   Fibre Channel port for server blades 6 and 14
7       Fibre Channel port for server blade 7   Fibre Channel port for server blades 7 and 15
8       Fibre Channel port for server blade 8   Fibre Channel port for server blades 8 and 16


Configuration and management

GbE2 Interconnect Switch configuration and management interfaces and tools
CLI and a menu-driven interface
A browser-based GUI for remote access using a browser
SNMP manageability and monitoring support

GbE2 Interconnect Switch functionality
Save and download interconnect switch configurations to a TFTP server
Upload firmware from a TFTP server
Manually set the system time


Configuration and management

The GbE2 Interconnect Switch provides the following configuration and management interfaces and
tools:
Command line interface (CLI) with scripting capability and a menu-driven console interface
allow local, Telnet, or Secure Shell (SSH) access.
A browser-based graphical user interface (GUI) enables remote access using a web browser such
as Microsoft Internet Explorer or Netscape Navigator.
Simple Network Management Protocol (SNMP) manageability and monitoring are supported.
An SNMP-based scripting utility enables remote configuration of the GbE2 Interconnect Switch.
The GbE2 Interconnect Switch functionality allows you to:
Save and download interconnect switch configurations to a Trivial File Transfer Protocol
(TFTP) server. This support enables you to rapidly deploy multiple server blade systems and
provides robust backup and restore capabilities.
Upload firmware from a TFTP server.
Manually set the system time. Network Time Protocol (NTP) support allows you to display and
record the accurate date and time as provided by an NTP server.
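As an illustration, a configuration backup and a firmware download follow this general
pattern at the switch CLI (a sketch only: the TFTP server address and file names are
placeholders, and the prompt text is approximated; ptcfg lives in the Configuration menu
and gtimg in the Boot Options menu, as described in the labs later in this module):

    >> Main# /cfg/ptcfg                  (put the configuration file to a TFTP server)
    Enter hostname or IP address of TFTP server: 192.168.1.200
    Enter name of file on TFTP server: gbe2-backup.cfg

    >> Main# /boot/gtimg                 (get a firmware image from a TFTP server)
    Enter name of switch software image to be replaced ["image1"|"image2"|"boot"]: image2
    Enter hostname or IP address of TFTP server: 192.168.1.200
    Enter name of file on TFTP server: gbe2-new-image.img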


Configuration and management (continued)

Switch deployment and administration
Serial line IP (SLIP)/RS-232-based and Telnet-based access to the command line and menu-driven console interfaces
SNMP-based scripting with best-case example scripts
SNMP v1 and remote monitoring (RMON) groups 1 (statistics), 2 (history), 3 (alarms), and 9
(events)
Up to four configurable community strings for access using SNMP and SNMP trap managers
HP enterprise switch management information bases (MIBs); MIB-II; Bridge MIB; Interface
MIB; 802.1Q MIB; RMON1 MIB groups 1, 2, 3, and 9; and Ethernet MIB
Ability to manage both switches from a single Ethernet port
Ability to communicate to any and all server blade network adapters from any Ethernet uplink
port
Manual or automatic IP settings using BOOTP or DHCP server
TFTP client to upgrade the switch firmware, to save the switch log file, and to save, restore, and
update the switch configuration file
Xmodem switch firmware transfer from the serial interface
Port mirroring with ability to mirror desired type of frames (egress, ingress, or both)
Ability to name ports on a per-port basis
Power-on self-test (POST) at boot for hardware verification
Fully preconfigured for immediate plug-in operation in the server blade enclosure
Front panel system and port speed and link activity LED annunciation panel on each switch
Port speed and link activity LEDs adjacent to all external Ethernet ports
Web-based active virtual graphic of the front of each interconnect switch
Monitoring of port utilization, data packets received and transmitted, port error packets, packet
size, and trunk utilization with both graphical and tabular displays


Choosing an interconnect solution

HP offers four ProLiant BL p-Class interconnect options that allow you to choose how the
networking and storage signals exit the server blade enclosure.
The two patch panels provide pass-through of network signals (RJ-45 Patch Panel) or network
and storage signals (RJ-45 Patch Panel 2), giving you the flexibility to choose the switches you
prefer. Alternatively, the two GbE interconnect switch options provide different levels of
Ethernet switching functionality and Fibre Channel signal pass-through.
In general, choose the appropriate interconnect option based on the following criteria:
The RJ-45 Patch Panel provides Ethernet signal pass-through only.
The RJ-45 Patch Panel 2 provides both Ethernet and Fibre Channel signal pass-through.
The GbE Interconnect Switch consolidates 100Mb/s Fast Ethernet NIC signals.
The GbE2 Interconnect Switch consolidates 1000Mb/s Gigabit Ethernet NIC signals and
provides advanced network capabilities and Fibre Channel signal pass-through.
The RJ-45 Patch Panel, RJ-45 Patch Panel 2, and GbE and GbE2 Interconnect Switches may be
mixed within the rack, but not within the same server blade enclosure. The corresponding
interconnect modules may also be mixed within the rack, but not within the same server blade
enclosure.
Note: As an alternative to using the interconnect switch options, you can mount a standard
network switch above the ProLiant BL p-Class enclosures in the rack to concentrate cables
exiting the server blades.


Gigabit Ethernet Switch best practices

Factory default configuration settings
User accounts
Default VLANs
Remote management IP interfaces

Access the GbE switch using the serial management port

The GbE2 switch can be configured with up to 256 IP interfaces


Gigabit Ethernet Switch best practices

Consider the following best practices when implementing the GbE or GbE2 Interconnect
Switch:
Before you configure the interconnect switches, ensure that you understand the factory
default configuration settings for user accounts, default VLANs, and remote management
IP interfaces on the switches.
The interconnect switch does not have any initial user names or passwords set. HP
recommends that after logging on, you create at least one root-level user as the switch
administrator.
The interconnect switch ships with a default configuration with all ports (of both Switch
A and Switch B) enabled and assigned the same VLAN. By default this default VLAN
has a VLAN ID equal to 1, is mapped to Spanning Tree Group 1, and has STP enabled.
Each switch module must be assigned its own IP address, which is used for
communication with an SNMP network manager or other TCP/IP application (for
example, web or TFTP). The factory default is set for the switch module to
automatically obtain the IP address using the DHCP service from a DHCP server on the
attached network. You can also manually change the default switch IP address to meet
the specification of your networking address scheme.
The GbE2 Interconnect Switch can be configured with up to 256 IP interfaces. Each IP
interface represents the GbE2 Interconnect Switch on an IP subnet on your network.
The IP Interface option is disabled by default.
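As an illustration, assigning a static address to IP interface 1 from the console follows
the pattern below (a sketch only: the addr and mask subcommand names and the example
addresses are assumptions based on the cfg ip if menu used in the labs later in this module):

    >> Main# /cfg/ip/if 1                    (open the menu for IP interface 1)
    >> IP Interface 1# addr 192.168.2.1      (set the interface IP address)
    >> IP Interface 1# mask 255.255.255.0    (set the subnet mask)
    >> IP Interface 1# ena                   (enable the interface)
    >> IP Interface 1# apply                 (activate the pending changes)
    >> IP Interface 1# save                  (write the configuration to flash)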


Gigabit Ethernet Switch best practices (continued)

To enhance ProLiant BL p-Class GbE2 Interconnect Switch management and user
accountability, three levels or classes of user access have been implemented on the
interconnect switch: Root, User+, and User. Some menu selections available to users with
Root privileges may not be available to those with User+ and User privileges. The
following table summarizes user access rights.
Privilege                                   Root    User+       User
Configuration                               Yes     Read-only   Read-only
Network Monitoring                          Yes     Read-only   Read-only
Community Strings and Trap Stations         Yes     Read-only   Read-only
Update Firmware and Configuration Files     Yes     No          No
System Utilities                            Yes     Ping-only   Ping-only
Factory Reset                               Yes     No          No
Reboot Switch                               Yes     Yes         No
Add/Update/Delete User Accounts             Yes     No          No
View User Accounts                          Yes     No          No

You can access the ProLiant BL p-Class GbE Interconnect Switch using the serial (DB-9)
management port.


Deploying a ProLiant GbE2 Interconnect Switch in a Cisco environment

Interoperability with Cisco PVST+ 802.1Q tagging

Two methods to interoperate:
1. Add all GbE2 Interconnect Switch VLANs to STG 1
2. Generate a unique GbE2 Interconnect Switch STG for each
of the configured VLANs

Proprietary features
VLAN and STG configuration guidelines

Deploying a ProLiant GbE2 Interconnect Switch in a Cisco environment

The GbE2 Interconnect Switch provides interoperability with the Cisco Per-VLAN Spanning
Tree Plus (PVST+) 802.1Q tagging proprietary protocol through the use of STGs. In the GbE2
implementation, an administrator creates an STG and then assigns a VLAN to it. This differs
from the Cisco implementation where an administrator creates a VLAN, and then a spanning
tree instance is automatically created and assigned to the VLAN.
Note: The GbE2 Interconnect Switch cannot be used as a participating node with Cisco's
VLAN Trunk Protocol (VTP). However, the interconnect switch can operate in VTP
transparent mode to forward VTP information.
Manually assigning the bridge priorities, port costs, and port priorities on the GbE2
Interconnect Switch enables Cisco Catalyst switches to be the root bridge.
For rapid spanning tree convergence, many Catalyst switches support the proprietary Cisco
features PortFast, UplinkFast, and BackboneFast, as well as the IEEE 802.1w standard. The
802.1w extension is an enhancement to the original 802.1D standard. The 802.1w standard
provides convergence time improvements similar to the Cisco methods, but it also provides the
added benefit of interoperability between vendors.
Support for the 802.1w standard is planned for a future GbE2 Interconnect Switch software
release. Presently, the GbE2 Interconnect Switch allows you to disable STP on a per-switch or
per-port basis. This capability is ideal for networks designed without loops or individual switch
ports connected to server blades or other devices where a loop does not exist.
The Cisco proprietary VLAN tagging Inter Switch Link (ISL) is an alternative method that
predates the IEEE 802.1Q tagging standard. The GbE2 Interconnect Switch does not support
ISL.


Deploying a ProLiant GbE2 Interconnect Switch in a Cisco environment (continued)

VLAN and STG configuration guidelines
When you create a VLAN on the GbE2 Interconnect Switch, that VLAN automatically belongs to
the default STG 1. To place the VLAN in a different STG, you must assign it to that STG
explicitly; a console sketch follows the list below.
Remember the following rules when creating VLANs and assigning STGs:
The default VLAN (VLAN 1) cannot be removed from the default STG 1.
VLANs must be contained within a single STG; a VLAN cannot span multiple STGs.
When a VLAN spans multiple switches, the VLAN must be within the same STG (have the
same STG ID) across all the switches.
If ports are tagged, all trunked ports can belong to multiple STGs.
A port that is not a member of any VLAN cannot be added to an STG. The port must be
added to a VLAN, and that VLAN added to the desired STG.
Tagged ports can belong to more than one STG, but untagged ports can belong to only one
STG.
When a tagged port belongs to more than one STG, the egress BPDUs are tagged to
distinguish the BPDUs of one STG from those of another STG.
When a port is removed from a VLAN that belongs to an STG, that port will also be
removed from the STG. However, if that port belongs to another VLAN in the same STG,
the port remains in the STG.
An STG cannot be deleted, only disabled. If you disable the STG while it contains VLAN
members, STP will be disabled on all ports belonging to that VLAN.
If any STP port in the trunk is set to forwarding, the remaining ports in the trunk will also
be set to forwarding.
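The following console sketch illustrates the first two rules, assuming the vlan and stp
configuration menus demonstrated in the labs later in this module (the /cfg path form
and the prompt text are approximations):

    >> Main# /cfg/vlan 2                (create VLAN 2 and open its menu)
    >> VLAN 2# add 1                    (add port 1; an untagged port leaves VLAN 1)
    >> VLAN 2# ena                      (enable the VLAN; it starts out in the default STG 1)
    >> VLAN 2# /cfg/stp 2               (open the menu for Spanning Tree Group 2)
    >> Spanning Tree Group 2# add 2     (assign VLAN 2 to STG 2; it is removed from STG 1)
    >> Spanning Tree Group 2# apply     (activate the pending changes)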
PVST+ interoperability
The PVST+ interoperability feature on the GbE2 Interconnect Switch includes the following
additional conditions and features:
One GbE2 Interconnect Switch supports 16 STGs operating simultaneously.
The default STG 1 can hold multiple VLANs; all other STGs (groups 2 through 16) can
hold one VLAN.
The GbE2 Interconnect Switch provides two methods to interoperate with PVST+:
1. All GbE2 Interconnect Switch VLANs configured on the ports connected to the Catalyst
switches may be added to the default STG (STG 1).
2. A unique GbE2 Interconnect Switch STG can be created for each of the configured
VLANs connecting to the Catalyst switches.
Note: The ProLiant BL40p is compatible with PVST+ when configured with a GbE2
Interconnect Switch and 802.1D STP for loop-free path redundancy.



Learning check
1. Name four functions of STP.
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
2. You must enable STP manually.
True
False
3. With __________ port trunking, two compatible devices can identify multiple ports, link
them together, and trunk the ports on both ends of the links.
a. Parallel
b. Dynamic
c. Redundant
d. Cisco
4. The __________ __________ is determined by the bridge priority, followed by the MAC
address of the switch.
5. Name the interconnect option that would be appropriate for an enterprise that needs
reduced cabling, but does not need a Fibre Channel pass-through for the ProLiant BL20p
G2 or gigabit speed support from the server blade.
_______________________________________________________________


6. What two methods does the GbE2 Interconnect Switch provide to interoperate with
PVST+?
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
7. What failures does the redundant architecture of the GbE Interconnect Switch protect
against?
_______________________________________________________________
_______________________________________________________________
8. Explain iLO aggregation in the new server blade enclosure.
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
________________________________________________________________
9. The iLO NIC is a 10/100 NIC on all ProLiant BL p-Class servers.
True
False


Configuring the HP ProLiant BL GbE2 Interconnect Switch
Module 4 Lab 1

Objectives
After completing this lab, you should be able to:

Set up and cable the HP ProLiant BL GbE2 Interconnect Switch

Access the switch console interface

Set a static IP address for the switch management interface

Manipulate the switch configuration files and firmware images

Access the GbE2 Interconnect Switch with a web browser

Configure port trunking

Requirements
Depending on the hardware resources in the classroom, your instructor might give
you the option of completing a subset of exercises in this lab or the instructor
might demonstrate the GbE2 Interconnect Switch accessibility and configuration.
However, to complete all exercises, you will need:

HP ProLiant BL GbE2 Interconnect Switch Kit

One of the following:

ProLiant BL p-Class server blade enclosure and power enclosure with
the appropriate power supplies and cabling

ProLiant BL p-Class diagnostic station with necessary cables to power
the switch

A null modem cable

An Ethernet network with appropriate cabling

A Microsoft Windows client with:

Microsoft Internet Explorer 5.5 or later

A NIC with the TCP/IP protocol

Available DB9 serial port

A GbE2 Interconnect Switch Kit quick install card and user manuals if you
are not familiar with this hardware installation (optional)

A Dynamic Host Configuration Protocol (DHCP) server with the Bootstrap
Protocol (BOOTP) support enabled and network connectivity between the
switch and the BOOTP server (optional)

Note
All exercises in this lab can be completed with a single switch either installed
in a powered p-Class server blade enclosure or attached to a diagnostic station.
Also, any exercise requiring network connectivity can be accomplished with a
connection to the front RJ-45 ports on the switch or through the network cubes
in the back of the enclosure. Only the minimum hardware requirements are
listed in the Requirements section.


Overview
Deploying an HP ProLiant BL GbE2 Interconnect Switch requires extensive
knowledge of the network on which the server blade system is being installed. For
example, design of the physical network, details of spanning tree settings, and
configuration of the local Virtual Local Area Networks (VLANs) could be critical
to successful implementation of the BL series switch on the network.
Because there are an infinite number of deployment scenarios, and because labs
specific to the spanning tree protocol (STP) and VLANs require additional
networking equipment that might not be readily available, this lab guide covers
procedures for local and remote access to the switch and a few key configuration
concepts.

Rev. 4.41

Important
For all file and CD locations noted in this lab, use either a CD or a network
repository of source files as identified by your instructor. Verify all IP
addresses with your instructor before adding or modifying any IP addresses.

L4.1 205


Exercise 1: Setting up and cabling the hardware

If the class is sharing a single server blade enclosure, your instructor might
demonstrate the steps in this exercise, or might have completed them before the
class began. If your server blade enclosure is not set up, you must complete this
exercise.

1. Install the GbE2 Interconnect Switch option in the enclosure or attach it to
the diagnostic station with the appropriate power cable. The switch powers
up automatically when installed in the enclosure or attached to a powered
diagnostic station.

Note
You will not be using any server blades for this lab. However, if you are using
an enclosure that has blades installed, they can be left in.

2. Connect the switch to the TCP/IP network by plugging a network cable into
either of the RJ-45 ports on the front of the switch. The ports on the front are
labeled management ports but function like any other Ethernet port on the
switch.

Front ports

3. Plug one end of a null modem cable into the console port on the front of the
switch. Plug the other end of the cable into a serial port on your Microsoft
Windows client computer.


Exercise 2: Accessing the switch console interface
HP ProLiant BL GbE2 Interconnect Switches can be managed with a text-based
scriptable command line interface that is accessed either over a TCP/IP network
with a Telnet session or directly through a serial (RS-232) connection. With a
Telnet session, you must know the IP address of the switch management interface.
If the ProLiant BL GbE2 Interconnect Switch is installed on a network that does
not dynamically register clients in DHCP or BOOTP, you must first access the
switch locally through the console port with a terminal session. When connected,
you can gather the information necessary to access the switch remotely, or
configure the switch through the command line interface.
In this exercise, you will access the switch locally through the console port.

1. On the Windows workstation, start HyperTerminal and create a new
connection. You can give the connection any name, but be sure to choose the
correct COM port in the Connect Using drop-down box and to use the port
settings shown in the following graphic. Click OK to connect to the switch.

Important
The bit rate must be set to 9600 baud because the switch will communicate
only at that speed through the console port.


2. If the switch completes the power-on self-test (POST), you might have to
press Enter for the login prompt. Log in with the administrator credentials
provided by your instructor (or the documentation that shipped with the
network tray). By default, the password is admin. There is no user name
prompt with a console session.

Console session login screen

3. Navigate through the menus in the interface. Use the menu choices to access
and view the IP address of the switch. Record the sequence of commands you
use and the IP address of the switch.
Command sequence: ..........................................................................................
............................................................................................................................
............................................................................................................................
Switch IP address: ..............................................................................................
The switch IP address is an important setting because, in its default
configuration, the switch receives an IP address through BOOTP. Most
administrators set a static IP address for the switch for management purposes.


Exercise 3: Setting a static IP address for the switch management interface
In most network environments, administrators set a static IP address for the switch
management interface and connect it to a dedicated network, or a separate VLAN,
to segment switch management traffic from everyday corporate traffic.
It is important to understand that this interface or IP address is the address of the
switch, not a specific port on the switch. In fact, depending on the configuration,
the IP address of the switch can be accessed through any Ethernet port on the
switch, including uplinks, downlinks, and the ports on the front labeled
management.
Note
You can manage the switch through a downlink to a blade by logging in to the
blade through HP integrated Lights-Out (iLO) and using a Telnet connection to
the switch management interface, or by browsing with a web browser.

The switch can have up to 256 interfaces (or IP addresses) configured for
management purposes. By default, Interface 1 is the only one enabled, and it is set
to obtain an IP address from a BOOTP server. Subsequently configured interfaces
must be configured and then enabled before they can be used.
Example
If you configured Interface 2 with an IP address of 10.1.1.1 to match a
management network, you would not be able to access the switch on that
network until you enabled that interface.

Note
The following commands are used in this exercise:
.   Shows current working menu
..  Backs out one menu (to the parent menu)


1. From the main menu of a console session, use the cfg ip if menu to
navigate to Interface 1.
The console interface can be navigated by entering one command at a time
and then pressing the Enter key to get to submenus. In some cases, you can
enter a command with subsequent subcommands on a single line. The
following graphics show two ways of navigating to the if menu for
Interface 1.

if menu for interface 1 accessed one command at a time

if menu for interface 1 accessed with a single command
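The two navigation styles look like this in a console session (prompt text approximated
from the menu names used in this lab):

    One command at a time:
    >> Main# cfg
    >> Configuration# ip
    >> IP# if 1
    >> IP Interface 1#

    With a single command:
    >> Main# cfg ip if 1
    >> IP Interface 1#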


2. Use the cfg ip if menu commands to set a static IP address of
192.168.2.1 for Interface 1. Which command did you use to accomplish this
task?
............................................................................................................................

3. Use the cfg ip if menu commands to enable Interface 2. Which
command did you use to accomplish this task?
............................................................................................................................
What else did you have to do to get to the Interface 2 menu?
............................................................................................................................

4. Because you will not be using Interface 2, use the dis command to disable
that interface.

5. Use the diff command to view the pending configuration changes before
applying them. Use the apply command to apply any pending configuration
changes. These commands may be used from any menu.

Important
The apply command does not save the configuration. If the switch is disabled
or loses power before the configuration is saved, these changes will be lost.
The save command, used to save the configuration to memory in the switch, is
covered in the following exercise.


Exercise 4: Working with configuration files and firmware images

There are three configuration files for the switch and three operating system
firmware images. The configuration files are:

Active: The current, or last applied, configuration for the switch (exists in
volatile, read/write memory). The switch loads this configuration by default
during POST.

Backup: The backup configuration for the switch (exists in non-volatile,
read/write memory). The switch loads this configuration during the first
POST following a loss of power.

Factory: The default factory configuration (exists in read-only memory).
The switch loads this configuration the first time it is powered on. You can
reset the switch to the factory image at any time.

Note
These files are backed up to and restored from a Trivial File Transfer Protocol
(TFTP) server with the ptcfg and gtcfg commands in the Configuration menu.
You can also use the scp command for backing up and restoring the
configuration file after enabling the Secure Shell (SSH) interface.

The operating system images are:

Image 1: The switch operating system (also referred to as the firmware
image).

Image 2: A second copy of the firmware image. This is usually the
previous version and used as a backup image following an upgrade. It can
also be used to store a backup copy of the operational image in case it
becomes corrupt.

Boot: The switch BIOS. This image boots the switch hardware.

Note
These images can be uploaded to and downloaded from a TFTP server by
using the ptimg and gtimg commands from the Boot Options menu. These
images also can be downloaded to the switch using XModem through a serial
connection by entering download mode during switch boot up. Refer to the
switch user guide for specific procedures for using XModem.


Saving configuration files

In the previous lab, you used the apply command to commit your configuration
changes. However, this command does not save the changes to the active
configuration file. To save the changes from any menu in the console interface,
enter save and press Enter.
The save command allows you to save the current configuration to the active
configuration file and overwrites the backup file. The save n command allows you
to save the current configuration to the active configuration file without
overwriting the backup file.
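The relationship between these commands can be summarized in one short console sequence
(a sketch; prompt text approximated):

    >> Main# diff       (review pending configuration changes)
    >> Main# apply      (make the pending changes active)
    >> Main# save       (write the active configuration to flash, overwriting the backup file)
    >> Main# save n     (alternative: write the active configuration without overwriting the backup file)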

Important
Before you use the save command, ask your instructor if you should overwrite
the backup file. The instructor might have a reason for keeping the backup file.

Boot options
The Boot Options menu allows you to choose which configuration file to use on
the next boot and to choose the operating system image to boot to.

1. From the main menu, use the boot command to enter the Boot Options
menu. Which command in this menu would you use to configure the switch
to boot from Image 2?
............................................................................................................................

2. If your class has access to a TFTP server with a new software image file for
the switch, enter the gtimg command to download the new software image
to your switch.

3. If you completed the previous step, what must you do to ensure you booted to
the new image? Be prepared to discuss your answer with the class.
............................................................................................................................
............................................................................................................................
............................................................................................................................


Exercise 5: Accessing the GbE2 Interconnect Switch with a web browser
The GbE2 Interconnect Switch includes a built-in web-based management
interface. The interface is accessed with a web browser over a TCP/IP network.

1. In your web browser address field, enter http://192.168.1.10 and
press Enter.

Note
You must set a static IP address on your Windows client NIC that is in the
same subnet as the IP address of your switch.

2. At the login prompt, enter the administrator user name (admin) and password
provided by your instructor and click OK.


3. Navigate through the menus in the web interface and notice how they change
when you click one of the three large buttons across the top of the page.

Configure: Used for configuring the switch
Statistics: Used to view various statistics on the switch
Dashboard: Used to view the current configuration

4. Compare the procedure you used to find the switch IP address in the console
interface to the comparable procedure in the web-based interface. Document
the web-based procedure.
............................................................................................................................
............................................................................................................................
............................................................................................................................
............................................................................................................................
............................................................................................................................
............................................................................................................................

Note
Currently, the web-based interface for the GbE2 switch does not provide full
switch management capabilities. Several procedures require the use of the
console session. HP recommends that all switch administrators learn the
console interface for administration. Additional capabilities are planned for the
web interface in the near future.


Exercise 6: Configuring port trunking
Each switch supports up to 12 trunk groups. A trunk group for the crosslink ports
is set up by default on all switches.
Any ports that are connected to the same device (a server blade, another switch,
and so on) can also be assigned to a trunk group. Which downlink ports (to the
blades) can be assigned to a trunk group?
With a BL40p server blade: ...............................................................................
With a BL20p G2 server blade: .........................................................................
In the console interface, the steps to assign a pair of ports to a trunk group are as
follows:

1. To view the current trunk settings, enter the trunk command in the
Configuration menu and press Enter.

2. By default, only one trunk group is configured (Group 1). At the prompt
following the trunk command, enter 1 and press Enter. The Trunk Group 1
menu displays.

3. Enter the cur command to view the current ports assigned to that trunk
group. If the switch is in its default configuration, ports 17 and 18 are in
trunk group 1.
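As a quick reference, steps 1 through 3 correspond to a console session along these lines
(prompt text approximated):

    >> Main# cfg                  (enter the Configuration menu)
    >> Configuration# trunk 1     (open the menu for trunk group 1)
    >> Trunk group 1# cur         (list the ports currently assigned to the group)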

4. Starting from the current screen, what is the sequence of commands you
would use to create a second trunk group with two uplinks assigned?
............................................................................................................................
............................................................................................................................
............................................................................................................................
............................................................................................................................
............................................................................................................................
............................................................................................................................

5. What final step would you have to complete to ensure the new trunk group is
available if the switch is powered off and moved to a different enclosure?
............................................................................................................................


Configuring VLANs and STP with the HP ProLiant BL GbE2 Interconnect Switch
Module 4 Lab 2

Objectives
After completing this lab, you should be able to:

Verify connectivity between server blades on two separate switches and a
single connection between the switches

Add Virtual Local Area Network (VLAN) connectivity between servers on
separate switches

Apply the basic concepts of the Spanning Tree Protocol (STP) by using
VLANs in conjunction with STP

Requirements
Depending on the hardware resources in the classroom, your instructor might give
you the option of completing a subset of exercises in this lab. Or, the instructor
might demonstrate these concepts. However, to complete all exercises, you will
need:

An HP ProLiant BL GbE2 Interconnect Switch Kit, which includes two
GbE2 Interconnect Switches

A ProLiant BL p-Class server blade enclosure and power enclosure with the
appropriate power supplies and cabling

Two HP ProLiant BL20p or BL20p Generation 2 (G2) server blades with a
supported Microsoft Windows operating system installed

A null modem cable

An Ethernet network with appropriate cabling

A Microsoft Windows client with:

Microsoft Internet Explorer 5.5 or later

A NIC with the TCP/IP protocol

Available DB9 serial port

Ensure that the management PC is connected to an RJ-45 management port
or uplink on one of the switches, or to another network device connected to
both switches.


Exercise 1: Verifying connectivity
To verify connectivity between servers on two separate switches and a single
connection between the switches, follow these steps:

Important
During this lab, be sure to save the switch settings often. Saving the settings is
different from applying any changes, which only applies to the current
working settings, not the saved configuration. To save the settings, use the
save command in the main menu of the command line interface. If you restart
the switch without saving the settings to flash memory, any unsaved
configuration changes will be lost.

1. Set the IP address of the NIC in the management PC to 192.168.1.201.

Note
All NICs and ports will be on a 255.255.255.0 subnet mask. Depending on the
lab setup, your instructor might provide alternate IP addresses for this exercise.

2. For each server blade, log in to HP integrated Lights-Out (iLO), start a
remote console session, and set the IP address of the NICs according to the
scheme listed in the following tables.

Blade 1 (Bay 1)             IP Address      Switch Port
iLO NIC                     192.168.1.101   Switch B, port 1
Local Area Connection 1     192.168.1.11    Switch A, port 1
Local Area Connection 2     192.168.1.12    Switch A, port 2
Local Area Connection 3     192.168.1.13    Switch B, port 2

Blade 2 (Bay 2)             IP Address      Switch Port
iLO NIC                     192.168.1.102   Switch B, port 3
Local Area Connection 1     192.168.1.21    Switch A, port 3
Local Area Connection 2     192.168.1.22    Switch A, port 4
Local Area Connection 3     192.168.1.23    Switch B, port 4

Note
If the iLO NIC is not already set to the address shown in the table, connect to
the iLO (at 192.168.1.1) with the diagnostic cable and set the iLO address
according to the table. To do this, you must put your management PC NIC on
the 192.168.1.0 network.

The configuration now consists of two server blades connected to two
separate switches. Each server blade has two data NICs connected to Switch
A, and one data NIC and the iLO connected to Switch B. There is a single
link (the trunked crosslinks) between the two switches. Also, all ports are on
the default VLAN 1 so that all ports can communicate with each other.


3. Use a remote console session through iLO on one of the server blades and
issue a ping command between all the NICs you configured in the previous
step. The ping commands from any address on the 192.168.1.0 network to
any other address in the system complete successfully.

Note
Notice that you can also issue the ping command from the server blade to the
management interface on the switch. The management interface for Switch A
is set to 192.168.1.10 and Switch B to 192.168.1.20.

4. Use the configuration port command menu in a management console
session to disable the crosslink ports (17 and 18). To disable the port, enter
the port number and use the dis command. Complete these steps for both
crosslink ports and apply the configuration changes.
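A sketch of step 4 as a console session (the /cfg/port path form and the prompt text are
approximations of the menus used in the previous lab):

    >> Main# /cfg/port 17     (open the menu for crosslink port 17)
    >> Port 17# dis           (disable the port)
    >> Port 17# /cfg/port 18  (open the menu for crosslink port 18)
    >> Port 18# dis           (disable the port)
    >> Port 18# apply         (activate the pending changes)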


5. Using an iLO console session to Blade 1, issue a ping command to the other
NICs in the system. Did you receive a reply from the NICs connected to
Switch B?
............................................................................................................................
Even though you have a NIC on Blade 1 connected to Switch B, you will not
receive a reply from the Blade 2 NICs on Switch B because of the way the IP
protocol works. The ping is only sent on one NIC, usually the first NIC
enumerated by the operating system. In the default configuration, that NIC is
the one named Local Area Connection (LAC in the following graphic).
If you disable LAC and LAC2, you can ping the NICs on Switch B because
the routing table on Blade 1 changes so that the ping request is sent out
through LAC3 (the only remaining data NIC).

Important
If you disable LAC and LAC2 to test the concept, you must also disable LAC2
and LAC3. Then enable LAC and repeat the ping request. This process resets
the routing table to its original configuration. Otherwise, some of the steps
later in this exercise will not work.

[Diagram: NIC port mappings. On each blade, LAC and LAC2 connect to Switch A,
and the iLO NIC and LAC3 connect to Switch B; ports 17 and 18 crosslink the two
switches.]


Exercise 2: Adding VLAN connectivity between servers on separate switches
In this exercise, you will add two static VLANs on each switch to segment traffic
to and from specific ports on the switch (and the associated NICs on the servers).
Each switch ships with all ports in VLAN 1 by default. You will be adding VLAN
2 and VLAN 3.

1. Log in to the management console interface for Switch A.

2. Use the configuration ports menu to re-enable the crosslink ports on
Switch A.

3. To allow traffic between the switches, the crosslink ports must be included in
all VLANs. To add a port to more than one VLAN, you must first enable
tagging on that port. Use the configuration ports tag menu to enable
tagging on port 17 as shown in the following graphic.

Enabling tagging on port 17
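The tagging step looks roughly like this in a console session (the exact form of the tag
subcommand and the prompt text are assumptions):

    >> Main# /cfg/port 17     (open the menu for port 17)
    >> Port 17# tag ena       (enable VLAN tagging on the port)
    >> Port 17# apply         (activate the pending change)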

4. Repeat step 3 for port 18.

5. Return to the Configuration menu by entering .. at the management
console prompt until the Configuration menu displays.


6. To configure a VLAN, enter the vlan command. When prompted for a
VLAN number, enter 2 and press Enter. This creates VLAN 2.

Creating VLAN 2

7. Repeat step 6 to create VLAN 3.

8. Enter the add command to add ports to the appropriate VLAN according to
the following table.

Switch   VLAN   Ports
A        2      1, 3, 17, 18
A        3      2, 4, 17, 18

9. Enter the ena command to enable each VLAN after configuring the ports.
Note
When you add the untagged switch ports connected to the NICs on the server
blades, they are removed from VLAN 1. If you enabled tagging on these ports
and added them to more than one VLAN, you would have to also enable
tagging on the NICs in Windows. In its default configuration, Windows does
not interpret tagged frames and will drop any incoming tagged packets.
Similarly, a tagged port on the switch will drop untagged packets that are
coming in.

10. You have now segmented the first two data NICs for each blade onto separate
VLANs. To test this configuration, log in to Blade 1 and ping LAC on
Blade 2 (192.168.1.21). Ensure that you receive a reply because both ports
are in VLAN 2 and both are connected to Switch A.


11. Ping LAC2 on Blade 2 (192.168.1.22). Did you receive a reply?

............................................................................................................................
Note
Remember, the ping request will only be sent out on the first NIC (unless you
changed the routing table by disabling the first NIC). Even though you have
another NIC on Blade 1 that is connected to VLAN 3, you still cannot ping
LAC2 on Blade 2 (which is also in VLAN 3), because the ping request is
coming from a NIC in VLAN 2. It will not reach Blade 2 because the switch
blocks all traffic to that port that does not come from a port in the same
VLAN. Notice that you also cannot ping anything on Switch B at this point
because no ports on Switch B have been added to VLAN 2.

[Diagram: VLAN configuration Intermediate configuration (V1=VLAN 1, V2=VLAN 2, and
V3=VLAN 3). On Switch A, the blade data NIC ports are assigned to VLAN 2 and VLAN 3
and the tagged crosslink ports 17 and 18 carry all three VLANs; all Switch B ports
remain in VLAN 1.]


12. Complete the configuration for the system by adding VLAN 2 and VLAN 3
on Switch B and enabling tagging on ports 17 and 18 so they can be added to
more than one VLAN. The following table shows the VLANs and ports to be
added on Switch B.

Switch   VLAN   Ports
B        2      2, 4, 17, 18
B        3      17, 18

The VLAN configuration on the two switches is different because Switch B
is connected to one data NIC and the iLO NIC on each server blade. Switch
A is connected to two data NICs on each server blade. In this configuration,
you are adding the data NICs on Switch B to VLAN 2 and leaving the iLO
NICs on VLAN 1 (which becomes a management VLAN). The following
graphic illustrates this VLAN configuration.
[Diagram: VLAN configuration Final configuration. Switch A carries the blade data NIC
ports in VLAN 2 and VLAN 3; Switch B carries its data NIC ports in VLAN 2 and the iLO
ports in VLAN 1; the tagged crosslink ports carry VLAN 1, VLAN 2, and VLAN 3 between
the switches.]


13. To verify the VLAN configuration on each switch, enter the vlan command
in the Information menu. The current VLAN configuration displays.

VLAN configuration for switch A

14. Issue a ping command from Blade 1 to the ports in VLAN 2 on Switch B.
Was the ping command successful?
............................................................................................................................
15. Issue a ping command from your management PC to both switches and to the
iLO ports on Switch B. Were the ping commands successful?
............................................................................................................................
You now have three VLANs configured on the system:

The default VLAN, VLAN 1, is the management VLAN that provides the
uplinks to the management PC and the iLO NICs.

VLAN 2 includes the crosslink ports on each switch and two ports for data
NICs on each switch.

VLAN 3 includes the crosslink ports on each switch and two ports for data
NICs on Switch A.

If this were a real configuration, what other steps would need to be performed on
the switch uplink ports to connect them to the corporate LAN?
............................................................................................................................
............................................................................................................................
............................................................................................................................


Exercise 3: Using VLANs and STP
Your system now has three VLANs on each switch, and all three of these VLANs
are in Spanning Tree Group 1. In this configuration, if the spanning tree protocol
blocks a link between the two switches, the link is blocked for all three VLANs.
Building on the lab setup so far, the following is a typical scenario where this
configuration can cause a problem.

Example
To put the server blade system into production on your corporate LAN, you
add all four uplink ports on the back of Switch A to VLAN 2 and VLAN 3,
and assign them into a trunk group to create a single 4Gb/s uplink. You do
the same on Switch B.
For security purposes, you take all of the uplink ports connected to the
corporate LAN out of VLAN 1 (your management VLAN). This leaves the
uplinks dedicated to data traffic.
For switch and iLO management, you connect one of the ports on the front of
Switch A to your management network and to your management PC.
[Diagram: All VLANs in the same spanning tree protocol group. Both switches uplink to
the corporate LAN on ports in VLAN 2 and VLAN 3; the management PC connects to a front
port on Switch A in VLAN 1; the crosslinks carry VLAN 1, VLAN 2, and VLAN 3 between
the switches.]


With the preceding configuration, the spanning tree protocol automatically
blocks the crosslinks to prevent a broadcast loop on the network. The
crosslinks get blocked instead of the uplink trunks on one of the switches
because, for any data traveling to and from the server blades, the cost of the
route through the crosslinks and then out the uplinks is higher than the cost of
the route straight out the 4Gb/s uplink to the corporate network.
However, after the crosslinks are blocked, all management traffic over
VLAN 1 between the crosslinks is also blocked. Remember, the crosslinks
were the only route for management traffic between the switches.
This problem can be solved by physically connecting your management LAN
to both switches. But in a more complex switched network, the problem can
become more complicated, and adding more physical routes (network
connections) is not practical.
The Per VLAN Spanning Tree plus (PVST+) protocol also solves the
problem. PVST+ allows multiple spanning tree groups (typically one for each
VLAN) and allows the spanning tree protocol to block a physical link for one
VLAN but leave the same link unblocked for other VLANs.
Note
The spanning tree groups are limited to a maximum of 16 on the HP
interconnect switches because 16 spanning tree groups are sufficient in a
server blade solution.


Switch operation with a single spanning tree protocol group

You can duplicate the previous situation by using a single uplink on each switch to
create another connection between the two and manually increasing the cost on the
crosslink ports to a value greater than the cost of the uplink port.

1. Use what you learned in the previous labs to perform the following steps on
Switch A:
a. Enable tagging on port 20.
b. Add port 20 to VLAN 2 and VLAN 3.
c. Remove port 20 from VLAN 1.

2. By default, the cost on all ports in the switch is set to a value of 4. Because
the uplinks are not connected to another switch upstream, you must manually
assign port costs so that the crosslink ports are a higher cost than the uplinks.
With the crosslinks set to a higher port cost, the spanning tree protocol will
block the crosslinks instead of the uplinks.
In a real network, the uplinks on each switch would be connected to another
switch upstream, which would be configured so that it becomes the root
switch. This additional factor in the spanning tree protocol topology would
cause the crosslinks to be blocked automatically.
To change the port cost on the crosslinks, go to the configuration stp menu
for group 1. Enter the port command as shown in the following graphic to
change the cost for port 18 to a value of 19. Do the same for port 17.
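The port cost change described above looks roughly like this in a console session (the
cost subcommand name and the prompt text are assumptions):

    >> Main# /cfg/stp 1                    (open the menu for Spanning Tree Group 1)
    >> Spanning Tree Group 1# port 18      (open the STP settings for port 18)
    >> STP Port 18# cost 19                (raise the cost above the default port cost of 4)
    >> STP Port 18# apply                  (activate the pending changes)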


3. Repeat step 1 on Switch B.

4. With an Ethernet cable, connect port 20 on Switch A to port 20 on Switch B.
This connection will function the same as it would if both uplinks had been
connected to another switch.

a. Ensure the NIC in your management PC is connected to only one of the
front ports on Switch A.

b. From your management PC, try to ping one of the iLO NICs. Did this
work? Why or why not?
............................................................................................................................
............................................................................................................................

The ping test may work the first time you try, but it will eventually stop
working because the spanning tree protocol will block the highest cost route
(the crosslink ports) to prevent a broadcast loop between the two switches.
You can speed up the test process by performing the ping test from the
management console interface for Switch A (use the same syntax as with a
command prompt in Windows).
You lose connectivity between the devices on VLAN 1 on each switch
because the crosslink was the only connection between the two switches that
was a member of VLAN 1, and it was just blocked by the spanning tree
protocol. The uplinks, which now form the only link between the two
switches, are not members of VLAN 1, so the ping from the management PC
(connected to Switch A) could not reach the iLO NIC through Switch B.


Switch operation with PVST+

To correct the problem caused in the previous steps, you must add a spanning tree
group for each VLAN.

Note
If you were using a Telnet session to access the management interface on the
switches, you must move your network cable to Switch B to make any
configuration changes on that switch.

1. On Switch A, select the configuration stp menu and add VLAN 2 to
Spanning Tree Group 2 as shown in the following graphic. Remember to
apply the changes.

Adding VLAN 2 to Spanning Tree Group 2

2. Repeat step 1 for VLAN 3 and Spanning Tree Group 3.


Note
When you added VLAN 2 to Spanning Tree Group 2 and VLAN 3 to
Spanning Tree Group 3, those VLANs were automatically taken out of
Spanning Tree Group 1.

3. Repeat steps 1 and 2 on Switch B.


4. On Switch A, change the cost for ports 17 and 18 to 19 for Spanning Tree
Group 2 and Spanning Tree Group 3 (like you did for Spanning Tree Group 1
in the previous section). Be sure to apply all configuration changes on both
switches.
When these configuration changes take effect, the crosslink port cost will be
greater than the uplink port cost for all spanning tree groups. PVST+ will
block the crosslink ports for any VLAN traffic that is assigned to both the
crosslink ports and the uplink ports. It will not block the crosslink ports for
any VLAN traffic that is only assigned to the crosslink ports.
The crosslinks are blocked for VLAN 2 (Spanning Tree Group 2) and VLAN
3 (Spanning Tree Group 3). No ports are blocked for VLAN 1 (Spanning
Tree Group 1) traffic, because the crosslink is the only route available
between the switches for VLAN 1 traffic.
[Diagram: VLAN 2 and VLAN 3 blocked on crosslinks. The uplinks (cost 4) carry VLAN 2
and VLAN 3 between the switches; the crosslinks (cost 19) are blocked by PVST+ for
VLAN 2 and VLAN 3 traffic but remain forwarding for VLAN 1.]

5. Test the configuration by performing the ping test from your management PC
(on a VLAN 1 port on Switch A) to an iLO interface (on a VLAN 1 port on
Switch B).
Note
It may take a few minutes for the spanning tree group topology to change and
for the crosslinks to be unblocked for VLAN 1 traffic.


Accessing and Configuring iLO
Module 4 Lab 3

Objectives
After completing this lab, you should be able to:

Access and configure the integrated Lights-Out (iLO) of your server blade

Upgrade the HP BladeSystem firmware

Requirements
To complete this lab, you will need:

One ProLiant BL p-Class server blade enclosure with supported interconnect
option

One single-phase power enclosure with two or more power supplies

One or more ProLiant server blades such as the ProLiant BL20p Generation 2
(G2)

One diagnostic cable


Exercise 1: Accessing and configuring the server blade iLO
iLO is a standard component of most ProLiant servers and provides server health
and remote server manageability. It is the primary means of accessing the ProLiant
BL p-Class server blades for configuration purposes. A complete list of supported
servers is available on the HP website.
Note
This section contains the basic iLO configuration and functionality as it applies
to HP BladeSystems. For additional information on iLO functionality and
general Lights-Out management, refer to the Lights-Out web-based training or
the Enterprise Systems Management course.

You can access the server blade iLO using either of the following methods:

With the diagnostic cable or the local I/O cable

Over the network with a web browser

Accessing the server blade iLO with the diagnostic cable or the
local I/O cable

The default IP address for the iLO diagnostic port connection is 192.168.1.1 with a
subnet mask of 255.255.255.0. The Network Settings configuration page allows
you to change the IP configuration for the iLO diagnostic port if the default values
are not appropriate for your environment.


To access the server blade iLO with the diagnostic cable or with the local I/O
cable:
1. Connect the diagnostic cable to the diagnostic port in the front of the server blade. If using a server blade that requires the local I/O cable, connect the local I/O cable to the server blade instead.

Note
The diagnostic cable is a Y-cable and incorporates a crossover so that a normal RJ-45 cable can be used between the diagnostic cable and the management station.

2. Set your management station to the following IP address and subnet mask (see the netsh sketch after these steps):
IP address: 192.168.1.200
Subnet mask: 255.255.255.0

3. Using an RJ-45 network cable, connect your management station to the RJ-45 port of the diagnostic cable.

4. Start Microsoft Internet Explorer, enter http://192.168.1.1 in the Address field, and press Enter.

5. When the web browser successfully connects to the iLO, a security alert displays. Click Yes to accept the certificate.


6. At the Account Login screen, enter either the default login credentials provided on the iLO Default Network Settings tag or the user name and password provided by your instructor. Click Log In to proceed to the iLO web-based interface.
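If you prefer the command line over the Network Connections control panel, the management station address from step 2 can also be set with the built-in netsh tool on Windows XP or Windows Server 2003. This is only a sketch; the connection name Local Area Connection is an assumption and may differ on your station:

rem Assign the static address used to reach the iLO diagnostic port
rem ("Local Area Connection" is an assumed connection name)
netsh interface ip set address "Local Area Connection" static 192.168.1.200 255.255.255.0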

Accessing the server blade iLO over the network


To access the server blade iLO over the network using a web browser, you need
either the IP address or the Domain Name System (DNS) name of the iLO. If you
have new blades or blades with default settings, the iLO can obtain its IP address
from a Dynamic Host Configuration Protocol (DHCP) server. The default DNS
name, in addition to the Administrator username and default password for the
blade, is printed on a tag that ships attached to each server blade.

1. Start Microsoft Internet Explorer, enter http://defaultDNSname, where defaultDNSname is the DNS name of the iLO, in the Address field, and press Enter. The default DNS name is the name supplied on the blade iLO Default Network Settings tag.

2. When the web browser successfully connects to the iLO, a security alert displays. Click Yes to accept the certificate.

3. At the Account Login screen, enter either the default login credentials provided on the iLO Default Network Settings tag or the user name and password provided by your instructor. Click Log In to proceed to the iLO web-based interface.
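For example, if the tag lists the default DNS name ILO8J35LFR3T03D (the sample name used in the iLO Default Network Settings tag section later in this lab), you would enter:

http://ILO8J35LFR3T03D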


Configuring iLO
To configure the iLO:

1. At the iLO home page, review the information displayed.

2. Click the Administration tab and select Network Settings in the left menu. This screen allows you to configure iLO for your environment.


3. At the Network Settings screen, configure the following iLO network settings:
Enable DHCP: No
IP Address: <IP address supplied to you by your instructor>
Subnet Mask: <subnet mask supplied to you by your instructor>
Domain Name: class.local
Primary DNS Server: 192.168.0.1

4. Scroll down to the iLO Diagnostic Port Configuration Parameters section. This section contains the diagnostic port access information.

5. Click Apply to save the settings. iLO resets itself with the new settings and redirects you back to the login page.

6. Log in again, and at the iLO home page, click the BL p-Class tab.


7. The Rack Settings page allows you to enter rack information, such as the rack name, enclosure name, and bay name. You can also change the power-on control settings.


8. Click the Rack Topology, Server Blade Mgt. Module, Power Mgt. Module, and Redundant Power Mgt. Module options on the left side of the screen, and familiarize yourself with the displayed information and settings.

9. Click Log out in the top right corner of the screen to log out of iLO.


iLO Default Network Settings tag


Each Lights-Out device includes a network settings tag that identifies the default
DNS name, administrator account, and a case-sensitive password. The default
DNS name is, for example, ILO8J35LFR3T03D, where the letters and numbers
following ILO represent the server serial number. The DNS name can be
changed to any combination of letters and numbers. The iLO Default Network
Settings tag also includes a bar code of the information for easy scanning.
The iLO Default Network Settings tag is useful if the Administrator password is
lost or forgotten. If the tag is lost, you can view and configure the default DNS
name and user name in the ROM-based utility.
Locate the iLO Default Network Settings tag and enter the following information
as it pertains to your server blade:

Server Serial Number: ...............................................................................

User Name: ................................................................................................

DNS Name: ...............................................................................................

Password: ..................................................................................................


Resetting the iLO to the factory settings (optional)

When all access information to the iLO is lost, you must reset the iLO back to its factory settings to regain access.
To reset the iLO configuration back to the factory settings:
1. Power down the server blade and remove it from the server blade enclosure.

2. Open the server blade by removing its cover.

3. Use the inside cover legend to locate the server battery. Remove the battery, wait a couple of minutes, and reinstall the battery.

4. Reinstall the cover and reinsert the server blade into the server blade enclosure. Apply power to the server blade enclosure if necessary.

5. Connect the diagnostic cable to the diagnostic port in the front of the server blade. If you are using a server blade that requires the local I/O cable, connect the local I/O cable to the server blade instead.

6. Set your management station to the following IP address and subnet mask:
IP address: 192.168.1.200
Subnet mask: 255.255.255.0

7. Using an RJ-45 network cable, connect your management station to the RJ-45 port of the diagnostic cable.

8. Start Microsoft Internet Explorer, enter http://192.168.1.1 in the Address field, and press Enter.

9. At the iLO Account Login screen, enter Administrator in the Login Name field and enter the password listed on the iLO Default Network Settings tag in the Password field. Click Log In to continue.

10. The iLO configuration settings were restored to factory defaults when the
server battery was removed. After you log in to iLO, configure iLO as
desired.
11. Disconnect from the server blade when done and remove the diagnostic
cable.


Bypassing the iLO security

To bypass the iLO security when the administrator credentials are lost or forgotten, you must have physical access to the server. Complete these steps:
1. Power down the server blade and remove it from the server blade enclosure.

2. Open the server blade by removing its cover.

3. Use the inside cover legend to locate the iLO Security Override switch. For example, on the ProLiant BL20p G2 server blade, the iLO Security Override switch is DIP switch number 1 on the maintenance switch (SW4).

Note
You may need to temporarily remove the processor power module in slot 1 for easier access to the switch.

4. The switch is normally in the off position, meaning iLO security is active. Toggle the DIP switch to the on position.

5. Reinstall the cover and reinsert the server blade into the server blade enclosure. Apply power to the server blade enclosure if necessary.

6. Open Microsoft Internet Explorer and access the iLO of your blade server.

7. The Account Login window still displays, but now contains an alert message: Alert! iLO security override switch is set. Security enforcement is disabled. Do not enter any information in the Login Name and Password text boxes; instead, click Log In.


8. At the iLO home page, select the Administration tab and configure users and their passwords as desired.

9. When done, log out of iLO and power down the server blade. Remove the server blade and repeat steps 1 through 5 to toggle the DIP switch to the off position and re-enable iLO security.

Upgrading iLO firmware


Firmware upgrades enhance the iLO functionality. A firmware upgrade can be
completed from any network client using a standard web browser. Only users with
the Update iLO Firmware privilege can upgrade the iLO firmware.
The most recent firmware for the iLO is available on the HP website at:
http://www.hp.com/servers/manage
Two SoftPaqs are available; one creates a .bin file and the other creates a
diskette.

To upgrade iLO firmware, complete these steps at your management station:

1. Open Microsoft Internet Explorer and log in to the iLO of your blade server.

2. At the iLO home page, click the Administration tab and select the Upgrade iLO Firmware option on the left of the screen.


3. At the Upgrade iLO Firmware screen, click Browse next to the New firmware image text box and navigate to the iLO firmware image provided by your instructor. Click Open.


4. The iLO firmware image is listed in the New firmware image text box. Click Send firmware image.

Important
Do not power cycle, click another link, or otherwise interrupt the firmware upgrade while it is in progress.

5. The firmware upgrade process takes less than two minutes. When completed, iLO resets itself and redirects you back to the login page.

6. Log in to the iLO.


The Status Summary screen displays the current iLO firmware version.


Exercise 2: Upgrading the BladeSystem firmware

To upgrade the BladeSystem firmware, such as the system, array controller, disk drive, and management processor firmware, complete these steps at your management station:

1. Locate the ProLiant Firmware Maintenance Release 7.10 Server and Options Firmware for ProLiant BL, ML, and DL 300, 500, and 700 Servers CD and insert it into the management station CD-ROM drive.

2. If the CD autostarts, click Disagree at the End User License Agreement screen.

3. Open Microsoft Internet Explorer and log in to the iLO of your server blade.

4. At the iLO home page, click Virtual Devices → Virtual Media.

5. At the Virtual Media window, select D: from the Local CD-ROM Drive pull-down menu, and click Connect. The Virtual Media window then looks similar to the following graphic. Minimize the window.


6. At the iLO home page, click Remote Console → Remote Console (dual cursor). The iLO Remote Console window opens.

7. Toggle back to the iLO home page, click Virtual Devices → Virtual Power, and power on your target server blade.


8. Toggle to the iLO Remote Console window and observe the server blade behavior. The server blade boots from the ProLiant Firmware Maintenance CD.

9. At the ProLiant Firmware Maintenance CD 7.10 screen, select the appropriate language and keyboard, and click Accept.

10. At the End User License Agreement screen, click Agree.


11. At the Welcome screen, click ROM update utility.


12. The ROM Update Utility scans the system and determines what firmware
should be upgraded. It presents this information and the estimated time on the
following screen.

13. Click Update Now to start the process. When done, exit the ROM Update
Utility and the Firmware Maintenance CD.
14. Close the iLO Remote Console session.
15. Toggle to the Virtual Media window, click Disconnect and close the window.
16. Log out of the iLO session and close the web browser window.
The HP BladeSystem is now ready for operating system deployment.


Deploying ProLiant BL p-Class Server Blades

Module 5


Objectives
After completing this module, you should be able to:
Deploy HP BladeSystem servers using:
HP ProLiant Essentials Rapid Deployment Pack (RDP)
HP integrated Lights-Out (iLO)
HP Systems Insight Manager
Prepare a deployment server
Use RDP and iLO to manage an HP BladeSystem solution


Using RDP to deploy HP BladeSystem servers


RDP integrates HP and Altiris software to automate the process of deploying and provisioning
server operating systems and software, enabling companies to quickly and easily adapt to
changing business demands.
RDP combines the graphical user interface (GUI)-based remote console deployment capability
of the Altiris Deployment Server with the power and flexibility of the HP SmartStart Scripting
Toolkit, through the integration of the ProLiant Integration Module.
RDP features:
Deployment Solution
  Out-of-the-box functionality for ProLiant servers
  Advanced features for ProLiant server blades
  iLO management capabilities
ProLiant Integration Module
  ProLiant server preconfigured scripts and batch files
  Sample configuration jobs
  Integrated support software
HP SmartStart Scripting Toolkit
RDP is available in two editions:
Windows edition
  Deploys Microsoft Windows, Red Hat Linux, and UnitedLinux
  Ideal for heterogeneous Windows/Linux environments
Linux edition
  Deploys Red Hat Linux and UnitedLinux
  Ideal for homogeneous Linux environments


Using RDP to deploy HP BladeSystem servers (continued)


When RDP 2.0 and Systems Insight Manager 4.2 are released, the two applications will integrate to resolve server failures. To accomplish this, several servers must be dedicated as spares. When an in-service server fails, Systems Insight Manager will detect it. It then activates RDP and passes along the name of the failed server. RDP deploys the image of the failed server onto the spare server, which carries out the function of the failed server. This integration enables problem resolution without server downtime.
Note: To deploy the server blades without RDP, you must create a bootable diskette or an image of a bootable diskette and use iLO virtual media capabilities.


Deployment Solution components


The Deployment Solution is based on a distributed model, which means you can install most of
the Altiris components on the same computer or on different computers. You can also have
multiple installations of most of the components in your system.
Components of the Deployment Solution are:
Deployment Server
Deployment Server console
Deployment Server web console
Deployment Server database
Preboot eXecution Environment (PXE) Server
Client Access Point (file server)
Dynamic Host Configuration Protocol (DHCP) server
Services that the Deployment Solution provides include:
Wake-on-LAN (WOL) services
Task scheduling
Migration and remote control of PCs
Deployment using imaging or scripting
The benefit of imaging is speed; the benefit of scripting is flexibility. By providing both capabilities, RDP provides the most powerful features in one package.
Deployment Server provides a built-in imaging utility along with post-imaging configuration utilities for Windows servers. You can use these utilities to modify the security identifier (SID), computer name, IP address, and domain accounts.

Deployment Server
The main component of the Deployment Solution, Deployment Server is a fast, easy, point-and-click solution for deploying servers using imaging or scripting. Deployment Server
controls the flow of work and information between the managed and client computers and the
other Deployment Solution components. The managed and client computers connect and
communicate with the deployment server to register information about themselves using the
Deployment Agent for Windows and Deployment Agent for DOS. This information is stored in
the database.
Communication between the Deployment Server console and the other components ensures
that all work needed to manage the computers is performed correctly. The managed and client
computers need access to the deployment server at all times. If a client or managed computer
cannot communicate with the deployment server, remote management of the client cannot
occur.
You can install only one Deployment Server instance per Deployment Solution. However, you
can install multiple Deployment Solutions, each with their own deployment server.
Note: Deployment Server is a software component of RDP. After you install it on a server, that
server is referred to as a deployment server.
With Deployment Server, labor-intensive script writing is not required to set up or manage HP
ProLiant servers. Pre-configured scripts enable quick and easy configuration. These scripts are
installed on the deployment server by the ProLiant Integration Module and enhance the native
ability of Deployment Server.
The intuitive click-and-drag wizards walk novices through the most common management
tasks (including installation). Advanced users can take advantage of powerful advanced
features and shortcuts.
Deployment Server runs as a Windows service.


Other components of the Deployment Solution


Use the Deployment Server console to view and manage the Deployment Server database and to remotely manage the computers in your network.
From the Deployment Server console, you can assign work such as imaging, removing
and installing applications, and backing up and restoring registries. Through the
Deployment Server console you can also control computers remotely.
The Deployment Server web console provides web browser access to individual
Deployment Server sites on Windows or Linux computers. This console also provides
access from mobile computers and wireless handhelds. You can manage all computer
devices from a browser using the Deployment Server web console.
The Deployment Server database is the heart of the Deployment Solution system. The
database must be either Microsoft SQL Server 2000 (or SQL Server 7) or Microsoft Data
Engine (MSDE). A Deployment Server database should only have one deployment server
communicating with it.
PXE Server enables servers that support PXE technology to boot to the required DOS
environment. PXE provides an automated hands-free bootstrap method for server
deployment and is needed to use RDP. PXE uses an industry-standard network boot process
to enable remote, headless deployment of servers or desktops. Target servers can connect to
the deployment server by performing a PXE boot and receiving a boot image over the
network. The boot image works like a boot diskette.
The Client Access Point is a shared device that provides the information needed to manage
your network. The directory where you install the Client Access Point must be accessible
from your Windows/DOS clients.
The DHCP server is a system set up to assign TCP/IP addresses to computers. The DHCP
server is required to use the PXE Server. HP recommends that you use DHCP to manage
TCP/IP addressing in your network, regardless of whether you use PXE. Using DHCP
reduces the amount of time it takes to set up and manage computers.


ProLiant Integration Module


During RDP installation you can select which operating systems you want to build images of. The ProLiant Integration Module enables swift delivery of operating systems to the BL p-Class server.
ProLiant Integration Module provides the latest versions of:
SmartStart Scripting Toolkit
Optimized drivers, management agents, and utilities
Configuration jobs preloaded in the Deployment Server console
Adding the ProLiant Integration Module to a Deployment Server installs preconfigured jobs
and tasks to facilitate deploying, managing, and migrating ProLiant servers. Using SmartStart
technology, the supplied deployment jobs automate the process of server configuration and
software deployment.
The ProLiant Integration Module prepares the Deployment Server by:
Installing configuration files for the selected BL p-Class blades
Loading the selected operating system files, drivers, and ProLiant Support Pack (PSP) files
Populating a \deploy directory in the default Altiris eXpress share that contains:
Preconfigured scripts for hardware and array configuration
Template hardware and array configuration files
Template operating system installation scripts
SmartStart Scripting Toolkit
Note: The ProLiant Integration Module requires Deployment Solution 5.5 or later.


Preparing the deployment server


Windows Deployment Server
After inserting the RDP CD into the deployment server, install the following programs:
Deployment Solution
  Simple installation – The simple install method is recommended for first-time installations. This configuration method places the Altiris eXpress Deployment Server console, Microsoft Data Engine (MSDE) (unless SQL Server has already been installed on the system), File Server, and PXE server on the same machine.
  Custom installation – The custom installation method enables you to select if and where to install each component of the Deployment Solution.
ProLiant Integration Module for Deployment Server
Deployment Server for Linux Web Console
The Deployment Server for Linux Web Console provides the means to view and deploy servers
within your network.
Access the web console by using a browser (http://hostname:8080/webconsole), where
hostname is the host name of the Deployment Server or the static IP address of the Deployment
Server in the form of xxx.xxx.xxx.xxx.
Example
http://192.168.1.1:8080/webconsole


Deploying HP BladeSystem servers


Three fundamental steps are involved in deploying HP BladeSystem servers:
1. Configuring hardware – Before you can install an operating system or application software on the server, you must configure the hardware. RDP provides the tools and scripts to automate the server hardware configuration through the SmartStart Scripting Toolkit.
2. Installing the operating system – After the hardware is configured, you can install the operating system on the server, using either of the following methods:
  Scripted installation – Required for first-time installation
  Hard drive imaging – Used to replicate the configuration of one server on other servers
3. Installing applications – After the operating system installation is complete, you can install applications on the server using any of the following methods:
  Scripted installation
  RapidInstall Packages
  Imaged installation (included in the base image)


Preconfiguration steps
Before using RDP to deploy servers, you might need to make the following changes:
Configure PXE – Provides headless deployment by eliminating the need to select a menu
Create PXE boot images – Enables you to customize your environment
Remotely install deployment agents – Enables the management of existing Linux and Microsoft Windows systems
Enable Linux deployment – Creates a Network File System (NFS) share (see the sketch after this list)
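As an illustration of the last item, an NFS share is conventionally published through an entry in /etc/exports on the exporting file server. The sketch below is generic NFS configuration, not an RDP-specific procedure, and the path and client range shown are assumptions; consult the RDP installation guide for the exact share RDP expects:

# /etc/exports – export the Linux distribution files read-only
# (the path and client range below are examples only)
/usr/cds/linux 192.168.0.0/255.255.255.0(ro)

After editing /etc/exports, reload the export list (for example, with exportfs -a).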
To begin using RDP, connect the server blade enclosure to the network that contains the
deployment server and power on the enclosure. Then insert the server blades into the enclosure
and power them on. After the server blades PXE boot, they will display in the Deployment
Server console.
Important! If you plan to change the default rack and enclosure names, set these names before the first server in an enclosure connects to the deployment server. After the server blades are powered on for the first time and the rack and enclosure names are recorded in the Deployment Server database, the server blades must be rebooted for new rack and enclosure names to be discovered. For more information, refer to Configuring ProLiant BL Server Enclosures in the HP ProLiant Essentials Rapid Deployment Pack – Windows Edition Installation Guide.


Using scripting and imaging to deploy HP BladeSystem servers


RDP uses scripting and imaging to deploy servers:
Scripting – Scripting is the use of batch files, utilities, and configuration files to execute a sequence of commands without user interaction. The deployment scripts include hardware configuration and operating system installation and can include application installation and software configuration.
Imaging – Imaging, or disk cloning, is a way to deploy servers, operating systems, and applications by taking a snapshot of the entire hard drive, a portion of the hard drive, or a set of files. The snapshot is used to quickly replicate the exact configuration on another machine.
The benefit of imaging is speed.
Example
You can deploy Windows Server 2003 using an image in approximately seven to ten
minutes, whereas a scripted install can take more than an hour.
Imaging enables you to create identical duplicates of a server that is configured, patched,
and tested for functionality, performance, security, and conformity to specification. RDP
allows you to customize every image as it is deployed so that each server has an individual
server name, network settings, and user accounts.


Using scripting and imaging to deploy HP BladeSystem servers (continued)
Server blade deployment
You can deploy all of the server blades in an enclosure using a scripted install job. However, it
is faster to run the scripted install on the first blade, which is the reference server, then capture
and deploy the reference server image to all the other blades in the enclosure simultaneously.
Building the reference server – Scripting is required for the initial configuration of the reference server. To build a reference server, select a scripted install job for your specific server model and operating system from the Deployment Server console Jobs pane. After the first server blade has been deployed and configured, it becomes the reference server for subsequent server blades.
Capturing the reference server – Use the capture job to save the configuration of the reference server. From the Deployment Server console, select the Capture Hardware Configuration and Image job for the specific server model and operating system and drag this icon to the reference server.
Note: When performing an image capture and deployment, the hardware configuration of the target servers must be identical to the hardware configuration of the reference server.
Deploying other servers using the reference server image – Select all the server blades to be deployed and drag them to the corresponding Deploy Hardware Configuration and Image job for the server model and operating system.
Note: For Windows installations, the server name is the computer name indicated in the Deployment Server console. For Linux installations, the server name is the default host name – localhost for Red Hat Linux or SUSE Linux.


RDP and iLO


Lights-Out management enables you to manage remote servers and remote console operations
regardless of the state of the operating system or hardware.
Deployment Server provides plug-ins, connectivity, and access to the power management
features of iLO and the Remote Insight Lights-Out Edition (RILOE) to power on, power off, or
cycle power on the target computer. Each time a computer connects to the Deployment Server,
the server polls the computer to see if iLO or RILOE is installed. If either is installed, the
server gathers information including the Domain Name System (DNS) name, IP address, and first user name. Security is maintained by requiring the user to enter the correct password for that user name.
Deployment Server enables you to display the stored information for each computer by right-clicking the server, selecting Properties, then clicking the Lights-Out icon.
You can access the iLO or RILOE interface from the Deployment Server Console by right-clicking the server and selecting Power Control → RILOE-Interface. The iLO and RILOE
interfaces provide easy access to features such as remote console.
Using the Altiris Boot Disk Creator Utility, you can also create boot diskettes with the target
server configuration. You can then use these boot diskettes with the Virtual Floppy and Virtual
Media/USB tools to create a bootable image anywhere on the network. You can boot from
these virtual floppies and connect to the Deployment Server to complete the installation and
deployment process.


Rip and replace


The HP BladeSystem servers include a rule-based deployment feature that detects changes in
the physical locations of blades. This feature enables rapid serviceability when replacing a
failed server blade, a procedure called rip and replace.
The Deployment Server keeps track of the physical location of every ProLiant BL server and
can detect when a new server has been placed in a particular bay. The Change Rules feature
can be configured so that when a failed server blade is replaced, the Deployment Server
console automatically copies the job history of the failed server blade to the new server blade.
Important! The new server blade requires a new RDP license. The previous license cannot be
removed from the failed blade or transferred to the new server blade.


Learning check
1. In what type of environment is RDP for Linux an ideal solution?
a. Homogeneous Windows environments
b. Heterogeneous HP-UX, Windows, and Linux environments
c. Homogeneous Linux environments
d. Heterogeneous Windows and Linux environments
2. Name the three fundamental steps in deploying servers.
_________________________________________________________________________
_________________________________________________________________________
__________________________________________________________
3. The __________ __________ for Linux __________ __________ provides the means to
view and deploy servers within your network.
4. Explain the rip and replace process.
_________________________________________________________________________
_________________________________________________________________________
_________________________________________________________________________
_____________________________________________________
5. RDP integrates with __________ __________ __________ to resolve server failures.


Learning check (continued)


6. What method is recommended for first-time installations of RDP?
____________________________________________________________________
7. How do you access the iLO remote console?
_________________________________________________________________________
_________________________________________________________________________
__________________________________________________________


Preparing the Deployment Server


Module 5 Lab 1

Objectives
After completing this lab, you should be able to:

Install the Altiris Deployment Solution 6.1 on a deployment server running Microsoft Windows Server 2003

Install the HP ProLiant Integration Module for the Deployment Solution 1.60

Complete the HP ProLiant Essentials Rapid Deployment Pack (RDP) predeployment configuration:

Configure the Preboot eXecution Environment (PXE) to process new computers automatically.

Synchronize the console name with the Microsoft Windows name.

Preconfigure the Deployment Server agent.

Preconfigure the Insight Web Agent in the HP ProLiant Support Pack (PSP) for Windows.

Requirements
To complete this lab, you will need:

HP ProLiant server running Microsoft Windows Server 2003 Enterprise Edition to use as a deployment server. This server must have Active Directory, Dynamic Host Configuration Protocol (DHCP), and dynamic Domain Name System (DNS) installed and configured.

RDP Windows Edition Release 1.60 CD.

Microsoft Windows Server 2003 Enterprise Edition CD.

Either of the following media:

Microsoft Windows 95 or Windows 98 CD

Windows 95 or Windows 98 startup diskette


Exercise 1: Installing the Altiris Deployment Solution
The Deployment Solution enables you to perform an easy drag-and-drop server
deployment. The solution includes a multiple-server management console with an
intuitive graphical user interface (GUI). Server deployment can be completed
through scripting, imaging, or a combination of both.
The Deployment Solution installation consists of these steps:

Install Microsoft SQL Server Desktop Engine (MSDE) or Microsoft SQL Server 2000 Enterprise Edition (optional)

Disable Server Message Block (SMB) signing

Install the Deployment Solution

Installing SQL Server 2000 Enterprise Edition (optional)


To install SQL Server 2000 Enterprise Edition, autostart the SQL Server CD or
execute \setup.bat. Then select the following options:

Install Database Server

Default instance name

Typical setup

Use the same account for each service

Mixed mode security, with the sa password set to Altiris

Accept all other defaults on the screen. After the initial installation, apply the
Microsoft SQL Server 2000 Service Pack 3.


Disabling SMB signing


When running jobs on Microsoft Windows Server 2003, you must set the following SMB signing registry keys (DWORD values) to 0 before DOS-based deployment jobs such as Create Disk Image can execute:
HKLM\System\CurrentControlSet\Services\lanmanserver\
parameters\enablesecuritysignature
HKLM\System\CurrentControlSet\Services\lanmanserver\
parameters\requiresecuritysignature

WARNING
Incorrectly editing the registry may severely damage your system. At the
very least, you should back up any valued data on the computer before
making changes to the registry.

1. Click Start → Run and enter regedt32 in the text box.

2. In Registry Editor, navigate to the appropriate area of the registry and change the enablesecuritysignature and requiresecuritysignature settings to 0.

3. Close the Registry Editor and restart the server.


Note
Disabling SMB signing opens the possibility of man-in-the-middle attacks that could allow an attacker to interfere with the facility by which security settings are applied to Windows-based computers in a corporate network. This could allow the attacker to loosen settings on his or her own computer or impose tighter settings on another computer. Microsoft does not recommend disabling this setting.
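As an alternative to editing the values by hand in Registry Editor, the same change can be scripted with the built-in reg.exe tool available on Windows Server 2003. A minimal sketch, to be run from a command prompt before restarting the server:

rem Set both SMB signing values to 0 so DOS-based deployment jobs can run
reg add HKLM\System\CurrentControlSet\Services\lanmanserver\parameters /v enablesecuritysignature /t REG_DWORD /d 0 /f
reg add HKLM\System\CurrentControlSet\Services\lanmanserver\parameters /v requiresecuritysignature /t REG_DWORD /d 0 /f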


Installing the Deployment Solution


To install the Deployment Solution on a deployment server:

1. Insert the RDP 1.60 CD into the CD-ROM drive of the designated deployment server. If AutoRun is enabled, the Rapid Deployment Pack installation utility runs automatically. If AutoRun is not enabled, double-click the autorun.exe file in the CD root directory.

2. When the software license agreement displays, click Agree to continue with the installation.
3. Click Altiris Deployment Solution.

4. Click Step 1: Install Altiris Deployment Solution 6.1 SP1.


5. If you receive the following warning message, ensure that SMB signing is disabled by completing the Disabling SMB signing section. Click OK to continue.

6. If you do not have an existing local database, use the Local Computer Install Helper option to install MSDE 2000 and Service Pack 3a. At the Install Configuration screen, click Local Computer Install Helper → Install. When finished, reboot the server and repeat steps 1 through 5. Then continue with step 7.
If you have a local database, proceed to the next step.

7. At the Install Configuration screen, select the Simple Install option. Select the Include PXE Server check box and click Install.

Note
The Simple Install option installs all the deployment server components on a single machine.


8. When the Altiris Software License Agreement displays, review it and then click Yes to agree and continue the installation.

9. If the system has multiple active NICs, a pop-up dialog box displays asking you to specify which IP address to use. Select the NIC designated by your instructor and click Select IP.


10. At the Deployment Server Client Access Point Information screen, complete the fields as follows and click Next.

File server path – Accept the default path of C:\Program Files\Altiris\eXpress\Deployment Server. This is the default installation directory of Altiris eXpress.

Create eXpress share – Verify that this option is selected (default selection).

License – Select the Free 7 day license option, or select License file and browse to the desired license file.

Service username – Accept the default user name (ensure that it has administrator privileges). This name should be the administrator account name.

Service password – Enter the administrator account password for your deployment server. Obtain this password from your instructor.


11. At the Installation Information screen, click Install. The Deployment Solution
installation begins. During the installation, you are prompted for either a CD
or diskette from which to extract several DOS files. This step is required so
that the Altiris Boot Disk Creator utility will have the necessary files to make
DOS boot diskettes.
12. At the Configure Boot Disk Creator screen, select Use Windows 95/98 original CD-ROM, and then click Next → Ignore.


13. At the next screen, browse to the C:\classfiles\win98 directory (this directory
contains a copy of a Microsoft Windows 98 CD), and then click OK.

Note
If you do not have the original Windows 95/98 CD, you can use a CD from the
Microsoft Developers Network package or a Windows 95/98 boot diskette.

14. Click Finish. The Boot Disk Creator copies the appropriate DOS files, and
the installation continues.


15. When the final screen prompts you to install clients remotely or to download
Adobe Acrobat, do not select either option. Click Finish.


16. Restart the deployment server.

17. Click Start → Settings → Control Panel → Administrative Tools → Computer Management → Services and Applications → Services and verify that all of the Altiris services are running.


Exercise 2: Installing the HP ProLiant Integration Module for the Altiris Deployment Solution
The ProLiant Integration Module provides the latest versions of the HP SmartStart
Scripting Toolkit, drivers and management agents optimized for HP servers, and
deployment jobs and scripts supplied by HP.

Installing the ProLiant Integration Module


To install the HP ProLiant Integration Module for the Deployment Solution,
follow these steps:

1. Insert the RDP 1.60 CD into the CD-ROM drive of the designated deployment server. If AutoRun is enabled, the Rapid Deployment Pack installation utility runs automatically. If AutoRun is not enabled, double-click the autorun.exe file in the CD root directory.

2. When the software license agreement displays, click Agree to continue with the installation.

3. Click HP ProLiant Integration Module for Deployment Server → Step 2: Install HP ProLiant Integration Module for Deployment Server 1.60.


4. The software license agreement for the ProLiant Integration Module displays. After reading the license agreement, select the check box to accept the agreement and click Next.


5. The Job Selection screen displays, enabling you to select which configuration jobs you want to install. Select ProLiant BL20p Scripted Install for Microsoft Windows 2000 and ProLiant BL20p Scripted Install for Microsoft Windows 2003. If using blade servers other than the ProLiant BL20p, change your selection appropriately. Click Next to continue.

Note
The Windows and Linux scripted install jobs are not selected by default, but the SmartStart Toolkit and OS Imaging Events and SmartStart Toolkit Hardware Configuration Events jobs are selected.


6. When the Installation screen displays, click Next to start the installation.


7. The installer copies all the necessary files from the RDP 1.60 CD and prompts you for the appropriate Windows operating system CDs, which will be provided by your instructor. Insert the appropriate operating system CD and click Next to start copying the files.


8. After all of the operating system installation files have been copied, the Finish screen displays. You will perform some of the configuration steps listed on the screen in the next exercise. Click Finish to close the screen.

9. Close the Rapid Deployment Pack screen.


Exercise 3: Configuring the Rapid Deployment Pack predeployment environment
By default, a computer that is unknown to the deployment server performs the following steps:
1. Downloads the initial deployment PXE boot image.
2. Establishes a network connection to the deployment server.
3. Loads the deployment agent for DOS.
4. Completes optional processing.
If the deployment server already recognizes a given computer, and that computer has no assigned tasks, the deployment server allows the computer to bypass PXE and perform a local boot.
If the computer has a configuration task assigned, it performs these tasks:

1. Goes through the Managed Computer menu items and downloads a PXE boot image.
2. Establishes a connection to the deployment server.
3. Processes the tasks assigned.


Configuring PXE to process new computers automatically


By default, the correct menu choice for the user is automatically selected in the menu displayed on the server booting with PXE, but an initial deployment requires you to press the Enter key to confirm the choice. This confirmation prevents destructive events from running on a computer that the deployment server is unaware of.
The default action for servers is to wait for the Deployment Server Console administrator to assign the computer a task. Therefore, it is not necessary to warn the user about potentially data-destructive operations.
To configure the PXE server to choose the initial deployment menu item
automatically and continue without user interaction:
1. Click Start → All Programs → Altiris → PXE Services → PXE Configuration Utility.

2. Select Altiris BootWorks (Initial Deployment).

Important
Do not rearrange the order of the menu items. Changing the menu item order can cause your target servers to fail to boot to local hard drives.


3. Click Edit to display the Menu Item Properties screen.

4. Select the Execute Immediately option to eliminate wait time and click OK → OK to close both windows. The initial deployment will now run automatically for every server not in the database.


Synchronizing the console and Windows names and changing the primary lookup key
The deployment server enables you to specify an alias, which is a name that is
displayed in the Deployment Server Console and is different from the one used by
the operating system or NetBIOS. However, you can change a setting to ensure
that the Deployment Server Console always displays the name used by the
operating system.
The deployment server uses the Media Access Control (MAC) address of the NIC
as the primary lookup key, that is, the primary means of identifying the computer.
If you change the NIC in a computer, the deployment server treats it as a new
computer.
By associating this primary lookup key with the serial number, you need to know
only the serial number, not the MAC address, when you import new computers.
To modify these settings:
1. Double-click the Deployment Server Console icon on the desktop.

2. Close the Getting Started screen.

Note
To prevent the Getting Started screen from displaying every time the Deployment Server Console is started, select the Don't ask me again box.

3. At the Deployment Server Console screen, select Tools → Options. The Program Options screen displays.

4. Click the Global tab.

5. Select the Synchronize display names with Windows computer names option.


6. Change the primary lookup key to Serial Number (SMBIOS) and click OK.

7. When prompted to restart the control servers, click Yes.


Preconfiguring the Deployment Server Agent for Windows


The Deployment Server Agent for Windows (AClient) is a small utility that runs
as a service on a server, enabling that server to be managed by the deployment
server. By installing AClient, you can:

Redeploy or manage existing computers in your infrastructure

Perform pre- and post-imaging configuration and software installation

If you do not install AClient as part of a deployment process, you lose these
capabilities. Scripted installation jobs provided by RDP install the agent by default
on deployed servers. By preconfiguring the default settings, all agents installed as
part of the provided Windows scripted install jobs have consistent settings.
Note
HP recommends that you leave the agent on the system after initial installation.

The deployment server enables you to install AClient remotely on Windows systems provided that you have the appropriate administrative permissions on that system. You can perform this installation in batch mode, installing the agent to many systems at once. By using batch mode, you do not have to visit each system to manually install the agent from a diskette.
The provided Windows scripted install jobs use the aclient.inp file in the
deployment server root directory for agent settings. These settings are independent
of the Remote Client Installer settings that are established from Tools → Options → Agent Settings.
Note
By default, the Deployment Server root directory is
C:\Program files\Altiris\eXpress\Deployment Server.


To configure AClient:
1. From a text editor, open the aclient.inp file in the deployment server root directory.

2. Verify that the static IP address listed in the TcpAddr= line is the IP address of your deployment server.

3. To force applications to close when the server needs to restart, ensuring that no jobs fail if the server must be restarted, change the line:
; ForceReboot=No
to:
ForceReboot=Yes


4. Modify the BootWorks disk prompting behavior by changing the line:
; BootDiskMessageUsage=4
to:
BootDiskMessageUsage=0

If boot diskettes are used instead of PXE and a configuration task is issued to
a computer when no diskette is in the diskette drive, a prompt instructs you to
insert a diskette. If this occurs when you are not logged in to the server, you
must log in and close the prompt before the job can continue. By selecting to
never be prompted for a boot diskette, the server restarts to the normal
operating system if a boot diskette is not inserted in the server when required.
5. Select the option to synchronize the target server time with the deployment server time by changing the line:
; SyncTimeWithServer=No
to:
SyncTimeWithServer=Yes

6. Save the file and close the text editor.
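After steps 2 through 5, the relevant lines of aclient.inp should read as follows. The TcpAddr value shown is only an example; keep whatever address identifies your own deployment server:

TcpAddr=192.168.0.1
ForceReboot=Yes
BootDiskMessageUsage=0
SyncTimeWithServer=Yes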


Preconfiguring the HP Insight Web Agent in the PSP for Windows


The HP Insight Web Agent requires that a password be configured in the Smart
Components before installation. The Web Agent is used by several other utilities
in the PSP. Without the password, these other utilities install but do not function
correctly and are not accessible.

Important
The PSPs must reside on writable media so that you can configure the Smart
Components in the PSP before PSP deployment. You cannot configure the
PSPs from the CD-ROM drive.
The components in the PSP only need to be configured once. You do not need
to configure the components each time they are deployed. After a PSP is
configured, it is ready for deployment.

To configure the Web Agent (and other Smart Components) in the PSP for deployment:
1. Open Microsoft Windows Explorer and browse to the following directory:
C:\Program Files\Altiris\eXpress\Deployment Server\Deploy\cds\compaq\ss.xxx\wnet\csp
where xxx is the version of SmartStart you are using. Double-click setup.exe.


2. Click OK at the HP Remote Deployment Utility warning message.

3. Expand the All Configurable Components directory in the tree in the left pane.

4. Right-click HP Insight Management Agents for Windows and select Configure. The Item Configuration screen displays.


5. Scroll down to modify the administrator password. Enter password in the Password field, and enter password again in the Confirm field.

6. In the HP Systems Insight Manager Trust Relationship section farther down on the screen, select Trust All from the Select Trust Mode drop-down menu and click Save at the top of the window. The Web Agents are now configured.

7. Close the HP Remote Deployment Utility screen.


Creating a Windows Server 2003 Reference Server
Module 5 Lab 2

Objectives
After completing this lab, you should be able to:

    Connect the server blade to an HP StorageWorks Modular Smart Array 1000
    (MSA1000)

    Deploy a scripted Microsoft Windows Server 2003 installation to a Preboot
    eXecution Environment (PXE)-enabled server blade

    Install the Altiris eXpress Deployment Server Agent on the reference server
    blade

    Remotely access the server blade

    Capture the reference server blade hardware configuration and disk image

Requirements
To complete this lab, you will need:

    One HP ProLiant deployment server running:

        Microsoft Windows Server 2003 Enterprise Edition with Active
        Directory, Dynamic Host Configuration Protocol (DHCP), and dynamic
        Domain Name System (DNS)

        Microsoft SQL Server Desktop Engine (MSDE) 2000 or Microsoft SQL
        Server 2000, either with Service Pack 3 or later

        HP ProLiant Essentials Rapid Deployment Pack (RDP) 1.60

    One ProLiant BL p-Class server blade enclosure with supported interconnect
    option

    One or more ProLiant server blades such as the ProLiant BL20p Generation 2
    (G2)

Optionally, to deploy a server blade connected to the MSA1000, you will also
need:

    One or more ProLiant server blades with SAN support, such as the ProLiant
    BL20p G2 with Fibre Channel Mezzanine Card

    One MSA1000 with two or more hot-pluggable drives

    An RJ-45 Patch Panel 2 or GbE2 Interconnect Switch with the GbE2 Storage
    Connectivity Kit

    One Lucent Connector (LC-to-LC) fiber cable for connecting the server blade
    to the MSA1000


Introduction
RDP 1.60 simplifies the installation of server blades connected to an HP SAN
solution such as the MSA1000 because the appropriate Storage Area Network
(SAN) support software is now installed as part of the scripted Windows Server
2003 installation. Previous versions of RDP required several manual steps to
configure the deployment server with the correct MSA1000 drivers.
The scripted Windows Server 2003 installation to a server blade connected to an
MSA1000 is no different than an installation to a server blade with only internal
drives. When the Windows Server 2003 installation completes, all you need to do
is connect to the server blade and use the HP Array Configuration Utility to
configure the MSA1000 as instructed.


Exercise 1: Connecting the server blade to an MSA1000 (optional)

The remaining lab exercises assume that if an MSA1000 is used, it is properly
cabled and powered on. Consult with your instructor about which hardware
installation steps, if any, need to be completed:

1.  Prepare the server blade by installing the Fibre Channel mezzanine card.

2.  Prepare the MSA1000 by installing the drives.

3.  Prepare the RJ-45 Patch Panel 2 or GbE2 Interconnect Switch with the GbE2
    Storage Connectivity Kit.

4.  Properly cable the components.

5.  Power on the MSA1000.


Exercise 2: Deploying a scripted Windows Server 2003 installation to a PXE-enabled server blade

The scripted Windows Server 2003 installation to a PXE-enabled server blade
consists of these tasks:

    Customizing the unattended installation file

    Customizing the Altiris client installation file

    Deploying Windows Server 2003 to the target server blade

Customizing the unattended installation file

The first step in deploying a scripted installation is to ensure that the configuration
is customized to your environment. To modify the unattend.txt file for your
environment:

1.  Open Microsoft Windows Explorer on the deployment server and browse to
    the C:\Program Files\Altiris\eXpress\Deployment Server\Deploy\configs
    directory.

2.  Locate the wnet.txt file and open it using a text editor. This is the unattend.txt
    file used during the scripted installation.

3.  In the [UserData] section of the file, change the ComputerName setting to:

        ComputerName=refsrv

4.  If not using the SELECT version of the operating system, add the following
    line to the [UserData] section:

        ProductID=XXXXX-XXXXX-XXXXX-XXXXX-XXXXX

    where XXXX... is the actual product ID of the operating system.

    Note
    The SELECT version of the operating system does not require a product ID.
    The operating system product ID, if necessary, will be provided by your
    instructor.

5.  In the [Identification] section, change the JoinWorkgroup=WORKGROUP line
    to:

        JoinWorkgroup=WORKGROUP1

    Or change the JoinWorkgroup= line to JoinDomain=class and add
    these two entries:

        DomainAdmin=Administrator
        DomainAdminPassword=password

    Note
    To have the server join an existing domain, use the JoinDomain= line. Ensure
    that the domain exists; otherwise, the server will default to a workgroup named
    WORKGROUP. To have the server become part of a workgroup, use the
    JoinWorkgroup= line.

6.  Verify that the AdminPassword= line is set to password. This parameter sets
    the administrator account password. Change it if necessary.

7.  Save the file. You will use this file during the scripted installation.
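For reference, the edited portions of wnet.txt might then look like the following
sketch (the AdminPassword= entry normally lives in the [GuiUnattended] section
of a Windows unattend file; exact section contents vary by file, and the ProductID=
line is needed only for non-SELECT media):

    [UserData]
    ComputerName=refsrv
    ProductID=XXXXX-XXXXX-XXXXX-XXXXX-XXXXX

    [GuiUnattended]
    AdminPassword=password

    [Identification]
    JoinDomain=class
    DomainAdmin=Administrator
    DomainAdminPassword=password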

Customizing the Altiris client installation file

After the operating system installation is complete, the ProLiant BL Scripted
Install for Windows 2003 job will install the ProLiant Support Packs (PSPs) and
the Altiris deployment agent (ACLIENT.EXE). Because there might be multiple
deployment servers in the classroom environment, the Altiris deployment agent
must be configured to attach only to your deployment server.

Note
Before proceeding, verify the IP addressing scheme with your instructor.

To configure the deployment agent, follow these steps:

1.  Open Windows Explorer on the deployment server and browse to the
    C:\Program Files\Altiris\eXpress\Deployment Server\aclient.inp file. Open
    this file using Notepad. This file is used to configure the Altiris deployment
    agent automatically.

2.  Remove the semicolons (remark indicators) from the following two lines:

        TcpAddr=10.10.1.1
        TcpPort=402

3.  If necessary, replace the IP address in the TcpAddr= line with the IP address
    of your deployment server.

4.  Enable remote control of deployed servers by setting the
    AllowRemoteControl option to Yes.

5.  Save the changes to the file and exit Notepad.
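The uncommented lines in aclient.inp should then read similar to this excerpt
(substitute the address of your own deployment server):

    TcpAddr=10.10.1.1
    TcpPort=402
    AllowRemoteControl=Yes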


Deploying Windows Server 2003 to the target server blade

To perform an unattended Windows Server 2003 scripted installation:

1.  At the deployment server, start the Deployment Server Console.

2.  In the Jobs pane, expand the Microsoft Windows 2003 Scripted Install Events
    folder.

3.  Double-click the ProLiant BL20p Scripted Install for Microsoft Windows
    2003 event.

    Note
    If using a server blade other than the ProLiant BL20p, adjust your selections
    accordingly.

4.  Click the individual tasks within the job to view details about each task.
    Double-click the first Run Script task.

5.  After you have finished browsing, click Cancel twice to return to the
    Deployment Server Console.

6.  To begin the deployment process, power on the target server blade. Start a
    browser session, connect to the iLO of the target server, and use the Remote
    Console capability to view the server deployment progress.

    Note
    Depending on the previous state of the server, you may have to press F12
    during the boot sequence for a PXE boot.

7.  Confirm that the new computer is listed in the New Computers folder of the
    Deployment Server Console.

8.  Move the ProLiant BL20p Scripted Install for Microsoft Windows 2003 job
    to the target server.

9.  The Schedule Job screen automatically displays. Select Run this job
    immediately, and click OK to start the scripted installation on your target
    server.

10. In the confirmation dialog box, click Yes to perform the scripted install.

    Note
    To bypass this step in the future, select the Don't prompt me again box.

    The scripted installation of Windows Server 2003, including the installation of
    the PSP, continues unattended and takes approximately 60 minutes to complete.

11. After the scripted installation completes, ensure that the server is correctly
    added to the Active Directory domain.

12. Ensure that the server is correctly added to the DNS forward lookup zone.

Exercise 3: Installing the Altiris eXpress Deployment Server Agent on the reference server (optional)
The Altiris eXpress Deployment Server Agent for Windows (ACLIENT.EXE) is a
small utility that runs as a service on a computer, enabling it to communicate with
the deployment server and process commands sent to it by the server. By installing
the deployment agent, you can redeploy or manage existing computers in your
infrastructure as well as perform pre- and post-imaging configuration and
application installation.
If you do not install the deployment agent as part of the deployment process, you
lose these capabilities. Scripted installation jobs provided by the RDP install the
deployment agent by default on deployed servers. HP recommends that you leave
the agent on the system after initial installation.
Altiris enables you to install the deployment agent remotely on Microsoft
Windows systems provided that you have the appropriate administrative
permissions on that system. You can even perform the installation in batch mode,
installing the agent to many systems simultaneously. With batch mode, you do not
have to visit each system to install the agent manually from a diskette.
To manually install the deployment agent:

1.  If it is not already running, open the Deployment Server Console on the
    deployment server by double-clicking the icon on the desktop.

2.  From the Tools menu, select Remote Agent Installer.


3.  At the Remote Agent Install screen, select Let me specify a username and
    password for each machine as it's installed and click Next.

4.  Select Enable this agent to use SIDgen and/or Microsoft Sysprep and click
    Change Settings to open the Default Agent Settings screen.

5.  At the Default Agent Settings screen, click the Transport tab.

6.  Select Use TCP/IP to connect to a Deployment Server, and enter the IP
    address of your deployment server. Verify that the port defaults to 402 and
    click OK → Next.

7.  At the next screen, select Use only Altiris SIDgen utility from the drop-down
    menu.


8.  Select the Update file system permissions when changing SIDs check box and
    click Next.

9.  At the next screen, click Add.

10. At the Browse computers screen, select a computer from the list, or enter the
    name or IP address of the target server on which to install the agent, and
    click OK.

11. Click the server you just added, and then click Properties.

12. At the Agent Properties screen, enter the user name (Administrator) and
    password (password) for the administrator and click OK.

13. Click Finish. The agent installs on the remote client. When the Installing
    Clients screen shows the All clients installed successfully status, click Exit
    Install to return to the Deployment Server Console.

Exercise 4: Remotely accessing the server blade

There are a number of ways to remotely access a server blade running Microsoft
Windows Server 2003. Three such methods are explained in this exercise:

    HP integrated Lights-Out (iLO) Remote Console

    Remote Desktop for Administration

    Deployment Server Console Remote Control

Choose your preferred method and complete the appropriate configuration steps.

iLO Remote Console

The Remote Console capability allows you to securely view and manage a server
with integrated Lights-Out. You can view the server console in both text and
graphics modes, and use the keyboard and mouse, as if you were standing in front
of the remote server. iLO Remote Console allows you to view the server
operations occurring before the operating system is loaded, such as ROM-Based
Setup Utility (RBSU) configuration and boot sequence.

To access the iLO Remote Console:

1.  Start a supported web browser and connect to iLO.

2.  At the Account Login screen, log in using the appropriate login credentials.

3.  At the iLO Status Summary screen, click Remote Console → Remote
    Console or Remote Console → Remote Console (dual cursor). The dual
    cursor option allows you to align the local and remote mouse cursors.


The iLO Remote Console opens in a new browser window.


Remote Desktop for Administration

Remote Desktop for Administration provides remote server management
capabilities for Windows Server 2003. Using this feature, you can administer a
server from virtually any computer on your network. No license is required for up
to two simultaneous remote connections in addition to the server console session.

To enable Remote Desktop for Administration at the target server, use the iLO
Remote Console to complete these steps:

1.  Click Start → All Programs → Control Panel → System → Remote.

2.  At the System Properties window, click the Remote tab, select the Allow
    users to connect remotely to this computer option, and click OK. At the
    Remote Sessions popup screen, click OK.

3.  Click OK to close the System Properties window.

To connect to the target server from a management server:

1.  At the deployment server, click Start → All Programs → Administrative
    Tools → Remote Desktops.

2.  At the Remote Desktops screen, right-click Remote Desktops and select Add
    new connection.

3.  At the Add New Connection window, enter the required information and
    click OK.

4.  Right-click the new connection icon and select Connect. To disconnect from
    the target server, right-click the connection icon and select Disconnect.

Deployment Server Console Remote Control

The Deployment Server Console Remote Control option is a computer
management feature built into the Deployment Server Console. It allows you to
control all types of computers to view problems or make immediate changes as if
you were sitting at the managed computer screen and using its keyboard and
mouse.

When a managed computer is being remotely controlled, the Deployment Agent
icon in the system tray of the managed computer flashes alternate icons.

Remote Control also provides Chat, Copy File to, and CTRL+ALT+DELETE
features to assist in administering managed computers from the console.

Before you can remotely control a managed computer:

    The managed computer must have the Altiris Agent for Windows installed
    and properly set up.

    The client (the management PC) must have the appropriate Remote Control
    option selected in Altiris client properties. This option is not selected by
    default.

    The managed computer and Deployment Server Console must be able to
    communicate with each other through TCP/IP.

To install the Altiris Agent for Windows, complete Exercise 3.

To configure the managed computer for remote control, start the Deployment
Server Console and complete these steps:

Note
You can also enable remote control of deployed servers by setting the
AllowRemoteControl option to Yes in the C:\Program
Files\Altiris\eXpress\Deployment Server\aclient.inp file. If you do so, the
following steps are unnecessary.

1.  At the Deployment Server Console screen, right-click the appropriate
    computer name in the Computers pane and select Change Agent Settings →
    Windows/Linux Agent.

2.  At the Windows/Linux Agent Settings screen, click the Remote Control tab,
    select the Allow this computer to be remote controlled option, and click OK.

To remotely control a managed computer:

1.  Right-click the appropriate computer name in the Computers pane and select
    Remote Control.

2.  The Remote Control window for the selected server displays. To close the
    remote session, click Control → Close Window.

Exercise 5: Capturing the reference server blade image

After you have completed a scripted installation of Microsoft Windows Server
2003, your server is ready. As part of the scripted installation process, the Altiris
Deployment Agent for Windows was added to the server. This server can now
become your reference server for future deployments.

To capture the hardware configuration and disk image from your reference server:

1.  In the Jobs pane of the Deployment Server Console, expand the SmartStart
    Toolkit and OS Imaging Events folder.

2.  Double-click the Capture Hardware Configuration and Windows Image job.

3.  In the Job Properties screen, double-click the Run Script task. The Run Script
    screen displays.

4.  In the Script Information screen, change the default names of the hardware
    information and array information files that will be captured and click Finish.

    Change the wincap-h.ini file to rsw2k3-h.ini

    Change the wincap-a.ini file to rsw2k3-a.ini

5.  In the Job Properties screen, double-click the Create Image task to open the
    Save Disk Image to a File screen.


6.  Change the default image file name wincap.img to rsw2k3.img.

7.  Click Advanced to view the optional settings for imaging. For example, you
    can change the maximum file size and compression ratio. For this exercise,
    do not change the settings; click OK to return to the Save Disk Image to a
    File screen.

8.  Click Finish → OK to close the Job Properties screen.

9.  In the Deployment Server Console screen, move the modified Capture
    Hardware Configuration and Windows Image job to the appropriate server
    icon in the Computers pane.

10. The Schedule Job screen displays automatically. Select Run this job
    immediately in the Schedule Computers for Job screen, and click OK. The
    reference server restarts and processes the job.

11. The imaging process should take less than 10 minutes. When completed,
    verify that the following files were created successfully:

    C:\Program Files\Altiris\eXpress\Deployment Server\Images\rsw2k3.img

    C:\Program Files\Altiris\eXpress\Deployment Server\Deploy\configs\rsw2k3-h.ini

    C:\Program Files\Altiris\eXpress\Deployment Server\Deploy\configs\rsw2k3-a.ini

Creating a Red Hat Enterprise Linux AS 3 Reference Server
Module 5 Lab 3

Objectives
After completing this lab, you should be able to:

Add Linux jobs to the Deployment Server Console

Erase the target server

Deploy a scripted Linux installation

Configure the HP StorageWorks Modular Smart Array 1000 (MSA1000)

Capture the reference server blade hardware configuration and disk image

Requirements
To complete this lab, you will need:

    One HP ProLiant deployment server running:

        Microsoft Windows Server 2003 Enterprise Edition with Active
        Directory, Dynamic Host Configuration Protocol (DHCP), and dynamic
        Domain Name System (DNS)

        Microsoft SQL Server Desktop Engine (MSDE) 2000 or Microsoft SQL
        Server 2000, either with Service Pack 3 or later

        HP ProLiant Essentials Rapid Deployment Pack (RDP) 1.60

    One ProLiant BL p-Class server blade enclosure with supported interconnect
    option

    One or more ProLiant server blades such as the ProLiant BL20p G2

    One HP ProLiant Essentials Rapid Deployment Pack Windows Edition
    Release 1.60 CD

    Red Hat Enterprise Linux AS 3 (RHEL AS 3) Update 2 CDs (total of four
    CDs)

    A Red Hat Network File System (NFS) server on the network (or, you may
    use Microsoft Windows Services for UNIX instead of a Red Hat NFS server)
    with copies of the RHEL AS 3 CDs

Optionally, to deploy a server blade connected to the MSA1000, you will also
need:

    One or more ProLiant server blades with Storage Area Network (SAN)
    support, such as the ProLiant BL20p G2 with Fibre Channel Mezzanine Card

    One MSA1000 with two or more hot-pluggable drives and no logical drives
    defined

    An RJ-45 Patch Panel 2 or GbE2 Interconnect Switch with the GbE2 Storage
    Connectivity Kit

    One Lucent Connector (LC-to-LC) fiber cable for connecting the server blade
    to the MSA1000


Introduction
RDP 1.60 simplifies the installation of server blades connected to an HP SAN
solution such as the MSA1000 because the appropriate SAN support software is
now installed as part of the scripted Red Hat Enterprise Linux AS 3 installation.
Previous versions of RDP required several manual steps to configure the
deployment server with the correct MSA1000 drivers.
The scripted RHEL AS 3 installation to a server blade connected to an MSA1000
is no different than an installation to a server blade with only internal drives. When
the operating system installation completes, all you need to do is connect to the
server blade and use the HP Array Configuration Utility (ACU) to configure the
MSA1000 as instructed.
Three major actions must be performed to deploy Linux using RDP 1.60:

1.  Add Linux jobs to the Deployment Server Console for the appropriate server
    family.

2.  Erase the target server.

3.  Deploy the operating system using one of the predefined RDP jobs.

After the operating system is deployed, configure the MSA1000 (if one exists) and
install server applications. After the reference server is configured as desired,
capture the hardware configuration and disk image for future deployments of like
servers.


Contents of the Red Hat Linux scripted install job

The ProLiant BL20p Scripted Install for Red Hat Linux AS 3 job contains these
tasks:

1.  Run Script (Set Hardware Configuration): Performs the following:

        Sets the operating system configuration file (set osfile=linux-h.ini).

        Sets the hardware configuration file (set hwrfile=bl20p-h.ini).

        Sets the array configuration file (set aryfile=bl20p-a.ini).

        Calls the f:\deploy\tools\scripts\setcfg.bat file, which
        applies the hardware and array configuration settings.

2.  Power Management (Reboot): Restarts the server.

3.  Run Script (embedded): Performs the following:

        Sets the partition configuration file (set prtfile=linux-p.ini).

        Calls the f:\deploy\tools\scripts\setpart.bat file, which
        applies the disk partition configuration settings.

4.  Power Management (Reboot): Restarts the server.

5.  Run Script (Install OS): Performs the following:

        Sets the NFS server IP address (set nfsserver=192.168.0.1).

        Sets the HP SmartStart version (set ss=ss.710).

        Sets the operating system version (set os=rhas3).

        Sets the kickstart file (set ksfile=bl20p.ks.cfg).

        Calls the f:\deploy\tools\scripts\rhas3.bat file, which
        prepares the target server for a Red Hat Linux scripted installation. First,
        it formats the hard drive. Next, it copies the Red Hat Linux boot files to
        the target server. And finally, it creates an autoexec.bat file.

Upon rebooting, the target server boots to the C drive and runs the autoexec.bat
file, which loads the Linux setup kernel. This reboot begins the Linux NFS-based
scripted installation.
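Taken together, the settings in the Install OS task behave like the following batch
fragment (a sketch assembled from the settings listed above; the actual task script
contains additional logic around these lines):

    set nfsserver=192.168.0.1
    set ss=ss.710
    set os=rhas3
    set ksfile=bl20p.ks.cfg
    rem Prepares the target: formats the drive, copies boot files, creates autoexec.bat
    call f:\deploy\tools\scripts\rhas3.bat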

Default hardware configuration

Hardware configuration is accomplished by means of automatic smart defaults
provided by the SmartStart Scripting Toolkit utilities. The BIOS is configured to
accept default parameters, and the array controller (if any) is configured according
to the defaults listed in the following table.

    If the array controller has:    It is configured for:
    One drive                       RAID 0
    Two drives                      RAID 1
    Three drives                    RAID 5
    Four or more drives             RAID Advanced Data Guarding (ADG)
                                    (if supported); otherwise, RAID 5

Default Red Hat installation settings

The provided deployment jobs specify certain default configuration parameters. To
deploy servers with specific configuration settings, you must modify the scripted
install job or underlying files as necessary. The following table contains default
settings for Red Hat Enterprise Linux AS 3.

Component                   Default setting

Linux root password         The root password for servers created with the
                            provided scripts is password. This password is stored
                            as clear text in the kickstart file. HP recommends
                            that you change the root password to your own password
                            and in encrypted form within the kickstart file. For
                            instructions, refer to the Red Hat Linux Customization
                            Guide located at:
                            http://www.redhat.com/docs/manuals/linux

Drive configuration         When configuring the disk partition for a scripted
                            operating system installation, a 75MB boot partition
                            is created. The remainder of the disk space is then
                            partitioned according to Linux default specifications.

Packages                    Basic Linux packages are installed during a scripted
                            operating system installation. The GNOME and KDE
                            packages are not installed automatically.

Firewall                    Firewall settings are disabled.

ProLiant Support            HP installs the latest support pack drivers and agents.
Pack files                  The default Linux Web Agent password is password. This
                            password is stored as clear text in the input file,
                            linuxpsp.txt, located on the NFS server.

Many of these settings are contained within the appropriate kickstart file; for
example, bl20p.ks.cfg is provided for the ProLiant BL20p server blades.
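As an illustration, the kickstart directives that carry these defaults might look like
the following sketch (directive spellings follow standard Red Hat kickstart syntax;
the actual bl20p.ks.cfg contains many additional entries):

    # Root password; clear text by default, so change it and use an encrypted form
    rootpw password
    # Firewall settings are disabled
    firewall --disabled
    # 75MB boot partition; remaining space follows Linux defaults
    part /boot --size=75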


Exercise 1: Adding Linux jobs to the Deployment Server Console

Because no Linux jobs were selected during the initial installation of the
Deployment Server Console, you must add them at this time. To add Linux jobs to
the Deployment Server Console:

1.  Insert the RDP 1.60 CD into the CD-ROM drive of your deployment server.
    If autorun is enabled, the RDP installation utility runs automatically. If
    autorun is disabled, double-click the autorun.exe file in the CD-ROM root
    directory.

2.  When the software license agreement displays, click Agree to continue with
    the installation.

3.  Select the HP ProLiant Integration Module for Deployment Server 1.60
    option.


4.  At the Welcome screen, select I agree to all the terms of the preceding
    License Agreement and then click Next.

5.  At the Job Selection screen, select the appropriate Linux jobs for the version
    of the operating system you are going to install. The Windows and Linux
    scripted jobs are cleared by default.

6.  Scroll down and clear the SmartStart Toolkit and OS Imaging Events and the
    SmartStart Toolkit Hardware Configuration Events. These jobs were
    imported during the initial RDP installation. Click Next to continue.

7.  At the Installation screen, click Next.


8.  At the Confirm File Replace screen, click No. Answer No to all subsequent
    file and folder replacement confirmation screens.

9.  At the OS Distribution Copying screen, insert the Red Hat Enterprise Linux
    AS 3 Update 2 CD #1 into the CD-ROM drive and click Next.

10. At the Finish screen, click Finish.

11. Close the Rapid Deployment Pack window.

Exercise 2: Erasing the target server

If the server being deployed has been preconfigured, run the Erase Hardware
Configuration and Disks job before continuing with this exercise.

Also, if necessary, delete the target server from the deployment server database.
Open the Deployment Server Console and delete the target server entry from the
Computers pane.

Exercise 3: Deploying a scripted Linux installation

Deploying a scripted Linux installation consists of two basic steps:

    Customizing the kickstart file

    Deploying the operating system

Customizing the kickstart file

The default kickstart file that is used for the Red Hat Linux deployment contains
only basic configuration parameters and installs only basic Linux packages.
Further customization of the kickstart file may be necessary.

Locating and editing the Linux kickstart files

If using a Red Hat NFS server:

1.  At the NFS server, click Start → Programs → Applications → Nautilus.

2.  At the Nautilus screen, click the Tree tab and click the
    /usr/cpqrdp/ss.710/rhas3/ folder.

3.  Right-click the appropriate xx.ks.cfg file, where xx refers to a ProLiant server
    (for example, bl20p.ks.cfg), and open it with a Linux text editor.

If using Microsoft Windows Services for UNIX:

1.  At the Microsoft Windows Services for UNIX server, navigate to the
    C:\Program Files\Altiris\eXpress\Deployment
    Server\Deploy\CDS\Linux\cpqrdp\ss.710\rhas3 folder.

2.  Right-click the xx.ks.cfg file, where xx refers to a ProLiant server (for
    example, bl20p.ks.cfg), and open it with a Linux-compatible text editor such
    as AB-Edit or TextEdit. Do not use a Windows-based editor such as Notepad
    because Notepad terminates each line with CR+LF (Carriage Return + Line
    Feed); Linux editors use LF alone.


Installing the GUI for Red Hat Linux

By default, the graphical packages (GNOME or KDE) are not installed
automatically. If you want the graphical user interface (GUI) installed for Red Hat
Linux, edit the appropriate kickstart file to make these modifications:

1.  Locate the # PACKAGES section that contains:

        #@ GNOME
        #@ KDE

2.  To select a graphical package, remove the # (comment mark) from the line
    that corresponds to the graphical package you prefer and save your changes.

Installing the graphical Internet toolset

By default, the graphical Internet toolset, including the Mozilla web browser, is
not installed automatically. You will need Mozilla to use the ACU. To have the
graphical Internet toolset installed automatically, edit the appropriate kickstart file
to make these modifications:

1.  Locate the # PACKAGES section.

2.  Add the following line:

        @ graphical-internet

Modifying the network settings

To modify network settings such as the host name, edit the appropriate kickstart
file to make these modifications:

1.  Locate the network line and modify it to include the following options:

        network --bootproto dhcp --hostname refsrv.class.local
        --nameserver 192.168.0.1

2.  Verify that the Linux NFS server address and directory are correct. Modify
    them if necessary.
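Pulling these customizations together, the edited regions of the kickstart file might
read as follows (a sketch; in standard kickstart layout, commands such as network
precede the %packages section, and the rest of the file is omitted):

    network --bootproto dhcp --hostname refsrv.class.local --nameserver 192.168.0.1

    %packages
    @ GNOME
    #@ KDE
    @ graphical-internet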

Other customization of the kickstart file


For other customization of the kickstart file, refer to The Official Red Hat Linux
Customization Guide available on the http://www.redhat.com site.


Deploying the operating system

To deploy the operating system:

1.  Ensure that your target server is powered off and that the server icon in the
    Deployment Server Console is deleted.

2.  In the Jobs pane, expand the Red Hat Enterprise Linux AS 3 Scripted Install
    Events folder.

3.  Double-click the ProLiant BL20p Scripted Install for Red Hat Linux AS 3
    job.

4.  At the Job Properties screen, double-click the Run Script Install OS task.

5.  At the Script Information screen, notice how the DOS environment variables
    are used to specify the configuration files used during the job. Ensure that the
    Run this script option is selected, and change the set nfsserver= line to:

        set nfsserver=192.168.0.1

    where the IP address reflects your NFS server. Click Finish to continue.

6.  Click OK to close the Job Properties screen.

7.  Power on your target server and open an iLO Remote Console session to
    observe the deployment progress.


8.  After the target server displays in the Computers pane of the Deployment
    Server Console, drag the ProLiant BL20p Scripted Install for Red Hat Linux
    AS 3 job to the new computer icon representing your target server. Schedule
    the job to execute immediately.

9.  The scripted Red Hat Linux installation continues unattended, and completes
    within 45 to 60 minutes.

10. After the installation completes, log in with the login name of root and the
    password of password.

11. If you elected to install a graphical package, enter startx to start the GUI
    after logging in.

Exercise 4: Configuring the MSA1000 (optional)

If an MSA1000 is connected to your server blade, configure it using these steps:

1.  At the GNOME GUI desktop, click Main Menu → Run Program and execute
    cpqacuxe -R. This starts the cpqacuxe service.

2.  Close the xterm window.

3.  Click the Mozilla Web Browser icon to start the Mozilla web browser.

4.  In the address field, type https://localhost:2381/ and press Enter.


5.  At the Website Certified by an Unknown Authority screen, select Accept this
    certificate permanently and click OK.

6.  At the Security Error: Domain Name Mismatch screen, click OK.

7.  At the Security Warning screen, clear the Alert me whenever I am about to
    view an encrypted page option and click OK.


8.  At the System Management Homepage screen for your server, enter
    password in the Password field, and click OK.

9.  At the System Management Homepage screen, click Array Configuration
    Utility.

10. If you receive an About Popups warning message, click No.

11. The Array Configuration Utility screen displays. Ensure that all your array
    controllers are visible in the left pane. Now you can use the ACU to
    configure your arrays.

12. Create arrays and logical drives as directed by your instructor.

13. Exit ACU.

14. Log out of the System Management Homepage and close the browser
    session.

15. Using Linux disk tools, create extended partitions and logical drives as
    directed by your instructor, and format them with a file system of your
    choice (a command sketch follows this list).
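A minimal command sequence for step 15 might look like this (assuming the
MSA1000 logical drive appears to Linux as /dev/sda and that ext3 is chosen;
verify the device name with fdisk -l before partitioning):

    fdisk /dev/sda              # interactively create an extended partition and logical partitions
    mkfs.ext3 /dev/sda5         # format the first logical partition with ext3
    mkdir -p /mnt/msa           # create a mount point
    mount /dev/sda5 /mnt/msa    # mount the new file system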


Exercise 5: Capturing a hardware configuration and Linux disk image

After you have completed a scripted installation of Red Hat Enterprise Linux
AS 3, your server is ready to become your reference server for future deployments.
To capture the hardware configuration and disk image from your reference server:

1.  In the Jobs pane of the Deployment Server Console, expand the SmartStart
    Toolkit and OS Imaging Events folder.

2.  Double-click the Capture Hardware Configuration and Linux Image job.

3.  In the Job Properties screen, double-click the Capture Hardware
    Configuration task.

4.  In the Run Script screen, change the default names of the hardware
    information and array information files that will be captured and click Finish.

    Change the lnxcap-h.ini file to rslnx-h.ini

    Change the lnxcap-a.ini file to rslnx-a.ini


5.  In the Job Properties screen, double-click the .\images\lnxcap.img task to
    open the Save Disk Image to a File screen.

6.  Change the default image file name lnxcap.img to rslinux.img, and click
    Advanced to view the optional settings for imaging.

7.  In the Create Disk Image Advanced screen, notice that you can change the
    maximum file size and compression ratio, and then click OK to return to the
    Create Disk Image screen.

8.  Click OK to return to the Job Properties screen.

9.  Double-click the Remove cached DHCP information task. If the image being
    captured uses DHCP for any of the NICs, cached information must be
    removed to avoid duplicate IP addresses. The second command in the Run
    this script field removes this information (see the sketch after this list).

10. Click Finish → OK to close the Job Properties screen.

11. Log out of the Linux server.
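As an illustration of the kind of command involved in step 9 (an assumption about
the script's contents, not a copy of it), removing cached dhclient leases on a Red
Hat system amounts to:

    rm -f /var/lib/dhcp/dhclient*.leases    # delete cached DHCP lease files before imaging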


12. In the Deployment Server Console screen, move the modified Capture
    Hardware Configuration and Linux Image job to the appropriate server icon
    in the Computers pane.

13. Select Run this job immediately in the Schedule Computers for Job screen,
    and click OK. The reference server restarts and processes the job.

14. The imaging process should take 10 to 15 minutes. When completed, verify
    that the following files were created successfully:

    C:\Program Files\Altiris\eXpress\Deployment Server\Images\rslinux.img

    C:\Program Files\Altiris\eXpress\Deployment Server\Deploy\configs\rslnx-h.ini

    C:\Program Files\Altiris\eXpress\Deployment Server\Deploy\configs\rslnx-a.ini

Deploying Windows Server 2003 Using Disk Imaging
Module 5 Lab 4

Objectives
After completing this lab, you should be able to:

Deploy Windows Server 2003 using the hardware configuration files and the
disk image previously created

Configure and demonstrate the rip-and-replace functionality

Requirements
To complete this lab, you will need:

    One HP ProLiant deployment server running:

        Microsoft Windows Server 2003 Enterprise Edition with Active
        Directory, Dynamic Host Configuration Protocol (DHCP), and dynamic
        Domain Name System (DNS)

        Microsoft SQL Server Desktop Engine (MSDE) 2000 or Microsoft SQL
        Server 2000, either with Service Pack 3 or later

        HP ProLiant Essentials Rapid Deployment Pack (RDP) 1.60

    One ProLiant BL p-Class server blade enclosure with supported interconnect
    option

    One or more ProLiant server blades such as the ProLiant BL20p Generation 2
    (G2)

    Windows Server 2003 image and hardware configuration files created
    previously

Optionally, to deploy a server blade connected to the HP StorageWorks Modular
Smart Array 1000 (MSA1000), you will also need:

    One or more ProLiant server blades with Storage Area Network (SAN)
    support, such as the ProLiant BL20p G2 with Fibre Channel Mezzanine Card

    One MSA1000 with two or more hot-pluggable drives

    An RJ-45 Patch Panel 2 or GbE2 Interconnect Switch with the GbE2 Storage
    Connectivity Kit

    One Lucent Connector (LC-to-LC) fiber cable for connecting the server blade
    to the MSA1000

Exercise 1: Deploying a hardware configuration and Windows Server 2003 disk image

To deploy a hardware configuration and Windows Server 2003 disk image:

1.  Start the Deployment Server Console.

2.  In the Jobs pane, expand the SmartStart Toolkit and OS Imaging Events
    folder, and double-click the Deploy Hardware Configuration and Windows
    Image job.

3.  At the Job Properties screen, double-click the Deploy Hardware
    Configuration task.

4.  At the Script Information screen, change the default names of the hardware
    information and array information files that will be used in the deployment.
    These are the files you captured in the earlier lab.

    Change the wincap-h.ini file to rsw2k3-h.ini

    Change the wincap-a.ini file to rsw2k3-a.ini

5.  Click Finish to return to the Job Properties screen.

6.  Double-click the Deploy Image task.

7.  At the Disk Image Source screen, change the default image name,
    wincap.img, to rsw2k3.img.

8.  Click Advanced to see the additional settings available when deploying an
    image.

9.  Click OK → Finish to return to the Job Properties screen.

10. Click OK to close the Job Properties screen and return to the Deployment
    Server Console.

11. If your reference server is also your target server, erase the configuration on
    your reference server before deploying the captured image file:

    Caution
    This step is data destructive. You must have successfully captured a disk
    image of your reference server before proceeding.

    a.  In the Deployment Server Console, expand the SmartStart Toolkit
        Hardware Configuration Events folder.

    b.  Move the Erase Hardware Configuration and Disks job to the reference
        server icon in the Computers pane.

    c.  At the Schedule Computers for Job screen, select Run this job
        immediately and click OK to start the job.

    d.  Open the HP integrated Lights-Out (iLO) Remote Console to observe
        the erase job progress.

12. After the erase job is completed, delete your reference server from the
    Deployment Server Console by right-clicking its icon and selecting Delete.

13. At the Confirm Delete pop-up window, select Delete computers and groups
    contained within selected items and click Yes.

14. If necessary, power cycle your target server. At the PXE Boot Selection
    menu, verify that Altiris BootWorks (Initial Deployment) is selected.

    The Initial Deployment job runs for all new computers that are not registered
    in the Altiris database. This job does not perform any work, such as imaging
    the server. When finished, it displays the new computer in the Deployment
    Server Console and waits for further instructions.

    The Initial Deployment job adds the target server to the New Computers
    group in the Deployment Server Console. The target server displays in the
    Deployment Server Console with the waiting state icon.

15. Move the modified Deploy Hardware Configuration and Windows Image job
    to the target server icon in the Computers pane.

16. At the Schedule Computers for Job screen, schedule the job to run
    immediately and click OK.

17. When a warning message displays on the target server, let the timer expire or
    press any key (except ESC) to continue with the installation. No further
    interaction with the target server is required as the image deploys.

18. It takes approximately 15 minutes to deploy the hardware configuration and
    the Windows Server 2003 image. Open an iLO Remote Console screen to
    observe the progress.

Exercise 2: Configuring and demonstrating the rip-and-replace functionality

The RDP console has several features specifically created to support the ProLiant
BL p-Class server blades. One such feature is the ability to control the automatic
redeployment of server blades by altering what is called the server change rule.
This functionality is also called rip-and-replace.

Using the rip-and-replace functionality, the administrator can use RDP to
pre-assign a particular function or role to each server blade bay in a server blade
enclosure. The deployment server keeps track of the physical location of every
ProLiant BL p-Class server blade and detects when a new server blade is placed in
a particular bay. The server change rule can be configured so that when the
deployment server detects a new server blade placed into a previously occupied
bay, a predetermined action occurs automatically.

For example, a ProLiant BL20p G2 server blade in bay 4 runs Microsoft Windows
2000 with Microsoft Internet Information Server. The administrator builds the
server blade (configures the server and the software) and captures the server blade
hardware configuration and disk image. The administrator then configures the
server role for bay 4; that is, she configures the server change rule for bay 4 to
automatically deploy the captured hardware configuration and disk image when a
new server is inserted into bay 4.

When a new server blade is inserted into bay 4, it seeks out the deployment server,
downloads the pre-assigned script, and begins working shortly thereafter without
any intervention. If an existing server blade in bay 4 requires replacement, the new
server blade automatically seeks out the deployment server and downloads the
pre-assigned script to configure itself identically. In other words, the new server
blade automatically takes on the role of the previous server blade, significantly
reducing the time and effort needed to keep servers in production.

During this process, iLO obtains the rack name, chassis name, and bay number,
and communicates this information to RDP so that the correct operating system
configuration is deployed.

To configure and demonstrate the rip-and-replace functionality:

1.  At the Deployment Server Console screen, right-click your server blade icon
    in the Computers pane and select History. In the space below, write the jobs
    that were executed on your server blade, starting with the most recent one.
    Close the History window when finished.

    ............................................................................................................................
    ............................................................................................................................
    ............................................................................................................................
    ............................................................................................................................

2.  In the Computers pane, double-click the server blade icon to launch the
    Computer Properties screen.

3.  At the Computer Properties screen, scroll down and click the Bay icon.

4.  From the Server change rule drop-down menu, select Re-Deploy Computer
    and click OK. The possible choices are:

        Re-Deploy Computer: This rule takes the previous server blade
        configuration history and replays it on the new server blade. All tasks
        and jobs in the server history replay starting from the most recent image
        or scripted installation job.

        Run a Predefined Job: The server blade processes any job specified
        by the user, including the Initial Deployment job.

        Wait for User Interaction: No job or task is performed. The
        Deployment Agent on the server blade is instructed to wait, and the icon
        on the Deployment Server Console is changed to reflect a waiting server
        blade. This is the default selection.

        Ignore the Change: The new server blade is ignored, meaning that
        no jobs are initiated. If the server blade existed in a previous bay, the
        history and parameters for the server blade are moved or associated with
        the new bay. If the server blade is unrecognized, its properties are
        associated with the bay, and the normal process defined for new server
        blades, if any, is followed.

5.  Shut down your server blade and remove it from its bay. Swap it with another
    student group that is ready to test this feature. Insert the replacement server
    blade in the corresponding bay, and observe what happens. Ensure the new
    server blade powers on, and allow several minutes for the server change rule
    to take effect.

    Important
    If connected to a SAN and using Selective Storage Presentation (SSP), you
    must retain the server blade Host Bus Adapters (HBAs) to ensure that the
    replacement server hardware components are identical in every way to the one
    you are replacing. Place each HBA in the new server blade in the same order
    and location as they were in the old server blade.
    When using SSP, the SAN storage solution is configured to allow access to the
    LUNs from a specific server blade using the World Wide ID (WWID) of its
    HBAs. If you replace the server blade and the HBAs change, the HBA
    WWIDs change, and you will lose connectivity to your SAN. RDP 2.0 will
    include jobs that automate the SAN connection recovery.
    If SSP is not configured, the HBA WWID is not used, and the rip-and-replace
    functionality will not be affected by replacement server blades with different
    HBA WWIDs.

6.  When the new server blade registers with the Deployment Server Console, its
    name displayed in the Computers pane will change. The Deployment Server
    Console will then initiate a redeployment job.

What jobs were executed on the new server blade as a result of the server change
rule execution?
............................................................................................................................
............................................................................................................................
............................................................................................................................
............................................................................................................................

If the following jobs were recorded in the history for the original server blade,
what jobs would be automatically executed on the new server blade and in what
order?

a.  Capture Hardware Configuration and Windows Image (most recent)

b.  ProLiant BL20p Scripted Install for Microsoft Windows 2003

............................................................................................................................
............................................................................................................................

If the following jobs were recorded in the history for the original server blade,
what jobs would be automatically executed on the new server blade and in what
order?

a.  Deploy Hardware Configuration and Windows Image (most recent)

b.  Capture Hardware Configuration and Windows Image

c.  ProLiant BL20p Scripted Install for Microsoft Windows 2003

............................................................................................................................
............................................................................................................................
............................................................................................................................

For a fast recovery of a failed server blade, what is the most recent job that should
be in the server blade history?
............................................................................................................................
............................................................................................................................


Deploying Red Hat Enterprise Linux AS 3 Using Disk Imaging
Module 5 Lab 5

Objectives
After completing this lab, you should be able to:

    Deploy Red Hat Enterprise Linux AS 3 using the hardware configuration
    files and the disk image previously created

    Configure and demonstrate the rip-and-replace functionality

Requirements
To complete this lab, you will need:

    One HP ProLiant deployment server running:

        Microsoft Windows Server 2003 Enterprise Edition with Active
        Directory, Dynamic Host Configuration Protocol (DHCP), and dynamic
        Domain Name System (DNS)

        Microsoft SQL Server Desktop Engine (MSDE) 2000 or Microsoft SQL
        Server 2000, either with Service Pack 3 or later

        HP ProLiant Essentials Rapid Deployment Pack (RDP) 1.60

    One ProLiant BL p-Class server blade enclosure with supported interconnect
    option

    One or more ProLiant server blades such as the ProLiant BL20p Generation 2
    (G2)

    Red Hat Enterprise Linux AS 3 image and hardware configuration files
    created previously

Optionally, to deploy a server blade connected to the HP StorageWorks Modular
Smart Array 1000 (MSA1000), you will also need:

    One or more ProLiant server blades with Storage Area Network (SAN)
    support, such as the ProLiant BL20p G2 with Fibre Channel Mezzanine Card

    One MSA1000 with two or more hot-pluggable drives

    An RJ-45 Patch Panel 2 or GbE2 Interconnect Switch with the GbE2 Storage
    Connectivity Kit

    One Lucent Connector (LC-to-LC) fiber cable for connecting the server blade
    to the MSA1000

Exercise 1: Deploying a hardware configuration and Red Hat Enterprise Linux AS 3 disk image

To deploy a hardware configuration and Red Hat Enterprise Linux AS 3 disk
image:

1.  Start the Deployment Server Console.

2.  In the Jobs pane, expand the SmartStart Toolkit and OS Imaging Events
    folder, and double-click the Deploy Hardware Configuration and Linux
    Image job.

3.  At the Job Properties screen, double-click the Deploy Hardware
    Configuration task.

4.  At the Script Information screen, change the default names of the hardware
    information and array information files that will be used in the deployment.
    These are the files you captured in the earlier lab.

    Change the lnxcap-h.ini file to rslnx-h.ini

    Change the lnxcap-a.ini file to rslnx-a.ini

5.  Click Finish to return to the Job Properties screen.

6.  Double-click the Deploy Image task.

7.  At the Disk Image Source screen, change the default image name,
    lnxcap.img, to rslinux.img.

8.  Click Advanced to see the additional settings available when deploying an
    image.

9.  Click OK → Finish to return to the Job Properties screen.

10. Click OK to close the Job Properties screen and return to the Deployment
    Server Console.

11. If your reference server is also your target server, erase the configuration on
    your reference server before deploying the captured image file:

    Caution
    This step is data destructive. You must have successfully captured a disk
    image of your reference server before proceeding.

    a.  In the Deployment Server Console, expand the SmartStart Toolkit
        Hardware Configuration Events folder.

    b.  Move the Erase Hardware Configuration and Disks job to the reference
        server icon in the Computers pane.

    c.  At the Schedule Computers for Job screen, select Run this job
        immediately and click OK to start the job.

    d.  Open the HP integrated Lights-Out (iLO) Remote Console to observe
        the erase job progress.

12. After the erase job is completed, delete your reference server from the
    Deployment Server Console by right-clicking its icon and selecting Delete.

13. At the Confirm Delete pop-up window, select Delete computers and groups
    contained within selected items and click Yes.

HP BladeSystem Solutions I Planning and Deployment

14. If necessary, power cycle your target server. At the PXE Boot Selection
menu, Altiris BootWorks (Initial Deployment) should be auto-selected.
The Initial Deployment job runs for all new computers that are not registered
in the Altiris database. This job does not perform any work, such as imaging
the server. When finished, it displays the new computer in the Deployment
Server Console and waits for further instructions.
The Initial Deployment job adds the target server to the New Computers
group in the Deployment Server Console. The target server displays in the
Deployment Server Console with the waiting state icon.


15. Move the modified Deploy Hardware Configuration and Linux Image job to
the target server icon in the Computers pane.

16. At the Schedule Computers for Job screen, schedule the job to run
immediately and click OK.
17. When a warning message displays on the target server, let the timer expire or
press any key (except ESC) to continue with the installation. No further
interaction with the target server is required as the image deploys.


18. It takes approximately 15 minutes to deploy the hardware configuration and the Red Hat Enterprise Linux AS 3 image. Open an iLO Remote Console session and observe the progress.


Exercise 2: Configuring and demonstrating the rip-and-replace functionality


The RDP console has several features specifically created to support the ProLiant
BL p-Class server blades. One such functionality is the ability to control the
automatic redeployment of server blades by altering what is called the server
change rule. This functionality is also called rip-and-replace.
Using the rip-and-replace functionality, the administrator can use RDP to pre-assign a particular function or role to each server blade bay in a server blade
enclosure. The deployment server keeps track of the physical location of every
ProLiant BL p-Class server blade and detects when a new server blade is placed in
a particular bay. The server change rule can be configured so that when the
deployment server detects a new server blade placed into a previously occupied
bay, a pre-determined action occurs automatically.
For example, a ProLiant BL20p G2 server blade in bay 4 runs Red Hat Enterprise
Linux AS 3 with an Internet application. The administrator builds the server blade
(configures the server and the software) and captures the server blade hardware
configuration and disk image. The administrator then configures the server role for bay 4; that is, she configures the server change rule for bay 4 to automatically deploy
the captured hardware configuration and disk image when a new server is inserted
into bay 4.
When a new server blade is inserted into bay 4, it seeks out the deployment server,
downloads the pre-assigned script, and begins working shortly thereafter without
any intervention. If an existing server blade in bay 4 requires replacement, the new
server blade automatically seeks out the deployment server and downloads the pre-assigned script to configure itself identically. In other words, the new server blade
automatically takes on the role of the previous server blade, significantly reducing
the time and effort needed to keep servers in production.
During this process, iLO obtains the rack name, chassis name, and bay number,
and communicates this information to RDP for the correct operating system
configuration to be deployed.


To configure and demonstrate the rip-and-replace functionality:


1. At the Deployment Server Console screen, right-click your server blade icon in the Computers pane and select History. In the space below, write the jobs that were executed on your server blade, starting with the most recent one. Close the History window when finished.
............................................................................................................................
............................................................................................................................
............................................................................................................................
............................................................................................................................


2. In the Computers pane, double-click the server blade icon to launch the Computer Properties screen.

3. At the Computer Properties screen, scroll down and click the Bay icon.


4. From the Server change rule drop-down menu, select Re-Deploy Computer and click OK. The choices are:

Re-Deploy Computer: This rule takes the previous server blade configuration history and replays it on the new server blade. All tasks and jobs in the server history replay, starting from the most recent image or scripted installation job.

Run a Predefined Job: The server blade processes any job specified by the user, including the Initial Deployment job.

Wait for User Interaction: No job or task is performed. The Deployment Agent on the server blade is instructed to wait, and the icon on the Deployment Server Console is changed to reflect a waiting server blade. This is the default selection.

Ignore the Change: The new server blade is ignored, meaning that no jobs are initiated. If the server blade existed in a previous bay, the history and parameters for the server blade are moved or associated with the new bay. If the server blade is unrecognized, its properties are associated with the bay, and the normal process defined for new server blades, if any, is followed.
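Conceptually, the deployment server evaluates the selected rule each time it detects a new server blade in a previously occupied bay. The following shell-style pseudocode only illustrates that decision flow; the variable and function names are hypothetical and do not correspond to a real RDP interface:

    # Illustrative pseudocode only -- this logic runs inside the
    # deployment server, not as a script you execute.
    case "$SERVER_CHANGE_RULE" in
        redeploy)  replay_history_since_last_image "$BAY" ;;  # Re-Deploy Computer
        run_job)   run_predefined_job "$JOB" "$BAY" ;;        # Run a Predefined Job
        wait)      mark_bay_waiting "$BAY" ;;                 # Wait for User Interaction (default)
        ignore)    associate_history_with_bay "$BAY" ;;       # Ignore the Change
    esac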


5. Shut down your server blade and remove it from its bay. Swap it with another student group that is ready to test this feature. Insert the replacement server blade in the corresponding bay, and observe what happens. Ensure the new server blade powers on, and allow several minutes for the server change rule to take effect.


Important
If connected to a SAN and using Selective Storage Presentation (SSP), you
must retain the server blade Host Bus Adapters (HBAs) to ensure that the
replacement server hardware components are identical in every way to the one
you are replacing. Place each HBA in the new server blade in the same order
and location as they were in the old server blade.
When using SSP, the SAN storage solution is configured to allow access to the
LUNs from a specific server blade using the World Wide ID (WWID) of its
HBAs. If you replace the server blade and the HBAs change, the HBA
WWIDs change, and you will lose connectivity to your SAN. RDP 2.0 will
include jobs that automate the SAN connection recovery.
If SSP is not configured, the HBA WWID is not used, and the rip-and-replace
functionality will not be affected by replacement server blades with different
HBA WWIDs.
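If you need to verify that a replacement blade presents the expected WWIDs, one option is to read them from the HBA driver on the deployed Linux image. This sketch assumes the QLogic qla2300 driver used with the BL20p G2 Fibre Channel mezzanine exposes adapter details under /proc/scsi; the exact path and field names vary by driver version:

    # List the HBA instances known to the driver (path is an assumption)
    ls /proc/scsi/qla2300/
    # Print the adapter details and look for the port world wide name
    grep -i "port" /proc/scsi/qla2300/0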

6. When the new server blade registers with the Deployment Server Console, its name displayed in the Computers pane will change. The Deployment Server Console will then initiate a redeployment job.


What jobs were executed on the new server blade as a result of the server change
rule execution?
............................................................................................................................
............................................................................................................................
............................................................................................................................
............................................................................................................................
If the following jobs were recorded in the history for the original server blade,
what jobs would be automatically executed on the new server blade and in what
order?
a. Capture Hardware Configuration and Linux Image (most recent)

b. ProLiant BL20p Scripted Install for Red Hat Linux AS 3

............................................................................................................................
............................................................................................................................
If the following jobs were recorded in the history for the original server blade,
what jobs would be automatically executed on the new server blade and in what
order?
a. Deploy Hardware Configuration and Linux Image (most recent)

b. Capture Hardware Configuration and Linux Image

c. ProLiant BL20p Scripted Install for Red Hat Linux AS 3

............................................................................................................................
............................................................................................................................
............................................................................................................................
For a fast recovery of a failed server blade, what is the most recent job that should
be in the server blade history?
............................................................................................................................
............................................................................................................................

