HP BladeSystem Solutions I Planning and Deployment
CSG21230SG10411
HP Training
Student guide
Contents
Compare the differences among the ProLiant BL20p G2, BL20p G3, BL30p,
and BL40p server blades
Rev. 4.41
Network interconnectivity
Storage connectivity
Power infrastructure
HP Restricted
VLAN
STP
Apply the basic concepts of the Spanning Tree Protocol (STP) by using
VLANs in conjunction with STP
Access and configure the integrated Lights-Out (iLO) of your server blade
RDP
iLO
Install the HP ProLiant Integration Module for the Deployment Solution 1.60
Install the Altiris eXpress Deployment Server Agent on the reference server
blade
Capture the reference server blade hardware configuration and disk image
Deploy Windows Server 2003 using the hardware configuration files and the
disk image previously created
Disable the integrated array controller and change the boot order
Explain how Systems Insight Manager integrates with OVO for Windows to
manage HP BladeSystems
Verify the HP System Insight Manager (HP SIM) 4.1 hardware and software
requirements
Course Overview
HP BladeSystem Solutions I Planning and Deployment
© 2004 Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice.
Introduction
The HP BladeSystem portfolio includes a full range of solutions to optimize server, network,
and storage use. HP BladeSystem solutions are built on HP ProLiant BL
p-Class server blades, which provide higher density and less cabling than rack-mounted
servers. ProLiant BL p-Class server blades also offer simple deployment solutions and efficient
management tools.
Because the high-density ProLiant BL p-Class server blades offer several features specific to
their design, there are special considerations in their implementation and maintenance. In this
course, you will learn how to:
Identify HP BladeSystem server blades
Deploy HP BladeSystem server blades
Connect HP BladeSystem server blades to a network
Connect HP BladeSystem server blades to storage devices
Manage HP BladeSystem server blades
Troubleshoot common HP BladeSystem server blade problems
Prerequisites
This course is a Master Accredited Systems Engineer (ASE) level course. It is designed for
people who have the required certifications or equivalent knowledge and experience. The
student guide and labs designed for this course, combined with other information you receive
from HP, will help you prepare for the Master ASE exam.
Prerequisite certifications
The following certifications or equivalent knowledge and experience are required before taking
this class:
Microsoft Certified Systems Engineer (MCSE) for Windows 2000/Windows Server 2003 or
Red Hat/SuSE Linux certification
HP Accredited Systems Engineer (ASE) certification
Prerequisite training
In addition to these certifications, each student is required to have completed the following
training or have the equivalent knowledge and experience:
HP StorageWorks Full-Line Technical Web-Based Training (WBT)
Microsoft Windows 2000 Integration and Performance course
ProLiant Essentials Rapid Deployment Pack (RDP) Advanced Technical Training or
Deploying Linux on HP ProLiant servers using RDP Linux Edition
Installing and Using HP Systems Insight Manager (optional but recommended)
Important! This course builds on knowledge gained in the prerequisite certifications and
training. If a student does not meet the prerequisites, this course can be extremely difficult or
impossible to complete. The course is written, and will be taught, as if the prerequisites have
been met by all students.
Course objectives (1 of 3)
After completing this course, you should be able to:
Identify the major components of an HP BladeSystem solution
Describe the HP ProLiant BL server blade line strategy
Identify the HP BladeSystem families
Discuss the deployment and management tools available for HP BladeSystem solutions
Compare the significant differences among the HP ProLiant BL20p G2, BL30p, and BL40p
server blades
Identify the ProLiant BL p-Class system components
Describe technologies and concepts that are unique to the ProLiant BL p-Class server blade
system
Course objectives (2 of 3)
Discuss the architecture of the ProLiant BL p-Class system components and the benefits
they provide
List the server blade options and describe how they are used to enhance performance in the
ProLiant BL p-Class system
Plan a deployment site for HP BladeSystem servers
Plan the deployment of a target environment
Design the power infrastructure of HP ProLiant BL p-Class server blades
Deploy HP ProLiant BladeSystem servers using
HP ProLiant Essentials RDP
HP integrated Lights-Out (iLO)
HP Systems Insight Manager
Prepare a deployment server and deploy HP BladeSystem servers
Discuss general networking concepts
VLAN
Spanning tree protocol
Port trunking and teaming
Choose the appropriate interconnect options for HP BladeSystem servers
Explain iLO port aggregation in HP BladeSystem servers
Course objectives (3 of 3)
Identify the storage solutions supported by the HP BladeSystem
Describe HP BladeSystem SAN support
Explain how to connect a ProLiant BL p-Class server to a SAN
Discuss the process of booting from a SAN
Identify functions and components of Systems Insight Manager
Discuss how HP OVO for Windows provides management
services for HP BladeSystems
Explain how Systems Insight Manager integrates with OVO for
Windows and iLO technology to manage HP BladeSystems
Use the ProLiant BL p-Class Diagnostic Station to communicate
with an HP BladeSystem solution
Discuss service and troubleshooting procedures for HP
BladeSystems
List HP warranty and support options for HP BladeSystems
Other information
Course modules
This course includes the following modules:
Module 1: Introducing the HP BladeSystem Portfolio
Module 2: ProLiant BL p-Class Server Blades and Infrastructure
Module 3: Site Planning and Infrastructure Design
Module 4: ProLiant BL p-Class Network Connectivity Options
Module 5: Deploying ProLiant BL p-Class Server Blades
Module 6: ProLiant BL p-Class Storage Connectivity Options
Module 7: ProLiant BL p-Class Server Blade Management
Module 8: ProLiant BL p-Class Service and Troubleshooting
Classroom facilities
The instructor will give you detailed information concerning:
Location of restrooms and smoking areas
Class hours
Class start time
Scheduled breaks
Class stop time
Classroom guidelines
Use the following guidelines when attending this class:
Do not interfere with other students' learning.
Be on time for class.
Turn all mobile phones and pagers to off or the silent setting.
Be professional in your speech and actions.
Do not change or modify lab equipment, passwords, or software configurations.
Do not smoke in the classroom.
Important! You may be removed from the classroom and not allowed to return if you fail to
follow the classroom guidelines.
Module 1
Introducing the HP BladeSystem Portfolio
Objectives
After completing this module, you should be able to:
Define server blade and server blade enclosure
Describe the HP BladeSystem product line
Identify the deployment and management tools available for the HP BladeSystem solutions
Discuss the benefits of server blades and the total cost of ownership (TCO) of HP
BladeSystem solutions
HP industry-leading technologies
Tool-free mechanical design
Hot-pluggable components
Remote management through iLO
Blade PC solutions and ProLiant server blades in a 42U rack
HP BladeSystem portfolio
The HP BladeSystem server portfolio includes two distinct families of server blades:
Blade PC solutions: Designed for power and space efficiency
Optimized architecture for office productivity applications
Integrated management and deployment tools for scale-out environments
System-level redundancy
ProLiant C-GbE Interconnect (with four 10/100/1000Mb/s Gigabit Ethernet RJ-45 uplinks for up to 40-to-1 network cable reduction)
Model: HP bc1000 blade PC
ProLiant server blades: High-performance, high-availability server blades designed for multitiered data center architectures
Intelligent fault-resilient power, redundant NICs, integrated RAID, and optional hot-pluggable SCSI drives.
Note: The BL30p has optional internal boot drives.
Remote management using iLO Advanced with built-in graphic console and virtual media
System of choice for dynamic web hosting and media streaming
Current models:
ProLiant BL20p Generation 2 (G2) and Generation 3 (G3)
ProLiant BL30p
ProLiant BL40p
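The cable-reduction figure quoted for the C-GbE Interconnect can be sanity-checked with simple arithmetic. This is a minimal sketch; the input counts (20 blade PCs, two NICs each, one shared uplink) are hypothetical placeholders, and the function name is invented for illustration:

```python
def cable_reduction(num_blades: int, nics_per_blade: int, uplinks_used: int) -> float:
    """Ratio of per-server network cables replaced by shared interconnect uplinks."""
    if uplinks_used < 1:
        raise ValueError("at least one uplink is required")
    return (num_blades * nics_per_blade) / uplinks_used

# 20 blade PCs x 2 NICs each = 40 cables without an interconnect;
# consolidated onto one uplink, that is a 40-to-1 reduction.
print(cable_reduction(20, 2, 1))  # -> 40.0
```

Using all four RJ-45 uplinks instead of one still gives a 10-to-1 reduction for the same hypothetical counts, which is why the slide hedges the claim as "up to" 40-to-1.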
Blade PC solutions
HP blade PC solutions are computing solutions that centralize desktop compute and storage
resources into easily managed, highly secure data centers. They also offer users the
convenience and familiarity of a traditional desktop environment.
The HP blade PC solutions are single-processor blades used in three-tiered Consolidated Client
Infrastructure (CCI) solutions, which feature:
A compute tier with racks of HP bc1000 blade PCs on the back end
An access tier using thin clients on the front end
A resource tier made up of a storage pool, network printers, application servers, and other
networked resources
Blade PC solutions are covered by End User Workplace Solutions from HP Services. These
services can help you simplify the provisioning, support, and management of any access device
or printer. These services also provide users with secure access to corporate information, email,
Internet, and printer services, anywhere, anytime.
Note: The HP blade PC solutions are currently available in North America only.
HP bc1000 blade PC
The HP bc1000 blade PC offers the following features:
Form factor: A 3U form factor maximizes density.
Processor: A 1.0GHz Transmeta Efficeon TM8000 processor offers thermal efficiencies designed to minimize power and cooling requirements, which lowers TCO.
Memory: 512MB DDR SDRAM PC2700 at 333MHz with 1024KB L2 cache memory, expandable to 1GB, offers good performance for mainstream applications.
Network controllers: Dual 10/100 integrated NICs help maximize throughput efficiencies.
40GB hard drive: Ample storage is available for accessing and working with data.
Warranty: A three-year limited hardware warranty is standard on the blade, one year on the drive.
Currently, the HP bc1000 blade PC is only available with the Windows XP Embedded Service Pack (SP) 1 operating system installed.
Benefits
Provides extreme density: up to 20 blade PCs per enclosure
Minimizes power and cooling requirements
Offers good performance
Maximizes network throughput
Provides ample storage
Components
Client tier
DAE
Compute tier
Data resources tier
Thin clients
The HP Thin Client Series uses streamlined components to reduce hardware and software duplication across a network. Both Windows CE.NET and Windows XP Embedded operating systems are supported.
The HP portfolio of thin clients includes:
HP thin client t5700
HP thin client t5515
HP thin client t5500
HP thin client t5300
You can read the complete specifications at:
http://h18004.www1.hp.com/products/thinclients/index_t5000.html
ProLiant server blade lineup: BL20p G2 and G3 (2P), BL30p (2P), and BL40p (4P)
Deployment and management tools
RDP
SmartStart Scripting Toolkit
Systems Insight Manager
OpenView
RDP
RDP is a complete solution for ProLiant servers that automates the process of deploying and
provisioning server software, enabling companies to quickly and easily adapt to changing
business demands. RDP combines an off-the-shelf version of Altiris Deployment Solution with
the ProLiant Integration Module.
The ProLiant Integration Module consists of software optimizations for ProLiant servers,
including:
SmartStart Scripting Toolkit
Configuration jobs for industry-standard operating systems
Sample unattended files
ProLiant Support Packs (PSPs) that include software drivers, management agents, and
important documentation
Deployment Solution provides a choice between a Windows-based and a web-based management console. Both consoles have an intuitive user interface, making deployment of a
server or multiple servers easy and consistent. You can deploy servers through the imaging
feature or by scripting using the SmartStart Scripting Toolkit.
RDP is available in Linux and Windows editions. The Windows edition is hosted on a
Windows server and is intended for heterogeneous environments deploying both Windows and
Linux systems. The Linux edition is hosted on a Linux server and is intended for a
homogeneous Linux environment deploying only Linux systems.
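As a rough illustration of what scripted deployment automates, the fragment below parameterizes a minimal unattended-install template per server blade. This is not RDP's actual job format or API; the template keys, time-zone value, and blade names are invented for the example:

```python
# Toy illustration of generating a per-blade unattended answer file from a
# template. RDP ships its own sample unattended files and configuration jobs;
# this only sketches the parameterization idea.
UNATTENDED_TEMPLATE = """[UserData]
ComputerName={name}
[GuiUnattended]
TimeZone={tz}
"""

def render_unattended(name: str, tz: str = "035") -> str:
    """Fill the template for one server blade."""
    return UNATTENDED_TEMPLATE.format(name=name, tz=tz)

blades = ["BLADE1", "BLADE2", "BLADE3"]
answer_files = {b: render_unattended(b) for b in blades}
print(answer_files["BLADE2"].splitlines()[1])  # prints ComputerName=BLADE2
```

The same loop scales from one server to a whole enclosure, which is the point of scripted (as opposed to manual) deployment.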
HP OpenView
OpenView products ensure smooth system operation by managing the availability and performance of critical services across the enterprise. Products in the OpenView software suite include:
Network Node Manager (NNM): Designed for all sizes of networks requiring discovery, graphical layout, and advanced management of network equipment; sophisticated root-cause analysis; and distributed management for large networks spanning multiple departments.
OpenView Operations (OVO) for Windows: Provides comprehensive event management, proactive performance monitoring, and automated alerting, reporting, and graphing for heterogeneous systems, middleware, and applications.
OVO for UNIX (Network Node Manager and Service Navigator): Provides a distributed, large-scale management solution that monitors, controls, and reports the health of IT environments.
GlancePlus: Allows easy examination of system activities, identifies and resolves performance bottlenecks, and tunes the system for more efficient operation.
GlancePlus Pak (GlancePlus and Performance Agent): Offers the diagnostic capabilities of GlancePlus and the logging, alarming, and collection capabilities of Performance Agent.
Performance Manager: Monitors, analyzes, and forecasts resource utilization for distributed and multi-vendor environments. Performance Manager uses data collected from the Performance Agent and other sources to isolate bottlenecks and maximize resource uptime.
Smart Plug-Ins (SPIs): Extend the management capabilities of OpenView to ensure optimum performance and uptime for specific applications. SPIs include BEA WebLogic Server, IBM WebSphere, Microsoft .NET, Active Directory, Exchange and SQL Server, Oracle, PeopleSoft, and Sun Java System Application Server 7.
HP BladeSystem benefits
Data center space savings: Server blade systems reduce required data center space 14% to 24%.
Lower connectivity costs and simplified cabling: Up to 25% of a system administrator's time is spent in cable management, and cable failures are a primary cause of downtime. Server blade systems are wired once and reconfigured through virtual LAN (VLAN) software configuration tools.
Fewer spare parts: Server blade architectures are designed for shared storage, so all user-changeable data should be on NAS and SANs. Server blades run operating systems and applications only and are managed by software deployment tools, so there are fewer errors during operating system, patch, and application maintenance.
Reduced installation, upgrade, and maintenance time: HP BladeSystems are installed once, then reconfigured with software tools as necessary. Adding and reconfiguring server blades, network ports, cables, and disk capacity takes minutes instead of days.
Higher system availability: Server blades are fully redundant with dual VLAN switches per blade enclosure, redundant shared power systems across all blades in a rack, backplane data paths (Ethernet and Fibre Channel SAN), local disks (RAID 1), and fans. In addition, rip-and-replace server maintenance (through the enclosure slot and using software deployment tools such as RDP) decreases downtime.
Improved data center efficiency: Blade systems are a catalyst to improving data center ratios (devices managed per administrator) and reduce the need to touch every device in the data center.
Remote access for centralized management: You can centralize management of multiple data centers and merge separate management domains (servers, network, and storage).
Automated deployment and provisioning: Blade systems provide optional redundant SAN connectivity.
Acquisition
Planning
Deployment and provisioning
Maintenance
Upgrades and replacements
Reprovisioning
TCO/NPV comparison: 1U server scenario vs. BL20p G2 scenario
The detailed worksheet compares acquisition, installation, cabling, and maintenance/upgrade costs over years 0 through 3 and shows an overall savings for the BL20p G2 scenario. Key formulas:
Cabling costs = number of cables x time to install one x labor rate
Maintenance/upgrade costs = number of servers x number of events per year x time to remove and install x labor rate
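The two worksheet formulas translate directly into code. A minimal sketch; all input figures below (cable counts, hours, labor rate) are hypothetical placeholders, not numbers from the HP TCO study:

```python
def cabling_cost(num_cables: int, hours_per_cable: float, labor_rate: float) -> float:
    """Number of cables x time to install one x labor rate."""
    return num_cables * hours_per_cable * labor_rate

def maintenance_cost(num_servers: int, events_per_year: int,
                     hours_per_event: float, labor_rate: float) -> float:
    """Number of servers x events per year x time to remove and install x labor rate."""
    return num_servers * events_per_year * hours_per_event * labor_rate

# Example: 84 cables at 0.5 h each, and 40 servers with 2 maintenance
# events per year at 1 h each, at a $70/h labor rate.
print(cabling_cost(84, 0.5, 70.0))         # -> 2940.0
print(maintenance_cost(40, 2, 1.0, 70.0))  # -> 5600.0
```

Plugging in blade-system counts (far fewer cables, software-driven maintenance) versus 1U-server counts is how the worksheet derives the savings column.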
Learning check
1. Define server blade.
_________________________________________________________________________
_________________________________________________________________________
__________________________________________________________
2. Define server blade enclosure.
_________________________________________________________________________
_________________________________________________________________________
__________________________________________________________
3. The HP BladeSystem _______________ portfolio offers high availability server blades
designed for multitiered data center architectures.
4. The HP BladeSystem _______________ portfolio only offers single-processor server
blades.
5. How does the HP BladeSystem lower TCO during acquisition?
_________________________________________________________________________
_________________________________________________________________________
_____________________________________________________________
6. Match the deployment and management tool with its feature.
a. RDP
b. SmartStart Scripting Toolkit
c. Systems Insight Manager
d. HP OpenView

___ Performs unattended installation
___ Automates deployment
___ Identifies pre-failure conditions
___ Manages the availability and performance of critical services across the enterprise
Objectives
After completing this lab, you should be able to:
Introduction
This lab is provided as a reference point for validating the classroom setup and
configuration. Typically, most of the configuration work described in this lab has
already been performed by your instructor or the support staff.
Important
Do not proceed without first consulting your instructor or carefully reviewing a
configuration deviation document. Your instructor or the configuration
deviation document will explain what configuration steps, if any, must yet be
performed.
Regardless of the initial configuration state, you must become familiar with the
classroom setup and configuration. Depending on the initial configuration state,
perform one of the following:
1. If you are to complete certain configuration steps described in this lab and identified by your instructor or the configuration deviation document, perform them as instructed. When done, review this entire lab to become familiar with the classroom setup and configuration.
2.
Student stations
Six student stations should be available. Each student station consists of:
Each student station server has Microsoft Windows Server 2003 Enterprise
Edition installed and hosts these roles:
HP BladeSystem
One HP BladeSystem is required in the classroom, with configuration as follows:
Single processor
Important
It is mandatory to have 240VAC 30A power to each ProLiant BL p-Class power enclosure.
IP address/subnet mask:
Figure: classroom configuration, showing a blade rack with one blade enclosure and six ProLiant BL20p G2s, a data/PXE network (192.168.0.x), an iLO network (192.168.1.x), and an MSA1000 SAN connection.
This figure shows the overall classroom configuration. At the center, a blade rack
consists of a single-phase power enclosure with two single-phase power supplies
installed in bays 1 and 2. One blade server enclosure contains two GbE2
Interconnect Switches and six ProLiant BL20p G2 server blades. Each server
blade is connected to the shared MSA1000 using the embedded StorageWorks
SAN Switch 2/8.
Six student stations are connected to the GbE2 interconnect switches. NIC 1 of
each student server is connected to the left (side A) GbE2 Interconnect Switch,
providing data and the PXE network. NIC 2 of each student server is connected to
the right (side B) GbE2 interconnect switch, providing the iLO network.
Each student group has a dedicated ProLiant BL20p G2/G3 server blade in the bay corresponding to the student group number. For example, student group 4 has a dedicated server blade in bay 4.
LUN assignments: LUN 1 (blade 1) through LUN 6 (blade 6).
The MSA1000 has at least five 18.2GB disk drives configured with RAID 5. Six
logical drives (LUNs) were created and assigned to their respective server blades.
Access to the individual LUNs is controlled by Selective Storage Presentation
(SSP).
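Selective Storage Presentation behaves like an access list on each LUN: a server blade sees only the LUNs presented to it. The toy model below assumes the classroom's one-LUN-per-blade layout; the function names and blade identifiers are invented for illustration, not the MSA1000 interface:

```python
# Toy model of SSP-style LUN masking: each LUN carries an access list and
# only listed blades can see it. Mirrors the classroom's LUN n -> blade n map.
lun_acl = {lun: {f"blade{lun}"} for lun in range(1, 7)}

def can_access(blade, lun):
    """True if the LUN is presented to this blade."""
    return blade in lun_acl.get(lun, set())

def visible_luns(blade):
    """The subset of LUNs this blade would discover."""
    return [lun for lun, acl in lun_acl.items() if blade in acl]

print(can_access("blade4", 4))   # -> True
print(can_access("blade4", 5))   # -> False (other groups' LUNs are masked)
print(visible_luns("blade4"))    # -> [4]
```

Even though all six blades share one storage array and fabric, each group sees only its own logical drive, which is what keeps the shared classroom SAN safe to use.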
VLAN          Deployment server port    Server blade port
1 (default)   N/A                       N/A
11            1                         1
12            2                         2
13            3                         3
14            4                         4
15            5                         5
16            6                         6
Ports 17 and 18 are the crosslink ports. Ports 13, 14, 15, and 16 belong to server
blades in bays 7 and 8, which are empty. All these ports are left in the default
VLAN 1.
Module 2
ProLiant BL p-Class Server Blades and Infrastructure
Objectives
After completing this module, you should be able to:
Discuss the HP ProLiant BL p-Class system anatomy
Describe the server blade enclosure
Compare the differences among the ProLiant BL20p Generation 2 (G2), BL20p G3,
BL30p, and BL40p server blades
List and describe the ProLiant BL p-Class server blade options such as the:
Network interconnectivity options
Storage connectivity options
Power infrastructure
Design the ProLiant BL p-Class power infrastructure
ProLiant BL20p G2
The ProLiant BL20p G2 is a high-performance 2P server blade designed for enterprise
availability. As a mid-tier applications server blade, it increases the capabilities of the original
two-processor server blade.
The ProLiant BL20p G2 features a 533MHz front-side bus with one or two of the following
Intel Xeon processors:
3.2GHz with 2MB or 1MB Level 3 (L3) cache
3.06GHz with 512KB L2 cache or 1MB L3 cache
2.8GHz processor with 1MB L3 cache or 512KB L2 cache
The integrated Smart Array 5i drive controller offers Ultra3 performance and optional battery-backed write cache (BBWC). Standard interconnects include three NICs, two SANs, and one local I/O port on the front of the server blade that supports USB, video, network, and serial access.
The ProLiant BL20p G2 is compatible with the original server blade enclosure.
With the addition of SAN connectivity, the ProLiant BL20p G2 moves into the small database
and application server markets but maintains its role as a leading web-hosting,
e-commerce, streaming media, and messaging server.
ProLiant BL20p G3
The next generation of the BL20p server blade is a dual-processor-capable server blade that
mounts in either the original or the enhanced server blade enclosure. Important components
include:
Processors: The ProLiant BL20p G3 features an approximate 10% processor performance boost over the BL20p G2. It can be configured with one or two 3.2, 3.4, or 3.6GHz Intel Xeon processors (all with Hyper-Threading and EM64T technology and the Intel E7520 chipset) and up to 8GB of PC2-3200 DDR2 SDRAM memory. It has 1MB of cache memory on the processor chip.
Smart Array 6i: The ProLiant BL20p G3 ships with a Smart Array 6i controller and has an optional, fully transportable 128MB BBWC to protect data from system, hard boot, and power failures.
Interconnects: The ProLiant BL20p G3 includes four integrated Gigabit NICs, dual Fibre Channel host bus adapters (HBAs), and one iLO port.
ProLiant BL30p
The ProLiant BL30p server blade is a mid-tier applications server blade that is optimized for
compute density and external storage solutions. It has double the density of the ProLiant
BL20p series and is capable of supporting dual processors. It also offers Fibre Channel SAN
support and a single aggregated iLO port, which enable the reuse of existing interconnects and
power infrastructure options.
Note: The ProLiant BL30p server blade does not offer hot-pluggable drives or a Smart Array
controller.
The ProLiant BL30p server blade sleeve is required for the ProLiant BL30p server blade only.
Each sleeve holds two ProLiant BL30p server blades and requires an enhanced server blade
enclosure.
The power infrastructure of the ProLiant BL30p has a split power distribution design that is
fully compatible with existing hardware and interconnect options. HP also offers the optional
Dual Power Input Kit, which enables you to attach two power enclosures to a mini bus bar.
New balcony card form factor
The ProLiant BL30p features a Fibre Channel solution similar to that of the ProLiant BL20p G2 or G3, but with a new balcony card that stacks on the mezzanine card. This new card:
Prevents waste of materials in upgrade situations and saves on cost
Improves installation and accessibility
Enables future standardization among all server blades
Rev. 4.41
55
Rev. 4.41
HP Restricted
Rev. 4.41
56
2 56
12
11
10
Rev. 4.41
HP Restricted
Rev. 4.41
57
2 57
ProLiant BL40p
The ProLiant BL40p builds on the success of the ProLiant BL20p by adding processors,
memory, hard drives, and I/O option slots.
Designed for mission-critical applications, the ProLiant BL40p drives blade technology
through the data center. In addition to front-end and application server functionalities, the
ProLiant BL40p provides a high-performance, back-end solution that is suited to large
databases such as enterprise resource planning (ERP) and customer relationship management
(CRM).
BL40p internal components (partial figure legend): 5. PPM slot 4; 6. Processor socket 4; 7. System battery
Processors
BL20p G2: Xeon 2.80GHz, 3.06GHz, or 3.2GHz; 533MHz bus
BL20p G3: Xeon 3.2GHz, 3.4GHz, or 3.6GHz
BL30p: Xeon 3.2GHz; 533MHz bus
BL40p: Xeon MP 1.5/2.0GHz; 400MHz bus
Number of processors
BL20p G2: Maximum of 2
BL20p G3: Maximum of 2
BL30p: Maximum of 2
BL40p: Maximum of 4
Standard/maximum RAM
BL20p G2: 512MB standard, 8GB maximum
BL20p G3: 1024MB standard, 8GB maximum
BL30p: 1024MB standard, 4GB maximum
BL40p: 512MB standard, 12GB maximum
Array controller
BL20p G2: Integrated Smart Array 5i Plus
BL30p: Integrated ServerWorks chipset (no Smart Array controller)
NICs
BL20p G2: Three NICs
BL20p G3: Four NICs
BL30p: Two NICs
BL40p: Five NICs
Drive bays
BL20p G2: Two hot-plug SCSI drive bays
BL20p G3: Two hot-plug SCSI drive bays
BL30p: Two ATA drive bays
BL40p: Four hot-plug SCSI drive bays
Slots
BL20p G2: No slots
BL20p G3: No slots
BL30p: No slots
BL40p: I/O option slots
Chassis
BL20p G2: 6U form factor
BL20p G3: 6U form factor
BL30p: 3U form factor
BL40p: 6U form factor, four slots wide
Server management
All four server blades: iLO Advanced
Power
All four server blades: Rack-centralized
Fibre Channel storage connectivity
All four server blades: Yes
Server blade enclosure, rear view (figure): power backplane, management module, signal backplane
Key changes
iLO port aggregation
Power infrastructure
Storage connectivity
Local
Network
Power infrastructure
Power supplies
Bus bars and bus boxes
Networked storage
Integrate into data centers using industry-standard
and highly available technologies
Power infrastructure
Rack-centralized power subsystem provides redundant,
scalable power to all server blades
Key benefits
Eliminates the cost and cables of PDUs
Provides redundant power for current and future generation
ProLiant BL p-Class server blades
Main components
Power input
Power enclosure
Hot-pluggable power supplies
Investment protection
Same components in the enhanced server blade enclosure
Advantage over IBM
Power infrastructure
The ProLiant BL p-Class system uses a unique, rack-centralized power subsystem that provides
redundant, scalable power to all server blades in a rack. The two key benefits of this design are
that it:
Eliminates the cost and cables of power distribution units (PDUs)
Provides redundant power for current and future generation ProLiant BL p-Class server
blades
The main components of the ProLiant BL p-Class power subsystem are:
Power input
Single phase 208VAC to 260VAC
Three phase 208VAC to 260VAC; supports more server blades than single phase
Direct current (DC) -48VDC
Power enclosure
Holds the power supplies
Ships in both single-phase and three-phase power enclosure models
Offers AC input redundancy
Hot-pluggable power supplies, which convert AC input to -48VDC power for all server blades in the rack
Note: The power supplies are front-accessible hot-plug units and can be installed in various
redundant configurations.
Halving the size of the ProLiant BL30p server blade essentially doubled the power
requirements for the given amount of space. The introduction of the HP enhanced server blade
enclosure was driven primarily by the need for power.
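The redundant supply counts that appear later in this module (for example, 3 + 3) follow from a simple sizing rule: determine the minimum supplies for the load, then double for N + N redundancy. A minimal sketch with hypothetical wattages; consult the ProLiant BL p-Class Sizing Utility for real figures:

```python
import math

def supplies_needed(load_watts: float, supply_watts: float) -> int:
    """Minimum number of power supplies needed to carry the load."""
    return math.ceil(load_watts / supply_watts)

def with_redundancy(base: int) -> int:
    """N + N redundancy doubles the supply count (for example, 3 + 3)."""
    return base * 2

# Hypothetical figures for illustration only; the real numbers come
# from the ProLiant BL p-Class Sizing Utility.
base = supplies_needed(6000, 2500)   # 6000W load, 2500W per supply
print(base, with_redundancy(base))   # 3 6
```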
Power enclosure
AC power redundancy
Single-phase powering
Power is generally transported over high-voltage transmission lines from a power company substation to a local step-down transformer on or near the building that houses the server blade infrastructure.
The secondary windings of the step-down transformer (the side that feeds the building) are wound in three separate windings, called the three phases, each producing about 220VAC phase to phase. Power considerations include:
Phase to ground produces 110VAC and phase to phase produces 220VAC. One of these single 220VAC phases is used to power the single-phase enclosure.
Side A of the enclosure should be on a different phase than side B.
A single-phase plug has three pins. Two carry the phase-to-phase 220VAC; the third pin is used for grounding and, if the equipment requires it, for 110VAC phase to ground.
Three-phase powering
With three-phase power, all three of the separate phases of 220VAC are used. Three-phase plugs
have more than three pins. The extra pins are used for the additional phases.
Only three connectors plus the ground in the plug are needed for three-phase power. The three
phases are wired across each of the power connectors and the negative side is ground. The
international plugs often contain more pins, but operate in the same manner.
Important! One of the connectors (the keyed one) is the ground connector; the plugs must be
keyed so that the connectors align properly.
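The point that three-phase power supports more server blades than single phase follows from simple arithmetic: with all three phases carrying current, roughly three times the power can be delivered. An idealized sketch using the 220VAC figure from the text and a hypothetical 30A per-phase current rating:

```python
# Idealized comparison of deliverable power, single- versus three-phase.
# Uses the 220VAC phase-to-phase figure from the text and a hypothetical
# 30A per-phase current rating (illustration only, losses ignored).
VOLTS = 220.0
AMPS = 30.0

single_phase_w = VOLTS * AMPS         # one phase pair carries the load
three_phase_w = 3 * VOLTS * AMPS      # all three phases carry current

print(single_phase_w, three_phase_w)  # 6600.0 19800.0
```

Three times the deliverable power per feed is why the three-phase enclosure supports more server blades.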
Power bus box (figure legend): 1. DC input cables; 2. DC power out to couplers on blade enclosures; 3. Circuit breaker
HP recommends three-phase
power for the ProLiant BL30p
Server blade capacity without power redundancy

Bus bar     Server blades   Height   Redundancy
Mini        48              21U      None
Mini        96              42U      None
Scalable    54              27U      None
Scalable    80              36U      None

Server blade capacity with power redundancy

Bus bar     Server blades   Height   Redundancy
Mini        48              24U      3 + 3 power supplies and AC
Mini        80              42U      3 + 3 power supplies and AC
Scalable    48              24U      3 + 3 power supplies and AC
Scalable    80              36U      5 + 1 power supplies, no AC
Enhanced server blade enclosure power zones (figure): Power zone 1 and Power zone 2
Learning check
1. List the four ProLiant BL p-Class server blades discussed in this module.
___________________________________________________________________
___________________________________________________________________
2. You can install up to _____ hard drives in the ProLiant BL30p server blade.
a. Eight
b. Six
c. Four
d. Two
3. ProLiant BL20p G2 server blades cannot be mixed with ProLiant BL40p server blades in
the same enclosure.
True
False
4. The ProLiant BL30p _______________ the per-enclosure density of ProLiant BL20p
server blades.
a. Decreases
b. Doubles
c. Triples
d. Quadruples
5. What are two main types of interconnect options in the ProLiant BL p-Class line of server
blades?
__________________________________________________________________
__________________________________________________________________
6. When would you use a power bus box?
__________________________________________________________________
Objectives
Plan a deployment site for HP BladeSystem solutions
Plan a target data center environment
Design the power infrastructure for ProLiant BL p-Class
servers
Objectives
After completing this module, you should be able to:
Plan a deployment site for HP BladeSystem solutions
Plan a target data center environment
Design the power infrastructure of HP ProLiant BL p-Class servers
Site planning
Begin planning early
Ensure environment meets
specifications
Data centers should be
modular
Use the Site Installation
Preparation Utility
Uses individual platform
power calculators
Calculates the impact of
racks with varying loads
Site planning
The high-density ProLiant BL platform requires enterprise-level power and produces
enterprise-level heat loads, driving organizations to begin the planning process earlier in the
procurement cycle.
For maximum performance and availability from HP BladeSystem solutions, ensure that the
operating environment meets the required specifications for:
Floor strength
Space
Power
Electrical grounding
Temperature
Airflow
Given the trend toward increased density, data centers also should be designed for scalability
and upgrades. Plans for the data center should be based on a modular design that provides
sufficient headroom for increasing power and cooling needs. A modular design provides the
flexibility to scale capacity in the future when planned and unplanned changes become
necessary.
HP provides a Site Installation Preparation Utility to assist you in approximating the power and
heat load per rack for facilities planning. The Site Installation Preparation Utility is a Microsoft
Excel spreadsheet that uses individual platform power calculators and enables you to calculate
the full environmental impact of racks with varying configurations and loads.
Note: The Site Installation Preparation Utility can be downloaded from:
http://h18001.www1.hp.com/partners/microsoft/utilities/power.html
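A minimal sketch of the kind of estimate the utility produces, converting total rack wattage to heat load. The per-blade wattages here are hypothetical; the real utility draws on HP's individual platform power calculators:

```python
# Minimal sketch of a rack power and heat-load estimate, in the spirit
# of the Site Installation Preparation Utility. Per-platform wattages
# are hypothetical illustration values only.
BTU_PER_WATT = 3.412  # 1W of load dissipates about 3.412 BTU/hr of heat

def rack_load(platform_watts: dict, counts: dict):
    """Return (total watts, heat load in BTU/hr) for a rack's contents."""
    watts = sum(platform_watts[name] * n for name, n in counts.items())
    return watts, watts * BTU_PER_WATT

watts, btu_hr = rack_load(
    {"BL20p G3": 400.0, "BL30p": 250.0},  # hypothetical per-blade draw
    {"BL20p G3": 8, "BL30p": 16},
)
print(watts, round(btu_hr))  # 7200.0 24566
```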
Environmental planning
Power consumption and heat load requirements
High-line AC power consumption
208V, three-phase
120V to neutral
Two 30-amp 208/230 circuits per rack (present)
Two 50-amp circuits per rack (future)
Environmental planning
A common and economical method of supplying power to high-density data centers is to use a
208V three-phase system known as high-line AC power. HP BladeSystems require high-line
AC power with 208V between any two transformer windings, giving 120V to neutral.
In new data centers, HP recommends providing two 30-amp 208/230V three-phase circuits per
rack. Future high-density server environments might require up to two 50-amp circuits per
rack.
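The 208V phase-to-phase and 120V-to-neutral figures are related by the square root of three, as this short check shows:

```python
import math

# In a balanced three-phase system, line-to-neutral voltage equals
# line-to-line voltage divided by sqrt(3): 208V / sqrt(3) is about 120V.
v_line_to_line = 208.0
v_to_neutral = v_line_to_line / math.sqrt(3)
print(round(v_to_neutral))  # 120
```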
HP recommendations
Use front-to-back ambient air for cooling
Front and rear rack doors must be adequately ventilated
Cover all gaps in the rack and open bays in the server blade
enclosure with blanking panels
Observe spatial requirements when installing racks
HP recommendations
ProLiant BL p-Class server blades use front-to-back ambient air for cooling. Therefore, the
front rack door must be adequately ventilated to allow ambient room air to enter the cabinet,
and the rear door must be adequately ventilated to allow the warm air to escape from the
cabinet.
When any vertical space in the rack is not filled by server blades or rack components, the gaps
between the components cause changes in airflow through the rack and across the server
blades. Cover all gaps in the rack with blanking panels and all open bays in the server blade
enclosure with blanks to maintain proper airflow.
HP 10000 and Compaq 9000 Series racks provide proper server blade cooling through flow-through perforations in the front and rear doors that provide 65% open area for ventilation.
Total floor space
To enable servicing and adequate airflow, observe the following spatial requirements when
deciding where to install an HP, Compaq, Telco, or third-party rack:
Leave a minimum clearance of 63.5cm (25 inches) in front of the rack.
Leave a minimum clearance of 76.2cm (30 inches) in the back of the rack.
Leave a minimum clearance of 121.9cm (48 inches) from the back of the rack to the rear of
another rack or row of racks.
For more information, refer to the HP ProLiant BL System Common Procedures Guide and HP
ProLiant BL System Best Practices Guide available from:
http://www.hp.com/products/servers/proliant-bl/p-class/info
(Figure: data center layout for 30kW racks, with rows of A/C units and PDUs.)
HP recommendations
Perform room-level and local
area energy and airflow balance
Avoid local hot spots and high
airflow demand
Optimize hot and cold air
separation
Follow HP rack installation
guidelines
Learning check
1. Name five operating environment attributes that must meet specifications in site planning.
_________________________________________________________________________
_________________________________________________________________________
________________________________________________________________
2. What tool should you use to review the server loading and identify the number of power
supplies required for redundancy?
____________________________________________________________________
3. What should you do before every server blade deployment to ensure that the data center
will have proper heating and cooling?
____________________________________________________________________
4. What is the HP recommendation for data centers with power densities greater than 10kW per cabinet?
____________________________________________________________________
Objectives
After completing this lab, you should be able to:
Requirements
To complete this lab, you need:
Overview
The ProLiant BL p-Class Sizing Utility is an Excel-based tool that calculates the power load of a configured server. From this information, the sizing utility determines the number of power supplies required for a given configuration. The sizing utility also approximates the electrical and heat load per server for facilities planning. It provides data on:
Power
Cooling
Weight
Configuration
1.
2.
The buttons across the top of the sizing tool are as follows:
Configurator: Returns you to the initial (home) screen of the sizing utility.
Power Summary: Displays the power summary for the configured rack.
Equipment List: Displays the equipment list for the configured rack, including the part descriptions, quantity, and part numbers.
Rack & Power: Displays the configuration screen for power selection, such as input voltage, A/C line input phases, A/C redundancy, and power enclosure type.
1.
Click one of the Enclosure x buttons, for example Enclosure 6, to display the
blade enclosure configuration section.
2.
3.
Verify that the correct interconnect option displays on both sides of the blade
enclosure, as shown in the following graphic.
The left column of the blade enclosure section provides a configuration legend for
server blade type, SKU number, processor configuration, disk drives, type of
mezzanine card, and memory configuration. A list of preconfigured SKUs can be
accessed by clicking the SKU # link. For reference, the SKU list and description is
provided in the following graphic.
1.
# of Processors: 2
Memory 2GB: 4
2.
The Bottom Blades section is unusable because you have selected a full-height server blade (BL20p G2).
The remaining bays are configured with the identical server blade
configuration because you previously selected the Make all Bays same
as Bay 1 option.
Click the Clear Enclosure 6 button to clear enclosure 6 and configure the
blade enclosures as listed in the following table. If the table does not specify
a configuration option, make your own selection.
Enclosure
number
6 (top)
5
Enclosure configuration
Empty
C-GbE2 with storage
connectivity
All blades are the same
Enhanced blade enclosure
with 8 ProLiant Essentials
Rapid Deployment Pack
(RDP) licenses
1 (bottom)
C-GbE2
Configure individual blades
Enhanced blade enclosure
C-GbE
All blades are the same
Enhanced blade enclosure
Bay 1
BL20p
SKU number 2
Bay 2
BL20p G2
SKU number 10
Bay 3 top blade
BL30p
SKU number 2
Bay 3 bottom blade
BL30p
SKU number 2
Bay 4 top blade
BL30p
SKU number 4
Bay 4 bottom blade
BL30p
SKU number 4
Bay 5
BL40p
SKU number 6
Bay 1
BL20p G2
SKU number 14
Bay 1
BL40p
SKU number 2
Bay 5
BL20p
SKU number 2
Bay 6
BL20p G2
SKU number 5
Bay 7
BL20p
SKU number 2
Bay 8
BL20p G2
SKU number 6
Bay 1
BL20p
SKU number 2
A/C redundancy
You can choose from two power enclosure models, depending on the number of
A/C line input phases at your facility:
The single phase power enclosure holds a maximum of four hot-plug power
supplies.
All power-related options are available in the Rack & Power section, accessible
with the Rack & Power button near the top of the sizing utility screen.
1.
Click the Rack & Power button near the top of the sizing utility screen.
2.
At the Rack & Power section, you have the option of selecting:
Type of rack to host the blade enclosures, server blades, and the power enclosures
Input voltage
A/C redundancy
Power enclosure
Power supply
Depending on your selection, the sizing utility displays warning and error
messages to reflect configuration pitfalls such as no power redundancy.
The sizing utility also graphically displays the number of power enclosures
and power supplies, as shown in the preceding graphic.
Change the power selection options and observe the impact of your choices.
3.
Click the Power Summary button at the top of the sizing utility screen to
display power information for the server blade rack, including:
Total system VA
System weight
Reset the sizing utility, and configure the blade enclosures as listed in the
following table. This configuration represents the maximum theoretical
density using ProLiant BL30p server blades and six blade enclosures.
Enclosure
number
Enclosure configuration
6 (top)
1 (bottom)
2.
Click the Rack & Power button near the top of the sizing utility screen.
3.
4.
AC Redundancy: Redundant
5.
6.
What must you do to resolve this situation and maximize a 42U rack? If
necessary, review the ProLiant BL p-Class Server Blades and Infrastructure
module, or discuss this situation with your instructor.
............................................................................................................................
............................................................................................................................
............................................................................................................................
............................................................................................................................
............................................................................................................................
7.
Sketch what the 42U rack maximized with ProLiant BL30p server blades
would look like, using redundant power and mini bus bars.
8.
The maximum number of ProLiant BL30p blades that fit into a 42U
rack in a redundant power configuration is:
...................................................................................................................
Objectives
After completing this lab, you should be able to:
Requirements
To complete this lab, you will need:
One or more ProLiant server blades such as the ProLiant BL20p Generation 2
(G2) or later
Introduction
The HP BladeSystem installation consists of these steps:
1.
2.
Installing the power enclosure and the server blade enclosure in the appropriate rack
3.
4.
5.
6.
7.
a.
b.
c.
d.
e.
f.
g.
h.
i.
1.
2.
3.
4.
c.
d.
Which NIC column represents NIC 2 for the installed blade servers?
a.
b.
c.
d.
5.
6.
7.
Which NIC column represents NIC 3 for the installed blade servers?
a.
b.
c.
d.
Which NIC column represents the iLO NICs for installed server blades?
a.
b.
c.
d.
Explain how the Fibre Channel signals are routed on the RJ-45 Patch Panel 2.
............................................................................................................................
............................................................................................................................
............................................................................................................................
The following figure represents the rear view of a different server blade enclosure.
1.
2.
3.
Match the following description with the correct callout number on the
preceding graphic.
a.
External 10/100/1000BaseT
Ethernet ports for Side A
..........
b.
External 10/100/1000BaseT
Ethernet ports for Side B
..........
c.
..........
d.
..........
e.
Signal backplane
..........
f.
..........
g.
Power backplane
..........
a.
..........
b.
..........
c.
d.
..........
e.
..........
f.
..........
g.
Two local-access
10/100/1000BaseT Ethernet
switch ports
..........
1.
b.
c.
d.
e.
Power enclosure
..........
f.
..........
g.
..........
h.
..........
..........
2.
i.
..........
j.
iLO port
..........
k.
..........
l.
..........
m.
n.
..........
o.
..........
p.
q.
..........
3.
4.
When is it necessary to use the load-balancing signal cable and how is this
cable used?
............................................................................................................................
............................................................................................................................
............................................................................................................................
5.
..........
b.
Grounding cable
..........
c.
..........
d.
..........
e.
..........
f.
g.
Indicate where two power supplies would be placed for nonredundant, single-phase power to a server blade enclosure (not enhanced). Also indicate whether power blanking panels would be used and where.
Power enclosure
1
2.
Indicate where four power supplies would be placed for redundant, single-phase power to a server blade enclosure (not enhanced). Also indicate whether power blanking panels would be used and where.
Power enclosure
1
3.
Indicate where two power supplies would be placed for nonredundant, three-phase power to a server blade enclosure (not enhanced). Also indicate whether power blanking panels would be used and where.
Power enclosure
1
4.
Indicate where four power supplies would be placed for redundant, three-phase power to a server blade enclosure (not enhanced). Also indicate whether power blanking panels would be used and where.
Power enclosure
1
5.
Indicate where two power supplies would be placed for nonredundant, single-phase power to an enhanced server blade enclosure. Also indicate whether power blanking panels would be used and where.
Power enclosure
1
6.
Indicate where four power supplies would be placed for redundant, single-phase power to an enhanced server blade enclosure. Also indicate whether power blanking panels would be used and where.
Power enclosure
7.
Indicate where two power supplies would be placed for nonredundant, three-phase power to an enhanced server blade enclosure. Also indicate whether power blanking panels would be used and where.
Power enclosure
1
8.
Indicate where four power supplies would be placed for redundant, three-phase power to an enhanced server blade enclosure. Also indicate whether power blanking panels would be used and where.
Power enclosure
Explain how the RJ-45 Patch Panel 2 connects the server blades to an
external SAN-based storage device such as the HP StorageWorks Modular
Smart Array 1000 (MSA1000).
............................................................................................................................
............................................................................................................................
............................................................................................................................
2.
You are to connect a server blade enclosure with eight ProLiant BL20p G2
server blades to external Ethernet switches and use all network interface
controller ports and iLO ports. How many Ethernet cables will run from the
server blade enclosure?
............................................................................................................................
What is required for the GbE2 Interconnect Switch to connect the server
blades to an external SAN-based storage device such as the MSA1000?
............................................................................................................................
............................................................................................................................
............................................................................................................................
2.
The following figure represents the GbE2 Interconnect Switch ports. Label
each port and explain its functionality.
(Figure: GbE2 Interconnect Switch chassis, rear connections and front panel ports 1 through 24.)
Management modules are used only for information management such as asset
tracking. Disconnecting the management module cabling does not affect system
operation.
On the server blade management module:
1.
2.
3.
Explain the difference between the previous two server blade enclosures.
............................................................................................................................
............................................................................................................................
............................................................................................................................
1.
Important
If the load-balancing signal cable is not installed, the management software
issues alerts.
Blade server enclosure iLO connections are routed through side B of the
blade server enclosure.
If using the RJ-45 Patch Panel, iLO connections for each installed
server blade are located in the right RJ-45 column of the patch panel
side B.
If using the GbE2 interconnects, the iLO connections are routed to the
side B interconnect switch, and their exact external location depends on
the GbE2 switch configuration.
In the following figure, indicate the type of blade server enclosure and
interconnect option used, and draw the correct cabling of iLO connections to
the management network.
............................................................................................................................
Management network
2.
In the following figure, indicate the type of blade server enclosure and
interconnect option used, and draw the correct cabling of iLO connections to
the management network.
............................................................................................................................
Management network
3.
In the following figure, indicate the type of blade server enclosure and
interconnect option used, and draw the correct cabling of iLO connections to
the management network.
............................................................................................................................
Management network
WARNING
Ensure that all power enclosure, bus bar, and power bus box circuit
breakers are locked in the off position before connecting any power
components.
1.
2.
3.
4.
Ensure that the hot-plug power supply LEDs, power enclosure DC power
LEDs, and bus bar power LEDs are green.
Note
Refer to Appendix D LEDs, Buttons, and Switches of the HP ProLiant
p-Class Server Blade Enclosure Installation Guide.
5.
Unlock the circuit breaker switches on the bus bars or power bus boxes and
toggle the switches to the on position. This applies DC power to the server
blade enclosures.
6.
Ensure that the server blade enclosure DC power LEDs are green.
7.
Lock all the circuit breaker switches in the on position. This prevents anyone
from accidentally powering down the system.
ProLiant BL p-Class
Network Connectivity
Options
Module 4
Objectives
Discuss general networking concepts
VLAN
STP
Port trunking, load balancing, and teaming
Objectives
After completing this module, you should be able to:
Discuss general networking concepts, including:
Virtual LAN (VLAN)
Spanning Tree Protocol (STP)
Port trunking and load balancing
Discuss ProLiant BL p-Class server blade signal routing
Identify the available ProLiant BL p-Class interconnect options
Choose the appropriate interconnect options for HP BladeSystem servers
Describe GbE Interconnect Switch best practices
IEEE standards
VLAN 802.1Q
STP 802.1D
Trunking 802.3ad (static mode only)
VLANs
The GbE2 Interconnect Switch provides support for 255
VLANs
VLAN 1 enables communication between all server ports
and uplink ports on the interconnect switches
More VLANs means more GbE2 Interconnect Switch
processor utilization
VLANs
A virtual LAN (VLAN) is a network topology configured according to a logical scheme rather than a physical layout. It logically segments the network into broadcast domains, which conserves bandwidth and improves security by confining broadcast traffic to specific domains.
The ProLiant BL p-Class GbE2 Interconnect Switch supports a total of 255 IEEE 802.1Q VLANs.
Interconnect switches are shipped from the factory with all the ports set on VLAN 1, the default
VLAN. The default VLAN enables communication between all server ports and uplink ports on the
interconnect switches. Connectivity is provided to each server blade when it is inserted into the
enclosure and powered on.
The greater the number of VLANs, the greater the GbE2 Interconnect Switch processor utilization.
For maximum interconnect switch performance, be judicious when configuring the number of
VLANs. For example, you might want to isolate the server blade integrated Lights-Out (iLO) ports
from the rest of the NICs by assigning the iLO ports on each interconnect switch to their
own VLAN.
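Isolating the iLO ports on their own VLAN, as suggested above, might look like the following sketch. The syntax is illustrative only and is not the actual GbE2 Interconnect Switch command set; the VLAN ID and port numbers are arbitrary:

```
! Illustrative switch configuration only; the actual GbE2
! Interconnect Switch CLI differs, so consult its user guide.
! Goal: move the iLO-facing server ports off default VLAN 1
! into a dedicated management VLAN.
vlan 100
  name iLO-mgmt
interface port 17
  access vlan 100
interface port 18
  access vlan 100
```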
(Figure: server blades Blade 1 and Blade 2, each connected to interconnect Switch A and Switch B, which uplink to the LAN.)
STP
Rev. 4.41
HP Restricted
4 160
STP
Supported on most bridges and switches, STP is a reliable method for providing path
redundancy and eliminating loops in bridged networks. Loops form never-ending data paths
that result in excessive system overhead. STP enables you to block the links that form loops
between switches in a network.
When multiple data paths exist, STP forces the redundant paths into a standby (blocked) state.
STP configures the network so that a switch uses only the most efficient path. If that path fails,
STP automatically sets up another active path on the network to sustain network operations.
STP supports, preserves, and maintains the quality of bridged LAN or media access control
(MAC) service. If a link is lost or the topology has changed, STP requires only 30 to 60
seconds to detect the changes and reconfigure.
STP communicates between switches on the network using Bridge Protocol Data Units
(BPDUs). Each BPDU contains the following information:
The unique identifier of the switch that the transmitting switch currently believes is the root
switch
The path cost to the root from the transmitting port
The port identifier of the transmitting port
The communication between switches through BPDUs results in the following:
One switch is elected as the root switch.
The shortest distance to the root switch is calculated for each switch.
A designated switch is selected. This is the switch closest to the root switch through which
packets will be forwarded to the root.
A port for each switch is selected. This is the port providing the best path from the switch
to the root switch.
Ports included in the STP are selected.
Enabling STP
Enabled on ProLiant BL p-Class switches
One spanning tree domain per interconnect switch is supported
Multiple spanning trees provide multiple data paths
To enable multiple spanning trees:
Block loops at the VLAN level instead of the port level
Allow for a separate spanning tree per VLAN
Comply with IEEE specification 802.1s
Enabling STP
STP can be enabled or disabled at the switch level. By default, STP is enabled on ProLiant BL
p-Class switches. Disabling it can negatively affect the function and performance of the
ProLiant BL p-Class switches as well as the switches and traffic on the rest of the network.
Only one spanning tree domain per interconnect switch is supported. You can configure ports
to participate in that spanning tree domain by enabling or disabling STP on a per port basis.
Multiple spanning trees
Multiple spanning tree groups (STGs) provide multiple data paths, which can be used for load
balancing and redundancy. You can enable independent links on two interconnect switches
using multiple STGs by configuring each path with a different VLAN and then assigning each
VLAN to a separate STG.
Each STG is independent and must be independently configured. Each STG sends its own
BPDUs.
The STG forms a loop-free topology that includes one or more VLANs. The switch supports 16
STGs running simultaneously. The default STG 1 may contain an unlimited number of
VLANs. All other STGs (2-16) may contain one VLAN each.
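The STG capacity rules above can be expressed as a small validity check. The following Python sketch is illustrative only (it is not switch firmware logic): STG 1 may hold any number of VLANs, while STGs 2 through 16 may each hold one.

```python
# Sketch of the GbE2 STG assignment rules described above (illustrative).
MAX_STGS = 16

def valid_assignment(vlan_to_stg):
    """vlan_to_stg maps VLAN id -> STG id (1-16). Return True if valid."""
    counts = {}
    for stg in vlan_to_stg.values():
        if not 1 <= stg <= MAX_STGS:
            return False
        counts[stg] = counts.get(stg, 0) + 1
    # STG 1 is unlimited; every other STG may contain only one VLAN
    return all(stg == 1 or n == 1 for stg, n in counts.items())

print(valid_assignment({10: 1, 20: 1, 30: 2}))  # True
print(valid_assignment({10: 2, 20: 2}))         # False: STG 2 holds two VLANs
```

A check like this makes the practical consequence clear: to run more than one VLAN per spanning tree beyond the default group, those VLANs must all live in STG 1.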
To enable multiple spanning trees on ProLiant BL p-Class switches:
Block loops at the VLAN level instead of the port level
Allow for a separate spanning tree for each VLAN
Comply with IEEE specification 802.1s (extension to 802.1D)
Important! The ProLiant BL p-Class GbE Interconnect Switch supports mono-STP. Multiple
spanning tree domains are not supported. This means the Spanning Tree Algorithm makes
calculations without considering the VLAN domains to which the ports belong. All ports that
have STP enabled fall under one STP domain.
Configuring STP
[Diagram: Switch A (root = 01A, cost = 0, transmitter = 01A) sends BPDUs to Switches B, C, and D; Switch B relays (root = 01A, cost = 1, transmitter = 10B); the redundant links at Switches C and D are blocked]
Configuring STP
The key configuration parameters of STP are normally set on switches upstream in the network
(top of the tree). The switches downstream must support STP and have it enabled.
With STP:
Set the Switch A bridge priority to 01 and Switch B bridge priority to 10.
Switch A broadcasts the BPDU packet.
Switches B, C, and D determine the lowest-cost path.
Because the path cost is 1 and the packet was transmitted by the designated
root, the link is not blocked.
Switch B rebroadcasts the BPDU packet out of its remaining ports.
Switches C and D block their links to Switch B, which makes them standby links.
The link between Switch C and D is subsequently blocked following the rebroadcast of the
BPDU packet from those switches.
All that remains after the propagation of the BPDU packet are the primary links, which are the
direct links with the lowest cost path from switches B, C, and D to Switch A.
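The election and blocking sequence above can be sketched in Python. This is a simplified illustration, not the actual STP implementation; the switch names, priorities, and path costs are hypothetical values matching the example.

```python
# Simplified sketch of STP root election and path selection (illustrative).
# Bridge IDs combine priority and MAC address; the lowest value wins.
bridges = {"A": (0x01, "00:00:00:00:00:0a"),
           "B": (0x10, "00:00:00:00:00:0b"),
           "C": (0x20, "00:00:00:00:00:0c"),
           "D": (0x20, "00:00:00:00:00:0d")}

# Hypothetical path costs from each non-root switch toward Switch A
path_cost_to_root = {"B": {"direct": 1, "via C": 2},
                     "C": {"direct": 1, "via B": 2},
                     "D": {"direct": 1, "via B": 2}}

# Lowest (priority, MAC) tuple is elected root
root = min(bridges, key=lambda name: bridges[name])
print(root)  # Switch A is elected root: it has the lowest priority value

for switch, paths in sorted(path_cost_to_root.items()):
    best = min(paths, key=paths.get)          # lowest-cost path keeps forwarding
    blocked = [p for p in paths if p != best]  # redundant paths go to standby
    print(switch, "forwards on", best, "and blocks", blocked)
```

As in the example, each of B, C, and D keeps its direct link to the root forwarding and places the higher-cost path into the blocked (standby) state.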
Important! Several customer advisories exist to help avoid configuration issues:
ProLiant BL C-GbE and F-GbE Interconnect Switches support mono-STP (IEEE 802.1D),
but multiple spanning tree domains (IEEE 802.1s) are not supported.
(PSD_EB020927_CW01)
ProLiant BL p-Class GbE2 Interconnect Switch STP configuration may cause ProLiant
Essentials Rapid Deployment Pack (RDP) jobs to generate PXE-E51 message.
(PSD_EB040310_CW04)
Third-party switches with STP disabled may prevent C-GbE Interconnect Switches from
identifying data loops.(PSD_EB021010_CW01)
For more information regarding support documentation for HP BladeSystems, refer to:
http://welcome.hp.com/country/us/en/support.html
STP settings
[Screen shot: STP bridge settings, showing the Bridge Priority and MAC Address fields]
STP settings
Bridging and port settings can be configured globally on the interconnect switch.
The bridge ID is determined by the bridge priority, followed by the MAC address of the switch.
The switch on the network with the lowest bridge ID is the designated root, which is the switch
to which all broadcasts from lower switches are forwarded.
Any switches lower in the tree will receive global and per port STP parameter information
from the designated root. These parameters (Max Age, Hello Time, and Forward Delay) are
used to determine the most efficient path through the network to the designated root.
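The bridge ID ordering described above (priority first, then MAC address) behaves like a tuple comparison. The following Python sketch is illustrative only; the priority and MAC values are hypothetical.

```python
# Bridge ID comparison sketch: priority first, then MAC address (illustrative).
def bridge_id(priority, mac):
    # Lower tuples sort first, so the lowest bridge ID wins root election.
    return (priority, tuple(int(octet, 16) for octet in mac.split(":")))

a = bridge_id(0x8000, "00:01:02:aa:bb:01")
b = bridge_id(0x7000, "ff:ff:ff:ff:ff:ff")
print(b < a)  # True: a lower priority wins even against a higher MAC address
```

This is why administrators lower the bridge priority on the switch they want elected as root: the MAC address only breaks ties between equal priorities.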
[Diagram: the Integrated Administrator NIC connects to Switch A and Switch B through Crosslink 1 and Crosslink 2]
Load balancing
A port failure within the group causes the network traffic to be
directed to the remaining links in the group
Load balancing is maintained whenever a link in a trunk is lost
or returned to service
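A common way to spread traffic across trunked links is to hash on frame addresses. The Python sketch below is illustrative only (it is not the GbE2's actual distribution algorithm, and the link and MAC names are hypothetical); it shows how flows redistribute onto the remaining links when one fails.

```python
# Illustrative hash-based trunk load balancing (not the GbE2 algorithm).
def pick_link(src_mac, links):
    """Choose a trunk link for a flow by hashing the source MAC address."""
    return links[hash(src_mac) % len(links)]

links = ["17", "18"]                      # hypothetical trunked crosslink ports
flows = ["mac-%d" % i for i in range(6)]  # hypothetical source MACs
before = {f: pick_link(f, links) for f in flows}

links.remove("18")  # simulate a link failure within the trunk group
after = {f: pick_link(f, links) for f in flows}
print(all(link == "17" for link in after.values()))  # True: traffic moves to the survivor
```

Because the link is chosen per flow rather than per trunk, losing or restoring a member link simply changes the set of links the hash maps onto, which is why balancing is maintained automatically.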
[Diagram: VLAN 1 and VLAN 2 spanning three switches; ports a through f are grouped by VLAN membership]
[Diagram: STP prevents loops by blocking the redundant VLAN 2 link between the Red VLAN switch and the Green VLAN switch, while VLAN 1 remains forwarding]
[Table: NIC and iLO port counts for enclosure configurations, including 16 BL30p blades in an enhanced enclosure (dedicated iLO)]
[Diagram: server blade signal routing. NC7781 10/100/1000T NICs, the integrated iLO 10/100 port, and Fibre Channel signals are routed to Interconnect Bay A and Interconnect Bay B]
[Diagram: server blade signal routing. Eight NC7781 10/100/1000T NICs, the integrated iLO 10/100 port, and Fibre Channel signals are routed to Interconnect Bay A and Interconnect Bay B]
[Diagram: server blade signal routing. NC7781 10/100/1000T NICs, iLO 10/100 ports, and multiple Fibre Channel ports are routed to Interconnect Bay A and Interconnect Bay B]
[Diagram: server blade signal routing. Four NC7781 10/100/1000T NICs and the integrated iLO 10/100T port are routed to Interconnect Bay A and Interconnect Bay B]
[Diagram: original enclosure, Side A and Side B. Server 1 through Server 8 NICs are routed to interconnects A1, A2, B1, and B2]
Only one NIC at a time may be enabled for Preboot eXecution Environment (PXE). A NIC on
each server is pre-selected as the default PXE NIC. This results in all the PXE-enabled NICs
being routed to the same interconnect. However, you can use the ROM-Based Setup Utility
(RBSU) to designate any NIC to be the default PXE NIC. Thus, system availability can be
enhanced by selecting PXE-enabled NICs that are routed to different interconnect blades.
[Diagram: Side A and Side B NIC routing for Server 1 through Server 16; interconnects B1, B2, A1, and A2 are marked N/A for the original enclosure]
[Diagram: Side A and Side B NIC routing for Server 1 and Server 2 to interconnects A1, A2, A3, B2, and B3; B1 is not used in this enclosure]
[Diagram: GbE2 interconnect port map. Eight BL20p server blades (BL20p 1 through BL20p 8), each with data NICs and an iLO NIC, connect through downlink ports 1-16 to Switch A and Switch B; ports 17 and 18 are crosslink ports, and ports 19-24 are uplink ports]
Port speeds by interconnect model:
Ports            C-GbE           F-GbE       C-GbE2          F-GbE2
19, 20           10/100T         10/100T     10/100/1000T    1000SX
21, 22           10/100/1000T    1000SX      10/100/1000T    1000SX
23, 24           10/100T         10/100T     10/100/1000T    10/100/1000T
1-16 (downlink)  10/100*         10/100*     10/100/1000**   10/100/1000**
*This is the default PXE NIC. You can use the ROM setup utility to make any other data NIC PXE-enabled.
[Diagram: GbE2 interconnect port map for eight BL20p server blades, data NICs only, with the same downlink (1-16), crosslink (17-18), and uplink (19-24) port layout and the default PXE NIC footnote]
[Diagram: GbE2 interconnect port map for 16 BL30p server blades connected to Switch A and Switch B through downlink ports 1-16, crosslink ports 17-18, and uplink ports 19-24, with the default PXE NIC footnote]
[Diagram: GbE2 interconnect port map for two BL40p server blades (BL40p 1 and BL40p 2), including iLO ports, connected to Switch A and Switch B through downlink ports 1-16, crosslink ports 17-18, and uplink ports 19-24]
[Diagram: GbE2 interconnect port map for two BL40p server blades, data NICs only, with the same downlink, crosslink, and uplink port layout]
LAN signals exit the lower modules
Interconnect modules are inserted into the bottom left and bottom right module bays on the rear of the enclosure
The upper module bays are reserved for Fibre Channel options
Switch access rights by user level:
Function             Root    User+       User
Configuration        Yes     Read-only   Read-only
Network Monitoring   Yes     Read-only   Read-only
System Utilities     Yes     Ping-only   Ping-only
Factory Reset        Yes     No          No
Reboot Switch        Yes     Yes         No
You can access the ProLiant BL p-Class GbE Interconnect Switch using the serial (DB-9)
management port.
Proprietary features
VLAN and STG configuration guidelines
Learning check
1. Name four functions of STP.
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
2. You must enable STP manually.
True
False
3. With __________ port trunking, two compatible devices can identify multiple ports, link
them together, and trunk the ports on both ends of the links.
a. Parallel
b. Dynamic
c. Redundant
d. Cisco
4. The __________ __________ is determined by the bridge priority, followed by the MAC
address of the switch.
5. Name the interconnect option that would be appropriate for an enterprise that needs
reduced cabling, but does not need a Fibre Channel pass-through for the ProLiant BL20p
G2 or gigabit speed support from the server blade.
_______________________________________________________________
6. What two methods does the GbE2 Interconnect Switch provide to interoperate with
PVST+?
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
7. What failures does the redundant architecture of the GbE Interconnect Switch protect
against?
_______________________________________________________________
_______________________________________________________________
8. Explain iLO aggregation in the new server blade enclosure.
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
________________________________________________________________
9. The iLO NIC is a 10/100 NIC on all ProLiant BL p-Class servers.
True
False
Configuring the
HP ProLiant BL GbE2 Interconnect Switch
Module 4 Lab 1
Objectives
After completing this lab, you should be able to:
Requirements
Depending on the hardware resources in the classroom, your instructor might give
you the option of completing a subset of exercises in this lab or the instructor
might demonstrate the GbE2 Interconnect Switch accessibility and configuration.
However, to complete all exercises, you will need:
A GbE2 Interconnect Switch Kit quick install card and user manuals if you
are not familiar with this hardware installation (optional)
Overview
Deploying an HP ProLiant BL GbE2 Interconnect Switch requires extensive
knowledge of the network on which the server blade system is being installed. For
example, design of the physical network, details of spanning tree settings, and
configuration of the local Virtual Local Area Networks (VLANs) could be critical
to successful implementation of the BL series switch on the network.
Because there are an infinite number of deployment scenarios, and because labs
specific to the spanning tree protocol (STP) and VLANs require additional
networking equipment that might not be readily available, this lab guide covers
procedures for local and remote access to the switch and a few key configuration
concepts.
Important
For all file and CD locations noted in this lab, use either a CD or a network
repository of source files as identified by your instructor. Verify all IP
addresses with your instructor before adding or modifying any IP addresses.
2.
Connect the switch to the TCP/IP network by plugging a network cable into
either of the RJ-45 ports on the front of the switch. The ports on the front are
labeled management ports but function like any other Ethernet port on the
switch.
Front ports
3.
Plug one end of a null modem cable into the console port on the front of the
switch. Plug the other end of the cable into a serial port on your Microsoft
Windows client computer.
Important
The bit rate must be set to 9600 baud because the switch will communicate
only at that speed through the console port.
2.
If the switch completes the power-on self-test (POST), you might have to
press Enter for the login prompt. Log in with the administrator credentials
provided by your instructor (or the documentation that shipped with the
network tray). By default, the password is admin. There is no user name
prompt with a console session.
3.
Navigate through the menus in the interface. Use the menu choices to access
and view the IP address of the switch. Record the sequence of commands you
use and the IP address of the switch.
Command sequence: ..........................................................................................
............................................................................................................................
............................................................................................................................
Switch IP address: ..............................................................................................
The switch IP address is an important setting because, in its default
configuration, the switch receives an IP address through BOOTP. Most
administrators set a static IP address for the switch for management purposes.
The switch can have up to 256 interfaces (or IP addresses) configured for
management purposes. By default, Interface 1 is the only one enabled, and it is set
to obtain an IP address from a BOOTP server. Subsequently configured interfaces
must be configured and then enabled before they can be used.
Example
1.
From the main menu of a console session, use the cfg ip if menu to
navigate to Interface 1.
The console interface can be navigated by entering one command at a time
and then pressing the Enter key to get to submenus. In some cases, you can
enter a command with subsequent subcommands on a single line. The
following graphics show two ways of navigating to the if menu for
Interface 1.
2.
Use the cfg ip if menu commands to enable Interface 2. Which command did
you use to accomplish this task?
............................................................................................................................
3.
What else did you have to do to get to the Interface 2 menu?
............................................................................................................................
4.
Because you will not be using Interface 2, use the dis command to disable
that interface.
5.
Use the diff command to view the pending configuration changes before
applying them. Use the apply command to apply any pending configuration
changes. These commands may be used from any menu.
Important
The apply command does not save the configuration. If the switch is disabled
or loses power before the configuration is saved, these changes will be lost.
The save command, used to save the configuration to memory in the switch, is
covered in the following exercise.
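The apply/save distinction can be modeled as three copies of the configuration: pending edits, the active (applied) copy in volatile memory, and the saved copy in flash. The Python sketch below is illustrative only, not the switch firmware.

```python
# Sketch of the pending/active/saved configuration model described above.
class SwitchConfig:
    def __init__(self):
        self.pending, self.active, self.flash = {}, {}, {}

    def set(self, key, value):
        self.pending[key] = value       # edits accumulate in the pending copy

    def apply(self):
        self.active.update(self.pending)  # 'apply' activates but does not save
        self.pending.clear()

    def save(self):
        self.flash = dict(self.active)    # 'save' writes the active copy to flash

    def power_cycle(self):
        self.active = dict(self.flash)    # unsaved changes are lost on reboot
        self.pending.clear()

sw = SwitchConfig()
sw.set("if2", "disabled")
sw.apply()           # the change is live...
sw.power_cycle()     # ...but lost, because it was never saved
print(sw.active)     # {}
```

Had save been called between apply and the power cycle, the change would have survived the reboot, which is exactly the behavior the warning above describes.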
Active: The current, or last applied, configuration for the switch (exists in
volatile, read/write memory). The switch loads this configuration by default
during POST.
Boot: The switch BIOS. This image boots the switch hardware.
Note
These images can be uploaded to and downloaded from a TFTP server by
using the ptimg and gtimg commands from the Boot Options menu. These
images also can be downloaded to the switch using XModem through a serial
connection by entering download mode during switch boot up. Refer to the
switch user guide for specific procedures for using XModem.
Important
Before you use the save command, ask your instructor if you should overwrite
the backup file. The instructor might have a reason for keeping the backup file.
Boot options
The Boot Options menu allows you to choose which configuration file to use on
the next boot and to choose the operating system image to boot to.
1.
From the main menu, use the boot command to enter the Boot Options
menu. Which command in this menu would you use to configure the switch
to boot from Image 2?
............................................................................................................................
2.
If your class has access to a TFTP server with a new software image file for
the switch, enter the gtimg command to download the new software image
to your switch.
3.
If you completed the previous step, what must you do to ensure you booted to
the new image? Be prepared to discuss your answer with the class.
............................................................................................................................
............................................................................................................................
............................................................................................................................
2.
At the login prompt, enter the administrator user name (admin) and password
provided by your instructor and click OK.
3.
Navigate through the menus in the web interface and notice how they change
when you click one of the three large buttons across the top of the page.
4.
Compare the procedure you used to find the switch IP address in the console
interface to the comparable procedure in the web-based interface. Document
the web-based procedure.
............................................................................................................................
............................................................................................................................
............................................................................................................................
............................................................................................................................
............................................................................................................................
............................................................................................................................
Note
Currently, the web-based interface for the GbE2 switch does not provide full
switch management capabilities. Several procedures require the use of the
console session. HP recommends that all switch administrators learn the
console interface for administration. Additional capabilities are planned for the
web interface in the near future.
1.
To view the current trunk settings, enter the trunk command in the
Configuration menu and press Enter.
2.
By default, only one trunk group is configured (Group 1). At the prompt
following the trunk command, enter 1 and press Enter. The Trunk Group 1
menu displays.
3.
Enter the cur command to view the current ports assigned to that trunk
group. If the switch is in its default configuration, ports 17 and 18 are in
trunk group 1.
4.
Starting from the current screen, what is the sequence of commands you
would use to create a second trunk group with two uplinks assigned?
............................................................................................................................
............................................................................................................................
............................................................................................................................
............................................................................................................................
............................................................................................................................
............................................................................................................................
5.
What final step would you have to complete to ensure the new trunk group is
available if the switch is powered off and moved to a different enclosure?
............................................................................................................................
Objectives
After completing this lab, you should be able to:
Apply the basic concepts of the Spanning Tree Protocol (STP) by using
VLANs in conjunction with STP
Requirements
Depending on the hardware resources in the classroom, your instructor might give
you the option of completing a subset of exercises in this lab. Or, the instructor
might demonstrate these concepts. However, to complete all exercises, you will
need:
A ProLiant BL p-Class server blade enclosure and power enclosure with the
appropriate power supplies and cabling
1.
Important
During this lab, be sure to save the switch settings often. Saving the settings is
different from applying any changes, which only applies to the current
working settings, not the saved configuration. To save the settings, use the
save command in the main menu of the command line interface. If you restart
the switch without saving the settings to flash memory, any unsaved
configuration changes will be lost.
2.
Blade 1 (Bay 1):
NIC                        IP Address       Switch Port
iLO NIC                    192.168.1.101    Switch B, port 1
Local Area Connection 1    192.168.1.11     Switch A, port 1
Local Area Connection 2    192.168.1.12     Switch A, port 2
Local Area Connection 3    192.168.1.13     Switch B, port 2
Blade 2 (Bay 2):
NIC                        IP Address       Switch Port
iLO NIC                    192.168.1.102    Switch B, port 3
Local Area Connection 1    192.168.1.21     Switch A, port 3
Local Area Connection 2    192.168.1.22     Switch A, port 4
Local Area Connection 3    192.168.1.23     Switch B, port 4
Note
If the iLO NIC is not already set to the address shown in the table, connect to
the iLO (at 192.168.1.1) with the diagnostic cable and set the iLO address
according to the table. To do this, you must put your management PC NIC on
the 192.168.1.0 network.
Configuring VLANs and STP with the ProLiant BL GbE2 Interconnect Switch
3.
Use a remote console session through iLO on one of the server blades and
issue a ping command between all the NICs you configured in the previous
step. The ping commands from any address on the 192.168.1.0 network to
any other address in the system should complete successfully.
Note
Notice that you can also issue the ping command from the server blade to the
management interface on the switch. The management interface for Switch A
is set to 192.168.1.10 and Switch B to 192.168.1.20.
4.
5.
Using an iLO console session to Blade 1, issue a ping command to the other
NICs in the system. Did you receive a reply from the NICs connected to
Switch B?
............................................................................................................................
Even though you have a NIC on Blade 1 connected to Switch B, you will not
receive a reply from the Blade 2 NICs on Switch B because of the way the IP
protocol works. The ping is only sent on one NIC, usually the first NIC
enumerated by the operating system. In the default configuration, that NIC is
the one named Local Area Connection (LAC in the following graphic).
If you disable LAC and LAC2, you can ping the NICs on Switch B because
the routing table on Blade 1 changes so that the ping request is sent out
through LAC3 (the only remaining data NIC).
Important
If you disable LAC and LAC2 to test the concept, you must also disable LAC2
and LAC3. Then enable LAC and repeat the ping request. This process resets
the routing table to its original configuration. Otherwise, some of the steps
later in this exercise will not work.
[Diagram: NIC port mappings. Blade 1 and Blade 2 connect to Switch A and Switch B; ports 17 and 18 are the crosslinks, and ports 20 and 23 connect outward]
2.
3.
To allow traffic between the switches, the crosslink ports must be included in
all VLANs. To add a port to more than one VLAN, you must first enable
tagging on that port. Use the configuration ports tag menu to enable
tagging on port 17 as shown in the following graphic.
4.
5.
6.
Creating VLAN 2
7.
8.
Enter the add command to add ports to the appropriate VLAN according to
the following table.
9.
Switch    VLAN    Ports
A         2       1, 3, 17, 18
A         3       2, 4, 17, 18
Enter the ena command to enable each VLAN after configuring the ports.
Note
When you add the untagged switch ports connected to the NICs on the server
blades, they are removed from VLAN 1. If you enabled tagging on these ports
and added them to more than one VLAN, you would have to also enable
tagging on the NICs in Windows. In its default configuration, Windows does
not interpret tagged frames and will drop any incoming tagged packets.
Similarly, a tagged port on the switch will drop untagged packets that are
coming in.
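The drop behavior in the note can be sketched as a simple acceptance rule. This Python fragment is illustrative only, not switch or Windows driver code, and it simplifies real 802.1Q port behavior to the two cases the note describes.

```python
# Sketch of the tagged/untagged frame handling described in the note above.
def accepts(port_expects_tagged, frame_is_tagged):
    """A tagged port drops untagged frames; an untagged endpoint drops tagged ones."""
    return port_expects_tagged == frame_is_tagged

print(accepts(True, True))    # tagged switch port, tagged frame: forwarded
print(accepts(False, True))   # default Windows NIC drops an incoming tagged frame
print(accepts(True, False))   # tagged switch port drops an incoming untagged frame
```

This is why tagging must be enabled on both ends of a link, or on neither: a mismatch silently discards traffic in one direction or the other.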
10. You have not segmented the first two data NICs for each blade onto separate
VLANs. To test this configuration, log in to Blade 1 and ping LAC on
Blade 2 (192.168.1.21). Ensure that you receive a reply because both ports
are in VLAN 2 and both are connected to Switch A.
[Diagram: VLAN assignments V1, V2, and V3 on Switch A and Switch B for Blade 1 and Blade 2, crosslinked through ports 17 and 18]
12. Complete the configuration for the system by adding VLAN 2 and VLAN 3
on Switch B and enabling tagging on ports 17 and 18 so they can be added to
more than one VLAN. The following table shows the VLANs and ports to be
added on Switch B.
Switch    VLAN    Ports
B         2       2, 4, 17, 18
B         3       17, 18
[Figure: VLAN configuration (left) and final configuration (right). Both switches now carry VLANs 1, 2, and 3, with Blade 1 and Blade 2 NICs distributed across them.]
13. To verify the VLAN configuration on each switch, enter the vlan command
in the Information menu. The current VLAN configuration displays.
14. Issue a ping command from Blade 1 to the ports in VLAN 2 on Switch B.
Was the ping command successful?
............................................................................................................................
15. Issue a ping command from your management PC to both switches and to the
iLO ports on Switch B. Were the ping commands successful?
............................................................................................................................
You now have three VLANs configured on the system:
The default VLAN, VLAN 1, is the management VLAN that provides the
uplinks to the management PC and the iLO NICs.
VLAN 2 includes the crosslink ports on each switch and two ports for data
NICs on each switch.
VLAN 3 includes the crosslink ports on each switch and two ports for data
NICs on Switch A.
If this were a real configuration, what other steps would need to be performed on
the switch uplink ports to connect them to the corporate LAN?
............................................................................................................................
............................................................................................................................
............................................................................................................................
To put the server blade system into production on your corporate LAN, you
add all four uplink ports on the back of Switch A to VLAN 2 and VLAN
3, and assign them to a trunk group to create a single 4Gb/s uplink. You do
the same on Switch B.
For security purposes, you take all of the uplink ports connected to the
corporate LAN out of VLAN 1 (your management VLAN). This leaves the
uplinks dedicated to data traffic.
For switch and iLO management, you connect one of the ports on the front of
Switch A to your management network and to your management PC.
[Figure: Production configuration. The uplinks on each switch carry VLANs 2 and 3 to the corporate LAN; a management (Mgmt) port on the front of Switch A connects VLAN 1 to the management network.]
2.
Use what you learned in the previous labs to perform the following steps on
Switch A:
a.
b.
c.
By default, the cost on all ports in the switch is set to a value of 4. Because
the uplinks are not connected to another switch upstream, you must manually
assign port costs so that the crosslink ports are a higher cost than the uplinks.
With the crosslinks set to a higher port cost, the spanning tree protocol will
block the crosslinks instead of the uplinks.
In a real network, the uplinks on each switch would be connected to another
switch upstream, which would be configured so that it becomes the root
switch. This additional factor in the spanning tree protocol topology would
cause the crosslinks to be blocked automatically.
To change the port cost on the crosslinks, go to the configuration stp menu
for group 1. Enter the port command as shown in the following graphic to
change the cost for port 18 to a value of 19. Do the same for port 17.
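As a sketch of this step (again assuming the menu-driven CLI pattern used in this lab; verify the exact prompts against the graphic), changing the crosslink port costs for Spanning Tree Group 1 might look like:

```
/cfg/stp 1/port 18  (STP settings for port 18 in group 1)
cost 19             (raise the path cost above the uplink cost of 4)
/cfg/stp 1/port 17
cost 19
apply               (apply the pending changes)
```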
3.
4.
b.
From your management PC, try to ping one of the iLO NICs. Did this
work? Why or why not?
............................................................................................................................
............................................................................................................................
The ping test may work the first time you try, but it will eventually stop
working because the spanning tree protocol will block the highest cost route
(the crosslink ports) to prevent a broadcast loop between the two switches.
You can speed up the test process by performing the ping test from the
management console interface for Switch A (use the same syntax as with a
command prompt in Windows).
You lose connectivity between the devices on VLAN 1 on each switch
because the crosslink was the only connection between the two switches that
was a member of VLAN 1, and it was just blocked by the spanning tree
protocol. The uplinks, which now form the only link between the two
switches, are not members of VLAN 1, so the ping from the management PC
(connected to Switch A) could not reach the iLO NIC through Switch B.
1.
2.
3.
4.
On Switch A, change the cost for ports 17 and 18 to 19 for Spanning Tree
Group 2 and Spanning Tree Group 3 (like you did for Spanning Tree Group 1
in the previous section). Be sure to apply all configuration changes on both
switches.
When these configuration changes take effect, the crosslink port cost will be
greater than the uplink port cost for all spanning tree groups. PVST+ will
block the crosslink ports for any VLAN traffic that is assigned to both the
crosslink ports and the uplink ports. It will not block the crosslink ports for
any VLAN traffic that is only assigned to the crosslink ports.
The crosslinks are blocked for VLAN 2 (Spanning Tree Group 2) and VLAN
3 (Spanning Tree Group 3). No ports are blocked for VLAN 1 (Spanning
Tree Group 1) traffic, because the crosslink is the only route available
between the switches for VLAN 1 traffic.
[Figure: Port costs after the change. Uplinks remain at Cost=4 while the crosslink ports on both switches are set to Cost=19, so PVST+ blocks the crosslinks for VLANs 2 and 3 but leaves them open for VLAN 1.]
5.
Test the configuration by performing the ping test from your management PC
(on a VLAN 1 port on Switch A) to an iLO interface (on a VLAN 1 port on
Switch B).
Note
It may take a few minutes for the spanning tree group topology to change and
for the crosslinks to be unblocked for VLAN 1 traffic.
Objectives
After completing this lab, you should be able to:
Access and configure the integrated Lights-Out (iLO) of your server blade
Requirements
To complete this lab, you will need:
One or more ProLiant server blades such as the ProLiant BL20p Generation 2
(G2)
You can access the server blade iLO using either of the following methods:
Accessing the server blade iLO with the diagnostic cable or the
local I/O cable
The default IP address for the iLO diagnostic port connection is 192.168.1.1 with a
subnet mask of 255.255.255.0. The Network Settings configuration page allows
you to change the IP configuration for the iLO diagnostic port if the default values
are not appropriate for your environment.
To access the server blade iLO with the diagnostic cable or with the local I/O
cable:
1.
Connect the diagnostic cable to the diagnostic port in the front of the server
blade. If using a server blade that requires the local I/O cable, connect the
local I/O cable to the server blade instead.
Note
The diagnostic cable is a Y-cable and incorporates a crossover so that a normal
RJ-45 cable can be used between the diagnostic cable and the management
station.
2.
Set your management station to the following IP address and subnet mask:
IP address: 192.168.1.200
Subnet mask: 255.255.255.0
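You can set this address from the Network Connections control panel, or, as a sketch from a command prompt on a Windows management station (this assumes the connection is named Local Area Connection; substitute the actual connection name on your system):

```
netsh interface ip set address "Local Area Connection" static 192.168.1.200 255.255.255.0
```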
3.
4.
5.
When the web browser successfully connects to the iLO, a security alert
displays. Click Yes to accept the certificate.
6.
At the Account Login screen, enter either the default login credentials
provided on the iLO Default Network Settings tag or the user name and
password provided by your instructor. Click Log In to proceed to the iLO
web-based interface.
1.
2.
When the web browser successfully connects to the iLO, a security alert
displays. Click Yes to accept the certificate.
3.
At the Account Login screen, enter either the default login credentials
provided on the iLO Default Network Settings tag or the user name and
password provided by your instructor. Click Log In to proceed to the iLO
web-based interface.
Configuring iLO
To configure the iLO:
1.
2.
Click the Administration tab and select Network Settings in the left menu.
This screen allows you to configure iLO for your environment.
3.
Enable DHCP: No
4.
5.
Click Apply to save the settings. iLO resets itself with the new settings and
redirects you back to the login page.
6.
Log in again, and at the iLO home page, click the BL p-Class tab.
7.
The Rack Settings page allows you to enter rack information, such as the rack
name, enclosure name, and bay name. You can also change the power on
control settings.
8.
Click the Rack Topology, Server Blade Mgt. Module, Power Mgt. Module,
and Redundant Power Mgt. Module options on the left side of the screen, and
familiarize yourself with the displayed information and settings.
9.
Click Log out in the top right corner of the screen to log out of iLO.
Password: ..................................................................................................
Power down the server blade and remove it from the server blade enclosure.
2.
3.
Use the inside cover legend to locate the server battery. Remove the battery,
wait a couple of minutes, and reinstall the battery.
4.
Reinstall the cover and reinsert the server blade into the server blade
enclosure. Apply power to the server blade enclosure if necessary.
5.
Connect the diagnostic cable to the diagnostic port in the front of the server
blade. If you are using a server blade that requires the local I/O cable,
connect the local I/O cable to the server blade instead.
6.
Set your management station to the following IP address and subnet mask:
IP address: 192.168.1.200
Subnet mask: 255.255.255.0
7.
8.
9.
At the iLO Account Login screen, enter Administrator in the Login Name
field and enter the password listed on the iLO Default Network Settings tag
in the Password field. Click Log In to continue.
10. The iLO configuration settings were restored to factory defaults when the
server battery was removed. After you log in to iLO, configure iLO as
desired.
11. Disconnect from the server blade when done and remove the diagnostic
cable.
Power down the server blade and remove it from the server blade enclosure.
2.
3.
Use the inside cover legend to locate the iLO Security Override switch. For
example, on the ProLiant BL20p G2 server blade, the iLO Security Override
switch is DIP switch 1 on the maintenance switch (SW4).
Note
You may need to temporarily remove the processor power module in slot 1 for
easier access to the switch.
4.
The switch is normally in the off position, which leaves iLO security
active. Toggle the DIP switch to its on position.
5.
Reinstall the cover and reinsert the server blade into the server blade
enclosure. Apply power to the server blade enclosure if necessary.
6.
Open Microsoft Internet Explorer and access the iLO of your blade server.
7.
The Account Login window still displays, but now contains an alert message,
Alert! iLO security override switch is set. Security enforcement is disabled.
Do not enter any information in the Login Name and Password text boxes;
instead, click Log In.
8.
At the iLO home page, select the Administration tab and configure users and
their passwords as desired.
9.
When done, log out of iLO and power down the server blade. Remove the
server blade and repeat steps 1 through 5 to toggle the DIP switch to its off
position and re-enable iLO security.
1.
Open Microsoft Internet Explorer and log in to the iLO of your blade server.
2.
At the iLO home page, click the Administration tab and select the Upgrade
iLO Firmware option on the left of the screen.
3.
At the Upgrade iLO Firmware screen, click Browse next to the New firmware
image text box and navigate to the iLO firmware image provided by your
instructor. Click Open.
4.
The iLO firmware image is listed in the New firmware image text box. Click
Send firmware image.
Important
Do not power cycle, click another link, or otherwise interrupt the firmware
upgrade while it is in progress.
5.
The firmware upgrade process takes less than two minutes. When completed,
iLO resets itself and redirects you back to the login page.
6.
The Status Summary screen displays the current iLO firmware version.
1.
Locate the ProLiant Firmware Maintenance Release 7.10 Server and Options
Firmware for ProLiant BL, ML, and DL 300, 500, and 700 Servers CD and
insert it into the management station CD-ROM drive.
2.
3.
Open Microsoft Internet Explorer and log in to the iLO of your server blade.
4.
5.
At the Virtual Media window, select D: from the Local CD-ROM Drive pull-down menu, and click Connect. The Virtual Media window becomes similar
to the following graphic. Minimize the window.
6.
At the iLO home page, click Remote Console > Remote Console (dual
cursor). The iLO Remote Console window opens.
7.
Toggle back to the iLO home page, click Virtual Devices > Virtual Power,
and power on your target server blade.
8.
Toggle to the iLO Remote Console window and observe the server blade
behavior. The server blade boots from the ProLiant Firmware Maintenance
CD.
9.
12. The ROM Update Utility scans the system and determines what firmware
should be upgraded. It presents this information and the estimated time on the
following screen.
13. Click Update Now to start the process. When done, exit the ROM Update
Utility and the Firmware Maintenance CD.
14. Close the iLO Remote Console session.
15. Toggle to the Virtual Media window, click Disconnect and close the window.
16. Log out of the iLO session and close the web browser window.
The HP BladeSystem is now ready for operating system deployment.
Deploying ProLiant BL p-Class Server Blades
Module 5
Objectives
Deploy an HP BladeSystem server using
RDP
iLO
Systems Insight Manager
Objectives
After completing this module, you should be able to:
Deploy HP BladeSystem servers using:
HP ProLiant Essentials Rapid Deployment Pack (RDP)
HP integrated Lights-Out (iLO)
HP Systems Insight Manager
Prepare a deployment server
Use RDP and iLO to manage an HP BladeSystem solution
[Figure: Deployment Solution architecture. The deployment server connects the client access point (file server), the deployment database, the Deployment Server console, DHCP and PXE services, and the managed computers.]
Deployment Server
Controls the flow of work and information between the managed and client computers and other components
Ensures that the computer management work is performed correctly
Supports only one Deployment Server instance per Deployment Solution
Can support multiple Deployment Solutions
Each Deployment Solution has its own Deployment Server
Deployment Server
The main component of the Deployment Solution, Deployment Server is a fast, easy, point-and-click solution for deploying servers using imaging or scripting. Deployment Server
controls the flow of work and information between the managed and client computers and the
other Deployment Solution components. The managed and client computers connect and
communicate with the deployment server to register information about themselves using the
Deployment Agent for Windows and Deployment Agent for DOS. This information is stored in
the database.
Communication between the Deployment Server console and the other components ensures
that all work needed to manage the computers is performed correctly. The managed and client
computers need access to the deployment server at all times. If a client or managed computer
cannot communicate with the deployment server, remote management of the client cannot
occur.
You can install only one Deployment Server instance per Deployment Solution. However, you
can install multiple Deployment Solutions, each with their own deployment server.
Note: Deployment Server is a software component of RDP. After you install it on a server, that
server is referred to as a deployment server.
With Deployment Server, labor-intensive script writing is not required to set up or manage HP
ProLiant servers. Pre-configured scripts enable quick and easy configuration. These scripts are
installed on the deployment server by the ProLiant Integration Module and enhance the native
ability of Deployment Server.
The intuitive click-and-drag wizards walk novices through the most common management
tasks (including installation). Advanced users can take advantage of powerful advanced
features and shortcuts.
Deployment Server runs as a Windows service.
ProLiant Integration Module for Deployment Server
Preconfiguration steps
Configure PXE
Create PXE boot images
Remotely install deployment agents
Enable Linux deployment
Preconfiguration steps
Before using RDP to deploy servers, you might need to make the following changes:
Configure PXE: Provides headless deployment by eliminating the need to select a menu
Create PXE boot images: Enables you to customize your environment
Remotely install deployment agents: Enables the management of existing Linux and Microsoft Windows systems
Enable Linux deployment: Creates a Network File System (NFS) share
To begin using RDP, connect the server blade enclosure to the network that contains the
deployment server and power on the enclosure. Then insert the server blades into the enclosure
and power them on. After the server blades PXE boot, they will display in the Deployment
Server console.
Important! If you plan to change the default rack and enclosure names, set these names before
the first server in an enclosure connects to the deployment server. After the server blades are
powered on for the first time and the rack and enclosure names are recorded in the Deployment
Server database, the server blades must be rebooted for new rack and enclosure names to be
discovered. For more information, refer to Configuring ProLiant BL Server Enclosures in the
HP ProLiant Essentials Rapid Deployment Pack Windows Edition Installation Guide.
Learning check
1. In what type of environment is RDP for Linux an ideal solution?
a. Homogeneous Windows environments
b. Heterogeneous HP-UX, Windows, and Linux environments
c. Homogeneous Linux environments
d. Heterogeneous Windows and Linux environments
2. Name the three fundamental steps in deploying servers.
_________________________________________________________________________
_________________________________________________________________________
__________________________________________________________
3. The __________ __________ for Linux __________ __________ provides the means to
view and deploy servers within your network.
4. Explain the rip and replace process.
_________________________________________________________________________
_________________________________________________________________________
_________________________________________________________________________
_____________________________________________________
5. RDP integrates with __________ __________ __________ to resolve server failures.
Objectives
After completing this lab, you should be able to:
Install the HP ProLiant Integration Module for the Deployment Solution 1.60
Requirements
To complete this lab, you will need:
Typical setup
Accept all other defaults on the screen. After the initial installation, apply the
Microsoft SQL Server 2000 Service Pack 3.
WARNING
Incorrectly editing the registry may severely damage your system. At the
very least, you should back up any valued data on the computer before
making changes to the registry.
1.
2.
In Registry Editor, navigate to the appropriate area of the registry and change
the enablesecuritysignature and requiresecuritysignature settings to 0.
3.
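The exact registry path is not reproduced in the step above. As a sketch, SMB-signing settings normally live under the LanmanWorkstation and LanmanServer service parameters, so the change could be captured in a .reg file like the following (confirm the path against your RDP documentation before merging it):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters]
"enablesecuritysignature"=dword:00000000
"requiresecuritysignature"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
"enablesecuritysignature"=dword:00000000
"requiresecuritysignature"=dword:00000000
```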
1.
Insert the RDP 1.60 CD into the CD-ROM drive of the designated
deployment server. If AutoRun is enabled, the Rapid Deployment Pack
installation utility runs automatically. If AutoRun is not enabled, double-click
the autorun.exe file in the CD root directory.
2.
When the software license agreement displays, click Agree to continue with
the installation.
3.
4.
5.
If you receive the following warning message, ensure that SMB signing is
disabled by completing the steps in the Disabling SMB signing section. Click OK to
continue.
6.
If you do not have an existing local database, use the Local Computer Install
Helper option to install MSDE 2000 and Service Pack 3a. At the Install
Configuration screen, click Local Computer Install Helper > Install. When
finished, reboot the server and repeat steps 1 through 5. Then continue with
step 7.
If you have a local database, proceed to the next step.
7.
At the Install Configuration screen, select the Simple Install option. Select the
Include PXE Server check box and click Install.
Note
The Simple Install option installs all the deployment server components on a
single machine.
8.
When the Altiris Software License Agreement displays, review it and then
click Yes to agree and continue the installation.
9.
If the system has multiple active NICs, a pop-up dialog box displays asking
you to specify which IP address to use. Select the NIC designated by your
instructor and click Select IP.
10. At the Deployment Server Client Access Point Information screen, complete
the fields as follows and click Next.
License: Select the Free 7 day license option, or select License file
and browse to the desired license file.
Service username: Accept the default user name (ensure that it has
administrator privileges). This name should be the administrator account
name.
11. At the Installation Information screen, click Install. The Deployment Solution
installation begins. During the installation, you are prompted for either a CD
or diskette from which to extract several DOS files. This step is required so
that the Altiris Boot Disk Creator utility will have the necessary files to make
DOS boot diskettes.
12. At the Configure Boot Disk Creator screen, select Use Windows 95/98
original CD-ROM, click Next, and then click Ignore.
13. At the next screen, browse to the C:\classfiles\win98 directory (this directory
contains a copy of a Microsoft Windows 98 CD), and then click OK.
Note
If you do not have the original Windows 95/98 CD, you can use a CD from the
Microsoft Developers Network package or a Windows 95/98 boot diskette.
14. Click Finish. The Boot Disk Creator copies the appropriate DOS files, and
the installation continues.
15. When the final screen prompts you to install clients remotely or to download
Adobe Acrobat, do not select either option. Click Finish.
1.
Insert the RDP 1.60 CD into the CD-ROM drive of the designated
deployment server. If AutoRun is enabled, the Rapid Deployment Pack
installation utility runs automatically. If AutoRun is not enabled, double-click
the autorun.exe file in the CD root directory.
2.
When the software license agreement displays, click Agree to continue with
the installation.
3.
4.
The software license agreement for the ProLiant Integration Module displays.
After reading the license agreement, select the check box to accept the
agreement and click Next.
5.
The Job Selection screen displays, enabling you to select which configuration
jobs you want to install. Select ProLiant BL20p Scripted Install for Microsoft
Windows 2000 and ProLiant BL20p Scripted Install for Microsoft Windows
2003. If using blade servers other than the ProLiant BL20p, change your
selection appropriately. Click Next to continue.
Note
The Windows and Linux scripted install jobs are not selected by default,
but the SmartStart Toolkit and OS Imaging Events and SmartStart
Toolkit Hardware Configuration Events are selected.
6.
When the Installation screen displays, click Next to start the installation.
7.
The installer copies all the necessary files from the RDP 1.60 CD and
prompts you for the appropriate Windows operating system CDs, which will
be provided by your instructor. Insert the appropriate operating system CD
and click Next to start copying the files.
8.
After all of the operating system installation files have been copied, the
Finish screen displays. You will perform some of the configuration steps listed
on the screen in the next exercise. Click Finish to close the screen.
9.
2.
3.
4.
If the deployment server already recognizes a given computer, and that computer
has no assigned tasks, the deployment server allows the computer to bypass the
PXE and perform a local boot.
If the computer has a configuration task assigned, it performs these tasks:
1.
Goes through the Managed Computer menu items and downloads a PXE boot
image.
2.
3.
2.
Important
Do not rearrange the order of the menu items. Changing the menu item order
can cause your target servers to fail to boot to local hard drives.
3.
4.
Select the Execute Immediately option to eliminate wait time, and click OK
in both windows to close them. The initial deployment will now run
automatically for every server not in the database.
2.
3.
4.
5.
Select the Synchronize display names with Windows computer names option.
6.
Change the primary lookup key to Serial Number (SMBIOS) and click OK.
7.
If you do not install AClient as part of a deployment process, you lose these
capabilities. Scripted installation jobs provided by RDP install the agent by default
on deployed servers. By preconfiguring the default settings, all agents installed as
part of the provided Windows scripted install jobs have consistent settings.
Note
HP recommends that you leave the agent on the system after initial installation.
To configure AClient:
1.
From a text editor, open the aclient.inp file in the deployment server root
directory.
2.
Verify that the static IP address listed in the TcpAddr= line is the IP address
of your deployment server.
3.
To force applications to close when the server needs to restart, ensuring that
no jobs fail if the server must be restarted, change the line:
; ForceReboot=No
to:
ForceReboot=Yes
4.
To never be prompted for a boot diskette, change the commented
BootDiskMessageUsage line to:
BootDiskMessageUsage=0
If boot diskettes are used instead of PXE and a configuration task is issued to
a computer when no diskette is in the diskette drive, a prompt instructs you to
insert a diskette. If this occurs when you are not logged in to the server, you
must log in and close the prompt before the job can continue. By selecting to
never be prompted for a boot diskette, the server restarts to the normal
operating system if a boot diskette is not inserted in the server when required.
5.
Select the option to synchronize the target server time with the deployment
server time by changing the line:
; SyncTimeWithServer=No
to:
SyncTimeWithServer=Yes
6.
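Taken together, the edits in steps 2 through 5 leave lines like these in aclient.inp (the TcpAddr value shown is an example; it must match your deployment server's IP address):

```
TcpAddr=10.10.1.1
TcpPort=402
ForceReboot=Yes
BootDiskMessageUsage=0
SyncTimeWithServer=Yes
```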
Important
The PSPs must reside on writable media so that you can configure the Smart
Components in the PSP before PSP deployment. You cannot configure the
PSPs from the CD-ROM drive.
The components in the PSP only need to be configured once. You do not need
to configure the components each time they are deployed. After a PSP is
configured, it is ready for deployment.
To configure the Web Agent (and other Smart Components) in the PSP for
deployment:
1.
2.
3.
Expand the All Configurable Components directory in the tree in the left
pane.
4.
5.
6.
7.
Objectives
After completing this lab, you should be able to:
Install the Altiris eXpress Deployment Server Agent on the reference server
blade
Capture the reference server blade hardware configuration and disk image
Requirements
To complete this lab, you will need:
One or more ProLiant server blades such as the ProLiant BL20p Generation 2
(G2)
Optionally, to deploy a server blade connected to the MSA1000, you will also
need:
One or more ProLiant server blades with SAN support, such as the ProLiant
BL20p G2 with Fibre Channel Mezzanine Card
An RJ-45 Patch Panel 2 or GbE2 Interconnect Switch with the GbE2 Storage
Connectivity Kit
One Lucent Connector (LC-to-LC) fiber cable for connecting the server blade
to the MSA1000
Introduction
RDP 1.60 simplifies the installation of server blades connected to an HP SAN
solution such as the MSA1000 because the appropriate Storage Area Network
(SAN) support software is now installed as part of the scripted Windows Server
2003 installation. Previous versions of RDP required several manual steps to
configure the deployment server with the correct MSA1000 drivers.
The scripted Windows Server 2003 installation to a server blade connected to an
MSA1000 is no different than an installation to a server blade with only internal
drives. When the Windows Server 2003 installation completes, all you need to do
is connect to the server blade and use the HP Array Configuration Utility to
configure the MSA1000 as instructed.
1.
Prepare the server blade by installing the Fibre Channel mezzanine card.
2.
3.
Prepare the RJ-45 Patch Panel 2 or GbE2 Interconnect Switch with the GbE2
Storage Connectivity Kit.
4.
5.
2.
Locate the wnet.txt file and open it using a text editor. This is the unattend.txt
file used during the scripted installation.
3.
In the [UserData] section of the file, change the ComputerName setting to:
ComputerName=refsrv
4.
If not using the SELECT version of the operating system, add the following
line to the [UserData] section:
ProductID=XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
5.
6.
Verify that the AdminPassword= line is set to password. This parameter sets
the administrator account password. Change it if necessary.
7.
Save the file. You will use this file during the scripted installation.
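After these edits, the relevant lines of wnet.txt look roughly like this (the ProductID line is present only for non-SELECT media; AdminPassword typically sits in the [GuiUnattended] section of an unattend file, so verify where it appears in your copy):

```
[UserData]
ComputerName=refsrv
ProductID=XXXXX-XXXXX-XXXXX-XXXXX-XXXXX

[GuiUnattended]
AdminPassword=password
```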
2.
Remove the semicolons (remark indicators) from the following two lines:
TcpAddr=10.10.1.1
TcpPort=402
3.
If necessary, replace the IP address in the TcpAddr= line with the IP address
of your deployment server.
4.
5.
2.
In the Jobs pane, expand the Microsoft Windows 2003 Scripted Install Events
folder.
3.
4.
Click the individual tasks within the job to view details about each task.
Double-click the first Run Script task.
5.
After you have finished browsing, click Cancel twice to return to the
Deployment Server Console.
6.
To begin the deployment process, power on the target server blade. Start a
browser session, connect to the iLO of the target server, and use the Remote
Console capability to view the server deployment progress.
Note
Depending on the previous state of the server, you may have to press F12
during the boot sequence for a PXE boot.
7. Confirm that the new computer is listed in the New Computers folder of the Deployment Server Console.
8. Move the ProLiant BL20p Scripted Install for Microsoft Windows 2003 job to the target server.
9. The Schedule Job screen automatically displays. Select Run this job immediately, and click OK to start the scripted installation on your target server.
10. In the confirmation dialog box, click Yes to perform the scripted install.
Note
To bypass this step in the future, select the Don't prompt me again box.
The scripted installation of Windows Server 2003, including the installation of the
PSP, continues unattended and takes approximately 60 minutes to complete.
11. After the scripted installation completes, ensure that the server is correctly
added to the Active Directory domain, as shown in the following graphic.
12. Ensure that the server is correctly added to the DNS forward lookup zone, as
shown in the following graphic.
1.
2.
3. At the Remote Agent Install screen, select Let me specify a username and password for each machine as it's installed and click Next.
4. Select Enable this agent to use SIDgen and/or Microsoft Sysprep and click Change Settings to open the Default Agent Settings screen.
5.
6.
7. At the next screen, select Use only Altiris SIDgen utility from the drop-down menu.
8. Select the Update file system permissions when changing SIDs check box and click Next.
9.
10. At the Browse computers screen, select a computer from the list, or enter the
name or IP address of the target server on which to install the agent, and
click OK.
11. Click the server you just added, and then click Properties.
12. At the Agent Properties screen, enter the user name (Administrator) and
password (password) for the administrator and click OK.
13. Click Finish. The agent installs on the remote client. When the Installing
Clients screen shows the All clients installed successfully status, click Exit
Install to return to the Deployment Server Console.
Choose your preferred method and complete the appropriate configuration steps.
1.
2. At the Account Login screen, log in using the appropriate login credentials.
3.
1.
2. At the System Properties window, click the Remote tab, select the Allow users to connect remotely to this computer option, and click OK. At the Remote Sessions popup screen, click OK.
3.
1.
2. At the Remote Desktops screen, right-click Remote Desktops and select Add new connection.
3. At the Add New Connection window, enter the required information and click OK.
4. Right-click the new connection icon and select Connect. To disconnect from the target server, right-click the connection icon and select Disconnect.
The managed computer must have the Altiris Agent for Windows installed
and properly set up.
The client (the management PC) must have the appropriate Remote Control
option selected in Altiris client properties. This option is not selected by
default.
To configure the managed computer for remote control, start the Deployment
Server Console and complete these steps:
Note
You can also enable remote control of deployed servers by setting the
AllowRemoteControl option to Yes in the C:\Program
Files\Altiris\eXpress\Deployment Server\aclient.inp file. If you do so, the
following steps are unnecessary.
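Per the note above, the file-based alternative amounts to a single setting. The excerpt below is illustrative; the other settings present in aclient.inp are omitted:

```ini
; Excerpt from aclient.inp (illustrative; other settings omitted)
AllowRemoteControl=Yes
```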
1.
2. At the Windows/Linux Agent Settings screen, click the Remote Control tab, select the Allow this computer to be remote controlled option, and click OK.
1. Right-click the appropriate computer name in the Computers pane and select Remote Control.
2. The Remote Control window for the selected server displays. To close the remote session, click Control > Close Window.
1. In the Jobs pane of the Deployment Server Console, expand the SmartStart Toolkit and OS Imaging Events folder.
2.
3. In the Job Properties screen, double-click the Run Script task. The Run Script screen displays.
4. In the Script Information screen, change the default names of the hardware information and array information files that will be captured and click Finish.
5. In the Job Properties screen, double-click the Create Image task to open the Save Disk Image to a File screen.
6.
7. Click Advanced to view the optional settings for imaging. For example, you can change the maximum file size and compression ratio. For this exercise, do not change the settings and click OK to return to the Save Disk Image to a File screen.
8.
9.
10. The Schedule Job screen displays automatically. Select Run this job
immediately in the Schedule Computers for Job screen, and click OK. The
reference server restarts and processes the job.
11. The imaging process should take less than 10 minutes. When completed,
verify that the following files were created successfully:
Objectives
After completing this lab, you should be able to:
Capture the reference server blade hardware configuration and disk image
Requirements
To complete this lab, you will need:
A Red Hat Network File System (NFS) server on the network (or, you may
use Microsoft Windows Services for UNIX instead of a Red Hat NFS server)
with copies of the RHEL AS 3 CDs
Optionally, to deploy a server blade connected to the MSA1000, you will also
need:
One or more ProLiant server blades with Storage Area Network (SAN)
support, such as the ProLiant BL20p G2 with Fibre Channel Mezzanine Card
One MSA1000 with two or more hot-pluggable drives and no logical drives
defined
An RJ-45 Patch Panel 2 or GbE2 Interconnect Switch with the GbE2 Storage
Connectivity Kit
One Lucent Connector (LC-to-LC) fiber cable for connecting the server blade
to the MSA1000
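The Red Hat NFS server requirement can be met by copying the contents of the RHEL AS 3 CDs into a directory and exporting it read-only. The path and network below are assumptions for illustration; adjust them for your environment:

```
# /etc/exports on the NFS server (illustrative; adjust path and network)
/export/rhel-as3   192.168.0.0/255.255.255.0(ro,no_root_squash)
```

After editing /etc/exports, run exportfs -a (and ensure the NFS service is running) so the share becomes visible to client server blades.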
Introduction
RDP 1.60 simplifies the installation of server blades connected to an HP SAN
solution such as the MSA1000 because the appropriate SAN support software is
now installed as part of the scripted Red Hat Enterprise Linux AS 3 installation.
Previous versions of RDP required several manual steps to configure the
deployment server with the correct MSA1000 drivers.
The scripted RHEL AS 3 installation to a server blade connected to an MSA1000
is no different from an installation to a server blade with only internal drives. When
the operating system installation completes, all you need to do is connect to the
server blade and use the HP Array Configuration Utility (ACU) to configure the
MSA1000 as instructed.
Three major actions must be performed to deploy Linux using RDP 1.60:
1. Add Linux jobs to the Deployment Server Console for the appropriate server family.
2.
3. Deploy the operating system using one of the predefined RDP jobs.
After the operating system is deployed, configure the MSA1000 (if one exists) and
install server applications. After the reference server is configured as desired,
capture the hardware configuration and disk image for future deployments of like
servers.
The contents of the ProLiant BL20p Scripted Install for Red Hat Linux AS 3 job
are:
1.
2.
3.
4.
5. Upon rebooting, the target server boots to the C drive and runs the autoexec.bat file, which loads the Linux setup kernel. This reboot begins the Linux NFS-based scripted installation.
It is configured for:
One drive: RAID 0
Two drives: RAID 1
Three drives: RAID 5
Four or more drives: RAID Advanced Data Guarding (ADG) if supported; otherwise, RAID 5
Default settings:
Drive configuration: When configuring the disk partition for a scripted operating system installation, a 75MB boot partition is created. The remainder of the disk space is then partitioned according to Linux default specifications.
Packages: Basic Linux packages are installed during a scripted operating system installation. The GNOME and KDE packages are not installed automatically.
Firewall: Firewall settings are disabled.
ProLiant Support Pack files: HP installs the latest support pack drivers and agents. The default Linux Web Agent password is password. This password is stored as clear text in the input file, linuxpsp.txt, located on the NFS server.
Note: The root password for servers created with the provided scripts is password. This password is stored as clear text in the kickstart file. HP recommends that you change the root password to your own password, in encrypted form, within the kickstart file. For instructions, refer to the Red Hat Linux Customization Guide located at http://www.redhat.com/docs/manuals/linux.
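The recommendation to store the root password in encrypted form can be satisfied by generating an MD5-crypt hash and pasting it into the kickstart rootpw line. A minimal sketch, with a placeholder password and salt:

```shell
# Generate an MD5-crypt hash for a kickstart "rootpw --iscrypted" line.
# "MyNewPass" and the salt "xyz" are placeholders; substitute your own values.
openssl passwd -1 -salt xyz MyNewPass
```

The kickstart line then becomes rootpw --iscrypted followed by the full hash that the command prints.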
Many of these settings are contained within the appropriate kickstart file, for
example, bl20p.ks.cfg is provided for the ProLiant BL20p server blades.
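The defaults described above map onto standard kickstart directives. The fragment below is a rough, illustrative sketch of the relevant lines in such a file, not the exact contents of bl20p.ks.cfg:

```
# Illustrative kickstart excerpt reflecting the documented defaults
rootpw password                     # clear text by default; see the note above
firewall --disabled
part /boot --size 75                # 75MB boot partition
network --bootproto dhcp --hostname refsrv.class.local --nameserver 192.168.0.1
%packages
#@ GNOME                            # uncomment one line to add a graphical package
#@ KDE
```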
1. Insert the RDP 1.60 CD into the CD-ROM drive of your deployment server. If autorun is enabled, the RDP installation utility runs automatically. If autorun is disabled, double-click the autorun.exe file in the CD-ROM root directory.
2. When the software license agreement displays, click Agree to continue with the installation.
3.
4. At the Welcome screen, select I agree to all the terms of the preceding License Agreement and then click Next.
5. At the Job Selection screen, select the appropriate Linux jobs for the version of the operating system you are going to install. The Windows and Linux scripted jobs are cleared by default.
6. Scroll down and clear the SmartStart Toolkit and OS Imaging Events and the SmartStart Toolkit Hardware Configuration Events. These jobs were imported during the initial RDP installation. Click Next to continue.
7.
8. At the Confirm File Replace screen, click No. Answer No to all subsequent file and folder replacement confirmation screens.
9. At the OS Distribution Copying screen, insert the Red Hat Enterprise Linux AS 3 Update 2 CD #1 into the CD-ROM drive and click Next.
2. At the Nautilus screen, click the Tree tab and click the Usr/cpqrdp/ss.710/rhas3/ folder.
3.
1.
2.
2.
#@ GNOME
#@ KDE
To select a graphical package, remove the # (comment mark) from the line
that corresponds to the graphical package you prefer and save your changes.
1. Locate the network line and modify it to include the following options:
network --bootproto dhcp --hostname refsrv.class.local
--nameserver 192.168.0.1
2. Verify that the Linux NFS server address and directory are correct. Modify them if necessary.
1. Ensure that your target server is powered off and that the server icon in the Deployment Server Console is deleted.
2. In the Jobs pane, expand the Red Hat Enterprise Linux AS 3 Scripted Install Events folder.
3. Double-click the ProLiant BL20p Scripted Install for Red Hat Linux AS 3 job.
4. At the Job Properties screen, double-click the Run Script Install OS task.
5. At the Script Information screen, notice how the DOS environment variables are used to specify the configuration files used during the job. Ensure that the Run this script option is selected, and change the set nfsserver= line to:
set nfsserver=192.168.0.1
where the IP address reflects your NFS server. Click Finish to continue.
6.
7. Power on your target server and open an iLO Remote Console session to observe the deployment progress.
8. After the target server displays in the Computers pane of the Deployment Server Console, drag the ProLiant BL20p Scripted Install for Red Hat Linux AS 3 job to the new computer icon representing your target server. Schedule the job to execute immediately.
9. The scripted Red Hat Linux installation continues unattended and completes within 45 to 60 minutes.
10. After the installation completes, log in with the login name of root and the
password of password.
11. If you elected to install a graphical package, enter startx to start the GUI
after logging in.
1. At the GNOME GUI desktop, click Main Menu > Run Program and execute cpqacuxe -R. This starts the cpqacuxe service.
2.
3. Click the Mozilla Web Browser icon to start the Mozilla web browser.
4.
5.
6.
7.
8.
9.
11. The Array Configuration Utility screen displays. Ensure that all your array
controllers are visible in the left pane. Now you can use the ACU to
configure your arrays.
1. In the Jobs pane of the Deployment Server Console, expand the SmartStart Toolkit and OS Imaging Events folder.
2.
3.
4. In the Run Script screen, change the default names of the hardware information and array information files that will be captured and click Finish.
5.
6. Change the default image file name lnxcap.img to rslinux.img, and click Advanced to view the optional settings for imaging.
7. In the Create Disk Image Advanced screen, notice that you can change the maximum file size and compression ratio, and then click OK to return to the Create Disk Image screen.
8.
9. Double-click the Remove cached DHCP information task. If the image being captured uses DHCP for any of the NICs, cached information must be removed to avoid duplicate IP addresses. The second command in the Run this script field removes this information.
12. In the Deployment Server Console screen, move the modified Capture
Hardware Configuration and Linux Image job to the appropriate server icon
in the Computers pane.
13. Select Run this job immediately in the Schedule Computers for Job screen,
and click OK. The reference server restarts and processes the job.
14. The imaging process should take 10 to 15 minutes. When completed, verify
that the following files were created successfully:
Objectives
After completing this lab, you should be able to:
Deploy Windows Server 2003 using the hardware configuration files and the
disk image previously created
Requirements
To complete this lab, you will need:
One or more ProLiant server blades such as the ProLiant BL20p Generation 2
(G2)
One or more ProLiant server blades with Storage Area Network (SAN)
support, such as the ProLiant BL20p G2 with Fibre Channel Mezzanine Card
An RJ-45 Patch Panel 2 or GbE2 Interconnect Switch with the GbE2 Storage
Connectivity Kit
One Lucent Connector (LC-to-LC) fiber cable for connecting the server blade
to the MSA1000
1.
2. In the Jobs pane, expand the SmartStart Toolkit and OS Imaging Events folder, and double-click the Deploy Hardware Configuration and Windows Image job.
3.
4. At the Script Information screen, change the default names of the hardware information and array information files that will be used in the deployment. These are the files you captured in the earlier lab.
5.
6.
7. At the Disk Image Source screen, change the default image name, wincap.img, to rsw2k3.img.
8.
9.
10. Click OK to close the Job Properties screen and return to the Deployment
Server Console.
11. If your reference server is also your target server, erase the configuration on
your reference server before deploying the captured image file:
Caution
This step is data destructive. You must have successfully captured a disk
image of your reference server before proceeding.
a.
b. Move the Erase Hardware Configuration and Disks job to the reference server icon in the Computers pane.
c. At the Schedule Computers for Job screen, select Run this job immediately and click OK to start the job.
d.
12. After the erase job is completed, delete your reference server from the
Deployment Server Console by right-clicking its icon and selecting Delete.
13. At the Confirm Delete pop-up window, select Delete computers and groups
contained within selected items and click Yes.
14. If necessary, power cycle your target server. At the PXE Boot Selection
menu, verify that Altiris BootWorks (Initial Deployment) is selected.
The Initial Deployment job runs for all new computers that are not registered
in the Altiris database. This job does not perform any work, such as imaging
the server. When finished, it displays the new computer in the Deployment
Server Console and waits for further instructions.
The Initial Deployment job adds the target server to the New Computers
group in the Deployment Server Console. The target server displays in the
Deployment Server Console with the waiting state icon:
15. Move the modified Deploy Hardware Configuration and Windows Image job
to the target server icon in the Computers pane.
16. At the Schedule Computers for Job screen, schedule the job to run
immediately and click OK.
17. When a warning message displays on the target server, let the timer expire or
press any key (except ESC) to continue with the installation. No further
interaction with the target server is required as the image deploys.
1. At the Deployment Server Console screen, right-click your server blade icon in the Computers pane and select History. In the space below, write the jobs that were executed on your server blade, starting with the most recent one. Close the History window when finished.
............................................................................................................................
............................................................................................................................
............................................................................................................................
............................................................................................................................
2. In the Computers pane, double-click the server blade icon to launch the Computer Properties screen.
3. At the Computer Properties screen, scroll down and click the Bay icon.
4. From the Server change rule drop-down menu, select Re-Deploy Computer and click OK. The possible choices are:
Run a Predefined Job: The server blade processes any job specified by the user, including the Initial Deployment job.
Ignore the Change: The new server blade is ignored, meaning that no jobs are initiated. If the server blade existed in a previous bay, the history and parameters for the server blade are moved or associated with the new bay. If the server blade is unrecognized, its properties are associated with the bay, and the normal process defined for new server blades, if any, is followed.
5. Shut down your server blade and remove it from its bay. Swap it with another student group that is ready to test this feature. Insert the replacement server blade in the corresponding bay, and observe what happens. Ensure the new server blade powers on, and allow several minutes for the server change rule to take effect.
6.
Important
If connected to a SAN and using Selective Storage Presentation (SSP), you
must retain the server blade Host Bus Adapters (HBAs) to ensure that the
replacement server hardware components are identical in every way to the one
you are replacing. Place each HBA in the new server blade in the same order
and location as they were in the old server blade.
When using SSP, the SAN storage solution is configured to allow access to the
LUNs from a specific server blade using the World Wide ID (WWID) of its
HBAs. If you replace the server blade and the HBAs change, the HBA
WWIDs change, and you will lose connectivity to your SAN. RDP 2.0 will
include jobs that automate the SAN connection recovery.
If SSP is not configured, the HBA WWID is not used, and the rip-and-replace
functionality will not be affected by replacement server blades with different
HBA WWIDs.
When the new server blade registers with the Deployment Server Console, its
name displayed in the Computers pane will change. The Deployment Server
Console will then initiate a redeployment job.
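When verifying SSP assignments, it helps to read each HBA WWID from the replacement blade itself. The sketch below assumes a Linux host with a 2.6-or-later kernel exposing the fc_host sysfs class; older 2.4-based RHEL 3 kernels may not provide this interface:

```shell
# List the World Wide Port Name (WWID) of each Fibre Channel HBA via sysfs.
# Assumes /sys/class/fc_host exists (2.6+ kernel); prints a notice otherwise.
list_hba_wwids() {
    for host in /sys/class/fc_host/host*; do
        if [ -d "$host" ]; then
            echo "$host: $(cat "$host/port_name")"
        else
            echo "No Fibre Channel HBAs found"
        fi
    done
}
list_hba_wwids
```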
What jobs were executed on the new server blade as a result of the server change
rule execution?
............................................................................................................................
............................................................................................................................
............................................................................................................................
............................................................................................................................
If the following jobs were recorded in the history for the original server blade,
what jobs would be automatically executed on the new server blade and in what
order?
a.
b.
............................................................................................................................
............................................................................................................................
If the following jobs were recorded in the history for the original server blade,
what jobs would be automatically executed on the new server blade and in what
order?
a.
b.
c.
............................................................................................................................
............................................................................................................................
............................................................................................................................
For a fast recovery of a failed server blade, what is the most recent job that should
be in the server blade history?
............................................................................................................................
............................................................................................................................
Objectives
After completing this lab, you should be able to:
Requirements
To complete this lab, you will need:
One or more ProLiant server blades such as the ProLiant BL20p Generation 2
(G2)
One or more ProLiant server blades with Storage Area Network (SAN)
support, such as the ProLiant BL20p G2 with Fibre Channel Mezzanine Card
An RJ-45 Patch Panel 2 or GbE2 Interconnect Switch with the GbE2 Storage
Connectivity Kit
One Lucent Connector LC-to-LC fiber cable for connecting the server blade
to the MSA1000
1.
2. In the Jobs pane, expand the SmartStart Toolkit and OS Imaging Events folder, and double-click the Deploy Hardware Configuration and Linux Image job.
3.
4. At the Script Information screen, change the default names of the hardware information and array information files that will be used in the deployment. These are the files you captured in the earlier lab.
5.
6.
7. At the Disk Image Source screen, change the default image name, lnxcap.img, to rslinux.img.
8.
9.
10. Click OK to close the Job Properties screen and return to the Deployment
Server Console.
11. If your reference server is also your target server, erase the configuration on
your reference server before deploying the captured image file:
Caution
This step is data destructive. You must have successfully captured a disk
image of your reference server before proceeding.
a.
b. Move the Erase Hardware Configuration and Disks job to the reference server icon in the Computers pane.
c. At the Schedule Computers for Job screen, select Run this job immediately and click OK to start the job.
d.
12. After the erase job is completed, delete your reference server from the
Deployment Server Console by right-clicking its icon and selecting Delete.
13. At the Confirm Delete pop-up window, select Delete computers and groups
contained within selected items and click Yes.
14. If necessary, power cycle your target server. At the PXE Boot Selection
menu, Altiris BootWorks (Initial Deployment) should be auto-selected.
The Initial Deployment job runs for all new computers that are not registered
in the Altiris database. This job does not perform any work, such as imaging
the server. When finished, it displays the new computer in the Deployment
Server Console and waits for further instructions.
The Initial Deployment job adds the target server to the New Computers
group in the Deployment Server Console. The target server displays in the
Deployment Server Console with the waiting state icon:
15. Move the modified Deploy Hardware Configuration and Linux Image job to
the target server icon in the Computers pane.
16. At the Schedule Computers for Job screen, schedule the job to run
immediately and click OK.
17. When a warning message displays on the target server, let the timer expire or
press any key (except ESC) to continue with the installation. No further
interaction with the target server is required as the image deploys.
1. At the Deployment Server Console screen, right-click your server blade icon in the Computers pane and select History. In the space below, write the jobs that were executed on your server blade, starting with the most recent one. Close the History window when finished.
............................................................................................................................
............................................................................................................................
............................................................................................................................
............................................................................................................................
2. In the Computers pane, double-click the server blade icon to launch the Computer Properties screen.
3. At the Computer Properties screen, scroll down and click the Bay icon.
4. From the Server change rule drop-down menu, select Re-Deploy Computer and click OK. The choices are:
Run a Predefined Job: The server blade processes any job specified by the user, including the Initial Deployment job.
Ignore the Change: The new server blade is ignored, meaning that no jobs are initiated. If the server blade existed in a previous bay, the history and parameters for the server blade are moved or associated with the new bay. If the server blade is unrecognized, its properties are associated with the bay, and the normal process defined for new server blades, if any, is followed.
5. Shut down your server blade and remove it from its bay. Swap it with another student group that is ready to test this feature. Insert the replacement server blade in the corresponding bay, and observe what happens. Ensure the new server blade powers on, and allow several minutes for the server change rule to take effect.
6.
Important
If connected to a SAN and using Selective Storage Presentation (SSP), you
must retain the server blade Host Bus Adapters (HBAs) to ensure that the
replacement server hardware components are identical in every way to the one
you are replacing. Place each HBA in the new server blade in the same order
and location as they were in the old server blade.
When using SSP, the SAN storage solution is configured to allow access to the
LUNs from a specific server blade using the World Wide ID (WWID) of its
HBAs. If you replace the server blade and the HBAs change, the HBA
WWIDs change, and you will lose connectivity to your SAN. RDP 2.0 will
include jobs that automate the SAN connection recovery.
If SSP is not configured, the HBA WWID is not used, and the rip-and-replace
functionality will not be affected by replacement server blades with different
HBA WWIDs.
When the new server blade registers with the Deployment Server Console, its
name displayed in the Computers pane will change. The Deployment Server
Console will then initiate a redeployment job.
What jobs were executed on the new server blade as a result of the server change
rule execution?
............................................................................................................................
............................................................................................................................
............................................................................................................................
............................................................................................................................
If the following jobs were recorded in the history for the original server blade,
what jobs would be automatically executed on the new server blade and in what
order?
a.
b.
............................................................................................................................
............................................................................................................................
If the following jobs were recorded in the history for the original server blade,
what jobs would be automatically executed on the new server blade and in what
order?
a.
b.
c.
............................................................................................................................
............................................................................................................................
............................................................................................................................
For a fast recovery of a failed server blade, what is the most recent job that should
be in the server blade history?
............................................................................................................................
............................................................................................................................