HP-UX System and Network
Administration I
H3064S J.00
Student guide
1 of 3
Use of this material to deliver training without prior written permission from HP is prohibited.
© Copyright 2010 Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice. The only warranties for
HP products and services are set forth in the express warranty statements accompanying such
products and services. Nothing herein should be construed as constituting an additional
warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
This is an HP copyrighted work that may not be reproduced without the written permission of
HP. You may not use these materials to deliver training to any person outside of your
organization without the written permission of HP.
UNIX® is a registered trademark of The Open Group.
X/Open® is a registered trademark, and the X device is a trademark of X/Open Company
Ltd. in the UK and other countries.
Export Compliance Agreement
Export Requirements. You may not export or re-export products subject to this agreement in
violation of any applicable laws or regulations.
Without limiting the generality of the foregoing, products subject to this agreement may not be
exported, re-exported, otherwise transferred to or within (or to a national or resident of)
countries under U.S. economic embargo and/or sanction including the following countries:
Cuba, Iran, North Korea, Sudan and Syria.
This list is subject to change.
In addition, products subject to this agreement may not be exported, re-exported, or otherwise
transferred to persons or entities listed on the U.S. Department of Commerce Denied Persons
List; U.S. Department of Commerce Entity List (15 CFR 744, Supplement 4); U.S. Treasury
Department Designated/Blocked Nationals exclusion list; or U.S. State Department Debarred
Parties List; or to parties directly or indirectly involved in the development or production of
nuclear, chemical, or biological weapons, missiles, rocket systems, or unmanned air vehicles
as specified in the U.S. Export Administration Regulations (15 CFR 744); or to parties directly
or indirectly involved in the financing, commission or support of terrorist activities.
By accepting this agreement you confirm that you are not located in (or a national or resident
of) any country under U.S. embargo or sanction; not identified on any U.S. Department of
Commerce Denied Persons List, Entity List, US State Department Debarred Parties List or
Treasury Department Designated Nationals exclusion list; not directly or indirectly involved in
the development or production of nuclear, chemical, biological weapons, missiles, rocket
systems, or unmanned air vehicles as specified in the U.S. Export Administration Regulations
(15 CFR 744), and not directly or indirectly involved in the financing, commission or support
of terrorist activities.
Printed in the US
HP-UX System and Network Administration I
Student guide (1 of 3)
September 2010
Course Audience
This fast-paced 5-day course is the first of two courses HP offers to prepare
new UNIX administrators to successfully manage an HP-UX server or
workstation.
The course assumes that the student has experience with general UNIX user
commands.
Student Notes
This fast-paced 5-day course is the first of two courses HP offers to prepare new UNIX
administrators to successfully manage an HP-UX server or workstation.
The course assumes that the student has experience with general UNIX user commands.
Course Agenda
Course Overview Accessing the System Console
Navigating the SMH Booting PA-RISC Systems
Managing Users and Groups Booting Integrity Systems
Navigating the HP-UX File System Configuring the Kernel
Student Notes
HP-UX System Administrators often serve a number of roles – from configuring peripherals,
to managing user accounts, to installing software and patches. Over the span of five days,
this course covers the core skills required by all HP-UX system administrators.
HP recommends that students attend the follow-on to this course, HP-UX System and
Network Administration 2 (H3065S), to complete the course sequence for new HP-UX
administrators.
HP Education also offers courses covering numerous advanced HP-UX system and network
administration topics. See our website, http://www.hp.com/education for more
information.
HP-UX Versions
• HP currently supports three HP-UX 11i versions
• Slides and notes in this course cover all three current versions
• Labs will be completed on 11i v3
• Updated 11i v2/v3 media kits continue to be released every ~six months
Student Notes
Since HP-UX 11i was first released for PA-RISC in 2000, HP has released a number of
versions of the operating system for the Integrity product line. The table on the slide lists the
release identifier (as reported by HP-UX commands), release name (as used in the HP-UX
documentation), and supported platform for each release of HP-UX 11i. HP distributes
updated media kits with new patches and minor software updates approximately every six
months. The four digits following “11i v1/v2/v3” in each release name indicate the release
year and month.
Use the uname -r command to determine which HP-UX version your system is currently
running:
# uname -r
B.11.31
To determine which media kit your system was installed from, use swlist to check the
version number of the QPKBASE patch bundle.
The slides and notes in this course cover all three currently supported versions of the
operating system: 11i v1, v2, and v3. The lab exercises require 11i v3.
To determine end of support dates for each current HP-UX version, see HP’s support
roadmap online at http://www.hp.com/go/hpuxservermatrix.
HP Education Services:
http://www.hp.com/education
Student Notes
Beyond this course, there is a wealth of resources available to assist new HP-UX system
administrators.
Student Notes
New HP-UX System Administrators often find that HP’s System Administration Manager
(SAM) and the System Management Homepage (SMH) interfaces simplify many
administration tasks.
Both tools provide intuitive, menu-based interfaces for adding users, configuring the kernel,
configuring network interface cards, and other common administration tasks. Both also
include informative help screens, and automatic error-checking.
Like many menu-based interfaces, though, both SAM and SMH often provide less flexibility
than command line utilities.
The notes below describe the features of both tools. The remainder of this module focuses
on the SMH. An appendix at the end of the course discusses SAM in a bit more detail.
SMH replaces SAM entirely in 11i v3. The /usr/sbin/sam command is still available in 11i
v3, but launches the SMH rather than SAM. The latest version of the SMH for all versions of
HP-UX may be downloaded from http://software.hp.com.
SAM’s GUI requires the X Window System. The SMH uses a more flexible, SSL-protected,
web-based GUI that may be accessed from any Internet Explorer or Firefox web browser.
Accessing the system via a web interface provides much greater flexibility for administrators
who manage systems remotely.
SAM uses HP-UX commands and backend scripts and executables to complete
administration tasks. Administrators can review the commands in the
/var/sam/log/samlog file, but many of the commands called from the SAM interface
cannot be executed outside of SAM.
# smh
NOTE: this screenshot has been formatted and truncated to fit the slide
Student Notes
SMH is included on the operating environment DVDs for HP-UX 11i v1 (since September
2005), 11i v2 (since May 2005), and 11i v3 (all media kits). You can also download the
product from http://software.hp.com.
Not all SMH features are available on all HP-UX versions. New media kits often introduce
new SMH functionality. Use the swlist command to determine your system’s SMH version.
# swlist SysMgmtWeb
SMH has several additional dependencies, all of which are included in the 11i v2 and 11i v3
operating environments. On 11i v1, HP also recommends installing the KRNG11i patch
bundle from http://software.hp.com for improved security.
The SMH offers a web interface in all HP-UX versions, and, in 11i v3, a TUI interface as well.
To launch the TUI interface, log into the target system as user root using any 24x80 terminal
emulator, and run smh.
Use the [Tab] key to jump back and forth between the menu bar and the other regions on
the screen, and the arrow keys to scroll up and down and left and right. Look for keyboard
shortcuts at the bottom of the screen.
Student Notes
HP-UX provides the SMH web interface via a dedicated Apache web server daemon. There
are two common techniques for launching this daemon. By default, SMH is configured to run
in “autostart” mode, as described below. The next slide describes “start on boot” mode.
• During the system boot process, the /sbin/init.d/hpsmh startup script launches a
lightweight smhstartd daemon. smhstartd runs continuously until system shutdown,
listening for incoming connection requests from clients.
/opt/hpws/apache/bin/httpd \
-k start \
-DSSL \
-f /opt/hpsmh/conf/smhpd.conf
# smhstartconfig -a on -b off
/etc/rc.config.d/hpsmh has been edited to enable
HPSMH to be autostarted using port 2301.
NOTE: HPSMH 'start on boot' mode is already disabled.
# smhstartconfig
HPSMH 'autostart url' mode.........: ON
HPSMH 'start on boot' mode.........: OFF
Start Tomcat when HPSMH starts.....: OFF
If your organization’s security policy prohibits web servers on production servers, you can
disable the SMH web interface entirely with the following commands:
# vi /etc/rc.config.d/hpsmh
# /sbin/init.d/hpsmh stop
Student Notes
The previous slide explained how to launch the Apache/SMH daemon on an as-needed basis
via SMH autostart. Administrators who wish to connect to the SMH directly via HTTPS may
prefer to start the Apache/SMH daemon during the boot process and allow it to run
perpetually.
Autostart is the default SMH configuration mode. Use the smhstartconfig command to
enable and verify SMH start-on-boot.
# smhstartconfig -a off -b on
/etc/rc.config.d/hpsmh has been edited to disable
the autostarting of HPSMH using port 2301.
/etc/rc.config.d/hpsmh has been edited to enable
the 'start on boot' startup mode of HPSMH server.
# smhstartconfig
HPSMH 'autostart url' mode.........: OFF
HPSMH 'start on boot' mode.........: ON
Start Tomcat when HPSMH starts.....: OFF
If your organization’s security policy prohibits web servers on production servers, you can
disable the SMH web interface entirely with the following commands:
# vi /etc/rc.config.d/hpsmh
# /sbin/init.d/hpsmh stop
Student Notes
If the SMH start-on-boot functionality is enabled, users connect directly to the SMH via
https://server:2381/. If SMH autostart functionality is enabled, users initially connect
to http://server:2301/, then get redirected to https://server:2381/. In either
case, the user ultimately accesses the SMH server through an HTTPS Secure Sockets Layer
(SSL) connection.
• Users can verify the identity of the SMH server to which they are connected.
Any time a web browser accesses a website via the HTTPS protocol, the web server presents
a security “certificate”. The client browser compares the certificate provided by the web
server with information obtained from a trusted “certificate authority” (CA) such as
http://www.verisign.com.
By default, SMH uses “self-signed” certificates, which are signed by the SMH server itself
rather than a well-known CA. The browser can’t determine the authenticity of self-signed
certificates, so it displays a warning similar to the messages shown on the slide. If you see a
security certificate warning message, but your server and client reside on a secure, trusted
network, you may choose to ignore the message and proceed with the connection.
Student Notes
After connecting to the SMH daemon, enter an authorized HP-UX username/password. By
default, only members of the HP-UX root group can log into the SMH. User root is typically
the only member of the root group. To determine which users belong to your system’s
root group, use nsquery.
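Where nsquery is unavailable, or the system resolves groups from local files only, the membership list can also be read straight out of /etc/group. The one-liner below is an illustrative sketch, not part of the original course material:

```shell
# Print the supplementary members of the root group.
# /etc/group fields are name:password:gid:member-list;
# an empty fourth field means no supplementary members are listed.
awk -F: '$1 == "root" { print "root group members: " $4 }' /etc/group
```

Note that users whose primary group is root do not appear in the member list, so an empty result does not necessarily mean that nobody belongs to the group.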
A later slide in this chapter explains how to grant other user groups access to the SMH, too.
Student Notes
The SMH utilizes a tabbed interface.
• Use the “Home” tab, the default tab, to view summary system status information.
• Use the “Settings” tab to customize SMH security and add custom menu items.
• Use the “Logs” tab to launch SMH’s web-based log file viewers.
• Use the “Support” tab to access HP’s online IT Resource Center and user forums.
The SMH banner graphic includes links to a number of other resources in the SMH, too.
• On the far left, the SMH reports which SMH screen you are currently viewing.
• The next block reports your system hostname and model string.
• The next block provides a link to the Management Processor, which provides a console
login interface that is required for some system administration tasks.
• Two icons on the far right enable you to select the SMH list or icon menu format.
• Two links above the menu format buttons take you back to the SMH “Home” screen, or
log you out.
• The “Legend” link displays a legend that explains the meaning of the SMH icons.
• The “Refresh” link refreshes the current SMH screen when system conditions change.
• By default, SMH sessions terminate after several minutes of inactivity. Click the
checkbox at top right to disable the auto-logout feature.
SMH->Home (1 of 2)
• The SMH “Home” tab summarizes the status of the system’s subsystems
• Click any subsystem for more detailed information
• Contents of the “Home” tab vary from model to model
• Click the “Legend” link to view an icon legend
Student Notes
The SMH “Home” tab summarizes the status of the cooling, power, memory, and other
hardware subsystems. The subsystems listed may vary somewhat from system model to
system model. To learn more about a subsystem, click the subsystem name.
To the left of each subsystem name, the SMH displays a color-coded icon that represents the
subsystem’s health status. Click the “Legend” link in the SMH header, or see the legend
included on the slide, to determine what each icon represents.
The oversize status icon at the top left of the SMH “Home” page summarizes the overall
system status. In the sample system shown on the slide, one of the network interface cards is
disconnected, which results in a minor warning for the network subsystem, and for the
system as a whole.
Though not shown in the screenshot on the slide, the “Home” tab also includes a “System
Configuration” box containing links to some of the commonly used SMH system
administration tools. A later slide in this chapter discusses these tools in detail.
WBEM
The SMH collects status information about the operating system and the system hardware via
Web Based Enterprise Management (WBEM) protocols and standards. WBEM is an industry
standard developed and used by multiple vendors. Most HP operating systems, platforms
and devices include WBEM “providers” that provide information to SMH and other HP
management tools.
Use the swlist command to see which WBEM providers are installed on your HP-UX 11i v1,
v2, or v3 system.
HP adds new and updated WBEM providers in each media kit release. The latest WBEM
providers are also available on http://software.hp.com.
SMH->Home (2 of 2)
From the “Home” tab …
• Click a hardware subsystem (e.g.: “Physical Memory”) for more details
• Output varies from model to model
NOTE: screenshot has been formatted and truncated to fit the slide
Student Notes
From the SMH “Home” tab, you can click any subsystem link to view more detailed
information about that subsystem. The screenshot on the slide shows the physical memory
subsystem detail, including the status, location, capacity, type, and serial number of each
DIMM (Dual Inline Memory Module).
SMH->Tools (1 of 4)
The “Tools” tab provides GUI interfaces for many common admin tasks
• Some tools launch GUI interfaces, some launch web interfaces, others run CLIs
• Supported tools vary from release to release
Student Notes
The SMH “Tools” tab provides GUI interfaces for many common system administration tasks.
The slide shows some of the tools included by default in the SMH.
Some tools launch GUI interfaces, some launch web interfaces, others run command line
utilities. In the current release, some SMH tools launch legacy SAM interfaces, too.
Supported tools vary from OS release to OS release.
SMH->Tools (2 of 4)
To run a tool...
• Click a tool (e.g.: “File Systems”) on the “Tools” tab
• Select an object (e.g.: “/home”) from the resulting object list
• Select an action (e.g.: “Unmount”) from the resulting action list
• Provide the information requested in the dialog box that follows
Student Notes
In order to launch a tool, simply click the tool’s link on the SMH “Tools” tab.
The interface that follows varies from tool to tool. Most of the recently developed tools use a
web interface similar to the “File Systems” tool shown on the slide.
• Click a tool (e.g.: “File Systems”) on the “Tools” tab.
• Select an object (e.g.: “/home”) from the resulting object list.
• Select an action (e.g.: “Unmount”) from the resulting action list on the right side of the
screen.
SMH->Tools (3 of 4)
• Dialog boxes vary from tool to tool
• Most include an explanation of the tool and its limitations and side-effects
• Most include a preview button that displays the HP-UX command(s) executed by the tool
Student Notes
Tool dialog boxes vary from tool to tool.
Most include an explanation of the tool’s purpose, its limitations, and any potential
side-effects.
Most include a “Preview” button that displays the HP-UX command(s) that will be executed
by the tool.
SMH->Tools (4 of 4)
Some SMH tools are simply wrappers for external non-web-based applications
• Select your preferred language
• Enter your desktop system’s $DISPLAY variable value
• Look at the command preview to determine which command the tool executes
• Click “Run”
Student Notes
Some SMH tools simply launch legacy SAM interfaces, or other GUI and CLI applications.
Launching these types of tools displays a window similar to the dialog box shown on the
slide. To use these tools:
• Select your preferred language from the pull-down menu. English users should select
“C”.
• If the tool is GUI-based, enter your desktop system’s $DISPLAY name. Execute echo
$DISPLAY in a shell window to determine the appropriate display name.
• Look at the command preview at the bottom of the screen to determine which
command the tool executes.
• Click “Run”.
What happens next varies from tool to tool. CLI-based tools simply execute the command
and display the resulting STDOUT/STDERR output. Web-based tools run in a new browser
window. X-based applications, such as the swinstall tool shown on the slide, launch an X-
based interface similar to the swinstall interface below.
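The $DISPLAY value mentioned above follows the host:display.screen convention. A minimal illustration follows; mydesktop is a placeholder hostname, not a name from the course:

```shell
# DISPLAY tells X clients where to open their windows:
# hostname, display number, and screen number.
export DISPLAY=mydesktop:0.0
echo "X clients will display on: $DISPLAY"
```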
SMH->Settings
The “Settings” tab allows you to add and remove your own custom tools, too
• Access the “Settings” tab
• Click “Add Custom Menu”
• Use the resulting dialog box to create the custom tool
• Custom tools may be added to existing SMH tool categories, or new custom categories
• Custom tools may launch X applications, CLI commands, or web applications
• Custom tools may be configured to run as root when launched by non-root users
• Custom tools may be executed just like built-in SMH tools
Student Notes
The SMH has quite a few built-in tools. For even more flexibility, SMH allows the
administrator to add custom tools, too.
• Access the “Settings” tab.
• Custom tools may be added to existing tool categories, or new custom categories.
• Custom tools may be configured to run as root when launched by non-root users.
• To execute a custom tool, just click the tool’s link as you would any other SMH tool. CLI-
based tools execute the command non-interactively and display the resulting
STDOUT/STDERR output.
SMH->Tasks
Use the “Tasks” tab to execute a single command through the SMH
• Access the “Tasks” tab
• Click “Launch” or “Run”, and follow the prompts to run the program
• SMH reports the command’s STDERR and STDOUT output
Student Notes
The SMH “Settings” tab allows administrators to create permanent custom tools to execute
frequently used commands. The SMH “Tasks” tab allows administrators to execute one-time
commands remotely, without permanently adding a tool to the SMH menus.
• Access the “Tasks” tab.
• Click “Launch” or “Run” and follow the prompts to run the program. Select your
preferred language from the pull-down menu. English users should select “C”. If the tool
is GUI-based, enter your desktop system’s $DISPLAY name. Execute echo $DISPLAY
in a shell window to determine the appropriate display name.
SMH->Logs
SMH provides web-based log file viewers for viewing some common system log files
• Access the “Logs” tab
• Select a log file viewer (e.g.: “System Log Viewer”)
• Use the “Select” tab to select a log file (e.g.: “syslog.log” vs. “OLDsyslog.log”)
• Use the “Layout” and “Filters” tabs to customize the column layout and filtering
• Use the “Display” tab to view the log contents
• Log file viewer features for other log files may vary
Student Notes
SMH provides web-based log file viewers for viewing and filtering several common system
log files.
• Access the “Logs” tab.
• Select a log file viewer (e.g.: “System Log Viewer”). Different log viewers may have
slightly different interfaces. The steps below apply to the “System Log Viewer”, which
displays the contents of the /var/adm/syslog/syslog.log log file. The
syslog.log file captures error, warning, and status messages from a variety of
subsystems and services.
− Use the “Select” tab to select a log file (e.g.: “syslog.log” vs. “OLDsyslog.log”).
− Use the “Layout” tab to customize the column layout, and use the “Filters” tab to
filter the log file contents by date and time.
− Use the “Display” tab to view the log file contents. Use the scroll bar to move
forwards and backwards through the file. Use the “Search” text box to search the file
for specific patterns.
− Log file viewer features for other log files may vary.
If you want to add log file viewers for other log files into the SMH, use the “Add Custom
Menu” feature described previously, put the tool on the “Logs” page, and enter
“/usr/bin/cat /my/log/file/name” in the “Command/URL” field.
Student Notes
Users must enter a valid HP-UX username/password in order to access the SMH. SMH
determines a user’s access rights (if any) via the user’s HP-UX group memberships. By
default, only members of the root group can access the SMH. If other users such as
operators, backup administrators, or database administrators need access to the SMH, use
the “Settings->Security->User Groups” menu to grant SMH access to other HP-UX groups.
Members of groups that have SMH “Administrator” privileges can use all of the SMH tools
and features, add custom tools, and grant SMH access rights to other user groups. By default,
the SMH grants members of the root group SMH “Administrator” privileges.
Members of groups that have SMH “Operator” privileges can access most SMH tools and
features, but cannot add or remove custom tools, execute arbitrary tasks as root, or modify
the SMH user, group, security, and authentication settings.
Members of groups that have SMH “User” privileges can use tools that display information
but cannot use SMH tools to modify either the system or SMH configuration.
To grant a non-root user access to the SMH TUI, run smh -r as root:
# smh -r
The privileges set for the user from the Text User Interface doesn't
apply to Graphical User Interface. System Management Homepage(SMH)
in Graphical User Interface has a different way of setting the
privileges. Please look at smh(1M) man page for more information
Do you want to continue (y/n) <y>: y
Next, specify which SMH functional areas the user should be allowed to access. Be sure to
press s to save the selected privileges before exiting.
The user should then be able to run /usr/sbin/smh and access the selected SMH
functional areas.
SMH Authentication
Security-conscious system administrators can enable additional SMH authentication
features via other links on the Settings->Security menu
• Anonymous/Local Access:
Allow local and/or remote users to access the SMH without providing a
username/password
• IP Binding:
Only allow users to access SMH from selected networks
• IP Restricted login:
Only allow users to access SMH from selected IP addresses
• Local Server Certificate:
Import a security certificate for the SMH server from a third party
• Timeouts:
Specify SMH session timeout values
• Trust Mode:
Determine how SMH authenticates configuration requests from remote
SIM servers
• Trusted Management Servers:
Import security certificates for SIM servers, if using SIM to remotely
manage SMH nodes
Student Notes
Security-conscious system administrators can enable additional SMH authentication features
via other links on the “Settings->Security” menu.
Local/Anonymous Access
Anonymous Access enables a user to access the System Management Homepage without
logging in. This feature is disabled by default. HP does not recommend enabling anonymous
access.
Local Access enables local users to access the System Management Homepage without being
challenged for authentication.
If Local Access/Anonymous is selected, any local user has access limited to unsecured pages
without being challenged for a username and password.
If Local Access/Administrator is selected, any user with access to the local console is granted
full access to all SMH features.
IP Binding
IP Binding specifies which IP networks and subnets the System Management Homepage
accepts requests from. A maximum of five subnet IP addresses and netmasks can be defined.
The System Management Homepage always allows access from 127.0.0.1. If IP Binding is enabled
and no subnet/mask pairs are configured, then the System Management Homepage is only
available to 127.0.0.1. If IP Binding is not enabled, users can access the SMH from any
network or subnet.
IP Restricted login
IP Restricted Login allows the administrator to specify a semicolon-separated list of IP
address ranges that should be explicitly allowed or denied SMH access.
If an IP address is excluded, it is excluded even if it is also listed in the included box. If there
are IP addresses in the inclusion list, then only those IP addresses are allowed log-in access
with the exception of localhost. If no IP addresses are in the inclusion list, then log-in access
is allowed to any IP addresses not in the exclusion list.
Timeouts
Use this feature to change SMH session and interface timeout values.
Trust Mode
HP Systems Insight Manager (SIM) is an HP product that allows administrators to monitor
and manage multiple servers and devices from a central management station. The next slide
provides a brief overview of SIM functionality. SIM utilizes SMH for some management
tasks. The SMH “Trust Mode” screen determines how SMH authenticates requests received
from remote servers.
User Groups
Student Notes
HP SMH provides an intuitive web interface for managing a single HP system. HP Systems
Insight Manager provides an intuitive web interface for managing multiple HP servers and
devices from a consolidated central management interface.
SIM manages all HP-supported operating systems and most HP-supported devices, including
storage devices, ProLiant Windows/Linux servers, blade enclosures and blade servers, and
much more.
SIM integrates with the SMH, and can seamlessly launch any HP Windows/Linux/HP-UX
server’s SMH.
SIM consolidates status, log, and other information from multiple nodes. In large
environments, this consolidated monitoring greatly simplifies monitoring and
troubleshooting tasks.
Basic SIM functionality is included with HP-UX. Some customers purchase additional SIM
plug-ins for even greater flexibility.
For more information about SIM, attend HP Education’s HB508S “HP-UX Systems Insight
Manager” class, or visit the SIM product page at http://www.hp.com/go/hpsim.
Manuals on http://docs.hp.com:
HP System Management Homepage User Guide
HP System Management Homepage Installation Guide
HP System Management Homepage Release Notes
Student Notes
Directions
Carefully follow the instructions below and record your answers in the spaces provided.
1. Verify that the SMH product is installed and configured on your system.
# swlist SysMgmtWeb
# swconfig -x reconfigure=true SysMgmtHomepage.*
1. Launch the Internet Explorer web browser and point it to the SMH autostart URL,
http://server_ip:2301/. Replace server_ip with your server's IP address.
a. If you are accessing your lab system remotely via a Virtual Lab portal server, launch
the portal’s Internet Explorer via the browser link on the VL webtop. In some VL
environments, there may be an SMH link on the webtop that opens a browser directly
to the SMH.
b. If you are accessing your lab system from a PC that has full network connectivity to
your lab system, launch Internet Explorer on your PC.
2. If asked if you wish to be redirected to “view pages over a secure connection”, click
[OK].
You should see a “Security Alert” indicating that the security certificate provided by the
SMH server was “issued by a company you have not chosen to trust”.
By default, the SMH uses “self-signed” authentication certificates, issued by the SMH
server itself. It’s possible to obtain a security certificate for the SMH server from a third
party “Certificate Authority”; for the sake of the lab, we’ll use the self-signed certificate.
a. Click the “Accept this certificate permanently” radio button to permanently accept
the self-signed certificate from the SMH server.
c. A “Security Warning” message should appear indicating that you “have requested an
encrypted page”. Click [OK] to proceed to the SMH login screen.
4. Login as user root on the SMH login page. Note the padlock icon in the bottom right
corner of the browser window, indicating that you are connected to the server via a
secure connection.
1. Use the SMH “Home” tab links to view detailed status reports on some of your lab
system’s hardware components.
2. Use the SMH “Home” tab links to view detailed reports of your lab system’s process
information, networking information, and memory utilization.
3. Navigate to the SMH “Tools” tab and use the Defragment Extents link to
“defragment” the /home file system.
4. Navigate to the SMH “Tasks” tab and use the Run Command as Root link to execute
/usr/bin/passwd –f user1, which forces user1 to change his/her password at
next login.
5. Navigate to the SMH “Logs” tab and use the System Log Viewer link to view all lines
in /var/adm/syslog/syslog.log that contain the string inetd.
7. In the Command/URL field, enter the following command, which purges all files from
/tmp that haven’t been accessed in at least seven days:
8. Click [Add].
10. In the Disk & File Systems category, click the new Purge /tmp tool.
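The purge command itself did not survive in these notes. A find invocation along the following lines would match step 7’s description; this is a hedged reconstruction, not the original lab text:

```shell
# Remove regular files under /tmp whose last access time is more than
# seven full days ago. -type f skips directories; -atime +7 selects
# files not accessed within the last 7 days.
find /tmp -type f -atime +7 -exec rm -f {} \;
```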
Part 6: Cleanup
Close your SMH browser window before proceeding to the next chapter.
Directions
Carefully follow the instructions below and record your answers in the spaces provided.
1. Verify that the SMH product is installed and configured on your system.
# swlist SysMgmtWeb
# swconfig -x reconfigure=true SysMgmtHomepage.*
Answer:
# smhstartconfig
HPSMH 'autostart url' mode.........: ON
HPSMH 'start on boot' mode.........: OFF
Start Tomcat when HPSMH starts.....: OFF
1. Note that when performing these labs in the HP Virtual Lab, there is an SMH button in the
HPVL Reservation Window that will open an SMH browser window.
The other method is to launch the Internet Explorer web browser and point it to the SMH
autostart URL, http://server_ip:2301/. Replace server_ip with your server's IP
address.
a. If you are accessing your lab system remotely via a Virtual Lab portal server, launch
the portal’s Internet Explorer via the browser link on the VL webtop. In some VL
environments, there may be an SMH link on the webtop that opens a browser directly
to the SMH.
b. If you are accessing your lab system from a PC that has full network connectivity to
your lab system, launch Internet Explorer on your PC.
2. If asked if you wish to be redirected to “view pages over a secure connection”, click
[OK].
a. You should see a “Security Alert” indicating that the security certificate provided by
the SMH server was “issued by a company you have not chosen to trust”.
By default, the SMH uses “self-signed” authentication certificates, issued by the SMH
server itself. It’s possible to obtain a security certificate for the SMH server from a
third party “Certificate Authority”; for the sake of the lab, we’ll accept the self-signed
certificate.
3. Login as user root on the SMH login page. If your browser’s status bar is enabled, note
the padlock icon in the bottom right corner of the browser window indicating that the
connection to the server is secure.
2. When performing these labs in the HP Virtual Lab, there is an SMH button in the HPVL
Reservation Window that will open an SMH browser window.
The other method is to point your web browser to the SMH autostart URL,
http://server:2301/. Replace server with your fully-qualified server hostname.
By default, the SMH uses “self-signed” authentication certificates, issued by the SMH
server itself. It’s possible to obtain a security certificate for the SMH server from a third
party “Certificate Authority”; for the sake of the lab, we’ll use the self-signed certificate.
a. Click the “Accept this certificate permanently” radio button to permanently accept
the self-signed certificate from the SMH server.
c. A “Security Warning” message should appear indicating that you “have requested an
encrypted page”. Click [OK] to proceed to the SMH login screen.
4. Login as user root on the SMH login page. Note the padlock icon in the bottom right
corner of the browser window, indicating that you are connected to the server via a
secure connection.
1. Use the SMH “Home” tab links to view detailed status reports on some of your lab
system’s hardware components.
2. Use the SMH “Home” tab links to view detailed reports of your lab system’s process
information, networking information, and memory utilization.
3. Navigate to the SMH “Tools” tab and use the Defragment Extents link to
“defragment” the /home file system.
Answer:
a. Navigate to the SMH “Tools” tab.
d. Click the Defragment Extents link. You may have to scroll to the bottom right
corner of the SMH screen to see this link.
4. Navigate to the SMH “Tasks” tab and use the Run Command as Root link to execute
/usr/bin/passwd -f user1, which forces user1 to change his/her password at
next login.
Answer:
a. Navigate to the SMH “Tasks” tab and click the Run Command as Root link.
d. Click [Run].
5. Navigate to the SMH “Logs” tab and use the System Log Viewer link to view all lines
in /var/adm/syslog/syslog.log that contain the string inetd.
Answer:
a. Navigate to the SMH “Logs” tab and click the System and Consolidated Log
Viewer link.
7. In the Command/URL field enter the following command, which purges all files from
/tmp which haven’t been accessed in at least seven days:
8. Click [Add].
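The purge command itself is not reproduced in this excerpt. A find invocation matching the description in step 7 could look like the sketch below; purge_old is a hypothetical helper name, and the lab's exact command may differ.

```shell
# Sketch: remove regular files under $1 that have not been accessed
# in more than $2 days. -xdev keeps find on a single filesystem,
# -type f skips directories, and -atime +N selects files whose last
# access is more than N days ago.
purge_old() {
    dir=$1
    days=$2
    find "$dir" -xdev -type f -atime +"$days" -exec rm -f {} \;
}

# The lab step would then correspond to:
# purge_old /tmp 7
```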
Part 6: Cleanup
Close your SMH browser window before proceeding to the next chapter.
[Slide graphic: individual users (Sue, Jim, Fran, and others) assigned to the Users, Sales, and Develop groups]
Student Notes
In order to gain access to an HP-UX system and its resources, users are required to log in. By
controlling access to your system, you can prevent unauthorized users from running
programs that consume resources, and control access to the data stored on your system.
Every user on an HP-UX system is assigned a unique username, password, and User
Identification (UID) number. HP-UX uses the user’s UID number to determine which files
and processes are associated with each user on the system.
Every user is also assigned a primary group membership and, optionally, up to 20 additional
group memberships. HP-UX grants access to files and directories based on a user’s UID and
the groups to which the user belongs.
Use the id command to determine a user’s UID and primary group membership.
# id user1
uid=301(user1) gid=301(class)
# groups user1
class class2 users
This chapter describes the configuration files that define user accounts and groups, and the
commands required to manage those files.
/etc/group
users::20:
accts::1001:user1,user2
sales::1002:user1,user2,user3,user4,user5,user6
/home
Student Notes
User accounts are defined in the /etc/passwd file. Each line in the /etc/passwd file
identifies a user’s username, password, User ID, primary group, home directory, and other
critical user-specific information.
Some users may belong to multiple user groups. The /etc/passwd file defines each user’s
primary group membership. The /etc/group file defines additional group memberships.
Finally, most users have a home directory under /home, beneath which they can store their
personal files and directories.
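Because both files are plain colon-delimited text, standard tools can pull individual fields out of them. A small sketch using awk on sample /etc/passwd-format input (field positions as described in this chapter):

```shell
# Print username ($1), UID ($3), home directory ($6), and login
# shell ($7) for each 7-field /etc/passwd-format line.
awk -F: '{ printf "%s uid=%s home=%s shell=%s\n", $1, $3, $6, $7 }' <<'EOF'
root:qmAj8as.,8a3e:0:3::/:/sbin/sh
user1:AdOK60AazRgXU:1001:1001:111-1111:/home/user1:/usr/bin/sh
EOF
```

Pointed at /etc/passwd itself, the same one-liner summarizes every account on the system.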
/etc/passwd (r--r--r--)
root:qmAj8as.,8a3e:0:3::/:/sbin/sh
daemon:*:1:5::/:/sbin/sh
user1:AdOK60AazRgXU:1001:1001:111-1111:/home/user1:/usr/bin/sh
user2:AdOK60AazRgXU:1002:1001:222-2222:/home/user2:/usr/bin/sh
user3:AdOK60AazRgXU:1003:1001:333-3333:/home/user3:/usr/bin/sh
Student Notes
The /etc/passwd file contains a one-line entry for each authorized user account. All fields
are delimited by colons (:).
Username The username that is used when a user logs in. The first character in each
username should be alphabetic, but remaining characters may be
alphabetic or numeric. Usernames are case sensitive.
In 11i v1 and v2, the username must be 1-8 characters in length. If a name
contains more than eight characters, only the first eight are significant.
# /sbin/init.d/pwgr stop
pwgrd stopped
# lugadmin -e
Warning: Long user/group name once enabled cannot
be disabled in future.
Do you want to continue [yY]: y
# /sbin/init.d/pwgr start
pwgrd started
# lugadmin -l
256
Commands such as who, ll, and ps that display usernames may truncate
usernames greater than 8 characters. The user represented in the who
output below has username ThisIsALongName.
$ who
ThisIsA+ console Jun 13 13:27
Long usernames may cause problems for scripts and applications that
attempt to parse the output from these commands or the contents of the
/etc/passwd file.
Password The encrypted password. You can encrypt a new password for a user via
the passwd command. /etc/passwd supports user passwords up to
eight characters.
If the password field is empty, the user can login without entering a
password.
User ID Each user must be assigned a user ID. User ID 0 is reserved for root, and
UIDs 1-99 are reserved for other predefined accounts required by the
system. SAM, SMH, and ugweb automatically assign UID numbers when
creating new user accounts.
UIDs may be as large as the MAXUID parameter defined in the
/usr/include/sys/param.h header file.
Using large UIDs may cause problems when sharing files with other
systems that do not support large UIDs.
Group ID The user’s primary group ID (GID). This number corresponds with an
entry in the /etc/group file. See the /etc/group discussion later in the
chapter for more information.
Comments The comment field. It allows you to add extra information about the user,
such as the user's full name, telephone extension, organization, or building
number.
Home directory The absolute path to the directory the user will be in when they log in. If
this directory does not exist or is invalid, then the user’s home directory
becomes /.
Command The absolute path of a command to be executed when the user logs in.
Typically, this is a shell. The shells that are usually used are
/usr/bin/sh, /usr/bin/ksh, and /usr/bin/csh. Administrators
must use the /sbin/sh POSIX shell. Most non-root users should use the
/usr/bin/sh POSIX shell. If the field is empty, the default is
/usr/bin/sh.
The command entry does not have to be a shell. For example, you can
create the following entry in /etc/passwd:
date:rc70x.4.hGJdc:20:1::/:/usr/bin/date
NOTE: The permissions on the passwd file should be read only (r--r--r--) and
the owner must be root.
root:rZ1lps2JYh3iA:0:3::/:/sbin/sh
daemon:*:1:5::/:/sbin/sh
bin:*:2:2::/usr/bin:/sbin/sh
sys:*:3:3::/:
adm:*:4:4::/var/adm:/sbin/sh
uucp:*:5:3::/var/spool/uucppublic:/usr/lbin/uucp/uucico
lp:*:9:7::/var/spool/lp:/sbin/sh
nuucp:*:11:11::/var/spool/uucppublic:/usr/lbin/uucp/uucico
hpdb:*:27:1:ALLBASE:/:/sbin/sh
nobody:*:-2:60001::/:
Editing /etc/passwd
If you are using vi to edit /etc/passwd and a user attempts to change a password while
you are editing, the user's change will not be entered into the file. To prevent this situation,
use vipw when editing /etc/passwd.
# vipw
# pwck
[/etc/passwd] user1:fnnmD.DGyptLU:301:301:student:/home/user1
Too many/few fields
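The field-count check that pwck performs can be approximated with awk: a valid entry has exactly seven colon-separated fields. A sketch, fed the malformed sample line from above:

```shell
# Flag passwd-format lines that do not have exactly 7 fields,
# similar to pwck's "Too many/few fields" check.
awk -F: 'NF != 7 { print $1 ": Too many/few fields" }' <<'EOF'
daemon:*:1:5::/:/sbin/sh
user1:fnnmD.DGyptLU:301:301:student:/home/user1
EOF
```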
/etc/shadow (r--------)
user1:AdOK60AazRgXU:12269:70:140:70:35::
Student Notes
The default permissions on the /etc/passwd file are r--r--r--. Since the file is world-
readable, anyone with a valid login can view the file and view encrypted passwords. Hackers
sometimes exploit this fact to extract a list of encrypted passwords and run a password
cracking utility to gain access to other users’ accounts.
HP’s shadow password functionality addresses this problem by moving encrypted passwords
and other password information to the /etc/shadow file, which has 400 permissions to
ensure that it is only readable by root. Other user account information (UIDs, GIDs, home
directory paths, and startup shells) remain in the /etc/passwd file to ensure that login,
ps, ll, and other commands can still convert UIDs to usernames.
1. Shadow password support is included by default in 11i v2 and v3. HP-UX 11i v1
administrators, however, must download and install the ShadowPassword patch bundle
from http://software.hp.com/. Use the swlist command to determine if the
product has already been installed.
# swlist ShadowPassword
2. Run pwck to verify that there aren’t any syntax errors in your existing /etc/passwd file.
# pwck
3. Use the pwconv command to move your passwords to the /etc/shadow file.
# pwconv
4. Verify that the conversion succeeded. The /etc/passwd file should remain world-
readable, but the /etc/shadow file should only be readable by root. The encrypted
passwords in /etc/passwd should have been replaced by “x”s.
# ll /etc/passwd /etc/shadow
-r--r--r-- 1 root sys 914 May 18 14:35 /etc/passwd
-r-------- 1 root sys 562 May 18 14:35 /etc/shadow
5. You can revert to the traditional non-shadowed password functionality at any time via the
pwunconv command.
# pwunconv
All of the standard password commands, including passwd, useradd, usermod, userdel,
and pwck are shadow password aware.
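Conceptually, pwconv carries each hash from /etc/passwd into a new /etc/shadow entry. The core of that transformation can be sketched in awk (illustrative only; the real pwconv also records aging information, replaces each passwd hash with an x, and handles many edge cases):

```shell
# Emit a minimal /etc/shadow-style entry (username:hash:::::::)
# for each /etc/passwd-format line on stdin. Illustrative only:
# the seven trailing empty fields are the aging/expiration fields
# described later in the chapter.
to_shadow() {
    awk -F: -v OFS=: '{ print $1, $2, "", "", "", "", "", "", "" }'
}
```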
Fields in /etc/shadow
The /etc/shadow file is an ASCII file consisting of any number of user entries separated by
newlines. Each user entry line consists of the following fields separated by colons:
username Each login name must match a username in /etc/passwd. In 11i v3,
/etc/shadow is compatible with long usernames as described on the
/etc/passwd slide previously.
last changed The number of days since January 1, 1970 that the password was last
modified. This field is used by the password aging mechanism, which will be
described later in the chapter.
min days The minimum number of days that a user must retain a password before it can
be changed. This field is used by the password aging mechanism, which will
be described later in the chapter.
max days The maximum number of days for which a password is valid. A user who
attempts to login after his password has expired is forced to supply a new one.
If min days and max days are both zero, the user is forced to change his
password the next time he logs in. If min days is greater than max days, then
the password cannot be changed. These restrictions do not apply to the
superuser. This field is used by the password aging mechanism, which will be
described later in the chapter.
warn days The number of weeks the user is warned before his password expires. This
field is used by the password aging mechanism, which will be described later
in the chapter.
inactivity The maximum number of days of inactivity allowed after a password has
expired. The account is locked if the password is not changed within the
specified number of days after the password expires. If this field is set to zero,
then the user is required to change his password. This field is only used by
HP-UX trusted systems, which aren’t discussed in this course.
expiration The absolute number of days since Jan 1, 1970 after which the account is no
longer valid. A value of zero in this field indicates that the account is locked.
reserved The reserved field is always null, and is reserved for future use.
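The day counts stored in /etc/shadow can be converted back to calendar dates. A sketch assuming GNU date (the -d @SECONDS form is a GNU extension, so this is a workstation-side convenience rather than an HP-UX command):

```shell
# Convert a shadow-file "days since 1970-01-01" count to a date.
# Assumes GNU date; a day is 86400 seconds.
days_to_date() {
    date -u -d "@$(( $1 * 86400 ))" +%Y-%m-%d
}
```

For the sample entry shown earlier, days_to_date 12269 reports 2003-08-05 as the last password change.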
Editing /etc/shadow
Manually editing the /etc/shadow file isn’t recommended. On a shadow password system,
you should use the useradd, usermod, userdel, and passwd commands to manage user
accounts in both /etc/passwd and /etc/shadow. These commands will be described in
detail later in the chapter.
Traditionally, HP-UX has used a variation of the DES encryption algorithm to encrypt user
passwords in /etc/passwd. HP-UX 11i v2 and v3 now support the more secure SHA-512
algorithm if you install the Password Hashing Infrastructure patch bundle from
http://software.hp.com. HP-UX 11i v3 also supports long passwords up to 255
characters if you add the LongPass11i3 patch bundle, too. Use the following commands to
determine if your system has these patch bundles:
In 11i v2:
# swlist SHA
In 11i v3:
After installing the software, add the following two lines to /etc/default/security to
enable SHA512 password hashing:
# vi /etc/default/security
CRYPT_DEFAULT=6
CRYPT_ALGORITHMS_DEPRECATE=__unix__
The lines above ensure that when passwords are created or changed, HP-UX always uses the
new SHA-512 algorithm rather than the legacy 3DES __unix__ algorithm. Existing users
can continue using their legacy passwords until their passwords expire, or until they
manually change their passwords.
As users change their passwords, note that the resulting passwords in /etc/shadow
become much longer. The $6$ prefix in the second password field below indicates that the
password was encrypted via SHA-512.
Before: user1:9oTPronwCKT9w:14370::::::
After: user1:$6$At65DRDJ$e9MfDCRnMMyJp1OeaOlzgslSyaXmzmS1TgGdni8
SUqrYYPvGSZXZNh/Ov0O5RdMgCe3Vap5DApx0zpr6XB190.:14370::::::
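A hash's format can be recognized from its prefix. A sketch covering the two formats this chapter discusses (other crypt prefixes exist and would report as unknown here):

```shell
# Classify an encrypted password string: a "$6$" prefix marks
# SHA-512, while a bare 13-character string is the legacy
# DES-based __unix__ format.
hash_type() {
    case $1 in
        '$6$'*)        echo "SHA-512" ;;
        ?????????????) echo "legacy DES" ;;   # exactly 13 characters
        *)             echo "unknown" ;;
    esac
}
```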
This functionality only works on systems that store passwords in /etc/shadow rather than
/etc/passwd.
NIS and NIS+ are incompatible with this feature, as are some third party applications that
directly parse encrypted passwords.
On 11i v3 systems, you can also enable long passwords up to 255 characters in length by
adding this line to /etc/default/security:
# vi /etc/default/security
CRYPT_DEFAULT=6
CRYPT_ALGORITHMS_DEPRECATE=__unix__
LONG_PASSWORD=1
This functionality only works on systems that store passwords in /etc/shadow, and that
have the SHA512 password functionality enabled.
other::1:root,daemon,uucp,sync
users::20:
accts::1001:user1,user2
sales::1002:user1,user2,user3,user4,user5,user6
Student Notes
When a user logs in on an HP-UX system, HP-UX checks the GID field in the user's
/etc/passwd entry to determine the user’s primary group membership. The /etc/group
file determines a user’s secondary group memberships.
Users will be granted group access rights to any file associated with either their primary or
secondary groups.
New files and directories that the user creates will, by default, be assigned to the user’s
primary group. Users who prefer to associate new files and directories with a secondary
group can use the newgrp command to temporarily change their GID.
# newgrp sales
# newgrp
To determine which groups a user belongs to, use the groups command.
# groups user1
sales accts
group_name is the mnemonic name associated with the group. If you ll a file, you will see
this name printed in the group field.
In 11i v1 and v2, group names may only be 8 characters in length. In 11i v3,
the lugadmin command enables long group names up to 255 characters.
group_id is the group ID (GID). This is the number that should be placed in the
/etc/passwd file in the group_id field.
GIDs 1-99 are reserved for other predefined groups required by the system.
SAM, SMH, and ugweb automatically assign GID numbers when creating
new groups.
Using large GIDs may cause problems when sharing files with other systems
that don’t support large GIDs.
group_list is a list of usernames of users who are members of the group. A user's
primary group is defined in the fourth field of /etc/passwd, not in the
/etc/group file.
For more information on the /etc/group file, see group(4) in the HP-UX Reference
manual.
# grpck
users::20:root,user101
user101 - Logname not found in password file
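The cross-check grpck performs against /etc/passwd can be sketched in awk. check_members is a hypothetical helper that takes the passwd file first and the group file second:

```shell
# Report group members that have no entry in the passwd file,
# similar to grpck's "Logname not found" check.
# Usage: check_members passwd_file group_file
check_members() {
    awk -F: '
        NR == FNR { known[$1] = 1; next }   # first file: passwd entries
        {                                   # second file: group entries
            n = split($4, m, ",")
            for (i = 1; i <= n; i++)
                if (m[i] != "" && !(m[i] in known))
                    print m[i] " - Logname not found in password file"
        }' "$1" "$2"
}
```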
Student Notes
The useradd command provides a convenient mechanism for adding user accounts.
Without any options, useradd simply adds a user to the /etc/passwd file using all of the
user account defaults:
# useradd user1
# grep user1 /etc/passwd
user1:x:101:20::/home/user1:/sbin/sh
Most administrators choose to override one or more of these defaults via some combination
of the command line options listed below:
-o -u uid -u specifies the User ID (UID) for the new user. uid must be a non-
negative integer less than MAXUID as it is defined in the
/usr/include/sys/param.h header file. uid defaults to the next
available unique number above the maximum currently assigned number.
UIDs from 0-99 are reserved.
The -o option allows the UID to be non-unique. This is most useful when
creating multiple user accounts with UID 0 administrator privileges.
-G group Specifies a comma-separated list of additional groups, given as GIDs or
group names. This defines the supplemental group memberships of the
new login. Duplicates within the -g and -G options are ignored.
-c comment Specifies the comment field in the /etc/passwd entry for this login. This
can be any text string. A short description of the new login is suggested for
this field. The field may be used to record users’ names, telephone
numbers, office locations, employee numbers, or other information. The
field isn’t referenced by the system.
-k skeldir Specifies the skeleton directory containing files that should be copied to
all new user accounts. Defaults to /etc/skel. See the /etc/skel
discussion later in this chapter for more information.
-m -d dir -d specifies the new user’s home directory path. The home directory path
defaults to /home/username. With the optional -m (make) option,
useradd also creates the home directory.
-s shell Specifies the full pathname of the new user’s login shell. By default, the
system uses /sbin/sh as the login shell. /sbin/sh is a POSIX shell, but
it’s a “statically linked” executable that consumes more system resources
than the dynamically linked /usr/bin/sh shell. /sbin/sh is required
for the root account, but other accounts should use /usr/bin/sh.
-e expire Specifies the date after which this login can no longer be used. After
expire, no user will be able to access this login. Use this option to create
temporary logins. expire, which is a date, may be typed in a variety of
formats, including mm/dd/yy. See the man page for other supported
formats. This option only works on systems configured to use the
/etc/shadow file.
-f inactive Specifies the maximum number of days of continuous inactivity of the login
before the login is declared invalid. This option is only supported on
trusted systems. To learn more about HP’s trusted system functionality,
attend HP Customer Education’s H3541S course.
-p password Specifies an encrypted password for the account. The argument passed to
-p must be a valid encrypted password, created via the crypt() library
function (available from C and Perl). The example below uses command
substitution to execute a perl command that encrypts password “hp” for
user1. Although this solution is convenient, beware that the command
(which includes the user’s cleartext password) will appear in the process
table and in ~/.sh_history.
If -p isn’t specified, useradd creates the user account, but doesn’t enable
it. Execute the passwd username command to interactively assign a
password to the new account.
-t template Specifies a template file, which establishes default options for the
command. See the user template discussion below.
/etc/default/useradd is the default template file.
username Specifies the new user’s username. The username should be between one
and eight characters in length. The first character should be alphabetic. If
the name contains more than eight characters, only the first eight are
significant.
The administrator can either define a password for the user or set a null password:
In either case, most administrators force new users to choose a new, memorable password
the first time they login.
-c "C programmer" \ # comment
-g developer \ # primary group
-s /usr/bin/csh # default shell
To verify that the template was created, execute useradd with just the -D and -t options,
or simply cat the file.
# useradd -D -t /etc/default/useradd.cusers
GROUPID 20
BASEDIR /home
SKEL /etc/skel
SHELL /usr/bin/csh
INACTIVE -1
EXPIRE
COMMENT programmer
CHOWN_HOMEDIR no
CREAT_HOMEDIR no
ALLOW_DUP_UIDS no
The example below uses the new template to create a user account. Recall that -m creates a
home directory for the new user.
Student Notes
User account settings may be modified by the administrator, or, to a lesser extent, by users.
-l username Changes the user’s username. This option doesn’t, however, change the
user’s home directory name. See the -m and -d options below.
The -o option allows the new UID to be non-unique (i.e., allows duplicate
UIDs). This is most useful when creating multiple user accounts with UID
0 administrator privileges.
-c comment Specifies the comment field in the /etc/passwd entry for this login. This
can be any text string. A short description of the new login is suggested for
this field. The field may be used to record users’ names, telephone
numbers, office locations, employee numbers, or other information. The
field isn’t referenced by the system.
-p password Specifies an encrypted password for the account. The argument passed to
-p must be a valid encrypted password, created via the crypt() library
function. The example below uses command substitution to execute a
perl command that encrypts password “hp” for the user1 account.
# passwd user1
Changing password for user1
New password: ******
Re-enter new password: ******
Passwd successfully changed
-s shell Specifies the full pathname of the new user’s login shell. By default, the
system uses /sbin/sh as the login shell. /sbin/sh is a POSIX shell, but
it’s a “statically linked” executable that consumes more system resources
than the dynamically linked /usr/bin/sh shell. /sbin/sh is required
for the root account, but other accounts should use /usr/bin/sh.
-e expire Specifies the date after which this login can no longer be used. After
expire, no user will be able to access this login. Use this option to create
-f inactive Specifies the maximum number of days of continuous inactivity of the login
before the login is declared invalid. This option is only supported on
trusted systems. To learn more about HP’s trusted system functionality,
attend HP Customer Education’s H3541S course.
$ passwd user1
Changing password for user1
New password: ******
Re-enter new password: ******
Passwd successfully changed
Alternatively, use the -d option to set a null password. Users with null passwords aren’t
prompted to enter a password at login.
# passwd -d user1
In either case, consider using the -f option to force the user to personally select a new
password at next login.
# passwd -f user1
$ passwd
Changing password for user1
Old password: ******
New password: ******
Re-enter new password: ******
Passwd successfully changed
Users can modify some of their other account attributes, too, via the chsh and chfn
commands.
Student Notes
If a user is going on leave, or no longer needs access to the system, deactivate/lock their
account. Deactivating an account places an “*” in the user’s password field and prevents the
user from logging in.
# passwd -l user1
If the user returns, simply choose a new password for the user to reactivate their account.
# passwd user1
If a user’s account has been deactivated and the user’s files will never be used by another
user, reclaim the user’s disk space by removing their home directory.
# rm -rf /home/user1
Some users may have files scattered across other directories as well. Use the find
command to find and remove the user’s files and directories. The -i option provides an
opportunity to review each file before removing it.
Alternatively, consider reassigning the user’s files to a different user. The example below
chowns all files owned by user1 to user2.
Student Notes
If you are certain that a user will never need access to your system again, you may prefer to
remove the user’s account from the /etc/passwd file entirely.
# userdel user1
If you want to remove the user’s home directory, too, include the –r (recursive remove)
option.
# userdel -r user1
Some users may have files scattered across other directories as well. You can use the find
command to find and remove the user’s other files and directories.
Or, perhaps simply leave the files on disk as-is. If you choose this approach, the ll
command will report the old user’s userid rather than username in the file owner field. Use
the find command to generate a list of all such “orphaned” files.
Student Notes
Many administrators force users to change their passwords on a regular basis via password
aging. Thus, even if a hacker were to obtain a copy of the /etc/passwd file, passwords
gleaned from that file would only be useful for a short period of time.
# passwd -n 7 -x 70 -w 14 user1
<min> argument rounded up to nearest week
<max> argument rounded up to nearest week
<warn> argument rounded up to nearest week
The -x option defines the maximum number of days a user is allowed to retain a password.
In the example on the slide, user1 will be forced to change his or her password every 70 days.
The -n option defines the minimum number of days a user is required to retain a password
after a password change. This, too, is rounded to the nearest week. In the example on the
slide, user1 must retain each new password for a minimum of 7 days. This prevents a user
from changing their password, then immediately reverting to their previously used password
each time their password expires.
-n Sets the minimum number of days between password changes. Although this
parameter must be specified in days, passwd rounds up to the nearest week. In the
example on the slide, user1 must retain each new password for a minimum of 7 days.
This prevents a user from changing their password, then immediately reverting to
their previous password.
-x Sets the maximum number of days allowed between password changes. Although
this parameter must be specified in days, passwd rounds up to the nearest week.
-w Sets the password expiration warning period. The -w option causes the system to
display a login warning message one or more weeks before a user’s password expires.
The warning period is specified in days, but must be a multiple of seven days. The
-w option is only available on systems configured to use the /etc/shadow file.
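The rounding that passwd applies to these arguments is simply "up to the next multiple of seven". A sketch:

```shell
# Round a day count up to the nearest whole week, as passwd
# does for the -n, -x, and -w aging arguments.
round_to_week() {
    echo $(( (($1 + 6) / 7) * 7 ))
}
```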
You can check the password status of a user's account with the -s option.
# passwd -s user1
user1 PS 03/21/05 7 70 14
This generates a one-line summary indicating the minimum and maximum password
aging parameters, as well as the date when the password was last changed. To view the
aging status of all user accounts, execute:
# passwd -sa
user1 PS 03/21/05 7 70 14
user2 PS
user3 PS
The first character of the age, M, denotes the maximum number of weeks for which a
password is valid. A user who attempts to login after the password has expired is forced to
supply a new one. The next character, m, denotes the minimum period in weeks that must
expire before the password can be changed. The remaining characters define the week
(counted from the beginning of 1970) when the password was last changed (a null string is
equivalent to zero).
If m = M = 0 the user is forced to change the password at the next log in (and the age
disappears from the password entry). If m > M (the string ./), only a superuser (not the user)
can change the password.
Although these parameters may be set manually, it's much easier to use the
/usr/bin/passwd command!
# vi /etc/default/security
MIN_PASSWORD_LENGTH=
PASSWORD_MIN_UPPER_CASE_CHARS=
PASSWORD_MIN_LOWER_CASE_CHARS=
PASSWORD_MIN_DIGIT_CHARS=
PASSWORD_MIN_SPECIAL_CHARS=
PASSWORD_MAXDAYS=
PASSWORD_MINDAYS=
PASSWORD_WARNDAYS=
Student Notes
In order to ensure that users choose secure passwords, HP-UX supports a configuration file
called /etc/default/security that may be used to define a variety of security policies.
To use these policies in 11i v1, install the ShadowPassword patch bundle and patch
PHCO_24606. 11i v3, as well as the SecurityExt software bundle in 11i v2, supports
several additional parameters not shown on the slide. See the security(4) man page for a
complete list of policies and parameters available on your system.
MIN_PASSWORD_LENGTH=N
New passwords must contain a minimum of N characters.
PASSWORD_MIN_UPPER_CASE_CHARS=N
New passwords must contain a minimum of N upper-case characters. In 11i v1, this
only applies if PHCO_24606 is installed.
PASSWORD_MIN_LOWER_CASE_CHARS=N
New passwords must contain a minimum of N lower-case characters. In 11i v1, this
only applies if PHCO_24606 is installed.
PASSWORD_MIN_DIGIT_CHARS=N
New passwords must contain a minimum of N digit characters.
PASSWORD_MIN_SPECIAL_CHARS=N
New passwords must contain a minimum of N special (non-alphanumeric) characters.
PASSWORD_MAXDAYS=N
This parameter controls the default maximum number of days that passwords are
valid. This parameter applies only to local users and does not apply to trusted
systems. The passwd -x option can be used to override this value for a specific
user.
PASSWORD_MINDAYS=N
This parameter controls the default minimum number of days before a password can
be changed. This parameter applies only to local users and does not apply to trusted
systems. The passwd -n option can be used to override this value for a specific user.
PASSWORD_WARNDAYS=N
This parameter controls the default number of days before password expiration that a
user is to be warned that the password must be changed. This parameter applies only
to local users on Shadow Password systems. The passwd -w option can be used to
override this value for a specific user.
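A composition check of the kind these parameters describe can be sketched in portable shell. The thresholds here (8 characters, one upper-case letter, one digit) are illustrative example values, not security(4) defaults:

```shell
# Check a candidate password against example composition rules.
# The thresholds are illustrative, not HP-UX defaults.
check_password() {
    pw=$1
    [ "${#pw}" -ge 8 ] || { echo "too short"; return 1; }
    case $pw in *[A-Z]*) ;; *) echo "needs an upper-case char"; return 1 ;; esac
    case $pw in *[0-9]*) ;; *) echo "needs a digit"; return 1 ;; esac
    echo "ok"
}
```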
Managing Groups
• Each user can belong to one or more groups
• Groups can be managed via groupadd/groupmod/groupdel
• Group memberships can be managed via usermod and groups
Student Notes
Each user on an HP-UX system may belong to one or more groups. Groups may be managed
via the groupadd/groupmod/groupdel command line utilities. Group membership may be
managed via the usermod and groups commands.
Replace the current list of users in a group with a new list of users:
Delete a group:
# groupdel accounts
# groups user1
Managing /etc/skel
/etc/skel/                          /home/user1/
.profile                            .profile
.shrc       -- copied to new -->    .shrc
.exrc          accounts             .exrc
Student Notes
When a user logs into a UNIX system, several scripts execute to establish the user’s shell
environment. The list below describes the scripts that execute during the POSIX and Korn
shell login process. Login processes for other shells may vary.
1. After the user enters a username and password, the /usr/bin/login program checks the
/etc/passwd file to verify that the user has a valid account. If the user's username and
password are correct, the login program launches a shell for the user.
2. The shell first executes the system-wide /etc/profile script, in which the
administrator defines default environment variables and other settings for all users.
3. Next, the user's personal ~/.profile script executes. Each user has a .profile
script that executes at login time to define additional environment variables, or to
override the default environment variable values that the administrator defined in
/etc/profile.
4. Finally, the shell looks for an environment variable called ENV. The ENV variable
identifies a personal shell startup program that users may optionally choose to configure.
POSIX shell users often create a ~/.shrc shell startup script, while Korn shell users
typically define a ~/.kshrc shell startup script. Unlike the ~/.profile script, which
only executes at login, the shell startup script executes every time the user logs in, runs a
shell script, opens a terminal emulator window, or launches a shell. The POSIX and Korn
shell startup scripts are typically used to define shell aliases.
Users can modify their personal ~/.profile and ~/.shrc scripts. The administrator can
create a template version of these in the /etc/skel directory. useradd automatically
copies the files found in this directory to each new user home directory.
Thus, if you wish to change the default configuration files that are copied to new users' home
directories, simply modify the files in /etc/skel. Note that changes made in /etc/skel
won't affect existing users' home directories. Updated files will only be copied to new user
accounts.
Additional files can be copied into /etc/skel as well, if your applications require
configuration files in users' home directories. The /etc/skel directory on the slide
includes a .exrc file which defines vi macros and keyboard shortcuts.
Administrators on very large systems may choose to create subdirectories under /etc/skel
for different user account types. Then, when creating a user account, use the useradd -k
skeldir option to specify which skeleton directory useradd should copy files from.
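The copy that useradd performs from a skeleton directory can be sketched with plain shell commands. The simulation below uses scratch paths under /tmp so it runs without root; on a real system, useradd -m -k <skeldir> does this for you (the directory name, user name, and file contents are illustrative):

```shell
#!/usr/bin/sh
# Simulate what "useradd -m -k /etc/skel.dev devuser1" copies.
skeldir=/tmp/skel.demo             # stand-in for /etc/skel.dev
homedir=/tmp/home.demo/devuser1    # stand-in for the new home directory
mkdir -p $skeldir $homedir
echo 'export PATH=$PATH:/usr/contrib/bin' > $skeldir/.profile
echo 'alias ll="ls -l"'                   > $skeldir/.shrc
# useradd copies each file in the skeleton directory to the new home:
cp -p $skeldir/.profile $skeldir/.shrc $homedir/
ls -a $homedir
```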
NOTE: There is no CDE .dtprofile script in /etc/skel. The first time a user
logs in via CDE, HP-UX attempts to copy either
/etc/dt/config/sys.dtprofile (if it exists) or
/usr/dt/config/sys.dtprofile to the user's ~/.dtprofile. Use the
following procedure to customize the default .dtprofile:
# cp -p /usr/dt/config/sys.dtprofile \
/etc/dt/config/sys.dtprofile
# vi /etc/dt/config/sys.dtprofile
TERM The TERM variable defines the user's terminal type. If the TERM variable is set
incorrectly, applications may not be able to write to the user's terminal properly.
Users can set the TERM variable manually, but more commonly it is set via the
ttytype command, which can usually determine your terminal type automatically.
The following code can be included in one of the scripts that runs at login
to set your terminal type for you:
if [ "$TERM" = "" -o \
"$TERM" = "unknown" -o \
"$TERM" = "dialup" -o \
"$TERM" = "network" ]
then
eval `ttytype -s -a`
fi
export TERM
PS1 The PS1 variable defines your shell prompt string. This, too, can be changed by
the user. Some useful sample PS1 values are shown below:
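A few commonly used prompt definitions look like this (all illustrative; set and export the one you want in ~/.profile). Note the single quotes, which defer variable expansion until the prompt is displayed:

```shell
# Each assignment below is a possible prompt; the last one wins.
PS1='$ '             # the plain POSIX shell prompt
PS1='$LOGNAME $ '    # include the username in the prompt
PS1='$PWD $ '        # include the current directory in the prompt
export PS1
echo "$PS1"
```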
LPDEST LPDEST defines the user's default printer. The printer named in LPDEST takes
precedence over the system-wide default printer configured by the system
administrator. Examples:
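For example (the printer name is hypothetical):

```shell
LPDEST=laser1      # this user's print jobs now go to the printer "laser1"
export LPDEST
echo "$LPDEST"
```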
PATH Every time the user enters a command, the shell must find the executable
associated with the requested command. The PATH variable contains a ":"
separated list of directories that the shell should search for executables. If users
need access to new applications and utilities, you may need to modify their PATH
variables. You can append a new directory to the user's PATH using syntax
similar to the following:
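A sketch of the append, using a hypothetical application directory:

```shell
# Append a directory to the search path; it is searched after the
# directories already in PATH.
PATH=$PATH:/opt/myapp/bin
export PATH
echo "$PATH"
```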
The initial PATH variable value is usually taken from the /etc/PATH file.
Installing an application often updates the /etc/PATH file automatically,
so it may not be necessary to update individual users' PATHs.
EDITOR Three variables must be defined if your users want to use command line editing:
export EDITOR=vi
export HISTFILE=~/.sh_history
export HISTSIZE=50
EDITOR defines the user's preferred command line editor. emacs and vi are the
only allowed values. HISTFILE determines the file that should be used to log
commands entered by the user. HISTSIZE determines the number of commands
retained in the shell's command buffer.
TZ Defines the user’s time zone. Internally, UNIX records timestamps as the number
of seconds since January 1, 1970 UTC. Commands that display timestamps
(date, who, ll, etc.) display dates and times relative to the timezone specified in
the user’s TZ variable. The administrator can establish a system-wide default
value in /etc/TIMEZONE, but individual users may wish to customize the
variable to match their local time zone. See the /usr/lib/tztab file for a list
of recognized time zones. The example below establishes a TZ value appropriate
for users in Chicago.
export TZ=CST6CDT
These are just some of the more commonly defined environment variables that you can
define for your users. Other environment variables are defined in the man page for the
POSIX shell (man 1 sh-posix), and still others may be required by your applications.
Environment variables can be set from the command line, but are more commonly defined in
the login configuration files, which will be covered later in this chapter. You can view a list
of currently defined environment variables by executing the env command:
# env
Directions
Perform the following tasks. Record the commands you use, and answer all questions. The
password for user accounts user1-24 is class1.
2. Do you see an entry for the new user in the /etc/passwd file?
Do you see an entry for the new user in the /etc/group file? Explain.
5. Force the user to choose a new password the first time they login.
6. Login as user25 to verify that the new account works. What happens?
8. Oops! We forgot to define the comment field for user25. Set user25’s comment field
to “student account”.
10. Create a /home/project directory that user24 and user25 can use to store and
manage files associated with their project. Ensure that the administrator and members of
the project group are the only users who can access the shared directory.
# mkdir /home/project
# chown root:project /home/project
# chmod 770 /home/project
11. Verify that user24 and user25 have access to the group, and that other users don’t.
3. What changed in the /etc/passwd file because of the commands in the previous two
questions?
4. What happens now when user24 and user25 attempt to log in? telnet to your local
host, and try to login using both usernames. What happens?
# telnet localhost
5. What happened to the users’ home directories? Do a long listing of /home. Can you
explain what you see?
# ll -d /home/user24 /home/user25
b. Ensure that users wait at least one week between password changes.
3. Apply the same password aging parameters to all users by modifying the appropriate
variables in /etc/default/security. Also require users to choose passwords that
are at least eight characters.
4. Before you continue on to the next part, revert to a non-shadowed password file.
Hint: Try running the sample shell script below. What must be changed in the shell script
to automatically create the desired accounts?
#!/usr/bin/sh
n=1
while ((n<=50))
do
echo stud$n
((n=n+1))
done
A similar Accounts for Users and Groups functional area exists in sam in earlier
versions of HP-UX.
Answer:
2. Do you see an entry for the new user in the /etc/passwd file?
Do you see an entry for the new user in the /etc/group file? Explain.
Answer:
There should be an entry in the /etc/passwd file for the new user. However, the user
isn’t listed in /etc/group. A user's primary group membership is recorded in the
/etc/passwd GID field; /etc/group only records secondary group memberships.
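As an illustration (the names, UID, and GIDs below are hypothetical), the two files record membership differently:

```shell
# Hypothetical entries: field 4 of the passwd entry (GID 20) is the
# user's PRIMARY group; the group entry lists only SECONDARY members.
passwd_entry='user25:x:125:20::/home/user25:/sbin/sh'
group_entry='project::150:user24,user25'
echo "$passwd_entry" | cut -d: -f4     # prints the primary GID: 20
echo "$group_entry"
```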
Answer:
The user can’t login at this point since the user’s password hasn’t been defined yet.
Answer:
# passwd user25
5. Force the user to choose a new password the first time they login.
Answer:
# passwd -f user25
6. Login as user25 to verify that the new account works. What happens?
# login
Answer:
Answer:
$ exit
8. Oops! We forgot to define the comment field for user25. Set user25’s comment field
to “student account”.
Answer:
Answer:
# groupadd project
# usermod -G project user24
# usermod -G project user25
10. Create a /home/project directory that user24 and user25 can use to store and
manage files associated with their project. Ensure that the administrator and members of
the project group are the only users who can access the shared directory.
# mkdir /home/project
# chown root:project /home/project
# chmod 770 /home/project
11. Verify that user24 and user25 have access to the group, and that other users don’t.
Answer:
# passwd -l user24
Answer:
# userdel user25
3. What changed in the /etc/passwd file because of the commands in the previous two
questions?
Answer:
user24's password field is set to "*" to indicate that the account is disabled.
user25's /etc/passwd entry disappeared entirely.
4. What happens now when user24 and user25 attempt to log in? telnet to your local
host, and try to login using both usernames. What happens?
# telnet localhost
Answer:
Both login attempts should fail.
5. What happened to the users’ home directories? Do a long listing of /home. Can you
explain what you see?
# ll -d /home/user24 /home/user25
Answer:
Both directories are still there, but the owner field for user25's directory lists a number
rather than user25's username. Internally, HP-UX identifies file ownership by UID
rather than username. ll attempts to resolve these UIDs into usernames. However,
since user25 is no longer listed in /etc/passwd, the ll command has no way of
determining which username is associated with the /home/user25 directory.
Answer:
# passwd user24
Answer:
Each /etc/shadow entry should contain a user name, an encrypted password, and a
timestamp field that indicates when the password was last changed. The other fields
should be empty.
b. Ensure that users wait at least one week between password changes.
Answer:
3. Apply the same password aging parameters to all users by modifying the appropriate
variables in /etc/default/security. Also require users to choose passwords that
are at least eight characters.
Answer:
# vi /etc/default/security
MIN_PASSWORD_LENGTH=8
PASSWORD_MAXDAYS=180
PASSWORD_MINDAYS=7
PASSWORD_WARNDAYS=7
4. Before you continue on to the next part, revert to a non-shadowed password file.
Answer:
# pwunconv
Answer:
#!/usr/bin/sh
n=1
while ((n<=50))
do
echo stud$n
useradd -m -s /usr/bin/sh stud$n
passwd -d -f stud$n
((n=n+1))
done
From the Home Page, click "System Configuration." From the System Configuration Window,
click "Accounts for Users and Groups".
When this exercise is complete, sign out of the SMH utility and close the browser window.
A similar Accounts for Users and Groups functional area exists in sam in earlier
versions of HP-UX.
• Describe the key contents of /sbin, /usr, /stand, /etc, /dev, /var (OS-related
directories).
• Use find, whereis, and which to find files in the HP-UX file system.
[Slide diagram: the file system separates static files (OS and application executables, libraries, and system startup files) from dynamic files (OS and application configuration files, temporary files, and user files).]
Student Notes
Many HP-UX system administration tasks require the administrator to find and manipulate
system and application configuration and log files. Understanding the philosophy behind the
organization of the file system will ensure that you can successfully find the resources you
need to perform administration tasks.
Files in the HP-UX file system are organized by various categories. Static files are separated
from dynamic files. Executable files are separated from configuration files. This philosophy
provides a logical structure for the file system and simplifies administration as well.
Dynamic files and directories change frequently. They are stored in a separate portion of the
file system. Configuration, temporary, and user files are all considered to be dynamic.
• Executable files can be easily shared across the network, while host-specific
configuration data is stored locally on each host.
System Directories
[Slide diagram: static directories (/usr, /sbin, /stand, and the application directories under /opt) shown shaded; dynamic directories (/etc, /dev, /var, /tmp, /mnt, /home) shown unshaded.]
Student Notes
The shaded directories in the diagram on the slide contain static data, while unshaded
directories in the diagram contain dynamic data. The sharable portion of the operating
system is located beneath /usr and /sbin. Only the operating system can install files into
these directories. Applications are located beneath /opt.
The directories /usr, /sbin, and the application subdirectories below /opt can be shared
among networked hosts. Therefore, they must not contain host-specific information. The
host-specific information is located in directories in the dynamic area of the file system.
Directory Definition
/sbin Minimum commands needed to boot the system and mount other file systems.
/opt Applications.
/var Dynamic information such as logs and spooler files (previously in /usr).
The allowed subdirectories in /usr are defined below; no additional subdirectories should
be created.
In general, files beneath /var are somewhat temporary. System administrators that wish to
free up disk space are likely to search the /var hierarchy for files that can be purged. Some
sites may choose not to make automatic backups of the /var directories.
/var/adm/cron Used for log files maintained by cron. cron is a subsystem that
allows you to schedule processes to run at a specific time or at
regular intervals.
/var/adm/syslog System log files. Applications as well as the kernel can log
messages here. The syslogd daemon is responsible for writing
the log messages. The behavior of the syslogd daemon can be
customized with the /etc/syslog.conf file. The name of the
default log file is /var/adm/syslog/syslog.log. At boot
time this file is copied to OLDsyslog.log, and a new
syslog.log is started. The syslog.log file is an ASCII file.
/var/adm/sulog This file contains a history of all invocations of the switch user
command. sulog is an ASCII log file.
/etc/utmp On an 11i v1 system, this file contains a record of all users logged
onto the system. This file is used by commands such as write
and who. This file is not an ASCII file and cannot be directly
viewed.
/etc/utmps On an 11i v2 system, this file contains a record of all users logged
onto the system. This file is used by commands such as write
and who. This file is not an ASCII file and cannot be directly
viewed.
Application Directories
[Slide diagram: an application's static files live under /opt/<application>/, while its dynamic, host-specific files live under /etc/opt/<application> and /var/opt/<application>.]
Student Notes
Each application will have its own subdirectory under /opt, /etc/opt, and /var/opt.
The sharable, or static, part of the application is self-contained in its own
/opt/application directory, which has the same hierarchy as the operating system
layout:
/opt/application/lib Libraries.
The application's host-specific log files are located under /var/opt/application, and
host-specific configuration files are located under /etc/opt/application.
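For a hypothetical application named myapp, the resulting layout would be:

```
/opt/myapp/bin      static:  user executables
/opt/myapp/lib      static:  libraries
/etc/opt/myapp      dynamic: host-specific configuration files
/var/opt/myapp      dynamic: host-specific log and data files
```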
Student Notes
As a system administrator, you will need to reference files in directories all over the HP-UX
file system. HP-UX offers several tools for finding the files and executable files you need to
perform administration tasks.
Examples
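Representative find invocations, sketched here against a scratch directory so the commands can run anywhere; substitute real paths such as /etc or /home on your system:

```shell
#!/usr/bin/sh
# Build a small scratch tree to search.
mkdir -p /tmp/find.demo/etc
touch /tmp/find.demo/etc/hosts
find /tmp/find.demo -name hosts -print    # search by file name
find /tmp/find.demo -type d -print        # search by file type (directories)
```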
Example
# whereis -b sam
sam: /usr/sbin/sam
Examples
# file /sbin/shutdown
/sbin/shutdown: s800 shared executable
# file /etc/passwd
/etc/passwd: ascii text
Directions
Answer all the questions below.
1. Which of the following directories are dynamic?
/etc
/usr
/sbin
/dev
/tmp
2. Viewing a report on your disk space usage, you note that /usr, /var, and /opt are all
nearing 90% capacity. Which of these directories should you be most concerned about?
Why?
4. Where would you expect to find the cp and rm OS user executables? See if you are
correct.
5. Where would you expect to find the smh, useradd, and userdel executables? See if
you are correct.
6. The pre_init_rc utility executes in the early stages of the system start-up procedure to
check for file system corruption. Where would you expect to find this executable? See if
you are correct.
7. There is a system log file that maintains a record of system shutdowns. Where would you
expect to find the shutdown log file? See if you are correct.
8. In which directory would you expect to find the "hosts" configuration file, which contains
network host names and addresses? See if you are correct.
9. Though many utilities and daemons maintain independent log files, many daemons and
services write their errors and other messages to a log file called syslog.log. See if
you can find the path for this file, then check to see if any messages have been written to
the file in the last day.
10. Find all of the directories (if any) under /home that are owned by root.
11. (Optional) Find all the files under /tmp that haven't been accessed within the last day.
12. (Optional) Find all the files on your system that are greater than 10000 bytes in size. If
you needed to make some disk space available on your system, would it be safe to simply
remove these large files?
Directions
Answer all the questions below.
1. Which of the following directories are dynamic?
/etc
/usr
/sbin
/dev
/tmp
Answer:
/etc
/dev
/tmp
2. Viewing a report on your disk space usage, you note that /usr, /var, and /opt are all
nearing 90% capacity. Which of these directories should you be most concerned about?
Why?
Answer:
/var deserves the most attention here because it is a dynamic file system that could
grow quite quickly in case of an error condition that creates entries in the system log files.
/usr and /opt are static file systems that are less likely to cause problems.
Answer:
4. Where would you expect to find the cp and rm OS user executables? See if you are
correct.
Answer:
Both are in /usr/bin, along with all the other user executables.
5. Where would you expect to find the smh, useradd, and userdel executables? See if
you are correct.
Answer:
All three are in /usr/sbin along with many other administrative utilities.
6. The pre_init_rc utility executes in the early stages of the system start-up procedure to
check for file system corruption. Where would you expect to find this executable? See if
you are correct.
Answer:
pre_init_rc is in the /sbin directory, along with other files used during the boot
process.
7. There is a system log file that maintains a record of system shutdowns. Where would you
expect to find the shutdown log file? See if you are correct.
Answer:
8. In which directory would you expect to find the "hosts" configuration file, which contains
network host names and addresses? See if you are correct.
Answer:
9. Though many utilities and daemons maintain independent log files, many daemons and
services write their errors and other messages to a log file called syslog.log. See if
you can find the path for this file, then check to see if any messages have been written to
the file in the last day.
Answer:
# more /var/adm/syslog/syslog.log
10. Find all of the directories (if any) under /home that are owned by root.
Answer:
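One possible answer. The command is demonstrated below against a scratch directory and the current user so it runs unprivileged; on the lab system the equivalent is find /home -type d -user root:

```shell
#!/usr/bin/sh
# Scratch tree; directories are owned by the user running the script.
mkdir -p /tmp/home.demo/user1
find /tmp/home.demo -type d -user "$(id -un)" -print
```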
11. (Optional) Find all the files under /tmp that haven't been accessed within the last day.
Answer:
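One possible answer: -atime +1 selects files whose last access is more than one day old. Demonstrated below with an artificially backdated file in a scratch directory; on the lab system the equivalent is find /tmp -atime +1:

```shell
#!/usr/bin/sh
mkdir -p /tmp/atime.demo
# Backdate the access time to January 2020 (POSIX touch -a -t).
touch -a -t 202001010000 /tmp/atime.demo/old.file
find /tmp/atime.demo -atime +1 -print
```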
12. (Optional) Find all the files on your system that are greater than 10000 bytes in size. If
you needed to make some disk space available on your system, would it be safe to simply
remove these large files?
Answer:
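One possible answer: with a trailing "c", -size counts bytes rather than 512-byte blocks. Demonstrated below against a scratch file; on the lab system the equivalent is find / -type f -size +10000c:

```shell
#!/usr/bin/sh
mkdir -p /tmp/size.demo
# Create a 16384-byte file, comfortably over the 10000-byte threshold.
dd if=/dev/zero of=/tmp/size.demo/big bs=1024 count=16 2>/dev/null
find /tmp/size.demo -type f -size +10000c -print
```

It would not be safe to simply remove every large file found this way: many large files are OS executables, shared libraries, or application data that the system needs.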
• Describe the components of HP-UX legacy and Agile View hardware paths
• Describe the features of HP’s nPar, vPar, VM, and Secure Resource Partitions
• View a system’s hardware model and configuration with machinfo and model
• View a system’s peripheral devices and buses with ioscan and scsimgr
• Add and replace interface cards with and without HP OL* functionality
Hardware Components
[Slide diagram: one or more cell boards or blades, each containing CPUs, memory, and an SBA, interconnected by a blade link or crossbar; LBAs drive PCI-X buses hosting the Core I/O card (LAN, serial, SCSI disk, DVD) and FC HBAs that connect to SAN LUNs; an iLO/MP card provides LAN and serial console access.]
Student Notes
Every recent HP-UX system has several hardware components:
• One or more PA-RISC or Itanium single-, dual-, or quad-core CPUs for processing data.
• One or more System/Local Bus Adapters that provide connectivity to expansion buses.
• One or more PCI I/O expansion buses with slots for add-on Host Bus Adapters.
• One or more Host Bus Adapter cards for connecting peripheral devices.
• One or more Core I/O cards with built-in LAN, console, and boot disk connectivity.
• An Integrated Lights Out / Management Processor (iLO/MP) card to provide local and
remote console access and system management functionality.
The slides that follow describe these components in detail.
CPUs
• HP’s current “Integrity” servers use Intel’s 64-bit EPIC architecture Itanium 2 processors
• HP’s older “hp9000” servers used HP’s proprietary 64-bit PA-RISC processors
• HP provides binary compatibility across processor types and generations
Student Notes
HP’s HP-UX systems utilize two different processor families.
The Itanium 2 architecture uses a variety of techniques to increase parallelism — the ability
to execute multiple instructions during each machine cycle. Parallelism improves
performance because it allows multiple instructions to be executed simultaneously. The
Itanium 2 architecture is designed to make certain the processor can execute as many
instructions per cycle as possible.
A key to the high performance of the IPF processors is the design philosophy at the heart of
the processor, Explicitly Parallel Instruction Computing (EPIC). (IPF® is a
registered trademark of the Intel Corporation.) The EPIC philosophy is a major
reason why Itanium 2 processors are different from other 64-bit
processors, providing much higher instruction-level parallelism without unacceptable
increases in hardware complexity. EPIC achieves such performance by placing the burden of
finding parallelism squarely on the compiler. Although processor hardware can extract a
limited sort of parallelism, the best approach is to let the compiler, which can see the whole
code stream, find the parallelism and make global optimizations. The compiler
communicates this parallelism explicitly to the processor hardware by creating a three-
instruction bundle with directions on how the instructions should be executed. The hardware
focuses almost entirely on executing the code as quickly as possible.
The EPIC architecture, together with several other architecture innovations, gives the IPF
processors a significant advantage over both IA32 and 64-bit RISC systems. As co-developer
of the Itanium 2 architecture, HP has been able to take the lead in bringing production-ready
Itanium 2 based servers to market.
As shown on the slide, Intel has already released several generations of Itanium 2 processors.
The latest generation of Itanium processors, the 9300 “Tukwila” series,
features four processor cores on a single die, which increases computing density and
delivers significant performance gains over earlier single- and dual-core processors. HP’s
newest systems utilize the 9300 series processor chips. Older models utilize the dual-core
9100 and 9200 series Itanium processors.
These multi-core processors are further enhanced by increasing the on-chip cache sizes in
each successive processor generation.
PA-RISC used Reduced Instruction Set Computing (RISC) principles to provide high
performance, and high reliability. HP offered several iterations of its PA-RISC technology
over the years. The early PA7000 series of chips used a 32-bit architecture, while the newer
PA8000 series chips used a 64-bit architecture.
HP’s PA8800 and PA8900 processors are dual-core processors. A single PA8800 or PA8900
processor may contain one or two PA-RISC processor “cores”, thus allowing twice as many
processors in a single system as was previously possible. The hp9000 Superdome supported
up to 64 processor modules, a total of up to 128 PA8900 processor cores.
The PA8900 processor was the last processor in the PA-RISC family. HP stopped selling PA-
RISC servers at the end of 2008, but will support PA-RISC at least through 2013.
• Maintains forward data, source, build environment, and binary compatibility across
all hardware platforms of the same architecture family (e.g., Intel® Itanium® or PA-RISC);
• Provides forward data, source, build environment, and binary compatibility across
HP-UX release versions and updates on HP 9000 servers and Integrity servers on their
respective architectures. This is true for 32-bit or 64-bit applications on either
architecture family;
• Delivers new features and improved performance with each new HP-UX release.
Binary compatibility across operating system releases applies to legacy features
(features that were present in the earlier release). There are some instances, however,
where applications may be required to recompile in order to use or leverage a new
feature.
See the HP-UX release notes for information on new features that may require changes to
applications.
Additionally, there is complete data compatibility between the HP-UX 11i releases for PA-
RISC and Itanium-based systems. No data conversion is required when transferring data
between releases of HP-UX 11i on PA-RISC and Integrity servers.
For a more complete discussion of HP-UX compatibility, see the “HP-UX 11i compatibility for
HP Integrity and HP 9000 servers” white paper at
http://www.hp.com/go/hpux11icompatibility.
HP Integrity servers with Intel Itanium 2 processors offer the best HP-UX performance,
scalability, and investment protection available. HP encourages current PA-RISC customers
to consider upgrading their systems to Itanium. Consult your sales representative for details.
On Integrity systems, you can determine your processor type and configuration via the
machinfo command.
# machinfo
CPU info:
Firmware info:
Firmware revision: 01.02
FP SWA driver revision: 1.18
IPMI is supported on this system.
BMC firmware revision: 1.00
Platform info:
Model: "ia64 hp Integrity BL860c i2"
Machine ID number: 669ab3af-3d4c-11df-abc1-1a4b5386cd07
Machine serial number: USE008XX06
OS info:
Nodename: bl860-1
Release: HP-UX B.11.31
Version: U (unlimited-user license)
Machine: ia64
ID Number: 1721414575
vmunix _release_version:
@(#) $Revision: vmunix: B.11.31_LR FLAVOR=perf
Student Notes
On HP’s mid-range and high-end servers, and on newer blade servers, each system is
comprised of one or more cell boards or blades. Each cell board or blade contains a portion
of the system’s memory and CPU resources.
All of the system’s cell boards or blades are interconnected via a low latency “crossbar” (on
mid-range and high end servers) or blade link (on the blade servers).
HP’s crossbar and blade link technologies ensure that any processor core on a system can
access resources on any other blade or cell board on that same system.
The diagram below shows the blade link used to interconnect foundation blades in HP’s
newer Integrity blade servers:
The diagram below shows the HP sx2000 crossbar technology used to interconnect cell
boards in HP’s cell-based midrange and high-end Superdome servers:
The diagram below shows the HP sx3000 crossbar technology used to interconnect
Superdome 2 blades on the new Superdome 2 server:
• System and Local Bus Adapters provide connectivity to I/O expansion buses
• I/O expansion buses provide one or more slots for device adapter cards
• HP supports PCI, PCI-X, and PCI-E bus types, and slot speeds up to ~2GB/sec
• HP OL* functionality on some servers facilitates adding/removing cards online
• Dedicated buses minimize downtime and maximize performance
Student Notes
Every cell, system board, or blade has a System Bus Adapter (SBA) that provides
connectivity between the system’s processors and the I/O expansion buses.
The SBA connects to one or more Local Bus Adapters (LBAs) on the system’s I/O backplane
via a high-speed communications channel known as a “rope”. Some LBAs have a single rope
connection to the SBA. Other LBAs utilize two ropes to the SBA for greater bandwidth.
Each LBA provides an I/O bus to support one or more interface adapters or Host Bus
Adapters (HBAs).
Since it was first introduced, the PCI standard has been enhanced several times to
accommodate the greater bandwidth and shorter response times demanded from the
input/output (I/O) subsystems of enterprise computers. The table below lists the PCI bus
types available on recent Integrity servers.
The architecture diagram below shows the bus types provided on an Integrity rx6600 entry-
class server. Model-specific technical white papers on HP’s
http://www.hp.com/go/servers website provide similar technical details for other
server models, too.
Rackmount entry-class and mid-range servers have card slots on the backplane of the server
which host the expansion cards.
Superdome servers host expansion cards in one or more I/O chassis accessible from the front
and rear of the server.
Superdome 2 servers have no internal expansion card slots. Rather, Superdome 2 servers
host expansion cards in one or more external I/O expansion enclosures.
HP Integrity blade server administrators can add additional interfaces via the “mezzanine”
expansion card slots located directly on the server blades.
Slides later in the module describe each of these expansion solutions in greater detail.
iLO / MP Cards
• All current HP servers support an Integrated Lights Out Management Processor
• The iLO / MP provides:
− Local console access via a local serial port
− Remote console access via modem or via telnet, HTTPS*, or SSH* network services
− Hardware monitoring and logging
− Power management and control
Student Notes
The next few slides discuss some of the cards and adapters that occupy PCI, PCI-X, and PCI-
Express buses.
All of HP’s recent server models support an Integrated Lights Out / Management Processor
(iLO/MP). The iLO/MP provides several important features:
• Local console access via a local serial port: Attach an ASCII terminal to the MP Serial port
to install, update, boot, and reboot.
• Remote console access via modem or via telnet, HTTPS, or SSH network services:
Remote administrators can use these iLO/MP features to remotely install, update, boot,
reboot, and perform other administration tasks.
• Hardware monitoring and logging: The iLO/MP captures system hardware level
diagnostics and system messages.
• Power management and control: Use the iLO/MP to view power status and power on/off
system components.
• And much more... The iLO/MP chapter elsewhere in this course describes these and
many other iLO/MP features in detail.
Student Notes
All Integrity servers include a Core I/O card or equivalent built-in interfaces that provide
basic server connectivity. Cell-based servers may have multiple Core I/O cards to support
node partitioning. Core I/O configurations vary, but typically include some combination of
the following:
• One or more Parallel Small Computer System Interface (SCSI) interfaces for connecting
the internal disk(s), tape drive, and optional DVD.
• A Serial Attached SCSI (SAS) interface, for connecting the internal disk(s). SAS provides
greater expandability and better performance than parallel SCSI technology. Newer
systems include SAS rather than parallel SCSI interfaces.
• One or two 10/100/1000BaseT interfaces, for connecting the system to a Local Area
Network. Newer blade servers include standard, built-in “LAN on Motherboard” (LOM)
dual-port 10Gb Ethernet interfaces.
• One or more serial ports, for connecting a terminal, modem, or serial printer.
• One or more USB ports, for connecting a local keyboard and/or mouse.
• A graphics/VGA adapter for connecting a local VGA monitor. This feature is only
available on some entry-class servers.
• Audio ports, for connecting a headphone, microphone, and/or speakers. This feature is
only available on some entry-class servers.
To learn more about your server’s Core I/O features, review your model’s QuickSpecs on
http://www.hp.com/go/servers.
Student Notes
The Core I/O card's integrated parallel SCSI and SAS interfaces are commonly used to
connect internal mass storage devices.
Entry-class, mid-range, and Integrity blade server models support at least two internal SAS or
SCSI disks. Entry-class servers support at least one internal DVD drive; some support one or
more optional internal DDS tape drives, too.
HP’s high-end Superdome and Superdome 2 servers do not include any internal disk or tape
drives; they rely on external devices or devices installed in an adjacent I/O expansion cabinet.
On all current systems, the internal disk and tape devices are “hot-pluggable”, enabling the
administrator to service the devices while the server remains running in most cases. See
your server’s service manual for details.
Many models now support HP’s SmartArray controller cards. The SCSI and SAS SmartArray
cards provide hardware-based mirroring functionality using the server’s internal disks. This
useful feature ensures that the system continues running even if an internal disk fails.
To learn more about your server’s internal mass storage options, review your model’s
QuickSpecs on http://www.hp.com/go/servers.
Student Notes
The Core I/O card provides basic LAN and storage connectivity. Adding additional interface
adapter cards makes it possible to connect to additional LANs, SANs, and external devices.
The slide lists some of the common interface adapter card types commonly found on HP-UX
systems today.
Supported cards vary by server model and OS type and version; see your model’s QuickSpecs
on http://www.hp.com/go/servers for details. If you plan to use the interface card to
boot from a SAN device or a network-based Ignite-UX install server, check the QuickSpecs to
verify that your interface card provides boot support for your OS version.
To determine if your server supports OL*, execute rad -q (11i v1) or olrad -q (11i v2 and
v3). If the command yields an error message, your server doesn’t support OL*. The olrad
output below shows that three card slots on this server are unoccupied. Five slots are
occupied and support OL* functionality.
# olrad -q
Driver(s) Capable
Slot Path Bus Max Spd Pwr Occu Susp OLAR OLD Max Mode
Num Spd Mode
0-0-1-1 1/0/8/1 396 133 133 Off No N/A N/A N/A PCI-X PCI-X
0-0-1-2 1/0/10/1 425 133 133 Off No N/A N/A N/A PCI-X PCI-X
0-0-1-3 1/0/12/1 454 266 266 Off No N/A N/A N/A PCI-X PCI-X
0-0-1-4 1/0/14/1 483 266 66 On Yes No Yes Yes PCI-X PCI
0-0-1-5 1/0/6/1 368 266 66 On Yes No Yes Yes PCI-X PCI
0-0-1-6 1/0/4/1 340 266 266 On Yes No Yes Yes PCI-X PCI-X
0-0-1-7 1/0/2/1 312 133 133 On Yes No Yes Yes PCI-X PCI-X
0-0-1-8 1/0/1/1 284 133 133 On Yes No Yes Yes PCI-X PCI-X
In order to add/replace a card online, both the server’s card slot and that interface card’s
driver must support OL*. To determine if an interface card’s driver supports OL*, check the
documentation accompanying the card.
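To illustrate how the olrad -q columns can be interpreted programmatically, the sketch below parses a few data lines in the format of the sample output above and reports which slots are empty and which support OL*. This is a hypothetical helper, not an HP tool; the fixed column order is an assumption based on the sample listing.

```python
# Illustrative sketch (not an HP utility): classify slots from olrad -q style
# output. Assumes the column order shown above: Slot, Path, Bus, Max Spd, Spd,
# Pwr, Occu, Susp, OLAR, OLD, Max Mode, Mode.
SAMPLE = """\
0-0-1-1 1/0/8/1 396 133 133 Off No N/A N/A N/A PCI-X PCI-X
0-0-1-4 1/0/14/1 483 266 66 On Yes No Yes Yes PCI-X PCI
0-0-1-6 1/0/4/1 340 266 266 On Yes No Yes Yes PCI-X PCI-X
"""

def classify_slots(text):
    empty, olar_capable = [], []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) != 12:
            continue  # skip headers and blank lines
        slot, occupied, olar = fields[0], fields[6], fields[8]
        if occupied == "No":
            empty.append(slot)           # unoccupied slot
        elif olar == "Yes":
            olar_capable.append(slot)    # occupied and OL*-capable
    return empty, olar_capable

empty, capable = classify_slots(SAMPLE)
print("Empty slots:", empty)
print("OL*-capable slots:", capable)
```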
Student Notes
Disk Arrays
A disk array is a storage system consisting of multiple disk drive mechanisms managed by an
array controller that makes the resulting disk space available to one or more hosts. As the
volume of data managed on HP-UX systems has increased from megabytes, to gigabytes, to
terabytes, disk arrays have become increasingly popular. Though many administrators still
choose to configure internal disks as boot disks, most application and user data today is
stored on external disk arrays.
LUNs
Disk arrays often have dozens, or even hundreds, of disk devices. Management software
running on the array enables the array administrator to subdivide the array’s disk space into
one, two, or even hundreds of “Logical Units” (LUNs), or virtual disks.
# scsimgr get_attr \
-a lunid \
-H 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000
name = lunid
current =0x4001000000000000 (LUN # 1, Flat Space Addressing)
default =
saved =
11i v1 and v2 administrators must use utilities supplied by the array vendor to obtain a LUN’s
WWID and LUN ID. 11i v1 and v2 servers accessing HP disk arrays via HP’s SecurePath
software product can view WWIDs and other LUN attributes via the spmgr command. 11i v1
and v2 servers accessing HP disk arrays via HP’s AutoPath software product can view
WWIDs and other LUN attributes via the autopath command.
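The lunid value shown in the scsimgr output above uses SCSI flat space addressing: the top two bits of the first byte encode the addressing method (01 for flat space), and the remaining 14 bits of the first two bytes carry the LUN number. The decoder below is an illustrative sketch of that encoding, not an HP utility.

```python
# Illustrative sketch: decode a flat-space-addressed 8-byte LUN ID,
# such as the 0x4001000000000000 value shown in the scsimgr output above.
def decode_flat_lun(lunid):
    first_two = (lunid >> 48) & 0xFFFF   # first two bytes of the 8-byte ID
    method = first_two >> 14             # top two bits: addressing method
    if method != 0b01:
        raise ValueError("not flat space addressing")
    return first_two & 0x3FFF            # remaining 14 bits: the LUN number

print(decode_flat_lun(0x4001000000000000))   # LUN 1, matching the output above
```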
Many different RAID technologies have been proposed over the years. Each level specifies a
different disk array configuration and data protection method, and each provides a different
level of reliability and performance. Only a few of these configurations are typically
implemented in today’s arrays:
• RAID 0: Striping
• RAID 1: Mirroring
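The capacity trade-off between these two levels is simple arithmetic: striping uses all disk capacity with no redundancy, while mirroring stores every byte twice, halving usable space. The sketch below (an illustrative helper, not array software) computes usable capacity for n identical disks.

```python
# Illustrative sketch: usable capacity for the two RAID levels named above,
# given n identical disks of disk_gb gigabytes each.
def usable_gb(level, n_disks, disk_gb):
    if level == 0:                    # RAID 0, striping: all capacity usable
        return n_disks * disk_gb
    if level == 1:                    # RAID 1, mirroring: each byte stored twice
        return (n_disks // 2) * disk_gb
    raise ValueError("only RAID 0 and RAID 1 sketched here")

print(usable_gb(0, 4, 300))   # 4 x 300 GB striped  -> 1200 GB usable
print(usable_gb(1, 4, 300))   # 4 x 300 GB mirrored ->  600 GB usable
```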
Array Benefits
Disk arrays offer several advantages over traditional disk storage:
• Improved scalability: Many disk arrays provide hundreds of terabytes of disk
space.
• Improved flexibility: Disk arrays make it very easy to make additional space
available when necessary, and re-allocate space that is
underutilized.
[Slide diagram: a server with 2 HBAs connects through SAN switches to an array with 2 controllers; each LUN is reachable via 4 paths]
Student Notes
SANs
For even greater flexibility, arrays are often connected to multiple hosts via a Storage Area
Network (SAN): a special-purpose network of servers and storage devices that allows
administrators to more flexibly configure and manage disk and tape resources. Array
administrators can control which LUNs are presented to each host on the SAN.
Multipathing
In high-availability environments, administrators often configure multiple physical paths to a
disk array. Each path runs from one of the server’s Host Bus Adapters (HBAs), through the
SAN, to an array controller. Depending on the complexity of your SAN, you may have two,
four, eight, or even more paths to each LUN.
Redundant links ensure that if an HBA or array controller fails the server can maintain
connectivity to the array LUNs via the remaining link(s). Utilizing multiple paths to an array
concurrently may provide performance benefits, too: if any single path to a LUN becomes
overloaded, I/O can be redirected down one of the other paths.
In 11i v1 and v2, the kernel isn’t multi-path aware. It views each path to a multi-pathed LUN
as an independent device, and relies on LVM Physical Volume Links (PV Links), VxVM
Dynamic Multipathing (DMP), or path management software from the array vendor to
determine which paths are redundant and how those paths should be used.
Many disk array vendors offer additional software that can be added to the 11i v1 or v2 kernel
to provide array-specific multi-pathing capabilities independent of LVM or VxVM. HP’s
Storageworks Secure Path product provides this functionality for HP’s XP, EVA, and VA disk
arrays. The Power Path product from EMC provides similar functionality for EMC disk
arrays. To learn more about HP Storageworks Secure Path, visit http://www.hp.com.
11i v3 implements a new mass storage stack that provides “native” OS multi-pathing. In the
new mass storage stack, the kernel automatically recognizes, configures, and manages
redundant LUN paths. LVM PV Links and third party path management software are no
longer required.
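To make the path management idea concrete, the sketch below shows how a multipath layer might rotate I/O across healthy LUN paths and fail over when one goes down. It is purely illustrative: the class, path names, and round-robin policy are assumptions for demonstration, and in 11i v3 the kernel's mass storage stack handles this natively.

```python
# Illustrative sketch of multipath round-robin and failover (not HP code).
class MultipathLun:
    def __init__(self, paths):
        self.paths = list(paths)     # e.g. HBA -> controller combinations
        self.failed = set()
        self._next = 0

    def fail_path(self, path):
        # mark a path unusable, e.g. after an HBA or controller failure
        self.failed.add(path)

    def pick_path(self):
        # round-robin over the remaining healthy paths
        healthy = [p for p in self.paths if p not in self.failed]
        if not healthy:
            raise IOError("no healthy paths to LUN")
        path = healthy[self._next % len(healthy)]
        self._next += 1
        return path

# Hypothetical 2 HBA x 2 controller configuration: four paths to one LUN.
lun = MultipathLun(["hba0->ctlrA", "hba0->ctlrB", "hba1->ctlrA", "hba1->ctlrB"])
print([lun.pick_path() for _ in range(4)])   # I/O spread across all four paths
lun.fail_path("hba0->ctlrA")                 # an HBA/controller link fails...
print(lun.pick_path())                       # ...I/O continues on the survivors
```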
Partitioning Overview
HP partitioning technologies allow multiple applications to run on a single server with
dedicated CPU/memory/IO resources that can be flexibly reallocated as necessary.
[Slide diagram: Partition #1 runs transaction processing and Partition #2 runs batch processing, each with its own dedicated CPU/memory/IO resources]
Student Notes
In the past, most organizations deployed a dedicated server for each application. Allocating a
dedicated server for each application guaranteed that the application didn’t compete for
resources with other applications, and ensured that hardware, software, or security issues on
the server would only impact one application.
The level of fault isolation provided varies, depending on the partitioning technology selected.
HW Support: nPars and vPars are supported on cell-based midrange, Superdome, and Superdome 2 IA/PA servers; VMs on all IA servers; Secure Resource Partitions on all IA/PA servers.
OS Support: nPars and VMs support HP-UX, Windows, Linux, and OpenVMS; vPars and Secure Resource Partitions support HP-UX only.
Student Notes
HP offers a variety of partitioning solutions.
nPar Advantages: nPars allow the administrator to run multiple OS instances on a server, and
move cell boards between nPars to balance utilization, while still guaranteeing hardware and
OS fault isolation. Applications running in one nPar can’t access resources in another nPar.
An OS panic, hardware failure, or security breach in one nPar has no impact on the other
nPars.
nPar Disadvantages: In comparison to some of the other partitioning solutions below, nPars
provide a bit less flexibility since they only allow blade- / cell-level partition granularity.
nPar Support: nPars are only supported on midrange, Superdome, and Superdome 2 servers.
On Integrity servers, nPars support HP-UX, Windows, Linux, and OpenVMS operating
systems. Servers with multiple nPars can run a different OS in each nPar.
vPar Advantages: vPars provide greater flexibility than nPars, since each vPar can be
assigned individual processors, individual LBAs, and a percentage of physical memory.
Applications running in one vPar can’t access resources in another vPar, and an OS panic in
one vPar has no impact on other vPars. Since individual hardware components are assigned
to each vPar, vPars have little impact on an application’s performance. vPars allow the
administrator to move CPUs between vPars very easily. The latest version of vPars also
supports dynamic memory migration between vPars.
vPar Disadvantages: Unlike nPars, vPars don’t provide hardware fault isolation. When a cell
board fails, multiple vPars on the cell board may panic as a result.
vPar Support: vPars are supported on all current midrange, Superdome, and Superdome 2
servers. However, not all models support the latest version of the vPars software, and not all
interface cards provide vPar support. See the vPars documentation for details. vPars only
support HP-UX.
VM Disadvantages: VMs provide software, but not hardware fault isolation. Also, VMs incur
greater performance penalties than vPars, particularly for I/O bound applications.
Support: VMs are supported on all Integrity servers, including entry-class systems and
Integrity blades. At the time this book went to press, Integrity VMs supported HP-UX,
Windows, and Linux. OpenVMS will eventually be supported as a guest OS. Check the
current QuickSpecs for the latest support list. vPars and VMs are mutually incompatible
within an nPar, though a server with multiple nPars can run vPars in one nPar and VMs in the
other.
• Processor Sets (PSETS) enable the administrator to assign one or more dedicated
processors to an application, and reallocate PSET assignments when necessary.
• Security Containment, a product introduced in 11i v2, facilitates the creation of security
“compartments” that limit the network interfaces, sockets, files, directories, and kernel
functions available to an application. Configuring each application in a separate security
compartment ensures that applications cannot intentionally or unintentionally interfere
with other applications’ resources.
• IPFilter, an open source firewall solution, restricts network traffic flowing in and out of
the SRP’s network interfaces.
• Secure Resource Partitions, an intuitive CLI / menu interface that automatically integrates
and manages the components described above.
Secure Resource Partition Advantages: Secure Resource Partitions enforce minimum and
maximum CPU, memory, and disk I/O bandwidth entitlements for each application, and
ensure that each application can only access its own files, directories, network interfaces and
other resources.
Secure Resource Partition Support: PRM and PSETS are supported on all HP-UX 11i v1, v2,
and v3 servers. Security containment is only supported on 11i v2 and v3. The Secure
Resource Partitions CLI/TUI interface is only supported on 11i v3.
Configuring
Hardware:
Part 2: System Types
Student Notes
Student Notes
HP offers a wide variety of Integrity servers, from dual-processor entry-class servers to high-
end servers that can accommodate several thousand concurrent users.
HP’s entry-class servers are self-contained, rackmounted servers. Each server chassis
includes processors and memory, as well as power, cooling, and management components.
HP’s largest entry-class servers support up to eight cores.
HP’s mid-range servers are self-contained, rackmounted servers that utilize a cell-based
architecture. Each server chassis contains one or more cell boards, as well as power,
cooling, and management components. HP’s largest mid-range, rack-mounted server
supports up to 32 cores.
HP’s entry-class and mid-range rackmount servers all have model names that begin with HP
Integrity rx____, in which the “rx” is followed by a four-digit number. In general, servers with
higher model numbers (e.g.: rx7640) offer greater power and expandability than servers with
lower model numbers (e.g.: rx2660).
HP’s high-end Integrity Superdome server also utilizes a cell-based architecture. The current
cell-based Superdome model supports up to 128 cores.
Many organizations today deploy HP’s blade server solutions rather than rackmounted
servers.
A blade server is a compact, high-density server that has its own CPU and memory, but that
shares networking cables, switches, power, and storage with other blade servers in a
specially designed HP BladeSystem enclosure.
All of the components in the enclosure connect to a common midplane, eliminating the need
for power, LAN, and SAN cables to individual server blades.
Blade solutions often provide greater flexibility, faster server deployments, better
manageability, less downtime, less power consumption, and lower costs than similar rack-
mounted solutions.
The slide notes that HP currently offers a variety of Integrity blade servers with as many as 32
processor cores.
hp9000 Servers
In the past, HP offered a variety of PA-RISC based HP-UX server solutions. HP no longer
sells new PA-RISC servers, but does continue to support existing PA-RISC servers.
Upgrade Paths
Customers often find that as their business grows, their transaction volumes demand greater
capacity and performance. Hewlett-Packard provides a comprehensive upgrade program
that protects customers' investments in hardware, software and training. The upgrade
program includes simple board upgrades, system swaps with aggressive trade-in credits, and
100 percent return credit on most software upgrades.
# model
ia64 hp server rx2660
OS Version Support
Each HP-UX release only supports certain server models. To determine which hardware
models support each operating system release, see
http://www.hp.com/go/hpuxservermatrix.
Hardware products change frequently. For the most current information on HP’s hardware
products, visit HP’s product website at http://www.hp.com/go/integrity, or contact
your local HP sales representative.
Common Features:
• Integrated LAN interface
• Integrated Management Processor
• Redundant hot-swap power supplies
• Redundant hot-swap cooling
Student Notes
HP’s entry-class Integrity servers are ideal for customers who require flexibility, high-
availability, and scalability up to eight processor cores in a traditional rackmount form factor.
Most are also available in a pedestal mount for deskside use. Administrators often deploy
entry-class servers in smaller branch office locations.
The entry-class servers offer one to four processor sockets. Most models support dual-core
Itanium processors. The rx2800 i2 supports a quad-core Itanium processor.
PCI-X and PCI Express expansion slots allow the administrator to easily add additional LAN
and mass storage interface cards to connect additional peripheral devices.
All of the entry-class servers include internal disks. Older servers used SCSI
controllers/disks. Newer servers use Serial Attached SCSI (SAS) controllers/disks. Some
servers also support HP’s SmartArray controllers, which provide hardware mirroring. All
internal disks on current server models are hot-pluggable, so failed disks can usually be
replaced without shutting down the operating system.
All of the entry class servers offer a slimline DVD drive. The DVD is included standard on
some servers, and as an option on others. Some models accommodate a tape drive in place
of the DVD if desired.
All of the entry class servers support an Integrated Lights Out (iLO) Management Processor
card (though the card is an add-on option on some models). The iLO/MP enables the
administrator to remotely access the system console, view system hardware status messages,
reset the system, and power the system on and off. All iLO/MP cards provide remote access
via telnet and HTTPS. The iLO web interface is very similar to the web interface provided
by HP’s ProLiant servers. Some models offer an SSH access option, too, for enhanced
security.
All of the current entry class servers include redundant, hot-plug power supplies, fans, and
disks to minimize downtime.
• The next couple slides show the layout of an Integrity rx2660 entry-class rackmount server
• For descriptions of other entry-class servers visit http://www.hp.com/go/servers
Student Notes
The slide above shows the major components visible from the front of an rx2660 entry-class
rackmount server.
To learn more about the rx2660 and other entry class servers, go to
http://www.hp.com/go/servers.
Student Notes
The photo on the slide above shows a rear view of the rx2660 entry-class rackmount server.
To learn more about the rx2660 and other entry-class servers, go to
http://www.hp.com/go/servers.
Common Features:
• Integrated LAN interfaces
• Integrated SCSI interface
• Integrated Management Processor
• Redundant hot-swap power supplies
• Redundant hot-swap cooling
Student Notes
HP’s mid-range rackmount servers utilize HP’s cell-based server technology, in which each
server contains one or more cell boards. Each cell board may be connected to an optional
I/O chassis which contains eight expansion slots. A low-latency crossbar backplane provides
connectivity between the cell boards.
The cell-based architecture provides tremendous expandability. As the need for processing
power, expansion slots, and memory increases, additional cell boards may be added to the
system.
The rx7640 cell-based server supports up to two cell boards and 16 cores. The rx8640 cell-
based server supports up to four cell boards and 32 cores.
The mid-range, rackmount cell-based servers are ideal for mission-critical, consolidation, and
scale-up deployments that require up to 32 processor cores in a rackmount form factor.
• The graphic below shows the physical layout of an Integrity rx8640 server
• For descriptions of other mid-range servers visit http://www.hp.com/go/servers
Student Notes
The graphic on the slide shows the physical layout of a rack-mounted, mid-range rx8640
server. The rx8640 supports four cell boards, two 8-slot I/O chassis in the rear, two DDS/DVD
bays, and four internal disks. Customers who require additional interface cards can purchase
a System Expansion Unit (SEU) that provides two additional 8-slot I/O chassis, four
additional internal disks, and two additional DDS/DVD bays.
The rx7640 is similar, but supports two rather than four cell boards.
[Slide photo callouts: MP/Core I/O, crossbar backplane, and power inputs]
Student Notes
The photo on the slide above shows a rear view of the rx8640 mid-range server.
For detailed specifications of this and other mid-range servers visit
http://www.hp.com/go/servers.
Integrity Superdome:
• Up to two compute cabinets
• Up to two I/O Expansion Cabinets
• Up to 16 cell boards
• Up to 64 dual-core processors
• Up to 128 cores
• Up to 512 DIMMs
• Up to 192 PCIe/PCI-X slots
• Integrated iLO / MP
• Redundant hot-swap power supplies
• Redundant hot-swap cooling
Student Notes
For over a decade, enterprise customers have trusted HP’s mission-critical cell-based
Integrity Superdome server to provide maximum performance, scalability, and flexibility.
HP’s high-end Superdome servers support up to 16 cell boards, with 128 processor cores and
192 expansion slots.
The cell-based architecture provides a great deal of expandability. As the need for
processing power, expansion slots, and memory increases, additional cell boards may be
added to the system. Node partitioning enables the administrator to assign (and re-assign!)
cell boards to one or more functionally isolated nPar partitions for even greater flexibility.
For detailed specifications of this and other Superdome server configurations, visit
http://www.hp.com/go/servers.
[Slide graphic: front view of a Superdome compute cabinet, showing I/O Bay 0, I/O Fans 0-4, I/O Chassis 1 and 3 (each with 12 expansion slots), power supplies, and leveling feet]
Student Notes
The graphic on the slide shows the physical layout of an 8-cell Superdome server. Each
Superdome “compute” cabinet contains up to eight cell boards with four dual-core Montecito
processors per cell, and two I/O bays, each containing two 12-slot I/O chassis.
Customers who require larger configurations can purchase two side-by-side compute
cabinets to support up to 16 cell boards and 96 I/O expansion slots, as shown below.
Optional I/O expansion units provide additional I/O expandability.
For detailed specifications of this and other Superdome server configurations, visit
http://www.hp.com/go/servers.
[Slide photo: rear view of a Superdome compute cabinet, showing Blowers 2-3, the crossbar backplane, the MP, I/O Bay 1, I/O Chassis 1 and 3 (each with 12 expansion slots), and the cable groomer]
Student Notes
The photo on the slide above shows a rear view of the Integrity Superdome server compute
cabinet.
HP BladeSystem Overview
• For maximum flexibility, consider HP’s HP BladeSystem solution
• A blade server is a compact, high-density server that has its own CPU and memory
resources, but that shares network, power, cooling, and storage resources with other blade
servers in an HP BladeSystem enclosure.
HP BladeSystem advantages:
• Manageability
Sophisticated integrated management and monitoring tools simplify
administration of the blade enclosure, and of the blades within the enclosure
• Availability
Redundant power, cooling, and interconnects eliminate single points of
failure
• Flexibility
The HP BladeSystem supports Integrity, ProLiant, and storage blades in a
single enclosure. HP’s Virtual Connect technology allows you to quickly
deploy (and redeploy) without rewiring!
• Serviceability
Simple tool-less replacement for most components; powerful, intuitive,
proactive diagnostic tools
• Scalability
Consolidated power and cooling and the BladeSystem’s dense form factor enable you
to deploy more servers, more quickly and more cost-effectively
Student Notes
For maximum flexibility, consider the HP BladeSystem Integrity blade server solutions.
A blade server is a compact, high-density server that has its own CPU and memory, but that
shares power, cooling, and an intuitive management interface in a specially designed HP
BladeSystem enclosure.
All of the components in the enclosure connect to a common midplane, eliminating the need
for power, LAN, and SAN cables to individual server blades.
The servers and all the components of the enclosure work together as a seamless unit,
increasing efficiency and reducing costs by eliminating many of the overlapping resources
required to support stacks of individual rack servers.
The list below describes some of the most important features of HP’s latest BladeSystems.
• Manageability:
• Availability: All power supplies, fans, and other critical enclosure components are
redundant and hot-pluggable to ensure maximum uptime.
• Flexibility: Enclosures may contain a mix of ProLiant, Integrity, and storage blades,
allowing the administrator to easily match the blade mix in the enclosure to the needs of
the organization. BladeSystem “mezzanine” expansion cards are interchangeable: many
of the fibre channel and Ethernet cards used on HP’s c-Class ProLiant blades are
supported on c-Class Integrity server blades.
HP’s c-Class BladeSystem’s Virtual Connect technology can drastically reduce and
simplify cabling requirements, too.
Densely stacked rack-mounted servers with many Ethernet and Fibre Channel (FC)
connections can result in hundreds of cables coming out of a rack. Installing and
maintaining multitudes of cables is time-consuming and costly. When you add, move, or
replace a traditional server, you must typically add new power and cooling units, and
modify the LAN and SAN, which may require assistance from your LAN, SAN, and facility
administrators. This may delay server changes and deployments.
Virtual Connect technology provides a simple, easy-to-use tool for managing the
connections between HP BladeSystem c-Class servers and external networks. It cleanly
separates server enclosure administration from LAN and SAN administration, relieving
LAN and SAN administrators of server maintenance tasks and making HP BladeSystem
c-Class server blades change-ready, so that blade enclosure administrators can rapidly add,
move, and replace server blades with minimal assistance from LAN/SAN administrators.
• Scalability: HP’s BladeSystem provides more efficient power and cooling than rack-
mounted server solutions, since consolidated power supplies and zone-based cooling
components in the enclosure provide power and cooling for multiple server blades. As a
result, BladeSystem solutions may enable organizations to deploy more servers, much
more quickly and affordably, in a smaller datacenter footprint than would otherwise be
possible with rack-mounted servers.
The graphic on the slide above is an HP BladeSystem c7000 blade enclosure with eight half-
height ProLiant blades and four full-height Integrity BL860c blades in the slots on the right.
Student Notes
HP currently offers two HP BladeSystem enclosures: the HP BladeSystem c7000 and c3000.
Both enclosures utilize the same blades, interconnects, power, cooling, and other
components, and both allow you to mix and match a variety of Integrity and ProLiant server
blades in the enclosure.
The c7000 is a 10 rack unit enclosure that can accommodate up to eight full height blades or
sixteen half-height blades. The enclosure on the slide has eight half-height ProLiant blades
on the left, and four full-height HP Integrity BL860C blades on the right.
The c3000 is a 6 rack unit enclosure that can accommodate up to four full height blades or
eight half-height blades. The c3000 is also available in a tower configuration for small office
deployments. The c3000 on the slide has four full-height HP Integrity BL860C blades on the
right.
HP’s HC590S Integrity Blade Server Administration course discusses the c-class blade
enclosures and Integrity blade models and management tools in much greater detail.
These two white papers on http://www.hp.com provide additional information about the
C-class BladeSystem enclosures:
The graphic below highlights some of the important components of the HP BladeSystem c7000
blade enclosure
Student Notes
The slide above highlights some of the critical components of the c7000 BladeSystem
enclosure.
Student Notes
HP offers a complete line of Integrity server blades, from 2- to 32-cores. All are compatible
with the HP BladeSystem c3000 and c7000 enclosures, and all leverage the HP BladeSystem
manageability, availability, flexibility, and serviceability features described previously.
• HP’s BL860C i2, BL870C i2, and BL890C i2 blades all utilize a common “foundation blade”
• The Integrity Blade Link, using Intel’s QPI fabric technology, conjoins 1, 2, or 4 foundation
blades
• Each blade in the QPI fabric has full access to resources on the other blades via the QPI fabric
Student Notes
The graphic on the slide shows an Integrity BL890C i2.
HP’s BL860C i2, BL870C i2, and BL890C i2 blades all utilize a common “foundation blade”.
Each foundation blade hosts:
The Integrity Blade Link, using Intel’s QuickPath Interconnect (QPI) fabric technology, may
be used to conjoin one to four foundation blades. Each blade in the QPI fabric has full
access to resources on the other blades via the QPI fabric. This approach allows HP’s
Integrity blades to easily scale from 2 to 32 processor cores.
The graphic below shows the architecture of the QPI fabric in a BL890C i2:
HP’s HC590S Integrity Blade Server Administration course discusses the c-class blade
enclosures and Integrity blade models and management tools in much greater detail.
Also read the following white papers on http://docs.hp.com to learn more about the
Integrity blade architecture:
• Why Scalable Blades: HP Integrity Server Blades (BL860c i2, BL870c i2, and BL890c
i2)
• Technologies in HP Integrity server blades (BL860c i2, BL870c i2, and BL890c i2)
Superdome 2 Superdome 2
8-socket 16-socket
Superdome 2
32-socket
Student Notes
For maximum scalability, availability, and flexibility, consider the HP Superdome 2 server.
Superdome 2 leverages its lower midplane, power, cooling, interconnects, and other modular
components from the HP BladeSystem c7000 enclosure. It adds a fault-tolerant, low-latency
crossbar fabric that facilitates the creation of nPars with up to 128 cores, and a Superdome 2
specific upper midplane that supports connections to external I/O expansion enclosures, each
with up to 12 PCIe I/O expansion cards.
• The 8-socket / 32-core Superdome 2 has four Superdome 2 blades in a single Superdome 2
enclosure.
• The 16-socket / 64-core Superdome 2 has eight Superdome 2 blades in a single Superdome
2 enclosure.
• The 32-socket / 128-core Superdome 2 has sixteen Superdome 2 blades in two Superdome
2 enclosures connected via the crossbar fabric.
Student Notes
The graphic on the slide shows a close-up view of an 8-socket / 32-core Superdome 2.
The lower midplane is highly leveraged from the c7000, using many of the same power,
cooling, and LAN/SAN interconnect modules.
The upper midplane, designed specifically for the Superdome 2, provides a fault tolerant
crossbar, and connectivity to external I/O expansion enclosures.
Eight external I/O expansion enclosures each house up to twelve additional PCIe expansion
cards.
Read more about the Superdome 2 architecture in the HP Superdome 2: the Ultimate
Mission-critical Platform white paper on http://www.hp.com.
View cell boards, interface cards, peripheral devices, and other components
# ioscan              all components
# ioscan -C cell      cell board class components
# ioscan -C lan       LAN interface class components
# ioscan -C disk      disk class devices
# ioscan -C fc        fibre channel interfaces
# ioscan -C ext_bus   SCSI buses
# ioscan -C processor processors
# ioscan -C tty       serial (teletype) class components
SAM and the SMH can also provide detailed hardware information
Student Notes
HP-UX provides several commands for viewing your system configuration.
Execute the model command to determine your system’s hardware model string.
# model
ia64 hp server rx2600
In 11i v2 and 11i v3, the machinfo command reports detailed processor, memory, firmware,
model, and operating system information.
# machinfo
CPU info:
1 Intel(R) Itanium 2 processor (1.4 GHz, 1.5 MB)
400 MT/s bus, CPU version B1
Firmware info:
Firmware revision: 02.31
FP SWA driver revision: 1.18
IPMI is supported on this system.
BMC firmware revision: 1.53
Platform info:
Model: "ia64 hp server rx2600"
Machine ID number: e85c91a3-7141-11d8-b1ce-0f6d684be9ae
Machine serial number: US40676377
OS info:
Nodename: myhost
Release: HP-UX B.11.31
Version: U (unlimited-user license)
Machine: ia64
ID Number: 3898380707
vmunix _release_version:
@(#) $Revision: vmunix: B.11.31_LR FLAVOR=perf
The ioscan command presents a hierarchical list of cell boards, interface cards, peripheral
devices, and other components on your system. By default, ioscan reports each
component’s hardware path, class, and description. Add the –C option to view a specific
device class such as cell, disk, lan, or processor. Slides later in the chapter describe
HP-UX hardware paths and other ioscan options in detail.
# ioscan
H/W Path Class Description
==============================================================
root
1 cell
1/0 ioa System Bus Adapter (804)
1/0/0 ba Local PCI Bus Adapter (782)
1/0/2 ba Local PCI Bus Adapter (782)
1/0/2/0/0 ext_bus SCSI C1010 Ultra160
1/0/2/0/0.8 target
1/0/2/0/0.8.0 disk HP 36.4GST336607LC
1/0/2/0/0.10 target
1/0/2/0/0.10.0 disk HP 36.4GST336607LC
1/0/14 ba Local PCI Bus Adapter (782)
1/0/14/0/0 lan HP A5230A 10/100Base-TX
1/5 memory Memory
1/10 processor Processor
1/11 processor Processor
1/12 processor Processor
1/13 processor Processor
The SMH (11i v2 and v3) can also provide detailed hardware information. Click the
“Processors”, “Memory”, and other links on the SMH Home tab.
Student Notes
Viewing system hardware resources becomes more complicated on partitioned systems.
Hardware resources allocated to one nPar, vPar, or Integrity VM are not visible to other
partitions. The peripheral device and interface card management commands discussed in the
remaining slides of the chapter -- such as ioscan, scsimgr, rad, olrad, pdweb, and sam --
only display devices in the current partition.
To determine which resources have been assigned to other nPars on the system,
run the parstatus command. Similarly, vparstatus reports which resources have been
allocated to other virtual partitions, and hpvmstatus reports which resources have been
allocated to Integrity virtual machine guests.
Configuring
Hardware:
Part 3: HP-UX Hardware Addressing
Student Notes
Hardware Addresses
In order to successfully configure and manage devices on an HP-UX
system, administrators must understand the addressing mechanism used
to identify devices
Student Notes
During the HP-UX startup process, the kernel automatically scans the system hardware and
assigns a unique HP-UX hardware address to every bus adapter, interface card, and device.
In order to configure new devices on your system, you need to be able to read and
understand these hardware addresses. The next few slides discuss HP-UX hardware
addressing in detail.
• 11i v1 and v2 implement a “legacy” mass storage stack and addressing scheme
• 11i v3 implements a new mass storage stack, with many new enhancements
• 11i v3 uses new “agile view” addresses, but still supports legacy addresses, too
Student Notes
11i v1 and v2 implement a “legacy” mass storage stack and hardware addressing scheme. 11i
v3 implements a new mass storage stack, with many enhancements and a new hardware
addressing scheme to better support the SAN-based storage used on most HP-UX servers
today. To ensure backward compatibility, 11i v3 still supports legacy hardware addresses,
but HP encourages administrators to begin using the new “Agile View” hardware addresses.
The notes below highlight some of the most important new features provided by the new
mass storage stack and hardware addressing scheme.
Increased Scalability
The new mass storage stack significantly increases the operating system’s mass storage
capacity as shown in the table below.
In addition, the mass storage stack has been enhanced to take advantage of large multi-CPU
server configurations for greater parallelism. Adding more mass storage to a server does not
appreciably slow down the boot process or the ioscan command that administrators use to
view available hardware.
See the HP-UX 11i v3 Mass Storage I/O Scalability white paper for details.
Enhanced Adaptability
The new mass storage stack enhances a server’s ability to adapt dynamically to hardware
changes, without shutting down the server or reconfiguring software.
11i v3 servers automatically detect the creation or modification of LUNs. If new LUNs are
added, the new mass storage stack recognizes and configures them automatically. If an
existing LUN’s addressing, size, or I/O block size changes, the mass storage stack detects this
without user intervention.
When such changes occur, the mass storage stack notifies the relevant subsystems. For
example, if a LUN expands, its associated disk driver, volume manager, and file system are
notified. The volume manager volume or file system can then automatically expand
accordingly.
The new mass storage stack can also remove PCI host bus adapters (HBAs) without shutting
down the server. Coupled with existing online addition and replacement features, online
deletion enables you to replace a PCI card with a different PCI card, as long as the HBA slot
permits it and no system critical devices are affected. You can also change the driver
associated with a LUN; if the software drivers don’t support rebinding online, the system
remembers the changes and defers them until the next server reboot.
Native Multipathing
11i v3 “agile addressing” creates a single virtualized hardware address for each disk or LUN
regardless of the number of hardware paths to the device. The administrator can use the
single virtualized hardware path, rather than the underlying hardware paths, when
configuring the disk or LUN. When a volume manager, file system, or application accesses
the device, the new mass storage stack transparently distributes I/O requests across all
available hardware paths to the LUN using a choice of load balancing algorithms.
If a path fails, the mass storage stack automatically disables the failed path and redistributes
I/O requests across the remaining paths. The kernel monitors failed or non-responsive paths,
so that when a failed path recovers it is automatically and transparently reincorporated into
any load balancing. The mass storage stack automatically discovers and incorporates new
paths, too.
11i v1 and v2 administrators typically rely on add-on multi-pathing products from array
vendors to provide multi-pathing functionality.
The new mass storage stack simplifies management of LUN Device Special Files (DSFs), too.
The next chapter discusses these DSF enhancements in detail.
A new utility called scsimgr allows the administrator to easily view LUN attributes and
usage statistics, and modify the load balancing algorithm used when accessing the LUN.
The new tools and features are integrated with other systems and storage management
utilities such as Systems Management Homepage (SMH) and Systems Insight Manager (SIM)
and Storage Essentials.
Improved Performance
The new mass storage stack achieves better performance by using high levels of concurrent
I/O operations and parallel processing, processor allegiance algorithms, and unique HP server
hardware features such as Cell Local Memory. The operating system provides a choice of
load balancing algorithms, too, so administrators can tune performance to meet each server’s
requirements.
Compatibility
11i v3 supports both legacy addressing and Agile View addressing. HP encourages customers
to begin using the new addressing scheme, though legacy hardware addresses are still
available.
HP-UX includes two commands to ease the migration to the new mass storage stack. The
iofind command automatically identifies configuration files that reference legacy
addresses, and optionally replaces them with equivalent Agile View addresses. The ioscan
–m hwpath command may be used to list Agile View LUN hardware paths and their
equivalent legacy addresses.
1/0/0/2/0
Cell SBA LBA device/function
[Slide diagram: cell boards with CPUs and memory connect through an SBA and LBAs to
PCI-X buses hosting an FC HBA and core I/O (MP LAN, serial), with attached SCSI disk,
DVD, and LAN devices]
Student Notes
The next few slides discuss the legacy hardware addressing scheme used in 11i v1 and v2.
Later slides discuss the addressing scheme used in 11i v3’s new mass storage stack.
All current systems based on PCI/PCI-X/PCI-Express expansion buses use a fairly consistent
hardware addressing scheme, which we will focus on here.
Every LAN, LUN, disk, or tape drive hardware address begins with an HBA hardware address.
An HBA hardware address encodes the HBA’s location in the kernel’s I/O tree structure:
Cell/SBA/LBA/device/function
SBA The next portion of the HP-UX hardware address identifies the
address of the System Bus Adapter (SBA). This portion of the
address will always be 0 in HBA and peripheral device hardware
paths. Hardware paths for processors and memory modules
typically display a non-zero number in this component of the
hardware path.
LBA The SBA connects to one or more Local Bus Adapters (LBAs) via
high-speed communication channels known as “ropes”. Some
LBAs have just one rope to the SBA. Other LBAs have two ropes to
the SBA to provide enhanced throughput.
Because some LBAs utilize two ropes and others utilize just one, an
HBA’s rope/LBA number typically isn’t the same as its physical slot
number.
Device/Function Each LBA typically provides connectivity to one or two PCI, PCI-X,
PCI-E expansion slots, each accommodating an interface card with
one or more functions. The Device/Function numbers together
uniquely identify a specific function on a specific PCI or PCI-X
card. If a card isn’t a multi-function card, a device/function
combination 0/0 indicates that it is a PCI card, and 1/0 indicates
that it is PCI-X.
Slides later in the chapter describe the ioscan command, which lists a system’s hardware
paths, and the rad and olrad commands, which translate hardware paths into physical slot
locations. Service manuals, which often include system-specific hardware addressing
information, are available at http://docs.hp.com/en/hw.html.
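The Cell/SBA/LBA/device/function breakdown described above can be picked apart mechanically. The sketch below uses Python purely for illustration; the helper name is hypothetical, not an HP-UX tool:

```python
def parse_hba_path(hw_path):
    """Split a legacy HBA hardware path such as '1/0/0/2/0' into its
    Cell/SBA/LBA/device/function components."""
    names = ("cell", "sba", "lba", "device", "function")
    fields = [int(part) for part in hw_path.split("/")]
    return dict(zip(names, fields))
```

For example, parse_hba_path("1/0/0/2/0") reports cell 1, SBA 0, LBA 0, device 2, and function 0.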
1/0/0/2/0.1.0
HBA hardware address Target LUN ID
Example: the following hardware addresses represent three distinct devices on a SCSI bus
•1/0/0/2/0.2.0
•1/0/0/2/0.6.0
•1/0/0/2/0.10.0
Student Notes
Some servers use a parallel SCSI bus to connect internal disks, DVDs, and tapes. Some Core
I/O cards provide an external SCSI port, too, which may be used to connect additional SCSI
devices. The server in the graphic on the slide has a SCSI HBA connected to three external
SCSI devices.
As shown in the graphic on the slide, legacy SCSI hardware addresses encode a SCSI device’s
HBA address, target address, and LUN ID.
HBA Addresses
The first part of a legacy SCSI hardware address identifies the address of the SCSI HBA to
which the device is attached. In the example on the slide, all three SCSI devices are
connected to the SCSI HBA at address 1/0/0/2/0. Thus, the hardware addresses for all three
devices begin with 1/0/0/2/0.
1/0/0/2/0.2.0
1/0/0/2/0.6.0
1/0/0/2/0.10.0
The graphic on the slide shows a SCSI bus with three external devices identified by target
addresses 2, 6, and 10.
1/0/0/2/0.2.0
1/0/0/2/0.6.0
1/0/0/2/0.10.0
When attaching an external SCSI device, it may be necessary to manually assign a target
address. Some devices use a series of binary DIP switches to set the address. Other devices
use a series of jumper pins. Consult your device documentation to determine how to set your
device’s SCSI target address.
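For devices that use binary DIP switches, the switch settings are simply the binary representation of the target address. A minimal sketch (Python, hypothetical helper; it assumes the switches are listed most significant bit first):

```python
def dip_to_target(switches):
    """Compute a SCSI target address from binary DIP switch positions,
    most significant bit first; e.g. settings 1-0-1-0 select target 10."""
    target = 0
    for bit in switches:
        target = (target << 1) | bit  # shift in each switch bit
    return target
```

Settings 0-0-1-0, 0-1-1-0, and 1-0-1-0 select targets 2, 6, and 10, the three addresses used in the slide example.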
LUN IDs
Some SCSI devices may have a single SCSI target address, with multiple addressable units
within the device.
For example, tape autochangers often provide access to the autochanger’s tape drive via one
LUN ID, and access to the robotic mechanism via a second LUN ID.
A SCSI disk array may present multiple virtual disks, each identified by a unique LUN ID.
HP-UX uses the last component in the legacy SCSI hardware address to identify the LUN ID.
Most autochangers and disk arrays today are connected via SAS or Fibre Channel
interfaces rather than parallel SCSI, so the LUN ID portion of most parallel SCSI device
hardware paths is typically 0.
1/0/0/2/0.2.0
1/0/0/2/0.6.0
1/0/0/2/0.10.0
1/0/2/1/0.6.1.0.0.0.1
HBA hardware address SAN domain/area/port Array LUN ID
Example: The array below has three LUNs, each accessible via four SAN paths
The next slide lists all legacy hardware paths to the first LUN
Student Notes
Fibre channel disk array LUNs are often accessible via multiple paths through a SAN. The
graphic on the slide shows a disk array with three LUNs, each accessible via four different
paths. It isn’t uncommon today to have four, eight, or even more different paths to a LUN.
The next slide lists the legacy hardware paths that would be used to represent LUN 1 in the
graphic.
Each legacy hardware address encodes:
• The legacy hardware address of the server HBA used to access the LUN. See the HBA
addressing discussion earlier in the chapter.
• The SAN domain/area/port used to access the array. Administrators may use the legacy
hardware address’s 8-bit domain, 8-bit area, and 8-bit port addresses to associate a
hardware address with a specific path through the SAN from the server HBA to the target
array controller. Different SAN switch vendors use the domain/area/port fields
differently. HP Customer Education’s Accelerated SAN Essentials class (UC434S)
discusses these differences in detail.
• The LUN ID of the target LUN within the array. When presenting LUNs to a server, the
array administrator assigns each LUN a LUN ID.
The legacy addressing scheme was designed to accommodate SCSI-2 bus addresses, in
which each device on a bus was uniquely identified by a 7-bit “controller” number
(ranging from 0 to 127), a 4-bit “target” number (ranging from 0 to 15), and a 3-bit “LUN”
number (ranging from 0 to 7).
Since today’s arrays routinely present more than eight LUNs, the original 3-bit
representation of the LUN ID is insufficient. Thus, legacy addresses now use all 14
controller/target/LUN bits at the end of an FC hardware path to represent the LUN ID.
Thus, the last three components of the legacy hardware addresses for LUN IDs 0-16 would be
represented as follows: LUN IDs 0-7 appear as 0.0.0 through 0.0.7, LUN IDs 8-15 as
0.1.0 through 0.1.7, and LUN ID 16 as 0.2.0.
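Per the 7/4/3-bit layout just described, the mapping can be computed mechanically. The sketch below is a Python illustration, not an HP-UX utility, and the helper name is hypothetical:

```python
def legacy_lun_components(lun_id):
    """Split a LUN ID into the controller.target.lun fields that end a
    legacy FC hardware path: 7-bit controller, 4-bit target, 3-bit LUN."""
    controller = (lun_id >> 7) & 0x7F   # high 7 bits
    target = (lun_id >> 3) & 0xF        # middle 4 bits
    lun = lun_id & 0x7                  # low 3 bits
    return f"{controller}.{target}.{lun}"
```

For example, LUN ID 1 yields 0.0.1, matching the last three components of the slide's 1/0/2/1/0.6.1.0.0.0.1 example, while LUN ID 8 yields 0.1.0 and LUN ID 16 yields 0.2.0.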
The 11i v1 and v2 kernels provide no automated path correlation or management; they treat
each path as if it were an independent device. 11i v1 and v2 rely on the LVM and VxVM
volume managers or add-on path management solutions such as HP’s SecurePath product or
EMC’s PowerPath product to correlate redundant paths, ensure path failover when an HBA
fails, and provide load balancing across paths to a LUN.
HBA: 1/0/2/1/0
SAN domain/area/port: 6.1.0
LUN ID: 0.0.1
HW Path: 1/0/2/1/0 . 6.1.0 . 0.0.1
HBA: 1/0/2/1/0
SAN domain/area/port: 6.2.0
LUN ID: 0.0.1
HW Path: 1/0/2/1/0 . 6.2.0 . 0.0.1
HBA: 1/0/2/1/1
SAN domain/area/port: 6.1.0
LUN ID: 0.0.1
HW Path: 1/0/2/1/1 . 6.1.0 . 0.0.1
HBA: 1/0/2/1/1
SAN domain/area/port: 6.2.0
LUN ID: 0.0.1
HW Path: 1/0/2/1/1 . 6.2.0 . 0.0.1
Student Notes
The example on the slide shows four different SAN paths to LUN ID 1, and each path’s
corresponding legacy hardware path. The heavy black lines represent the physical path
through the SAN for each address. Note that the LUN ID is the same in all four paths; each
path is simply a different path to the same LUN.
# ioscan –f
Class I H/W Path Driver S/W State H/W Type Description
================================================================
root 0 root CLAIMED BUS_NEXUS
cell 0 1 cell CLAIMED BUS_NEXUS
ioa 0 1/0 sba CLAIMED BUS_NEXUS SBA
ba 0 1/0/0 lba CLAIMED BUS_NEXUS LBA
slot 0 1/0/0/3 pci_slot CLAIMED SLOT PCI Slot
ext_bus 0 1/0/0/3/0 mpt CLAIMED INTERFACE U320 SCSI
target 0 1/0/0/3/0.6 tgt CLAIMED DEVICE
disk 0 1/0/0/3/0.6.0 sdisk CLAIMED DEVICE HP Disk
Student Notes
You can view a list of the devices on your system and their legacy HP-UX hardware addresses
via the ioscan command. ioscan supports a number of useful options.
# ioscan Scans hardware and lists all devices and other hardware
devices found. Shows the hardware path, class, and a brief
description of each component.
# ioscan –f Scans and lists the system hardware as before, but displays
a "full" listing including several additional columns of
information. See the ioscan field output descriptions
below for more information.
# ioscan –kf Lists the system hardware as before, but uses cached
information. On a large system with dozens of disks and
interface cards, ioscan –kf is much faster than ioscan
–f.
# ioscan -kfH 0/0/0/3/0 Shows a full listing of the component at the specified
hardware address, and all nodes in the I/O tree below that
node. The example shown here would display a full listing
of both the HBA at address 0/0/0/3/0 and the targets and
devices attached to that HBA (if any). -H is very useful on
a large system if you just need to view information about a
single device or bus.
# ioscan -kfC disk Lists devices of the specified class only. Two other
common classes are "tape" and "lan". The optional –k
option displays cached information.
# ioscan –kfn Lists device file names associated with each device. Device
files are discussed at length in the next chapter. The
optional –k option displays cached information.
Driver The name of the kernel driver that controls the hardware
component. If no driver is available to control the
hardware component, a question mark (?) is displayed in
the output.
First, simply check to see that your new device appears in the ioscan output. If not,
shut down your machine and check that all the cables are connected properly. In
the case of an interface card, ensure that the card is firmly inserted in the interface card slot
in the backplane of your machine.
Next, ensure that the hardware path is correct. Did you set the correct SCSI address? Add
the device and its hardware path and description to the hardware diagram in your system log
book.
Assuming the hardware path is correct, check the S/W state column in the ioscan -f
output. In order to communicate with your new device or interface card, your kernel must
have the proper device drivers configured. If the proper driver already exists in your kernel,
the S/W State column should say CLAIMED. If this isn't the case, you will have to add the
driver to the kernel. A later chapter discusses kernel configuration.
If your new device appears to be CLAIMED by the kernel, proceed to the next chapter and
learn how to create and use device files to access your new device.
1/0/0/2/0
Cell SBA LBA device/function
[Slide diagram: cell boards with CPUs and memory connect through an SBA and LBAs to
PCI-X buses hosting an FC HBA and core I/O (MP LAN, serial), with attached SCSI disk,
DVD, and LAN devices]
Student Notes
The last few slides described the legacy hardware addressing scheme used in 11i v1 and v2.
The next few slides discuss the agile view addressing scheme introduced by the new mass
storage stack in 11i v3.
Like the addressing scheme used in earlier versions of HP-UX, the new Agile View HBA
hardware addresses encode the HBA’s cell/SBA/LBA/device/function location in the kernel’s
I/O tree structure.
Cell/SBA/LBA/device/function
SBA The next portion of the HP-UX hardware address identifies the
address of the System Bus Adapter (SBA). This portion of the
address will always be 0 in HBA and peripheral device hardware
paths. Hardware paths for processors and memory modules
typically display a non-zero number in this component of the
hardware path.
LBA The SBA connects to one or more Local Bus Adapters (LBAs) via
high-speed communication channels known as “ropes”. Some
LBAs have just one rope to the SBA. Other LBAs have two ropes to
the SBA to provide enhanced throughput.
Because some LBAs utilize two ropes and others utilize just one, an
HBA’s rope/LBA number typically isn’t the same as its physical
slot number.
Device/Function Each LBA typically provides connectivity to one or two PCI, PCI-X,
PCI-E expansion slots, each accommodating an interface card with
one or more functions. The Device/Function numbers together
uniquely identify a specific function on a specific PCI or PCI-X
card. If a card isn’t a multi-function card, a device/function
combination 0/0 indicates that it is a PCI card, and 1/0 indicates
that it is PCI-X.
Slides later in the chapter describe the ioscan command, which lists a system’s hardware
paths, and the rad and olrad commands, which translate hardware paths into physical slot
locations. Service manuals, which often include system-specific hardware addressing
information, are available at http://docs.hp.com/en/hw.html.
1/0/0/2/0.0xa.0x0
HBA hardware address Target LUN ID
Example: the following hardware addresses represent three distinct devices on a SCSI bus
•1/0/0/2/0.0x2.0x0
•1/0/0/2/0.0x6.0x0
•1/0/0/2/0.0xa.0x0
Student Notes
Agile View hardware addresses for parallel SCSI devices are similar to legacy hardware
addresses for parallel SCSI devices, but Agile View represents target and LUN numbers in
hexadecimal rather than decimal form.
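Because only the notation differs, translating a legacy parallel SCSI address into its Agile View form is just a base conversion of the target and LUN fields. A minimal sketch (Python, hypothetical helper name, for illustration only):

```python
def legacy_to_agile(addr):
    """Rewrite a legacy parallel SCSI hardware address such as
    '1/0/0/2/0.10.0' in Agile View notation, '1/0/0/2/0.0xa.0x0',
    by expressing the target and LUN fields in hexadecimal."""
    hba, target, lun = addr.rsplit(".", 2)   # split off the last two fields
    return f"{hba}.{hex(int(target))}.{hex(int(lun))}"
```

For example, legacy target 10 becomes 0xa, so 1/0/0/2/0.10.0 becomes 1/0/0/2/0.0xa.0x0.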
As shown in the graphic on the slide, the Agile View parallel SCSI hardware address encodes
the device’s HBA address, target address, and LUN ID.
HBA Addresses
The first part of an Agile View SCSI hardware address identifies the address of the SCSI HBA to
which the device is attached. In the example on the slide, all three SCSI devices are
connected to the SCSI HBA at address 1/0/0/2/0. Thus, the hardware paths for all three
devices begin with 1/0/0/2/0.
1/0/0/2/0.0x2.0x0
1/0/0/2/0.0x6.0x0
1/0/0/2/0.0xa.0x0
The graphic on the slide shows a SCSI bus with three external devices identified by target
addresses 2, 6, and 10.
1/0/0/2/0.0x2.0x0
1/0/0/2/0.0x6.0x0
1/0/0/2/0.0xa.0x0
When attaching an external SCSI device, it may be necessary to manually assign a target
address. Some devices use a series of binary DIP switches to set the address. Other devices
use a series of jumper pins. Consult your device documentation to determine how to set your
device’s SCSI target address.
LUN IDs
Some SCSI devices may have a single SCSI target address, with multiple addressable units
within the device. HP-UX uses the last component in the Agile View SCSI hardware address
to identify the LUN ID in hexadecimal form.
For example, tape autochangers often provide access to the autochanger’s tape drive via one
LUN ID, and access to the robotic mechanism via a second LUN ID.
A SCSI disk array may present multiple virtual disks, each identified by a unique LUN ID.
Most autochangers and disk arrays today are connected via SAS or Fibre Channel
interfaces rather than parallel SCSI, so the LUN ID portion of most parallel SCSI device
hardware paths is typically 0x0.
1/0/0/2/0.0x2.0x0
1/0/0/2/0.0x6.0x0
1/0/0/2/0.0xa.0x0
• Agile view provides a lunpath hardware address for each path to each LUN
• Lunpath hardware addresses encode:
− the hardware address of the server HBA used to access the LUN
− the WW Port Name of the array controller FC port used to access the LUN
− the LUN address of the target LUN
• The mass storage stack automatically recognizes and manages redundant paths
1/0/2/1/0.0x64bits.0x64bits
HBA hardware address WW Port Name LUN Address
Example: The array below has three LUNs, each accessible via four SAN paths
The next slide lists all Agile View lunpath addresses for the first LUN
Student Notes
Like the legacy mass storage stack, Agile View provides a hardware address for each path to
each LUN. Agile View calls these path-specific hardware addresses “lunpath addresses”. The
graphic on the slide shows a disk array with three LUNs, each accessible via four different
paths. The next slide lists the Agile View hardware paths that would be used to represent
LUN 1 in the graphic.
Each Agile View lunpath address encodes:
• The hardware address of the server HBA used to access the LUN. See the HBA
addressing discussion earlier in the chapter.
• The 64-bit WW Port Name of the array controller FC port used to access the LUN. Disk
arrays connect to a SAN via fibre channel ports on array controller cards. Arrays
typically have redundant controllers, and each controller may have multiple ports
connected to the SAN. Each array controller FC port is identified by a globally unique
WWPN, which is included in the Agile View lunpath address.
• The target LUN’s LUN ID. The first two bits in this number identify the LUN’s LUN
addressing method, the next 14 bits represent the LUN ID number assigned to the LUN by
the array administrator, and the last 48 bits are reserved for future use. Fortunately, the
11i v3 scsimgr command may be used to automatically extract the decimal LUN ID from
the lunpath address.
# scsimgr get_attr \
-a lunid \
-H 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000
name = lunid
current =0x4001000000000000 (LUN # 1, Flat Space Addressing)
default =
saved =
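The decoding that scsimgr performs can be reproduced by hand from the bit layout described above. The sketch below is a Python illustration, not an HP-UX utility; the addressing method names are an assumption based on the SCSI Architecture Model:

```python
# Assumed SCSI Architecture Model names for the 2-bit addressing method field.
ADDRESSING_METHODS = {
    0: "Peripheral Device Addressing",
    1: "Flat Space Addressing",
    2: "Logical Unit Addressing",
    3: "Extended Addressing",
}

def decode_lun_address(lun_addr):
    """Decode the 64-bit LUN address that ends an Agile View lunpath:
    bits 63-62 select the addressing method, bits 61-48 carry the
    array-assigned LUN ID, and bits 47-0 are reserved."""
    value = int(lun_addr, 16)
    method = (value >> 62) & 0x3
    lun_id = (value >> 48) & 0x3FFF
    return lun_id, ADDRESSING_METHODS.get(method, "unknown")
```

For example, decode_lun_address("0x4001000000000000") returns LUN ID 1 with flat space addressing, matching the scsimgr output shown above.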
Unlike the legacy mass storage stack, the new mass storage stack automatically recognizes
redundant paths and load balances I/O requests across lunpaths.
1/0/2/1/0.0x64bits.0x64bits
HBA hardware address WW Port Name LUN Address
HBA: 1/0/2/1/0
WW Port Name: 0x50001fe15003112c
LUN ID: 0x4001000000000000
HW Path: 1/0/2/1/0 . 0x50001fe15003112c . 0x4001000000000000
HBA: 1/0/2/1/0
WW Port Name: 0x50001fe150031128
LUN ID: 0x4001000000000000
HW Path: 1/0/2/1/0 . 0x50001fe150031128 . 0x4001000000000000
HBA: 1/0/2/1/1
WW Port Name: 0x50001fe15003112d
LUN ID: 0x4001000000000000
HW Path: 1/0/2/1/1 . 0x50001fe15003112d . 0x4001000000000000
HBA: 1/0/2/1/1
WW Port Name: 0x50001fe150031129
LUN ID: 0x4001000000000000
HW Path: 1/0/2/1/1 . 0x50001fe150031129 . 0x4001000000000000
Student Notes
The example on the slide shows four different SAN paths to LUN ID 1, and each path’s
corresponding Agile View lunpath address. The heavy black lines represent the physical path
through the SAN for each address. Note that the LUN ID is the same in all four paths, but the
HBA and WWPN portions of the lunpath address vary.
• Agile View also provides a virtual LUN hardware address for each disk, tape, or LUN
• The LUN hardware address represents the device/LUN itself, not a path to the LUN
• Advantages:
− LUN hardware paths are unaffected by changes to the SAN topology
− The mass storage stack automatically correlates and manages redundant paths
64000/0xfa00/0x4
virtual root node virtual bus virtual LUN ID
Example: the following example shows a LUN hardware address and its associated lunpaths
• 64000/0xfa00/0x4
1/0/2/1/0.0x50001fe15003112c.0x4001000000000000
1/0/2/1/0.0x50001fe150031128.0x4001000000000000
1/0/2/1/1.0x50001fe15003112d.0x4001000000000000
1/0/2/1/1.0x50001fe150031129.0x4001000000000000
[Slide diagram: the four lunpaths above lead to LUN 1 on an array that also presents LUNs 2 and 3]
Student Notes
In addition to the lunpath hardware addresses discussed on the previous slide, Agile View
presents a virtualized LUN hardware path for each parallel SCSI device, SAS disk, and fibre
channel LUN. The LUN hardware path represents the device or LUN itself rather than a
single physical path to the device or LUN.
In the example on the slide, 64000/0xfa00/0x4 is a LUN hardware path representing a disk
array LUN. The four addresses below the LUN hardware path represent the four lunpaths
used to access the LUN.
64000/0xfa00/0x4
1/0/2/1/0.0x50001fe15003112c.0x4001000000000000
1/0/2/1/0.0x50001fe150031128.0x4001000000000000
1/0/2/1/1.0x50001fe15003112d.0x4001000000000000
1/0/2/1/1.0x50001fe150031129.0x4001000000000000
• LUN hardware addresses always start with 64000, the virtual root node of all Agile View
LUN hardware addresses. This portion of the address should be consistent across all
LUN hardware paths.
• The second component in a LUN hardware address is always 0xfa00, the virtual bus
address used by all Agile View LUN hardware paths. This portion of the address should
be consistent across all LUN hardware paths, too.
• The last component in the LUN hardware path is a virtual LUN ID. The kernel
automatically assigns virtual LUN IDs, sequentially, as it identifies new LUNs. Note that
the virtual LUN ID is purely virtual; it is not related to the LUN ID that is encoded in
lunpath hardware addresses.
The kernel maintains a persistent WWID to Virtual LUN ID map to ensure that LUN
hardware paths remain consistent across reboots, even if the SAN topology changes.
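Since the first two components of a LUN hardware path are fixed, the path can be validated and reduced to its virtual LUN ID with a few lines. A minimal sketch (Python, hypothetical helper, for illustration only):

```python
def parse_lun_hw_path(path):
    """Validate an Agile View LUN hardware path such as '64000/0xfa00/0x4'
    and return its virtual LUN ID as an integer."""
    root, bus, vlun = path.split("/")
    if root != "64000" or bus != "0xfa00":   # fixed virtual root node and bus
        raise ValueError(f"not an Agile View LUN hardware path: {path}")
    return int(vlun, 16)
```

For example, parse_lun_hw_path("64000/0xfa00/0x4") returns 4.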
Agile View LUN hardware paths offer several significant advantages over legacy path-based
addressing, particularly in SAN environments.
• On systems using the legacy mass storage stack, changes in the SAN topology change
device hardware paths, and may require administrator intervention to update the volume
manager and file system configuration. LUN hardware paths are unaffected by changes
to the SAN topology. Since the Agile View LUN hardware paths don’t change, changing
the SAN topology no longer requires manual changes to the volume manager or file
system configuration.
• On systems using the legacy mass storage stack, when configuring disks for use in Logical
Volume Manager, the administrator must manually add each lunpath to the LVM
configuration. The new mass storage stack automatically recognizes and manages
redundant paths.
HP encourages customers to begin using the new Agile View LUN hardware paths, although
legacy hardware paths and lunpath addresses are still supported.
The next few slides describe several 11i v3 commands for viewing the new hardware paths,
and for converting legacy addresses to their Agile View equivalents.
Student Notes
When configuring additional disk space for applications, administrators frequently need to
know which disks are available on the system. Use the ioscan command to view legacy
device hardware addresses. Add the –N option to view Agile View LUN hardware paths and
lunpaths rather than legacy addresses. To view a kernel-cached, full listing, add –k and –f.
Adding the –C disk option limits the output to disk class devices.
# ioscan
# ioscan -N
# ioscan -kfN
# ioscan -kfNC disk
Display a kernel-cached listing of disk class devices using Agile View addressing.
5–47. SLIDE: Viewing LUNs and their lunpaths via Agile View
Student Notes
The new –m lun option is specifically designed to display LUNs and lunpaths. Like the
legacy ioscan command, ioscan –m lun reports each device’s class, instance, hardware
path, driver, software state, hardware state, and description.
Between the hardware type and description fields note that ioscan –m lun also reports
the disk’s health status. online indicates that the disk or LUN is fully functional. limited,
unusable, disabled, or offline indicate that there may be a problem. See the ioscan(1m)
man page or the “Monitoring LUN Health” slide later in the module for details.
Below the LUN hardware path, the command reports all of the lunpath hardware addresses
available to access each LUN.
Below the lunpath hardware addresses, the command reports each device’s device special
files, too (e.g., /dev/disk/disk30). The next module discusses device special files in
detail.
# ioscan –m lun
Class I H/W Path Driver SW State H/W Type Health Description
====================================================================
disk 30 64000/0xfa00/0x4 esdisk CLAIMED DEVICE online HP HSV101
1/0/2/1/0.0x50001fe15003112c.0x4001000000000000
1/0/2/1/0.0x50001fe150031128.0x4001000000000000
1/0/2/1/1.0x50001fe15003112d.0x4001000000000000
1/0/2/1/1.0x50001fe150031129.0x4001000000000000
/dev/disk/disk30 /dev/rdisk/disk30
disk 31 64000/0xfa00/0x5 esdisk CLAIMED DEVICE online HP HSV101
1/0/2/1/0.0x50001fe15003112c.0x4002000000000000
1/0/2/1/0.0x50001fe150031128.0x4002000000000000
1/0/2/1/1.0x50001fe15003112d.0x4002000000000000
1/0/2/1/1.0x50001fe150031129.0x4002000000000000
/dev/disk/disk31 /dev/rdisk/disk31
By default, the command displays all disks and LUNs. Add the -H option to view a specific
disk or LUN.
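On a system with many LUNs, the multi-line ioscan -m lun layout can be tedious to read by eye. The sketch below counts the lunpaths per LUN by parsing a hard-coded copy of the two-LUN listing shown above; on a live 11i v3 system you would pipe real ioscan -m lun output through the same awk program instead.

```shell
# Count lunpaths per LUN in "ioscan -m lun" style output.
# The sample text is a hard-coded copy of the listing above, not
# output captured from a live system.
sample='disk 30 64000/0xfa00/0x4 esdisk CLAIMED DEVICE online HP HSV101
1/0/2/1/0.0x50001fe15003112c.0x4001000000000000
1/0/2/1/0.0x50001fe150031128.0x4001000000000000
1/0/2/1/1.0x50001fe15003112d.0x4001000000000000
1/0/2/1/1.0x50001fe150031129.0x4001000000000000
/dev/disk/disk30 /dev/rdisk/disk30
disk 31 64000/0xfa00/0x5 esdisk CLAIMED DEVICE online HP HSV101
1/0/2/1/0.0x50001fe15003112c.0x4002000000000000
1/0/2/1/0.0x50001fe150031128.0x4002000000000000
1/0/2/1/1.0x50001fe15003112d.0x4002000000000000
1/0/2/1/1.0x50001fe150031129.0x4002000000000000
/dev/disk/disk31 /dev/rdisk/disk31'

summary=$(printf '%s\n' "$sample" | awk '
  $1 == "disk"      { lun = "disk" $2 }   # header line: note the instance
  /^[0-9]+\/.*0x/   { n[lun]++ }          # lunpath line: count it
  END { for (l in n) printf "%s: %d lunpaths\n", l, n[l] }')
printf '%s\n' "$summary"
```

Each LUN in the sample reports four lunpaths, which matches the two-HBA, two-controller-port SAN layout shown on the slide.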
5–48. SLIDE: Viewing HBAs and their lunpaths via Agile View
Student Notes
When troubleshooting SAN issues, it may also be helpful to know which lunpaths utilize a
given HBA. Use the ioscan -kfNH command, followed by the HBA hardware address, to
find out. The example below lists the lunpaths serviced by the 1/0/2/1/0 HBA. Following each
lunpath, the command reports the device special file name of the disk or LUN associated with
that lunpath. The next module discusses device special files in detail.
Student Notes
Identifying failed interfaces and devices is a critical system administration task. HP-UX
automatically displays messages in /var/adm/syslog/syslog.log, and sometimes on
the console, when the operating system encounters hardware problems. 11i v3
administrators can proactively check the state of the system’s HBAs, controllers, disks, and
LUNs any time via the ioscan -P health command. The command reports one of the
following health states for each HBA and mass storage component node in the I/O tree.
limited        The node is online, but performance is degraded because some links,
               paths, or connections are offline.
unusable       An error condition occurred which requires manual intervention (for
               example, authentication failure, hardware failure, and so on).
# ioscan -P health -C fc
Report the health status of a specific fibre channel adapter and its lunpaths.
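As a quick troubleshooting aid, the health column can be filtered so that only problem nodes are reported. The sketch below does this over two illustrative lines in the Class / I / H/W Path / health layout shown later in the module; a live check would pipe ioscan -P health output (minus its header lines) through the same filter.

```shell
# List any node whose health column is not "online".
# The two sample lines are illustrative, in the Class/I/H-W Path/health
# layout; a live system would pipe `ioscan -P health` output instead.
health='lunpath 5 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000 online
lunpath 6 1/0/2/1/1.0x50001fe15003112d.0x4001000000000000 disabled'
bad=$(printf '%s\n' "$health" | awk '$NF != "online" { print $3, "is", $NF }')
printf '%s\n' "$bad"
```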
Use one of the LUN’s lunpath hardware addresses to determine a disk’s LUN ID:
# scsimgr get_attr \
-a lunid \
-H 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000
name = lunid
current = 0x4001000000000000 (LUN # 1, Flat Space Addressing)
default =
saved =
Student Notes
HP-UX administrators identify devices by hardware address, but SAN administrators identify
LUNs by their globally unique WWID names and array administrator-assigned LUN IDs. To
translate Agile View addresses into WWIDs and LUN IDs, use the scsimgr command.
Obtaining WWIDs
The first example on the slide displays LUN WWID attributes. To view all LUN WWIDs,
specify the all_lun argument. Or, to view a specific LUN’s WWID, include the -H option
and a specific LUN hardware path.
name = wwid
current = 0x600508b400012fd20000a00000250000
default =
saved =
name = wwid
current = 0x600508b400012fd20000900001900000
default =
saved =
name = wwid
current = 0x600508b400012fd20000a00000250000
default =
saved =
The second example displays a LUN’s LUN ID attribute using the LUN’s Agile View lunpath
address. In order to view the LUN ID, you must provide a specific lunpath. Recall that you
can obtain a LUN’s lunpaths via the ioscan -m lun command.
# scsimgr get_attr \
-a lunid \
-H 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000
name = lunid
current = 0x4001000000000000 (LUN # 1, Flat Space Addressing)
default =
saved =
These are just a few of the many attributes and statistics provided by the scsimgr
command. See the scsimgr(1m) man page for more options.
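Scripts that translate addresses for SAN administrators usually need just the value from the current line of a scsimgr get_attr report. A minimal extraction sketch, run against a hard-coded copy of the four-line name/current/default/saved layout shown above (the WWID value is the sample one from the slide):

```shell
# Pull the "current" value out of a scsimgr get_attr style report.
# The report text is a hard-coded sample matching the layout above.
attr='name = wwid
current = 0x600508b400012fd20000a00000250000
default =
saved ='
wwid=$(printf '%s\n' "$attr" | awk '$1 == "current" { print $3 }')
echo "WWID: $wwid"
```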
Disable a lunpath
# scsimgr -f disable \
-H 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000
LUN path 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000
disabled successfully
Reenable a lunpath
# scsimgr enable \
-H 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000
LUN path 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000
enabled successfully
Student Notes
By default, the new mass storage stack interleaves access requests among lunpaths to a LUN.
When planning to remove an interface card, or when troubleshooting SAN connectivity
issues, the administrator may choose to temporarily or permanently disable one or more
lunpaths to a LUN. Use the scsimgr commands below. As long as at least one path to a
LUN remains functional, the LUN should remain accessible.
# scsimgr -f disable \
-H 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000
LUN path 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000
disabled successfully
# ioscan -P health \
-H 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000
Class I H/W Path health
===================================================================
lunpath 5 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000 disabled
Reenable a lunpath:
# scsimgr enable \
-H 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000
LUN path 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000
enabled successfully
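Because a LUN stays reachable only while at least one lunpath remains enabled, a wrapper script might refuse to disable the final path. The guard logic can be sketched as below; the enabled-path list is hard-coded for illustration (a real script would build it from ioscan -P health output for the LUN), and the scsimgr command is only echoed, never executed.

```shell
# Guard sketch: refuse to disable a lunpath when it is the only
# enabled path left. The enabled-path list is hard-coded here; a real
# script would derive it from `ioscan -P health` for the target LUN.
target='1/0/2/1/0.0x50001fe15003112c.0x4001000000000000'
enabled='1/0/2/1/0.0x50001fe15003112c.0x4001000000000000
1/0/2/1/1.0x50001fe15003112d.0x4001000000000000'

# Count enabled paths other than the one we want to disable.
others=$(printf '%s\n' "$enabled" | grep -v -F -x "$target" | wc -l)
if [ "$others" -ge 1 ]; then
  decision="would run: scsimgr -f disable -H $target"
else
  decision="refusing: $target is the last enabled lunpath"
fi
echo "$decision"
```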
Configuring Hardware: Part 4: Slot Addressing
Student Notes
The preceding sections of this chapter discussed HP-UX hardware addresses. HP-UX
hardware addresses are useful for viewing and managing peripheral devices, LUNs, and disks.
Some entry-class servers and all mid-range and high-end HP-UX servers now enable
administrators to map an interface card’s HP-UX hardware address to a more meaningful
HP-UX slot address that identifies the card’s physical cabinet, bay, chassis, and slot address.
[Slide diagram: Superdome cabinet layout, showing blowers, backplane power, cell boards,
cabinets 0, 1, 8, and 9, the utility subsystem, I/O bay 0 and I/O bay 1, and fan positions.]
Student Notes
The slot address consists of four components, which identify an interface card’s exact
location on a system.
• The first portion of the slot address identifies a slot’s cabinet number. As shown on the
slide, a Superdome complex may have one or two system cabinets, and two additional I/O
expansion (IOX) cabinets. Superdome interface card slots in the first cabinet will have a
0 in the cabinet portion of the slot address. Superdome interface card slots in the second
cabinet will have a 1 in the cabinet portion of the slot address. Interface cards in the
expansion cabinets will have an 8 or 9 in the first portion of the slot address. On non-
Superdome systems, the cabinet number will always be 0.
• The second component of the slot address identifies the slot’s I/O bay. Each Superdome
cabinet has two I/O bays. I/O bay 0 is located on the front of the cabinet, and I/O bay 1 is
located in the rear of the cabinet. Each Superdome IOX cabinet can have three vertically
stacked I/O bays, numbered 1 to 3 from bottom to top. Additional space in the IOX can
be used to install peripheral devices. On non-Superdome systems, the I/O bay number
will always be 0. The diagram on the slide shows the location of the I/O bays on the front
and back of a Superdome cabinet.
• The third component of the slot address identifies the slot’s I/O chassis number. Each I/O
bay contains up to two I/O chassis. On Superdome systems, the I/O chassis are physically
distinct components; the chassis on the left is I/O chassis 1 and the I/O chassis on the
right is I/O chassis 3. On the rp7xxx, rx7xxx, rp8xxx, and rx8xxx servers, the two
logical I/O chassis are numbered 0 and 1, but they are located in a single physical card
cage.
• The fourth component of the slot address identifies the slot number. Each Superdome
I/O chassis has twelve slots, numbered 0-11. On rp7xxx, rx7xxx, rp8xxx, and rx8xxx
servers, each I/O chassis has eight slots numbered 1-8.
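The four components described above can be split apart mechanically. A tiny sketch (a hypothetical helper, not an HP utility) that decodes a cabinet-bay-chassis-slot string such as those shown in the olrad -q output below:

```shell
# Decode a slot address of the form cabinet-bay-chassis-slot.
# "0-1-3-2" is an illustrative Superdome-style address: cabinet 0,
# I/O bay 1 (rear), I/O chassis 3 (right-hand chassis), slot 2.
slot_addr='0-1-3-2'
IFS=- read cab bay chassis slot <<EOF
$slot_addr
EOF
echo "cabinet=$cab bay=$bay chassis=$chassis slot=$slot"
```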
# olrad -q
                    Bus  Max                      Driver(s) Capable   Max
Slot      Path      Num  Spd  Spd  Pwr Occu Susp  OLAR      OLD       Mode  Mode
0-0-1-1 1/0/8/1 396 133 133 Off No N/A N/A N/A PCI-X PCI-X
0-0-1-2 1/0/10/1 425 133 133 Off No N/A N/A N/A PCI-X PCI-X
0-0-1-3 1/0/12/1 454 266 266 Off No N/A N/A N/A PCI-X PCI-X
0-0-1-4 1/0/14/1 483 266 66 On Yes No Yes Yes PCI-X PCI
0-0-1-5 1/0/6/1 368 266 66 On Yes No Yes Yes PCI-X PCI
0-0-1-6 1/0/4/1 340 266 266 On Yes No Yes Yes PCI-X PCI-X
0-0-1-7 1/0/2/1 312 133 133 On Yes No Yes Yes PCI-X PCI-X
0-0-1-8 1/0/1/1 284 133 133 On Yes No Yes Yes PCI-X PCI-X
# rad -q
Slot Path Bus Speed Power Occupied Suspended Capable
0-0-0-1 0/0/8/0 64 66 On Yes No Yes
0-0-0-2 0/0/10/0 80 66 On Yes No Yes
0-0-0-3 0/0/12/0 96 66 On Yes No Yes
Student Notes
You can view slot addresses and correlate those addresses to HP-UX hardware addresses via
the rad command (11i v1) or olrad command (11i v2 and v3). These commands are only
available on servers that support slot addressing.
Path The Path column reports the slot’s corresponding HP-UX hardware path.
Bus Num The Bus Num column reports the slot’s bus number.
Max Spd The Max Spd column reports the maximum speed (MHz) supported by the
slot.
Spd The Spd column reports the maximum speed (MHz) supported by the card
currently in the slot.
Pwr The Pwr column reports the power status of the slot. Slots can be powered
up/down to facilitate I/O card replacement. This functionality will be
described in detail later in the chapter.
Susp The Susp column identifies slots in the suspended state. A card must be
suspended before it can be replaced.
OLAR The OLAR column (olrad) or Capable column (rad) indicates if the slot
supports HP’s OL* online card addition and replacement functionality.
OLD The OLD column (olrad) indicates if the slot supports HP’s OL* online
card delete functionality. This feature is new in 11i v3.
Max Mode The Max Mode column distinguishes PCI-X slots from PCI slots.
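Given the column meanings above, a short filter can shortlist the slots that are eligible for OL* work. The sketch below picks the powered-on, OLAR-capable slots from olrad -q style rows; the three sample rows are copied from the output earlier on the slide, and the field positions assume the twelve-column row layout shown there.

```shell
# From "olrad -q" style rows, list slots that are powered on and
# OLAR-capable. Sample rows copied from the output above. Fields:
# 1=Slot 2=Path 3=BusNum 4=MaxSpd 5=Spd 6=Pwr 7=Occu 8=Susp
# 9=OLAR 10=OLD 11=MaxMode 12=Mode.
rows='0-0-1-3 1/0/12/1 454 266 266 Off No N/A N/A N/A PCI-X PCI-X
0-0-1-6 1/0/4/1 340 266 266 On Yes No Yes Yes PCI-X PCI-X
0-0-1-7 1/0/2/1 312 133 133 On Yes No Yes Yes PCI-X PCI-X'
olar_ok=$(printf '%s\n' "$rows" | awk '$6 == "On" && $9 == "Yes" { print $1 }')
printf '%s\n' "$olar_ok"
```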
Configuring Hardware: Part 6: Managing Cards and Devices
Student Notes
5–57. SLIDE: Installing Interface Cards w/out OL* (11i v1, v2, v3)
Student Notes
The procedures on this and the following two slides describe how to properly install
additional interface cards.
On some entry-class servers, you must shut down and power off your system as described
below in order to add, remove, or replace an interface card. On servers that support HP’s
Online Addition and Replacement (OL*) functionality, you can shut down a single card slot
without shutting down your entire server. The OL* process is described over the next few
slides.
2. Verify that the required driver is configured in the kernel. Check your interface card
documentation to determine what driver is required, then use sam or kcweb to determine
if the required driver is configured in your kernel. A later chapter in this course discusses
kernel configuration in detail.
3. Use the shutdown command to properly shut down the system. When you see a
message indicating that it is safe to power off, either press the power button or use the
Management Processor power control (pc) command to power off your system.
# shutdown -hy 0
4. Install the interface card. Static discharge can easily damage interface cards. Be sure to
follow the anti-static guidelines that come with your interface card.
6. During the system boot process, the kernel should scan the system for new interface
cards and devices. Use ioscan -kfn to verify that the system recognized the new
interface card. Does your new device appear in the device list? Is the new device
CLAIMED? If not, it may be necessary to add a new driver to the kernel. See our kernel
configuration chapter later in the course!
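The post-boot verification in step 6 can be scripted: look up the hardware path in ioscan -f style output and test its S/W State column. The two sample lines below are illustrative (column order Class, Instance, H/W Path, Driver, S/W State, as in the outputs elsewhere in the chapter); a live check would pipe real ioscan -kf output instead.

```shell
# Verify that a hardware path shows up CLAIMED in "ioscan -f" style
# output. The listing is an illustrative sample; a live check would
# pipe `ioscan -kf` instead.
hw_path='0/9/0/0'
listing='lan 0 0/1/2/0 igelan CLAIMED INTERFACE HP PCI 1000Base-T
ext_bus 2 0/9/0/0 c8xx CLAIMED INTERFACE SCSI C1010 Ultra Wide'
state=$(printf '%s\n' "$listing" | awk -v p="$hw_path" '$3 == p { print $5 }')
if [ "$state" = CLAIMED ]; then
  result="$hw_path: CLAIMED"
else
  result="$hw_path: not claimed -- check the kernel driver"
fi
echo "$result"
```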
WARNING: Always check your support agreement before opening the cabinet of
any HP system. Attempting to service hardware components without
the assistance of an HP engineer may invalidate your warranty or
support agreement.
HP’s OL* technology makes it possible to add and replace interface cards
without rebooting.
Student Notes
Prior to HP-UX 11i, adding or removing an interface card always required a system reboot, as
described in the process on the preceding slide. HP-UX 11i v1 introduced a new technology
called "Interface Card Online Addition and Replacement" (OL*), which provides the ability to
add and replace PCI interface cards without a system reboot. HP-UX 11i v3 enables the
administrator to permanently remove interface cards without rebooting, too.
The OL* functionality is currently only supported by selected interface cards, on selected
servers running HP-UX 11i v1, v2, and v3. See your system hardware manual to determine if
your server supports this functionality. The notes below describe the OL* process on 11i v1.
The next slide describes the 11i v2 and v3 OL* process.
For details on performing this process from the
command line, see chapter 2 in HP's Configuring Peripherals for HP-UX manual on
http://docs.hp.com.
1. Verify card compatibility. Check the documentation accompanying your interface card to
verify that the card is OL* compatible. Check your system owner's manual for details.
2. Verify that the required driver is configured in the kernel. Without the proper driver
configured, you may be able to physically install the card, but the card will be unusable.
Check your interface card documentation to determine what driver is required, then use
the sam -> Kernel Configuration -> Drivers screen to determine if the
required driver is configured in your kernel. A later chapter will describe the process
required to add a new driver.
3. Go to the sam -> Peripheral Devices -> Cards screen. This screen lists all of
the interface cards installed on your system, and includes several items in the Actions
menu for managing OL* interface cards.
+------------------------- Cards ---------------------------------+
|File View Options Actions Help |
|I/O Cards 2 of 23 selected|
|-----------------------------------------------------------------|
| Hardware Slot |
| Slot Path Driver State Power Description |
|+--------------------------------------------------------------+ |
|| - 0/0/2/1 c720 not OLAR-able - SCSI C87x Fast Wid | |
|| - 0/0/4/0 func0 not OLAR-able - PCI BaseSystem (10 | |
|| - 0/0/4/1 asio0 not OLAR-able - Service Processor | |
|| 5 0/2/0 - - on empty slot | |
|| 6 0/5/0 - - on empty slot | |
|| 7 0/1/0 - - on empty slot | |
|| 8 0/3/0 - - on empty slot | |
|| 9 0/9/0/0 c8xx active on SCSI C1010 Ultra W | |
|| 9 0/9/0/1 c8xx active on SCSI C1010 Ultra W | |
|| 10 0/8/0/0 c8xx active on SCSI C1010 Ultra W | |
|| 10 0/8/0/1 c8xx active on SCSI C1010 Ultra W | |
|+--------------------------------------------------------------+ |
+-----------------------------------------------------------------+
4. Select an empty slot from the object list. Slots that are available for use by new interface
cards should be marked either empty slot or unclaimed card in the Description
field. Select one of these slots.
6. Select Actions -> Add to analyze the slot. In order to insert the new interface card,
the selected PCI slot must be powered down. On some servers, multiple slots may share
a common "power domain". Slots within a power domain are powered on or off as a unit.
Powering off the power domain containing the interface card for the system boot disk or
other critical system resources could be disastrous! SAM automatically analyzes the
selected slot's power domain to ensure that it is safe to temporarily disable the power
domain while the new card is being added.
8. SAM will power on the card, identify and "bind" an appropriate kernel driver, and run a
post-addition script to finish configuring the card, if necessary.
9. Check ioscan to verify that the card is recognized. Does the card appear in the
hardware list? Is it CLAIMED?
WARNING: Be sure to check your support agreement before opening the cabinet
of an HP system. Attempting to service hardware components without
the assistance of an HP engineer may invalidate your warranty or
support agreement.
5–59. SLIDE: Installing Interface Cards with OL* (11i v2, v3)
Student Notes
Installing an OL* capable interface card in 11i v2 and v3 is a multi-step process that may be
performed from the command line via the /usr/bin/olrad CLI utility or the SMH GUI/TUI
interfaces. The procedure for adding an OL* interface card via the SMH is described below.
1. Verify card compatibility. Check the documentation accompanying your interface card to
verify that the card is OL* compatible. Check your system's hardware manual for details.
2. Verify that the required driver is configured in the kernel. Without the proper driver
configured, you may be able to physically install the card, but the card will be unusable.
Check your interface card documentation to determine what driver is required. A later
chapter will describe the process required to view and add drivers.
3. Launch the SMH and access the “OLRAD Cards” tab on the “Peripheral Device Tool”. To
learn more about enabling and launching the SMH, see the SMH chapter elsewhere in this
course. Log in using the root username and password.
Select an empty slot from the object list. Slots that are available for use by new interface
cards should report no in the Occupied column.
4. Click Turn On/Off Slot LED. Check the slot LEDs on the backplane of your system
to verify that you selected the right slot.
5. Click Add Card Online. This should display a dialog box similar to the following:
6. Click Run CRA (Critical Resource Analysis) to analyze the slot. In order to insert the
new interface card, the selected PCI slot must be powered down. On some servers,
multiple slots may share a common OL* "power domain". Slots within a power domain
are powered on or off as a unit. Powering off the power domain containing the interface
card for the system boot disk or other critical system resources could be disastrous!
pdweb automatically analyzes the selected slot's power domain to ensure that it is safe to
temporarily disable the power domain while the new card is being added. A CRA may
report several different outcomes:
System Critical Impacts: Performing an OL* operation on the selected slot will likely
impact /, /stand, /usr, or /etc file systems, or a swap
device. Proceeding with the OL* operation may crash or
significantly degrade system performance.
Data Critical Impacts: Performing an OL* operation on the selected slot will likely
impact one or more locally mounted file systems, open device
files, or non-suspended network interface cards. Proceeding
with the OL* operation may cause data corruption. Loss of a
CDFS file system will not trigger a Data Critical CRA warning.
Other Impacts: Performing an OL* operation on the selected slot may impact
unused logical volumes, CDFS file systems, cards protected by
high-availability resources, networking cards that are
suspended, or one path to a multi-pathed logical volume.
8. Insert the new card. Ensure that the card slot latch is closed firmly.
10. Finally, check ioscan -f to verify that the card is recognized. Does the card appear in
the hardware list? Is it CLAIMED?
When replacing an interface card online, you must use an identical replacement card. This is
referred to as like-for-like replacement. Using a similar but not identical card can cause
unpredictable results. For example, a newer version of the target card with identical
hardware may use a newer firmware version that conflicts with the current driver. If the
replacement card is not acceptable, the system will report that the card cannot be resumed,
and olrad/pdweb will return an error.
During the replacement process, the driver instance for each port on the target card runs in a
suspended state. I/O to the ports is either queued or failed while the drivers are suspended.
When the replacement card comes online, the driver instances resume normal operation.
Each driver instance must be capable of resuming and controlling the corresponding port on
the replacement card.
The PCI specification enables a single physical card to contain more than one port.
Attempting to replace a card with another card that has more ports can result in the
additional ports being claimed by other drivers if an ioscan occurs when slot power is on.
Removing a Card
In 11i v3, OL* also enables the administrator to permanently remove an interface card
without rebooting. Ensure that the card isn’t currently being used, then select the card in the
SMH interface and select the Delete Card Online link on the main menu.
During the deletion process, the driver instance for each port on the target card is suspended.
I/O to the ports is either queued or failed while the drivers are suspended. When the card is
removed, the driver instances are deleted.
WARNING: Be sure to check your support agreement before opening the cabinet
of any HP system. Attempting to service hardware components
without the assistance of an HP engineer may invalidate your warranty
or support agreement.
Student Notes
After installing a new interface card, you may choose to attach new devices, too. The
procedures below explain the process to install both “hot-pluggable” and “non-hot-pluggable”
devices.
2. Verify that the required driver is configured in the kernel via sam (11i v1) or kcweb (11i
v2 and v3).
4. Run ioscan to add the device to the kernel iotree. Don’t include the -u or -k options; in
order to recognize the new device, ioscan must scan the buses rather than simply report
the devices already recorded in the iotree.
5. Run insf to create device files. Device files allow users and applications to access
peripheral devices. Device files are discussed in detail in the next chapter.
NOTE: Even if a device is hot-pluggable, you must shut down any daemons using the
device before you remove the device.
2. Verify that the required driver is configured in the kernel via sam (11i v1) or kcweb (11i
v2).
6. Run ioscan to verify auto-configuration. Verify that the device appears in the ioscan
output and is CLAIMED.
Additional Configuration
Some additional configuration may be required after physically connecting a new device.
Terminals and modems may require new device files. Disks may need to be partitioned
before they can be used. The next couple of chapters will discuss these additional
configuration tasks in detail.
Directions
The ioscan command is a powerful tool for exploring your system's hardware
configuration.
Your goal in this part of the lab is to explore your assigned lab system’s configuration.
Carefully record the commands you use to obtain the information requested below.
1. Login as root on your assigned server.
2. Execute the model command to determine your system’s model string. Consult the table
of HP server types earlier in the chapter to determine whether your system is an entry
class, blade, mid-range, or high-end server.
3. Execute machinfo to determine your system’s processor type and speed. Some older
PA-RISC systems do not support machinfo. If your system generates an error message,
skip this question.
4. Execute machinfo to determine the amount of physical memory on your system. Some
older PA-RISC systems do not support machinfo. If your system generates an error
message, you can determine the amount of physical memory by executing dmesg | grep
-i physical.
5. Execute ioscan -C cell to determine how many (if any) cell boards you have on your
system.
6. Execute ioscan -C processor to determine how many processor cores you have on
your system.
7. Execute ioscan -C lan to determine how many LAN interfaces you have on your
system.
8. Execute ioscan -C disk to determine how many disk class devices you have on your
lab system.
9. DVDs and CDROMs are disk class devices, too. Execute ioscan -C disk and look in
the Description column for the string DVD or DV to determine if you have a DVD drive.
10. Are there any parallel SCSI buses on your system? Execute ioscan -C ext_bus to
view external bus type components. Look in the Description column for the string
“SCSI”.
# ioscan
# ioscan -f
# ioscan -N
# ioscan -k
# ioscan -kfN
2. Does your system have any SCSI ext_bus’es? If so, can you determine their hardware
paths?
3. Skip this question if your system does not have SCSI buses. If your system does have one
or more SCSI buses, how many devices are on the first bus? Execute the command
below to find out. Replace the hardware path below with the first SCSI bus hardware
path you discovered in the previous step.
4. Skip this question if your system does not have any SCSI buses. If you add a new device
to the SCSI bus you explored in the previous step, which SCSI target addresses have
already been claimed by existing devices on the bus?
5. 11i v3’s new mass storage stack introduced some helpful new tools for managing disks
and LUNs, particularly on systems with multi-pathed devices. Execute ioscan -m lun
to determine which disks (if any) on your system are multi-pathed. If so, how many paths
lead to each disk/LUN?
# ioscan -m lun
6. Choose a disk or LUN from the ioscan -m lun output above and record its LUN
hardware path and one of its lunpath hardware addresses below. If your system has
multi-pathed LUNs, use one of the multi-pathed LUNs.
Conceptually, what is the difference between a LUN hardware address and a lunpath
hardware address?
7. Recall that ioscan -m lun also reports each LUN’s health status. Are any of your
LUNs currently disabled?
# ioscan -m lun
8. When troubleshooting SAN problems, your storage administrators may ask you to
determine a LUN’s WWID. Execute the command below to determine the WWID of the
disk or LUN you selected in the previous question.
9. You may also be asked to determine a LUN’s LUN ID. Use the lunpath hardware address
that you selected previously to determine the LUN’s LUN ID.
# olrad -q
                    Bus  Max                      Driver(s) Capable   Max
Slot      Path      Num  Spd  Spd  Pwr Occu Susp  OLAR      OLD       Mode  Mode
0-1-1-1 2/0/1/1 21 133 133 Off No N/A N/A N/A PCI-X PCI-X
0-1-1-2 2/0/2/1 42 133 133 On Yes No Yes Yes PCI-X PCI-X
0-1-3-2 2/0/3/1 63 133 133 On Yes No Yes Yes PCI-X PCI-X
0-1-3-4 2/0/4/1 84 133 133 On Yes No Yes Yes PCI-X PCI-X
2. If the complex has two cabinets, which cabinet is the card in?
4. Within the I/O bay, is the card’s I/O chassis on the left or right?
6. Based on the output above, is it safe to remove the card from the chassis now?
7. Execute the olrad -q command. Does your lab system support OL* functionality?
# olrad -q
If time permits, explore the Peripheral Devices functional area in the SMH.
In the HP Virtual Lab, use the SMH button that is available from the reservation window to
open an SMH browser.
From the Home Page, click "System Configuration." From the System Configuration Window,
click "Peripheral Devices"
A similar Peripheral Devices functional area exists in sam in earlier versions
of HP-UX.
See if you can find the QuickSpecs page for one or two server models.
If you need help finding the QuickSpecs, ask your instructor.
Directions
The ioscan command is a powerful tool for exploring your system's hardware
configuration.
Your goal in this part of the lab is to explore your assigned lab system’s configuration.
Carefully record the commands you use to obtain the information requested below.
1. Login as root on your assigned server.
2. Execute the model command to determine your system’s model string. Consult the table
of HP server types earlier in the chapter to determine whether your system is an entry
class, blade, mid-range, or high-end server.
Answer:
# model
3. Execute machinfo to determine your system’s processor type and speed. Some older
PA-RISC systems do not support machinfo. If your system generates an error message,
skip this question.
Answer:
# machinfo
4. Execute machinfo to determine the amount of physical memory on your system. Some
older PA-RISC systems do not support machinfo. If your system generates an error
message, you can determine the amount of physical memory by executing dmesg | grep
-i physical.
Answer:
# machinfo
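For the older PA-RISC case, the dmesg fallback in the answer above can be illustrated on sample boot-message text. The memory line below is modeled on typical HP-UX dmesg output; the exact wording and values vary by system and release, so treat it as a format sketch rather than real output.

```shell
# Illustrate the `dmesg | grep -i physical` fallback on sample text.
# The "Physical:" line is modeled on typical HP-UX boot messages;
# exact wording and values vary by system and release.
dmesg_out='Memory Information:
    physical page size = 4096 bytes, logical page size = 4096 bytes
    Physical: 4186112 Kbytes, lockable: 3128904 Kbytes, available: 3628224 Kbytes'
mem=$(printf '%s\n' "$dmesg_out" | grep -i physical)
printf '%s\n' "$mem"
```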
5. Execute ioscan -C cell to determine how many (if any) cell boards you have on your
system.
Answer:
# ioscan -C cell
6. Execute ioscan -C processor to determine how many processor cores you have on
your system.
Answer:
# ioscan -C processor
7. Execute ioscan -C lan to determine how many LAN interfaces you have on your
system.
Answer:
# ioscan -C lan
8. Execute ioscan -C disk to determine how many disk class devices you have on your
lab system.
Answer:
# ioscan -C disk
9. DVDs and CDROMs are disk class devices, too. Execute ioscan -C disk and look in
the Description column for the string DVD or DV to determine if you have a DVD drive.
Answer:
# ioscan -C disk
10. Are there any parallel SCSI buses on your system? Execute ioscan -C ext_bus to
view external bus type components. Look in the Description column for the string
“SCSI”.
Answer:
# ioscan -C ext_bus
# ioscan
# ioscan -f
# ioscan -N
# ioscan -k
# ioscan -kfN
Answer:
When executed without any options, ioscan scans the buses and reports each hardware
component’s legacy hardware path, class, and description.
The -f option adds several additional columns to the output, including the driver name,
instance number, SW State, and HW Type.
The -N option displays Agile View hardware addresses rather than legacy hardware
addresses.
The last example combines the last three options to display a full listing of Agile View
hardware paths using kernel cached information. This is one of the most popular
permutations of the ioscan command.
2. Does your system have any SCSI ext_bus’es? If so, can you determine their hardware
paths?
Answer:
3. Skip this question if your system does not have SCSI buses. If your system does have one
or more SCSI buses, how many devices are on the first bus? Execute the command
below to find out. Replace the hardware path below with the first SCSI bus hardware
path you discovered in the previous step.
Answer:
4. Skip this question if your system does not have any SCSI buses. If you add a new device
to the SCSI bus you explored in the previous step, which SCSI target addresses have
already been claimed by existing devices on the bus?
Answer:
Look at the second to last component in each SCSI device address to determine which
target addresses are already taken. There must not be duplicate SCSI target addresses on
a SCSI bus.
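That second-to-last component can be extracted mechanically. A sketch using illustrative legacy paths for three devices on one bus (the paths are hypothetical examples; on a real system they would come from ioscan output for the bus in question):

```shell
# List the SCSI target addresses (second-to-last dot-separated
# component) in use on a bus, given legacy device hardware paths.
# The three paths are hypothetical examples, not live ioscan output.
paths='0/9/0/0.0.0
0/9/0/0.3.0
0/9/0/0.6.0'
targets=$(printf '%s\n' "$paths" | awk -F. '{ print $(NF-1) }' | sort -n)
printf '%s\n' "$targets"
```

Any target number missing from this list is available for a new device on the bus.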
5. 11i v3’s new mass storage stack introduced some helpful new tools for managing disks
and LUNs, particularly on systems with multi-pathed devices. Execute ioscan -m lun
to determine which disks (if any) on your system are multi-pathed. If so, how many paths
lead to each disk/LUN?
# ioscan -m lun
Answer:
If ioscan lists multiple lunpaths below an Agile View LUN hardware path, the LUN is
multi-pathed.
6. Choose a disk or LUN from the ioscan -m lun output above and record its LUN
hardware path and one of its lunpath hardware addresses below. If your system has
multi-pathed LUNs, use one of the multi-pathed LUNs.
Conceptually, what is the difference between a LUN hardware address and a lunpath
hardware address?
Answer:
A LUN hardware path represents a disk or LUN. A lunpath hardware address represents
a single path to a disk or LUN. Each LUN has one LUN hardware path, but may have
multiple lunpath hardware addresses.
7. Recall that ioscan -m lun also reports each LUN’s health status. Are any of your
LUNs currently disabled?
# ioscan -m lun
Answer:
8. When troubleshooting SAN problems, your storage administrators may ask you to
determine a LUN’s WWID. Execute the command below to determine the WWID of the
disk or LUN you selected in the previous question.
Answer:
9. You may also be asked to determine a LUN’s LUN ID. Use the lunpath hardware address
that you selected previously to determine the LUN’s LUN ID.
Answer:
# olrad -q
                    Bus  Max                      Driver(s) Capable   Max
Slot      Path      Num  Spd  Spd  Pwr Occu Susp  OLAR      OLD       Mode  Mode
0-1-1-1 2/0/1/1 21 133 133 Off No N/A N/A N/A PCI-X PCI-X
0-1-1-2 2/0/2/1 42 133 133 On Yes No Yes Yes PCI-X PCI-X
0-1-3-2 2/0/3/1 63 133 133 On Yes No Yes Yes PCI-X PCI-X
0-1-3-4 2/0/4/1 84 133 133 On Yes No Yes Yes PCI-X PCI-X
Answer:
2. If the complex has two cabinets, which cabinet is the card in?
Answer:
Cabinet 0, which is usually on the left when facing the front of the complex.
Answer:
4. Within the I/O bay, is the card’s I/O chassis on the left or right?
Answer:
Answer:
Slot 2.
6. Based on the output above, is it safe to remove the card from the chassis now?
Answer:
Since the card is powered on and is not suspended, you should not remove the card.
7. Execute the olrad -q command. Does your lab system support OL* functionality?
# olrad -q
Answer:
If you get a message reporting “Capability not implemented; Could not obtain information
of all slots”, your server doesn’t support OL*. If you get a list of card slots, your server
does support OL*.
If time permits, explore the Peripheral Devices functional area in the SMH.
In the HP Virtual Lab, use the SMH button that is available from the reservation window to
open an SMH browser.
From the Home Page, click "System Configuration." From the System Configuration window,
click "Peripheral Devices."
A similar Peripheral Devices functional area exists in SAM in earlier versions
of HP-UX.
See if you can find the QuickSpecs page for one or two server models.
If you need help finding the QuickSpecs, ask your instructor.
• Explain the significance of major and minor numbers, and block and character I/O.
• Describe the legacy DSF naming convention for disks, LUNs, DVDs, tapes, autochangers,
terminals, and modems.
• Describe the persistent DSF naming convention for disks, LUNs, DVDs, tapes, and
autochangers.
• Use ioscan to list legacy and persistent DSFs associated with devices.
Student Notes
UNIX applications access peripheral devices such as tape drives, disk drives, printers,
terminals, and modems via special files in the /dev directory called Device Special Files
(DSFs). Every peripheral device typically has one or more DSFs.
The same read() and write() system calls used to read or write data to a disk-based file
can also be used to read or write data to a tape drive, terminal device, or any other device via
the device’s DSF. This allows application developers to easily access peripheral devices
using familiar system calls, with minimal knowledge of the system’s underlying hardware
architecture.
The tar application creates (-c) a backup archive on the file specified by the -f option.
Since device files allow applications to access devices using the same system calls that are
used to access files, the -f option may be used to write a tar archive to either a tape drive
(e.g.: /dev/rtape/tape0_BEST) or a disk-based file (e.g.: /tmp/archive.tar).
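For instance, the sketch below archives a scratch directory to a disk-based file. The paths under /tmp are illustrative choices; pointing -f at a tape DSF such as /dev/rtape/tape0_BEST would write the same archive to tape, thanks to the uniform DSF interface.

```shell
# Create some sample data to archive.
mkdir -p /tmp/demo
echo "sample data" > /tmp/demo/file1
# -c creates an archive; -f names the destination (a file here, but a
# tape DSF would work identically).
tar -cf /tmp/archive.tar /tmp/demo
# -t lists the archive contents to verify the backup.
tar -tf /tmp/archive.tar
```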
This second example redirects standard output of the echo command to a terminal via the
terminal’s device file.
NOTE: The terms “Device Special File”, “DSF”, “Device File”, and “Special File” are
used interchangeably.
DSF Attributes
DSF file attributes determine which device a DSF accesses, and how
• Type: Access the device in block or character mode?
• Permissions: Who can access the device?
• Major#: Which kernel driver does the DSF use?
• Minor#: Which device does the DSF use? And how?
• Name: What is the DSF name?
Use ll to view a device file’s attributes
# ll /dev/*disk/disk*
brw-r----- 1 bin sys 3 0x000004 Jun 23 00:34 /dev/disk/disk30
brw-r----- 1 bin sys 3 0x000005 Jun 23 00:34 /dev/disk/disk31
crw-r----- 1 bin sys 22 0x000004 Jun 23 00:34 /dev/rdisk/disk30
crw-r----- 1 bin sys 22 0x000005 Jun 23 00:34 /dev/rdisk/disk31
Student Notes
Every file on a UNIX system has an associated structure called an inode that records the file’s
owner, group, permissions, size, and other attributes. Every DSF also has an inode. Some
DSF file attributes are similar to regular file attributes; others are DSF-specific. The ll
command may be used to view the file attributes associated with both data files and DSFs.
The notes below highlight some of the significant DSF file attributes.
Character Device Files File type "c" identifies character mode DSFs. Character mode DSFs
transfer data to the device one character at a time. Devices such as
terminals, printers, plotters, modems, and tape drives are typically
accessed via character mode DSFs. Character mode DSFs are
sometimes called "raw" device files.
Block Device Files File type "b" identifies block mode DSFs. When accessing a
device via a block mode DSF, the system reads and writes data
through a buffer in memory, rather than transferring the data
directly to the physical disk. This can significantly improve I/O for
disks, LUNs, and CD-ROMs. Block device files are sometimes called
"cooked" device files.
Terminals, modems, printers, plotters, and tape drives typically only have character device
files. Disks, LUNs, and CD-ROMs may be accessed in either character or block mode, and
thus typically have both types of device files. Some applications and utilities prefer to access
disks directly via character mode DSFs. Other utilities require a block mode DSF. Read the
application or utility documentation to determine which device file is required.
# ll /dev/console
crw--w--w- 1 root sys 0 0x000000 Jun 27 13:17 /dev/console
Recall that the mesg n command prevents other users from sending messages to the local
terminal device. mesg accomplishes this by changing the permissions on the user’s terminal
device file.
# mesg n
# ll /dev/console
crw------- 1 root sys 0 0x000000 Jun 27 13:17 /dev/console
Though administrators can change DSF file permissions, it’s generally best to retain the
permissions applied by the insf and mksf commands when they initially create DSFs.
The lsdev command lists the drivers configured in the kernel, and their associated major
numbers. The third column in the lsdev output reports driver names. The first column
reports each driver’s character major number. The second column reports each driver’s
block major number. Block major number -1 indicates that the driver doesn’t support block
mode access.
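lsdev itself only exists on HP-UX, but its column layout can be illustrated with sample output. The sketch below (sample values chosen to match the major numbers seen elsewhere in this chapter, not captured from a real system) filters out drivers that lack block-mode support:

```shell
# lsdev-style columns: character major, block major, driver, class.
# A block major of -1 means the driver has no block-mode support.
cat <<'EOF' > /tmp/lsdev.sample
Character  Block  Driver   Class
   22         3   esdisk   disk
   23        -1   estape   tape
    0        -1   cn       pseudo
EOF
# Skip the header, then print drivers whose block major is not -1.
awk 'NR > 1 && $2 != -1 { print $3 }' /tmp/lsdev.sample   # -> esdisk
```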
Some of the bits in the minor number identify which device the DSF is associated with.
Other bits in the minor number may represent device-specific access options. Tape drives,
for instance, have special access options that enable/disable hardware compression and
define the density format used when writing to the tape.
Fortunately, HP-UX auto-configures most device files, so administrators very rarely have to
manually assign minor numbers anymore. Also, the lssf command automatically translates
a DSF’s hexadecimal minor numbers into human-readable format.
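As a quick illustration of the hexadecimal-to-decimal translation that lssf performs, any POSIX shell can decode a minor number by hand:

```shell
# Decode the hex minor numbers from the ll listing shown earlier.
printf '%d\n' 0x000004   # -> 4
printf '%d\n' 0x000005   # -> 5
```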
Legacy DSFs                                 Persistent DSFs
DSFs are created for each LUN path          DSFs are created for each WWID
DSFs change if SAN topology changes         DSFs are unaffected by SAN topology changes
DSFs are only auto-configured at startup    DSFs are auto-configured after LUN creation
Student Notes
HP-UX 11i v3 supports two different types of DSFs.
“Legacy” DSFs are supported in HP-UX 11i v1, v2, and v3, but will be deprecated in a future
release. In the legacy addressing scheme, each device path is represented by a minor
number, a legacy DSF and a legacy hardware path. The legacy DSF’s minor number directly
encodes the corresponding device path’s bus, target, and LUN numbers, as well as device
access options.
“Persistent” DSFs are new in 11i v3. Persistent DSFs provide a persistent, path-independent
representation of a device bound to the device’s Agile View LUN hardware path and World
Wide Identifier (WWID).
The notes below highlight the significant differences between the two DSF types.
The new mass storage stack creates a single Agile View LUN hardware path for each
disk/tape/LUN regardless of the number of underlying paths to the device. The new stack
also creates a block and raw persistent DSF for each Agile View LUN hardware path / WWID.
The persistent DSFs represent the LUN itself, rather than a specific path to the LUN. This
approach greatly simplifies volume and system management.
Because the persistent DSF represents a LUN rather than a lunpath, persistent DSFs aren’t
affected when a device is moved to a different HBA or SAN switch.
Auto-configuration
11i v1 and v2 automatically create device files for new devices during system startup. When
adding LUNs or other devices to a running system, though, the administrator must execute
the insf command to auto-configure DSFs. 11i v3 recognizes new LUNs and creates DSFs
automatically.
Scalability
Minor numbers are 24-bit numbers. In legacy DSFs, 15 bits in the minor number represent
the device address, and 9 bits represent the DSF’s special access options. With just 15
address bits, legacy DSFs can represent 2^15 = 32,768 LUN paths. The legacy storage stack
further limits the number of concurrently active LUNs to 8192.
Persistent DSF minor numbers are also 24 bits. However, persistent DSFs use all 24 bits to
identify the device itself. As a result, persistent DSFs can represent 2^24 = 16,777,216 LUNs.
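These limits follow directly from the bit widths, as shell arithmetic confirms:

```shell
# 15 address bits (legacy) vs. all 24 minor-number bits (persistent).
echo $(( 1 << 15 ))   # -> 32768
echo $(( 1 << 24 ))   # -> 16777216
```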
DSF Directories
/dev
Student Notes
The next few slides introduce the structure of the /dev directory, and the standard naming
convention used to assign names to legacy and persistent DSFs. An understanding of
the device file naming convention will allow you to more easily manage and use DSFs on
your system.
The slide above describes the structure of the /dev directory. The next few slides describe
the contents of the /dev subdirectories in detail.
# ioscan -kf
Class I H/W Path Description
=====================================================
ext_bus 5 1/0/2/1/0.6.1.0.0 FCP Array Interface
disk 3 1/0/2/1/0.6.1.0.0.0.1 HP HSV101
ext_bus 7 1/0/2/1/0.6.2.0.0 FCP Array Interface
disk 6 1/0/2/1/0.6.2.0.0.0.1 HP HSV101
ext_bus 9 1/0/2/1/1.1.2.0.0 FCP Array Interface
disk 9 1/0/2/1/1.1.2.0.0.0.1 HP HSV101
ext_bus 11 1/0/2/1/1.1.3.0.0 FCP Array Interface
disk 12 1/0/2/1/1.1.3.0.0.0.1 HP HSV101
Legacy DSF name anatomy:
/dev/dsk/c11t0d1[options]
         ^  ^ ^
         |  | +- LUN ("d1")
         |  +--- Target ("t0")
         +------ Bus instance ("c11")   [options] = device-dependent options
Student Notes
Legacy disk, LUN, DVD, auto-changer, and tape DSF names follow the convention shown on
the slide, in which the DSF name and minor number encode the associated hardware path’s
bus or controller instance number, target number, and LUN number, plus the associated
device access options. Devices accessed via multiple paths have separate legacy DSFs
representing each path.
# ioscan -kf
Class I H/W Path Description
=====================================================
ext_bus 5 1/0/2/1/0.6.1.0.0 FCP Array Interface
disk 3 1/0/2/1/0.6.1.0.0.0.1 HP HSV101 <- 1st path
ext_bus 7 1/0/2/1/0.6.2.0.0 FCP Array Interface
disk 6 1/0/2/1/0.6.2.0.0.0.1 HP HSV101 <- 2nd path
ext_bus 9 1/0/2/1/1.1.2.0.0 FCP Array Interface
disk 9 1/0/2/1/1.1.2.0.0.0.1 HP HSV101 <- 3rd path
ext_bus 11 1/0/2/1/1.1.3.0.0 FCP Array Interface
disk 12 1/0/2/1/1.1.3.0.0.0.1 HP HSV101 <- 4th path
The ioscan output on the slide represents four paths to a single LUN. The legacy DSF
scheme assigns independent DSFs to each path. The notes below describe each component
in the fourth hardware path, 1/0/2/1/1.1.3.0.0.0.1.
To view assigned instance numbers, look in the I column in the ioscan -kf output. To
improve readability, the screenshot on the slide only shows selected columns and rows from
the ioscan -kf output.
The number following the "c" in a LUN, disk, tape, or DVD DSF name identifies the device
path’s SCSI bus or Fibre Channel array controller ext_bus instance number. The disk path
represented by legacy hardware path 1/0/2/1/1.1.3.0.0.0.1 on the slide would have device files
beginning with "c11", since the instance number of the ext_bus at legacy hardware path
1/0/2/1/1.1.3.0.0 is "11".
Note that each device also has an instance number. Legacy DSF names utilize the ext_bus
instance number rather than the device instance number. Legacy DSFs allocate 8 bits to
represent the bus/controller portion of the device address in the minor number. Thus, legacy
DSFs support up to 256 bus/controller instances.
Target Numbers
The number following the "t" in a LUN, disk, tape, or DVD DSF name identifies the device’s
target address, which appears in the second-to-last component of the device’s hardware path.
The target address for hardware path 1/0/2/1/1.1.3.0.0.0.1 is 0.
LUN Numbers
The number following the "d" in a LUN, disk, tape, or DVD DSF name identifies the device’s
LUN number, which appears in the last component of the device’s hardware path. The LUN
number for hardware path 1/0/2/1/1.1.3.0.0.0.1 is 1.
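Putting the three rules together, a legacy DSF name can be decomposed with plain shell string operations. This is an illustrative sketch using the slide’s example name; nothing in it is HP-UX-specific:

```shell
# Split a legacy cXtYdZ DSF name into bus instance, target, and LUN.
dsf=c11t0d1
bus=${dsf#c};     bus=${bus%%t*}         # strip "c", keep up to "t"  -> 11
target=${dsf#*t}; target=${target%%d*}   # between "t" and "d"        -> 0
lun=${dsf##*d}                           # after the final "d"        -> 1
echo "bus=$bus target=$target lun=$lun"  # -> bus=11 target=0 lun=1
```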
Recall that the LUN number in the last component of the hardware path, and following the
“d” in the legacy DSF, doesn’t fully represent the LUN ID assigned by the array administrator.
The legacy addressing scheme only provides three bits (eight addresses) in the last component
of an HP-UX hardware path. Since today’s arrays often present hundreds of LUNs, legacy
hardware addresses use the last three components of the hardware path, together, to
represent the LUN ID. The table below shows the legacy hardware paths and DSF names
that would be used to represent the first sixteen LUNs in an array.
Limitations
Legacy DSF minor numbers only allocate 15 bits to identify the DSF’s associated device.
These 15 bits allow legacy DSFs to address at most 2^15 = 32,768 LUN paths per system. Above
the legacy addressing scheme limits, only persistent device special files are created.
c5t0d1
c7t0d1
c9t0d1
c11t0d1
/dev/dsk/c5t0d1 /dev/rdsk/c5t0d1
/dev/dsk/c7t0d1 /dev/rdsk/c7t0d1
/dev/dsk/c9t0d1 /dev/rdsk/c9t0d1
/dev/dsk/c11t0d1 /dev/rdsk/c11t0d1
Legacy DSF names for tape drive and auto-changers also follow a cxtxdx format, but reside
in /dev/rmt/ and /dev/rac/. Kernel drivers for tape drives and autochangers typically
only support raw device files.
Terminals, modems, and printers follow a very different format. Later slides describe each
device type’s unique DSF requirements in detail.
# ioscan -kfNn
Class I H/W Path Driver S/W State H/W Type Description
=================================================================
disk 30 64000/0xfa00/0x4 esdisk CLAIMED DEVICE HP HSV101
Persistent DSF name anatomy:
/dev/disk/disk30[options]
          ^
          +- Instance number of the Agile View LUN hardware path ("disk30");
             [options] = device-dependent options suffix
Student Notes
Legacy DSF names and minor numbers encode a hardware path’s bus or controller instance
number, target number, and LUN number. The legacy scheme has a number of significant
limitations:
• Multi-pathed LUNs require separate legacy DSFs for each LUN path
• Legacy DSF names change when the SAN topology changes, since the DSF names encode
the device’s physical hardware path
• The minor number addressing scheme supports at most 32,768 total LUN addresses, of
which only 8192 LUNs can be active at any given time
Persistent DSFs resolve all of these issues by providing path-independent, WWID-based
DSF representations of up to 16,777,216 disks, LUNs, tape drives, and DVDs.
The agile view ioscan -kfNn output on this slide represents the same LUN described in
the legacy view ioscan -kf output on the previous slide. There are still four paths to the
LUN, but agile view reports a single, path-independent view of the LUN.
The LUN’s persistent DSFs encode the instance number of the agile view LUN hardware
address rather than the bus/target/LUN numbers of the underlying legacy hardware paths
leading to the LUN. The agile view LUN hardware address ultimately maps to a LUN
World Wide Identifier (WWID), which remains consistent no matter which path one
uses to access the LUN.
In the example on the slide, ioscan suggests that the LUN’s agile view hardware address
instance number is 30. Thus, the LUN’s persistent DSF name is simply disk30. Since LUNs
may be accessed in either block or raw mode, each LUN has two persistent DSFs:
Persistent DSF names for tape drive and auto-changers also encode the instance number of
the device’s Agile View LUN hardware address, but require a slightly different device prefix
and reside in the /dev/rtape/ and /dev/rchgr/ directories. Kernel drivers for tape
drives and autochangers typically only support raw device files. Tape drives usually have a
suffix representing the compression, format, and other options enabled by the device file.
/dev/rtape/tape0_BEST
/dev/rchgr/autoch1
SAN LUN
Student Notes
LUNs, disks, CD-ROMs, and DVDs all follow the standard cxtxdx legacy DSF naming
convention and diskx persistent DSF naming convention.
The drivers for these devices support both block and character mode access. Legacy block
and raw DSFs reside in /dev/dsk/ and /dev/rdsk/ respectively. Persistent block and
raw DSFs reside in /dev/disk/ and /dev/rdisk/ respectively.
Using the legacy DSF scheme, every path to a LUN generates a block and raw DSF. Using the
persistent DSF scheme, each LUN requires just one block and raw DSF for the LUN
regardless of the number of underlying paths.
Student Notes
PA-RISC boot disk DSFs follow the standard legacy and persistent disk DSF naming
convention described on the previous slide.
Integrity boot disks, however, are subdivided into Extensible Firmware Interface (EFI) disk
partitions. A partition table at the top of each disk records the locations of the partitions.
Each partition requires additional block and raw device files.
• The EFI system partition contains the OS loader that is responsible for loading the OS
in memory during the boot process, and several supporting files. cxtxdxs1 is the system
partition’s legacy DSF name. diskx_p1 is the system partition’s persistent DSF name.
• The EFI OS partition contains the LVM or VxVM volumes that contain the kernel and
other operating system files and directories. cxtxdxs2 is the OS partition’s legacy DSF
name. diskx_p2 is the OS partition’s persistent DSF name.
• The optional EFI HP Service Partition (HPSP) contains offline diagnostic utilities that
may be used to troubleshoot an unbootable system. cxtxdxs3 is the service partition’s
legacy DSF name. diskx_p3 is the service partition’s persistent DSF name.
To learn more about Integrity boot disks and EFI partitions, see the Integrity boot process
chapter later in this book.
Student Notes
Tape drive DSF names are very similar to LUN DSF names.
/dev/rmt/c0t0d0BEST
Note that the stape kernel driver doesn’t support block mode access to tape drives, so there
isn’t a /dev/mt/ device file directory.
/dev/rtape/tape0_BEST
Note that the estape kernel driver doesn’t support block mode access to tape drives, so
there isn’t a /dev/tape/ device file directory.
w Immediate report disabled. A write request waits until the data are written on the medium.
density Specifies the density or format used when writing to the tape. 11i v3 only
supports the BEST density. 11i v1 and v2 support several other formats. The list
below only describes some of the common 11i v1 and v2 density formats. See the
mt(7) man page for a complete list.
BEST Use the highest density/compression features available
NOMOD Maintain the density/compression features used previously on the tape
DDS1 Use DDS1 format to ensure compatibility with older DDS1 tape drives
DDS2 Use DDS2 format to ensure compatibility with older DDS2 tape drives
C[n] Write data in compressed mode, on tape drives that support data compression.
Compression is automatically enabled when the density field is set to BEST.
n No rewind on close. Unless this mode is requested, the driver automatically
rewinds the tape when closed.
b Specifies Berkeley-style tape mode. When the b is absent, the tape drive
follows AT&T-style behavior.
When a file is closed after servicing a read request, if the no-rewind bit is not
set, the tape drive automatically rewinds the tape. If the no-rewind bit is set,
the behavior depends on the style mode. For AT&T-style devices, the tape is
positioned after the EOF following the data just read (unless already at BOT
or Filemark). For Berkeley-style devices, the tape is not repositioned in any
way.
w Writes wait for physical completion of the operation before returning status.
The default behavior (buffered mode or immediate reporting mode) requires
the tape device to buffer the data and return immediately with successful
status.
See the examples on the slide and the mksf(1m) man page for more information.
9.x Compatibility
Prior to HP-UX 10.01, tape drive DSFs followed an entirely different naming
convention:
/dev/rmt/0m First tape drive on the system
/dev/rmt/1m Second tape drive on the system
/dev/rmt/2m Third tape drive on the system
/dev/rmt/2mn Third tape drive on the system, "no-rewind" feature enabled
/dev/rmt/2mnb Third tape drive, "no-rewind" feature and Berkeley semantics enabled
Each DSF name includes an instance number to distinguish the DSF from all other tape drive
DSFs, the letter "m", and a series of access options as described previously.
11i v1, v2, and v3 automatically create the following tape drive DSFs, but they are simply
links to equivalent legacy cxtxdxBEST DSFs.
/dev/rac/c5t0d2
/dev/rac/c7t0d2
/dev/rchgr/autoch1
/dev/rac/c9t0d2
/dev/rac/c11t0d2
Student Notes
Many administrators today use tape libraries with tape auto-changers to manage system
backups. These devices typically include one or more tape drives, magazines for storing
multiple tapes, and a robotic auto-changer mechanism to move tapes between the magazines
and drives. Backup utilities access the tape drives via standard tape DSFs in /dev/rmt/ and
/dev/rtape/. Robotic auto-changers typically have their own DSFs in /dev/rac/ and/or
/dev/rchgr/.
/dev/rac/c0t0d0
/dev/rchgr/autoch1
16 Port Multiplexer
Student Notes
Though many users access systems exclusively via network services today, some systems
still include hardwired terminals, modems, and printers.
Most servers still include a built-in DB9 serial port on the Core I/O card that can be used to
connect a single hardwired terminal or modem.
Administrators who require multiple serial devices can purchase an add-on multiplexer
(MUX) interface card. The interface card occupies one expansion slot on the server and
typically connects to an external box that provides 8, 16, 32, or 64 RJ45, DB25, or DB9 ports
for connecting external devices. Alternatively, it may be possible to connect serial devices to
the multiplexer card directly via a MUX fan-out cable like the one shown below.
See HP’s Configuring HP-UX for Peripherals manual for additional information.
A fully functional modem requires three device files. /dev/ttydxpx is required for dial-in
modem service. /dev/culxpx is required for dial-out service. /dev/cuaxpx is required
for direct-connect service.
See HP’s Configuring HP-UX for Peripherals manual for additional information.
Pseudo Terminals
Pseudo terminals are used by applications that provide terminal emulation capabilities, such
as hpterm, xterm, telnet, etc. The pseudo terminal driver provides support for a
device-pair termed a pseudo terminal. A pseudo terminal is a pair of character devices, a
master device and a slave device.
The device files for pseudo terminals are found in the following places:
slave                 /dev/ttyxx. These are links to files in the /dev/pty
                      directory (/dev/pty/ttyxx).
master                /dev/ptyxx. These are links to files in the /dev/ptym
                      directory (/dev/ptym/ptyxx).
STREAMS-based slave   /dev/pts/n. This is used by the dtterm terminal
                      emulator.
STREAMS-based master  /dev/ptmx. This is used by the dtterm terminal
                      emulator.
By default, HP-UX creates 60 pseudo-terminals of each type. If your server is likely to service
more than 60 concurrent telnet sessions, or more than 60 concurrent terminal emulator
windows, then you may need to increase the number of pseudo terminals. To increase the
number of pseudo terminals, increase the value of the npty and nstrpty kernel
parameters, and reboot. During the boot process, HP-UX executes the insf command,
which then creates the additional pseudo terminal device files. Kernel configuration will be
covered in a later chapter.
• Use ioscan -kfn to list legacy hardware paths and their legacy DSFs
• Additional options filter the list by class or legacy hardware path
• Output only shows legacy addresses and DSFs
Student Notes
The next few slides discuss several commands for viewing DSFs.
The ioscan command, with the -f (full) and -n (DSF names) options, provides a
convenient mechanism for determining which legacy DSFs are associated with each
hardware path on your system. Below each hardware path, ioscan -kfn lists the
associated legacy DSFs.
The output below shows ioscan -kfn output for the tape drive at hardware path
0/0/1/0/0.0.0. Since tape drives support several access options, the tape drive has
several legacy DSFs.
# ioscan -kfn
Class I H/W Path Driver S/W State H/W Type Description
==================================================================
tape 0 0/0/1/0/0.0.0 stape CLAIMED DEVICE HP C1553A
/dev/rmt/0m /dev/rmt/c0t0d0BESTb
/dev/rmt/0mb /dev/rmt/c0t0d0BESTn
/dev/rmt/0mn /dev/rmt/c0t0d0BESTnb
/dev/rmt/0mnb /dev/rmt/c0t0d0BEST
Add additional ioscan options to view legacy DSFs associated with a specific device class
(-C) or legacy hardware path (-H). Or, specify the device of interest by providing one of the
device’s legacy DSFs as an argument. See the examples below:
Student Notes
Use the ioscan command with the -f (full), -n (DSF names), and -N (New Agile View)
options to view devices’ Agile View hardware paths and persistent DSFs. Devices that
support persistent DSFs report persistent DSFs. Devices that only support legacy DSFs (e.g.:
terminals & modems) report legacy DSFs.
The output below shows ioscan -kfnN output for the tape drive at Agile View LUN
hardware path 64000/0xfa00/0x0. Since tape drives support several access options, the
tape drive has several DSFs.
# ioscan -kfnN
Class I H/W Path Driver S/W State H/W Type Description
==================================================================
tape 0 64000/0xfa00/0x0 estape CLAIMED DEVICE HP C1553A
/dev/rtape/tape0_BEST
/dev/rtape/tape0_BESTn
/dev/rtape/tape0_BESTb
/dev/rtape/tape0_BESTnb
Add additional ioscan options to view the DSFs associated with a specific device class
(-C) or hardware path (-H). Or, specify the device of interest by providing one of the
device’s DSFs as an argument. See the examples below:
Student Notes
The ioscan -kfnN command described on the previous page lists devices and DSFs. To
correlate those device files with lunpaths, though, use the same ioscan -m lun command
introduced in the hardware module. Note that you can specify a particular disk or LUN using
the LUN’s LUN hardware path or the LUN’s persistent DSF.
The command reports each device’s class, instance, hardware path, driver, software state,
hardware state, health status, and description.
Below each LUN hardware path, the command reports all of the lunpath hardware addresses
available to access each LUN.
Below the lunpath hardware addresses, the command reports each device’s persistent DSFs.
# ioscan -m lun
Class I H/W Path Driver SW State H/W Type Health Description
====================================================================
By default, the command displays all disks and LUNs. Add the -H option with a LUN
hardware path to view a specific disk or LUN.
Or, add the -D option with a persistent DSF to view a specific disk or LUN.
View the WWID for all LUNs, or a specific LUN hardware path or DSF
# scsimgr get_attr -a wwid \
[all_lun]|[-H 64000/0xfa00/0x4]|[-D /dev/rdisk/disk30]
SCSI ATTRIBUTES FOR LUN : /dev/rdisk/disk30
name = wwid
current = 0x600508b400012fd20000a00000250000
default =
saved =
Student Notes
When troubleshooting SAN issues, it is often helpful to know a device’s WWID and
underlying lunpath hardware paths. Use the same scsimgr command introduced in the
hardware module. Note that you can view all LUNs, or specify a particular LUN using the
LUN’s LUN hardware path or the LUN’s raw persistent DSF.
name = wwid
current = 0x600508b400012fd20000a00000250000
default =
saved =
name = wwid
current = 0x600508b400012fd20000900001900000
default =
saved =
name = wwid
current = 0x600508b400012fd20000a00000250000
default =
saved =
name = wwid
current = 0x600508b400012fd20000a00000250000
default =
saved =
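scsimgr prints one name/current/default/saved stanza per LUN, so the current value is easy to pull out with standard text tools. The sketch below runs awk over sample text mirroring the stanzas above; on a real HP-UX system you would pipe scsimgr itself into awk instead:

```shell
# Extract the "current" attribute value from a scsimgr-style stanza.
# The sample file stands in for real scsimgr output.
cat <<'EOF' > /tmp/scsimgr.sample
name = wwid
current = 0x600508b400012fd20000a00000250000
default =
saved =
EOF
awk '$1 == "current" { print $3 }' /tmp/scsimgr.sample
# -> 0x600508b400012fd20000a00000250000
```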
Recall that you can also use the scsimgr command to obtain a LUN’s LUN ID. However, in
order to obtain the LUN ID, you must provide a lunpath hardware address. First use the
ioscan -m lun -D /dev/disk/disk30 command to determine the LUN’s lunpaths;
then use one of the lunpath addresses as an argument to scsimgr.
# scsimgr get_attr \
-a lunid \
-H 1/0/2/1/0.0x50001fe15003112c.0x4001000000000000
name = lunid
current = 0x4001000000000000 (LUN # 1, Flat Space Addressing)
default =
saved =
Student Notes
Administrators making the transition from legacy addressing to Agile View addressing can
use ioscan -m dsf to correlate legacy and persistent DSFs.
Execute ioscan -m dsf to view a list of all persistent DSFs and their corresponding legacy
DSFs. The output only includes devices that support persistent DSFs.
# ioscan -m dsf
/dev/rdisk/disk30 /dev/rdsk/c5t0d1
/dev/rdsk/c7t0d1
/dev/rdsk/c9t0d1
/dev/rdsk/c11t0d1
/dev/rdisk/disk31 /dev/rdsk/c5t0d2
/dev/rdsk/c7t0d2
/dev/rdsk/c9t0d2
/dev/rdsk/c11t0d2
/dev/rdisk/disk32 /dev/rdsk/c5t0d3
/dev/rdsk/c7t0d3
/dev/rdsk/c9t0d3
/dev/rdsk/c11t0d3
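Because the mapping output lists each persistent DSF followed by its legacy DSFs on continuation lines, it is easy to post-process. The sketch below counts legacy DSFs per persistent DSF; the sample file mirrors the listing above, and the two-column layout assumption is an illustration, not an HP-UX tool:

```shell
# Count how many legacy DSFs map to each persistent DSF in
# ioscan -m dsf style output.
cat <<'EOF' > /tmp/dsfmap.sample
/dev/rdisk/disk30 /dev/rdsk/c5t0d1
/dev/rdsk/c7t0d1
/dev/rdsk/c9t0d1
/dev/rdsk/c11t0d1
/dev/rdisk/disk31 /dev/rdsk/c5t0d2
/dev/rdsk/c7t0d2
EOF
# Two fields start a new persistent DSF; one field continues the list.
awk 'NF == 2 { if (p) print p, n; p = $1; n = 1; next }
     { n++ }
     END { if (p) print p, n }' /tmp/dsfmap.sample
# -> /dev/rdisk/disk30 4
# -> /dev/rdisk/disk31 2
```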
To map a specific legacy DSF to an associated persistent DSF, specify the legacy DSF as an
argument.
To map a specific persistent DSF to associated legacy DSFs, specify the persistent DSF as an
argument.
Student Notes
Many devices have multiple DSFs, since devices can often be accessed via a variety of
combinations of access options. Some utilities and applications require DSFs that enable
specific options. For instance, make_tape_recovery, which creates a bootable system
recovery tape, requires a “No-Rewind” DSF.
ioscan lists a device’s DSFs, but doesn’t indicate which device-specific options each DSF
enables. Use the lssf command to decode a DSF’s major and minor number and report the
DSF’s driver name, hardware address, and access options. The command accepts both
legacy and persistent DSF arguments.
# lssf /dev/rmt/c0t0d0BESTnb
stape card instance 0 SCSI target 0 SCSI LUN 0
Berkeley No-Rewind BEST density
at address 0/0/1/0/0.0.0 /dev/rmt/0mnb
# lssf /dev/rtape/tape0_BESTnb
estape Berkeley No-Rewind BEST density
at address 64000/0xfa00/0x0 /dev/rtape/tape0_BESTnb
# lssf /dev/rtape/*
estape AT&T BEST density at address 64000/0xfa00/0x9
/dev/rtape/tape1_BEST
estape Berkeley BEST density at address 64000/0xfa00/0x9
/dev/rtape/tape1_BESTb
estape AT&T No-Rewind BEST density at address 64000/0xfa00/0x9
/dev/rtape/tape1_BESTn
estape Berkeley No-Rewind BEST density at address 64000/0xfa00/0x9
/dev/rtape/tape1_BESTnb
Questions
The last few slides described several ways to view DSFs. Which command would be most
appropriate for each of the following situations?
1. A DBA has requested additional disk space. Which command would you use to list the
persistent DSFs of all of the disk class devices on the system?
3. Your new backup software requires a no-rewind tape device file. How can you determine
which /dev/rtape/tape0_* DSFs enable the no-rewind option?
Answers
1. A DBA has requested additional disk space. Which command would you use to list the
persistent DSFs of all of the disk class devices on the system?
# ioscan -fnNC disk
3. Your new backup software requires a no-rewind tape device file. How can you determine
which /dev/rtape/tape0_* DSFs enable the no-rewind option?
# lssf /dev/rtape/tape0_*
• HP-UX automatically creates DSFs for most devices during system startup
• HP-UX 11i v3 automatically creates persistent DSFs for dynamically added LUNs, too
• HP-UX also provides tools for manually creating and managing device files
Student Notes
HP-UX automatically recognizes most supported device types, and automatically creates
standard DSFs. These devices are said to be “auto-configurable”.
During the 11i v1, v2, and v3 system startup process, HP-UX automatically probes the
hardware installed on the system, “binds” an appropriate kernel driver to each auto-
configurable device, assigns instance numbers, and creates legacy and persistent DSFs for
new devices.
HP-UX 11i v3 also detects LUNs added or modified on running systems and dynamically
creates persistent DSFs as necessary.
Although HP-UX creates most DSFs automatically, the administrator can manually create and
manage DSFs using the commands below:
The next few slides describe these commands in detail. You can also use the sam, smh, and
pdweb GUI/TUI interfaces to manage DSFs.
Student Notes
HP-UX 11i v1, v2, and v3 automatically create legacy and persistent DSFs for new devices
during system startup. 11i v3 even detects LUNs added or modified on an already-running
system, and creates persistent DSFs if necessary.
The administrator can also manually create standard DSFs via the insf command after:
• adding a new device to the system; or
• accidentally deleting DSFs.
Before running insf, run ioscan to scan the hardware and bind kernel drivers to new
devices. Do not include the -k or -u options. The -k and -u options report cached device
information, but don’t scan for new devices. In 11i v3, ioscan also automatically creates
DSFs for new devices; thus, it may not even be necessary to proceed to the insf command
in the next step!
# ioscan
Next, execute insf -v to create standard DSFs for new devices. The -v (verbose) option
reports the new DSF names.
# insf -v
insf: Installing special files for stape instance 0 address 0/1/1/1.4.0
insf: Installing special files for estape instance 1 address
64000/0xfa00/0x0
making rtape/tape1_BEST c 23 0x000009
making rtape/tape1_BESTn c 23 0x00000b
making rtape/tape1_BESTb c 23 0x00000c
making rtape/tape1_BESTnb c 23 0x00000d
By default, insf only creates DSFs for new devices. Add the -e (existing) option to recreate
missing DSFs for a previously configured device. In the example below, the first insf
command does not install any DSFs since the device was previously configured. The second
command recreates missing DSFs for the existing device as a result of the -e option.
# insf -v
# insf -v -e
insf: Installing special files for stape instance 0 address 0/1/1/1.4.0
insf: Installing special files for estape instance 1 address
64000/0xfa00/0x9
making rtape/tape1_BEST c 23 0x000009
making rtape/tape1_BESTn c 23 0x00000b
making rtape/tape1_BESTb c 23 0x00000c
making rtape/tape1_BESTnb c 23 0x00000d
(creates DSFs for all other devices, too)
To create DSFs for selected devices rather than all devices, use -H and -C to select devices
by hardware path or device class. If -H specifies an Agile View LUN hardware path, insf
only creates persistent DSFs. If -H specifies a legacy hardware path, insf creates legacy
DSFs. The -C option creates both legacy and persistent DSFs.
# insf -v -e -H 64000/0xfa00/0x0
insf: Installing special files for estape instance 1 address
64000/0xfa00/0x0
making rtape/tape1_BEST c 23 0x000009
making rtape/tape1_BESTn c 23 0x00000b
making rtape/tape1_BESTb c 23 0x00000c
making rtape/tape1_BESTnb c 23 0x00000d
# insf -v -e -C tape
insf: Installing special files for stape instance 0 address 0/1/1/1.4.0
insf: Installing special files for estape instance 1 address
64000/0xfa00/0x0
making rtape/tape1_BEST c 23 0x000009
making rtape/tape1_BESTn c 23 0x00000b
making rtape/tape1_BESTb c 23 0x00000c
making rtape/tape1_BESTnb c 23 0x00000d
• For tape drives and other devices that support device dependent options,
insf only creates device files for the most common combinations of options
• Use mksf to configure device files for other unusual combinations of options;
see the mksf(1m) man page for driver-specific options
See the mksf and mt man pages for many more device-specific options and examples
Student Notes
insf only configures standard device files for auto-configurable devices. For example, insf
automatically creates the following DSFs for DDS tape drives:
/dev/rmt/0m
/dev/rmt/0mb
/dev/rmt/0mn
/dev/rmt/0mnb
/dev/rmt/c0t0d0BEST
/dev/rmt/c0t0d0BESTb
/dev/rmt/c0t0d0BESTn
/dev/rmt/c0t0d0BESTnb
/dev/rtape/tape0_BEST
/dev/rtape/tape0_BESTb
/dev/rtape/tape0_BESTn
/dev/rtape/tape0_BESTnb
To create a device file supporting an unusual combination of special options, use mksf. The
example below creates a DSF for the tape drive at hardware path 64000/0xfa00/0x0 using
the DDS2 density format (-b DDS2), and with no-auto-rewind enabled (-n).
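The mksf command itself is not reproduced in these notes; a sketch, assuming the tape
driver on this system accepts the -b and -n options described in mksf(1m):

# mksf -v -H 64000/0xfa00/0x0 -b DDS2 -n

The -v option reports the name of the newly created DSF, which you can then verify
with lssf.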
mksf options vary from driver to driver. Options that are meaningful to one device driver
may be meaningless to others. The mksf(1m) and mt(7) man pages describe dozens of
options that may be used to configure device files with various combinations of options.
Fortunately, the mksf command is rarely needed since insf automatically creates most
commonly used DSFs.
Student Notes
The vast majority of HP-UX DSFs can be created via the insf and mksf commands. Devices
that aren’t auto-configurable must be configured via mknod. The mknod command requires
several options and arguments:
• The full pathname for the new device file. Device files are typically stored in /dev.
• The major number of the kernel driver used to access the device. Use the lsdev
command to view a list of drivers and their associated major numbers.
The example on the slide shows the full syntax for the mknod command required to create
an LVM volume group device file.
After creating a device file with mknod, you may need to execute the chmod command to set
appropriate permissions. A volume group device file should be owned by root, with 640
permissions. Other device files’ permissions will vary.
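The slide’s example is not reproduced in these notes; a sketch of the full procedure for a
hypothetical volume group vg01, assuming the LVMv1 character major number 64 shown
later in this chapter:

# lsdev                           (confirm the LVM driver’s major number)
# mkdir /dev/vg01
# mknod /dev/vg01/group c 64 0x010000
# chmod 640 /dev/vg01/group

The minor number 0x010000 follows the group DSF numbering convention for vg01.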
List devices and DSFs associated with non-existent “stale” devices (11i v3 only)
# lssf -s
Remove devices and DSFs associated with non-existent “stale” devices (11i v3 only)
# rmsf -v -x
Remove a specific DSF
# rmsf -v /dev/disk/disk1
Remove all of the device files associated with a device, and the device definition
# rmsf -v -a /dev/disk/disk1
Or … specify the device’s hardware path
# rmsf -v -H 64000/0xfa00/0x1
Student Notes
HP-UX automatically creates DSFs for new devices, but after removing a device, the device’s
DSFs must be manually removed via the rmsf command.
First, execute lssf -s to determine if there are any “stale” DSFs that reference non-existent
devices. The -s option is only available on 11i v3.
# lssf -s
To remove the stale DSFs, execute rmsf -v -x. The -x option is only available on 11i v3.
# rmsf -v -x
Removing stale block DSF /dev/disk/disk100
Removing stale character DSF /dev/rdisk/disk100
To remove a specific DSF, specify the DSF by name:
# rmsf -v /dev/disk/disk100
rmsf: Removing special file /dev/disk/disk100
To preemptively remove DSFs before removing a LUN, run rmsf -v -a. The command
removes the specified device’s DSFs, as well as the device definition.
If you specify a legacy DSF, rmsf removes the DSF’s legacy hardware path and the legacy
hardware path’s associated DSFs. Persistent DSFs remain unaffected.
If you specify a persistent DSF, rmsf removes the DSF’s agile view LUN hardware path and
its associated persistent DSFs. Legacy DSFs remain unaffected.
# rmsf -v -a /dev/disk/disk100
rmsf: Removing special file /dev/disk/disk100
rmsf: Removing special file rdisk/disk100
Alternatively, specify the target device via the device’s hardware path.
• If the hardware path belongs to a node with H/W type DEVICE, all special files
mapping to devices at that hardware path, as well as the system definition of those
devices, are removed.
• If the hardware path is the LUN hardware path of a node of type DEVICE, the device
must not be in an open state for the command to complete successfully.
• If the hardware path belongs to a node with H/W type LUN_PATH, all legacy special
files mapping to devices at that hardware path, as well as the system definition of those
devices, are removed.
• If the hardware path belongs to a node with H/W type TGT_PATH, no special
files are removed; only the corresponding node is removed.
• If the hardware path belongs to a node whose H/W type is not DEVICE, special
files are removed as follows:
– If the hardware path is a leaf node, only special files for that node are removed.
– If the hardware path has children, a warning message is issued, and the
system definition of all the children devices and their special files are removed.
Student Notes
By default, HP-UX 11i v3 automatically enables and creates both legacy and persistent mode
device files. You can disable legacy mode if you no longer need legacy mode DSFs. Be sure,
though, to convert all DSF references in volume manager, file system, and application
configuration files to persistent DSFs before disabling legacy mode! The iofind command
can help identify legacy DSF references. See the HP-UX 11i v3 Persistent DSF Migration
Guide on http://docs.hp.com for more information.
To determine if legacy mode DSFs are currently enabled, execute insf -v -L.
# insf -v -L
To disable legacy mode and remove legacy mode DSFs, execute rmsf -v -L. Be sure to
convert all legacy mode DSF references to persistent DSFs first!
# rmsf -v -L
To re-enable legacy mode and recreate legacy DSFs, execute insf -L.
# insf -L
Directions
Carefully follow the instructions below, and record the commands used to answer each
question.
# lvlnboot -v
Boot Definitions for Volume Group /dev/vg00:
Physical Volumes belonging in Root Volume Group:
/dev/disk/diska_p2 -- Boot Disk
Boot: lvol1 on: /dev/disk/diska_p2
Root: lvol3 on: /dev/disk/diska_p2
Swap: lvol2 on: /dev/disk/diska_p2
Dump: lvol2 on: /dev/disk/diska_p2, 0
In the lab solutions for the questions that follow, this disk will be identified as diska.
Record it here:
1. Some commands require block DSFs; some commands require character DSFs.
Is the boot disk DSF above a block or character DSF?
3. How can you view a list of the other DSFs associated with the disk? On Integrity
systems, which DSF represents the EFI boot disk partition containing the operating
system?
4. What is the agile view LUN hardware path associated with this DSF?
5. When troubleshooting SAN problems, it’s oftentimes helpful to know a LUN’s WWID.
What is the boot disk’s WWID?
6. Servers often access arrays via multiple redundant paths to enhance performance and
availability. How many paths are available to the disk?
7. How can you correlate the boot disk’s persistent DSF with its legacy DSFs?
8. Are there any other disks available on the system? View a list of all of the disk class
devices and their persistent DSFs. Record the persistent block DSF for one of the other
disks on the system below:
2. HP-UX auto-configures commonly used DSFs for new LUNs and devices, so it’s becoming
less and less common for administrators to manually create DSFs. If someone
accidentally removes a DSF, though, it may be necessary for the administrator to recreate
it. Use the following command to remove your spare disk’s block persistent DSF:
# rmsf /dev/disk/diskb
3. Run ioscan -kfnNH followed by the LUN hardware path identified previously to view
the disk’s persistent DSFs. Did the previous command remove just the block DSF, or did
it remove the raw DSF, too?
4. Oops... We didn’t really want to remove that DSF. Try running insf -v -H followed by
the LUN hardware path to recreate the missing DSF. Then execute the command a
second time with the -e option. What is the significance of the -e option?
# insf -v -H 64000/0xfa00/0x___
# insf -v -e -H 64000/0xfa00/0x___
5. Run ioscan -kfnNH followed by the LUN hardware path to verify that the DSF is back.
6. HP-UX typically auto-configures DSFs for new devices, but does not remove DSFs when
LUNs and devices are removed from the system. Use the command below to remove
your spare disk LUN hardware path and all of its DSFs.
# rmsf -v -H 64000/0xfa00/0x___
7. Run ioscan -kfnNH followed by the LUN hardware path identified previously. Do
the disk and/or its persistent DSFs still appear in the output?
8. Oops... We didn’t really want to remove that device. Run ioscan -fnN to scan for new
hardware. Why was it important to exclude the -k option? Is the missing LUN hardware
path back? What about the DSFs?
9. In 11i v3, the kernel and ioscan automatically create DSFs for new devices. In 11i v1
and v2, the administrator must either reboot or manually run insf to create DSFs for
new devices and LUNs. Though not necessary in 11i v3, go ahead and run insf to install
DSFs for any devices that might be missing DSFs.
# insf -v -e
1. First, determine your server’s serial port hardware path and view a list of existing DSFs.
Note the hardware path and driver name.
2. View the mksf(1m) man page. Search for the list of mksf options supported by the
asynchronous I/O (asio0) driver. Which option may be used to create a line printer
DSF?
# man 1m mksf
(Type /asio to search for the asio driver portion of the man page.)
3. Create a DSF for a line printer attached to the serial port identified in step 1 above.
4. Run the ioscan command again to verify your work. There should be a cxpx_lp DSF.
Directions
Carefully follow the instructions below, and record the commands used to answer each
question.
# lvlnboot -v
Boot Definitions for Volume Group /dev/vg00:
Physical Volumes belonging in Root Volume Group:
/dev/disk/diska_p2 -- Boot Disk
Boot: lvol1 on: /dev/disk/diska_p2
Root: lvol3 on: /dev/disk/diska_p2
Swap: lvol2 on: /dev/disk/diska_p2
Dump: lvol2 on: /dev/disk/diska_p2, 0
In the lab solutions for the questions that follow, this disk will be identified as diska.
Record it here:
1. Some commands require block DSFs; some commands require character DSFs.
Is the boot disk DSF above a block or character DSF?
Answer:
# ll /dev/disk/diska
The b at the beginning of the ll output, as well as the fact that the DSF is in the
/dev/disk/ directory rather than /dev/rdisk/, indicate that this is a block DSF.
2. Which kernel driver is used to access the disk?
Answer:
# lssf /dev/disk/diska
esdisk section 2 at address 64000/0xfa00/0x1 /dev/disk/diska_p2
The driver name will be the first field in the output, and can also be seen as a result of the
lsdev(1m) command.
3. How can you view a list of the other DSFs associated with the disk? On Integrity
systems, which DSF represents the EFI boot disk partition containing the operating
system?
Answer:
PA-RISC boot disks have just two DSFs: a block DSF and a raw DSF.
Integrity boot disks should have eight DSFs: a block and raw DSF for the entire disk, plus
block and raw DSFs for each of the EFI partitions. The /dev/[r]disk/diska_p2
DSFs represent the OS partition in 11i v3. In 11i v1 and v2, the OS partition DSF ends in
s2.
4. What is the agile view LUN hardware path associated with this DSF?
Answer:
5. When troubleshooting SAN problems, it’s oftentimes helpful to know a LUN’s WWID.
What is the boot disk’s WWID?
Answer:
6. Servers often access arrays via multiple redundant paths to enhance performance and
availability. How many paths are available to the disk?
Answer:
7. How can you correlate the boot disk’s persistent DSF with its legacy DSFs?
Answer:
8. Are there any other disks available on the system? View a list of all of the disk class
devices and their persistent DSFs. Record the persistent block DSF for one of the other
disks on the system below:
There should be at least one disk on the system besides the boot disk.
2. HP-UX auto-configures commonly used DSFs for new LUNs and devices, so it’s becoming
less and less common for administrators to manually create DSFs. If someone
accidentally removes a DSF, though, it may be necessary for the administrator to recreate
it. Use the following command to remove your spare disk’s block persistent DSF:
# rmsf /dev/disk/diskb
3. Run ioscan -kfnNH followed by the LUN hardware path identified previously to view
the disk’s persistent DSFs. Did the previous command remove just the block DSF, or did
it remove the raw DSF, too?
Answer:
4. Oops... We didn’t really want to remove that DSF. Try running insf -v -H followed by
the LUN hardware path to recreate the missing DSF. Then execute the command a
second time with the -e option. What is the significance of the -e option?
# insf -v -H 64000/0xfa00/0x___
# insf -v -e -H 64000/0xfa00/0x___
Answer:
The -e option recreates missing DSFs for existing devices that were previously
configured. The -e option isn’t required when creating DSFs for new devices that haven’t
been previously configured.
5. Run ioscan -kfnNH followed by the LUN hardware path to verify that the DSF is back.
Answer:
Once again, the LUN hardware path should have two DSFs.
6. HP-UX typically auto-configures DSFs for new devices, but does not remove DSFs when
LUNs and devices are removed from the system. Use the command below to remove
your spare disk LUN hardware path and all of its DSFs.
# rmsf -v -H 64000/0xfa00/0x___
7. Run ioscan -kfnNH followed by the LUN hardware path identified previously. Do
the disk and/or its persistent DSFs still appear in the output?
Answer:
Neither the disk’s agile view LUN hardware path nor its persistent DSFs appear in the
ioscan output.
8. Oops... We didn’t really want to remove that device. Run ioscan -fnN to scan for new
hardware. Why was it important to exclude the -k option? Is the missing LUN hardware
path back? What about the DSFs?
Answer:
The -k option uses cached hardware information. Excluding -k forces ioscan to scan
for new devices, thus re-recognizing the spare disk’s LUN hardware path. In 11i v3,
ioscan also automatically creates DSFs for new devices, so the DSFs should be back,
too.
9. In 11i v3, the kernel and ioscan automatically create DSFs for new devices. In 11i v1
and v2, the administrator must either reboot or manually run insf to create DSFs for
new devices and LUNs. Though not necessary in 11i v3, go ahead and run insf to install
DSFs for any devices that might be missing DSFs.
# insf -v -e
1. First, determine your server’s serial port hardware path and view a list of existing DSFs.
Note the hardware path and driver name.
2. View the mksf(1m) man page. Search for the list of mksf options supported by the
asynchronous I/O (asio0) driver. Which option may be used to create a line printer
DSF?
# man 1m mksf
(Type /asio to search for the asio driver portion of the man page.)
Answer:
3. Create a DSF for a line printer attached to the serial port identified in step 1 above.
Answer:
# mksf -v -H hwpath -l
4. Run the ioscan command again to verify your work. There should be a cxpx_lp DSF.
• Define the terms Volume Group, Logical Volume, and Physical Volume.
• Compare and contrast the advantages and disadvantages of the whole disk layout
approach, LVM, and the Veritas Volume Manager.
• Compare and contrast the advantages and disadvantages of LVM v1.0 and LVM v2.x.
– A file system (such as the /home or /data file system)
– Swap space
– Raw application data
• Partitions can be configured using the whole disk layout, LVM, or the Veritas Volume
Manager (VxVM)
Student Notes
Disk space is organized into partitions. A partition is nothing more than a portion of disk
space allocated for a particular purpose. A partition can span one disk, multiple disks, or a
portion of a disk. Each partition can contain one of the following: a file system, swap space,
or raw application data.
HP-UX offers three different approaches for creating and managing disk partitions: the
whole disk layout, the Logical Volume Manager (LVM), and the Veritas Volume Manager
(VxVM).
Some of the disks on your system can be configured using the whole disk layout approach,
while others can be configured using LVM or VxVM. All three techniques can be used
concurrently on the same system, but not on the same disk.
All three approaches have advantages and disadvantages. This chapter will discuss all three
disk-partitioning techniques, but will emphasize LVM, which is currently the most common
disk space management solution on HP-UX.
(Slide diagram: example whole disk layouts showing a boot area, file systems, and swap areas)
Student Notes
Using the whole disk approach, a disk may be configured five different ways:
• The disk can be dedicated entirely for use by a single file system via the newfs
command. See the file system module later in the course for details.
# newfs /dev/rdisk/disk1
• The disk can be dedicated entirely for use as swap via the swapon command. See the
swap module later in the course for details.
# swapon /dev/disk/disk1
• The disk can be dedicated entirely for use as raw disk space for an application. See your
application’s documentation for more information.
• The disk can contain a file system and swap space. The example below configures a file
system at the top of the disk, reserving 1024MB at the end of the disk for use as swap
space. See the swap module later in the course for details.
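The slide’s command is not reproduced here; as a sketch, assuming an HFS file system,
the newfs -R option reserves the requested swap space at the end of the disk, and
swapon -e enables swap in the space beyond the file system:

# newfs -F hfs -R 1024 /dev/rdisk/disk1
# swapon -e /dev/disk/disk1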
• A disk can be configured as a boot disk, containing the root file system, a swap area, and
a boot area containing utilities used during the boot process. Use Ignite-UX to configure
whole-disk boot devices. To learn more about boot disks, see the OS installation chapter
later in the course.
Though the whole disk approach is easy to use, it has several limitations:
• A file system cannot span multiple disks.
• A disk cannot be subdivided among multiple file systems.
• Disk space cannot easily be extended or reduced as needs change.
• HP’s LVM volume manager is much more flexible than the whole disk approach
– Partitions/volumes can span multiple disks
– Multiple partitions/volumes may be configured on a single disk
– Partitions/volumes can be easily extended and reduced as needs change
• LVM is included in all current versions of HP-UX
– BaseLVM is included with the operating system
– LVM Mirrordisk/UX is available for an extra charge
Logical Volumes
Student Notes
Logical Volume Manager (LVM) makes it possible to pool space from several disks (known as
"Physical Volumes") to form a "Volume Group". You can then subdivide the space in the
volume group into "Logical Volumes" (the LVM equivalent of a partition). Logical Volume
Manager (LVM) overcomes the limitations of the whole disk layout scheme by making it
possible to:
• Create volumes that span multiple disks.
The BaseLVM product, which provides functionality for creating, extending, and reducing
volumes, is included with HP-UX. The BaseLVM product also enables LVM “striping”, which
load balances I/O across multiple disks or disk controllers.
Customers who manage mission critical servers may wish to purchase the add-on
Mirrordisk/UX license, which enables LVM to maintain redundant copies of a volume.
Customers who use disk arrays that provide hardware-based mirroring generally do not
require the Mirrordisk/UX license. To learn more about LVM mirroring, attend HP Customer
Education’s LVM course (H6285S).
The remaining slides in this chapter describe LVM concepts and basic commands in detail.
• Any disk that has been initialized for use in LVM via the pvcreate
command is considered to be an LVM Physical Volume
• Any disk, from a simple internal SCSI or SAS disk, to a disk array LUN
may be configured as a Physical Volume
Student Notes
A disk managed by LVM is known as a physical volume. Any disk, from a simple internal
SCSI or SAS disk, to a LUN on a disk array may be configured as a Physical Volume.
Several special data structures must be created on a disk before it can be used by LVM. The
size of these structures is determined by the parameters that are chosen at the time the
physical volume and associated volume group are created, and may range from one megabyte
to a few hundred megabytes in very large volume groups.
Once these data structures have been created, the disk is considered to be a physical volume,
and may be added to a volume group.
Although you may have a combination of LVM disks, VxVM disks, and whole disks on your
system, any given disk may only be managed by one volume manager.
• An LVM volume group is a group of disks that have been initialized for use
by LVM and are managed together as a unit
• A system may have one or many volume groups
Student Notes
A volume group is a group of one or more physical volumes. The physical volumes in a
volume group form a pool of disk space which may be allocated to one or more logical
volumes.
Naming Convention
Volume groups usually conform to the following naming convention:
• /dev/vg00
• /dev/vg01
• /dev/vg02
You can stray from the numeric naming convention, but HP recommends that you prefix each
volume group name with “vg”:
• /dev/vgoracle
• /dev/vgeurope
• /dev/vgmarketing
vg00 is a special volume group known as the "root volume group" which typically contains
the default boot disk and the majority of the HP-UX operating system. HP strongly
recommends that vg00 be used only for primary swap and OS-related file systems such as /,
/stand, and perhaps /tmp, /var, and /opt. Create other volume groups on your system
for user and application data based on your users’ needs. Following this recommendation
greatly simplifies updates and recovery.
• An LVM Logical Volume is a virtual partition of disk space within a volume group
• A logical volume may occupy a portion of a single disk, or may span multiple disks
• Logical volumes can be easily extended and reduced as necessary
• A logical volume may be used to store a file system, swap, or raw application data
Student Notes
Disk space from a volume group may be allocated to one or more logical volumes. A logical
volume is a virtual partition of disk space within a volume group. Like physical disks, a
logical volume may contain a file system, swap area, or raw space for an application.
• Logical volumes can encompass all of, or any portion of, the space on a physical volume.
• Logical volumes can be resized, or even moved to a different physical volume in the
volume group if the need arises.
• A logical volume may contain a file system, swap area, or raw space for an application.
By default, logical volumes are assigned sequential names:
• /dev/vg01/lvol1
• /dev/vg01/lvol2
• /dev/vg01/lvol3
However, it is best to use logical volume names that describe the volume contents:
• /dev/vg01/datavol
• /dev/vg01/swapvol
• /dev/vg01/oraclevol
Student Notes
LVM manages disk space in units known as extents. An extent represents the smallest
allocatable unit of space in an LVM volume group.
When you add a physical volume to a volume group, LVM subdivides the physical volume into
multiple, equal-size physical extents (PEs). The physical extents are added to the volume
group’s extent map.
LVM stores a volume group’s logical to physical extent map in the headers at the top of each
disk in the volume group.
The volume group shown on the slide, vg01, has two logical volumes. Each logical volume
has three logical extents. Each logical extent is a pointer to a physical extent on the disk.
Note that the physical extents associated with a logical volume may or may not reside on the
same physical volume. In the example on the slide, the first physical extent for lvol2 is on
the first physical volume, while the remaining physical extents reside on the second physical
volume. This ability to overcome physical disk boundary limitations is one of the primary
advantages that LVM offers over the whole disk layout approach.
Questions
1. What is the smallest possible LV …
a) If your extent size is 1MB?
b) If your extent size is 256MB?
2. How many extents would be required to manage a 4GB disk …
a) Given a 1MB extent size?
b) Given a 4MB extent size?
c) Given a 256MB extent size?
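Since the answers follow directly from the extent sizes (taking 4GB as 4096MB), the
arithmetic works out as follows:
1. The smallest possible LV is one extent: 1MB with a 1MB extent size, or 256MB with a
256MB extent size.
2. Extents required for a 4GB disk: 4096MB / 1MB = 4096 extents; 4096MB / 4MB = 1024
extents; 4096MB / 256MB = 16 extents.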
Student Notes
The PE and LE sizes are consistent throughout a volume group, and may be set when the
volume group is initially created. Supported extent sizes are: 1MB, 2MB, 4MB, 8MB, 16MB,
32MB, 64MB, 128MB, and 256MB. In LVMv1, the default extent size is 4MB. In LVMv2, there
is no default extent size; the extent size must be specified by the administrator.
The extent size determines the minimum unit of space that can be allocated to a logical
volume. If you use the default 4MB extent size, every logical volume in your volume group
must be a multiple of 4MB (4MB, 8MB, 12MB, 16MB, 20MB, 24MB, etc.). If you use a 256MB
extent size, every logical volume must be a multiple of 256MB (256MB, 512MB, 768MB, etc.).
Thus, a smaller extent size allows you to specify logical volume sizes to a finer level of
granularity.
The extent size also helps determine the maximum physical volume size allowed in a volume
group.
In 11i v1 and v2, the administrator defines max PE/PV at volume group creation. The
parameter must be between one and 65535. The default is 1016, or the number of extents
required to represent the largest disk initially added to the volume group. Thus, accepting
the default extent size (4MB) and max PE/PV value (1016) allows a volume group to support
physical volumes no larger than 4MB x 1016 = 4064MB. Increasing the extent size to 256MB
allows a volume group to support physical volumes up to 256MB x 1016 = 260096MB.
In 11i v3, LVMv2.x provides much more flexibility. You still must define an extent size, but
instead of defining a PE/PV value, you simply specify the maximum expected volume group
size. LVM automatically calculates the number of physical extents required to accommodate
the specified maximum volume group size. Using the 11i v3 vgmodify command, you can
easily change the maximum volume group size later. The next slide discusses LVM versions
in much greater detail.
In general, if you expect to have very large physical and logical volumes in a volume group, it
makes sense to select a larger extent size when you create the volume group.
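The difference can be sketched with hypothetical vgcreate commands; the option letters
below reflect the 11i v3 vgcreate(1m) syntax, and the volume group names, disks, and
sizes are examples only:

# vgcreate -s 4 -e 1016 /dev/vg01 /dev/disk/disk4
(LVMv1.0: specify the extent size and max PE/PV)
# vgcreate -V 2.1 -s 4 -S 2t /dev/vg02 /dev/disk/disk5
(LVMv2.x: specify the extent size and the maximum volume group size)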
Note that the extent size may impact the size of the LVM headers, but it does not directly
impact system performance.
Student Notes
HP currently supports several LVM volume group versions. The LVM volume group version
determines the maximum size disks and volumes that may be configured in a volume group,
and impacts the availability of other LVM features, too. As shown on the slide, the newer
volume group versions provide much greater expandability than LVMv1.0.
HP-UX 11i v1 and v2 only support LVMv1.0 volume groups. LVMv2.x volume groups may not
be used on 11i v1 and 11i v2 systems.
The latest 11i v3 release can use any LVM volume group regardless of the VG version. To
ensure backwards compatibility, LVMv1.0 remains the default volume group version for new
volume groups in 11i v3, but the administrator can request any volume group version at
volume group creation. Multiple layouts may be used concurrently on a system, but not in a
single volume group. Boot disks must be configured via LVM v1.0 or LVM v2.2, not LVM v2.0
or LVM v2.1.
To determine which version(s) of LVM are supported on your system, execute lvmadm -t.
If the command doesn’t exist, your system only supports LVMv1.0. Otherwise, the command
displays the currently supported LVM versions, and their associated limits.
11i v3 now includes a vgversion command for upgrading LVMv1.0 volume groups to
LVMv2.x. To learn more about the vgversion command, attend HP Customer Education’s
LVM class (H6285S).
HP-UX imposes several restrictions on LVM volume groups and physical volumes, and these
limits vary significantly from one volume group version to another, as reported by the
lvmadm command.
# lvmadm -t
--- LVM Limits ---
VG Version 1.0
Max VG Size (Tbytes) 510
Max LV Size (Tbytes) 16
Max PV Size (Tbytes) 2
Max VGs 256
Max LVs 255
Max PVs 255
Max Mirrors 2
Max Stripes 255
Max Stripe Size (Kbytes) 32768
Max LXs per LV 65535
Max PXs per PV 65535
Max Extent Size (Mbytes) 256
VG Version 2.0
Max VG Size (Tbytes) 2048
Max LV Size (Tbytes) 256
Max PV Size (Tbytes) 16
Max VGs 512
Max LVs 511
Max PVs 511
Max Mirrors 5
Max Stripes 511
Max Stripe Size (Kbytes) 262144
Max LXs per LV 33554432
Max PXs per PV 16777216
Max Extent Size (Mbytes) 256
VG Version 2.1
Max VG Size (Tbytes) 2048
Max LV Size (Tbytes) 256
Max PV Size (Tbytes) 16
Max VGs 2048
Max LVs 2047
Max PVs 2048
Max Mirrors 5
Max Stripes 511
Max Stripe Size (Kbytes) 262144
Max LXs per LV 33554432
Max PXs per PV 16777216
Max Extent Size (Mbytes) 256
VG Version 2.2
Max VG Size (Tbytes) 2048
Max LV Size (Tbytes) 256
Max PV Size (Tbytes) 16
Max VGs 2048
Max LVs 2047
Max PVs 2048
Max Mirrors 5
Max Stripes 511
Max Stripe Size (Kbytes) 262144
Max LXs per LV 33554432
Max PXs per PV 16777216
Max Extent Size (Mbytes) 256
Min Unshare unit(Kbytes) 512
Max Unshare unit(Kbytes) 4096
Max Snapshots per LV 255
Student Notes
Physical volumes, volume groups, and logical volumes are all referenced via DSFs just as disk
devices are referenced via DSFs.
Typical DSFs for the LVMv1 volume group /dev/vg01:
/dev/vg01/group c 64 0x010000
/dev/vg01/lvol1 b 64 0x010001
/dev/vg01/lvol2 b 64 0x010002
/dev/vg01/rlvol1 c 64 0x010001
/dev/vg01/rlvol2 c 64 0x010002
Student Notes
Like all other DSFs, every logical volume and volume group DSF must have a major and a
minor number. The major and minor numbers are slightly different in LVMv1 versus LVMv2.
This slide focuses on LVMv1. The next slide focuses on LVMv2 major and minor numbers.
The volume group subdirectory must contain a group DSF that represents the volume group.
The group DSF must have a major and a minor number. All LVMv1 DSFs use major number
64, the major number associated with the LVMv1 kernel driver.
The first two hex digits of the group DSF’s minor number uniquely identify the DSF’s volume
group. The remaining four digits of the group DSF’s minor number must be 0000.
When using numeric volume group names, match the first two digits of the minor number to
the volume group number. Thus, the minor number for /dev/vg01/group would typically
be 0x010000.
When using non-numeric volume group names, simply ensure that the first two digits of the
group DSF minor number are unique.
The major number for both the block and raw logical volume DSFs should be 64, the major
number associated with the LVMv1 kernel driver.
The first two digits of each DSF’s minor number identify which volume group the DSF is
associated with. The last two digits identify the logical volume associated with the DSF.
Thus, the minor number for /dev/vg01/lvol2 would typically be 0x010002.
When using non-numeric logical volume names, simply ensure that the last two digits are
unique.
The example on the slide lists some typical major and minor numbers for an LVMv1 volume
group.
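The digit layout described above can be sketched with a little illustrative arithmetic (the helper functions below are hypothetical, not HP-UX commands):

```python
def lvm1_minor(vg_num, lv_num):
    """Pack an LVMv1 minor number: the first two hex digits carry the
    volume group number, the last two the logical volume number, and
    the middle digits are zero."""
    return (vg_num << 16) | lv_num

def lvm1_unpack(minor):
    """Recover the (volume group, logical volume) numbers from a minor."""
    return (minor >> 16) & 0xFF, minor & 0xFF

print(f"0x{lvm1_minor(1, 0):06x}")   # /dev/vg01/group -> 0x010000
print(f"0x{lvm1_minor(1, 2):06x}")   # /dev/vg01/lvol2 -> 0x010002
```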
Questions
If vg02 has three logical volumes created using the default naming convention:
1. What directory would contain the logical volumes' DSFs?
2. What would be the name of the first logical volume's raw DSF?
3. What would be the minor number of the third logical volume's DSF?
DSF                 Type  Major  Minor
/dev/vg01/group      c    128    0x001000
/dev/vg01/lvol1      b    128    0x001001
/dev/vg01/lvol2      b    128    0x001002
/dev/vg01/rlvol1     c    128    0x001001
/dev/vg01/rlvol2     c    128    0x001002
Student Notes
LVMv2.x volume group and logical volume DSFs are structured slightly differently. The
LVMv2.x major number is 128 rather than 64.
Also, to accommodate more volume groups per host, and more logical volumes per volume
group, LVMv2.x DSFs use all six digits in the volume group and logical volume minor
numbers.
The first three digits represent the volume group to which a DSF belongs and may range in
value from 0x000 to 0x7ff (0 to 2047 in decimal).
The last three digits uniquely identify the logical volumes within a volume group and may
also range in value from 0x001 to 0x7ff (1 to 2047 in decimal). The volume group’s group
device file claims the 0x000 minor number.
When using non-numeric volume group or logical volume names, simply ensure that the
minor number is unique.
The example on the slide lists some typical major and minor numbers for an LVMv2.x volume
group.
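As above, the LVMv2.x layout can be sketched with illustrative arithmetic (a hypothetical helper, not an HP-UX command); the volume group number now occupies the first three hex digits and the logical volume number the last three:

```python
def lvm2_minor(vg_num, lv_num):
    """Pack an LVMv2.x minor number: the first three hex digits (0x000-0x7ff)
    carry the volume group number, the last three the logical volume number;
    logical volume number 0 is reserved for the group file."""
    return (vg_num << 12) | lv_num

print(f"0x{lvm2_minor(1, 1):06x}")          # /dev/vg01/lvol1 -> 0x001001
print(f"0x{lvm2_minor(0x7FF, 0x7FF):06x}")  # largest possible -> 0x7ff7ff
```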
(Slide diagram: two disks, disk1 and disk2, each holding a copy of the PVRA/VGRA header structures)
Student Notes
Before you can use space on a disk for logical volumes, you must configure the disk as an
LVM physical volume. Once the disk has been configured as a physical volume, you can add
the disk to a volume group and begin allocating space from the disk to logical volumes.
LVM uses two binary configuration files to record which disks belong to LVM volume groups.
/etc/lvmtab contains a list of existing LVMv1.0 volume groups and physical volumes.
/etc/lvmtab_p contains a list of LVMv2.x volume groups and disks, if any.
On 11i v1 and v2 systems you can use the strings command to view the ASCII contents of
/etc/lvmtab and determine which disks belong to volume groups.
# strings /etc/lvmtab
/dev/vg00
/dev/disk/disk0_p2
/dev/vg01
/dev/disk/disk1
/dev/disk/disk2
On 11i v3, the new lvmadm -l command displays the contents of /etc/lvmtab and
/etc/lvmtab_p in a more user-friendly format.
# lvmadm -l
--- Version 1.0 volume groups ---
VG Name /dev/vg00
PV Name /dev/disk/disk0_p2
Compare the ioscan output to the output from strings and lvmadm -l to determine
which disks are available for use as LVM physical volumes. Disks that appear in the ioscan
output, but not in the strings or lvmadm output do not currently belong to LVM volume
groups.
LVMv1.0 supports physical volumes up to 2TB. 11i v1 requires patch PHKL_30622 to support
physical volumes greater than 256GB. 11i v2 requires patch PHKL_31500. 11i v3 includes
large physical volume support by default.
Use the diskinfo command to determine the size of a prospective physical volume.
# diskinfo /dev/rdisk/disk1
SCSI describe of /dev/rdisk/disk1:
vendor: HP
product id: HSV101
type: direct access
size: 35651584 Kbytes
bytes per sector: 512
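The size field is reported in kilobytes; a quick conversion (plain arithmetic, not an HP-UX command) shows the example disk above is a 34 GB LUN:

```python
size_kb = 35_651_584             # "size:" field from the diskinfo output above
size_gb = size_kb / 1024 / 1024  # KB -> MB -> GB
sectors = size_kb * 1024 // 512  # sector count, given 512-byte sectors

print(size_gb)                   # 34.0
```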
Next, execute the pvcreate command to create LVM header structures on the disk. If the
disk was previously part of another volume group, you may need to use the -f option on
pvcreate. The example on the slide uses 11i v3 persistent disk DSFs. In 11i v1 and v2, use
legacy DSFs instead.
# pvcreate -f /dev/rdisk/disk1
# pvcreate -f /dev/rdisk/disk2
• The Volume Group Reserved Area (VGRA) contains LVM information specific to the
entire Volume Group. LVM maintains a copy of the VGRA on each Physical Volume in the
Volume Group. The VGRA includes a Volume Group Status Area (VGSA) which contains
quorum information for the Volume Group, and the Volume Group Descriptor Area
(VGDA) which contains additional configuration information required by the LVM kernel
driver. The VGRA is created by vgcreate(1M).
• The User Data Area contains the physical extents that are allocated to file systems,
virtual memory (swap), or user applications. When a volume group is created, the user
data area is divided into fixed-size physical extents, which map to logical extents. The
map of Logical Extents is contained in the VGRA.
• In earlier versions of HP-UX, LVM compensated for disk irregularities by “relocating” data
that would otherwise have been written to unusable disk blocks to the Bad Block
Relocation Area (BBRA) at the end of the disk. Today, bad block relocation
functionality is provided by disk firmware, so the BBRA is irrelevant. To avoid creating a
BBRA, include the -d 0 option on pvcreate. A BBRA is no longer created in LVMv2.x
volume groups.
• LVM boot disks contain a Boot Disk Reserved Area (BDRA) and other additional data
structures required by the boot process.
LVM Overhead
The LVM header structures consume some disk space at the top of every physical volume.
This overhead is set at a fixed boundary for bootable LVM disks (2912 KB). Disk space
overhead required on non-bootable disks depends on parameters specified by the
administrator when the volume group is created. Increasing the “Max PV/VG” or “Max
PE/PV” parameters in LVMv1.0, or increasing the maximum volume group size parameter in
LVMv2.x, increases the size of the LVM headers. See the vgcreate(1M) man page for
additional information.
(Slide diagram: volume group vg01 built from disk1 and disk2)
# mkdir /dev/vg01
# chown root:sys /dev/vg01; chmod 755 /dev/vg01
# mknod /dev/vg01/group c 64 0x010000
# chown root:sys /dev/vg01/group; chmod 640 /dev/vg01/group
# vgcreate vg01 /dev/disk/disk1 /dev/disk/disk2
Student Notes
After initializing physical volumes, initialize a volume group. This slide focuses on the
commands required to create an LVMv1.0 volume group. The next slide describes the
process in LVMv2.x.
# mkdir /dev/vg01
# chown root:sys /dev/vg01
# chmod 755 /dev/vg01
Note that in 11i v3 0803 and beyond, these steps are optional; vgcreate can create the
volume group DSF automatically.
In 11i v1 and v2, the maxvgs kernel parameter defines the maximum number of volume
groups allowed on the system, as well as the maximum value supported in the first two digits
of the volume group DSF minor number. The default value in 11i v1 and v2 is 10, which
allows volume group DSF minor numbers from 0x000000 to 0x090000. Use kmtune (11i
v1) or kctune (11i v2) to view the current value of the kernel parameter.
# kctune maxvgs
Tunable Value Expression
maxvgs 10 Default
In 11i v3, maxvgs no longer exists; administrators can create up to 256 LVMv1 volume groups
by default.
Several important options may be specified at volume group creation. These options,
described below, may be defined on a per-volume group basis. Note that the options below
are LVMv1.0-specific. The next slides discuss LVMv2.x.
-l max_lv (maximum logical volumes per volume group)
    Default: 255     Minimum: 1      Maximum: 255
-p max_pv (maximum physical volumes per volume group)
    Default: 16      Minimum: 1      Maximum: 255
-s pe_size (physical extent size)
    Default: 4MB     Minimum: 1MB    Maximum: 256MB
-e max_pe (maximum physical extents per physical volume)
    Default: 1016    Minimum: 1      Maximum: 65535
The vgdisplay command reports the current values for each of these attributes.
# vgdisplay vg01
…
Max LV 255
Max PV 16
Max PE per PV 1016
PE Size (Mbytes) 4
…
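These defaults impose a hard capacity ceiling on an LVMv1.0 volume group; a quick back-of-the-envelope check (illustrative arithmetic only, not an LVM command):

```python
max_pv, max_pe_per_pv, pe_size_mb = 16, 1016, 4  # vgcreate defaults shown above

pv_limit_mb = max_pe_per_pv * pe_size_mb  # usable space per physical volume
vg_limit_mb = max_pv * pv_limit_mb        # ceiling for the whole volume group

print(pv_limit_mb)                        # 4064 MB (just under 4 GB) per PV
print(vg_limit_mb / 1024)                 # 63.5 GB for the entire volume group
```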
WARNING! In 11i v1, these volume group attributes are set at creation and can’t be
changed later without removing and re-creating the volume group.
Plan your LVM layout carefully before you create your volume groups!
In 11i v2 and v3, all of the LVMv1.0 volume group attributes except the
extent size can be changed via the vgmodify command. Attend HP
Education’s LVM course (H6285S) to learn more about vgmodify.
The 11i v1 and v2 kernels aren’t multi-path aware. In order to provide path failover via the
redundant paths to a physical volume, all of the legacy path DSFs must be provided as
arguments to vgcreate. Thus, in order to create a two-disk volume group, in which each
disk is accessible via four paths, vgcreate requires eight physical volume arguments similar
to the following:
# vgcreate /dev/vg01 \
    /dev/dsk/c0t0d1 /dev/dsk/c1t0d1 /dev/dsk/c2t0d1 /dev/dsk/c3t0d1 \
    /dev/dsk/c0t0d2 /dev/dsk/c1t0d2 /dev/dsk/c2t0d2 /dev/dsk/c3t0d2
LVM reports the additional links as PV Links. To learn more about PV link configuration and
management, see the PV Link appendix at the end of this workbook.
Some array vendors offer multi-pathing software for 11i v1 and v2 that virtualizes redundant
LUN paths automatically. Consult your array vendor for more information.
(Slide diagram: volume group vg01 built from disk1 and disk2)
Student Notes
The options required to create an LVMv2.x volume group differ significantly from the
options required to create an LVMv1.0 volume group. See the example and notes below
for details. Note that vgcreate automatically creates device files for LVMv2.x volume
groups.
# vgcreate -V 2.2 -E -S 1p ...     (results in Max_VG_size=1p, extent_size=32m)
# vgcreate -V 2.2 -E -s 1 ...      (results in Max_VG_size=32t, extent_size=1m)
/dev/disk/disk1... Specify the list of disks you initially wish to assign to the
volume group. At least one disk is required. LVM records the
LVMv2 volume group / physical volume assignments in
/etc/lvmtab_p and in the volume group’s VGRA headers.
Administrators typically choose to use 11i v3 persistent disk
DSFs when assigning disks to the volume group.
The vgdisplay command reports the current values for each of these attributes, plus
several others that aren’t included below. A slide later in the chapter discusses the
vgdisplay command more formally.
# vgdisplay vg01
--- Volume groups ---
…
VG Name /dev/vg01
…
PE Size (Mbytes) 4
VG Version 2.2
VG Max Size 1t
…
The LVMv1.0 vgcreate -p, -l, and -e options don’t apply to LVMv2.x volume groups.
(Slide diagram: volume group vg01 containing logical volume datavol, built from disk1 and disk2)
Student Notes
After creating a volume group, create logical volumes in the volume group via the lvcreate
command. The examples on the slide create two 16MB logical volumes in vg01 called
swapvol and datavol. When lvcreate creates a logical volume, it records the logical
volume’s configuration information in the kernel’s LVM structures and in the LVM headers on
the volume group’s disks. It also creates block and character DSFs for the logical volume in
the volume group’s DSF /dev/vgnn directory.
The volume group name is the only required argument. The example below creates two
empty logical volumes in vg01 using default logical volume names lvol1 and lvol2.
# lvcreate vg01
# lvcreate vg01
The list below describes some of the most common lvcreate command options:
lvcreate supports many other options, too. See the lvcreate(1M) man page or attend
HP Education’s LVM course (H6285S) for more information.
Student Notes
After creating physical volumes, volume groups, and logical volumes, use pvdisplay,
vgdisplay, and lvdisplay to verify the results.
First, use vgdisplay -v (verbose) to display the volume group’s header information, plus
summary information for its logical and physical volumes:
# vgdisplay -v vg01
--- Volume groups ---
VG Name /dev/vg01
VG Write Access read/write
VG Status available
Max LV 2047
Cur LV 0
Open LV 0
Max PV 2048
Cur PV 2
Act PV 2
Max PE per PV 262144
VGDA 4
PE Size (Mbytes) 4
Total PE 500
Alloc PE 8
Free PE 492
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
VG Version 2.2             (this line only appears in 11i v3)
VG Max Size 1t             (this line only appears in 11i v3)
VG Max Extents 262144      (this line only appears in 11i v3)
LV Name /dev/vg01/datavol
LV Status available/syncd
LV Size (Mbytes) 16
Current LE 4
Allocated PE 4
Used PV 1
PV Name /dev/disk/disk2
PV Status available
Total PE 250
Free PE 250
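The extent counts in the output above can be cross-checked with simple arithmetic (illustrative only): two 16 MB logical volumes with 4 MB extents consume 8 physical extents.

```python
pe_size_mb, total_pe = 4, 500  # "PE Size" and "Total PE" from vgdisplay above
lv_sizes_mb = [16, 16]         # swapvol and datavol

alloc_pe = sum(size_mb // pe_size_mb for size_mb in lv_sizes_mb)
print(alloc_pe)                # 8, matching "Alloc PE"
print(total_pe - alloc_pe)     # 492, matching "Free PE"
```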
Then use lvdisplay -v (verbose) to display a logical volume’s header information, plus
the logical volume’s extent map. Without the verbose option, lvdisplay only reports
logical volume header information.
# lvdisplay -v /dev/vg01/swapvol
--- Logical volumes ---
LV Name /dev/vg01/swapvol
VG Name /dev/vg01
LV Permission read/write
LV Status available/syncd
Mirror copies 0
Consistency Recovery MWC
Schedule parallel
LV Size (Mbytes) 16
Current LE 4
Allocated PE 4
Stripes 0
Stripe Size (Kbytes) 0
Bad block on
Allocation strict
IO Timeout (Seconds) default
# lvdisplay -v /dev/vg01/datavol
--- Logical volumes ---
LV Name /dev/vg01/datavol
VG Name /dev/vg01
LV Permission read/write
LV Status available/syncd
Mirror copies 0
Consistency Recovery MWC
Schedule parallel
LV Size (Mbytes) 16
Current LE 4
Allocated PE 4
Stripes 0
Stripe Size (Kbytes) 0
Bad block NONE
Allocation strict
Then use pvdisplay -v (verbose) to display a physical volume’s header information, plus
an extent map. Without the verbose option, pvdisplay only reports physical volume
header information.
# pvdisplay -v /dev/disk/disk1
--- Physical volumes ---
PV Name /dev/disk/disk1
VG Name /dev/vg01
PV Status available
Allocatable yes
VGDA 2
Cur LV 2
PE Size (Mbytes) 4
Total PE 250
Free PE 242
Allocated PE 8
Stale PE 0
IO Timeout default
Autoswitch On
Proactive Polling On
Student Notes
The first slide in this chapter noted that HP-UX now supports three different disk space
management solutions. Depending on your system's needs, you can choose to use one, two,
or all three solutions on your system. This slide compares the relative advantages and
disadvantages of each approach.
The earliest versions of VxVM could only be used to configure HP-UX data disks. However,
VxVM 3.5 (the current version of VxVM on 11i v1 and v2) and VxVM 4.1 and 5.0 (on 11i v3)
can be used to manage both data and boot disks.
Logical Volume Manager is included with most Linux distributions, and AIX includes a similar
volume manager. LVM commands on HP-UX and Linux are nearly identical. Volume
manager commands on AIX are different.
Veritas supports VxVM on Linux, most major commercial UNIX flavors, and even on
Microsoft Windows. VxVM commands are identical across platforms.
RAID 5 Supported?
The mirrored/striped partitions described above deliver both stability and performance.
However, mirrored/striped partitions are also expensive to implement, since every megabyte
of data requires two or more megabytes of disk space: one for each mirror. RAID 5 is a
sophisticated technology that provides the same stability that one would expect from a
mirrored partition, while simultaneously providing some of the same read performance
benefits that one would expect from a striped partition.
RAID 5 functionality is available via an add-on license for VxVM. RAID 5 functionality is not
available in LVM.
With an additional “Active/Active” DMP license, VxVM can even utilize both paths
simultaneously to improve performance.
Though LVM doesn’t provide active/active dynamic multipathing, the new mass storage stack
in 11i v3 provides equivalent functionality.
Add-on Functionality
The features in the table above that are preceded by an asterisk (*) require a special license.
LVM mirroring functionality is included in the HP-UX 11i v1 & 11i v2 Enterprise and Mission
Critical Operating Environment software bundles, and in the HP-UX 11i v3 High Availability,
VSE, and Data Center Operating Environments. Other customers must purchase a license for
MirrorDisk/UX. Use the swlist command to determine if the mirroring license is
configured on your system.
All HP-UX Operating Environments include the Base-VxVM product, which makes it possible
to configure simple VxVM volumes and disk groups, and mirror the boot disk. In order to use
other VxVM high availability features, though, you must install one of the Veritas B9116*
bundles, or one of the T277[1-7]* Serviceguard Storage Management Suite bundles. To
determine if you have access to the VxVM online features, type the following:
Directions
In this exercise you will have an opportunity to create physical volumes, volume groups, and
logical volumes in LVM disk layout 1 (LVMv1). Your system should have at least one unused
spare disk. Your instructor will tell you which spare disk to use. Record the disk DSF name
below. If you consult the solutions, note that diska should be replaced with your spare
disk’s DSF name.
diska = ____________________
Except where noted, do all of the exercises from the command line.
2. Use lvmadm -l to determine which of the disks on your system are already members of
active volume groups. Verify that the spare disk suggested by your instructor is not
already listed in /etc/lvmtab and /etc/lvmtab_p.
3. Before adding a disk to a volume group, you may want to check the size of the disk. This
is accomplished via the diskinfo command. How large is your spare disk?
# diskinfo /dev/rdisk/diska
4. Create a new LVMv1.0 vg01 volume group using your newly created physical volume.
You will have an opportunity to configure an LVMv2.x volume group later in the lab.
5. Use vgdisplay and pvdisplay to check the status of your new physical volume and
volume group. How many physical volumes are in the volume group at this point? How
many logical volumes are in the volume group at this point? What is the extent size?
6. Create two 24-MB logical volumes in your new volume group. Name the first logical
volume cadvol and the second camvol.
7. Use vgdisplay and lvdisplay to ensure that your new logical volumes were actually
created.
8. Do an ll of the /dev/vg01 directory. What is the name of the volume group DSF for
your new volume group? Each of your logical volumes should have two DSFs. Why?
9. Remove the vg01 volume group. For now, use the shortcut cookbook described below.
A later chapter in the course describes the process required to remove a volume group
more formally.
# vgchange -a n vg01
# vgexport vg01
10. Execute vgdisplay vg01. This should report that the volume group no longer exists.
If the volume group does exist, return to the previous step.
2. For the sake of variety, create a new LVMv2.0 volume group called vg02 using the spare
disk you just pvcreated. Specify a 4MB extent size and ensure that the volume group can
accommodate up to 1TB of disk space.
3. Now that you have a volume group, try creating a few logical volumes. Create a logical
volume called test1vol in vg02. This time, though, don't specify a size for your logical
volume. Based on the result of this experiment, what is the default logical volume size?
4. What happens if you don't specify a logical volume name when you lvcreate? Try it.
Create two new logical volumes of size 12 MB and 16 MB, leaving off the -n option in
both cases. What names did LVM assign to your new logical volumes? Why?
5. See what happens if you attempt to create an 11 MB logical volume in vg02 called
test2vol. Watch the output from lvcreate carefully. What size is your new logical
volume? Explain.
6. At some point in your UNIX career, you will almost certainly accidentally execute
lvcreate -l instead of lvcreate -L. What appears to be the difference between
these two options? Try it and find out.
7. Remove the vg02 volume group. For now, use the shortcut cookbook described below.
A later chapter in the course describes the process required to remove a volume group
more formally.
# vgchange -a n vg02
# vgexport vg02
# echo $DISPLAY
2. VxVM refuses to manage disks that contain LVM header information. Use the dd
command to remove any remnants of the LVM headers from your spare disk.
3. Verify that VxVM is installed on your system. The VxVM 4.1 product name is Base-
VXVM. The VxVM 5.0 product name is Base-VxVM-50. Exactly one of the two products
should be installed.
4. Run the vxinstall program to start up the VxVM daemons. You only need to run this
utility once, when you first install VxVM.
# vxinstall
• Don’t enter any license keys. The license required to create mirrored and RAID5
volumes should already be installed.
• Don’t use enclosure-based names; we’ll discuss enclosure-based names later in the
course.
• Don’t select a default disk group.
Answer:
# vxinstall
VxVM uses license keys to control access. If you have not yet
installed a VxVM license key on your system, you will need to do
so if you want to use the full functionality of the product.
Licensing information:
System host ID: 4289413582
Host type: ia64 hp server rx2600
5. VxVM can be managed from the command line, or via the "Veritas Enterprise
Administrator" (vea) GUI-based management tool. In this lab, we will use vea. Launch
the VEA client GUI.
# /opt/VRTSob/bin/vea
7. Click File->Connect.
8. Enter your lab system’s fully qualified hostname, then click [Connect] to connect.
10. When the VEA interface appears, click the magnifying glass icon to the left of your
hostname in the system object list on the left.
12. Click Actions->Rescan to ensure that VEA is displaying the most current information.
13. Click Disk Groups in the object list on the left. If you have any disk groups, they
should appear in the object detail list on the right. You should not have any disk groups
currently.
14. Click Disk Groups in the object browser on the left. VxVM disk groups are similar to
LVM volume groups. If you had any disk groups, they would appear in the detail pane on
the right. There shouldn’t be any disk groups currently.
c. In the Available Disks list, select your spare disk. Note that VxVM uses legacy
device file names rather than persistent device file names by default.
d. In the Disk Names text box, enter datadg01. VxVM will use datadg01 as a
hardware path independent name for the disk.
g. Confirm that you wish to add the disk to the disk group.
h. Click Finish to confirm that you wish to create the disk group.
17. Back in the VEA main window, click Disk Groups in the object list on the left to verify
that your disk group was successfully created.
18. Back in the VEA main window, select the new datadg disk group.
19. Click Actions->New Volume... to create the new volume. VxVM volumes are similar
to LVM logical volumes.
23. In the Choose the method by which to select Disks for this Volume
dialog box, click Let Volume Manager decide which disks to use. Click
Next>.
c. Click Next.
25. When asked to select attributes for the new volume, accept all of the defaults and click
Next.
26. In the Create file system dialog box, select No File System, then click Next.
28. Back in the VEA main window, click Volumes in the object list on the left to verify that
your volume was successfully created.
29. Back in the VEA main window, click Disk Groups in the object list.
Exiting VEA
34. LVM refuses to overwrite VxVM headers. Use the dd command to clobber any VxVM
headers remaining on the disk.
NOTE: This lab only demonstrated VxVM's basic functionality. For more
information on VxVM, attend HP’s Veritas Volume Manager training
courses, HB505S, or read the Veritas Volume Manager documentation
on http://docs.hp.com.
A similar Disks and File Systems functional area exists in sam in earlier versions of
HP-UX.
Directions
In this exercise you will have an opportunity to create physical volumes, volume groups, and
logical volumes. Your system should have at least one unused spare disk. Your instructor
will tell you which spare disk to use. Record the disk DSF name below. If you consult the
solutions, note that diska should be replaced with your spare disk’s DSF name.
diska = ____________________
Except where noted, do all of the exercises from the command line.
Answer:
2. Use lvmadm -l to determine which of the disks on your system are already members of
active volume groups. Verify that the spare disk suggested by your instructor is not
already listed in /etc/lvmtab and /etc/lvmtab_p.
Answer:
# lvmadm -l
The disk suggested by your instructor should not appear in the output.
3. Before adding a disk to a volume group, you may want to check the size of the disk. This
is accomplished via the diskinfo command. How large is your spare disk?
# diskinfo /dev/rdisk/diska
Answer:
# diskinfo /dev/rdisk/diska
Answer:
# lvmadm -t
Answer:
# pvcreate /dev/rdisk/diska
Answer:
# pvdisplay /dev/disk/diska
4. Create a new LVMv1.0 vg01 volume group using your newly created physical volume.
You will have an opportunity to configure an LVMv2.x volume group later in the lab.
Answer:
# mkdir /dev/vg01
# mknod /dev/vg01/group c 64 0x010000
# vgcreate vg01 /dev/disk/diska
5. Use vgdisplay and pvdisplay to check the status of your new physical volume and
volume group. How many physical volumes are in the volume group at this point? How
many logical volumes are in the volume group at this point? What is the extent size?
Answer:
Currently there should be just one PV in the volume group, and no LVs. The PE size
should be 4 MB (the default).
6. Create two 24-MB logical volumes in your new volume group. Name the first logical
volume cadvol and the second camvol.
Answer:
7. Use vgdisplay and lvdisplay to ensure that your new logical volumes were actually
created.
Answer:
# vgdisplay -v | more
# lvdisplay -v /dev/vg01/cadvol
# lvdisplay -v /dev/vg01/camvol
8. Do an ll of the /dev/vg01 directory. What is the name of the volume group DSF for
your new volume group? Each of your logical volumes should have two DSFs. Why?
Answer:
# ll /dev/vg01
The volume group DSF should be called /dev/vg01/group. Each logical volume
requires both a raw and a block DSF. Some commands used to access logical volumes
require a block DSF, while others require a character DSF. Both DSF types are required
for every logical volume.
9. Remove the vg01 volume group. For now, use the shortcut cookbook described below.
A later chapter in the course describes the process required to remove a volume group
more formally.
# vgchange -a n vg01
# vgexport vg01
10. Execute vgdisplay vg01. This should report that the volume group no longer exists.
If the volume group does exist, return to the previous step.
Answer:
# vgdisplay -v vg01
Answer:
# pvcreate -f /dev/rdisk/diska
2. For the sake of variety, create a new LVMv2.0 volume group called vg02 using the spare
disk you just pvcreated. Specify a 4MB extent size and ensure that the volume group can
accommodate up to 1TB of disk space.
Answer:
# vgcreate -V 2.0 -S 1t -s 4 vg02 /dev/disk/diska
# pvdisplay /dev/disk/diska
# vgdisplay -v vg02
3. Now that you have a volume group, try creating a few logical volumes. Create a logical
volume called test1vol in vg02. This time, though, don't specify a size for your logical
volume. Based on the result of this experiment, what is the default logical volume size?
4. What happens if you don't specify a logical volume name when you lvcreate? Try it.
Create two new logical volumes of size 12 MB and 16 MB, leaving off the -n option in
both cases. What names did LVM assign to your new logical volumes? Why?
Answer:
# lvcreate -L 12 vg02
# lvcreate -L 16 vg02
# vgdisplay -v vg02
By default, LVM uses the following naming convention for new logical volumes: lvol1,
lvol2, lvol3,... This is the expected behavior in all current volume group
versions.
In this case, since these were the second and third logical volumes in vg02, LVM named
them lvol2 and lvol3. The number after lvol should match the last couple of digits
of the logical volume DSF's minor number.
5. See what happens if you attempt to create an 11 MB logical volume in vg02 called
test2vol. Watch the output from lvcreate carefully. What size is your new logical
volume? Explain.
Answer:
# lvcreate -L 11 -n test2vol vg02
# vgdisplay -v vg02
If you choose a size that is not a multiple of the extent size, LVM rounds up to the nearest
extent boundary.
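The rounding rule can be sketched as follows (illustrative arithmetic; the function name is hypothetical, not an LVM command):

```python
import math

def rounded_lv_size(requested_mb, pe_size_mb=4):
    """Round a requested size up to the next extent boundary, as
    lvcreate -L does when the size isn't a multiple of the extent size."""
    return math.ceil(requested_mb / pe_size_mb) * pe_size_mb

print(rounded_lv_size(11))   # 12 -> the 11 MB request yields a 12 MB LV
print(rounded_lv_size(16))   # 16 -> exact multiples are unchanged
```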
6. At some point in your UNIX career, you will almost certainly accidentally execute
lvcreate -l instead of lvcreate -L. What appears to be the difference between
these two options? Try it and find out.
Answer:
The -L option defines a logical volume's size in megabytes, while the -l option defines a
logical volume's size in extents. Thus, with a 4 MB extent size, lvcreate -l 12 results in
a much larger logical volume than lvcreate -L 12. This is the expected behavior in all
current volume group versions.
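With the default 4 MB extent size, the difference is easy to quantify (illustrative arithmetic only):

```python
pe_size_mb = 4                   # default extent size in this volume group

size_with_L = 12                 # lvcreate -L 12 -> 12 MB
size_with_l = 12 * pe_size_mb    # lvcreate -l 12 -> 12 extents = 48 MB

print(size_with_L, size_with_l)  # 12 48
```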
7. Remove the vg02 volume group. For now, use the shortcut cookbook described below.
A later chapter in the course describes the process required to remove a volume group
more formally.
# vgchange -a n vg02
# vgexport vg02
# echo $DISPLAY
2. VxVM refuses to manage disks that contain LVM header information. Use the dd
command to remove any remnants of the LVM headers from your spare disk.
3. Verify that VxVM is installed on your system. The VxVM 4.1 product name is Base-
VXVM. The VxVM 5.0 product name is Base-VxVM-50. Exactly one of the two products
should be installed.
4. Run the vxinstall program to start up the VxVM daemons. You only need to run this
utility once, when you first install VxVM.
# vxinstall
• Don’t enter any license keys. The license required to create mirrored and RAID5
volumes should already be installed.
• Don’t use enclosure-based names; we’ll discuss enclosure-based names later in the
course.
• Don’t select a default disk group.
Answer:
# vxinstall
VxVM uses license keys to control access. If you have not yet
installed a VxVM license key on your system, you will need to do
so if you want to use the full functionality of the product.
Licensing information:
System host ID: 4289413582
Host type: ia64 hp server rx2600
5. VxVM can be managed from the command line, or via the "Veritas Enterprise
Administrator" (vea) GUI-based management tool. In this lab, we will use vea. Launch
the VEA client GUI.
# /opt/VRTSob/bin/vea
7. Click File->Connect.
8. Enter your lab system’s fully qualified hostname, then click [Connect] to connect.
10. When the VEA interface appears, click the magnifying glass icon to the left of your
hostname in the system object list on the left.
12. Click Actions->Rescan to ensure that VEA is displaying the most current information.
13. Click Disk Groups in the object list on the left. If you have any disk groups, they
should appear in the object detail list on the right. You should not have any disk groups
currently.
14. Click Disk Groups in the object browser on the left. VxVM disk groups are similar to
LVM volume groups. If you had any disk groups, they would appear in the detail pane on
the right. There shouldn’t be any disk groups currently.
c. In the Available Disks list, select your spare disk. Note that VxVM uses legacy
device file names rather than persistent device file names by default.
d. In the Disk Names text box, enter datadg01. VxVM will use datadg01 as a
hardware path independent name for the disk.
g. Confirm that you wish to add the disk to the disk group.
h. Click Finish to confirm that you wish to create the disk group.
17. Back in the VEA main window, click Disk Groups in the object list on the left to verify
that your disk group was successfully created.
18. Back in the VEA main window, select the new datadg disk group.
19. Click Actions->New Volume... to create the new volume. VxVM volumes are similar
to LVM logical volumes.
23. In the Choose the method by which to select Disks for this Volume
dialog box, click Let Volume Manager decide which disks to use. Click
Next>.
c. Click Next.
25. When asked to select attributes for the new volume, accept all of the defaults and click
Next.
26. In the Create file system dialog box, select No File System, then click Next.
28. Back in the VEA main window, click Volumes in the object list on the left to verify that
your volume was successfully created.
29. Back in the VEA main window, click Disk Groups in the object list.
Exiting VEA
34. LVM refuses to overwrite VxVM headers. Use the dd command to clobber any VxVM
headers remaining on the disk.
NOTE: This lab only demonstrated VxVM's basic functionality. For more
information on VxVM, attend HP’s Veritas Volume Manager training
courses, HB505S, or read the Veritas Volume Manager
documentation on http://docs.hp.com.
A similar Disks and File Systems functional area exists in sam in earlier versions of
HP-UX.
Answer:
# pvcreate -f /dev/rdisk/diska
# vgcreate -V 2.2 -S 1t -s 4 vg01 /dev/disk/diska
# lvcreate -L 32 -n swapvol vg01
# lvcreate -L 32 -n datavol vg01
Define the terms: superblock, inode, directory, directory entry, block, and extent.
Manually mount VxFS, CDFS, ISO, and MemFS file systems via mount.
• A UNIX file system is a collection of files and directories managed together as a unit
• Each file system resides in a logical volume, or a whole disk partition
• Systems often have multiple file systems, each containing a portion of the system’s files
• Mounting a file system makes the file system accessible to users
• Mount point directories enable users to transparently navigate between file systems
Student Notes
A UNIX file system is a collection of files and directories stored and managed together as a
unit. Each file system resides in a separate logical volume or whole disk partition. HP-UX
systems usually have multiple file systems.
• Operating system files under /usr are usually stored in one file system.
• Variable length log and spool files under /var are usually stored in another file system.
• Temporary files under /tmp are usually stored in another file system.
• User home directories under /home are usually stored in another file system.
• Data files under /data may be stored in yet another file system.
The / (root) file system is a special file system that includes the /etc, /dev, /sbin, and
other directories containing files used very early in the system boot process.
Each file system may be tuned independently. There are a number of parameters associated
with each file system that can significantly affect system performance. It may be beneficial
to optimize some file systems for storage of large files, while others are optimized for storage
of smaller files.
File system maintenance tasks may be performed on one file system, while other file systems
remain accessible to your users.
Mounting a file system makes the file system accessible to users by logically associating the
file system with a directory in the system’s file hierarchy. The directory upon which a file
system is mounted is known as the file system’s mount point directory. Mount points enable
users to easily navigate from file system to file system with no knowledge of the underlying
partitions.
The directory structure on the slide includes files residing in three different file systems.
The files residing in /etc and /dev reside in the “root” file system stored in
/dev/vg00/lvol3. HP-UX mounts the root file system on the / directory very early in the
system startup process.
The file system contained in /dev/vg00/lvol4, which contains user home directories, is
mounted on the /home mount point.
The file system contained in /dev/vg01/datavol, which contains application data files, is
mounted on the /data mount point.
These are just a few examples; systems typically have many more mounted file systems.
The administrator can also unmount a file system, rendering the file system temporarily
inaccessible to users. Some administrative tasks described later in the course can only be
performed on unmounted file systems.
Student Notes
HP-UX supports several different file system types. The notes below briefly describe some of
the features of the most common file system types.
There are two JFS products. The Base JFS product provides the fast recovery feature and is
included in all current HP-UX releases. OnlineJFS offers these additional capabilities:
• online defragmentation and reorganization
• online expansion and contraction of file system size
• online backup
Notes later in this chapter describe the steps required to configure JFS file systems.
CacheFS
CacheFS performs local disk caching of NFS file systems on NFS clients, reducing network
traffic, and potentially improving NFS client and server performance. CacheFS only
improves NFS read performance; it does not affect NFS write performance.
The first time data is read from an NFS-mounted file system, there is some overhead while
CacheFS writes NFS file system data to its local cache. After the data is written to the cache,
read performance for the file system is significantly improved. CacheFS regularly polls the
server to maintain consistency with the server.
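The read-through caching behavior described above can be sketched conceptually in a few lines of Python. This is an illustration of read-through caching in general, not HP-UX or CacheFS code; the class and names are invented for the example.

```python
# Conceptual sketch of CacheFS-style read-through caching: the first read
# of a block fetches it from the "server" and stores it in a local cache;
# later reads are served from the cache without any server traffic.
class ReadThroughCache:
    def __init__(self, server):
        self.server = server       # backing store (stands in for the NFS server)
        self.cache = {}            # local cache (stands in for the CacheFS disk cache)
        self.server_reads = 0      # counts round trips to the server

    def read(self, block):
        if block not in self.cache:            # cache miss: fetch and store
            self.cache[block] = self.server[block]
            self.server_reads += 1
        return self.cache[block]               # cache hit: no server traffic

server = {0: b"alpha", 1: b"beta"}
fs = ReadThroughCache(server)
fs.read(0); fs.read(0); fs.read(1); fs.read(0)
print(fs.server_reads)   # 2: only two server round trips despite four reads
```

The counter shows that repeated reads generate no additional server traffic, which is why this approach helps read-heavy NFS workloads but does nothing for writes.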
To learn more about CacheFS, read the CacheFS chapter in the NFS Services
Administrator’s Guide on http://docs.hp.com.
HP includes CIFS client software in the HP CIFS product. This software makes it possible to
mount file shares from any Samba or Microsoft server on an HP-UX client using the
/etc/fstab file and the standard UNIX mount command. File systems mounted via the
CIFS client software may be accessed using all the standard UNIX utilities and system calls.
Finally, the HP CIFS product includes a Pluggable Authentication Module (PAM) library to
allow users to log onto their HP-UX systems using their Windows domain usernames and
passwords.
HP CIFS is included in the HP-UX Operating Environments.
For more information on Samba and CIFS, read HP's CIFS documentation on
http://docs.hp.com, or purchase O'Reilly and Associates' Using Samba
(ISBN 1-56592-449-5).
The HP-UX “memory-based file system” product (MemFS) enables you to store directories
and files in memory rather than on disk. Users can access files and directories in a MemFS
file system as they would files and directories in an HFS or JFS file system, though the
internal implementation is quite different from disk-based file systems.
MemFS typically provides extremely high throughput. However, since MemFS files are
stored in memory rather than on disk, MemFS files and directories never persist across
reboots. Thus, MemFS is most appropriate for application file systems containing temporary
files.
MemFS was first introduced in 11i v2 and is now supported in 11i v3, too.
To learn more about MemFS, see the Memory File System (MemFS) 2.0 for HP-UX 11i v3
Administrator's Guide on http://docs.hp.com.
Managing File
Systems
Part 1: File System Concepts
Student Notes
Disk space allocated to a file system, regardless of the file system type, is subdivided into file
system blocks. The blocks in a file system may be used for two different purposes.
Some of the blocks in a file system store the actual data contained in user, application, and
OS files. These data blocks account for the majority of the blocks in most file systems.
Some of the blocks in every file system store the file system's metadata. A file system's
metadata describes the structure of the file system. A conceptual understanding of these
structures will contribute greatly to your success as a system administrator.
The next few slides describe some of the metadata structures that are common to most file
system types.
Superblock Concepts
• Every file system has a superblock containing file system structural information
• Use fstyp -v to view selected superblock fields
# fstyp -v /dev/vg01/rdatavol
file system type vxfs
version: 6
f_bsize: 8192
f_frsize: 1024
f_blocks: 32768
f_bfree: 31139
f_bavail: 29193
f_files: 7816
f_ffree: 7784
file system size f_size: 32768
(continues)
Student Notes
Superblock Concepts
Every file system has a structure called a superblock that contains general information
about the file system. The superblock identifies the file system type, size, status, and
attributes, and contains pointers to all of the other file system metadata structures. Since the
superblock contains such critical information, most file systems maintain multiple redundant
copies of the superblock.
The fstyp command displays portions of a file system’s superblock. The slide highlights a
couple of fields of particular interest. See the statvfs(2) man page to view brief descriptions
of several other fields reported by fstyp.
# fstyp -v /dev/vg01/rdatavol
vxfs ← file system type
version: 6
f_bsize: 8192
f_frsize: 1024
f_blocks: 16384
f_bfree: 14755
f_bavail: 13833
f_files: 3720
f_ffree: 3688
f_favail: 3688
f_fsid: 1073741834
f_basetype: vxfs
f_namemax: 254
f_magic: a501fcf5
f_featurebits: 0
f_flag: 16
f_fsindex: 9
f_size: 16384 ← file system size
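The fields reported by fstyp correspond to the statvfs(2) structure, which any POSIX system exposes programmatically. A portable Python sketch (not HP-UX-specific) that reads the same fields:

```python
import os

# os.statvfs() exposes the statvfs(2) fields that fstyp -v reports:
# f_bsize (preferred block size), f_frsize (fragment size),
# f_blocks/f_bfree/f_bavail (total/free/available fragments),
# and f_files/f_ffree (total/free inodes).
sv = os.statvfs("/")

size_bytes = sv.f_blocks * sv.f_frsize    # total file system size in bytes
avail_bytes = sv.f_bavail * sv.f_frsize   # space available to non-root users

print("block size :", sv.f_bsize)
print("total size :", size_bytes, "bytes")
print("available  :", avail_bytes, "bytes")
```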
Inode Concepts
• Every file system has an inode table containing an inode for each file & subdirectory
• A file’s inode records the file’s file permissions, owner, group, and other attributes
• Use ll -i to view inode numbers and attributes
# ll -i /data
3 drwxr-xr-x 2 root root 96 Jul 5 16:34 lost+found
101 -rw-rw-rw- 1 root sys 125 Jul 7 10:04 file1
102 -rw-rw-rw- 1 root sys 1034 Jul 7 10:04 file2
103 -rw-rw-rw- 1 root sys 96 Jul 7 10:04 file3
104 -rw-rw-rw- 1 root sys 4267 Jul 7 10:04 file4
105 -rw-rw-rw- 1 root sys 598 Jul 7 10:04 file5
Student Notes
Every file system has a structure called an inode table, which contains an inode for each file
and subdirectory. A file’s inode identifies the file's type, permissions, owner, group, size, and
other attributes. A file's inode also contains pointers to the data blocks associated with the
file. Each inode is identified by a unique inode number within the file system.
When a user or application accesses a file, the kernel consults the file’s inode to determine if
the user is permitted to access the file. If so, the kernel uses the pointers in the inode to locate
the file’s data blocks.
Viewing Inodes
The ll command displays file attributes from the inode table. When executed with the -i
option, ll also displays each file’s inode number.
# ll -i /data
3 drwxr-xr-x 2 root root 96 Jul 5 16:34 lost+found
101 -rw-rw-rw- 1 root sys 125 Jul 7 10:04 file1
102 -rw-rw-rw- 1 root sys 1034 Jul 7 10:04 file2
The bdf -i command reports the number of available, used, and free inodes in a file system.
JFS creates additional inodes as needed, so the number of available inodes may change as the
number of files in a file system increases.
# bdf -i /data
Filesystem kbytes used avail %used iused ifree %iuse Mounted
/dev/vg01/datavol 16384 1731 13743 11% 9 3663 0% /data
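The inode fields that ll displays come from the stat(2) system call, which behaves the same on any POSIX system. A small, portable Python sketch (the file name is invented for the example):

```python
import os
import stat
import tempfile

# os.stat() returns the file's inode contents: inode number, permission
# bits, hard link count, owner, and size -- the same fields ll -i prints.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "file1")        # a throwaway example file
    with open(path, "w") as f:
        f.write("hello")

    st = os.stat(path)
    print("inode :", st.st_ino)                  # ll -i column 1
    print("mode  :", stat.filemode(st.st_mode))  # e.g. -rw-r--r--
    print("links :", st.st_nlink)                # hard link count
    print("owner :", st.st_uid)                  # numeric owner uid
    print("size  :", st.st_size)                 # 5 bytes of data
```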
Directory Concepts
• Directories may be used to organize groups of related files and subdirectories
• Each directory contains one or more directory entries
• Each directory entry associates a file name with an inode number
• Use ls -i to list the file names in a directory, and each file’s inode number
# ls -i /data
3 lost+found
101 file1
102 file2
103 file3
104 file4
105 file5
Student Notes
Directories may be used to organize groups of related files and subdirectories. Each
directory contains one or more directory entries. Each directory entry associates a file name
with an inode number.
Viewing Directories
Use ls to list the file and subdirectory names in a directory. Add the -i option to include
each file’s inode number, too.
# ls -i /data
3 lost+found
101 file1
102 file2
103 file3
104 file4
105 file5
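Since a directory is simply a table of name-to-inode mappings, the information ls -i prints can also be read programmatically. A portable Python sketch using os.scandir(), whose directory entries expose the inode number stored in the entry itself:

```python
import os
import tempfile

# Each directory entry is just a (name, inode number) pair; os.scandir()
# exposes the inode recorded in the entry via DirEntry.inode().
with tempfile.TemporaryDirectory() as d:
    for name in ("file1", "file2", "file3"):
        open(os.path.join(d, name), "w").close()

    entries = {e.name: e.inode() for e in os.scandir(d)}  # ls -i equivalent
    for name in sorted(entries):
        print(entries[name], name)
```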
JFS Block
Student Notes
Disk space allocated to a file system is subdivided into file system blocks. When a user
writes to a file, HP-UX allocates one or more data blocks to store the file’s data. The
algorithm used to allocate and manage blocks to files varies by file system type. The notes
below are JFS-specific.
JFS Blocks
JFS subdivides a file system’s disk space into equal-size blocks. The block size determines
the smallest unit of disk space that can be allocated to a file.
The block size must be consistent within a file system, but may vary between file systems.
1KB, 2KB, 4KB, and 8KB are the currently supported block sizes. File systems with many
small files may benefit from a smaller block size. File systems with relatively few files may
benefit from a larger block size. Benchmark your application with several different block
sizes to determine the ideal configuration. The block size may be specified at file system
creation, but cannot be changed thereafter.
The block size impacts the file system’s maximum file system size. File systems greater than
4TB require a larger block size. For example, in order to create a file system that is larger
than 16TB, JFS requires an 8KB block size. See the mkfs_vxfs(1M) man page for more
information.
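Because the block is the smallest allocatable unit, every file's disk footprint is rounded up to a whole number of blocks. The arithmetic below illustrates why file systems full of small files waste space with a large block size; the numbers are illustrative only:

```python
# A file always occupies ceil(size / block_size) whole blocks, so small
# files leave unused space ("internal fragmentation") inside their last
# block -- more of it as the block size grows.
def blocks_used(file_size, block_size):
    return -(-file_size // block_size)   # ceiling division

for bsize in (1024, 2048, 4096, 8192):   # the supported JFS block sizes
    footprint = blocks_used(100, bsize) * bsize
    print(f"{bsize:>4}B blocks: a 100-byte file occupies {footprint} bytes")
```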
JFS Extents
A JFS extent is a variable-length sequence of adjacent blocks. When allocating space for a
file, JFS allocates extents, rather than individual blocks. As a file grows, JFS tries to extend
the file’s last existing extent.
If JFS can’t extend the last existing extent, it uses another extent elsewhere in the file system.
Allocating one large contiguous extent to a file rather than multiple discontiguous blocks
allows JFS to very efficiently service large I/O requests.
There isn’t an easy way to view the number and size of an individual file’s blocks and extents,
but the fsadm -F vxfs -DE command does report the average number of extents per file,
and the size and distribution of a mounted file system’s free extents. A later chapter explains
how to use this command to “defragment” a file system.
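The extend-in-place behavior described above can be sketched as a toy allocator. This is a conceptual illustration, not JFS's actual algorithm; the function, data structures, and free-space policy are invented for the example:

```python
# Toy extent allocator: when a file grows, first try to extend its last
# extent in place; only when the next block is taken does a new,
# discontiguous extent get started. (Conceptual sketch, not JFS internals.)
def grow_file(extents, nblocks, in_use):
    """extents: list of [start, length] pairs; in_use: allocated block numbers."""
    start, length = extents[-1]
    nxt = start + length                     # block just past the last extent
    while nblocks and nxt not in in_use:     # extend the last extent in place
        in_use.add(nxt)
        extents[-1][1] += 1
        nxt += 1
        nblocks -= 1
    if nblocks:                              # blocked: start a new extent
        free = max(in_use) + 1               # naive free-space policy
        extents.append([free, nblocks])
        in_use.update(range(free, free + nblocks))
    return extents

in_use = {0, 1, 2, 10}                       # block 10 belongs to another file
print(grow_file([[0, 3]], 9, in_use))        # [[0, 10], [11, 2]]
```

The file grows in place through block 9, hits the foreign block 10, and only then gains a second, discontiguous extent.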
# ln /data/file1 /data/f1
# ll -i
101 -rwxr-xr-x 2 root sys 1599 Jul 12 00:39 f1
101 -rwxr-xr-x 2 root sys 1599 Jul 12 00:39 file1
Student Notes
Although most inodes are associated with exactly one directory entry, hard links make it
possible to associate multiple directory entries with a single inode. Since the inode contains
pointers to a file’s blocks and extents, both hard links ultimately reference the same user
data, too. This, in effect, allows your users to reference a single file via several different file
names.
The example on the slide shows a file /data/f1 that is hard linked to /data/file1. Both
names reference the same inode, and thus share the same permissions, owner, time stamp,
and data.
Since both file names reference the same inode, they also both ultimately reference the same
data blocks. Changes made to f1 will be reflected in file1, and vice versa. f1 and file1
are essentially the same file! Oftentimes it is useful to associate multiple file names with a
single file in this manner.
A hard link may be created with the ln command. The first argument identifies the file name
of the existing file, and the second identifies the name of the new link.
Creating a hard link creates a new directory entry for the new link, and increments the link
count field in the inode. The second field in the output from the ll command shows the
number of links to each file.
Administrators sometimes use hard links to associate multiple file names with a single device
file. For instance, some 11i v1 and v2 administrators use hard links to provide more user-
friendly tape drive DSF names:
# ln /dev/rmt/c0t0d0BEST /dev/tape
After the administrator creates this link, users can access the c0t0d0 tape drive using the
intuitive name /dev/tape, rather than the cryptic default name /dev/rmt/c0t0d0BEST.
Be aware of two hard link limitations:
• Hard links cannot cross file system boundaries.
• Hard links cannot link directories.
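The hard link mechanics described above are ordinary UNIX behavior and can be demonstrated portably with Python's os.link(), which issues the same link(2) call that ln uses (file names invented for the example):

```python
import os
import tempfile

# Two directory entries, one inode: after os.link(), both names report the
# same inode number and a link count of 2, and writes through either name
# are visible through the other.
with tempfile.TemporaryDirectory() as d:
    file1 = os.path.join(d, "file1")
    f1 = os.path.join(d, "f1")
    with open(file1, "w") as f:
        f.write("shared data")

    os.link(file1, f1)                       # like: ln file1 f1

    print(os.stat(file1).st_ino == os.stat(f1).st_ino)  # True: same inode
    print(os.stat(file1).st_nlink)                      # 2

    with open(f1, "a") as f:                 # append via one name...
        f.write("!")
    with open(file1) as f:
        print(f.read())                      # ...read via the other: shared data!
```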
Questions
# ln -s /data/file2 /data/f2
# ll -i
112 lrwxrwxrwx 1 root sys 2 Jul 12 00:41 f2 -> file2
102 -rwxr-xr-x 1 root sys 1599 Jul 12 00:40 file2
Student Notes
Symbolic links, like hard links, make it possible to associate multiple file names with a single
file. Unlike hard links, however, symbolic links can cross file system boundaries, and can be
used to link directories.
In the example on the slide, /data/f2 is a symbolic link to /data/file2. f2 and file2
have distinct directory entries and inodes. However, as shown on the slide, /data/f2 is
nothing more than a pointer to /data/file2! Accessing /data/f2 yields the same data
one would see when accessing /data/file2.
Symbolic links are particularly useful when you must move files from one file system to
another, but still wish to be able to use the file's original path name. In HP-UX version 9.x,
system executables were stored in the /bin directory. In HP-UX version 10.x, many
operating system executables were moved to /usr/bin. However, a symbolic link exists
from /bin to /usr/bin so users and applications can still use the version 9 path names.
This is just one situation where symbolic links are commonly used.
Use the ln command with -s to create a symbolic link. The first argument identifies the
existing file that you wish to link to. Additional arguments specify the path names of the
symbolic links you wish to create to the existing file.
# ln -s /data/file2 /data/f2
112 lrwxrwxrwx 1 root sys 2 Jul 12 00:41 f2 -> file2
102 -rwxr-xr-x 1 root sys 1599 Jul 12 00:40 file2
The ll command identifies symbolic links with an l in the first character position. Also, the
file name field in the ll output identifies the file to which a symbolic link leads.
# rm /data/file2
# cat /data/f2
cat: Cannot open /data/f2: No such file or directory
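The dangling-link behavior shown above is standard symlink(2) semantics and can be demonstrated portably with Python (file names invented for the example):

```python
import os
import tempfile

# A symbolic link is its own inode whose data is just a path name. Removing
# the target leaves the link in place but makes it impossible to follow.
with tempfile.TemporaryDirectory() as d:
    file2 = os.path.join(d, "file2")
    f2 = os.path.join(d, "f2")
    with open(file2, "w") as f:
        f.write("data")

    os.symlink(file2, f2)                 # like: ln -s file2 f2
    print(os.readlink(f2))                # the link stores only a path name

    os.remove(file2)                      # remove the target
    print(os.path.islink(f2))             # True: the link itself survives
    try:
        open(f2)                          # following it now fails...
    except FileNotFoundError:
        print("No such file or directory")
```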
Question
1. Why is it possible to create symbolic links, but not hard links, across file system
boundaries?
Student Notes
JFS file systems use an intent log to track pending file system metadata updates.
Improper system shutdowns caused by power outages, system panics, or administrator error
may result in file system metadata inconsistencies. During the next system boot, HP-UX
automatically executes fsck (file system check) to repair these inconsistencies. Repairing
large HFS file systems after an improper shutdown may take several hours, and may not even
be possible in some cases. Repairing JFS file systems typically only takes a few seconds;
fsck simply consults the intent log and nullifies or completes the pending metadata updates.
The intent log only guarantees metadata integrity; applications must provide their own
logging mechanism to guarantee data integrity.
The intent log is a circular log. When JFS completes an update, it reuses that transaction’s
space in the intent log to service new transactions.
The intent log size is configurable and can be as large as 16MB in JFS file system layout 4,
and as large as 256MB in JFS file system layout 6. The default size varies based on the file
system size. A larger intent log may improve performance on NFS servers and other systems
that manage metadata-intensive workloads. Choosing a smaller log size leaves more room in
the file system for files and directories. A smaller intent log also decreases the time required
to complete an fsck intent log replay after an improper shutdown. You can use the fsadm
command to resize the intent log.
Example: What Happens When a File is Removed from a JFS file system?
The graphic on the slide describes the process JFS uses to remove a file, and the impact this
process has on the JFS file system structures. After each step below, what impact would a
system crash have on the file system? Would the file system be left in a consistent state? If
not, what could be done conceptually to return the file system to a consistent state?
1. The diagram on the left depicts a JFS file system containing two files: f1 (20MB) and f2
(30MB).
2. When a user removes file f2, JFS starts by updating an in-core, memory-based copy of
the file system metadata to reflect the fact that the file has been removed. This step isn’t
shown on the slide.
After updating the in-core metadata, JFS records its intent to modify the metadata in the
on-disk "intent log". Then, if the update is interrupted by a system crash, JFS has a record
of pending file system metadata changes. This greatly simplifies recovery after a system
crash.
3. After writing the intent log transaction, JFS makes the required changes to the on-disk
metadata. Removing f2 requires JFS to de-allocate f2's inode and data blocks, remove
the file’s directory entry, and update the free-space, space-in-use, and other fields in the
superblock.
4. As JFS completes changes to the file system metadata, it marks the appropriate intent log
entries "done". After an intent log entry is marked "done", JFS reuses that entry's space
for newly requested transactions.
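The four steps above amount to write-ahead logging: record the intent, apply the change, mark the entry done, and on recovery replay anything not marked done. The sketch below is a conceptual model in Python with invented structures; it is not the JFS on-disk format:

```python
# Minimal intent-log (write-ahead logging) model: metadata changes are
# logged before they are applied, so replaying unfinished entries after a
# crash restores consistency. Conceptual only -- not JFS's on-disk format.
class IntentLog:
    def __init__(self):
        self.entries = []

    def log(self, op):
        entry = {"op": op, "done": False}
        self.entries.append(entry)
        return entry

def remove_file(metadata, name, log, crash_before_apply=False):
    entry = log.log(("remove", name))        # 1. record the intent on "disk"
    if crash_before_apply:
        return                               # simulate a crash at the worst time
    metadata.discard(name)                   # 2. apply the metadata change
    entry["done"] = True                     # 3. mark the log entry done

def replay(metadata, log):
    """fsck-style recovery: complete every logged operation not marked done."""
    for entry in log.entries:
        if not entry["done"]:
            op, name = entry["op"]
            if op == "remove":
                metadata.discard(name)
            entry["done"] = True

meta, log = {"f1", "f2"}, IntentLog()
remove_file(meta, "f2", log, crash_before_apply=True)
print(sorted(meta))     # ['f1', 'f2']: the crash left the removal half-done
replay(meta, log)       # intent log replay, as fsck does at boot
print(sorted(meta))     # ['f1']: consistent again
```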
                               HFS            JFS
Max supported file and         128GB/128GB    2TB/2TB    (11i v1 w/ JFS 3.5)
file system size                              2TB/32TB   (11i v2 w/ JFS 4.1)
(11i v1, v2, v3)                              16TB/32TB  (11i v3 w/ JFS 4.1)
                                              16TB/40TB  (11i v3 w/ JFS 5.0)
Student Notes
The second slide in the chapter noted that HP-UX supports two single-system, disk-based file
systems: HFS and JFS. The slide above summarizes the differences between the two file
system types.
Some JFS features in the table are followed by an asterisk (*). These features are only
available with the "Online" JFS product. See the section below titled "Upgrading Base JFS
Systems to Online JFS" for more information.
Note that HFS has been officially deprecated. Though it is still supported in 11i v3, HFS may
not be supported in future releases.
JFS maximum file and file system sizes vary by JFS version and OS release. The chart on the
slide shows the current maximum file and file system sizes at the time this book went to
press. See the latest release notes for more information.
Note that the maximum supported logical volume size in 11i v1 and v2 is 2TB. 11i v2
customers who require larger file systems should consider using VxVM volumes rather than
LVM logical volumes, as VxVM supports much larger volumes.
See the Upgrading JFS File Systems section below to learn about JFS version upgrade
options.
Kernel Support
The HP-UX kernel loader on PARISC is unable to read JFS metadata structures. As a result,
the /stand file system must be HFS on PARISC systems. All other file systems may be, and
usually are, JFS.
The IPF kernel loader is able to read JFS metadata structures, so on Integrity servers the
/stand file system is JFS.
Access Control Lists make it possible to configure additional permissions for additional users
and groups! HFS has supported a proprietary version of Access Control Lists for many years
on HFS file systems (see the man pages for chacl(1) and lsacl(1)). HP's original JFS
release didn't support ACLs. JFS 3.3 introduced support for up to 13 ACLs per file. JFS 4.1
now supports up to 1024 ACLs per file in 11i v3. The HFS and JFS ACL implementations are
somewhat different. To learn about JFS ACLs, attend the HP-UX Security 1 course, or take a
look at HP's HP JFS 3.3 Access Control Lists white paper on http://docs.hp.com.
Crash Recovery
HFS file systems may require a lengthy full fsck after an improper shutdown. JFS metadata,
on the other hand, can be repaired after a system crash in a matter of seconds using the JFS
intent log. In 24x7 environments the JFS fast crash recovery functionality is invaluable.
Online Resizing
HFS file systems can't be extended while users are still accessing files in the file system; the
administrator must unmount the file system. HFS provides no mechanism for reducing a file
system online or offline.
JFS makes it possible to extend, and even reduce, file systems that are still being accessed by
users. Note, however, that this feature is only available to users of the Online JFS product;
Base JFS users must still unmount before their file systems can be extended.
The Online JFS product includes a defragmentation utility that rearranges file system blocks
to improve performance. No defragmentation utility is available for HFS.
# bdf /data verify that the file system has at least 15% free before proceeding
# umount /data
# /sbin/fs/vxfs/vxfsconvert /dev/vg01/rdatavol
# fsck -F vxfs -y -o full /dev/vg01/rdatavol
# mount /dev/vg01/datavol /data
# vi /etc/fstab change the file system type to vxfs
OnlineJFS is an extra product that makes it possible to extend, reduce, defragment, back up,
and tune JFS file systems on mission critical, 24x7 systems. OnlineJFS is included in the
HP-UX 11i v1 and v2 "Enterprise" and "Mission Critical" bundles, and in the 11i v3 “VSE”,
“High Availability”, and “Data Center” Operating Environments. Other customers must
purchase an OnlineJFS license separately.
Use the swlist command to determine whether the optional OnlineJFS product is
installed. The product name suffix varies somewhat depending on the JFS version, so
include an asterisk wildcard character on the end of the product name.
Customers who have an Online JFS license can install the Online JFS product from the
Applications CD with the swinstall command.
# swinstall
After installing the software, you can begin using the Online features immediately; no
changes are required to the existing JFS file systems.
* 11i v3 administrators can select either VxFS 4.1 or 5.0 during the OS installation process.
See the VxFS Installation Guide on http://docs.hp.com if you want to upgrade your
JFS software from 4.1 to 5.0.
Later JFS versions provide backwards compatibility with older JFS versions. Thus, a system
running JFS 5.0 can still mount a file system created in JFS 3.5. To take advantage of the
latest JFS features and performance benefits, though, consider upgrading the file system
metadata structures to the current JFS Layout Version via the vxupgrade command while
the file system remains mounted.
# vxupgrade -n 7 /data
/data: vxfs file system version 7 layout
# fstyp -v /dev/vg01/rdatavol
vxfs
version: 7
f_bsize: 8192
f_frsize: 1024
f_blocks: 16384
f_bfree: 14653
f_bavail: 13738
f_files: 3692
f_ffree: 3660
f_favail: 3660
f_fsid: 1073741834
f_basetype: vxfs
f_namemax: 254
f_magic: a501fcf5
f_featurebits: 0
f_flag: 16
f_fsindex: 9
f_size: 16384
Managing File
Systems
Part 2: Creating and Mounting
File Systems
Student Notes
Ignite-UX, HP’s installation utility, automatically creates file systems on the system boot disk.
New application installations and expanding user and application disk space requirements
may require the administrator to extend existing file systems or create additional new file
systems. A later chapter discusses the procedures required to extend file systems. The
remaining slides in this chapter discuss the commands required to create and mount a new
file system.
Student Notes
The slide above overviews the process required to create a new file system; the remaining
slides in the chapter discuss each step in detail.
When using whole disk partitioning, use ioscan to view a list of disks, then consult
/etc/lvmtab, swapinfo, and bdf to determine which disks are already in use. By process
of elimination, identify an unused disk.
When using LVM, execute vgdisplay to determine if the desired volume group has
sufficient free space to accommodate a new logical volume.
# vgdisplay vg01
...
VG Name /dev/vg01
...
PE Size (Mbytes) 4
Free PE 8691
...
If the volume group lacks sufficient free space, add another disk to the volume group. If the
volume group has sufficient free space, execute lvcreate to create a new logical volume.
To learn more about adding disks and logical volumes, see the LVM chapters elsewhere in
this course.
# newfs /dev/vg01/rdatavol
# mkdir /data
# mount /dev/vg01/datavol /data
Before After
Superblock
Intent Log
Inode Table
/dev/vg01/datavol /dev/vg01/datavol
Student Notes
After selecting a logical volume or disk to be used by a new file system, use the newfs
command to create a superblock, inode table, and other metadata structures. In its simplest
form, newfs only requires a target DSF name, which may be a logical volume or a whole disk
DSF. Be sure to use the device’s raw/character DSF. The output below was captured on an
11i v3 system. newfs command output is slightly more verbose in 11i v1 and v2.
# newfs /dev/vg01/rdatavol
newfs: /etc/default/fs is used for determining the file system
type
version 6 layout
16384 sectors, 16384 blocks of size 1024, log size 1024 blocks
largefiles supported
newfs is just a front-end utility for the mkfs command, which actually creates the file
system. To see which file system layout options mkfs used when creating the file system,
execute mkfs with the -m option. When executed with the -m option, mkfs doesn’t
overwrite the file system; it simply reports the specified file system’s current configuration.
# mkfs -m /dev/vg01/rdatavol
mkfs: /etc/default/fs is used for determining the file system type
mkfs -F vxfs -o ninode=unlimited,bsize=1024,version=6,inosize=256,
logsize=1024,largefiles /dev/vg01/rdatavol 16384
Both newfs and mkfs support a number of additional options which are described below.
-F hfs|vxfs Defines the desired file system type. If -F is not specified, the default
file system type is determined from the /etc/default/fs file.
-o largefiles Determines if the file system will allow "large files" over 2 GB in size.
Large files can be dynamically enabled/disabled later on a mounted file
system via fsadm.
-b block-size Specifies the file system block size in bytes.
For HFS file systems, valid values are: 4096, 8192, 16384, 32768, or
65536 bytes. The default block size is 8192 bytes.
For JFS file systems, valid values are: 1024, 2048, 4096, or 8192 bytes.
For file systems smaller than 2TB, the default block size is 1024. For
file systems larger than 2TB, JFS increases the block size. See the
mkfs_vxfs(1m) man page for details.
-v Verbose. Display the mkfs command used to create the file system.
the -S option. By default, however, file systems now allow "long" file
names up to 256 characters in length.
-f frag-size Specifies the file system's fragment size in bytes. The fragment size
must be a power of two no smaller than the system’s physical block
size and no smaller than one-eighth of the file system block
size. The default value is 1024 bytes.
-m min-free Indicates the percentage of space in the file system reserved for
use by root. If the amount of free space in the file system falls below
this percentage, only the superuser can write to the file system. The
default value is 10%. Consider decreasing this value in large file
systems.
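The min-free policy is a simple threshold check; the sketch below illustrates it with invented numbers (the real enforcement happens inside the kernel's block allocator):

```python
# min-free check: once free space drops to or below min_free percent of
# the file system, only the superuser may allocate new blocks.
def nonroot_write_allowed(free_blocks, total_blocks, min_free_pct=10):
    return 100.0 * free_blocks / total_blocks > min_free_pct

print(nonroot_write_allowed(1500, 10000))   # True: 15% free, above the default 10%
print(nonroot_write_allowed(800, 10000))    # False: 8% free, root-only writes
```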
To customize other VxFS file system layout attributes, create the file system via mkfs rather
than newfs. The example below creates a VxFS file system with a 2MB intent log rather than
the default 1MB; logsize is specified in file system blocks, so with a 1KB block size, 2048
blocks yields a 2MB log. Specifying a non-default intent log size is recommended when
creating file systems to be shared via NFS.
# mkfs -F vxfs -o logsize=2048 /dev/vg01/rdatavol
The -R option reserves space at the end of the disk for use as swap space:
# newfs -F hfs -R 200 /dev/rdisk/disk1 HFS, with 200 MB reserved for swap
# newfs -F vxfs -R 200 /dev/rdisk/disk1 VxFS, with 200 MB reserved for swap
You can also create a boot disk using the whole disk approach. See the description of the -B
option on the newfs_hfs(1m) man page for more information. Most administrators
configure boot disks via LVM, in which case newfs -B isn’t necessary.
NOTE: There are several man pages for the newfs command.
• newfs(1m) describes newfs options common to all file systems.
• newfs_hfs(1m) describes HFS-only options
• newfs_vxfs(1m) describes VxFS-only options.
Student Notes
Mounting a file system logically associates the root directory of the new file system with the
mount point directory. Accessing files below the mount point directory actually references
files in the file system mounted on the mount point directory.
NOTE: Most file system administration commands such as newfs require raw DSFs.
The mount command, however, requires a block DSF.
# mount -v
/dev/vg00/lvol3 on / type vxfs
ioerror=nodisable,log,dev=40000003
on Tue Jun 26 05:17:38 2007
/dev/vg00/lvol1 on /stand type vxfs
ioerror=mwdisable,log,tranflush,dev=40000001
on Tue Jun 26 05:17:45 2007
/dev/vg00/lvol8 on /var type vxfs
ioerror=mwdisable,delaylog,dev=40000008
on Tue Jun 26 05:18:08 2007
/dev/vg00/lvol7 on /usr type vxfs
ioerror=mwdisable,delaylog,dev=40000007
on Tue Jun 26 05:18:08 2007
/dev/vg00/lvol4 on /tmp type vxfs
ioerror=mwdisable,delaylog,dev=40000004
on Tue Jun 26 05:18:08 2007
/dev/vg00/lvol6 on /opt type vxfs
ioerror=mwdisable,delaylog,dev=40000006
on Tue Jun 26 05:18:08 2007
-hosts on /net type autofs
ignore,indirect,nosuid,soft,nobrowse,dev=1000002
on Tue Jun 26 05:18:39 2007
/dev/vg00/lvol5 on /home type vxfs
ioerror=mwdisable,delaylog,dev=40000005
on Mon Jul 2 13:34:18 2007
/dev/vg01/datavol on /data type vxfs
ioerror=mwdisable,delaylog,dev=4000000a
on Thu Jul 12 23:28:05 2007
The bdf command also displays a list of mounted file systems, as well as the amount of
space in use and available in each mounted file system.
# bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 1048576 309216 733648 30% /
/dev/vg00/lvol1 1835008 144144 1677696 8% /stand
/dev/vg00/lvol8 8912896 347504 8501576 4% /var
/dev/vg00/lvol7 4030464 2991960 1030456 74% /usr
/dev/vg00/lvol4 524288 21184 499176 4% /tmp
/dev/vg00/lvol6 5308416 3796456 1500192 72% /opt
/dev/vg00/lvol5 212992 7776 203928 4% /home
/dev/vg01/datavol 16384 1730 13746 11% /data
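The %used column is derived from the used and avail figures rather than from kbytes (some space is reserved, so used + avail is less than kbytes). A portable awk sketch of the calculation, using the /usr figures above (the round-to-nearest behavior is an inference from the sample output, not documented bdf behavior):

```shell
# Recompute bdf's %used for /usr: used=2991960 KB, avail=1030456 KB.
awk 'BEGIN {
    used  = 2991960
    avail = 1030456
    printf "%.0f%%\n", (used * 100) / (used + avail)   # prints 74%
}'
```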
File systems should only be mounted on empty directories. If a file system is mounted on a
directory that already contains files and directories, those files and directories will be hidden
until the file system is unmounted!
Finally, note that it is not possible to mount a file system on a directory that another user or
application is currently using. Trying to mount a file system on a directory that is already in
use results in a "device busy" message.
-o quota|noquota Enable/disable quota checking. See the quota(5) man page for
more information.
The -o log option ensures that when a process modifies a file, VxFS writes all of the
resulting metadata changes to the intent log on disk before the system call returns to the
application. This guarantees the integrity of all file system metadata, but may slightly
degrade performance.
The -o delaylog option logs critical metadata changes immediately, guaranteeing their
integrity, but writes less critical metadata changes (such as time stamp updates)
asynchronously; those may be lost in case of a system crash. This is the default logging
option.
For more complete descriptions of these and many other VxFS mount options, read the
mount_vxfs(1m) man page and the Veritas File System Administrator's Guide on
http://docs.hp.com, or attend HP's HP-UX Performance and Tuning class.
NOTE: There are several man pages for the mount command.
• mount(1m) describes options common to all file systems.
• mount_hfs(1m) describes HFS-only options.
• mount_vxfs(1m) describes VxFS-only options.
Student Notes
Some administration tasks can only be completed on unmounted file systems. Unmounting a
file system logically disassociates the file system from the file system mount point, making
the file system inaccessible to users.
Now that you know how to mount a new file system, you should also be aware of how to
logically disassociate, or unmount, the new file system from the root file system. The
command used to unmount the file system is umount.
NOTE: The command is umount, not "unmount". The command accepts either the
block device file or the mount point directory as an argument.
Instead of using the umount -a command, you can also use the umountall(1M)
command.
Use the fuser command to identify which processes are using a file or file system. Specify
the target file system by device file name or by mount point directory (when using the mount
point directory name, also add the -c option). With the -k option, fuser also kills the
processes that are using the specified file or file system.
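A short sketch, using the /data file system from this chapter's examples:

```
# fuser -cu /data       list the PID, usage code, and owner of each process using /data
# fuser -cku /data      kill those processes, then retry the umount
```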
OnlineJFS versions 3.5 and greater include the vxumount command, which forcefully
unmounts a file system even if processes are currently using it.
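A hedged sketch of a forced unmount (the -o force option is an assumption here; verify the exact syntax on the vxumount(1M) man page for your OnlineJFS version):

```
# vxumount -o force /data       forcibly unmount /data even while it is in use
```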
Be very careful when using the fuser and vxumount commands. If possible, it is much
safer to kill application processes gracefully using your vendors’ recommended shutdown
procedures. Using these utilities to more forcefully kill processes or unmount file systems
that are still in use may leave your applications in an unusable state.
Some file systems, such as the / (root) file system and /usr, can’t be unmounted on a
running system since critical system daemons can’t function properly without them.
The shutdown and reboot commands automatically unmount all file systems during the
system shutdown process.
# vi /etc/fstab
Student Notes
All file systems are unmounted during system shutdown. Any file systems that you wish to
mount automatically after the next system reboot should be added to the /etc/fstab file.
During the boot process, the /sbin/init.d/localmount script automatically mounts
file systems listed in /etc/fstab. This configuration file is not automatically maintained by
the system; it should be manually updated after creating or removing file systems.
After adding a file system to /etc/fstab, you needn't enter the full form of the mount
command when mounting the new file system. Look at the following examples:
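For instance, assuming the /data file system on /dev/vg01/datavol configured in the labs:

```
# mount -F vxfs /dev/vg01/datavol /data    full form: type, device, and mount point
# mount /data                              short form: the remaining fields are
#                                          looked up in /etc/fstab
```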
block - the block device file of the disk or logical volume containing the file
system
pass-number - determines the order in which fsck checks file systems after an
improper shutdown
NOTE: For a more detailed description of the /etc/fstab file syntax, see
the fstab(4) man page. Also see the mount(1m), mount_hfs(1m), and
mount_vxfs(1m) man pages for file system specific mount options.
Use CDFS to mount High Sierra, ISO 9660, and Rock Ridge CDROMs.
3. Mount the file system (additional mount options required in 11i v1).
# mount -F cdfs /dev/disk/disk1 /dvd
Student Notes
CDFS is a kernel subsystem that makes it possible to mount CDROMs and DVDs on HP-UX
systems. CDFS supports several common CDROM formats, including High Sierra, ISO 9660,
and Rock Ridge.
Mounting a CDROM/DVD requires just a few steps: create a mount point directory, then
mount the file system using the CDROM/DVD's block DSF. At a minimum, use the ro mount
option to make it clear that the file system is read-only. The mount example on the slide
includes two additional options, which are described in more detail below.
# ioscan -fnC disk find the block device file (11i v1 & v2)
# ioscan -fnNC disk find the block device file (11i v3)
# mkdir /dvd create a mount point directory
# mount -F cdfs /dev/disk/disk1 /dvd mount the CDROM/DVD
Also consider including the noauto option in /etc/fstab cdfs entries. This option
prevents the /sbin/init.d/localmount script from automatically mounting cdfs file
systems during the system boot process, but still allows the administrator to manually mount
a CDROM/DVD by simply typing mount /dvd. Here is a typical CDFS entry in
/etc/fstab :
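The sample entry appears to have been dropped from the text; a representative entry, combining the disk1 DSF from the slide with the ro and noauto options (adjust the device file name to match your drive):

```
/dev/disk/disk1 /dvd cdfs ro,noauto 0 0
```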
Once mounted, CDROM file systems can be accessed using the same HP-UX file system
commands that are used to navigate an HFS or JFS file system.
Plain ISO 9660 file systems present filenames in uppercase, with a version number appended
after a semicolon (for example, README.TXT;1). This format may be inconvenient in an
HP-UX environment since the semicolon is used as a command separator in the POSIX shell.
You can disable lowercase-to-uppercase filename translation and suppress the display of
version numbers by using the -o cdcase option:
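For example, reusing the disk1 DSF from the earlier mount example:

```
# mount -F cdfs -o ro,cdcase /dev/disk/disk1 /dvd
```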
If you install the ISO 9660 Rock Ridge extension patches mentioned above and mount file
systems with the rr mount option, CDFS provides even more flexibility. In 11i v1, this option
is required when dealing with CDROMs from Oracle and other third-party vendors.
11i v2 and v3 display full-length, mixed-case filenames by default, so the rr and cdcase
options are not necessary.
CD-RW CDs
The open source cdrecord CDROM burner software is included with HP's free Ignite-UX
product, which you can optionally install from the HP-UX media kit. If you wish to create
DVD-R or DVD-RW disks, download and install the dvd+rw-tools product from
http://software.hp.com.
1. Verify that you have the ISO enhancement bundle, and load the ISO kernel module.
# swlist ISOIMAGE-ENH
# kcmodule cdfs=loaded fspd=loaded
Student Notes
11i v3 now enables you to mount ISO image files, too! An ISO file is a disk-based file
containing an ISO 9660 CDFS file system. Many vendors ship software in ISO format files.
For instance, if you have an HP support contract and request “e-delivery”, you can download
HP-UX media kits as ISO files. Search for “e-delivery” on http://www.hp.com to learn
more.
The process required to mount an ISO file is almost identical to the process required to
mount a CDROM.
1. Verify that you have the ISO enhancement bundle. This bundle is only supported on 11i
v3. Then load the product's dynamically loadable kernel module and the cdfs kernel
module via the kcmodule command.
# swlist ISOIMAGE-ENH
# kcmodule cdfs=loaded fspd=loaded
# mkdir /dvd
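3. Mount the ISO file on the mount point. The command itself was dropped from the text; based on the lab exercise, it should look like this (assuming an image in /root/myapp.iso):

```
# mount -F cdfs /root/myapp.iso /dvd
```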
4. Optionally add the file system to /etc/fstab. Consider using the noauto mount
option described on the previous slide.
# vi /etc/fstab
/root/myapp.iso /dvd cdfs noauto 0 0
First verify that the Ignite-UX product, which also provides the mkisofs command, is
installed:
# swlist IGNITE
Then create the ISO with the mkisofs command. The example below creates an ISO file in
/root/user1.iso that contains the contents of the /home/user1 directory.
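The invocation itself is missing from the text; a plausible form (mkisofs writes the image named by -o from the contents of the named directory; check mkisofs options on your system):

```
# mkisofs -o /root/user1.iso /home/user1
```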
HP’s Loopback File System (LOFS) allows a file hierarchy to appear under
multiple mount points simultaneously, without creating symbolic links
Student Notes
The Loopback Filesystem (LOFS) allows the same file hierarchy to appear
in multiple places. Traditionally, this was accomplished via symbolic links:
# ln -s /data /opt/data
# ls /data /opt/data
/data:
d1 d2 d3
/opt/data:
d1 d2 d3
HP-UX provides a more elegant solution via the LOFS file system:
# mkdir /opt/data
# mount -F lofs /data /opt/data
# ls /data /opt/data
/data:
d1 d2 d3
/opt/data:
d1 d2 d3
In order to make an LOFS file system accessible after every reboot, add it to the
/etc/fstab file:
# vi /etc/fstab
/data /opt/data lofs defaults 0 0
HP’s MemFS product provides a fast memory-based file system for storing and
managing temporary files without incurring disk I/O
Student Notes
Writing data to and reading data from physical disks are resource intensive operations;
accessing data in memory is typically much faster.
The HP-UX “memory-based file system” product (MemFS) allows you to store directories and
files in memory rather than on disk. Users can access files and directories in a MemFS file
system as they would files and directories in an HFS or JFS file system, though the internal
implementation is quite different from disk-based file systems.
MemFS typically provides extremely high throughput. However, since MemFS files are
stored in memory rather than on disk, as soon as you unmount a MemFS file system or
reboot, the files and directories in the file system disappear. MemFS is most appropriate for
application file systems containing temporary files.
MemFS was first introduced in 11i v2 and is now supported in 11i v3, too. You can download
the 11i v2 version of the product from http://software.hp.com. In 11i v3, MemFS is
included on the operating environment installation DVD. Use swlist to verify that you
installed the product.
# swlist MemFS
# Initializing...
# Contacting target "myhost"...
#
# Target: myhost:/
#
# MemFS B.11.31.01 Memory File System
MemFS.MemoryFSKern B.11.31.01 Memory File System (MemFS) Kernel
Modules
MemFS.MemoryFSCmd B.11.31.01 Memory File System (MemFS) Commands
# mkdir /data
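The mount command belonging to this step is absent from the text; a sketch, mirroring the syntax of the /etc/fstab entry shown below (the memfs placeholder in the special-device field and the size/ninode values are illustrative; see the MemFS documentation for exact syntax):

```
# mount -F memfs -o size=1g,ninode=1024 memfs /data
```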
The size= option specifies the maximum file system size. If the option is omitted, or if you
specify size=0, the size of the file system is limited only by the available swap space.
Specifying a size ensures that the file system does not grow beyond that limit, but does not
guarantee that the full amount will be available, since free space on MemFS file systems
depends on the swap space available at that instant.
The ninode= option specifies the maximum number of files allowed in the file system.
If the option is omitted, or if you specify ninode=0, the number of files is limited only by the
amount of memory made available to MemFS by the memfs_metamax kernel parameter.
After initially mounting a MemFS file system, you can change the size and ninode mount
options at any time via the -o remount option.
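For example, to grow the limits of a MemFS file system mounted on /data without unmounting it (the new values are illustrative; the memfs placeholder device is an assumption):

```
# mount -F memfs -o remount,size=2g,ninode=2048 memfs /data
```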
To ensure that the MemFS file system remounts after every reboot, add the file system to the
/etc/fstab file.
# vi /etc/fstab
memfs /data memfs size=1g,ninode=1024 0 0
To learn more about MemFS, see the Memory File System (MemFS) 2.0 for HP-UX 11i v3
Administrator's Guide on http://docs.hp.com.
Directions
Record the commands used to complete the tasks below, and answer all of the questions.
/dev/vg01/swapvol
/dev/vg01/datavol
If you already have these logical volumes, you can skip ahead to Part 2 of the lab.
Otherwise, create an LVM v2.2 volume group called vg01 with your spare disk. Specify
maximum volume group size 1TB with a 4MB extent size. Create two 32MB logical
volumes in vg01 called datavol and swapvol.
2. List at least two reasons why it may be beneficial to configure multiple file systems rather
than one file system containing all files and directories.
3. Does your lab system have HFS file systems, JFS file systems, or both types of file
systems?
4. Name at least one advantage of using VxFS file systems rather than HFS.
5. Use the following command to determine if you have the optional OnlineJFS product
installed. The product name suffix varies somewhat depending on the JFS version, so
include an asterisk wildcard character on the end of the product name.
# swlist OnlineJFS*
6. If you execute the following commands, in which logical volume will each file be
physically stored?
# touch /stand/test1
# touch /etc/test2
7. The chapter discussed several different file system components. Match each component
below with the appropriate description.
inode = ___ b. Records a file system’s type, size, and other attributes
3. Use mkfs -m to verify the new file system and answer the questions below.
4. Execute mount -v. Why doesn't the new file system appear in the mount table?
6. Mount the file system. Verify that the file system successfully mounted.
7. Add the new file system to /etc/fstab to ensure that it remounts automatically after
every reboot. Specify backup frequency 0 and fsck check order 3.
9. Execute mount -a to mount all of the file systems configured in /etc/fstab. Watch
the resulting messages carefully. You should see several error messages indicating that
/dev/vg00/lvol1 and several other file systems are already mounted. Do the
mount -a output messages offer any indication that your new file systems were
successfully mounted?
10. Execute mount -a a second time and note the output messages again. Why did
mount -a mention your new file system in its output this time, but not when you
executed mount -a in the previous exercise?
11. Execute mount -v to verify that your file system is mounted. What other information
can you glean from the mount -v output about your mounted file systems? List three
fields presented in the mount -v output.
You may notice a lost+found directory in your new file system. newfs creates this
directory for you automatically. The fsck utility, which may be used to repair file system
corruption, moves irreparable files to the lost+found directory. This directory will be
discussed in a later chapter.
2. Can you access files in a file system that is unmounted? Try it!
# umount /data
# ls /data
3. Can you copy files to the mount point directory while the file system is unmounted? Try
it!
4. What happens if you mount a file system on a mount point directory that already contains
files? Try it! Remount the file system and list the contents of /data.
# mount /data
# ls /data
Are the /data/d* files visible? Are the /data/r* files visible?
5. Can you unmount a file system that is still in use? cd to /data. Try to unmount the file
system while /data is your present working directory.
# cd /data
# umount /data
umount: cannot unmount /dev/vg01/datavol : Device busy
umount: return error 1.
What happens?
6. Before you can unmount a file system, you will have to kill all processes accessing the file
system. HP-UX provides a command to solve this very problem. Try the following
command:
Each entry in the fuser output lists the PID of a process accessing the file system, a
single letter code indicating how the process is using the file system ("c" indicates that a
user has changed to a directory in the file system), and the name of the user that owns
the offending process.
7. Adding the -k option causes fuser to kill the offending processes. Try it!
What happens?
# umount /data
# mount /data
10. What happens if you accidentally newfs a device containing a mounted file system? Try
it!
# newfs /dev/vg01/rdatavol
11. What happens if you accidentally newfs a device containing an unmounted file system?
Try it!
# umount /data
# newfs /dev/vg01/rdatavol
# mount /data
# ls /data
13. One last experiment: Can you unmount all of your file systems? Execute umount -a
and explain the result.
# umount -a
2. Create a /dvd mount point and mount the /labs/echoapp.iso ISO file. Do not add
the file system to /etc/fstab.
4. Create a /users mount point directory and mount /home as an LOFS file system. Add
the file system to /etc/fstab.
5. List the contents of /home and /users. Create a /home/user26 directory then list the
contents of /home and /users again. What is the advantage of an LOFS file system?
6. Create a /var/opt/myapp/tmp mount point. Mount a MemFS file system on the mount
point with maximum file system size 100MB.
# cp /usr/bin/m* /var/opt/myapp/tmp
# ls /var/opt/myapp/tmp
8. Unmount the file system, then remount it. Are the files still there?
10. Unmount your ISO, LOFS, and MemFS file systems. If you added any ISO, LOFS, or
MemFS entries to /etc/fstab, remove them.
A similar Disks and File Systems functional area exists in sam in earlier versions of
HP-UX.
Directions
Record the commands used to complete the tasks below, and answer all of the questions.
/dev/vg01/swapvol
/dev/vg01/datavol
If you already have these logical volumes, you can skip ahead to Part 2 of the lab.
Otherwise, create an LVM v2.2 volume group called vg01 with your spare disk. Specify
maximum volume group size 1TB with a 4MB extent size. Create two 32MB logical
volumes in vg01 called datavol and swapvol.
Answer:
# pvcreate -f /dev/rdisk/diska
# vgcreate -V 2.2 -S 1t -s 4 vg01 /dev/disk/diska
# lvcreate -L 32 -n swapvol vg01
# lvcreate -L 32 -n datavol vg01
Answer:
# mount -v
The number of file systems may vary, but there should be at least eight.
2. List at least two reasons why it may be beneficial to configure multiple file systems rather
than one file system containing all files and directories.
Answer:
The administrator can allocate a fixed amount of disk space to each file system to ensure
that no single file system is allowed to monopolize an entire disk. The administrator
might, for instance, allocate 1GB to the /tmp file system. This ensures that temporary
files under /tmp can use at most 1GB of disk space; remaining disk space could be
preserved for other file systems.
Each file system may be tuned independently. There are a number of parameters
associated with each file system that can significantly affect system performance. It may
be beneficial to optimize some file systems for storage of large files, while others are
optimized for storage of smaller files.
File system maintenance tasks may be performed on one file system, while other file
systems remain accessible to users.
3. Does your lab system have HFS file systems, JFS file systems, or both types of file
systems?
Answer:
# mount -v
If your lab system is an Integrity server, the file systems are probably VxFS. If your lab
system is a PA-RISC server, then the /stand file system should be HFS, and all others
should be VxFS.
4. Name at least one advantage of using VxFS file systems rather than HFS.
Answer:
VxFS supports a variety of online operations that can be performed on a mounted file
system.
5. Use the following command to determine if you have the optional OnlineJFS product
installed. The product name suffix varies somewhat depending on the JFS version, so
include an asterisk wildcard character on the end of the product name.
Answer:
# swlist OnlineJFS*
OnlineJFS enables the administrator to extend, reduce, and defragment file systems
without unmounting.
6. If you execute the following commands, in which logical volume will each file be
physically stored?
# touch /stand/test1
# touch /etc/test2
Answer:
/stand/test1 is stored in /dev/vg00/lvol1, the logical volume mounted on /stand;
/etc/test2 is stored in /dev/vg00/lvol3, since /etc is part of the / file system.
7. The chapter discussed several different file system components. Match each component
below with the appropriate description.
inode = ___ b. Records a file system’s type, size, and other attributes
Answer:
superblock = b
inode = a
intent log = d
directory = c
Answer:
Answer:
# newfs /dev/vg01/rdatavol
Answer:
The output from newfs suggests that the file system type is VxFS.
The /etc/default/fs configuration file defines the default file system type:
# cat /etc/default/fs
3. Use mkfs -m to verify the new file system and answer the questions below.
Answer:
# mkfs -m /dev/vg01/rdatavol
a. 1024 bytes
b. 1024KB
c. Yes.
4. Execute mount -v. Why doesn't the new file system appear in the mount table?
Answer:
# mount -v
The new file system doesn’t appear since it hasn’t been mounted yet.
Answer:
# mkdir /data
6. Mount the file system. Verify that the file system successfully mounted.
Answer:
7. Add the new file system to /etc/fstab to ensure that it remounts automatically after
every reboot. Specify backup frequency 0 and fsck check order 3.
Answer:
# vi /etc/fstab
/dev/vg01/datavol /data vxfs defaults 0 3
Answer:
# umount /data
9. Execute mount -a to mount all of the file systems configured in /etc/fstab. Watch
the resulting messages carefully. You should see several error messages indicating that
/dev/vg00/lvol1 and several other file systems are already mounted. Do the
mount -a output messages offer any indication that your new file systems were
successfully mounted?
Answer:
# mount -a
The output from mount -a notes that several other file systems are "already mounted",
but there is no mention of the two new file systems. Oftentimes in UNIX, in the absence of
an error message you may assume that a command has succeeded. mount -a
exemplifies this philosophy. Because mount -a didn't complain about your new file
systems, you can assume that they mounted successfully.
10. Execute mount -a a second time and note the output messages again. Why did
mount -a mention your new file system in its output this time, but not when you
executed mount -a in the previous exercise?
Answer:
In the previous question, mount -a successfully mounted the new file systems as they
weren't yet mounted. Executing mount -a a second time generates an error message
because the new file system is already mounted.
11. Execute mount -v to verify that your file system is mounted. What other information
can you glean from the mount -v output about your mounted file systems? List three
fields presented in the mount -v output.
Answer:
Device name
Mount point
Mount options
Mount time
You may notice a lost+found directory in your new file system. newfs creates this
directory for you automatically. The fsck utility, which may be used to repair file system
corruption, moves irreparable files to the lost+found directory. This directory will be
discussed in a later chapter.
2. Can you access files in a file system that is unmounted? Try it!
# umount /data
# ls /data
Answer:
After unmounting the file system, the mount point is still there, but the files in the file
system are no longer visible under the mount point. The files in the file system cannot be
accessed again until the file system is remounted.
3. Can you copy files to the mount point directory while the file system is unmounted? Try
it!
Answer:
This should work, but as you will discover in the next step, the files were copied to the
/data mount point in the /dev/vg00/lvol3 root file system rather than the
/dev/vg01/datavol file system!
4. What happens if you mount a file system on a mount point directory that already contains
files? Try it! Remount the file system and list the contents of /data.
# mount /data
# ls /data
Are the /data/d* files visible? Are the /data/r* files visible?
Answer:
The /data/d* files are back, but the /data/r* files are no longer accessible.
5. Can you unmount a file system that is still in use? cd to /data. Try to unmount the file
system while /data is your present working directory.
# cd /data
# umount /data
umount: cannot unmount /dev/vg01/datavol : Device busy
umount: return error 1.
What happens?
Answer:
The umount fails. You can’t unmount a file system that is still in use.
6. Before you can unmount a file system, you will have to kill all processes accessing the file
system. HP-UX provides a command to solve this very problem. Try the following
command:
Each entry in the fuser output lists the PID of a process accessing the file system, a
single letter code indicating how the process is using the file system ("c" indicates that a
user has changed to a directory in the file system), and the name of the user that owns
the offending process.
7. Adding the -k option causes fuser to kill the offending processes. Try it!
What happens?
Answer:
Since your shell is the process using the file system, your terminal session should die.
# umount /data
Answer:
# mount /data
10. What happens if you accidentally newfs a device containing a mounted file system? Try
it!
# newfs /dev/vg01/rdatavol
Answer:
11. What happens if you accidentally newfs a device containing an unmounted file system?
Try it!
# umount /data
# newfs /dev/vg01/rdatavol
Answer:
# mount /data
# ls /data
Answer:
Unfortunately, the files are gone. newfs gives no warning when its target contains an
unmounted file system, so be very careful!
13. One last experiment: Can you unmount all of your file systems? Execute umount -a
and explain the result.
# umount -a
Answer:
# umount -a
umount: cannot unmount /dev/vg00/lvol6 : Device busy
umount: cannot unmount /dev/vg00/lvol4 : Device busy
umount: cannot unmount /dev/vg00/lvol7 : Device busy
umount: cannot unmount /dev/vg00/lvol8 : Device busy
umount: cannot unmount /dev/vg00/lvol3 : Device busy
Though a couple of file systems may successfully unmount, most will fail because they are
in use by system daemons.
Answer:
# mount -a
Answer:
# swlist ISOIMAGE-ENH
# kcmodule cdfs=loaded fspd=loaded
2. Create a /dvd mount point and mount the /labs/echoapp.iso ISO file. Do not add
the file system to /etc/fstab.
Answer:
# mkdir /dvd
# mount -F cdfs /labs/echoapp.iso /dvd
Answer:
# mount -v
# ls /dvd
4. Create a /users mount point directory and mount /home as an LOFS file system. Add
the file system to /etc/fstab.
Answer:
# mkdir /users
# mount -F lofs /home /users
# vi /etc/fstab
/home /users lofs defaults 0 0
5. List the contents of /home and /users. Create a /home/user26 directory then list the
contents of /home and /users again. What is the advantage of an LOFS file system?
Answer:
# ls /home /users
# mkdir /home/user26
# ls /home /users
The contents of the two directories should be identical. LOFS makes it possible to access
a directory or file system’s content via two different mount points.
6. Create a /var/opt/myapp/tmp mount point. Mount a MemFS file system on the mount
point with maximum file system size 100MB.
Answer:
# mkdir -p /var/opt/myapp/tmp
# mount -F memfs -o size=100m /var/opt/myapp/tmp
# cp /usr/bin/m* /var/opt/myapp/tmp
# ls /var/opt/myapp/tmp
8. Unmount the file system, then remount it. Are the files still there?
Answer:
# umount /var/opt/myapp/tmp
# mount /var/opt/myapp/tmp
# ls /var/opt/myapp/tmp
The files disappear as soon as you unmount the MemFS file system.
10. Unmount your ISO, LOFS, and MemFS file systems. If you added any ISO, LOFS, or
MemFS entries to /etc/fstab, remove them.
Answer:
# umount /dvd
# umount /users
# umount /var/opt/myapp/tmp
# vi /etc/fstab
A similar Disks and File Systems functional area exists in sam in earlier versions of
HP-UX.