
The normal procedure for installing the IBM Virtual I/O Server (VIOS) is to use the delivered DVD.

This can become an issue, for example if the system is located at a different site, in a different building, on a different street, etc., or if you have to install more than one POWER5/5+ server. The solution is to install the VIOS using an installation server, preferably a Linux-based one.

Prerequisites
The following section assumes that you have a Linux-based installation server up and running, meaning that tftp, NFS and DHCP work for installation purposes.

Thanks!
My special thanks go to my colleague Bernhard Zeller, IBM Germany, for working out how this works!

Preparing the installation sources


Just like for any other network installation, you must provide the installation sources on the installation server and make them available to the clients. These sources are located on the installation DVD.
Either load the disc or mount the ISO image and you will get something similar to the following.

bc1-mms:/ # mount -o loop -o map=off -o ro vios_1.2.1.iso /mnt


bc1-mms:/ # ll /mnt
total 46
drwxr-xr-x 11 root root 2048 Jan 11 2006 .
drwxr-xr-x 24 root root 632 Jun 23 15:22 ..
-rw-r--r-- 1 root root 49 Jan 11 2006 .Version
-rw-r--r-- 1 root root 27 Jan 10 2006 OSLEVEL
-r-xr-xr-x 1 root root 8985 Jan 11 2006 README.vios
drwxr-xr-x 3 root root 2048 Jan 10 2006 RPMS
-rw-r--r-- 1 root root 5133 Jan 10 2006 bosinst.data
-rw-r--r-- 1 root root 9083 Jan 10 2006 image.data
drwxr-xr-x 3 root root 2048 Jan 10 2006 installp
drwxr-xr-x 3 root root 2048 Jan 10 2006 ismp
-rw-r--r-- 1 root root 60 Jan 11 2006 mkcd.data
drwxr-xr-x 4 root root 2048 Jan 11 2006 nimol
drwxr-xr-x 3 root root 2048 Jan 11 2006 ppc
drwxr-xr-x 4 root root 2048 Jan 10 2006 root
drwxr-xr-x 3 root root 2048 Jan 10 2006 sbin
drwxr-xr-x 3 root root 2048 Jan 10 2006 udi
drwxr-xr-x 8 root root 2048 Jan 10 2006 usr
The interesting part is located under nimol/ioserver_res - in our example it is /mnt/nimol/ioserver_res.
This subdirectory includes the bootkernel (booti.chrp.mp.ent.Z), a control file for the installation (bosinst.data), a system backup (mksysb) and a so-called SPOT (ispot.tar.Z), which is a mini filesystem required during the installation.
Copy the content of nimol/ioserver_res into a directory of your choice. To be consistent, I am using the /export directory on my installation server as the top directory for all sources.

bc1-mms:/ # mkdir /export/vios


bc1-mms:/ # cp /mnt/nimol/ioserver_res/* /export/vios
bc1-mms:/ # ll /export/vios
total 597824
drwxr-xr-x 2 root root 176 Aug 14 12:04 .
drwxrwxr-x 22 nobody nobody 544 Aug 14 12:04 ..
-rw-r--r-- 1 root root 11670966 Aug 14 12:04 booti.chrp.mp.ent.Z
-rw-r--r-- 1 root root 951 Aug 14 12:04 bosinst.data
-rw-r--r-- 1 root root 34033393 Aug 14 12:04 ispot.tar.Z
-rw-r--r-- 1 root root 565862400 Aug 14 12:04 mksysb
As a next step, unpack the SPOT ispot.tar.Z in that directory.

bc1-mms:/ # cd /export/vios
bc1-mms:/export/vios # tar -xzf ./ispot.tar.Z
bc1-mms:/export/vios # ll
total 597824
drwxr-xr-x 3 root root 200 Aug 14 12:12 .
drwxrwxr-x 22 nobody nobody 544 Aug 14 12:04 ..
drwxr-xr-x 3 root root 72 Jan 10 2006 SPOT
-rw-r--r-- 1 root root 11670966 Aug 14 12:04 booti.chrp.mp.ent.Z
-rw-r--r-- 1 root root 951 Aug 14 12:04 bosinst.data
-rw-r--r-- 1 root root 34033393 Aug 14 12:04 ispot.tar.Z
-rw-r--r-- 1 root root 565862400 Aug 14 12:04 mksysb
Now it is time to copy the bootkernel to your /tftpboot directory.

bc1-mms:/export/vios # cp booti.chrp.mp.ent.Z /tftpboot/


bc1-mms:/export/vios # cd /tftpboot/
bc1-mms:/tftpboot # gunzip booti.chrp.mp.ent.Z
bc1-mms:/tftpboot # ll
total 59495
...
-rw-r--r-- 1 root root 12190720 Aug 14 12:18 booti.chrp.mp.ent
...
Now you'll have two choices:

Keep the name of the bootkernel and point each client to that kernel.
Use a unique name for each system you want to install.
The first option works - the system loads and starts the kernel - but then it gets a little bit tricky. As a next step the system or partition will try to load a file called <bootkernelname>.info. This .info file includes all required information about which host is the NFS server, which NFS directories to mount, which files to use for the installation, and things like the client identity (i.e. hostname etc.). As you might assume at this point, this little .info file must be unique for each client/system/LPAR you want to install! For example, if you want to install one VIOS on the systems called bc1-js21-1-vio and bc1-js21-2-vio, you must have two .info files - one called bc1-js21-1-vio.info and one called bc1-js21-2-vio.info. If you decide to keep the name of the bootkernel as it is, you must change the contents of the booti.chrp.mp.ent.info file each time you want to install a different system. In my opinion, this is not very convenient.

You can, of course, create a copy of the kernel and rename it, but that would be a waste of disk space. A much easier way is to create a symbolic link for each system pointing to that one kernel.

bc1-mms:/tftpboot # ln -s booti.chrp.mp.ent bc1-js21-1-vio


bc1-mms:/tftpboot # ln -s booti.chrp.mp.ent bc1-js21-2-vio
bc1-mms:/tftpboot # ll
...
-rw-r--r-- 1 root root 12190720 Aug 14 12:18 booti.chrp.mp.ent
lrwxrwxrwx 1 root root 6 Aug 17 12:52 bc1-js21-1-vio -> booti.chrp.mp.ent
lrwxrwxrwx 1 root root 6 Aug 17 12:52 bc1-js21-2-vio -> booti.chrp.mp.ent
...
Now you'll have two links pointing to one "real" bootkernel.
Note...
Please note that the name you give this bootkernel must match the name used in the DHCP configuration!

Preparing DHCP
Now you must tell DHCP how to react to a boot request from a specific system - i.e. which bootkernel to provide. Open the file /etc/dhcpd.conf with an editor of your choice and create a client stanza for each system - similar to the one below. Once again, be careful that the filename reflects the naming conventions in your /tftpboot directory.

...
#IBM-VIO
host bc1-js21-1-vio.stuttgart.de.ibm.com {
hardware ethernet 00:11:25:c9:1a:ed;
filename "/bc1-js21-1-vio";
fixed-address 9.154.2.112;
next-server 9.154.2.86;
}
...
Remember...
Remember to restart the DHCP server in order to activate the changes.
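How you restart the DHCP server depends on your distribution. On a SysV-init based system such as the SLES server used in this example, one of the following sketches should do (rcdhcpd is simply the SLES shortcut for the init script):

bc1-mms:/ # rcdhcpd restart
bc1-mms:/ # /etc/init.d/dhcpd restart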

Preparing NFS
Because the installation of the VIOS works only over NFS, you must add a line to /etc/exports.

#Export File
...
/export/vios *(ro,insecure,no_root_squash,sync)
...

Note...
Please note the option insecure. This is mandatory; otherwise the VIOS partition will not be able to mount the NFS share! In addition, use no_root_squash because the system will try to mount the NFS shares as the root user!

Ah, don't forget to tell the NFS server to reload the configuration file!
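A sketch of how to do that on the example server: exportfs -ra re-exports everything listed in /etc/exports, and exportfs -v lets you verify the result.

bc1-mms:/ # exportfs -ra
bc1-mms:/ # exportfs -v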

Preparing syslogd
Wouldn't it be nice to see what happens during the installation of the VIOS? Personally, I like the idea, and if you, too, want to know what's going on, it is useful to modify the configuration of the syslog daemon.

First allow syslogd to receive remote messages by editing /etc/sysconfig/syslog file.


# /etc/sysconfig/syslog
...
SYSLOGD_PARAMS=-r
...

Note...
Please note that SYSLOGD_PARAMS could also be called SYSLOGD_OPTIONS depending on the
Linux distribution you use.
Next modify the configuration of the syslogd by editing /etc/syslog.conf.

# /etc/syslog.conf
...
#local2,local3.* -/var/log/localmessages
local3.* -/var/log/localmessages
local4,local5.* -/var/log/localmessages
local6,local7.* -/var/log/localmessages
local2.* -/var/log/nimol.log
...
And finally restart syslogd to activate the changes.
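Again, the exact command depends on your distribution; on a SysV-init based system a sketch would be:

bc1-mms:/ # /etc/init.d/syslog restart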

Creating the .info file


As stated above, the partition you want to become a VIOS will boot the kernel image you've specified in the DHCP configuration file, and after this kernel has been loaded successfully the partition will try to load a <bootkernelname>.info file to get the required information for the installation. In our example let's assume we want to install bc1-js21-1-vio. After the system has loaded the kernel (remember, it is a symbolic link) it will look for a bc1-js21-1-vio.info file within /tftpboot.

To create this file simply use vi /tftpboot/bc1-js21-1-vio.info. The content should read as follows.

#----- Network Install Manager Info File -----#


export NIM_SERVER_TYPE=linux
export NIM_SYSLOG_PORT=514
export NIM_SYSLOG_FACILITY=local2
export NIM_NAME=bc1-js21-1-vio
export NIM_HOSTNAME=bc1-js21-1-vio
export NIM_CONFIGURATION=standalone
export NIM_MASTER_HOSTNAME=bc1-mms
export REMAIN_NIM_CLIENT=no
export RC_CONFIG=rc.bos_inst
export NIM_BOSINST_ENV="/../SPOT/usr/lpp/bos.sysmgt/nim/methods/c_bosinst_env"
export NIM_BOSINST_RECOVER="/../SPOT/usr/lpp/bos.sysmgt/nim/methods/c_bosinst_env -a hostname=bc1-js21-1-vio"
export NIM_BOSINST_DATA=/NIM_BOSINST_DATA
export SPOT=bc1-mms:/export/vios/SPOT/usr
export NIM_BOS_IMAGE=/NIM_BOS_IMAGE
export NIM_BOS_FORMAT=mksysb
export NIM_HOSTS=" 9.154.2.116:bc1-js21-1-vio 9.154.2.86:bc1-mms "
export NIM_MOUNTS=" bc1-mms:/export/vios/bosinst.data:/NIM_BOSINST_DATA:file bc1-mms:/export/vios/mksysb:/NIM_BOS_IMAGE:file "
export ROUTES=" default:0:9.154.2.1 "
Adjust the file to match your environment - which means change the values of the following lines:

NIM_NAME and NIM_HOSTNAME
This is the DNS hostname of the system you plan to install.
NIM_MASTER_HOSTNAME
This is the DNS hostname of your Linux installation server.
NIM_BOSINST_RECOVER
Adjust the option hostname= to the name of the system you plan to install.
SPOT
Change to the NFS share where you've extracted the SPOT.
NIM_HOSTS
Change the IP addresses and hostnames.
NIM_MOUNTS
Adjust the NFS mount points appropriately.
ROUTES
Finally, tell which system is your default router.

Note...
Please note that each parameter must be written on a single line. Do not use backslashes to make the file more human-readable - the system you want to install could get confused and abort the installation.

After you've made all necessary changes don't forget to save the file!

Starting the installation


Now that everything is in place you can start the installation of the VIOS. If you want to be sure everything works, restart the services tftp, nfs and dhcpd if not already done. You can verify that everything works by mounting the NFS directory from any system on your network and by trying to load the installation image manually using a tftp client.
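For example, a quick check run from the installation server itself could look like the following (hostnames, addresses and filenames are the ones from the example setup above):

bc1-mms:/ # mkdir /tmp/viocheck
bc1-mms:/ # mount -o ro bc1-mms:/export/vios /tmp/viocheck
bc1-mms:/ # ls /tmp/viocheck
bc1-mms:/ # umount /tmp/viocheck
bc1-mms:/ # tftp 9.154.2.86
tftp> binary
tftp> get bc1-js21-1-vio /tmp/bootkernel.check
tftp> quit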
Assuming everything works as expected, start the system and boot into the SMS menu.

Note...
If you've not been able to determine the correct MAC address of the LAN adapter yet, this is the point where you can find it. Enter Setup Remote IPL and you'll find the MAC addresses (called Hardware Address). Use these addresses and include the desired one in your dhcpd.conf file. Don't forget to restart the DHCP server!

Initiate a network boot from the SMS using:

Selection 5. Select Boot Options


Selection 1. Select Install/Boot Device
Selection 6. Network
Selection 1 or 2. Physical Device
Depending on which LAN adapter should be used
Selection 2. Normal Mode Boot
Selection 1. Yes
Now the system sends a BOOTP request and will get an answer - you will be able to monitor the installation in /var/log/messages.

...
Aug 15 14:44:42 bc1-mms dhcpd: BOOTREQUEST from 00:11:25:c9:17:d9 via eth0
Aug 15 14:44:42 bc1-mms dhcpd: BOOTREPLY for 9.154.2.116 to op710-1-vio.stuttgart.de.ibm.com
(00:11:25:c9:17:d9) via eth0
Aug 15 14:45:07 bc1-js21-2-vio nimol:,info=LED 610: mount -r bc1-mms:/export/vios/SPOT/usr
/SPOT/usr,
Aug 15 14:45:07 bc1-mms rpc.mountd: authenticated mount request from op710-1-
vio.stuttgart.de.ibm.com:659 for /export/vios/SPOT/usr (/export/vios)
Aug 15 14:45:07 bc1-js21-2-vio nimol:,info=,
Aug 15 14:45:08 bc1-js21-2-vio nimol:,-S,booting,op710-1-vio,
Aug 15 14:45:08 bc1-js21-2-vio nimol:,info=LED 610: mount bc1-mms:/export/vios/bosinst.data
/NIM_BOSINST_DATA,
Aug 15 14:45:08 bc1-mms rpc.mountd: authenticated mount request from op710-1-
vio.stuttgart.de.ibm.com:703 for /export/vios/bosinst.data (/export/vios)
Aug 15 14:45:08 op710-1-vio nimol:,info=LED 610: mount bc1-mms:/export/vios/mksysb
/NIM_BOS_IMAGE,
Aug 15 14:45:08 bc1-mms rpc.mountd: authenticated mount request from op710-1-
vio.stuttgart.de.ibm.com:713 for /export/vios/mksysb (/export/vios)
Aug 15 14:45:08 bc1-js21-2-vio nimol:,info=,
Aug 15 14:45:15 bc1-js21-2-vio nimol:,-R,success,op710-1-vio,
Aug 15 14:45:15 bc1-js21-2-vio nimol:,info=extract_data_files,
Aug 15 14:45:15 bc1-mms rpc.mountd: authenticated unmount request from op710-1-
vio.stuttgart.de.ibm.com:659 for /export/vios/bosinst.data (/export/vios)
Aug 15 14:45:15 bc1-js21-2-vio nimol:,info=query_disks,
Aug 15 14:45:16 bc1-js21-2-vio nimol:,info=extract_diskette_data,
Aug 15 14:45:17 bc1-js21-2-vio nimol:,info=setting_console,
Aug 15 14:45:17 bc1-js21-2-vio nimol:,info=initialization,
Aug 15 14:45:18 bc1-js21-2-vio nimol:,info=verifying_data_files,
Aug 15 14:45:24 bc1-js21-2-vio nimol:,info=,
...
ENJOY!

Things that can go wrong


Well, first of all it is useful to use a recent VIOS release. I've seen installations that would not proceed after the kernel had been loaded because an older version of the VIOS was used. Version 1.2.1 was used during all tests and it worked on POWER5/5+ servers as well as on JS21 blade servers.
In addition, the most likely cause of a failure lies in the network setup of the installation server. Be sure that tftp and NFS are configured correctly! If there's a problem you can find hints in the /var/log/messages file.

If you don't get a DHCP/BOOTP response, be sure that the DHCP server is configured correctly - i.e. that you are using the correct LAN adapter (check the MAC address in /etc/dhcpd.conf). Note that most network setups do not allow a BOOTP request to travel across subnet boundaries. Either check the setup of your firewalls/routers or make sure that the installation server is in the same subnet as the systems you want to install.

I/O Virtualization is one of the founding pillars of PowerVM. Virtual IO Server (VIOS) is a software
appliance in PowerVM that facilitates virtualization of storage and network resources. Physical
resources are associated with the Virtual I/O Server and these resources are shared among multiple
client logical partitions (a.k.a. LPARs or VMs).

Since each Virtual I/O Server partition owns physical resources, any disruptions in sharing the
physical resource by the Virtual I/O Server would impact the serviced LPARs. To ensure client LPARs
have uninterrupted access to their I/O resources, it is necessary to set up a fully redundant
environment. Redundancy options are available to remove the single point of failure anywhere in the
path from client LPAR to its resource.

Fundamentally, the primary reasons for recommending VIOS and I/O redundancy include:

Protection against unscheduled outages due to physical device failures or natural events
Outage avoidance in case of VIOS software issue (i.e. including VIOS crash)
Improved serviceability for planned outages
Future hardware expansion
Protection against unscheduled outages due to human intervention
Role of Dual Virtual I/O Server
A dual VIOS configuration is widely employed and recommended for enterprise environments. It allows the client LPARs to have multiple routes (two or more) to their resources. In this configuration, if one of the routes is not available, the client LPAR can still reach its resources through another route.

These multiple paths can be leveraged to set up highly available I/O virtualization configurations, and
it can provide multiple ways for building high-performance configurations. All this is achieved with
the help of advanced capabilities provided by PowerVM (VIOS and PHYP) and the operating systems
on the client LPARs.
Both HMC and NovaLink allow configuration of dual Virtual I/O Server on the managed systems.

The remainder of this blog focuses on approaches to achieve virtual storage redundancy for client
LPARs.

Enhanced storage availability


Below are the details on various possible configurations for providing enhanced storage availability to client partitions. PowerVM offers three primary modes for virtualizing storage to client LPARs:

Virtual SCSI (vSCSI)


N-Port ID Virtualization (NPIV)
Shared Storage Pool (SSP)
Redundancy in Virtual SCSI (vSCSI)
vSCSI allows the Virtual I/O Server to drive the client LPARs' I/O to the physical storage devices. For more information, see Virtual SCSI on the IBM Knowledge Center.

Protection against Physical Adapter Failure


Figure 1

The basic solution against physical adapter failure is to assign two (or more) physical adapters, preferably from different I/O drawers, to the Virtual I/O Server. Storage needs to be made accessible via both physical adapters. The MPIO capability on the VIOS can be leveraged to configure the additional physical paths in fail-over mode. Figure 1 shows storage connectivity to the client LPAR made available via both paths in the VIOS.

To effectively leverage the capacity of both adapters, this configuration can be fine-tuned with specific Multi-Path I/O (MPIO) settings to share the load across these paths. This can result in better utilization of resources on the system.
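As a rough sketch using the underlying AIX device commands (hdisk4 is an assumed disk name, the supported attribute values depend on the multipath driver in use, and on the VIOS the padmin equivalent is chdev -dev ... -attr ...):

# lspath -l hdisk4
# lsattr -El hdisk4
# chdev -l hdisk4 -a algorithm=round_robin -a reserve_policy=no_reserve

Setting the algorithm to round_robin spreads the I/O across both physical adapters instead of leaving the second path idle, and no_reserve avoids SCSI reservations that would block the alternate paths.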

Protection against VIOS outage (planned or unplanned)


Figure 2

VIOS restart may be required during VIOS software updates, which can result in VIOS not being
available to service dependent LPARs while it is rebooting. A dual VIOS setup alleviates the loss of
storage access for any planned or unplanned VIOS outage scenarios.
In this kind of architecture, the client LPARs are serviced via two VIOS partitions (i.e. dual VIOS).
One VIOS acts as the primary server for all client requests and another VIOS acts as the
secondary/backup server. The backup server services the client only when the primary server is not
available to service the client requests. This kind of arrangement is achieved with the help of Storage
multi-pathing on client LPARs.

On the client LPARs running the AIX operating system, multi-pathing is achieved by using MPIO
Default Path Control Module (PCM). MPIO manages routing of I/O through available paths to a given
disk storage (Logical Unit). For more information on MPIO, see Multiple Path I/O on the IBM
Knowledge Center.
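On an AIX client using the default PCM, a minimal sketch of setting the path priority and path health checking could look like this (hdisk0, vscsi0 and vscsi1 are assumed device names; the values are examples only):

# lspath -l hdisk0
# chpath -l hdisk0 -p vscsi1 -a priority=2
# chdev -l hdisk0 -a hcheck_interval=60 -a hcheck_mode=nonactive -P

The path with the lower priority value is the preferred one, hcheck_interval/hcheck_mode let MPIO notice when a failed path comes back, and -P defers the change to the next reboot if the disk is currently in use.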

Protection against Disk/Storage Array Failures


Figure 3

The basic solution to protect against disk failures is to keep a mirrored copy of the disk data on another disk. This can be achieved with the mirroring functionality provided by the client operating system; in the case of AIX, disk mirroring is provided by the Logical Volume Manager (LVM). For high availability, each mirror copy should be located on a separate physical disk, using separate I/O adapters coming from different VIOS partitions. Furthermore, putting each disk into a separate disk drawer protects against data loss due to power failure.
Notes:

It is possible to access both the primary and the mirrored disks through a single VIOS.
A RAID array is another method to protect against disk failures.
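Coming back to LVM mirroring: for an AIX client, a minimal sketch of mirroring rootvg onto a second virtual disk (assuming hdisk0 is the original disk and hdisk1 is the disk served by the second VIOS) could look like this:

# extendvg rootvg hdisk1
# mirrorvg rootvg hdisk1
# bosboot -ad /dev/hdisk1
# bootlist -m normal hdisk0 hdisk1

bosboot and bootlist make sure the LPAR can also boot from the mirror copy.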
High redundancy system with dual VIOS
Figure 4

In order to achieve a highly redundant system, all the solutions discussed above are combined to
derive an end-to-end redundancy solution.

Here, the client LPAR sees two disks, one of which is used as a mirrored disk to protect against disk failure. Each disk seen on the client LPAR has paths from two different VIOS, which ensures protection in case of VIOS failure. Also, each VIOS has two physical adapters to provide redundancy in case of physical adapter failure. Though this arrangement is good from a redundancy perspective, it is certainly not the most efficient one, since one VIOS is used for backup purposes only and not fully utilized. To effectively utilize all the available VIO Servers, the VIOS load can be shared across them. This configuration is explained in the next section with the help of AIX client LPARs.

High redundancy on AIX client LPARs with dual VIOS


For effective utilization of the VIO Servers and their resources, we can create a system where one VIOS acts as the primary VIOS for one half of the serviced client LPARs and as the secondary VIOS for the other half. Similarly, the second VIOS acts as the primary VIOS for one half of the client LPARs and as the secondary VIOS for the other half. This arrangement is illustrated in Figure 5.

Figure 5

Here, we have two LPARs (LPAR 1 and LPAR 2) which are being serviced by VIOS 1 and VIOS 2. The client partition LPAR 1 uses VIOS 1 as the active path and VIOS 2 as the passive path to reach its Disk A. Similarly, LPAR 2 uses VIOS 2 as the primary path and VIOS 1 as the secondary path to reach Disk B. An important thing to note here is that for the mirrored disks the configuration is reversed: on LPAR 1 the active path is VIOS 2 and the passive path is VIOS 1 for mirrored disk A', and on LPAR 2 the active path is VIOS 1 and the passive path is VIOS 2. The active and passive/backup VIOS is designated based on the path priorities set for the disk's paths.

Using this configuration, we can shut down one of the VIOS for a scheduled maintenance and all
active clients can automatically access their disks through the backup VIOS. When the Virtual I/O
Server comes back online, no action is needed on the virtual I/O clients.
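A quick way to observe this on an AIX client is lspath: while one VIOS is down, the paths through it simply show up as Failed and recover on their own once it is back. Illustrative output with assumed device names:

# lspath -l hdisk0
Enabled hdisk0 vscsi0
Failed  hdisk0 vscsi1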

A common practice is to use mirroring in the client LPARs for the rootvg disks, while the datavg disks are protected by a RAID configuration provided by the storage array.

RAID stands for Redundant Array of Independent Disks and its design goal is to increase data
reliability and increase input/output (I/O) performance. When multiple physical disks are set up to use
the RAID technology, they are said to be in a RAID array. This array distributes data across multiple
disks but from the computer user and operating system perspective, it appears as a single disk.

Redundancy in NPIV
N_Port ID Virtualization (NPIV) is a method for virtualizing physical Fibre Channel adapter ports to have multiple virtual World Wide Port Names (WWPNs) and therefore multiple N_Port_IDs. Once all the applicable WWPNs are registered with the FC switch, each of these WWPNs can be used for SAN masking/zoning or LUN presentation.
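On the VIOS side, an NPIV mapping is simply an association between a virtual Fibre Channel host adapter and an NPIV-capable physical port. A minimal sketch in the padmin shell, with vfchost0 and fcs0 as assumed device names:

$ lsnports
$ vfcmap -vadapter vfchost0 -fcp fcs0
$ lsmap -all -npiv

lsnports shows which physical ports are NPIV capable, and lsmap -all -npiv verifies the resulting client mappings.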

More information on NPIV is available at IBM Knowledge center: Virtual Fibre Channel

Redundancy with single VIOS and dual VIOS


Similar to vSCSI, redundancy can be built with NPIV mode of virtualization. Details are shown in
Figure 6 below.

Figure 6

The figure above shows that redundancy against physical adapter failure can be achieved by adding one more physical Fibre Channel HBA, and redundancy against VIOS failure can be achieved by having a redundant path through another VIOS.

Note: As the storage is directly mapped from SAN for the client LPAR, in order to have protection
against physical adapter failures, all of the virtual WWPN/N_Port_IDs of the client LPAR should be
zoned/masked for the same storage on the SAN. Additional adapters and paths cannot guarantee
redundancy unless zoning is done properly. Similar to vSCSI, active and passive/failover path is
managed by multi-path software on client LPAR.

Subtle differences between vSCSI and NPIV based virtualization:


In vSCSI, multipathing software is typically controlling paths in both the client LPARs as well as the
Virtual I/O Servers.
In NPIV, multi-pathing is at the client level and VIO Server is largely a pass-through.
High redundancy system with dual VIOS
Generally, in the case of a dual VIOS redundancy setup, each VIOS is configured with at least two virtual Fibre Channel adapters, each backed by an independent physical Fibre Channel adapter. Each of these physical adapters is connected to a separate switch to provide redundancy against switch failure. Each client partition is configured with four virtual Fibre Channel adapters, two of which are mapped to virtual Fibre Channel adapters on one VIOS and the other two mapped to virtual Fibre Channel adapters on the other VIOS. Now the client will have four WWPNs and four N_Port_IDs. In order for the client LPAR to see the same storage from all these ports, zoning has to be done on the SAN. Multi-path software on the client LPAR takes care of routing I/O through a passive path if the active path fails.
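The client WWPNs that have to be zoned can be read on an AIX client directly from the virtual Fibre Channel adapters, for example (fcs0 is an assumed adapter name):

# lscfg -vl fcs0 | grep "Network Address"

Keep in mind that each virtual Fibre Channel client adapter actually carries a pair of WWPNs; zone both of them if Live Partition Mobility is planned.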

In the case of VIOS serving multiple client LPARs, the workload can be spread across all the available
VIO Servers and I/O adapters as shown in Figure 7 below.

Figure 7

As shown above, client LPAR 1 and LPAR 2 have paths from both VIO Servers, i.e. VIOS 1 and VIOS 2, to reach their respective storage disks/LUNs. LPAR 1 uses VIOS 1 as the active path and VIOS 2 as the passive path, while LPAR 2 uses VIOS 2 as the active path and VIOS 1 as the passive path. In this setup, if one VIOS is down for maintenance or the active path is unable to route the traffic, the multi-path software running on the client LPAR takes care of routing the I/O through the other available path.

One important thing to note in this configuration is that each VIOS has two paths through it and each
one of these paths is on a separate fabric. If there is a switch failure, the client will failover to the other
path in the same VIOS and not to the other VIOS.

Shared Storage Pool (SSP)


VIOS SSP provides shared storage virtualization across multiple systems through the use of shared storage and clustering of Virtual I/O Servers. It is an extension of PowerVM's existing storage virtualization technique using VIOS and vSCSI. SSP aggregates a heterogeneous set of storage devices into pools or tiers. Tiering provides administrators the ability to segregate user data/storage based on their desired requirements/criteria (a.k.a. service-level agreement (SLA)). Virtual devices are carved out of a pool/tier and mapped to client LPARs via the vSCSI mode of virtualization. All the redundancy configurations described above for vSCSI remain valid for SSP, as SSP presents the same standard vSCSI target interface to client LPARs. Figure 8 shows a dual Virtual I/O Server configuration across each CEC.

Figure 8

The figure above shows that a single storage pool spans multiple VIO Servers and multiple systems, thus enabling location transparency.
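As a rough sketch of the VIOS SSP command line (cluster, pool and device names here are assumptions, and the exact options can vary with the VIOS level), a two-node cluster with a pool and one client-visible LU might be created like this:

$ cluster -create -clustername demo_cl -repopvs hdisk2 -spname demo_sp -sppvs hdisk3 hdisk4 -hostname vios1
$ cluster -addnode -clustername demo_cl -hostname vios2
$ mkbdsp -clustername demo_cl -sp demo_sp 100G -bd lpar1_lu -vadapter vhost0

The last command carves a 100 GB logical unit out of the pool and maps it to a client through the usual vSCSI vhost adapter.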

More information on SSP is available at the following locations:

Redbook link: http://www.redbooks.ibm.com/redbooks/pdfs/sg247940.pdf
IBM Knowledge center: http://www-01.ibm.com/support/knowledgecenter/POWER7/p7hb1/iphb1clusterviossmit.htm?cp=POWER7%2F1-8-0-4-0-0-1
Failure Groups in SSP:
To tolerate disk failures in a pool/tier, SSP offers the creation of failure groups. Failure groups let users segregate disks into groups, possibly with different failure characteristics, and mirror the data across both groups. Each tier in the SSP can have a maximum of two failure groups. A failure group can be added to an SSP tier at any time. Figure 9 shows three tiers in the SSP, of which only two tiers are mirrored.

Figure 9
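On recent VIOS levels the failgrp command manages these groups; a minimal sketch (failure group and disk names are assumptions, and the exact syntax may differ between VIOS releases):

$ failgrp -list
$ failgrp -create -fg FG2: hdisk7 hdisk8

This adds a second failure group to the pool and causes the data to be mirrored across both groups.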

SSP configuration depends heavily on the networking infrastructure to facilitate communication between all participating VIOS nodes in the SSP cluster. If a node is not able to communicate with all the other nodes in the SSP cluster, that node will be temporarily expelled from the cluster.
