

IBM PowerVM Virtual I/O Server

Contents

This Readme contains installation and other information about VIOS Update Release 2.2.5.10

Package information

Known capabilities and limitations

Installation information

Pre-installation information and instructions

Installing the Update Release

Performing the necessary tasks after installation

Additional information

Fixes included in this release

Package information

PACKAGE: Update Release 2.2.5.10

IOSLEVEL: 2.2.5.10

VIOS level                   NIM Master level must be equal to or higher than

Update Release 2.2.5.10      AIX 6100-09-08 or AIX 7100-04-03
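
If you plan to use NIM, one way to check the NIM Master's level from its root shell is the oslevel command; the output below is only illustrative:

# oslevel -s
7100-04-03-1614

The technology level and service pack shown must be equal to or higher than the level listed in the table above.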

In June 2015, VIOS introduced the minipack as a new service stream delivery vehicle, along with a change to the VIOS fix-level numbering scheme. The VIOS "fix level" (the fourth number) is now two digits. For example, VIOS 2.2.5.1 has been changed to VIOS 2.2.5.10. Refer to the VIOS Maintenance Strategy for more details regarding the change to the VIOS release numbering scheme.

General package notes

Be sure to meet all minimum space requirements before installing. For this release, ensure that at least 900 MB is available in the '/' file system.
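
One way to check the free space, from the root shell (reached with oem_setup_env), is the df command; the figures below are only illustrative:

# df -m /
Filesystem    MB blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4        2048.00    962.41   53%    14230     6% /

If the Free column for '/' shows less than about 900 MB, extend the file system before starting the update.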

Review the list of fixes included in Update Release 2.2.5.10.


To take full advantage of all the functionality available in the VIOS, it might be necessary to be at the latest system firmware level. If a system firmware update is necessary, it is recommended that you update the firmware before you update the VIOS to Update Release 2.2.5.10.

Microcode or system firmware downloads for Power Systems

The VIOS Update Release 2.2.5.10 includes the IVM code, but it will not be enabled on HMC-managed
systems. Update Release 2.2.5.10, like all VIOS Update Releases, can be applied to either HMC-managed
or IVM-managed VIOS.

Update Release 2.2.5.10 updates your VIOS partition to ioslevel 2.2.5.10. To determine if Update
Release 2.2.5.10 is already installed, run the following command from the VIOS command line:

$ ioslevel

If Update Release 2.2.5.10 is installed, the command output is 2.2.5.10.
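
For example, on a VIOS where this Update Release has already been applied, the command and its output look like this:

$ ioslevel
2.2.5.10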

Known Capabilities and Limitations

The following requirements and limitations apply to Shared Storage Pool (SSP) features and any
associated virtual storage enhancements.

Requirements for Shared Storage Pool

•Platforms: POWER6 and later (includes Blades), IBM PureFlex Systems (Power Compute Nodes only)

•System requirements per SSP node:

•Minimum CPU: 1 CPU of guaranteed entitlement

•Minimum memory: 4GB


•Storage requirements per SSP cluster (minimum): 1 Fibre Channel-attached disk of at least 1 GB for the repository

•At least 1 Fibre Channel-attached disk of at least 10 GB for data

Limitations for Shared Storage Pool

Software Installation

•All VIOS nodes must be at version 2.2.1.3 or later.

•When installing Update Release 2.2.5.10 on a VIOS that participates in a Shared Storage Pool, the Shared Storage Pool services must be stopped on the node being updated (see the command sketch after this list).

•In order to take advantage of the new SSP features in 2.2.4.00 (including improvements in the min/max
levels), all nodes in the SSP cluster must be at 2.2.4.00.
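
A sketch of stopping SSP services on the node being updated, and restarting them afterward, using the clstartstop command shown later in this Readme (substitute your own cluster name and host name):

$ clstartstop -stop -n <cluster_name> -m <hostname>
$ clstartstop -start -n <cluster_name> -m <hostname>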

SSP Configuration Feature                         Min     Max     Special*

Number of VIOS Nodes in Cluster                   1       16      24
Number of Physical Disks in Pool                  1       1024
Number of Virtual Disks (LUs) Mappings in Pool    1       8192
Number of Client LPARs per VIOS node              1       250     400
Capacity of Physical Disks in Pool                10GB    16TB
Storage Capacity of Storage Pool                  10GB    512TB
Capacity of a Virtual Disk (LU) in Pool           1GB     4TB
Number of Repository Disks                        1       1
Capacity of Repository Disk                       512MB   1016GB
Number of Client LPARs per Cluster                1       2000

*These features have special hardware requirements that must be met:


•Over 16 VIOS Nodes requires that the SYSTEM (metadata) tier contain only SSD storage.

•Over 250 Client LPARs per VIOS requires that each VIOS have at least 4 CPUs and 8 GB of memory.

Other notes:

•Maximum number of physical volumes that can be added to or replaced from a pool at one time: 64

•The Shared Storage Pool cluster name must be less than 63 characters long.

•The Shared Storage Pool pool name must be less than 127 characters long.

•The maximum supported LU size is 4TB; however, for high-I/O workloads it is recommended to use multiple smaller LUs, as this improves performance (see the sketch after this list). For example, using 16 separate 16GB LUs would yield better performance than a single 256GB LU for applications that perform reads and writes to a variety of storage locations concurrently.

•The size of the /var file system should be greater than or equal to 3GB to ensure proper logging.
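
As an illustration of the multiple-smaller-LUs guidance above, the sketch below creates two LUs in a shared storage pool with the mkbdsp command; the cluster, pool, LU, and vhost names are placeholders, and the exact options available at your level should be confirmed in the VIOS command reference:

$ mkbdsp -clustername mycluster -sp mypool 16G -bd app_lu01 -vadapter vhost0
$ mkbdsp -clustername mycluster -sp mypool 16G -bd app_lu02 -vadapter vhost0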

Network Configuration

•Uninterrupted network connectivity is required for operation; that is, the network interface used for the Shared Storage Pool configuration must be on a highly reliable network that is not congested.

•A Shared Storage Pool configuration can use IPv4 or IPv6, but not a combination of both.

•A Shared Storage Pool configuration should configure the TCP/IP resolver routine for name resolution to resolve host names locally first, and then use DNS (see the sketch after this list). For step-by-step instructions, refer to the TCP/IP name resolution documentation in the IBM Knowledge Center.

•The forward and reverse lookups should resolve to the IP address and host name that are used for the Shared Storage Pool configuration.

•It is recommended that the VIOSs that are part of the Shared Storage Pool configuration keep their
clocks synchronized.
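
A minimal sketch of the name-resolution checks described above, assuming the standard AIX /etc/netsvc.conf mechanism (edited from the root shell via oem_setup_env) and a hypothetical host name and documentation IP address:

# grep hosts /etc/netsvc.conf
hosts = local, bind

# host viosnode1
# host 192.0.2.10

The forward and reverse lookups should both return the host name and IP address that the Shared Storage Pool configuration uses.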

Storage Configuration

•Virtual SCSI devices provisioned from the Shared Storage Pool may drive higher CPU utilization than the
classic Virtual SCSI devices.
•Using SCSI reservations (SCSI Reserve/Release and SCSI-3 Reserve) for fencing physical disks in the
Shared Storage pool is not supported.

•SANCOM is not supported in a Shared Storage Pool environment.

Shared Storage Pool capabilities and limitations

•On the client LPAR, a Virtual SCSI disk is the only peripheral device type supported by SSP at this time.

•When creating Virtual SCSI Adapters for VIOS LPARs, the option "Any client partition can connect" is
not supported.

•VIOSs configured for SSP require that Shared Ethernet Adapters (SEA) be set up in threaded mode (the default mode). SEA in interrupt mode is not supported with SSP.

•VIOSs configured for SSP can be used as a Paging Space Partition (PSP), but the storage for the PSP
paging spaces must come from logical devices not within a Shared Storage Pool. Using a VIOS SSP logical
unit (LU) as an Active Memory Sharing (AMS) paging space or as the suspend/resume file is not
supported.

Installation information

Pre-installation information and instructions

Ensure that your rootvg contains at least 30GB and has at least 4GB of free space before you attempt to update to Update Release 2.2.5.10. Run the lsvg rootvg command, and then verify that there is enough free space.

Example:

$ lsvg rootvg

VOLUME GROUP:       rootvg                   VG IDENTIFIER:  00f6004600004c000000014306a3db3d
VG STATE:           active                   PP SIZE:        64 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      511 (32704 megabytes)
MAX LVs:            256                      FREE PPs:       64 (4096 megabytes)
LVs:                14                       USED PPs:       447 (28608 megabytes)
OPEN LVs:           12                       QUORUM:         2 (Enabled)
TOTAL PVs:          1                        VG DESCRIPTORS: 2
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         1                        AUTO ON:        yes
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:        32
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
PV RESTRICTION:     none                     INFINITE RETRY: no

Upgrading from VIOS version lower than 2.1.0

If you are planning to update a VIOS at a version lower than 2.1, you must first migrate the VIOS to version 2.1.0 by using the Migration DVD. After the VIOS is at version 2.1.0, apply Update Release 2.2.5.10 to bring the VIOS to the latest level, 2.2.5.10.

Note that with this Update Release 2.2.5.10, a single boot alternative to this multiple step process is
available to NIM users. NIM users can update by creating a single, merged lpp_source that combines the
contents of the Migration DVD with the contents of this Update Release 2.2.5.10.

A single, merged lpp_source is not supported for VIOS that uses SDDPCM. However, if you use SDDPCM,
you can still enable a single boot update by using the alternate method described at the following
location:

SDD and SDDPCM migration procedures when migrating VIOS from version 1.x to version 2.x

After the VIOS migration from 1.X to 2.X is complete, you must disable Processor Folding. Instructions to disable Processor Folding are detailed in the "Migration DVD" section at the link below:

Virtual I/O Server support for Power Systems

Upgrading from VIOS version 2.1.0 and above

If the current level of the VIOS is between 2.2.1.1 and 2.2.4.x, you can place the 2.2.5.10 update files in a directory and perform the update by using the updateios command.

Before installing the VIOS Update Release 2.2.5.10

The update could fail if there is a loaded media repository.

Checking for a loaded media repository

To check for a loaded media repository, and then unload it, follow these steps.

1. To check for loaded images, run the following command:

$ lsvopt

The Media column lists any loaded media.

2. To unload media images, run the following command for each Virtual Target Device that has a loaded image.

$ unloadopt -vtd <file-backed_virtual_optical_device>

3. To verify that all media are unloaded, run the following command again.

$ lsvopt

The command output should show No Media for all VTDs.
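
An illustrative lsvopt output after all images have been unloaded follows; the device names, and the exact column layout at your level, will differ:

$ lsvopt
VTD             Media                                   Size(mb)
vtopt0          No Media                                     n/a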


Migrate Shared Storage Pool Configuration

The Virtual I/O Server (VIOS) Version 2.2.2.1 or later supports rolling updates for SSP clusters. The VIOS can be updated to Update Release 2.2.5.10 by using rolling updates.

If your current VIOS is running with Shared Storage Pool from 2.2.1.1 or 2.2.1.3, the following
information applies:

A cluster that is created and configured on VIOS Version 2.2.1.1 or 2.2.1.3 must be migrated to version 2.2.1.4 or 2.2.1.5 before rolling updates can be used. This allows the user to keep the shared storage pool devices. When the VIOS version is equal to or greater than 2.2.1.4 and less than 2.2.5.10, download the 2.2.5.10 update images into a directory, and then update the VIOS to Update Release 2.2.5.10 by using rolling updates.

If your current VIOS is configured with Shared Storage Pool from 2.2.1.4 or later, the following
information applies:

The rolling updates enhancement allows the user to apply Update Release 2.2.5.10 to the VIOS logical
partitions in the cluster individually without causing an outage in the entire cluster. The updated VIOS
logical partitions cannot use the new SSP capabilities until all VIOS logical partitions in the cluster are
updated.

To upgrade the VIOS logical partitions to use the new SSP capabilities, ensure that the following
conditions are met:

•All VIOS logical partitions must have VIOS Update Release version 2.2.1.4 or later installed. After the
update, you can verify that the logical partitions have the new level of software installed by typing the
cluster -status -verbose command from the VIOS command line. In the Node Upgrade Status field, if the
status of the VIOS logical partition is displayed as UP_LEVEL , the software level in the logical partition is
higher than the software level in the cluster. If the status is displayed as ON_LEVEL , the software level in
the logical partition and the cluster is the same.

•All VIOS logical partitions must be running. If any VIOS logical partition in the cluster is not running, the
cluster cannot be upgraded to use the new SSP capabilities.

The VIOS SSP software monitors node status and automatically upgrades the cluster to make use of the new capabilities once all the nodes in the cluster have been updated and "cluster -status -verbose" reports "ON_LEVEL".

--------------------------------------------------------------------------------

Installing the Update Release

There is now a method to verify the VIOS update files before installation. This process requires that the padmin user have access to openssl, which can be accomplished by creating a link.

To verify the VIOS update files, follow these steps:

$ oem_setup_env

Create a link to openssl

# ln -s /usr/bin/openssl /usr/ios/utils/openssl

Verify the link to openssl was created

# ls -alL /usr/bin/openssl /usr/ios/utils/openssl

Both files should display similar owner and size

# exit

Use one of the following methods to install the latest VIOS Service Release. As with all maintenance, you
should create a VIOS backup before making changes.
If you are running a Shared Storage Pool configuration, you must follow the steps in Migrate Shared
Storage Pool Configuration.

Note : While running 'updateios' in the following steps, you may see accessauth messages, but these
messages can safely be ignored.

If your current level is between 2.2.1.1 and 2.2.2.1, you can apply the 2.2.5.10 updates directly. This fixes an update problem with the build date on the bos.alt_disk_install.boot_images fileset.

If your current level is 2.2.2.1, 2.2.2.2, 2.2.2.3, or 2.2.3.1, you need to run the updateios command twice to get the bos.alt_disk_install.boot_images fileset update problem fixed.

Run the following command after the step "$ updateios -accept -install -dev <directory_name>" completes.

$ updateios -accept -dev <directory_name>

Depending on the VIOS level, one or more of the LPPs below may be reported as "Missing Requisites", and they may be ignored.

MISSING REQUISITES:

X11.loc.fr_FR.base.lib 4.3.0.0 # Base Level Fileset

bos.INed 6.1.6.0 # Base Level Fileset

bos.loc.pc.Ja_JP 6.1.0.0 # Base Level Fileset

bos.loc.utf.EN_US 6.1.0.0 # Base Level Fileset

bos.mls.rte 6.1.x.x # Base Level Fileset


bos.mls.rte 6.1.9.200 # Base Level Fileset or #Fileset Update

bos.svprint.rte 6.1.9.200 # Fileset Update

--------------------------------------------------------------------------------

Applying updates

WARNING: If the target node to be updated is part of a redundant VIOS pair, ensure that the VIOS
partner node is fully operational before beginning to update the target node. NOTE that for VIOS nodes
that are part of an SSP cluster, the partner node must be shown in 'cluster -status ' output as having a
cluster status of OK and a pool status of OK. If the target node is updated before its VIOS partner is fully
operational, client LPARs may crash.

The current level of the VIOS must be 2.2.2.1 or later if you use a Shared Storage Pool.

1. Log in to the VIOS as the user padmin.

2. If you use one or more File Backed Optical Media Repositories, you need to unload media images
before you apply the Update Release. See details here.

3. If you use Shared Storage Pools, then Shared Storage Pool Services must be stopped.

$ clstartstop -stop -n <cluster_name> -m <hostname>

4. To apply updates from a directory on your local hard disk, follow the steps:

•Create a directory on the Virtual I/O Server.

$ mkdir <directory_name>

•Using ftp, transfer the update file(s) to the directory you created.

To apply updates from a remotely mounted file system, where the remote file system is mounted read-only, follow these steps:

•Mount the remote directory onto the Virtual I/O Server:


$ mount remote_machine_name:directory /mnt

The update release can be burned onto a CD by using the ISO image file(s). To apply updates from the
CD/DVD drive, follow the steps:

•Place the CD-ROM into the drive assigned to VIOS.

5. Commit previous updates by running the updateios command

$ updateios -commit

6. Verify the update files that were copied. This step can be performed only if the link to openssl was created.

$ cp <directory_path>/ck_sum.bff /home/padmin

$ chmod 755 /home/padmin/ck_sum.bff

$ ck_sum.bff <directory_path>

If there are missing updates or incomplete downloads, an error message is displayed.

7. Apply the update by running the updateios command

$ updateios -accept -install -dev <directory_name>

8. To load all changes, reboot the VIOS as user padmin.

$ shutdown -restart

Note: If the shutdown -restart command fails, run swrole -PAdmin so that padmin has the authorization required to access the shutdown command.

9. If cluster services were stopped in step 3, restart cluster services.

$ clstartstop -start -n <cluster_name> -m <hostname>


10. Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command, which should indicate that the ioslevel is now 2.2.5.10.

$ ioslevel

--------------------------------------------------------------------------------

Performing the necessary tasks after installation

Checking for an incomplete installation caused by a loaded media repository

After installing an Update Release, you can use this method to determine if you have encountered the
problem of a loaded media library.

Check the Media Repository by running this command:

$ lsrep

If the command reports "Unable to retrieve repository data due to incomplete repository structure", then you have likely encountered this problem during the installation. The media images have not been lost and are still present in the file system of the virtual media library.

Running the lsvopt command should show the media images.

Recovering from an incomplete installation caused by a loaded media repository

To recover from this type of installation failure, unload any media repository images, and then reinstall
the ios.cli.rte package. Follow these steps:

1. Unload any media images

$ unloadopt -vtd <file-backed_virtual_optical_device>


2. Reinstall the ios.cli.rte fileset by running the following commands.

To escape the restricted shell:

$ oem_setup_env

To install the failed fileset:

# installp -Or -agX ios.cli.rte -d <device/directory>

To return to the restricted shell:

# exit

3. Restart the VIOS.

$ shutdown -restart

4. Verify that the Media Repository is operational by running this command:

$ lsrep

Additional information

NIM backup, install, and update information

Use of NIM to back up, install, and update the VIOS is supported.

For further assistance on the back up and install using NIM, refer to the NIM documentation.

Note : For install, always create the SPOT resource directly from the VIOS mksysb image. Do NOT update
the SPOT from an LPP_SOURCE.

Use of NIM to update the VIOS is supported as follows:


Ensure that the NIM Master is at the appropriate level to support the VIOS image. Refer to the above
table in the "Package information" section.

On the NIM Master, use the operation updateios to update the VIOS Server.

Sample: "nim –o updateios –a lpp_source=lpp_source1 ... ... ... "

For further assistance, refer to the NIM documentation.
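
For illustration, a hypothetical complete invocation of the updateios operation against a NIM machine object named vios1 and an lpp_source named lpp_source_2251 (both names are placeholders; check the NIM documentation for any additional attributes that apply at your level):

# nim -o updateios -a lpp_source=lpp_source_2251 vios1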

On the NIM Master, use the operation alt_disk_install to update an alternate disk copy of the VIOS
Server.

Sample:

"nim –o alt_disk_install –a source=rootvg –a disk=target_disk

–a fix_bundle=(Value) ... ... ... "

For further assistance, refer to the NIM documentation.

If NIM is not used to update the VIOS, only the updateios or the alt_root_vg command from the padmin
shell can be used to update the VIOS.

Installing the latest version of Tivoli TSM

This release of VIOS contains several enhancements. These enhancements are in the area of POWER
virtualization. The following list provides the features of each element by product area.

Note: Version 6.1.0, the previous version of Tivoli TSM, is still shipped and installed from the VIOS
installation DVD.
Tivoli TSM version 6.2.2

The Tivoli TSM filesets are now being shipped on the VIOS Expansion Pack, with the required GSKit8
libraries.

The following are sample installation instructions for the new Tivoli TSM filesets:

1. Insert the VIOS Expansion DVD into the DVD drive that is assigned to the VIOS partition.

2. List Contents of the VIOS Expansion DVD.

$ updateios -list -dev /dev/cd0

Fileset Name

GSKit8.gskcrypt32.ppc.rte 8.0.14.7

GSKit8.gskcrypt64.ppc.rte 8.0.14.7

GSKit8.gskssl32.ppc.rte 8.0.14.7

GSKit8.gskssl64.ppc.rte 8.0.14.7

..

tivoli.tsm.client.api.32bit 6.2.2.0

tivoli.tsm.client.api.64bit 6.2.2.0

..

3. Install Tivoli TSM filesets.

$ updateios -fs tivoli.tsm.client.api.32bit -dev /dev/cd0

NOTE: Any prerequisite filesets are pulled in from the Expansion DVD; for TSM, this includes the GSKit8.gskcrypt filesets.

4. If needed, install additional TSM filesets.

$ updateios -fs tivoli.tsm.client.ba.32bit -dev /dev/cd0

5. Verify that TSM is installed by listing the installed software.

$ lssw

Sample output:

..

tivoli.tsm.client.api.32bit 6.2.2.0 CF TSM Client - Application Programming Interface

Fixes included in this release

List of fixes in 2.2.5.10

APAR       Description

IV61308 HANG IN ELXENT_BUGOUT

IV65101 CAA MKWPAR -S CREATES WPAR WITH SPINNING CLCOMD

IV67228 SYSTEM CRASH DUE TO RECURSIVE LOCK IN FC5287 DRIVER

IV67308 VERY SLOW NETWORK THROUGHPUT FROM VIOC

IV69355 SEA LARGESEND CAN CAUSE TCP BAD HEADER OFFSET ON VIRTUAL CLIENT

IV69897 AIX /USR/SYSV/BIN/DF COMMAND REPORTS INCORRECT VALUES

IV70553 SAS CD DRIVE MAPPINGS CHANGED WHEN VIOS REBOOTED

IV70843 AIXPERT MAY ADD WRITE PERMISSION TO PROGRAMS

IV70899 FC1763/FC5899 supported port may become unresponsive

IV71204 rootcronjobck fails to handle symlink

IV71882 Invalidate on vid matched pvid of non-default trunk adapter.

IV71906 LVS AND VGS GET DISPLAYED WHEN THEY SHOULD BE HIDDEN.

IV72007 INCORRECT READ MAY OCCUR DURING JOINVG


IV72025 Assert in emfsc_free_cmd_elem

IV72133 caa: no gossip transmission on boot can lead to split-brain

IV72147 CFGIPSEC NOT PROPERLY HANDLING COMMENTS IN NEXTBOOT FILE

IV72562 SFW SFWCOM APPEARS UP TO CAA WHEN IT IS NOT FUNCTIONAL

IV72893 CLCOMD CAN USE TOO MUCH CPU WHEN AHAFS NOT ACCESSIBLE

IV73000 INCORRECT DETAIL IN VIOD_ALERT_EVENT IF CM DB FS CANNOT BE CHECK

IV73642 LSDEV -TYPE ENT4SEA EXCLUDES ROCE ADAPTER

IV73705 usrck does not catch non-executable shell

IV73716 partitions with more than 87,380 LMBs may crash

IV73758 EXECUTING STOPSRC -G IKE CAN HANG TMD

IV73765 alert -list option should list alerts of all tiers.

IV73767 Adapter in unknown state with nddctl -r

IV73768 EXTENDVG OF CONC VG MAY FORCE THE VG OFFLINE ON REMOTE NODE

IV73769 system crash in hd_reduce(), lvm layer

IV73770 joinvg in concurrent VG may put disks in missing

IV73838 caa: inactive node fails to generate remote node_up ahafs event

IV73951 I/O may hung during SCSI ACA Error Condition.

IV73952 cleanup SRC VIOS mappings after client lpar is remote restarted.

IV73954 system crash with livedump

IV73955 Unclear script callout errors

IV73976 A potential security issue exists CVE-2014-3566

IV73989 Node cle_globid range incorrect

IV73994 incorrect VPD information for Integrated ethernet adapter

IV74097 Slow NIM install/telnet/tcp connections

IV74184 The second port on Travis3-EN doesn't work


IV74329 CVE-2015-0261 and CVE-2015-2154 tcpdump vulnerablity fix

IV74484 ENTSTAT ON SEA FAILS WITH ENTSTAT_MODE ENVIRONEMT VARIABLE

IV74606 ALT_DISK_COPY ON 4K DISK TRIGGERS LVM_IO_FAIL

IV74650 caa: repository disk replacement may fail

IV74726 Etherchannel not recovering after port reset

IV74927 A potential security issue exists CVE-2015-2808

IV74946 clconfd lvm conflict with chrepos

IV75001 MIGRATEPV FAILURE MAY CAUSE IO HANG IN CONCURRENT VG

IV75057 rmvdev fails with an LU backed vtd.

IV75100 Restore seems to hang.

IV75102 SSP Lu's mapping goes missing after remote restart

IV75273 LSLDAP KEEPS ON FAILLING WHEN LDAP SERVER CLOSES CONNECTION

IV75290 Multiple client_partition entries cause mapping issues.

IV75364 CAA: UNEXPECTED START_NODE BY PEER NODE LEADS TO AST PANIC

IV75387 CAA: SYSTEM WILL CRASH IF GOSSIP PACKET HAS ID = 0

IV75388 updateios gives incorrect error codes/messages for some options

IV75480 LPM validation fails if some port bounce happens on SAN

IV75495 CODEGCHECK MIGHT FAIL WITH FALSE ERROR, DISPLAYING SCRIPTED

IV75538 caa: defined pv that matches repo pvid causes mkcluster to fail

IV75589 SYSTEM CRASHES IN ICMCLOSEQP1

IV75591 Resume can fail with HMC code HSCLA27F after validation pass.

IV75627 Cluster create could fail

IV75685 LPM can fail with HMC to MSP connection timed out error

IV75719 FCSTAT -N WWNXXXX FCS0 RETURNS 0 FOR FCSTAT OUTPUT

IV75738 LSSEC PERFORMANCE SUFFERS WHEN EFS IS ENABLED


IV75879 AIXPERT: RELATIVE CRON JOB SPECIFICATIONS MAY FAIL

IV75880 AIXPERT: FAILS TO HANDLE PARENT DIRECTORIES WITH ROOTCRNJOBCK

IV75883 DST with SRIOV dedicated failed in Stage 3

IV76033 Some SSP VTDs are not restored during cluster restore

IV76130 Invalid repository reserve policy not detected on remote nodes

IV76151 IKED LOOPS AND CAUSES CPU LOAD WHEN 0-BYTE DATAGRAM IS PRESENT

IV76158 SYSTEM CRASH AFTER RENDEV ON FCS ADAPTER

IV76194 VLAN ADAPTER ON PCIE2 4-PORT ADAPTER(10GBE SFP+) DROPS PACKETS

IV76204 ISCSI TOE SENDING IO TO WRONG TARGET

IV76214 VIO Change Mgmt daemon can core dump

IV76216 LUN discovery may fail during cfgmgr

IV76243 ETHERCHANNEL WITH CT3 ADAPTER MAY HANG

IV76254 Unable to start pool after a failed FG operation

IV76256 Unlock adapter is not required when lock did not happen earlier

IV76359 Do not remove adapter if suspend info is not updated

IV76508 EFSENABLE -D WRT LDAP OPERATION FAILS IN SOME SCENARIOS

IV76512 Packets with old MAC are received after changing MAC

IV76529 System crash while running hxecom with modified ent attributes

IV76530 VMLibrary size may be listed as zero

IV76538 SCTP heartbeats are not getting sent within RTO of the path

IV76591 0514-061 CANNOT FIND A CHILD DEVICE ERRORS ON FC ADAPTERS

IV76723 CAA UNICAST CLU: DMS ON LAST NODE DUE TO INCORRECT DPCOM STATE

IV76806 LSGROUP INCORRECTLY CACHES LDAP USER INFORMATION

IV76817 LPM validation or some VIOS function may give ODM lock errors

IV76821 'devrsrv' command to force clear SCSI-2 reserve on a vSCSI fails


IV77175 CAA: MERGE FAILURE DUE TO STALE JOIN_PENDING FLAG.

IV77179 viosecure command fails during rule failure with file option

IV77182 cluster Restore may fail in 1 node cluster with PowerVC

IV77183 System crash during adapter EEH recovery.

IV77471 Callback with NDD_LS_DOWN when EEH detected

IV77472 lnc2ent kdb command fails due to corrupted command table entry.

IV77475 ADD FOR POWERVC RESERVE_LOCK POLICY USING EMC 5.3.0.6 OR OLDER

IV77506 CACHEFS CRASHES IN CFIND()

IV77509 ABEND IN RESET_DECV

IV77510 SYSTEM CRASH IN REMOVING THE SEA ADAPTER BY RMDEV COMMAND

IV77522 Disks belonging to cluster are shown as free.

IV77529 DISK NORMAL OPEN FAILS WITH EACCES

IV77670 DIAG RETURNED 2E41-103: CHECKSUM VERIFICATION TEST FAILED

IV77687 ALLOW FC ATTRIBUTES TO BE SHOWN AND CHANGEABLE VIA SMIT

IV77744 Removing SEA device causes hang, SEA threads hang

IV77964 CAA: CHREPOS -C INCORRECTLY SAVES NODE_LOCAL VALUE TO REPO DISK

IV78144 cluster -status command does not always show pool status

IV78187 DSI IN ENTCORE_TX_SEND IN MLXENTDD

IV78360 LDAP USER HAS ADMCHG SET AFTER CHANGING OWN PASSWORD

IV78412 FAILURE OF VIOSBR (VIRTUALCFG) DOESN'T REMOVE WORKING FILES

IV78416 SECLDAPCLNTD INVALID CACHE ATTRIBUTE

IV78419 FC5899 IS MISSING SMIT PANEL FOR CHANGING MEDIA SPEED.

IV78420 NPIV adapter becomes unresponsive

IV78422 Repository Disk is shown as free when it is actually in use

IV78425 ROOTCRNJOBCK FAILS TO HANDLE RELATIVE SYMLINKS & SYMLINK CHAINS


IV78444 LOOPMOUNT COMMAND MAY FAIL FOR READ-ONLY NFS MOUNT

IV78453 IMPROVED DRIVER'S FAIRNESS ALGORITHM TO AVOID I/O STARVATION

IV78454 IMPROVED DRIVER'S FAIRNESS LOGIC TO AVOID I/O STARVATION

IV78502 LOOPING IN EFC_FREE_LOGIN_TBLE

IV78549 CRASH DUE TO BRKPOINT IN GXIB_RAS_REGISTER CALL DURING BOOT

IV78576 CRFS -A QUOTA=NO SHOULD HAVE FAILED DUE TO INVALID ATTRIBUTE

IV78682 Disk validation for LPM of NPIV client fails

IV78830 NPIV CLIENT LPAR REBOOT CAN HANG THE VIOS

IV78884 ALT_DISK_COPY MAY FAIL WHEN MOUNTING FILESYSTEMS

IV78895 NPIV Migration failed with Function npiv_phys_spt tried to

IV78897 LU-level validation for LPM fails with IBM i or Linux clients

IV78899 System crash in entcore_link_change_nic_callback()

IV78900 Removing certain virtual devices incorrectly fails.

IV79060 ABEND IN NMENT_FREE_RX

IV79066 CLMIGCHECK GIVES WARNING ON SYSLOG.CONF UPDATE

IV79067 CLMIGCHECK GIVES RESERVE ERROR ON REPOSITORY DISK

IV79069 VALID AIX LEVELS MIGHT BE INCORRECTLY IDENTIFIED AS INVALID.

IV79073 MISSING INFORMATION FROM CLMIGCHECK ERROR MESSAGE

IV79074 CLMIGCHECK FAILS IF NETSVC.CONF CONTAINS COMMENTED HOSTS LINES

IV79281 CLMIGCHECK MIGHT FAIL ON CLUSTERS WITH MORE THAN TWO NODES.

IV79303 IO.TEM CAN CRASH DURING THE PROCESSING OF FAILED ASYNCHRONOUS

IV79469 LU count listed in tier list output is incorrect.

IV79471 Some SSP VTDs missing after restore operation

IV79516 VKE_INFO QUEUE FULL ERROR DETAIL MESSAGE

IV79634 FAILED PATH ON A CLOSED DISK MAY NOT RECOVER AFTER DISK REOPEN
IV79658 LEVEL.OT MIGRATIONS FAIL IF ALL NODES NOT AT THE SAME CAA

IV79710 NIMADM FAILS WHEN MASTER IS BOOTED FROM BOS_HD5

IV79729 CLASS 3 NAMESERVER QUERY ODM SUPPORT FOR LEGACY FC ADAPTER

IV79874 SEA CONTROL CHANNEL & TRUNK VIRTUAL ADAPTER SHARE PVID CRASH

IV79991 FAST_FAIL MAY NOT WORK WHEN THE IMM_GID_PN ENABLED

IV80113 viosbr overwrites backups with frequency option.

IV80127 NIB ETHERCHANNEL WITH SR-IOV VF PORT PROBLEM

IV80151 SLOW HA EVENT PROCESSING IN LINKED CLUSTER DUE TCPSOCK RESETS

IV80435 TRUSTCHK -U OUTPUT INVALID ATTRIBUTES FOR HARDLINKS & SYMLINKS

IV80569 MUSENTDD'S RECEIVE PATH HANGS WHEN IT RECEVIES CERTAIN SIZE PKT

IV80598 System crash after cachefs forced unmount

IV80689 LNCENTDD: MULTICAST ADDRS LOST WHEN PROMISCUOUS MODE TURNED OFF

IV80732 ISAKMPD CORE DUMPED IN SPI_PROTO DESTRUCTOR

IV80871 Race in lncentdd/lnc2entdd driver may make adapter unusable.

IV81016 When the catalog path is invalid artex commands coredump.

IV81022 ISAKMPD CORE DUMP AT COPY_REPL

IV81023 NETSTAT & ENTSTAT FOR ETHERCHANNELS WITH SRIOV DO NOT SHOW MAC

IV81024 MKFS -V JFS2 -O LOG=INLINE /DEV/LOOP0 WITH IV79296 INSTALLED

IV81223 HMC reports that VIOS is busy and not responding to queries

IV81241 MELLANOX V2 DRIVER LOGS LINK UP AND LINK DOWN DURING REBOOT

IV81297 VIO CLIENT CRASHES IN DISK DRIVER DURING LPM WITH GPFS AND PR SH

IV81352 VM deploy fails when using NIC adapters

IV81454 DSI IN RRHBA_QUERY_NIC

IV81462 DF AND LSFS MAY LIST THE WRONG FILESYSTEM

IV81739 LPM: MIGMGR(FIND_DEVICES -T VETH) RC=8 ON DESTINATION VIOS


IV81756 VIOS may crash after using SSP image management in IBM Director

IV81820 CM ATTRIBUTE: MISSING MPIOALGORITHM FOR POWERPATH PVS

IV81839 Virtual tape devices do not configure after upgrading VIOS

IV81840 DEVICE IPSEC_V4 MAY COME UP DEFINED AT BOOT

IV81854 Some objects may be missing in list presented to HMC

IV81936 May lose access to one or more NPIV disks

IV81937 Segfault may occur in ioscli vg or lv related commands.

IV81967 UPDATEIOS:REMOVING INSTALLED EFIX FAILS DUE TO GARBAGE CHARACTER

IV82022 LU -RESIZE NOT RECOGNIZED ON CLIENT AND SCF LOCK ERROR

IV82080 SEGMENTATION FAULT IN RULESCFGSET IF MOTD BIGGER THAN 1024 CHARS

IV82145 IPREPORT DOES NOT DISPLAY TCP OPTION 14 (TCPOPT_ALTCKSUM)

IV82194 CRASH DURING VIOENT CALLBACK WHEN USED WITH ETHERCHANNEL

IV82195 SR-IOV TX_TIMEOUT DEFAULTS TO 30 SECONDS

IV82197 LPM-ED CLIENTS TEMPORARILY LOOSE CONNECTIVTY DUE TO RARPS.

IV82449 VIRTUAL ETHERNET ADAPTER DRIVER MAY GOES INTO DEAD STATE.

IV82461 VIOS may rarely crash during LUN-level validation for LPM

IV82463 Adapters using lncentdd driver may log EEH as permanent Error

IV82577 UNMIRRORVG ROOTVG MAY HANG WITH FW-ASSISTED DUMP & HD6

IV82579 SLOW HA EVENT PROCESSING IN LINKED CLUSTER DUE TCPSOCK RESETS

IV82596 SYN NOT RECEIVED WHEN VEA CLIENT TURN OFF CHECKSUM OFFLOAD

IV82627 CAA(MULTICAST): A NODE MAY NOT SEE A REBOOTED NODE AS UP

IV82728 SECONDARY GROUP NOT DETERMINED FOR OLDER NETIQ LDAP SERVERS

IV82752 PACKET ADJUSTED MULTIPLE TIMES BY RRHBADD DRIVER

IV82795 VIOS CRASH WHEN CREATING SEA IN LOW MEMORY CONDITION

IV82983 viosbr -restore not restoring other nodes except initiator node
IV83078 VIOS may crash when doing LUN-level validation for LPM

IV83079 trustchk error on /var/adm/cron/log after migration

IV83100 Handle pv backed virtual target devices during suspend operation

IV83121 After viosecure -undo, viosecure -view reports error

IV83143 I/O HANG IN EFC_FREE_LOGIN_TBLE

IV83212 HOSTNAME ISSUES PREVENT NODE FROM JOINING THE CAA CLUSTER

IV83291 NPIV client hang when cancel command fails

IV83544 TRUSTCHK -U FAILS FOR ALLOW_OWNER ALLOW_GROUP ALLOW_ALL

IV83615 ROLLING UPGRADE FROM DBV3 TO V4 FAILS

IV83637 DRW_LOCK_WRITE() BLOCKING READ THREADS CAUSE HANG.

IV83875 SECLDAPCLNTD MAY FAIL TO HANDLE LONG PASSWORD HASH STRINGS

IV84242 VM validation fails if nondisk type LU configured to NPIV client

IV84341 WRONG UDP FILTERS ADDED BY AIXPERT IPSECSHUNPORTS

IV84449 TCBCK -Y ALL IS RUN AT FIRSTBOOT FROM ALTERNATE DISK IN TCB ENV

IV84470 CRASH DUE TO POSSIBLE MEMORY LEAK IN CAA

IV84478 MULTIPLE DUPLICATE PDAT ODM ENTRIES FOR MPIOAPDISK

IV84555 MAC/VLAN ACL status incorrectly displayed when PVID enabled.

IV84636 BOOTPD DOESN'T FIND ROUTE WITH MANY NETWORK INTERFACES IN USE

IV84649 LPM fails when disk's unique_id has special characters

IV84661 Removing interface,when upgrading FW.Move hang state for FC#EN0H

IV84677 CACHE METHOD DOESN'T SET PROD FLAG CORRECTLY.

IV84704 LARGE RECEIVE ON VIOS IS SLOW WHEN VIO CLIENT HAS CHKSUM OFF

IV84767 FAILOVER HAPPENS ON 802.3AD ETHERCHANNEL WHILE REBOOT

IV84769 PANIC IN TSTART VIA SEAHA_PROCESS_DSCV_INIT

IV84776 NODE CRASHED


IV84833 System crash issuing slot reset after returning EEH Suspend busy

IV84889 CAA UNUSABLE NET_ADDR ON REPOSITORY DISK AFTER CHANGE BOOT IP

IV84953 CM daemon runs out of memory and thus restarts.

IV84985 EEH Resume failure can cause restart to set invalid device state

IV85080 Crash due to race condition of between DEAD and Close

IV85145 CONSTANT DISK_ERR4 ERRORS FOR INTERNAL SCSI DISKS IF HEALTH CHEC

IV85215 RDMAcore returns error if suspend is called more than once.

IV85255 ADDING AUDIT CONFIG FULLPATH OPTION

IV85311 Promiscuous or All mcast flags still shown as set after a close

IV85393 system crash running get_adapter with trace level >= 5

IV85395 alt_disk_mksysb hangs without the absolute path to device

IV85423 MOUNT MAY FAIL REPORTING MEDIA IS NOT FORMATTED

IV85460 ABEND_TRAP FROM CLOCK

IV85508 caa: core dump if netmon.cf entries are defined

IV85593 NFS3 OPEN FAILURE AFTER NFS3 OPEN WITH O_DIRECT AND O_CIO FAILS

IV85670 VARYONVG HANG IN SCSIDISK_CLOSE() DURING BOOT

IV85676 Repos DOWN event on create led to crash

IV85884 EEH Exit after performing Hot/Fundamental reset after FW update

IV85886 AIXPERT PREREQS ARE NOT MAINTAINED AFTER APPLYING RULES

IV85888 caa: clcomd fails to handle local internal bound ip address

IV85891 DSI CRASH IN VIOENT_OUTPUT

IV85895 ECH_CANNOT_FAILOVER ERROR WHEN REBOOTING

IV86129 Fix CVE-2015-7575 in imap,pop3

IV86237 Possible race condition between ctrc suspend and trace operation

IV86408 VIOS fails with adapter in use message during LPM


IV86523 SYSTEM MAY HANG WHEN UPGRADING OR REMOVING BOS.RTE.SECURITY

IV86535 system crash while using adapter with FC#EN0S and EN0W

IV86538 SYNCLVODM IS SLOW WHEN THERE ARE MANY DISKS CONFIGURED

IV86577 System crash due to stack overflow when errlogging on slih path

IV86666 mkvopt command allows vopt device names with special characters

IV86678 PACKETS ARE SEEN TWICE IN IPTRACE

IV86679 parallel importpv and reboot fails the importpv operation.

IV86684 handle large image file copies

IV86792 Removal of more than one node from a cluster was giving error

IV86794 Rolling Upgrade does not finish

IV86861 ERROR MESSAGE DURING REPLACEPV NOT RELEVANT

IV87015 clmigcheck fails to find any shared disks

IV87020 VM deploy fails with VIOS_VSCSI_HOST err (relative port id=0)

IV87053 IPTRACE -L DOES NOT ROTATE LOGFILE IF LOGSIZE MORE THAN 2GB

IV87134 VIOS crash in target_cdt_add

IV87309 CRASH IN XHCI_OPEN WHEN ACCESSING /DEV/USBHC0

IV87315 CAA CLCONFD CORE PTH_USCHED._USCHED_DISPATCH_FRONT._CHK_RESTART

IV87496 The validation fails, while migrating the client partitition

IV87670 IMPROVE HANDLING OF UNDETERMINED DISK ERRORS IN MPIO CONFIGURATI

IV87701 Restore fails for the max_transfer size of the hdisks.

IV87704 LPM will not start with a VASI Open error

IV87734 CLUSTER HEARTBEAT SETTINGS SMIT MENU MAY SHOW VALUE AS ZERO

IV87738 cluster backup through viosbr command gives segmentation fault.

IV87742 Do not use FC port if all NPIV ports are in use

IV87743 Displaying cluster status during cluster node out of network


IV87775 IMPROVE HANDLING OF ABORTED COMMANDS

IV87913 SYSTEM CRASH WITH NPIV_VALIDATE_DO_PORT_LOGIN

IV87957 FC5899 ENTSTAT PRINTS 1818495488 FOR BAD PACKET COUNT IN NON-C

IV87958 DURING FCS ADAPTER RECOVERY NODE CAN CRASH.

IV87981 A POTENTIAL SECURITY ISSUE EXISTS

IV88287 INACTIVE LPM MAY FAIL DUE TO MISSING CAPABILITIES

IV88351 Timing related deadlock issue in LPM

IV88688 RECREATEVG -D NEEDS BETTER USAGE DOCUMENTATION.

IV88719 DSI IN SEADD:SEA_VIRT_INPUT_POST_Q

IV88727 RULESCFGSET MAY ADD GARBAGE LINES IN MOTD FILE

IV89148 Potential cluster hang during svCollect.

IV90068 PCM SSP data collection status may not be updated properly

IV90482 bos.esagent fails to update during VIOS 2.2.5.10 SP install
