IBM System Storage subsystem controller firmware version 07.84.56.00 for
DS3950-all models, DS5020-all models, DS5100-all models and DS5300-all models
storage subsystems.

=====================================
BEFORE INSTALLING 07.84.56.00, PLEASE VERIFY ON SSIC (URL BELOW) THAT YOUR
STORAGE ENVIRONMENT IS SUPPORTED:
http://www.ibm.com/storage/support/config/ssic
=====================================

(C) Copyright International Business Machines Corporation 1999, 2014. All
rights reserved. US Government Users Restricted Rights - Use, duplication,
or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Note: Before using this information and the product it supports, read the
general information in section 6.0 "Trademarks and Notices" in this
document.

Note: Before commencing with any firmware upgrade procedure, use the
Storage Manager client to perform a Collect All Support Data capture.
Save this support data capture on a system other than the one that is
being upgraded.

Refer to the IBM System Storage Support Web Site or CD for the IBM
DS Storage Manager version 10.8 Installation and Host Support Guide for
firmware and NVSRAM download instructions.

For other related publications, refer to Related Publications in the
Installation, User's and Maintenance Guide of your DS storage subsystem
or storage expansion enclosures.

Last Update: 10/09/2014

Products Supported:

---------------------------------------------------------------
| New Model | Old Model | Machine Type | Model |
|------------|-----------|--------------|-----------------------|
| DS3950 | N/A | 1814 | 94H, 98H |
|------------|-----------|--------------|-----------------------|
| DS5020 | N/A | 1814 | 20A |
|------------|-----------|--------------|-----------------------|
| DS5100 | N/A | 1818 | 51A |
|------------|-----------|--------------|-----------------------|
| DS5300 | N/A | 1818 | 53A |
---------------------------------------------------------------

Supported Enclosure Attachments:

The DS3950 supports the attachment of EXP395 drive enclosures. EXP810 drive
enclosure attachment is supported as a premium feature and will require a
premium feature key.

The DS5020 supports the attachment of EXP520 drive enclosures. EXP810 drive
enclosure attachment is supported as a premium feature and will require a
premium feature key.

The DS5100 and DS5300 support the attachment of EXP5000 and the EXP5060
drive enclosures. An IBM RPQ approval is required for support of all EXP810
migration configurations with the DS5100 and DS5300.

Notes:
The following table shows the first four digits of the latest
controller firmware versions that are currently available for
various models of the DS3000/DS3500/DCS3700/DS3950/DS4000/DS5000
storage subsystems.

---------------------------------------------------
| IBM Storage | Controller firmware version |
| Subsystem Model | |
|------------------|--------------------------------|
| DS5300 (1818) | 07.84.xx.xx |
|------------------|--------------------------------|
| DS5100 (1818) | 07.84.xx.xx |
|------------------|--------------------------------|
| DS5020 (1814) | 07.84.xx.xx |
|------------------|--------------------------------|
| DS4800 (1815) | 07.60.xx.xx |
| | (07.60.28.00 or later only) |
|------------------|--------------------------------|
| DS4700 (1814) | 07.60.xx.xx |
| | (07.60.28.00 or later only) |
|------------------|--------------------------------|
| DS4500 (1742) | 06.60.xx.xx |
|------------------|--------------------------------|
| DS4400 (1742) | 06.12.56.xx |
|------------------|--------------------------------|
| DS4300 Turbo | 06.60.xx.xx |
| Option (1722) | |
|------------------|--------------------------------|
| DS4300 | 06.60.xx.xx |
| Standard Option | |
| (1722) | |
|------------------|--------------------------------|
| DS4200 (1814) | 07.60.xx.xx |
| | (07.60.28.00 or later only) |
|------------------|--------------------------------|
| DS4100 (1724)    | 06.12.56.xx                    |
| (Standard Dual/  |                                |
|  Single          |                                |
|  Controller Opt.)|                                |
|------------------|--------------------------------|
| DS3950 (1814) | 07.84.xx.xx |
|------------------|--------------------------------|
| DCS3700 (1818) | 07.86.xx.xx |
|------------------|--------------------------------|
| DS3500 (1746) | 07.86.xx.xx |
|------------------|--------------------------------|
| DS3000 (1726) | 07.35.xx.xx |
---------------------------------------------------
ATTENTION:
1. The DS4300 with Single Controller option (M/T 1722-6LU, 6LX, and 6LJ),
FAStT200 (M/T 3542-all models) and FAStT500 (M/T 3552-all models)
storage subsystems can no longer be managed by DS Storage Manager
version 10.50.xx.23 and higher.
2. For the DS3x00 storage subsystems, please refer to the readme files that
are posted in the IBM DS3000 System Storage support web site for the
latest information about their usage, limitations or configurations.

http://www.ibm.com/systems/support/storage/disk

=======================================================================
CONTENTS
--------
1.0 Overview
2.0 Installation and Setup Instructions
3.0 Configuration Information
4.0 Unattended Mode
5.0 Web Sites and Support Phone Number
6.0 Trademarks and Notices
7.0 Disclaimer

=======================================================================
1.0 Overview
--------------

The IBM System Storage controller firmware version 07.84.56.00 release
includes the storage subsystem controller firmware and NVSRAM files for
DS3950-all models, DS5020-all models, the DS5100-all models, and the
DS5300-all models. The IBM DS Storage Manager host software version
10.84.x5.30 or later is required to manage DS storage subsystems with
controller firmware version 7.84.xx.xx installed.

ATTENTION: DO NOT DOWNLOAD THIS CONTROLLER FIRMWARE ON ANY OTHER DS3000
OR DS4000 STORAGE SUBSYSTEM MODELS.

The IBM System Storage DS Storage Manager version 10.8 Installation and
Host Support Guide is available on IBM's Support Web Site as a downloadable
Portable Document Format (PDF) file.

In addition, the FC/SATA intermix premium features and the Copy Services
premium features (FlashCopy, Enhanced FlashCopy, VolumeCopy, and Enhanced
Remote Mirroring) are separately purchased options.

The Storage partitioning premium feature is standard on all IBM DS3500,
DCS3700, DS3950, DS4000 and DS5000 storage subsystems with the exception of
the IBM DS4100 (machine type 1724 with Standard or Single Controller
options) and the DS4300 (machine type 1722 with Standard or Single Controller
options) storage subsystems. Please contact IBM Marketing representatives or
IBM resellers if you want to purchase additional Storage partitioning options
for supported models.

See section 3.3 "Helpful Hints" for more information.

Refer to the IBM Support Web Site for the latest Firmware and NVSRAM
files and DS Storage Manager host software for the IBM DS Storage
Subsystems.
http://www.ibm.com/systems/support/storage/disk

New features that are introduced with the controller firmware version
07.83.xx.xx or later will not be available for any DS5000/DS3000
storage subsystem controllers without the controller firmware version
07.83.xx.xx or later installed.

The following table shows the controller firmware versions required for
attaching the various models of the DS3500/DCS3700/DS3950/DS4000/DS5000
Storage Expansion Enclosures.

---------------------------------------------------------------------
| Controller | EXP Storage Expansion Enclosures |
| FW Version |-------------------------------------------------------
| | EXP100 | EXP420 | EXP500 | EXP700 | EXP710 | EXP810 |
|------------|---------|--------|--------|--------|--------|--------|
|5.3x.xx.xx | No | No | Yes | Yes | No | No |
|------------|---------|--------|--------|--------|--------|--------|
|5.40.xx.xx | No | No | Yes | Yes | No | No |
|------------|---------|--------|--------|--------|--------|--------|
|5.41.xx.xx | Yes | No | No | No | No | No |
|------------|---------|--------|--------|--------|--------|--------|
|5.42.xx.xx | Yes | No | No | No | No | No |
|------------|---------|--------|--------|--------|--------|--------|
|6.10.0x.xx | Yes | No | Yes | Yes | No | No |
|------------|---------|--------|--------|--------|--------|--------|
|6.10.1x.xx | Yes | No | Yes | Yes | Yes | No |
|------------|---------|--------|--------|--------|--------|--------|
|6.12.xx.xx | Yes | No | Yes | Yes | Yes | No |
|------------|---------|--------|--------|--------|--------|--------|
|6.14.xx.xx | Yes | No | No | No | Yes | No |
|------------|---------|--------|--------|--------|--------|--------|
|6.15.xx.xx | Yes | No | No | No | Yes | No |
|------------|---------|--------|--------|--------|--------|--------|
|6.16.2x.xx | No | No | No | No | Yes | Yes |
|------------|---------|--------|--------|--------|--------|--------|
|6.16.8x.xx | No | Yes | No | No | Yes | Yes |
|------------|---------|--------|--------|--------|--------|--------|
|6.16.9x.xx | No | Yes | No | No | Yes | Yes |
|------------|---------|--------|--------|--------|--------|--------|
|6.19.xx.xx | Yes | No | No | No | Yes | Yes |
|------------|---------|--------|--------|--------|--------|--------|
|6.23.xx.xx | Yes | Yes | No | No | Yes | Yes |
|------------|---------|--------|--------|--------|--------|--------|
|6.60.xx.xx | Yes | Yes | No | Yes | Yes | Yes |
|------------|---------|--------|--------|--------|--------|--------|
|7.10.xx.xx | Yes | Yes | No | No | Yes | Yes |
|------------|---------|--------|--------|--------|--------|--------|
|7.15.xx.xx | Yes | Yes | No | No | Yes | Yes |
|------------|---------|--------|--------|--------|--------|--------|
|7.30.xx.xx | No | No | No | No | No | No |
|------------|---------|--------|--------|--------|--------|--------|
|7.36.xx.xx | Yes | Yes | No | No | Yes | Yes |
|------------|---------|--------|--------|--------|--------|--------|
|7.50.xx.xx | Yes | Yes | No | No | Yes | Yes |
|------------|---------|--------|--------|--------|--------|--------|
|7.60.xx.xx | Yes | Yes | No | No | Yes | Yes |
|-------------------------------------------------------------------|
|7.70.xx.xx | No | No | No | No | No | Yes |
| and later | | | | | | |
---------------------------------------------------------------------

---------------------------------------------------------------------------------
| Controller | EXP Storage Expansion Enclosures                                  |
| FW Version |-------------------------------------------------------------------|
|            | EXP520 | EXP5000 | EXP395 | EXP5060 | EXP3500 | DCS3700 80E       |
|------------|--------|---------|--------|---------|---------|-------------------|
|7.30.xx.xx  | No     | Yes     | No     | No      | No      | No                |
|------------|--------|---------|--------|---------|---------|-------------------|
|7.36.xx.xx  | No     | Yes     | No     | No      | No      | No                |
|------------|--------|---------|--------|---------|---------|-------------------|
|7.50.xx.xx  | No     | Yes     | No     | No      | No      | No                |
|------------|--------|---------|--------|---------|---------|-------------------|
|7.60.xx.xx  | Yes    | Yes     | Yes    | Yes     | No      | No                |
|------------|--------|---------|--------|---------|---------|-------------------|
|7.70.xx.xx  | Yes    | Yes     | Yes    | Yes     | Yes     | No                |
|------------|--------|---------|--------|---------|---------|-------------------|
|7.77.xx.xx  | Yes    | Yes     | Yes    | Yes     | Yes     | Yes               |
| and later  |        |         |        |         |         |                   |
---------------------------------------------------------------------------------

The following table shows the storage subsystem (controller module) and
minimum controller firmware versions required for T10-PI support.

Note: For T10-PI to work correctly, use drives whose feature list states that
      they are T10-PI capable. Also, refer to 1.4 Dependencies for
      further restrictions on particular drive supportability.

---------------------------------------------------------------
| New Model  | Machine Type | T10-PI   | Controller FW version |
|------------|--------------|----------|-----------------------|
| DS3500 | 1746 | No | N/A |
| | | | |
|------------|--------------|----------|-----------------------|
| DCS3700 | 1818 | No | N/A |
|------------|--------------|----------|-----------------------|
| DCS3700 | 1818 | Yes | 7.83 or later |
| Performance| | | |
| Module | | | |
| Controller | | | |
|------------|--------------|----------|-----------------------|
| DS3950 | 1814 | Yes | 7.77 or later |
|------------|--------------|----------|-----------------------|
| DS5020 | 1814 | Yes | 7.77 or later |
|------------|--------------|----------|-----------------------|
| DS5100 | 1818 | Yes | 7.77 or later |
|------------|--------------|----------|-----------------------|
| DS5300 | 1818 | Yes | 7.77 or later |
---------------------------------------------------------------
The following table shows the storage subsystem (controller module) and
minimum controller firmware versions required for HIC support.

------------------------------------------------------------------------------------------
| New Model   | Machine Type | FC HIC | SAS HIC | 1Gb iSCSI HIC | 10Gb iSCSI | 10Gb iSCSI |
|             |              |        |         |               | HIC        | HIC        |
|             |              |        |         |               | (Copper)   | (Optic)    |
|-------------|--------------|--------|---------|---------------|------------|------------|
| DS3500      | 1746         | 7.77   | 7.77    | 7.77          | 7.77       | N/A        |
|-------------|--------------|--------|---------|---------------|------------|------------|
| DCS3700     | 1818         | 7.77   | 7.77    | N/A           | N/A        | N/A        |
|-------------|--------------|--------|---------|---------------|------------|------------|
| DCS3700     | 1818         | 7.83   | 7.84    | N/A           | N/A        | 7.84       |
| Performance |              |        |         |               |            |            |
| Module      |              |        |         |               |            |            |
| Controller  |              |        |         |               |            |            |
------------------------------------------------------------------------------------------

Important: Update the DCS3700 Performance Module Controller firmware to 7.84
           or later before installing the SAS or 10Gb iSCSI HIC. The controller
           will go into lockdown if it is still running an older firmware version.

1.1 Limitations
---------------------

IMPORTANT:
The listed limitations are cumulative. They are grouped by the storage
subsystem controller firmware and Storage Manager host software release in
which they were first seen and documented.

Note: For limitations in certain operating system environments, refer to the
readme file that is included in the DS Storage Manager host software package for
that operating system environment.

Limitations with version 07.84.56.00 release.

No new limitations

Limitations with version 07.84.54.00 release.

No new limitations

Limitations with version 07.84.53.00 release.

No new limitations

Limitations with version 07.84.46.00 release.

No new limitations

Limitations with version 07.84.44.00 release.

1. When using a DCS3700 Performance Module Controller upgrade kit that carries a
   higher controller firmware version, the DCS3700 will migrate to that higher
   version of controller firmware. For example, a DCS3700 running 7.77 that is
   upgraded with a Performance Module Controller kit carrying 7.83 will migrate
   to controller firmware version 7.83.
2. While Enhanced Global Mirroring is enabled on an iSCSI configuration, do not
   disable IPv6 ICMP. Re-establishing a mirroring session over IPv6 will fail
   when the storage subsystem has ICMP disabled on the iSCSI ports.
   (LSIP200253529)
4. On a RHEL6u2 cluster system with DMMP (0.4.9-46.el6.x86_64) enabled, the
   following error message may be seen in the cluster log if the configuration
   is experiencing heavy failover/failback activity in a short period of time.
==============================================================================
Device /dev/mapper/mpathaa not found. Will retry wait to see if it appears.
The device node /dev/mapper/mpathaa was not found or did not appear in the udev
create time limit of 60 seconds
Fri Apr 27 18:45:08 CDT 2012 restore: END restore of file system /xyz (err=1)
ERROR: restore action failed for resource /xyz
/opt/LifeKeeper/bin/lcdmachfail: restore in parallel of resource "dmmp19021"
has failed; will re-try serially
END vertical parallel recovery with return code -1
==============================================================================
(LSIP200291263)
5. On an array or disk pool with automatic cache flushing disabled, the flush of
   restored cache data to disk stops after a storage subsystem power cycle. As a
   result, all LUNs remain in write-through mode permanently, causing a
   significant performance drop. Set the cache flush modifier value to 8 via
   SMcli (cacheFlushModifier=8; the default is 10) so that cache recovery can
   complete; an SMcli sketch follows this list.
   (LSIP200293218, LSIP200288495)
6. Do not perform a LUN resync operation on a Windows 2003 x64/2008 R2 cluster
   resource volume via VDS; the operation may fail and the LUN may go offline.
   The LUN will no longer be visible in the Disk Management and Diskpart
   utilities. LUN resync for single and multiple snapshots completes
   successfully in a non-cluster configuration. (LSIP200277670)
7. While performing a LUN resync operation on a Windows 2008 R2/2012 non-cluster
   resource volume in DISKSHADOW, the operation can finish successfully;
   however, the VSS service log will show an error (ID 8173). (LSIP200301273)
8. On a DS3950/DS5020 with a QLogic 1Gb iSCSI host interface card, if the HBA is
   damaged such that it is unresponsive but still accessible as a PCI device,
   the startup process in the QLogic driver will trigger a data abort and the
   controller will reboot. This continues until the controller locks down after
   5 consecutive reboots. (LSIP200276687)
9. When the DS3500 and DCS3700 use the default IPv6 address on iSCSI ports for
   Enhanced Global Mirroring, disabling and re-enabling the IPv6 service will
   result in a "Cannot communicate with remote array" failure in the MEL.
   (LSIP200257973)
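
   The following is a minimal sketch of the cache flush modifier change described
   in limitation 5 above. It assumes the SMcli "set allLogicalDrives" script
   command accepts the cacheFlushModifier parameter and that <ctrl-A-ip> and
   <ctrl-B-ip> are placeholders for the controller management addresses; verify
   the exact syntax against the Command Line Interface and Script Commands
   Programming Guide for your Storage Manager release before running it.

      # Set the cache flush modifier to 8 on every logical drive (the default is 10).
      SMcli <ctrl-A-ip> <ctrl-B-ip> -c "set allLogicalDrives cacheFlushModifier=8;"

      # Confirm the new setting in the storage subsystem profile.
      SMcli <ctrl-A-ip> <ctrl-B-ip> -c "show storageSubsystem profile;" | grep -i flush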

Limitations with version 07.83.27.00 release.

No new limitations

Limitations with version 07.83.25.00 release.

No new limitations

Limitations with version 07.83.23.00 release.

1. This release is only for the DCS3700 with Performance Module Controllers.
2. While performing a concurrent code download on a DCS3700 with Performance
   Module Controllers, an FC link may not come up within 60 seconds. On a Linux
   system with RDAC configured, the controller is considered removed if the FC
   link stays down for more than 60 seconds. The activity required to process
   the removed controller, the associated LUN failover, and other related
   activities may take several minutes to complete. If both controllers have
   their FC links down for more than 60 seconds during a CFW download, an I/O
   error could occur on the Linux host. (LSIP200202522)
3. While a DCS3700 Performance Module Controller is rebooting, the alternate
   controller may encounter a drive discovery panic if a large number of drives
   are attached. The associated array becomes degraded while both controllers
   are rebooting, and access is lost. The system recovers when the reboot
   associated with this issue completes. (LSIP200205582)
4. When comparing performance with VAAI enabled versus VAAI disabled while cache
   mirroring is enabled, customers may see up to an 8% degradation in
   performance during VM cloning operations of four or more on the DCS3700 with
   Performance Module Controllers. (LSIP200246884)
5. During VMware VM cloning, the operation can fail if a path failover is
   triggered. VMware may not be able to successfully fail over and retry
   connectivity on the alternate path. The user needs to wait for the clone
   failure to show on the vSphere screen and return the path to an optimal
   state before re-executing the clone command. This problem is limited to
   scenarios where a clone operation has been started on the controller and the
   controller's paths need to be transferred to the alternate. (LSIP200260946)
6. While a DCS3700 Performance Module Controller or DS5100/DS5300 is rebooting
   under heavy I/O, the controller may encounter a work queue panic. The
   controller will reboot immediately and continue operating normally; no data
   is lost. Reducing the I/O load minimizes the chance of this happening.
   (LSIP200159241)

Limitations with version 07.83.22.00 release.

1. When selecting a date in the FlashCopy scheduling dialog with the Windows
   accessibility feature, the screen reader cannot present the calendar field
   clearly because the contrast between the calendar background and the date
   numbers is not high enough. Click the date to confirm the exact value that
   has been selected. (LSIP200276685)

Limitations with version 07.83.18.00 release.

1. Disabling an individual host-side PHY on the SAS switch during I/O
   might cause the DS3500 and DCS3700 controllers to have ancient I/Os. The
   work-around is to disable the entire switch port instead. (LSIP200268504)
2. Rebooting the SAS switch during periods of heavy I/O might cause the DS3500
   and DCS3700 controllers to have SAS Discovery Errors 0x90. This error might
   cause the controllers to continually reboot until they are placed in the
   lockdown state. (LSIP200274704)
3. Replacing a failed controller in a simplex DS3500 configuration having
   controller firmware version 7.83.xx.xx or later installed with a controller
   having version 7.77.xx.xx or earlier installed will result in the controller
   being placed in the "no drive found" state. The recovery procedure is to use
   Storage Manager version 10.83 or later to upgrade the firmware in the
   replaced controller to controller firmware version 7.83.xx.xx or later.
   Note: Use the firmware upgrade menu function in the Enterprise Management
   window instead of the Subsystem Management window to initiate the controller
   firmware upgrade process.
4. While a DS5100/DS5300 is rebooting under heavy I/O, the controller may
   encounter a work queue panic. The controller will reboot immediately and
   continue operating normally; no data is lost. Reducing the I/O load
   minimizes the chance of this happening. (LSIP200159241)

Limitations with version 07.77.34.00 release.

No new limitations

Limitations with version 07.77.18.00 release.

1. "Transferred on" date for Pending configuration is incorrect in AMW


physical tab.
The timestamp displayed for the staged-firmware image will be incorrect.
This problem can be avoided by ensuring the controller�s real-time clock
is correct before upgrading firmware.

2. ESM State capture info is truncated during support bundle collection.


An additional support-bundle can be collected to obtain valid ESM
capture-data. If this issue occurs, the support-bundle that is captured
will be missing some information in the ESM state-capture.

Limitations with version 07.70.38.00 release.

1. The ETO setting of the 49Y4235 - Emulex Virtual Fabric Adapter (CFFh)
   for IBM BladeCenter HBA must be manually set to 144 seconds. Review
   the publications that are shipped with the card for instructions on
   changing the setting.

Limitations with version 07.70.23.00 release.

1. An FDE drive in the "Security Locked" state is reported as being in the
   "Incompatible" state in the drive profile. The drive can be recovered by
   importing the correct lock key.

2. The condition in which LUNs within a RAID array with split ownership between
   the two controllers could see false positives on synthesized PFA has been
   corrected by CR151915. The stagnant I/O will be discarded.

3. ESX 3.5.5:
Filesystem I/O in SLES11 VM failed during CFW upgrade in ESX35U5 + P20 patches.
Restriction: Running I/O with SLES11 VM filesystem while updating controller
firmware in VMware 3.5U5 + P20 patch.
Impact: User will see I/O error and Filesystem volumes in SLES11 VMs will
be changed to read-only mode.
Work-around: User can either perform controller FW upgrade with no I/O running
on SLES11 VM or with no Filesystem created in SLES11 VMs.
Mitigation: User can unmount the filesystem and remount the filesystem, then
I/O should be able to restart.

4. ESX 4.1:
I/O failed on Filesystem volume in SLES11 VM on ESX41.
Restriction: Running I/O with SLES11 VM filesystem while updating controller
firmware in VMware 4.1 env.
Impact: User will see IO error and Filesystem volumes in SLES11 VMs will
be changed to readonly mode.
Work-around: User can either perform controller firmware upgrade with no I/O
running on SLES11 VM or with no Filesystem created in SLES11
VMs.
Mitigation: User can unmount the filesystem and remount the filesystem, then
I/O should be able to restart.

Data error: the head and tail sectors do not match during the FcChipLip test
with SANbox.
Restriction: Requires excessive host-side and drive-side chip resets occurring
             simultaneously for a long time on ESX 4.1 with QLogic QLE2562 and
             SANbox 5800.
Impact:      The LBA stored in the header or tail does not match the expected
             LBA.
Work-around: N/A
Mitigation:  None
I/Os failed in a SLES10.2 VM during the controller FW upgrade test on ESX 4.1
with a Brocade HBA.
Restriction: Avoid online concurrent controller download with Brocade 8Gb HBAs
             (8x5) on all guest OSes.
Impact:      Users will often see this issue if a controller firmware upgrade
             is performed with active I/O.
Work-around: This issue occurs on various guest OSes, so to avoid it, perform
             an offline (no I/O to controllers) controller firmware upgrade.
Mitigation:  Reboot the failed VM host.

BladeCenter: I/O errors reported on Linux VMs during automated CFW download.
Restriction: Running IO with SLES11 VM filesystem while updating CFW in VMware
4.1 env.
Impact: User will see IO errors on Linux Virtual machines if they upgrade
Controller Firmware while they have active I/Os.
Work-around: Stop all I/Os to the array from the ESX4.1 host (at least from the
Linux Virtual Machines) during Controller Firmware Downloads.
Mitigation: Users will have to reissue the I/O after controller firmware
updating has completed.

RHEL5.5 RHCS Node stuck during shutdown when running node failover test.
Restriction: avoid soft-reboot with RHEL5.5 x64, IA64, PPC with Red Hat Cluster
Suite. Need to do hard-boot.
Impact: The node will hang indefinitely and will not come back online.
Work-around: physically power it off.
Mitigation: Will need to turn the node off manually, by physically powering it
off.

RHEL5.5 RHCS GFS failed to mount the filesystems during node bootup.
Restriction: Avoid node reboots on RHEL 5.5 x64, IA64, and PPC in a Red Hat
             Cluster Suite environment with nodes running GFS.
Impact:      The node fails to remount all the external storage disk file
             systems (GFS) during boot up. Applications will lose access to the
             cluster's resources when the second node reboots or goes offline.
Work-around: Create a script that sleeps for 30 seconds and allow it to execute
             at startup (a minimal sketch follows).
Mitigation:  Restart clvmd after boot, then restart GFS.
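
   The following is a minimal sketch of the work-around and mitigation above,
   assuming a RHEL 5.5 node using SysV init where the clvmd and gfs init scripts
   are present; the 30-second delay and the service names should be verified
   against your installation before use.

      #!/bin/sh
      # Appended to /etc/rc.local (hypothetical placement): wait for the cluster
      # stack to settle, then restart clvmd and remount the GFS file systems.
      sleep 30
      service clvmd restart
      service gfs restart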

Limitations with version 07.60.28.00 release.

1. The 7.60.xx release includes the synthesized drive PFA feature. This feature
will provide a critical alert if drive IOs respond slower than expected 30
   times within a one-hour period. There is a condition, when LUNs within a RAID
   array have split ownership between the two controllers, in which false
   positives for this alert could be received.

2. When the fan module in the EXP5060 drive expansion enclosure is repeatedly
   removed and re-inserted, the fan module amber fault LED might stay lit and
   the Recovery Guru in the Storage Manager might report that the fan is bad
   even though the fan module is working perfectly fine. This condition persists
   even when the fan is replaced with a new fan module. Because the fan was
   removed and re-inserted many times, the controller might run into a timing
   problem between the fan module replacement and when the controller polls the
   ESM for status updates, and misinterpret the status of the removed fan as
   "failed". That status bit is never reset, causing the fan amber fault LED to
   remain on and the failed status to persist even after a new fan CRU is
   installed. If a replacement fan module exhibits these symptoms, place your
   hand over the fan and compare the air flow to the fan with good status on the
   opposite side of the enclosure. If the air flow is similar, you can assume
   the fan with failed status is working properly. Schedule a time when the DS
   subsystem can be placed offline so that the whole storage subsystem
   configuration can be power cycled to clear the incorrect status bit.

Limitations with version 07.60.13.05 release.

1. When modifying the iSCSI host port attributes there can be a delay of up to
3 minutes if the port is inactive. If possible, connect a cable between the
iSCSI host port and a switch prior to configuring any parameters.

2. When doing a Refresh DHCP operation in the Configure iSCSI port window, and
the iSCSI port is unable to contact the DHCP server, the following
   inconsistent informational MEL events can be reported:
- MEL 1810 DHCP failure
- MEL 1807 IP address failure
- MEL 1811 DHCP success.
For this error to occur, the following specific conditions and sequence must
have been met:
- Before enabling DHCP, static addressing was used on the port and that
static address is still valid on the network.
- The port was able to contact the DHCP server and acquire an address.
- Contact with the DHCP server is lost.
- The user performs the Refresh DHCP operation from DS Storage Manager Client.
     - Contact with the DHCP server does not come back during the Refresh DHCP
       operation, and the operation times out.
Check the network connection to your iSCSI port and the status of the DHCP
server before attempting the Refresh DHCP operation again.

Limitations with version 07.60.13.00 release.

1. Brocade HBA does not support direct attach to storage subsystem.

2. Tivoli Productivity Center server does not retrieve the IPv6 address
from an SMI-S provider on an IPv6 host. You must use an IPv4 host.

3. Linux clients in a VIOS environment should change their error recovery
   timeout value to 300 seconds and their device timeout value to 120 seconds
   (see the sketch after this list).
   The default ibmvscsic error recovery timeout is 60; the command to change it
   to 300 is:
   echo 300 > /sys/module/ibmvscsic/parameters/init_timeout
   The default device timeout is 30; the command to change it to 120 is:
   echo 120 > /sys/block/sdb/device/timeout

4. Maximum sustainable IOPs falls slightly when going above 256 drives
behind a DS5100 or DS5300. Typically adding spindles improves performance
by reducing drive side contention. Not all IO loads will benefit by adding
hardware.

5. The event mechanism cannot support the level of events submitted to keep the
   consistency group state parallel with repeated reboots and auto resync for
   more than 32 RVM LUNs. Although we support the user setting RVM consistency
   group volumes to auto resync, we advise not to use this setting as it can
   defeat the purpose of the consistency group in a disaster recovery
   situation. If the customer must do this, then the number of RVM LUNs which
   are in a consistency group and also have auto resync set should be limited
   to 32.

6. If you switch from IPv6 to IPv4 and have an iSNS server running,
you could see IPv6 address still showing up on the iSNS server. To
clear this situation, disable then enable iSNS on the controllers
after you disabled IPv6 support.

7. Long controller Start of Day times have been observed on the iSCSI
DS5020 with a large number of mirrors configured with an Asynchronous
w/ Write Order Consistency mirroring policy.
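
   The following is a minimal sketch of the timeout changes described in item 3
   above. It assumes the ibmvscsic module parameter path and the sysfs block
   device layout shown in that item; /dev/sdb is only one example, so the loop
   below applies the device timeout to every sd device and should be adapted to
   your environment (the settings do not persist across reboots).

      #!/bin/sh
      # Raise the ibmvscsic error recovery timeout to 300 seconds.
      echo 300 > /sys/module/ibmvscsic/parameters/init_timeout
      # Raise the SCSI device timeout to 120 seconds on every sd device.
      for dev in /sys/block/sd*/device/timeout; do
          echo 120 > "$dev"
      done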

Limitations with version 07.50.13.xx release.

No new limitations

Limitations with version 07.50.12.xx release.

1. Start of day (a controller reboot) can take a very long time, up to
   20 minutes, after a subsystem clear configuration on a DS5000
   subsystem. This occurs on very large configurations, now that the
   DS5000 can support up to 448 drives, and can be exacerbated if there
   are SATA drives.

2. Veritas cluster server node failure when fast fail is enabled.
   Recommend setting dmp_fast_fail HBA flag to OFF. Frequency of
   this causing a node failure is low unless the storage subsystem
   is experiencing repeated controller failover conditions.

3. Brocade HBA does not support direct attach to storage subsystem.

4. At times the link is not restored when inserting a drive side cable
into a DS5000 controller, a data rate mismatch occurs. Try reseating
the cable again to clear the condition.

5. Due to a timing issue with controller firmware, using SMcli or the
   script engine to create LUNs and set LUN attributes will sometimes
   end with a script error.

6. Mapping host port identifiers to a host via the Script Editor hangs, and
   fails via the CLI with "... error code 1". This will occur when using the
   CLI/script engine to create an initial host port mapping.

7. Under certain high-stress conditions, a controller firmware upgrade will
   fail when volumes do not get transferred to the alternate controller
   quickly enough. Upgrade controller firmware during maintenance windows
   or under low-stress I/O conditions.

8. Concurrent controller firmware download is not supported in storage
   subsystem environments with attached VMware ESX server hosts running a
   level of VMware ESX older than VMware ESX 3.5u5 p20.

Limitations with version 07.36.17.xx release.

No new limitations

Limitations with version 07.36.14.xx release.

No new limitations

Limitations with version 07.36.12.xx release.

1. For current PowerHA/XD (formerly HACMP) and GPFS support information,
   please review the interoperability matrix found at:
   http://www-03.ibm.com/systems/storage/disk/ds4000/interop-matrix.html
   -or-
   www.ibm.com/systems/support/storage/config/ssic/index.jsp

Limitations with version 07.36.08.xx release.

1. After you replace a drive, the controller starts the reconstruction
   process on the degraded volume group. The process starts successfully,
   and it progresses through half of the volume group until it reaches
   two Remote Volume Mirroring (RVM) repository volumes. The first RVM
   repository volume is reconstructed successfully, but the process stops
   when it starts to reconstruct the second repository volume. You must
   reboot the owning controller to continue the reconstruction process.

Limitations with version 07.30.21.xx release.

1. For current PowerHA/XD (formerly HACMP) and GPFS support information,
   please review the interoperability matrix found at:
   http://www-03.ibm.com/systems/storage/disk/ds4000/interop-matrix.html
   -or-
   www.ibm.com/systems/support/storage/config/ssic/index.jsp

2. When migrating or otherwise moving controllers or drive trays between
   systems, always quiesce I/O and allow the cache to flush before
   shutting down a system and moving components.

3. When utilizing remote mirrors in a write consistency group, auto-
   synchronization can fail to resynchronize when a mirror link fails
   and gets re-established. Manual synchronization is the recommended
   setting when mirrors are spread across multiple arrays.

4. Using 8KB segment size (default) with large IOs on a RAID-6 volume
can result in an ancient IO as a single IO will span several stripes.
Adjusting the segment size to better match the IO sizes such as 16KB
will improve this performance.

5. Doing volume defragmentation during peak I/O activity can lead to an I/O
   error. It is recommended that any volume defragmentation is done during
   off-peak or maintenance windows.

6. Doing RAID migration in large configurations during peak I/O activity can
   take a very long time to complete. It is recommended that RAID migration
   is done during off-peak or maintenance windows.

7. Using legacy arrays running 7.10.23.xx firmware as a remote mirror with a
   DS5000 array can cause performance issues resulting in "Data on mirrored
   pair unsynchronized" errors. Updating the legacy controller to
   07.15.07.xx resolves this issue.

8. The IPv6 dynamic Link-Local address for the second management port on the
controller is not set when you toggle Stateless Autoconfig from Disabled
to Enabled. The IPv6 dynamic Link-Local address is assigned when the
controller is rebooted.

9. DS5000 storage subsystems support legacy EXP810 storage expansions. If
   you are moving these expansions from an existing DS4000 system to the
   DS5000 system as the only expansions behind the DS5000, the DS4000 system
   must be running 07.1x.xx.xx controller firmware first.

1.2 Enhancements
-----------------

The DS Storage Manager version 10.84.xx.30 host software, in conjunction with
controller firmware version 7.84.53.00 and higher, provides the following:

- Bug fixes as listed in the changelist file.

Note: Host type VMWARE has been added to NVSRAM as an additional host type,
starting with controller firmware version 7.60.40.00. It is now separated
from the Linux Cluster host type, LNXCLVMWARE or LNXCL, which is now
renamed LNXCLUSTER. The VMWARE host type also has the "Not Ready" sense
data and "Auto Volume Transfer" defined appropriately for the VMware ESX server.

+ The DS4200 and DS4700 with controller firmware version 7.60.40.xx
  and later installed will use host index 21.
+ All other supported systems with controller firmware version 7.60.40 and
  later installed will use host index 16 instead.

Although not required, it is recommended to move to the VMWARE host type
instead of continuing to use the Linux host type for VMware hosts, since
upgrading controller firmware and NVSRAM would continue to require
running scripts to modify the Linux host type for VMware hosts, whereas
the VMWARE host type does not require running scripts.

In addition, starting with controller firmware version 7.83.xx.xx, a new
VMware host type, VMWareTPGSALUA, is created for use with storage
subsystems having ALUA-enabled controller firmware installed (version
7.83.xx.xx and later).

- The controllers do not need to be rebooted after the change of host type.
- The VMware host will need to be rebooted.
- Changing the host type should be done under low I/O conditions.
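
The following is a minimal sketch of changing a defined host to the VMWARE
host type from the command line. It assumes the SMcli "set host" command
accepts a hostType parameter by index, that a host named "esx-host-1" already
exists (an illustrative placeholder), and that <ctrl-A-ip> and <ctrl-B-ip>
stand in for the controller management addresses; verify the syntax and the
host type index for your subsystem model against the CLI guide before use.

   # Change an existing host definition to the VMWARE host type.
   # Host index 16 applies to the subsystems covered by this readme; the
   # DS4200 and DS4700 use index 21 (see the note above).
   SMcli <ctrl-A-ip> <ctrl-B-ip> -c "set host [\"esx-host-1\"] hostType=16;"

   # List the defined host types and their indexes to confirm.
   SMcli <ctrl-A-ip> <ctrl-B-ip> -c "show storageSubsystem hostTypeTable;"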

1.3 Prerequisites
------------------

The IBM DS Storage Manager host software version 10.84.x5.30 or later is
required to manage DS5000 storage subsystems with controller firmware
version 07.84.56.00 installed.

1.4 Dependencies
-----------------

The information in this section is pertinent to the controller firmware
version 7.83.xx.xx or higher release only. For the dependency requirements of
previously released controller firmware versions, please consult
the Dependencies section in the readme file that was packaged with each
of those controller firmware releases.

ATTENTION:

1. Always check the README files (especially the Dependencies section)
   that are packaged together with the firmware files for any required
   minimum firmware level requirements and the firmware download
   sequence for the storage/drive expansion enclosure ESM, the
   storage subsystem controller and the hard drive firmware.

2. The DS5020, DS3950, EXP5000, EXP810, EXP520, and EXP395 FC-SAS drives must
have the FC-SAS interposer firmware version 2264 or later installed. Some
drives with FC-SAS interposer firmware version earlier than version 2264
may report incorrect inquiry information during Start of Day. This causes
the controller firmware to make the drive uncertified (incompatible),
which in turn, causes the drive to no longer be accessible for I/Os. This
condition will also cause the associated array to go offline. Since the
array is no longer accessible, the controller firmware may not recover the
cache for all LUNs under this array. This behavior may result in data not
being written. If you believe that you have encountered this issue, please
call IBM Support for assistance with the recovery actions.

The FC-SAS interposer firmware version 2264 and later is available in the
ESM/HDD firmware package version 1.78 or later.

3. The 3 TB SATA drive option for the EXP5060 expansion enclosure requires
   ATA translator firmware version LW1613 or higher. The drive will be
   shown as "Incompatible" if it is installed in an EXP5060 drive
   slot with ATA translator firmware version older than LW1613. Please
   refer to the latest EXP5060 Installation, User's and Maintenance Guide
   for more information on working with the 3 TB SATA drive option.

4. The Storage Manager host software version 10.83.x5.18 or higher is
   required for managing storage subsystems with 3 TB NL FC-SAS drives.
   Storage Manager version 10.83.x5.18 or higher in conjunction with
   controller firmware version 7.83.xx.xx and later allow the creation of
   T10PI-enabled arrays using 3 TB NL FC-SAS drives.

5. The EXP810 and EXP520 ESM firmware version must be at or greater
   than 98C5.

6. The disk drives and ESM packages are defined in the Hard Disk Drive
and ESM Firmware Update Package version 1.78 or higher found at the IBM
support web site.

7. Under certain high-stress conditions, a controller firmware upgrade will
   fail when volumes do not get transferred to the alternate controller
   quickly enough. Upgrade controller firmware during maintenance windows
   or under low-stress (off-peak) I/O conditions.

8. The 10Gb dual-port iSCSI HIC (FC#3131) and 6Gb quad-port SAS HIC (FC#3132)
   for the DCS3700 Performance Module Controller are supported on controller
   firmware 7.84.44.00 or later.

1.5 Level Recommendations
-----------------------------------------

1. Storage Controller Firmware versions:

a. DS3950: FW_DS3950_07845600
b. DS5020: FW_DS5020_07845600
c. DS5100: FW_DS5100_07845600
d. DS5300: FW_DS5300_07845600

2. Storage Controller NVSRAM versions:

a. DS3950: N1814D50R0784V04.dlp
b. DS5020: N1814D20R0784V04.dlp
c. DS5100: N1818D51R0784V04.dlp
d. DS5300: N1818D53R0784V04.dlp

Note: The DS3500/DCS3700/DS3950/DS4000/DS5000 storage subsystems shipped
      from the factory may have NVSRAM versions installed with a different
      first character prefix. Both the manufacturing NVSRAM version and the
      "N" prefixed NVSRAM version are the same. You do not need to update
      your storage subsystem with the "N" prefixed NVSRAM version as stated
      above. For example, the N1815D480R923V08 and M1815D480R923V08 (or
      C1815D480R923V08 or D1815D480R923V08 or ...) versions are the same.
      Both versions share the same "1815D480R923V08" string value.

Refer to the following IBM System Storage Disk Storage Systems
Technical Support web site for the latest released code levels.

http://www.ibm.com/systems/support/storage/disk

=======================================================================

2.0 Installation and Setup Instructions
-----------------------------------------

Note: Before commencing with any firmware upgrade procedure, use the
Storage Manager client to perform a Collect All Support Data capture.
Save this support data capture on a system other than the one that is
being upgraded.
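
The support data capture can also be collected from the command line. The
following is a minimal sketch, assuming the SMcli "save storageSubsystem
supportData" command is available in your Storage Manager release; the
controller addresses and the output file name are placeholders to replace
for your environment.

   # Collect All Support Data and write it to a zip file on the management
   # station (not on the subsystem that is being upgraded).
   SMcli <ctrl-A-ip> <ctrl-B-ip> -c "save storageSubsystem supportData file=\"/tmp/pre-upgrade-supportData.zip\";"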

The sequence for updating your Storage Subsystem firmware may be
different depending on whether you are updating an existing configuration,
installing a new configuration, or adding drive expansion enclosures.

ATTENTION: If you have not already done so, please check the Dependencies
section for ANY MINIMUM FIRMWARE REQUIREMENTS for the storage server
controllers, the drive expansion enclosure ESMs and the hard drives in the
configurations. The order in which you will need to upgrade firmware levels
can differ based on prerequisites or limitations specified in these
readme files. If no prerequisites, or limitations, are specified, the
upgrade order referred to in section 2.1 Installation For an Existing
Configuration should be observed.
If drive firmware upgrades are required, down time will need to be scheduled.
The drive firmware upgrades require that there are no Host I/Os sent to the
storage controllers during the download.

Note: For additional setup instructions, refer to the Installation, User's
and Maintenance Guide of your storage subsystem or storage expansion
enclosures.

2.1 Installation For an Existing Configuration
-------------------------------------------------

1. Upgrade the storage manager client program to the latest storage
   manager 10.84 version that is available from the IBM system storage
   support web site. Older versions of the storage manager client program
   will show that the new firmware file is not compatible with the to-be-
   upgraded subsystem, even when the existing version of the controller
   firmware installed in the DS storage subsystem is of 7.84.xx.xx code
   thread.

http://www.ibm.com/systems/support/storage/disk

2. Download controller firmware and NVSRAM.

   Please refer to the IBM DS Storage Manager version 10.8 Installation
   and Host Support Guide for additional information. An SMcli sketch of
   the equivalent command-line download appears at the end of this section.

   Important: It is possible to download the controller firmware and
              NVSRAM at the same time by selecting the option check box
              in the controller firmware download window. However if
              you have made any setting changes to the host parameters,
              downloading NVSRAM will overwrite those changes. These
              modifications must be reapplied after loading the new
              NVSRAM file. You may need to update firmware and NVSRAM
              during a maintenance window.

   To download controller firmware and NVSRAM using the DS Storage
   Manager application, do the following:

   a. Open the Subsystem Management window.
   b. Click Advanced => Maintenance => Download => Controller Firmware.
      Follow the online instructions.
   c. Reapply any modifications to the NVSRAM. Both controllers must be
      rebooted to activate the new NVSRAM settings.

   To download controller NVSRAM separately, do the following:

   a. Open the Subsystem Management window.
   b. Click Advanced => Maintenance => Download => Controller => Controller
      NVSRAM.
      Follow the online instructions.
   c. Reapply any modifications to the NVSRAM. Both controllers must be
      rebooted to activate the new NVSRAM settings.

3. Update the firmware of the ESMs in the attached drive expansion
   enclosures to the latest levels. (See 1.4 Dependencies.)
   To download drive expansion ESM firmware, do the following:

   a. Open the Subsystem Management window.
   b. Click Advanced => Maintenance => Download => Environmental (ESM)
      Card Firmware.
      Follow the online instructions.

   Note: The drive expansion enclosure ESM firmware can be updated
   online with no downtime if both ESMs in each of the drive expansion
   enclosures are functional and one (and ONLY one) drive expansion
   enclosure is selected in the ESM firmware download window for ESM
   firmware updating at a time.

   Note: SAN Volume Controller (SVC) customers are now allowed to
   download ESM firmware with concurrent I/O to the disk subsystem
   with the following restrictions.

   1) The ESM firmware upgrade must be done on one disk expansion
      enclosure at a time.
   2) A 10-minute delay from when one enclosure is upgraded to the
      start of the upgrade of another enclosure is required.

   Confirm via the Storage Manager application's "Recovery Guru"
   that the DS5000 status is in an optimal state before upgrading the
   next enclosure. If it is not, then do not continue ESM firmware
   upgrades until the problem is resolved.

   Note: Refer to the IBM System Storage Disk Storage Systems
   Technical Support web site for the current ESM firmware
   versions for the drive expansion enclosures.

4. Make any hard disk drive firmware updates as required. FAILURE to
   observe the minimum firmware requirement might cause your storage
   subsystem to be OUT-OF-SERVICE. Check the IBM System Storage
   Disk Storage Systems Technical Support web site for the latest released
   hard drive firmware if you have not already upgraded drive firmware to
   the latest supported version. (See 1.4 Dependencies.)

   To download hard disk drive firmware, do the following:

   a. Schedule down time, because the drive firmware upgrades require
      that no host I/Os are sent to the DS5000 controllers.
   b. Open the Subsystem Management window.
   c. Click Advanced => Maintenance => Download => Drive Firmware.
      Follow the online instructions.
   Note: With controller firmware version 06.1x.xx.xx or later,
   multiple drives from up to four different drive types can be updated
   at the same time.
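
   As referenced in step 2, the controller firmware and NVSRAM can also be
   downloaded from the command line. The following is a minimal sketch, assuming
   the SMcli "download storageSubsystem firmware" and "download storageSubsystem
   NVSRAM" commands are available in your Storage Manager release; the firmware
   file name is a placeholder for the package that matches your model, the
   NVSRAM file name is the DS5020 example from section 1.5, and the exact
   keywords should be verified against the CLI and Script Commands Programming
   Guide before use.

      # Download controller firmware (replace the file name with the package
      # for your subsystem model from section 1.5).
      SMcli <ctrl-A-ip> <ctrl-B-ip> -c "download storageSubsystem firmware file=\"<controller-firmware-file>\";"

      # Download the matching NVSRAM separately, then reapply any host-type
      # changes that the new NVSRAM overwrites (see the Important note in step 2).
      SMcli <ctrl-A-ip> <ctrl-B-ip> -c "download storageSubsystem NVSRAM file=\"N1814D20R0784V04.dlp\";"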

=======================================================================

3.0 Configuration Information
-----------------------------

3.1 Configuration Settings
--------------------------
1. By default, the IBM DS Storage Manager 10.83 or higher does not automatically
   map logical drives if the storage partitioning feature is enabled.
   This means that the logical drives are not automatically presented
   to host systems. (An SMcli mapping sketch follows this list.)

   For a new installation, after creating new arrays and logical drives:
a. If your host type is not Windows, create a partition with your
host type and map the logical drives to this partition.
b. If your host type is Windows, you can map your logical drives to
the "Default Host Group" or create a partition with a Windows host
type.

   When upgrading from previous versions of IBM DS Storage Manager
   to version 10.70:
a. If upgrading with no partitions created and you have an operating
system other than Windows, you will need to create a partition
with your host type and map the logical drives from the "Default
Host Group" to this partition.
b. If upgrading with Storage Partitions and an operating system other
than Windows is accessing the default host group, you will need
to change the default host type. After upgrading the NVSRAM, the
default host type is reset to Windows Server 2003/2008 non-clustered
for DS storage server with controller firmware version 06.14.xx.xx
or later. For DS4000 storage server with controller firmware version
06.12.xx.xx or earlier, it is reset to Windows non-clustered (SP5 or
higher), instead.

   Refer to the IBM DS Storage Manager online help to learn more
   about creating storage partitions and changing host types.

2. On a Linux host, DMMP/MPIO is required for ALUA-enabled host types, and
   this failover mechanism is also preferred with controller firmware
   7.83 or later.

3. Running script files for specific configurations. Apply the
   appropriate scripts to your subsystem based on the instructions
   you have read in the publications or any instructions in the
   operating system readme file. A description of each script is
   shown below.

   - SameWWN.scr: Setup RAID controllers to have the same World Wide
     Names. The World Wide Names (node) will be the same for each
     controller pair. The NVSRAM default sets the RAID controllers
     to have the same World Wide Names.

   - DifferentWWN.scr: Setup RAID controllers to have different World
     Wide Names. The World Wide Names (node) will be different for each
     controller pair. The NVSRAM default sets the RAID controllers
     to have the same World Wide Names.

   - EnableAVT_W2K_S2003_noncluster.scr: The script will enable automatic
     logical drive transfer (AVT/ADT) for the Windows 2000/Server 2003
     non-cluster heterogeneous host region. The default setting is to
     disable AVT for this heterogeneous host region. This setting is one
     of the requirements for setting up remote boot or SAN boot. Do
     not use this script unless it is specifically mentioned in the
     applicable instructions. (This script can be used for other host
     types if modifications are made in the script, replacing the Windows
     2000/Server 2003 non-cluster host type with the appropriate host
     type that needs to have AVT/ADT enabled.)

   - DisableAVT_W2K_S2003_noncluster.scr: The script will disable the
     automatic logical drive transfer (AVT) for the Windows 2000/Server
     2003 non-cluster heterogeneous host region. This script will reset the
     Windows 2000/Server 2003 non-cluster AVT setting to the default.
     (This script can be used for other host types if modifications are
     made in the script, replacing the Windows 2000/Server 2003 non-cluster
     host type with the appropriate host type that needs to have AVT/ADT
     disabled.)

   - EnableAVT_Linux.scr: The script will enable automatic logical drive
     transfer (AVT) for the Linux heterogeneous host region. Do not use this
     script unless it is specifically mentioned in the applicable
     instructions.

   - DisableAVT_Linux.scr: The script will disable the automatic logical
     drive transfer (AVT) for the Linux heterogeneous host region. Do not
     use this script unless it is specifically mentioned in the
     applicable instructions.

   - EnableAVT_Netware.script: The script will enable automatic logical
     drive transfer (AVT) for the NetWare Failover heterogeneous host
     region. Do not use this script unless it is specifically mentioned in
     the applicable instructions.

   - DisableAVT_NetWare.script: The script will disable the automatic
     logical drive transfer (AVT) for the NetWare Failover heterogeneous
     host region. Do not use this script unless it is specifically
     mentioned in the applicable instructions.

   - disable_ignoreAVT8192_HPUX.script: This script will disable the DS4000
     storage subsystem's ignoring of AVT requests for the HP-UX server
     specific read pattern of 2 blocks at LBA 8192. Ignoring AVT requests
     for the LBA 8192 reads was implemented to prevent a possible occurrence
     of an AVT storm caused by the HP-UX server probing the available paths
     to the volume(s) in the wrong order when it detects a server-to-LUN
     path failure. Use this script only when you do not have LVM mirrored
     volumes defined using the mapped logical drives from the DS4000 storage
     subsystems. Please contact IBM support for additional information, if
     required.

   - enable_ignoreAVT8192_HPUX.script: This script will enable the DS4000
     storage subsystem's ignoring of AVT requests for the HP-UX server
     specific read pattern of 2 blocks at LBA 8192. Ignoring AVT requests
     for the LBA 8192 reads was implemented to prevent a possible occurrence
     of an AVT storm caused by the HP-UX server probing the available paths
     to the volume(s) in the wrong order when it detects a server-to-LUN
     path failure. Use this script only when you do have LVM mirrored
     volumes defined using the mapped logical drives from the DS4000 storage
     subsystems. Please contact IBM support for additional information, if
     required.
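
   As referenced in item 1 above, a logical drive can also be mapped to a host
   partition from the command line. The following is a minimal sketch, assuming
   the SMcli "set logicalDrive" command accepts the logicalUnitNumber and host
   parameters and that the logical drive "data_01" and host "host_01" already
   exist (all names and addresses are illustrative placeholders); verify the
   syntax against the CLI guide for your Storage Manager release before use.

      # Map an existing logical drive to an existing host definition as LUN 0.
      SMcli <ctrl-A-ip> <ctrl-B-ip> -c "set logicalDrive [\"data_01\"] logicalUnitNumber=0 host=\"host_01\";"

      # Review the resulting mappings.
      SMcli <ctrl-A-ip> <ctrl-B-ip> -c "show storageSubsystem lunMappings;"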

3.2 Unsupported configurations
------------------------------

The configurations that are currently not being supported are listed below:
1. Any DS4000-all models, DS3200/3300/3400, FAStT500 and FAStT200
storage subsystems configurations are not supported with this
version of controller firmware.

2. The EXP520 Expansion enclosure is not supported attached to any other IBM
DS Storage Subsystems except the DS5020. EXP810 drive enclosures are also
supported in the DS5020 with the purchase of a premium feature key.

3. The IBM EXP395 Expansion Enclosure is not supported attached
   to any other IBM DS Storage Subsystems except the DS3950. EXP810 drive
   enclosures are also supported in the DS3950 with the purchase of a premium
   feature key.

4. The IBM EXP5000 Expansion Enclosure is not supported attached
   to any other IBM DS Storage Subsystems except the DS5100 and DS5300.
   EXP810 drive enclosures are also supported with the DS5100 and DS5300
   once the RPQ is submitted and approved.

5. Fibre Channel loop environments with the IBM Fibre Channel Hub,
machine type 3523 and 3534, in conjunction with the IBM Fibre Channel
Switch, machine types 2109-S16, 2109-F16 or 2109-S08. In this
configuration, the hub is connected between the switch and the IBM
Fibre Channel RAID Controllers.

6. The IBM Fibre Channel Hub, machine type 3523, connected to IBM
machine type 1722, 1724, 1742, 1815, 1814, 3542 and 3552.

7. A configuration in which a server with only one FC host bus
   adapter connects directly to any storage subsystem with dual
   controllers is not supported. The supported configuration is the
   one in which the server with only one FC host bus adapter connects
   to both controller ports of any DS storage subsystem with dual
   controllers via a Fibre Channel (FC) switch (SAN-attached
   configuration).

8. On a VMware host, direct attach to a DS storage subsystem is not supported.
   Single-switch configurations are allowed, but each HBA and storage
   subsystem controller combination must be in a separate SAN zone.

3.3 Helpful Hints
------------------
1. Depending on the storage subsystem that you have purchased, you may have
to purchase the storage partitioning premium feature option or an option to
upgrade the number of supported partitions in a storage subsystem. Please see
IBM Marketing representatives or IBM resellers for more information.

2. When making serial connections to the DS storage controller, the recommended
   baud rate is 38400 (115200 on the DCS3700 Performance Module Controller).
   Note: Do not make any connections to any storage subsystem serial port
   unless instructed to do so by IBM Support. Incorrect use of the serial port
   can easily result in loss of configuration and, possibly, data.

3. All enclosures (including storage subsystems with internal drive
   slots) on any given drive loop/channel should have completely unique IDs,
   especially the single-digit (x1) portion of the ID, assigned to them. For
   example, in a maximally configured DS4500 storage subsystem, enclosures on
   one redundant drive loop should be assigned IDs 10-17 and enclosures on
   the second drive loop should be assigned IDs 20-27. Enclosure IDs with
   the same single digit, such as 11, 21 and 31, should not be used on the
   same drive loop/channel.

   The DS3950, DS5020, DS4200 and DS4700 storage subsystems and the EXP395,
   EXP520, EXP420, EXP810, and EXP5000 storage expansion enclosures do not have
   mechanical ID switches. These storage subsystems and storage expansion
   enclosures automatically set the enclosure IDs. IBM recommends not making
   any changes to these settings unless the automatic enclosure ID settings
   result in non-unique single-digit settings for enclosures (including the
   storage subsystems with internal drive slots) in a given drive loop/channel.

4. The ideal configuration for SATA drives is one logical drive per array and
one OS disk partition per logical drive. This configuration minimizes the
random head movements that increase stress on the SATA drives. As the number
of drive locations to which the heads have to move increases, application
performance and drive reliability may be impacted. If more logical drives are
configured, but not all of them used simultaneously, some of the randomness
can be avoided. SATA drives are best used for long sequential reads and
writes.

5. Starting with the DS4000 Storage Manager (SM) host software version 9.12
or later, the Storage Manager client script window looks for the files with
the file type of ".script" as the possible script command files. In the
previous versions of the DS4000 Storage Manager host software, the script
window looks for the file type ".scr" instead. (i.e. enableAVT.script for
9.12 or later vs. enableAVT.scr for pre-9.12)

6. Inter-operability with tape devices is supported on separate HBA and
   switch zones.

=======================================================================

4.0 Unattended Mode
---------------------
N/A

=======================================================================

5.0 Web Sites and Support Phone Number
----------------------------------------

5.1 IBM System Storage Disk Storage Systems Technical Support web site:
http://www.ibm.com/systems/support/storage/disk

5.2 IBM System Storage Marketing web site:
http://www.ibm.com/systems/storage/disk

5.3 IBM System Storage Interoperation Center (SSIC) web site:
http://www.ibm.com/systems/support/storage/ssic/

5.4 You can receive hardware service through IBM Services or through your
IBM reseller, if your reseller is authorized by IBM to provide warranty
service. See http://www.ibm.com/planetwide/ for support telephone
numbers, or in the U.S. and Canada, call 1-800-IBM-SERV (1-800-426-
7378).
IMPORTANT:
You should download the latest version of the DS Storage Manager
host software, the storage subsystem controller firmware, the
drive expansion enclosure ESM firmware and the drive firmware at
the time of the initial installation and when product updates become
available.

For more information about how to register for support notifications,
see the following IBM Support Web page:

ftp.software.ibm.com/systems/support/tools/mynotifications/overview.pdf

You can also check the Stay Informed section of the IBM Disk Support
Web site, at the following address:

www.ibm.com/systems/support/storage/disk

=======================================================================

6.0 Trademarks and Notices
----------------------------

6.1 The following terms are trademarks of the IBM Corporation
    in the United States or other countries or both:

IBM

DS3000

DS4000

DS5000

DCS3700

FAStT

System Storage

the e-business logo

xSeries

pSeries

HelpCenter

Microsoft and Windows are trademarks of Microsoft Corporation in the
United States, other countries, or both.

Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Sun Microsystems, Inc. in the United States, other countries,
or both.

UNIX is a registered trademark of The Open Group in the United States and
other countries.

Linux is a registered trademark of Linus Torvalds.


Other company, product, and service names may be
trademarks or service marks of others.

=======================================================================

7.0 Disclaimer
----------------

7.1 THIS DOCUMENT IS PROVIDED "AS IS" WITHOUT WARRANTY OF
    ANY KIND. IBM DISCLAIMS ALL WARRANTIES, WHETHER EXPRESS
OR IMPLIED, INCLUDING WITHOUT LIMITATION, THE IMPLIED
WARRANTIES OF FITNESS FOR A PARTICULAR PURPOSE AND
MERCHANTABILITY WITH RESPECT TO THE INFORMATION IN THIS
DOCUMENT. BY FURNISHING THIS DOCUMENT, IBM GRANTS NO
LICENSES TO ANY PATENTS OR COPYRIGHTS.

7.2 Note to U.S. Government Users -- Documentation related to
    restricted rights -- Use, duplication or disclosure is
subject to restrictions set forth in GSA ADP Schedule
Contract with IBM Corporation.
