
HP X9000 Series 6.0.1 Release Notes

HP Part Number: AW549-96039
Published: February 2012
Edition: Second

Copyright 2012 Hewlett-Packard Development Company, L.P. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation.

Version: 6.0.1 (build 6.0.340)

Description

This release contains updates to HP X9000 File Serving Software, HP X9320 and X9720 Network Storage Systems, and HP X9300 Network Storage Gateway systems. The X9000 Software features a highly scalable file system; CIFS, NFS, FTP, and HTTP file services; high availability; remote replication; data validation; snapshots; data tiering; and CLI and GUI management interfaces. It is installed on HP Network Storage System and Network Storage Gateway servers.

Update eligibility

Customers running X9000 File Serving Software 6.0, 5.6, 5.5, 5.4.1, or 5.4.0 are eligible for the update. Refer to the administrator guide for your storage system for specific update requirements and procedures. Customers running X9000 File Serving Software 5.3.2 or earlier should contact HP Support to determine compatibility before updating their software.

Supersedes

6.0 (build 6.0.326)

Product models
HP X9000 File Serving Software

Devices supported
HP X9300 Management Server
HP X9300 Network Storage Gateway
HP X9320 Network Storage System
HP X9720 Network Storage System

Operating systems
X9000 supported devices use the Red Hat Enterprise Linux 5.5 (64 bit) operating system.

Other supported software


Software                               Supported versions
Linux X9000 clients                    Red Hat Enterprise Linux 5.1, 5.2, 5.3, 5.4, 5.5 (all 64 bit)
                                       Red Hat Enterprise Linux 4 Updates 5, 6, 7, 8 (all 64 bit)
                                       SUSE Linux Enterprise Server 11 (64 bit)
                                       SUSE Linux Enterprise Server 10 SP3 (64 bit)
                                       openSUSE 11.1 (64 bit)
                                       CentOS 4.5, 5.1, 5.2, 5.3, 5.4 (all 64 bit)
Windows X9000 clients                  Not supported in the 6.0 release.


CIFS clients                           Windows 7, Windows Vista, Windows XP, Windows 2008, Windows 2003, MAC 10.5 and 10.6
Internet Protocol                      IPv4
iLO firmware                           iLO2 2.05 for G6 servers; iLO3 1.16 for G7 servers

Browsers for management console GUI    Microsoft Internet Explorer 8 and 7; Mozilla Firefox 3.6 and 3.5; Adobe Flash Player 9.0.45 or higher for viewing the charts on the GUI dashboard

Languages
International English

New features
The following features are new in this release:

•  X9000 software snapshots.
   NOTE: You can use either the software method or the block method to take snapshots on a file system. Using both snapshot methods simultaneously on the same file system is not supported.
•  Data retention and validation.
•  Case-insensitive file names for all users (Linux, NFS, Windows).
•  SMB2 support.
•  WebDAV support for HTTP clients.
•  Ibrix Collect utility to collect system information for HP Support. Ibrix Collect replaces the support ticket utility provided in earlier releases; the new ibrix_collect command replaces ibrix_supportticket.
•  Statistics tool that reports historical performance data for the cluster or for an individual file serving node.
•  The GUI and CLI now report information for both active and inactive tasks. Previously, information was available only for active tasks. The GUI provides Active Tasks and Inactive Tasks panels. The Active Tasks panel shows active tasks for remote replication, segment rebalancing, data tiering, case insensitivity, snapshot space reclamation, and data validation. The Inactive Tasks panel shows tasks that have completed or have been stopped. The new ibrix_task -c option displays information about inactive tasks; see the example below.
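For example, a quick way to review task history from the CLI (a minimal sketch; only the -c option is documented in these notes, and the assumption that the base invocation lists active tasks follows its pre-6.0 behavior):

# List inactive (completed or stopped) tasks
ibrix_task -c

# Assumed: the base command still lists active tasks, as in earlier releases
ibrix_task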

About the 6.0 release


X9000 6.0 upgrade installs incorrect iLO2 firmware on G6 servers

If your cluster includes G6 servers, HA will not function properly after the upgrade to the X9000 6.0 release. To correct this condition, upgrade the iLO2 firmware to version 2.05. Download iLO2 version 2.05 from the following URL and copy the firmware update to each G6 server. Follow the installation instructions noted at the URL. This issue does not affect G7 servers.

http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareDescription.jsp?lang=en&cc=us&prodTypeId=15351&prodSeriesId=1146658&swItem=MTX-949698a14e114478b9fe126499&prodNameId=1135772&swEnvOID=4103&swLang=8&taskId=135&mode=3

Snapshot behavior for file systems created under an earlier release


To take snapshots of files, the files must be created on X9000 File Serving Software 6.0 or later. (These files are referred to as snapable.) To accommodate software snapshots, the inode format changed in the 6.0 release. For file systems created in a release earlier than 6.0, X9000 software can preserve all name space data in snapshots but can preserve file data only for objects (files) created under release 6.0 or later. To help prevent hybrid snap trees, in which a snap tree contains objects with the old format, the 6.0 release restricts rename operations.

The following restrictions apply to hybrid file systems:

•  Only directories created in version 6.0 or later can become snap tree roots.
•  If the old directory is not in a snap tree and the new directory is in a snap tree, rename is allowed only if the object being renamed is snapable (that is, it has the new format).

The following restrictions apply to both hybrid file systems and pure 6.0 file systems:

•  A snap tree root cannot be renamed. Also, the path to a snap tree root cannot be changed.
•  Rename is allowed when neither the old directory nor the new directory is in a snap tree.
•  Rename is allowed when the old directory and the new directory are in the same snap tree.
•  Rename is not allowed when the directories are in different snap trees.

These restrictions are intended to prevent hybrid snap trees containing files with the old format. However, hybrid snap trees can still occur when a directory with the new format is populated, using rename, with old-format objects and that directory is then made into a snap tree root or is renamed into a snap tree. The X9000 software does not prevent this situation because, if the subtree being moved were sufficiently large, a complete scan for old objects could take a prohibitively long time.
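For illustration, a few hypothetical rename operations and their outcomes under these rules (the paths /fs/treeA and /fs/treeB are assumed snap tree roots, and /fs/plain is an ordinary directory; none of these paths come from the product documentation):

# Allowed: source and destination are in the same snap tree
mv /fs/treeA/dir1/file1 /fs/treeA/dir2/file1

# Not allowed: source and destination are in different snap trees
mv /fs/treeA/dir1/file1 /fs/treeB/dir1/file1

# Allowed only if file2 is snapable (created under 6.0 or later): moving into a snap tree
mv /fs/plain/file2 /fs/treeA/dir1/file2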

Implementation changes
The following changes have been made in this release.

Agile management console


The agile management console is now required on all file serving nodes. The management console is active on one node and passive on the other nodes.

Spillover
Spilling over sequentially written files from one segment to another is disabled in the 6.0 release; no new spillover chunks will be created. If a segment is full, writes to files in that segment will fail with -ENOSPC (-28), even if there is space available in other segments. Existing spillover files generated in the 5.x release will continue to work (read/write). Writes will continue to work if there is space available in the segments to which the file belongs. If the segments are full, writes will fail with ENOSPC. Existing spillover files cannot be snapped and cannot be moved into a snap tree. To minimize out-of-space errors when creating files, it is important to ensure that segments are balanced. If some segments are over 80% full and other segments are less than 80% full, it is advisable to run a segment rebalance task.
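As a rough way to gauge balance before deciding whether to rebalance (a sketch only; the reporting option shown is an assumption, so verify the exact flags against your X9000 CLI reference):

# Assumed invocation: show per-segment capacity for file system ifs1
ibrix_fs -i -f ifs1

# If several segments show more than 80% used while others are well below,
# start a segment rebalance task from the GUI or CLI per the administrator guide.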


Remote replication
The following changes have been made in this release:

•  The remote replication commands have been renamed. Be sure to update any scripts that use the old names. (Event messages still refer to remote replication as CFR.)
•  The ibrix_cfrjob command has been renamed ibrix_crr. The new -X option specifies an exported directory on the target and is used only for remote cluster replication; previously, -P was used for this purpose. The -P option now specifies an arbitrary target directory and applies to both same-cluster and remote cluster replication.
•  The ibrix_export_cfr command has been renamed ibrix_crr_export. There are no other changes.
•  The ibrix_exportcfrpreference command has been renamed ibrix_crr_nic. Export preferences are now referred to as server assignments.

The GUI has been modified to simplify starting replication and to simplify selecting and modifying server and NIC assignments for remote replication exports.

CRR now supports Run-Once replication of a file system snapshot. The GUI must be used for this feature.

CRR now supports specifying an optional target directory. For same-cluster replications, the target directory is a subdirectory under the target file system. For remote replications, the target directory is a subdirectory under either the target file system or the target file system plus the exported directory. When you specify a source directory for a Run-Once replication (using the ibrix_crr -S option), the files under the source directory are replicated without replicating the source directory path. The root of the replica on the target is based strictly on the configured target file system, the configured target exported directory (remote cluster only), and the optional target directory.

Examples:
Source directory: /srcFS/a/b/c
Exported file system and exported directory on the target: /destFS/1/2/3

You can configure a replication that does not include an additional target directory. The replication command is:
ibrix_crr -s -o -f srcFs -S a/b/c -C tcluster -F destFs -X 1/2/3
The contents of /srcFS/a/b/c are replicated to /destFs/1/2/3/{contents_under_c}.

When the command includes the -P option to specify the target directory a/b/c:
ibrix_crr -s -o -f srcFs -S a/b/c -C tcluster -F destFs -X 1/2/3 -P a/b/c
The replication now goes to /destFs/1/2/3/a/b/c/{contents_under_c}.

NOTE: CRR does not support replicating from any 5.x cluster to a 6.0 cluster, or from a 6.0 cluster to any 5.x cluster.


Block snapshots
The following changes have been made in this release:

•  Block snapshots can now be scheduled only through the GUI. The ibrix_at command is no longer supported.
•  The names of the CLI commands have been changed. Be sure to update any scripts using these commands. The ibrix_snap command now applies to the software snapshot feature, and the ibrix_vs_snap command is now used for block snapshots. The ibrix_snap_strategy command has been renamed ibrix_vs_snap_strategy. The -T option and the linear strategy type are no longer supported.

NTP servers
Previously, NTP servers were configured on each file serving node and synchronized their time individually with the external time source. In the 6.0 release, the NTP server configuration is stored in the management console configuration. The active management console node syncs its time with the external source. The other file serving nodes sync their time with the active management console node. In the absence of an external time source, the local hardware clock on the active agile management console node is used as the time source. This configuration method ensures that the time is in sync on all cluster nodes, even in the absence of an external time source.
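To confirm that time is actually in sync after this change, standard Linux NTP tooling can be used (this is generic RHEL tooling, not an X9000-specific command; peer names in the output depend on your configuration):

# On the active management console node: verify it is tracking the external source
ntpq -p

# On any other file serving node: the peer list should show the active management console node
ntpq -p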

Quotas
The ibrix_fs_ops command has been deprecated. You can now use the ibrix_edquota command to create, view, or delete directory tree quotas. The ibrix_online_quotacheck command has been replaced by the ibrix_onlinequotacheck command. When quota limits are defined, you can now set the grace period for blocks and inodes. Previously, the grace period was set to seven days. The grace period can be defined on the GUI or with the ibrix_edquota command.
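For example, to review current quota settings before changing a grace period (the -l listing option is mentioned elsewhere in these notes; treat any further options as release-specific and check the CLI reference):

# List quota settings, including directory tree quotas
ibrix_edquota -l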

ibrixinit command
The ibrixinit command is used to install and uninstall X9000 Software on a file serving node. The -tm, -ts, and -M options are no longer supported for installations of the management console software and file serving node software. In the 6.0 release, a single ibrixinit command installs both the agile management console software and the file serving node software. The management console is active on the first node installed and passive on the remaining nodes. Also, the -tm and -ts options are no longer used when uninstalling X9000 Software from a file serving node. There are no changes to the ibrixinit command when it is used to install or uninstall X9000 Software on a Linux X9000 client. NOTE: If you need to uninstall only the agile management console packages from a node, retaining the file serving software packages, use the ibrixinit -tmo option. For example, if the passive management console is configured incorrectly on a node or has a corrupt fminstance.xml file, run the following command to uninstall the agile management console RPMs and unregister the passive agile management console on the node: ibrix/ibrixinit -tmo -u


Fixes in the 6.0.1 release (build 6.0.340)


The following fixes were made in this release:

•  The correct creation time was not maintained when a file on a CIFS share was modified or moved. Now, when a file is moved or modified, the original creation time is preserved as a file property visible to the client.
•  It was possible to create two directories with the same name, differing only in case, if one of the names used characters from the Latin-1 character set. Users were then unable to save or retrieve files from one of the directories.
•  When a CIFS client did not close a file gracefully, the files in the CIFS share were locked.
•  When a local user or group was deleted, the user or group was not deleted from the CIFS database.
•  CIFS users in extended ACLs were incorrectly given permissions that should apply only to the owner of the file.
•  Run-Once remote replications did not replicate files correctly when the source or destination directory name included spaces.
•  The defaults for the TcpKeepalive parameters on the CIFS server did not match the defaults on Windows servers.
•  An @ symbol could not be used in a file or directory name.
•  When the active management console failed over, ibrix_event did not send the correct email notifications.
•  An lwiod failure caused the CIFS service to stop.
•  If a directory name ended with the @ symbol, the contents of the directory could not be read when it was restored from a snapshot.
•  The CIFS server did not handle requests properly when filenames had leading backslashes.
•  When attempting to add ACLs on files and folders in a CIFS share, local users were not visible. You can now add local users; however, the cluster or client should be in the domain.
•  NDMP backups failed when used with the VLS 9200.
•  An attempt to create a file over RPC succeeded, but the file did not exist.
•  When a user quota setting was changed, the ibrix_edquota -l command reported the error: RealQuotaMonitor - directory tree quota has unparsable output.
•  A race condition caused a rebalance operation to fail with the message: ASSERT[deleg->dlg_side == idel_ds] failed: Trying to send downgrade delegation to local or remote delegation.
•  When a segment becomes unavailable, that state now persists after remounting the file system or migrating the unavailable segment; the segment unavailable state must be explicitly cleared. The following message appears after a segment unavailable alert:

   Filesystem includes one or more unavailable (or read-only) segments. Recommendation: verify storage health and contact customer support to run FSCK. To proceed with mounting segment as unavailable (or read-only), use force option.


   When a segment is successfully evacuated, the file system segment unavailable alert is displayed in the GUI and attempts to mount the file system will fail. There are several options at this point:

   •  Mark the evacuated segment as bad (retired) using the following command. The file system state changes to okay and the file system can then be mounted. However, marking the segment as bad cannot be reversed.
      ibrix_fs -B -f FSNAME {-n RETIRED_SEGNUMLIST | -s RETIRED_LVLIST}
   •  Keep the evacuated segment in the file system, and take one of the following steps to enable mounting the file system:
      Use the force option (-X) when mounting the file system:
      ibrix_mount -f myFilesystem -m /myMountpoint -X
      Or clear the unavailable segment flag on the file system with the ibrix_fsck command and then mount the file system normally:
      ibrix_fsck -f FSNAME -C -s LVNAME_OF_EVACUATED_SEG

•  The Linux X9000 client did not start after a minor kernel update. A new utility is now available to determine whether a minor kernel update is compatible with the X9000 client software. See Installing a minor kernel update on Linux clients (page 16) for more information.
•  When a CIFS share included a $ character in the path, CIFS clients could not map the share.
•  When a cluster with more than two nodes was upgraded, the Statistics tool upgrade failed.
•  The Statistics tool could fill the root partition with data for historical reports. Historical data and reports are now written to /local/statstool/histstats by default. See Updates to the Statistics tool documentation (page 16) for changes to Statistics tool procedures.
•  The CIFS service dumped core during Data Protector backups/restores or drag-and-drop operations.
•  The default UID for the CIFS Guest account could not be changed if it conflicted with another user. You can now delete the account and recreate it with a new UID. Use the following command to delete the Guest account, and enter yes when you are prompted to confirm the operation:
   /opt/likewise/bin/lw-del-user Guest
   Recreate the Guest account, specifying a new UID:
   /opt/likewise/bin/lw-add-user --force --uid <UID_number> Guest
   To have the system generate the UID, omit the --uid <UID_number> option.
•  Attempts to join a domain failed if the /etc/hosts file had a duplicate or uppercase alias.
•  When ibrix_snap was used to enable a snap tree, the command reported that the directory was already a snap tree; however, other commands could not locate the snap tree.

Workarounds
Following are workarounds for product situations that may occur:

Management console
If the management console service is stopped on the active management console and, within a minute, the passive management console experiences a shutdown or reboot, the passive management console will become active when the node is restarted. When the fusionmanager is started on the formerly active management console (or that node is rebooted), that management console will also be in active mode. Although both management consoles are active, only one of these consoles will control the cluster virtual interface. To recover from this scenario, take the following steps:

1. Determine which management console is running the cluster virtual interface:
   ifconfig -a
   In the output, check for a configured bond0:1 virtual interface.
2. Place the management console that is not running the cluster interface into maintenance mode, and then place it into passive mode:
   ibrix_fm -m maintenance
   ibrix_fm -m passive

If the recovery is not successful, contact HP Support.

If the node hosting the active management console goes down, it is important to reboot it as soon as possible. The management console on that node can then assume a passive role and receive updates from the new active management console. If the node remains down and the node hosting the new active management console also goes down, the cluster configuration data may become inconsistent, depending on the order in which the nodes are rebooted.

When the active management console is moved to maintenance mode, a passive management console transitions to active mode. Be sure that this transition is complete before you move the previously active management console from maintenance mode to passive mode. (Use the ibrix_fm -i command to check the mode of each management console, as in the example below.) If the passive management console has not yet assumed active mode, the management console being moved from maintenance to passive mode will become active again.
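For example, a minimal check sequence before completing the mode change (the -i and -m options are the ones documented in these notes; any output format is illustrative):

# Confirm the formerly passive console has transitioned to active mode
ibrix_fm -i

# Only after the transition is complete, move the previously active console to passive mode
ibrix_fm -m passive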

CIFS
•  CIFS and X9000 Windows clients cannot be used together because of incompatible AD user to UID mapping. You can use either CIFS or X9000 Windows clients, but not both at the same time on the cluster.
•  Occasionally a share cannot be mounted using the DNS name. The workaround is to use the IP address instead.
•  The X9000 CIFS server does not support connections from Linux SMB clients. The workaround is to use NFS for Linux.
•  Alternate Data Streams (ADS) are not supported. When a file with an ADS is moved or copied to the CIFS server, X9000 Software moves or copies the file, but the attached ADS is not copied. Attempts to create an ADS, or a filename containing a colon (:), will fail on the CIFS server.
•  When the Microsoft Windows Share Management interface is used to add a CIFS share, the share path must include the X9000 file system name. The Browse button on the MMC cannot be used to locate the file system. Instead, enter the entire path, such as C:\data\.
•  The X9000 management console GUI and CLI allow only X9000 file systems and directories to be exported as CIFS shares. However, the Microsoft Windows Share Management interface allows you to create a CIFS share that is not on an X9000 file system. Although the share will be available from the file serving node to which Windows Share Management was connected, it will not be propagated to the other file serving nodes in the cluster.
•  The ibrix_localusers -i <user information> command fails if the user information includes commas. To enter commas in the user information, use the management console GUI instead of the CLI.
•  When you use the Windows security tab to add local users or groups to a security ACL on a CIFS file (for either file or share-level permissions), you typically specify the user to add as either DOMAIN\username or MACHINE\username. On X9000 systems, local users are displayed as LOCAL\username, and it may seem that you should specify LOCAL\username in the Add dialog box in Windows. However, the Windows client cannot interpret LOCAL. Instead, specify the machine name of the server. For example, to add LOCAL\user1 to an ACL on a CIFS file shared out by serverX, specify serverX\user1 in the Add dialog box on the security tab. If you later use the Windows security tab to look at this ACL, the server name will have been replaced by LOCAL. (The CIFS server performs this remapping to ensure that local users are symmetric between all servers in the cluster and are not specific to any one machine name in the cluster.)
•  When joining a CIFS domain, the $ character cannot be used in passwords unless it is escaped with a backslash (\) and the password is enclosed in single quotes (' '). For example:
   ibrix_auth -n IB.LAB -A john -P 'password1\$like'
•  If you are using a Windows Vista client and running more than a single copy of Robocopy from that client, a hang is possible. The workaround is to disable the SMB2 protocol in the registry. To modify the registry, take the following steps on each file serving node:
   1. Enter the Registry Editor:
      /opt/likewise/bin/lw-edit-reg
   2. Locate the following entry:
      "SupportSmb2"=dword:00000001
      Change the entry to:
      "SupportSmb2"=dword:00000000
   3. Restart the likewise server:
      /etc/init.d/lwreg stop
      /opt/likewise/bin/lwsm start srvsvc
   You may also need to restart the Windows client, as the original negotiated protocol, SMB2, may be cached by the client. Restarting the client renegotiates the protocol back to SMB1.
•  Be sure to remove Active Directory users from the X9000 share admin list before removing them from Active Directory. If you remove an Active Directory user from Active Directory before removing the user from the X9000 share admins list, an error is reported when you attempt to change the share admins list. If you are seeing errors from this situation, rejoin Active Directory and remove all share admins. For example:
   ibrix_auth -n ib.lab -A administrator@ib.lab -P fusion -S "share admins="
   Then run ibrix_auth again to specify the new list of share admins:
   ibrix_auth -t -S "share admins=[ib\Administrator]"

Block snapshots
•  Snapshot creation may fail while mounting the snapshot. The snapshot will be created successfully, but it will not be mounted. Use the following command to mount the snapshot manually:
   ibrix_mount -f <snapshotname> -m /<snapshotname>
•  Quotas are disabled on block-level snapshots (for example, MSA2000 snapshots), and the quota information from the origin file system is not carried to the block-level snap file system. Block-level snapshots are temporary file systems that are not writable. Users should not query quota information against block-level snap file systems.
•  After the initial creation of a snapshot, it can take 4 to 6 minutes to mount the snapshot.


Remote Replication
•  When remote replication is running, if the target file system is unexported, the replication of data will stop. To ensure that replication takes place, do not unexport a file system that is the target for a replication (for example, with ibrix_crr_export -U).
•  Remote replication will fail if the target file system is unmounted. To ensure that replication takes place, do not unmount the target file system.
•  When continuous remote replication is used and file serving nodes are configured for High Availability, take the following steps after a node fails:
   1. Stop continuous remote replication.
   2. After the migration to the surviving node is complete, restart continuous remote replication to heal the replica.
   If these steps are not taken, any changes that had not yet been replicated from the failed node will be lost.
•  No alert is generated if the continuous remote replication target becomes unavailable. Confirm the connection to the target system by issuing a ping command and by inspecting ibrcfrworker.log.
•  Sparse files on the source file system are replicated unsparse on the target. That is, all blocks corresponding to the file size are allocated on the target cluster. Consequently, if the target file system is the same size as the source file system, remote replication can fail because there is no space left on the target file system. To work around this situation, if the source system contains large sparse files, be sure that the target file system is larger than the source file system and large enough to fit all files in an unsparsed manner. (A sketch for spotting sparse files follows this list.)
•  The mountpoint /mnt/ibrix is reserved for remote replication. Hiding or blocking this mountpoint by mounting anything over the parent /mnt will prevent Run-Once replication from working at all, and the initial domain scan of continuous replication will fail.
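A quick way to spot large sparse files on the source is to compare allocated blocks with apparent size using standard Linux tools (generic commands, not X9000-specific; /mnt/srcFS is a hypothetical source mountpoint):

# Apparent size (what will be allocated on the target) vs. blocks actually used
ls -l /mnt/srcFS/bigfile     # apparent size in bytes
du -k /mnt/srcFS/bigfile     # kilobytes actually allocated; a much smaller value means the file is sparse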

Data retention and validation


•  The ibrix_reten_adm command fails if the date string specified with -e contains spaces. As a workaround, use the following command to enter date strings containing spaces:
   /usr/local/ibrix/sbin/ibr_reten_adm -e expire_time -f FSNAME -P PATHLIST
•  NDMP cannot restore files from an NDMP backup of a file system enabled for data retention to a file system that is not retention-enabled. If this is attempted, the X9000 NDMP server returns errors to the user's backup application. The errors can include messages such as Error setting xattr value ibrix.virtual.retention_state, Operation not supported, or Error recovering attributes.
•  The ibrix_vs_snap command cannot delete a block snapshot file system that is enabled for data retention. Instead, use the ibrix_fs command with the -R option. For example:
   ibrix_fs -d -f block_snap_ifs2_1 -R

Segment evacuation
•  The segment evacuator cannot evacuate segments in a READONLY or BROKEN state.
•  If data is written to a very large file during evacuation of the segment containing the file, the writing process might experience an I/O error and terminate prematurely.
•  The segment evacuation process aborts if a segment contains chunk files; these files have chunks in more than one segment. You will need to move chunk files manually. The evacuation process generates a log reporting all chunk files on the segment. The log file is saved in the management console log directory (the default is /usr/local/ibrix/log) and is named Rebalance_<job ID>-<FS-ID>.info (for example, Rebalance_29-ibfs1.info). Following is an example of the log file:


070390:0518545 | <INFO> | 3075611536 | collect counters from segment 3
070391:0520272 | <INFO> | 3075611536 | segment 3 not migrated chunks 1 <this line shows the segment has 1 chunk>
070391:0520285 | <INFO> | 3075611536 | segment 3 not migrated replicas 0
070391:0520290 | <INFO> | 3075611536 | segment 3 not migrated files 0
070391:0520294 | <INFO> | 3075611536 | segment 3 not migrated directories 0
070391:0520298 | <INFO> | 3075611536 | segment 3 not migrated root 0
070391:0520302 | <INFO> | 3075611536 | segment 3 chunk: inode inum 300000017 (29B3A23C), poid hi64 300000017 (29B3A23C), primary 500000017 <this line shows information about the chunk>

Run the inum2name command to identify the symbolic name of the chunk file:
[root@centos bin]# ./inum2name --fsname=ibfs 500000017
ibfs:/sliced_dir/file3.bin

After obtaining the name of the file, use a command such as cp to move the file manually. Then run the segment evacuation process again.
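For example, one hypothetical way to relocate the chunk file identified above (the mountpoint /mnt/ibfs is an assumption, and you should first verify there is free space in the remaining segments):

# Copy the file, then replace the original so its data is reallocated outside the evacuating segment
cp /mnt/ibfs/sliced_dir/file3.bin /mnt/ibfs/sliced_dir/file3.bin.tmp
mv /mnt/ibfs/sliced_dir/file3.bin.tmp /mnt/ibfs/sliced_dir/file3.bin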

NDMP
When Symantec NetBackup is used, a full backup job fails for a file system with a large directory tree. To correct this, set TYPE=tar in the Backup Selections tab in Symantec NetBackup. This changes the backup type to tar instead of the default dump and forces NetBackup to handle the file-based NDMP file history.

Ibrix Collect
•  If collection does not start after a node recovers from a system crash, check the /var/crash/<timestamp> directory to determine whether the vmcore is complete. Ibrix Collect does not process incomplete vmcores. Also check /usr/local/ibrix/log/ibrixcollect/kdumpcollect.log for any errors.
•  If the status of a collection is Partially_collected, typically the management console service was not running or there was not enough space available in the /local disk partition on the node where the collection failed. To determine the exact cause of a failure during collection, see the following logs:
   /usr/local/ibrix/log/fusionserver.log
   /usr/local/ibrix/log/ibrixcollect/ibrixcollect.py.log
•  Email notifications do not include information about failed attempts to collect the cluster configuration.
•  In some situations, ibrix_collect successfully collects information after a system crash but fails to report a completed collection. The information is available in the /local/ibrixcollect/archive directory on one of the file serving nodes.

Migration to an agile management console configuration


When a cluster is configured with a dedicated, standard management console, the Quick Restore installation procedure installs both the Fusion Manager and the File Serving Node packages on the dedicated, standard management console and on each node of the cluster. If you then attempt to migrate to an agile management console configuration, the migration procedure will fail. To avoid the failure, uninstall the Ibrix Server package from the dedicated, standard management console, and uninstall the Ibrix Fusion Manager package from the file serving nodes. You can then perform the migration. Complete the following steps:


1. On the standard management console, check for the IbrixServer RPM:
   # rpm -qa | grep -i IbrixServer
   If the RPM is present, the output will be similar to the following:
   IbrixServer-<version>
2. If the IbrixServer RPM is present, uninstall it:
   # rpm -e IbrixServer-<version>
3. On each file serving node, check for the Ibrix Fusion Manager RPM:
   # rpm -qa | grep -i IbrixFusionManager
   If the RPM is present, the output will be similar to the following:
   IbrixFusionManager-<version>
4. If the RPM is present on the node, remove it:
   # rpm -e IbrixFusionManager-<version>

Cluster component states


•  Changes in file serving node status do not appear on the management console until 6 minutes after an event. During this time, the node status may appear to be UP when it is actually DOWN or UNKNOWN. Be sure to allow enough time for the management console to be updated before verifying node status.
•  Generally, when a vendorstorage component is marked Stale, the component has failed and is not responding to monitoring. However, if all components are marked Stale, this implies a failure of the monitoring subsystem. Temporary failures of this system can cause all monitored components to toggle from Up, to Stale, and back to Up. Common causes of failures in the monitoring system include:
   •  Reboot of a file serving node
   •  Network connectivity issues between the management console and a file serving node
   •  Resource exhaustion on a file serving node (CPU, RAM, I/O, or network bandwidth)
   While network connectivity and resource exhaustion issues should be investigated, they can occur normally under heavy workloads. In these cases, you can reduce the frequency at which vendorstorage components are monitored by using the following command:
   ibrix_fm_tune -S -o vendorStorageHardwareStaleInterval=1800
   The default value is 900; the value is in seconds. A higher value reduces the probability of all components toggling from Up to Stale and back to Up because of the conditions listed above, but increases the time before an actual component failure is reported.

HP Insight Remote Support


•  In certain cases, a large number of error messages such as the following appear in /var/log/hp-snmp-agents/cma.log:
   Feb 08 13:05:54 x946s1 cmahostd[25579]: cmahostd: Can't update OS filesys object: /ifs1 (PEER3023)
   This error message occurs because the file system exceeds <n> TB. (This situation will be corrected in a future release.) To disable logging, edit the script /opt/hp/hp-snmp-agents/server/etc/cmahostd and remove the option -l <logname>. Then restart the agents using service hp-snmp-agents restart. If you want to keep logging enabled, be aware that the log messages occur frequently, and you will need to monitor and clean up the log file regularly to avoid filling the file system.
•  If Fully Qualified Domain Name (FQDN) resolution is not configured properly for the hosts, the following error appears when hpsmhd is restarted:
   Could not reliably determine the server's fully qualified domain name, using <ip> for ServerName
   To configure FQDN resolution, use either the /etc/hosts file or the Domain Name Service.
•  If SNMP is logging excessively, add the following line to the file /etc/sysconfig/snmpd.options to stop the logging:
   OPTIONS="-LS 0-4 d -Lf /dev/null -p /var/run/snmpd.pid -a"
   To restore the default SNMP logging, comment out that line by adding a pound sign in front of it:
   # OPTIONS="-LS 0-4 d -Lf /dev/null -p /var/run/snmpd.pid -a"
   Save the file, and then restart the snmp service:
   service snmpd restart
•  If the X9720 Onboard Administrator is discovered as unknown on IRSS, enter the following information manually:
   •  Select the System Subtype as ProLiant Onboard Administrator.
   •  Enter the Entitlement serial number (this is the same as the X9000 enclosure product number).
   •  Enter the Entitlement product number (this is the same as the X9000 enclosure product number).

Upgrades
System User login credentials are backed up and restored during upgrades. However, user data is not retained; users must save and restore their individual data manually during an upgrade. If clients previously mapped a CIFS share using the hostname/FQDN, they will be prompted continuously to enter their credentials when attempting to access the CIFS share after the upgrade. (The share can be accessed successfully using the IP address.) To work around this situation, disjoin and then rejoin all file serving nodes to the Active Directory domain. This can be done using the GUI or the ibrix_auth command.

General
•  The ibrix_pv -a -o mpath command does not recognize a multipath device. The command works properly when an argument is added to ignore the standard SCSI devices. Execute the command as follows, specifying the path to your device:
   ibrix_pv -a -o accept:/dev/mapper/mpath10,~standard
•  HP-OpenIPMI does not work with the HP-Health service. For example, ProLiant health check tools such as SMH and SIM and hpasmcli commands such as SHOW POWERSUPPLY do not report errors. This situation occurs because X9000 Software requires standard RHEL IPMI. Remove the HP-OpenIPMI RPM (execute rpm -e HP-OpenIPMI), and then start the standard RHEL IPMI (execute /etc/init.d/ipmi start). The standard RHEL IPMI will then start automatically when the server is booted.
•  During server migration or failover, certain cluster events will be reported as alerts. These events are expected and normal, and are reported temporarily as a server fails over to another server.
•  Node failover does not occur when a node has a complete loss of power (for example, removing power cords or pulling a blade from a chassis). Do not test high availability in this manner.
•  The log files for user-initiated tasks (such as running ibrix_fsck and migrating or rebalancing segments) are not rotated automatically. To control the space used by these log files, you will need to maintain them manually; see the sketch after this list. The log files are located in /usr/local/ibrix/log.
•  NFS locks may return an error code (37) after an unlock operation even though the lock was correctly released. If other results are normal, you can ignore this error code.
•  Clients cannot view files containing Korean characters on an FTP share. To resolve this, set the LANG variable to ko_KR.euckr.
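One way to automate that cleanup is a standard logrotate entry (a sketch only; the file name /etc/logrotate.d/ibrix-tasks and the retention settings are assumptions, not an HP-provided configuration):

# /etc/logrotate.d/ibrix-tasks
/usr/local/ibrix/log/*.log /usr/local/ibrix/log/*.info {
    weekly          # rotate once a week
    rotate 4        # keep four rotated copies
    compress        # gzip rotated files
    missingok       # do not error if no logs are present
    notifempty      # skip empty files
}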

Documentation additions
Following are additions to the X9000 documentation.

Updates to the Statistics tool documentation


The following changes apply to the chapter Using the Statistics tool in the administrator guides:

•  The default value for the aging configuration parameter is now 24h (24 hours).
•  Steps 4 and 5 in the section Enabling collection and synchronization are now unnecessary.
•  Historical data and reports are now written to /local/statstool/histstats by default. It is no longer necessary to create a symbolic link to this folder, as described in the sections Maintaining the Statistics tool and Management console failover and the Statistics tool configuration. Also, it is no longer necessary to perform the backup steps listed under Management console failover and the Statistics tool configuration.

Installing a minor kernel update on Linux clients


The X9000 client software is upgraded automatically when you install a compatible Linux minor kernel update. If you are planning to install a minor kernel update, first run the following command to verify that the update is compatible with the X9000 client software:
/usr/local/ibrix/bin/verify_client_update <kernel_update_version>
The following example is for a RHEL 4.8 client with kernel version 2.6.9-89.ELsmp:
# /usr/local/ibrix/bin/verify_client_update 2.6.9-89.35.1.ELsmp
Kernel update 2.6.9-89.35.1.ELsmp is compatible.
If the minor kernel update is compatible, install the update with the vendor RPM and reboot the system. The X9000 client software is then automatically updated with the new kernel, and X9000 client services start automatically. Use the ibrix_version -l -C command to verify the kernel version on the client.

Installation instructions
New installations
HP X9000 File Serving Software is preinstalled on supported devices. If you need to reinstall the software, see the administrator guide for your storage system.

Upgrades
The upgrade procedure is provided in the administrator guide for your storage system. Contact HP Support for assistance with the procedure.


Compatibility/Interoperability
Note the following:

•  Every member of the cluster must be running the same version of X9000 Software.
•  The cluster must include an even number of file serving nodes.
•  All X9000 clients must be running 6.0.
