April 2014
Study guide
GX71155C
___________________________________________________________________________________
Copyright © 2014, Lenovo Corporation.
Lenovo
8001 Development Drive
Morrisville, North Carolina, 27560
Lenovo reserves the right to change product information and specifications at any time
without notice. This publication might include technical inaccuracies or typographical
errors. References herein to Lenovo products and services do not imply that Lenovo
intends to make them available in all countries. Lenovo provides this publication as is,
without warranty of any kind, either expressed or implied, including the implied
warranties of merchantability or fitness for a particular purpose. Some jurisdictions do
not allow disclaimer of expressed or implied warranties. Therefore, this disclaimer may
not apply to you.
Data on competitive products is obtained from publicly available information and is
subject to change without notice. Contact the manufacturer for the most recent
information.
Lenovo and the Lenovo logo are trademarks or registered trademarks of Lenovo
Corporation or its subsidiaries in the United States, other countries, or both. Intel and
the Intel logo are trademarks or registered trademarks of Intel Corporation or its
subsidiaries in the United States, other countries, or both. Other names and brands are
the property of their respective owners.
The following terms are trademarks, registered trademarks, or service marks of Lenovo:
Access Connections, Active Protection System, Automated Solutions, Easy Eject Utility,
Easy-Open Cover, IdeaCentre, IdeaPad, ImageUltra, Lenovo Care, MaxBright,
NetVista, New World. New Thinking, OneKey, PC As A Service, Rapid Restore, Remote
Deployment Manager, Rescue and Recovery, ScrollPoint, Secure Data Disposal,
Skylight, Software Delivery Center, System Information Gatherer, System Information
Reporter, System Migration Assistant, System x, Think Pad, ThinkAccessories,
ThinkCenter, ThinkCentre, ThinkDisk, ThinkDrive, ThinkLight, ThinkPad, ThinkPlus,
ThinkScribe, ThinkServer, ThinkStation, ThinkStore, ThinkVantage, ThinkVision,
ThinkWorld, TopSeller, TrackPoint, TransNote, UltraBase, UltraBay, UltraConnect,
UltraNav, VeriFace.
For more information, go to: http://www.lenovo.com/legal/copytrade.html.
The terms listed for the following partners are the property of their respective owners:
AMD
Intel
IBM
Microsoft
NVIDIA
Table of contents
Objectives
Servicing the IBM NeXtScale System nx360 M4 Node (5455)
    Overview
        NeXtScale System nx360 M4 Node
        Product front view
        Product inside view
        Product rear view
        Product description
    IBM NeXtScale System nx360 M4 features and specifications
    System configurations and diagrams
        Memory considerations
        HDDs considerations
        IBM NeXtScale Storage Native Expansion (NeX) Tray
        Connecting to the IBM NeXtScale Storage Native Expansion (NeX) Tray
        IBM NeXtScale PCIe Native Expansion (NeX) Tray
        Connecting to the IBM NeXtScale PCIe Native Expansion (NeX) Tray
        Diagrams
    Problem determination and troubleshooting
    Helpful links
Objectives
After completing this course, you will be able to:
1. Provide an overview of the IBM NeXtScale System n1200 Enclosure and nx360
M4 Node.
2. Describe the features of the IBM NeXtScale System n1200 Enclosure and nx360
M4 Node.
3. Describe the components within the IBM NeXtScale System Enclosure and Node
and their locations.
4. Describe the problem determination steps and explain how to troubleshoot the
IBM NeXtScale system.
Product description
The NeXtScale System Enclosure is an outer chassis that houses x86 Intel compute
nodes. Unlike the BladeCenter and PureSystems offerings, the NeXtScale System has
no internal I/O switches; standard external switches are used instead. Aside from the
FPC, which manages the fans and power supplies, the system has no additional
management mechanisms.
Figure 6 shows the IBM NeXtScale System 1U, half-wide compute node.
All power supply modules are combined into a single power domain within the
enclosure, which distributes power to each of the compute nodes and ancillary
components through the enclosure midplane. The midplane is a highly reliable design
with no active components. Each power supply is designed to provide fault isolation and
is hot swappable.
Figure 8: IBM NeXtScale System n1200 Enclosure rear power supply, fan, and FPC module
locations
The FPC module is the management component that controls power for the entire
chassis and grants power permission to each compute node. The FPC web interface
can be accessed through an Ethernet connection. The default IP address is
192.168.0.100. The default user name is USERID and the default password is
PASSW0RD (both in capital letters, where 0 is the number zero).
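Before the FPC web interface will answer, the workstation must sit on the same subnet as the FPC's default address. A minimal Python sketch of that check follows; the /24 prefix and the helper name are assumptions for illustration, and your site may have changed the defaults.

```python
import ipaddress

# Factory defaults as stated on the FPC information tag.
FPC_DEFAULT_IP = "192.168.0.100"
FPC_DEFAULT_USER = "USERID"        # all capital letters
FPC_DEFAULT_PASSWORD = "PASSW0RD"  # 0 is the number zero

def can_reach_default_fpc(workstation_ip: str, prefix: int = 24) -> bool:
    """Return True if the workstation address is in the FPC's default
    subnet, so a direct Ethernet connection should work."""
    net = ipaddress.ip_network(f"{FPC_DEFAULT_IP}/{prefix}", strict=False)
    return ipaddress.ip_address(workstation_ip) in net
```

For example, a workstation configured as 192.168.0.50/24 can browse to the FPC, while one on 10.x.x.x cannot without rerouting.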
Figure 9 shows the IBM NeXtScale System n1200 Enclosure FPC information tag at
the rear of the enclosure, next to the FPC module.
Figure 10 shows the IBM NeXtScale System n1200 Enclosure rear components
number tag and screwdriver at the rear of the enclosure, between fan 1 and fan 7.
The screwdriver is used for disassembling the HDD cage that is shown in Figure 11.
Figure 12 shows the FPC module with the following LEDs, controls, and connectors.
Power-on LED: When this LED is lit (green), it indicates that the fan and power
controller has power on.
Heartbeat LED: When this LED is lit (green), it indicates that the fan and power
controller is actively controlling the chassis.
Locator LED: When this LED is lit (blue), it indicates the chassis location in a rack.
Use the FPC web interface to perform chassis management functions from a web
browser. It provides the following functions:
Node status report.
Chassis power and fan status report.
Chassis power and fan configuration management.
Chassis VPD information report.
Chassis event log display, backup, and restore.
FPC module management, settings, backup, reset, and restore.
Note that the FPC currently supports only the following browsers:
IE6 32-bit, IE7 32/64-bit, IE8 32/64-bit, Firefox 2.0.x 32/64-bit, Firefox 3.0.5 32/64-bit,
Firefox 4.0.1 32/64-bit, and Firefox 21 or later.
Note: Mixing 900 W and 1300 W power supplies in the same chassis is
prohibited, and the 1300 W power supply supports only high-line Vin (ac 220 V – 240
V).
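The two rules in the note above can be expressed as a configuration check. The following is a hypothetical helper, not an actual FPC API; the function name and argument shapes are assumptions for illustration.

```python
def validate_psu_config(psu_watts, vin_ac):
    """Sketch of the chassis PSU rules: never mix 900 W and 1300 W units,
    and 1300 W units require high-line input (ac 220-240 V)."""
    if len(set(psu_watts)) > 1:
        # Mixed wattages in one chassis are prohibited.
        raise ValueError("mixing 900 W and 1300 W power supplies is prohibited")
    if 1300 in psu_watts and not (220 <= vin_ac <= 240):
        raise ValueError("1300 W PSUs support only high-line Vin (ac 220-240 V)")
    return True
```

So a chassis of six 1300 W PSUs on 230 V ac passes, while a mixed set or a 1300 W set on 110 V ac is rejected.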
Summary: The Summary page displays the overall status and chassis information
including Chassis Front Overview and Chassis Rear Overview.
Power and cooling: There are five sections under the Power tab, as shown in Figure 17.
Power Overview: Displays the chassis-level power consumption, node-level power
consumption, and subsystem power consumption, including power supplies and fans.
PSU Configuration: Lets the user set up the power supply redundancy mode, as shown
in Figure 18.
Redundancy Mode: There are three different redundancy modes to choose from.
No Redundancy: The system could be throttled or shut down if one or more power
supplies fail.
N+1: One of the properly installed PSUs serves as a redundant power supply, so
there is no impact to system operation or performance if any one of the PSUs fails,
provided that Oversubscription mode is not enabled.
N+N: Half of the properly installed PSUs serve as redundant power supplies, so
there is no impact to system operation or performance if up to half of the PSUs fail,
provided that Oversubscription mode is not enabled. For example, with six PSUs
properly installed, three PSUs could fail without any impact when N+N is applied
and Oversubscription is not.
PSU Oversubscription Mode: This function lets users take advantage of the extra
power from the redundant power supplies while all power supplies are in good
condition. If redundancy is lost, the PSUs shut down within one second unless the
system power load is corrected within that time limit; the FPC responds to such a
power emergency by throttling the nodes. Chassis performance can therefore be
affected even in a redundancy mode if Oversubscription is also enabled. Note that
Oversubscription applies only with the N+1 or N+N redundancy mode. When
Oversubscription is enabled with an N+1 PSU configuration, the total power available
is equivalent to No Redundancy mode.
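The redundancy and Oversubscription arithmetic above can be sketched in a few lines of Python. The N+N-with-Oversubscription behavior is an assumption extrapolated from the N+1 case described in the text.

```python
PSU_WATTS = 1300   # example: a chassis of high-line 1300 W supplies

def usable_psus(installed, mode="N+N", oversubscription=False):
    """How many PSUs' worth of power the chassis can draw (sketch).
    mode is one of "none", "N+1", "N+N"."""
    if mode == "none":
        reserved = 0
    elif mode == "N+1":
        reserved = 1                 # one PSU held as redundant
    elif mode == "N+N":
        reserved = installed // 2    # half of the PSUs held as redundant
    else:
        raise ValueError(f"unknown mode: {mode}")
    if oversubscription and mode in ("N+1", "N+N"):
        reserved = 0   # healthy redundant capacity becomes usable
    return installed - reserved

# Six PSUs, N+N: three can fail with no impact, so three are usable.
print(usable_psus(6, "N+N") * PSU_WATTS)                        # 3900 W
# N+1 with Oversubscription is equivalent to No Redundancy:
print(usable_psus(6, "N+1", oversubscription=True) * PSU_WATTS) # 7800 W
```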
Power Cap: Lets users set up power capping and power saving.
Figure 19 shows the power capping policy of the FPC module. Users can choose either
chassis-level or node-level capping/saving through the power capping policy. Power
capping allows users to set a wattage limit on power consumption.
Power Restore Policy: Lets users enable the power restore policy, as shown in Figure
20. When the power restore policy is enabled, the FPC remembers which nodes were
powered on before ac power was abruptly lost and automatically turns those nodes
back on when ac power is restored.
Cooling: There are three sections under the Cooling tab, as shown in Figure 21.
Cooling Overview: System fan speed is displayed in rpm. An error-log entry is asserted
when a fan speed falls below the lower critical threshold (1472 rpm).
Figure 22 shows the FPC Cooling overview. The NeXtScale System n1200 Enclosure
system fan is equipped with a dual motor and normally operates at 2000 to 13000 rpm.
PSU Fan Speed: Shows the power supply fan speed. PSU fans normally operate
at 5500 to 23000 rpm.
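The fan thresholds just described can be captured in a short sketch. The "out-of-range" label for readings above the critical threshold but outside the normal band is illustrative, not FPC terminology.

```python
LOWER_CRITICAL_RPM = 1472        # system-fan error-log threshold (FPC cooling page)
SYSTEM_FAN_RANGE = (2000, 13000) # normal system-fan operating range, rpm
PSU_FAN_RANGE = (5500, 23000)    # normal PSU-fan operating range, rpm

def fan_status(rpm, normal_range=SYSTEM_FAN_RANGE):
    """Classify a fan-speed reading against the thresholds above (sketch)."""
    if rpm < LOWER_CRITICAL_RPM:
        return "critical"        # FPC asserts an error-log entry below this
    low, high = normal_range
    return "normal" if low <= rpm <= high else "out-of-range"
```

A reading of 1000 rpm on a system fan would be classified as critical, while 8000 rpm falls in the normal band.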
Acoustic Mode: Lets users set the acoustic mode. There are three acoustic modes to
choose from, as shown in Figure 23.
Event Log: This tab lets users view the SEL (System Event Log) and perform backup,
restore, and restore-to-default operations. There are two sections, as shown in
Figure 25.
Event Log: The SEL logs chassis-level events so that users can see what is going on
in the chassis. A maximum of 512 event entries can be logged.
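A capped log like the SEL can be modeled with a bounded queue. Whether the real FPC discards the oldest entry or stops logging once full is not stated here, so the age-out behavior below is an assumption for illustration.

```python
from collections import deque

MAX_SEL_ENTRIES = 512   # SEL capacity stated above

# Bounded event log: once full, each new entry evicts the oldest one.
sel = deque(maxlen=MAX_SEL_ENTRIES)
for i in range(600):
    sel.append(f"chassis event {i}")
# The log never exceeds 512 entries; the first 88 events have aged out.
```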
USB Recovery: A USB storage device is used by the FPC to preserve or migrate the
SEL and user configurations (shown in Figure 12). This USB key must be mounted
on the FPC board to function correctly. When no data is stored on the USB key,
factory default settings are applied for all configurations. User configurations are
automatically backed up to the USB key when they are set or modified.
Figure 26 shows the USB Recovery functions. There are three functions on the USB
Recovery page.
Backup: Backs up the SEL and chassis configurations to the USB key. Chassis
configurations include the power supply redundancy policy, oversubscription mode,
chassis- or node-level capping/saving settings, acoustic mode setting, and power
restore policy.
Restore: Restores and applies the configurations that are stored on the USB key to
the FPC.
Restore to Default: Returns all settings to their factory defaults.
Configuration: The Configuration tab contains settings that are used to manage the
FPC module. There are eight sections, as shown in Figure 27.
SMTP (to configure e-mail alerts and SMTP authentication; shown in Figure 29).
SNMP (to configure the IPv4 and IPv6 destination lists; shown in Figure 30).
PEF (Platform Event Filter) (configured SMTP and SNMP traps allow the user to
monitor the chassis for selected events; SMTP/SNMP trap event types can be set on
the PEF page).
Network Configuration (allows the user to modify networking parameters).
Time Setting (to configure the system time).
User Account (to configure user roles).
Web Service (lets the user configure different HTTP/HTTPS ports for connection and
the web page timeout period).
Figure 35: IBM NeXtScale System nx360 M4 Node inside (exploded view)
Product description
The NeXtScale System nx360 M4 compute node is a newly designed server based on
the Romley platform with Ivy Bridge-EP processors. It supports up to two 130 W CPUs
and eight 16 GB DIMMs for a total memory capacity of 128 GB. It is designed to support
an onboard mezzanine card and one FHHL (Full Height, Half Length) PCIe slot.
Figure 39 shows the top cover ruler of the NeXtScale System nx360 M4 Node, which is
used to indicate how far the node has been pulled out. The numbers on the ruler range
from seven to one, where one indicates the end of the node.
Figure 40: IBM NeXtScale System nx360 M4 Node, KVM console cable
Figure 41: IBM NeXtScale System nx360 M4 Node, KVM console cable front view
Memory considerations
The NeXtScale nx360 M4 node supports only one DIMM per channel. DIMM
population should follow the sequence shown in Figure 42.
HDDs considerations
The NeXtScale nx360 M4 supports three types of HDD cages. Figure 44 shows the
HDD numbering of the different HDD cages.
The NeXtScale nx360 M4 supports 2.5-inch and 3.5-inch HDDs and 1.8-inch SSDs.
The connector type of the cable assembly also differs depending on whether it
connects to the planar, a PCI card, or the internal storage tray (3.5-inch HDDs only).
Table 3 shows the three different connector types for the HDD cage when connecting
to the planar, a PCI card, or the storage tray (3.5-inch HDDs only). The internal HDD
numbering of the 3.5-inch HDD cage changes from HD0 to HD7 when connecting to
the storage tray.
Cage/cable assembly    Connectors
Figures 47, 48, and 49 show the IBM NeX Tray top view, the top view with seven
3.5-inch HDDs, and the bottom view.
Figure 47: IBM NeXtScale Storage Native Expansion (NeX) Tray top view
Figure 48: IBM NeXtScale Storage Native Expansion (NeX) Tray top view with 7 x 3.5-inch HDDs
Figure 49: IBM NeXtScale Storage Native Expansion (NeX) Tray bottom view
Place the IBM NeXtScale Storage Native Expansion (NeX) Tray on top of the
compute node, align the guide pin of the NeX tray with the nx360 M4 compute
node, and then close up the compute node with the IBM NeXtScale Storage Native
Expansion (NeX) Tray as shown in Figure 51.
Connect the signal cable to the socket on the PCI riser card assembly in the
compute node, and route the signal cable as shown in Figure 52.
Figure 52: The NeX tray signal cable and cable routing
Figure 53: Connecting the cables in the NeX tray to the compute node
Route the cables properly to avoid interference. Install the HDDs to the NeX tray if
required.
Place the top cover on the IBM NeXtScale Storage Native Expansion (NeX) Tray
and close it.
Figure 54 shows the IBM NeXtScale Storage Native Expansion (NeX) Tray connecting
to the NeXtScale nx360 M4 node.
Figure 54: IBM NeXtScale Storage Native Expansion (NeX) Tray connecting to the NeXtScale
nx360 M4 node
Figure 55 shows the release latch of the IBM NeXtScale Storage Native Expansion
(NeX) Tray.
Figure 55: IBM NeXtScale Storage Native Expansion (NeX) Tray release latch
The IBM NeXtScale Storage Native Expansion (NeX) Tray has its own light path to
identify the location of failed HDDs.
Figure 56 shows the HDD numbering when the IBM NeXtScale Storage Native
Expansion (NeX) Tray connects to the NeXtScale nx360 M4 node.
Figure 56: HDD numbering of the IBM NeXtScale Storage Native Expansion (NeX) Tray when
connected to the nx360 M4 node
Figure 57 shows the HDD error indicators and light path button of the IBM NeXtScale
Storage Native Expansion (NeX) Tray. Press the light path button after removing the
IBM NeXtScale Storage Native Expansion (NeX) Tray.
Figure 57: IBM NeXtScale Storage Native Expansion (NeX) Tray HDDs error indicator and light
path
Figure 58: IBM NeXtScale PCIe Native Expansion (NeX) Tray front and rear view
Figures 59 and 60 show the GPU tray top view and the bottom view.
Figure 59: IBM NeXtScale PCIe Native Expansion (NeX) Tray top view
Figure 60: IBM NeXtScale PCIe Native Expansion (NeX) Tray bottom view
Figure 61: IBM NeXtScale PCIe Native Expansion (NeX) Tray riser 1
Figure 62: IBM NeXtScale PCIe Native Expansion (NeX) Tray riser 2
Figure 63: IBM NeXtScale PCIe Native Expansion (NeX) Tray riser 2
To install the GPU tray into the compute node, follow these steps:
Remove the top cover and PCI riser bracket of the NeXtScale nx360 M4 compute
node.
Remove all PCI riser-cage assemblies from the GPU tray and set them aside.
Place the GPU tray on top of the compute node, align the guide pin of the GPU
tray with the nx360 M4 compute node, and then close up the compute node with the
IBM GPU Tray as shown in Figure 64.
Figure 64: IBM NeXtScale nx360 M4 node connects to the GPU tray
If PCI adapters are available, install the adapters into the riser-cage assembly by
following the rules shown in Table 4.
Install the PCI riser-cage assembly into the compute node and connect the related
cables as shown in Figure 65.
Figure 65: IBM NeXtScale PCIe Native Expansion (NeX) Tray with 2 GPU cards
Check that both Riser 1 and Riser 2 are properly connected to the compute node, and
then close the top cover as shown in Figure 66.
Figure 66: IBM NeXtScale PCIe Native Expansion (NeX) Tray connecting to the NeXtScale nx360
M4 node
Note: A 1300 W power supply is required when the IBM NeXtScale PCIe Native
Expansion (NeX) Tray is in the configuration.
Diagrams
Descriptions for the items listed in Figure 67 are as follows:
DMI2 4 Gbps: The DMI bus is similar to a PCIe bus and is used for communication
between the CPU and the PCH.
Helpful links
Table 1: Prerequisite courses
YouTube:
n1200: http://www.youtube.com/channel/UCo3OO3gVr1ScdyDqO62G4Hg/videos?view=1&flow=list
nx360: https://www.youtube.com/channel/UC53n0DrNorOmj6oXXrtuucA/videos?flow=list&view=1
YouKu:
n1200: http://u.youku.com/IBMNeXtScalen1200
nx360: http://u.youku.com/IBMNeXtScalenx360M4
Summary
This course enabled you to:
1. Describe the features of the IBM NeXtScale System n1200 Enclosure, nx360 M4
Node and the Storage Native Expansion (NeX) Tray
2. Describe the components within the IBM NeXtScale System n1200 Enclosure,
nx360 M4 Node, the Storage Native Expansion (NeX) Tray and their locations
3. Highlight some similarities and differences between the IBM Flex System and the
IBM NeXtScale System
4. Describe the new features of the FPC module
5. Describe the problem determination steps and explain how to troubleshoot the
IBM NeXtScale System n1200 Enclosure, nx360 M4 Node and the Storage
Native Expansion (NeX) Tray.