
CX3 Model 20 Systems
Hardware and Operational Overview
January 23, 2007

This document describes the hardware, powerup and powerdown sequences, and status indicators for CX3 model 20 systems, which are members of the CX3 UltraScale series of storage systems.

Major topics are:

- Storage-system major components
- Storage processor enclosure (SPE3)
- Disk-array enclosures (DAE3Ps)
- Standby power supplies (SPSs)
- Powerup and powerdown sequence
- Status lights (LEDs) and indicators

Storage-system major components

The storage system consists of:
- A storage processor enclosure (SPE3) and two standby power supplies (SPSs)
- One Fibre Channel disk-array enclosure (DAE) with a minimum of five disk drives
- Optional DAEs

Figure 1: Storage system (DAE3P, SPS, and SPE3)

The high-availability features for the storage system include:
- Redundant storage processors (SPs)
- Standby power supplies (SPSs)
- Redundant power/cooling modules

The SPE3 is a highly available storage enclosure with redundant power and cooling. It is 1U high (a U is a NEMA unit; each unit is 1.75 inches) and includes two storage processors (SPs). Table 1 gives the number of Fibre Channel and iSCSI front-end I/O ports and Fibre Channel back-end disk ports supported by each CX3 model 20 system.


Table 1: Front-end and back-end ports (per SP; counts follow from Figures 3, 4, and 5)

  Storage system   Fibre Channel          iSCSI                  Fibre Channel
                   front-end I/O ports    front-end I/O ports    back-end disk ports
  CX3-20           2                      0                      1
  CX3-20c          2                      4                      1
  CX3-20f          6                      0                      1

The storage system supports 4 Gb/s Fibre Channel operation from its
front-end host I/O ports through its back-end disk ports. The host I/O
front-end ports can operate at up to 4 Gb/s and the back-end ports
can operate at 2 or 4 Gb/s. The storage system senses the speed of
the incoming host I/O and sets the speed of the front-end ports to the
lowest speed it senses. The speed of each back-end port is determined
by the speed of the DAEs connected to it.
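The speed rules above can be sketched as a minimal model. This is an illustration only; the functions and names are hypothetical, not EMC software:

```python
# Illustrative sketch (not EMC software) of the link-speed rules described
# above. Function names and structure are assumptions for illustration.

FRONT_END_MAX_GBPS = 4   # front-end host ports operate at up to 4 Gb/s
BACK_END_SPEEDS = (2, 4) # back-end ports operate at 2 or 4 Gb/s

def front_end_speed(sensed_host_speeds):
    """Front-end ports are set to the lowest speed sensed from host I/O."""
    return min(min(sensed_host_speeds), FRONT_END_MAX_GBPS)

def back_end_speed(dae_speeds):
    """A back-end bus runs at the speed of the DAEs connected to it; since
    2 Gb/s components cannot operate on a 4 Gb/s bus, one 2 Gb/s DAE forces
    the whole bus to 2 Gb/s."""
    speed = min(dae_speeds)
    assert speed in BACK_END_SPEEDS
    return speed

print(front_end_speed([4, 2, 4]))  # -> 2
print(back_end_speed([4, 4, 2]))   # -> 2
```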
The storage system requires at least five disks and works in conjunction
with one or more disk-array enclosures (DAEs) to provide terabytes of
highly available disk storage. A DAE is a basic disk enclosure without
an SP. SPE3 systems include a 4 Gb/s point-to-point DAE3P, which
supports up to 15 Fibre Channel disks. Each DAE3P connects to the
SPE3 or another DAE with simple FC-AL serial cabling.
The storage system supports a total of 120 disks on its single back-end
bus. You can place the disk enclosures in the same cabinet as the SPE,
or in one or more separate cabinets. High-availability features are
standard.


Storage processor enclosure (SPE3)

The SPE3 components include:
- A sheet-metal enclosure with a midplane and front bezel
- Two storage processors (SPs)
- Four power supply/system cooling modules (referred to as power/cooling modules)

Figure 2 shows the SPE3 components. Details on each component follow the figure. If the enclosure provides slots for two identical components, the component in slot A is called component-name A. The second component is called component-name B. For increased clarity, the following figures depict the SPE3 outside of the rack cabinet. Your SPE3 may be installed in a rackmount cabinet.
Figure 2: SPE3 outside the cabinet, front and rear views (SPS A and SPS B, storage processors A and B, and power/cooling modules)

Midplane
The midplane distributes power and signals to all the enclosure components. The power/cooling modules and storage processors (SPs) plug directly into midplane connectors.

Front bezel
The front bezel has a key lock and two latch release buttons. Pressing the latch release buttons releases the bezel from the enclosure.

Storage processors (SPs)

The SP is the SPE3's intelligent component and acts as the control center. Each SP includes:


- A single-processor CPU module that includes:
  - 2 GB of DDR DIMM (double data rate, dual in-line memory module) memory
  - Two small-form-factor pluggable (SFP) shielded Fibre Channel connectors (optical SFP) for server I/O (connection to an FC switch or server HBA)
  - One SFP shielded Fibre Channel connector (copper SFP) for disk connection (BE 0)
  - One serial port (micro DB9 connector) for connection to a standby power supply (SPS)
  - One 10/100 Ethernet LAN port (RJ45 connector) for management
  - One serial port (micro DB9 connector) for RS-232 connection to a service console
  - One 10/100 Ethernet LAN port (RJ45 connector) for service
- For a CX3-20c: One I/O module with four 10/100/gigabit Ethernet ports (RJ45 connectors) for iSCSI I/O to a network switch or server NIC or HBA
- For a CX3-20f: One I/O module with four additional small-form-factor pluggable (SFP) shielded Fibre Channel connectors (optical SFP) for server I/O (connection to an FC switch or server HBA)

Figure 3, Figure 4, and Figure 5 show the locations of the connectors on the rear of the SPs.

Figure 3: Connectors on the rear of a CX3-20 SP (AC cord, SPS port, service-only and management LAN ports, power and fault LEDs, back-end Fibre Channel port BE 0, and front-end Fibre Channel ports 0 and 1)


Figure 4: Connectors on the rear of a CX3-20c SP (AC cord, SPS port, service-only and management LAN ports, power and fault LEDs, iSCSI ports 0 through 3, back-end Fibre Channel port BE 0, and front-end Fibre Channel ports 4 and 5)

Figure 5: Connectors on the rear of the CX3-20f SPs (SP A and SP B). Note: ports and LEDs are the same for SP A and SP B.

Power/cooling modules
Each of the four power/cooling modules integrates one independent power supply and one blower into a single module. The power supply in each module is an auto-ranging, power-factor-corrected, multi-output, offline converter.
The four power/cooling modules (A0, A1, B0, and B1) are located in front of the SPs. A0 and A1 share load currents and provide power and cooling for SP A, and B0 and B1 share load currents and provide power and cooling for SP B. A0 and B0 share a line cord, and A1 and B1 share a line cord.
An SP or power/cooling module with power-related faults does not adversely affect the operation of any other component. If one power/cooling module fails, the others take over. If both power/cooling modules for an SP fail, write caching is disabled.

SPE3 field-replaceable units (FRUs)

The following are field-replaceable units (FRUs) that you can replace while the SPE3 is powered up:
- Storage processors (SPs) (CX3-20)
- CPU modules (CX3-20c and CX3-20f)
- Memory modules (DIMMs)
- I/O modules (CX3-20c and CX3-20f)
- Small form-factor pluggable (SFP) modules, which plug into the Fibre Channel front-end port slots
- Power/cooling modules

You or your service provider can replace a failed power/cooling module or SFP module. A service provider must replace the other FRUs if they fail.


Disk-array enclosures (DAE3Ps)


DAE3P UltraPoint (sometimes called point-to-point) disk-array
enclosures are highly available, high-performance, high-capacity
storage-system components that use a Fibre Channel Arbitrated Loop
(FC-AL) as the interconnect interface. A disk enclosure connects to
another DAE3P or an SPE3 and is managed by storage-system software
in RAID (redundant array of independent disks) configurations.
The enclosure is only 3U (5.25 inches) high, but can include 15 hard
disk drive/carrier modules. Its modular, scalable design allows for
additional disk storage as your needs increase.
A DAE3P includes either high-performance Fibre Channel disk
modules or economical SATA (Serial Advanced Technology
Attachment, SATA II) disk modules. You can integrate and connect
FC and SATA enclosures within a storage system, but you cannot mix
SATA and Fibre Channel components within a DAE3P. The enclosure
operates at either 2 or 4 Gb/s bus speed (2 Gb/s components, including
disks, cannot operate on a 4 Gb/s bus). Simple serial cabling provides
easy scalability. You can interconnect disk enclosures to form a large
disk storage system; the number and size of buses depends on the
capabilities of your storage processor. Highly available configurations
require at least one pair of physically independent loops (for example,
A and B sides of bus 0, sharing the same dual-port disks). Other
configurations use two, three, four, or more buses. You can place the
disk enclosures in the same cabinet, or in one or more separate cabinets.
High-availability features are standard.
The DAE3P includes the following components:
- A sheet-metal enclosure with a midplane and front bezel
- Two FC-AL link control cards (LCCs) to manage disk modules
- As many as 15 disk modules
- Two power supply/system cooling modules (referred to as power/cooling modules)

Any unoccupied disk module slot has a filler module to maintain air flow.
The power supply and system cooling components of the power/cooling modules function independently of each other, but the assemblies are packaged together into a single field-replaceable unit (FRU).
The LCCs, disk modules, power supply/system cooling modules, and filler modules are field-replaceable units (FRUs), which can be added or replaced without hardware tools while the storage system is powered up.
Figure 6 shows the disk enclosure components. Where the enclosure provides slots for two identical components, the components are called component-name A or component-name B, as shown in the illustrations. For increased clarity, the following figures depict the disk enclosure outside of the rack or cabinet. Your disk enclosure may be installed in a rackmount cabinet.

Figure 6: DAE3P outside the cabinet, front and rear views (power/cooling modules A and B, link control cards A and B, power LED (green or blue), fault LEDs (amber), and disk activity LED (green))

As shown in Figure 7, an enclosure address (EA) indicator is located on each LCC. (The EA is sometimes referred to as an enclosure ID.) Each link control card (LCC) includes a bus (loop) identification indicator. The storage processor initializes the bus ID when the operating system loads.


Figure 7: Disk enclosure bus (loop) and enclosure address indicators, with the EA selection button (press to change the EA)

The enclosure address is set at installation. Disk module IDs are numbered left to right (looking at the front of the unit) and are contiguous throughout a storage system: enclosure 0 contains modules 0-14; enclosure 1 contains modules 15-29; enclosure 2 contains modules 30-44, and so on.
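The contiguous numbering above can be sketched as a small model. This is an illustration only; the function names are hypothetical, not part of any EMC tool:

```python
# Illustrative sketch (not EMC software) of the contiguous disk module
# numbering described above: 15 slots per enclosure, IDs run left to right.

SLOTS_PER_ENCLOSURE = 15

def module_id(enclosure_address, slot):
    """Enclosure 0 holds modules 0-14, enclosure 1 holds 15-29, and so on."""
    return enclosure_address * SLOTS_PER_ENCLOSURE + slot

def locate(module):
    """Inverse mapping: module ID -> (enclosure address, slot)."""
    return divmod(module, SLOTS_PER_ENCLOSURE)

print(module_id(2, 0))  # first module of enclosure 2 -> 30
print(locate(44))       # last module of enclosure 2 -> (2, 14)
```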

Midplane
A midplane between the disk modules and the LCC and power/cooling modules distributes power and signals to all components in the enclosure. The LCCs, power/cooling modules, and disk drives (the enclosure's field-replaceable units, or FRUs) plug directly into the midplane.

Front bezel
The front bezel has a locking latch and an electromagnetic interference
(EMI) shield. You must remove the bezel to remove and install drive
modules. EMI compliance requires a properly installed front bezel.

Link control cards (LCCs)


An LCC supports and controls one Fibre Channel bus and monitors
the DAE3P.


Figure 8: LCC connectors (PRI and EXP) and status LEDs (power LED (green), fault LED (amber), and primary and expansion link active LEDs)

A blue link active LED indicates a DAE3P enclosure operating at 4 Gb/s; the link active LEDs are green in DAE3Ps operating at 2 Gb/s.

The LCCs in a DAE3P connect to other Fibre Channel devices (processor enclosures, other DAEs) with twin-axial copper cables. The cables connect LCCs in a storage system together in a daisy-chain (loop) topology.
Internally, each DAE3P LCC uses FC-AL protocols to emulate a loop; it connects to the drives in its enclosure in a point-to-point fashion through a switch. The LCC independently receives and electrically terminates incoming FC-AL signals. For traffic from the system's storage processors, the LCC switch passes the input signal from the primary port (PRI) to the drive being accessed; the switch then forwards the drive's output signal to the expansion port (EXP), where cables connect it to the next DAE in the loop. (If the target drive is not in the LCC's enclosure, the switch passes the input signal directly to the EXP port.) At the unconnected expansion port (EXP) of the last LCC, the output signal (from the storage processor) is looped back to the input signal (to the storage processor). For traffic directed to the system's storage processors, the switch passes input signals from the expansion port directly to the output signal of the primary port.
Each LCC independently monitors the environmental status of the entire enclosure, using a microcomputer-controlled FRU (field-replaceable unit) monitor program. The monitor communicates status to the server, which polls disk enclosure status. LCC firmware also controls the LCC port bypass circuits and the disk-module status LEDs.
LCCs do not communicate with or control each other.
Captive screws on the LCC lock it into place to ensure proper
connection to the midplane. You can add or replace an LCC while the
disk enclosure is powered up.
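The downstream routing behavior described above can be sketched as a minimal model. This is an illustration only, not EMC firmware; the function names and path representation are assumptions:

```python
# Illustrative sketch (not EMC firmware) of the per-LCC switch routing
# described above, modeled as signal paths along a daisy chain of DAEs.

def route_downstream(lcc_drives, target_drive):
    """Traffic from the SP arrives on PRI. If the target drive is in this
    LCC's enclosure, the switch sends the signal through the drive before
    forwarding it out EXP; otherwise it passes PRI straight to EXP."""
    if target_drive in lcc_drives:
        return ["PRI", f"drive {target_drive}", "EXP"]
    return ["PRI", "EXP"]

def loop_path(enclosures, target_drive):
    """Path of an SP request through a chain of DAE3P LCCs, stopping once
    the enclosure holding the target drive has been reached."""
    path = []
    for drives in enclosures:
        path.append(route_downstream(drives, target_drive))
        if target_drive in drives:
            break
    return path

# Three enclosures of 15 drives each; drive 20 lives in enclosure 1.
chain = [range(0, 15), range(15, 30), range(30, 45)]
print(loop_path(chain, 20))
# -> [['PRI', 'EXP'], ['PRI', 'drive 20', 'EXP']]
```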

Disk modules
Each disk module consists of one disk drive in a carrier. You can
visually distinguish between module types by their different latch
and handle mechanisms and by type, capacity, and speed labels on
each module. An enclosure can include Fibre Channel or SATA disk
modules, but not both types. You can add or remove a disk module
while the DAE3P is powered up, but you should exercise special care
when removing modules while they are in use. Drive modules are
extremely sensitive electronic components.
Disk drives
The DAE3P supports Fibre Channel disk drives that conform to FC-AL specifications and 2 or 4 Gb/s Fibre Channel interface standards, and supports dual-port FC-AL interconnects through the two LCCs. A DAE3P supports 2 Gb/s drives only if the entire back-end bus that contains the drives is operating at 2 Gb/s. SATA disk drives conform to Serial ATA II Electrical Specification 1.0 and include dual-port SATA interconnects; a paddle card on each drive converts the assembly to Fibre Channel operation. The disk module slots in the enclosure accommodate 2.54 cm (1-in) by 8.89 cm (3.5-in) disk drives.

Drive carrier
The disk drive carriers are metal and plastic assemblies that provide smooth, reliable contact with the enclosure slot guides and midplane connectors. Each carrier has a handle with a latch and spring clips. The latch holds the disk module in place to ensure proper connection with the midplane. Disk drive activity/fault LEDs are integrated into the carrier.

Power/cooling modules
The power/cooling modules are located above and below the LCCs.
The units integrate independent power supply and dual-blower
cooling assemblies into a single module.
Each power supply is an auto-ranging, power-factor-corrected,
multi-output, offline converter with its own line cord. Each supply
supports a fully configured DAE3P and shares load currents with the
other supply. The drives and LCCs have individual soft-start switches
that protect the disk drives and LCCs if they are installed while the
disk enclosure is powered up. A FRU (disk, LCC, or power/cooling
module) with power-related faults does not adversely affect the
operation of any other FRU.
The enclosure cooling system includes two dual-blower modules.
If one blower fails, the others will speed up to compensate. If two
blowers in a system (both in one power/cooling module, or one in each
module) fail, the DAE3P goes offline within two minutes.
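The cooling-fault policy described above can be sketched as a minimal model. This is an illustration only; the function is hypothetical, not EMC firmware:

```python
# Illustrative sketch (not EMC firmware) of the DAE3P cooling-fault policy
# described above: two dual-blower power/cooling modules per enclosure.

def cooling_state(failed_blowers_a, failed_blowers_b):
    """One failed blower is tolerated (the remaining blowers speed up);
    two failed blowers anywhere in the enclosure, whether both in one
    module or one in each, take the DAE3P offline within two minutes."""
    total_failed = failed_blowers_a + failed_blowers_b
    if total_failed == 0:
        return "normal"
    if total_failed == 1:
        return "degraded: remaining blowers speed up"
    return "offline within two minutes"

print(cooling_state(1, 0))  # -> degraded: remaining blowers speed up
print(cooling_state(1, 1))  # -> offline within two minutes
```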


Standby power supplies (SPSs)

Two 1U 1000-watt DC SPSs provide backup power for one SPE3 and the first (enclosure 0, bus 0) DAE adjacent to it. The SPSs allow write caching, which would otherwise risk data loss during a power failure, to continue. A faulted or not fully charged SPS disables the write cache. Each SPS rear panel has one AC inlet power connector with a power switch, AC outlets for the SPE3 and the first DAE (EA 0, bus 0) respectively, and one phone-jack type connector for connection to an SP. Figure 9 shows the SPS connectors.

Figure 9: 1000 W SPS connectors (AC power connector, power switch, and SPE and SP interface connectors) and status LEDs (active (green), on battery (amber), fault (amber), and replace battery (amber))

A service provider can replace an SPS while the storage system is powered up.

Powerup and powerdown sequence

The SPE3 and DAE3P do not have power switches.

Powering up the storage system

1. Verify the following:
   - Master switch/circuit breakers for each cabinet/rack power strip are off.
   - The two power cords for the SPE3 are plugged into the SPSs and the power cord retention bails are in place.
   - Serial connections between the SPs and the SPSs are in place.
   - Power cords for the first DAE3P (EA 0, bus 0) are plugged into the SPSs and the power cord retention bails are in place.
   - The power cords for the SPSs and any other DAE3Ps are plugged into the cabinet's power strips.
   - The power switches on the SPSs are in the on position.
   - Any other devices in the cabinet are correctly installed and ready for powerup.
2. Turn on the master switch/circuit breakers for each cabinet/rack power strip.
   In standard EMC cabinets, master switches are on the power distribution panels (PDPs), as shown in Figure 10.

Figure 10: PDP master switches and power sources (A, B, C, and D) in the 40U cabinet

The storage system can take 10 to 15 minutes to complete a typical powerup. If the storage system was installed in a cabinet at your site (field-installed system), the first powerup will require several reboots and can take 30 to 45 minutes. Amber warning LEDs flash during the power on self-test (POST) and then go off. The front fault LED and the SPS recharge LEDs commonly stay on for several minutes while the SPSs are charging.
If amber LEDs on the front or back of the storage system remain on for more than 15 minutes (45 minutes for the first powerup of a field-installed system), make sure the storage system is correctly cabled, and then refer to the troubleshooting flowcharts on the CLARiiON Tools page on the EMC Powerlink website (http://Powerlink.EMC.com). If you cannot determine any reasons for the fault, contact your authorized service provider.

Powering down the storage system

1. Stop all I/O activity to the SPE. If the server connected to the SPE is running the Linux or UNIX operating system, back up critical data and then unmount the file systems.
   Stopping I/O allows the SP to destage cache data, and may take some time. The length of time depends on criteria such as the size of the cache, the amount of data in the cache, the type of data in the cache, and the target location on the disks, but it is typically less than one minute. We recommend that you wait five minutes before proceeding.
2. After five minutes, use the power switch on each SPS to turn off power. The SPE and primary DAE power down within two minutes.

CAUTION
Never unplug the power supplies to shut down an SPE. Bypassing the SPS in that manner prevents the storage system from saving write cache data to the vault drives, and results in data loss. You will lose access to data, and the storage processor log displays an error message similar to the following:
Enclosure 0 Disk 5 0x90a (Can't Assign - Cache Dirty) 0 0xafb40 0x14362c

Contact your service provider if this situation occurs.


This turns off power to the SPE and the first DAE (EA 0, bus 0). You do not need to turn off power to the other connected DAEs.

Status lights (LEDs) and indicators

Status lights, made up of light-emitting diodes (LEDs), on the SPE3, its FRUs, the SPSs, and the DAE3P and its FRUs indicate each component's current status.

Storage processor enclosure (SPE3) LEDs


This section describes status LEDs visible from the front and the rear
of the SPE3.
SPE3 front status LEDs
Figure 11 and Figure 12 show the location of the SPE3 status LEDs that
are visible from the front of the enclosure. Table 2 describes these LEDs.

Figure 11: SPE3 front status LEDs, bezel in place (power and fault LEDs for the DAE3P, SPS, and SPE3)

Figure 12: SPE3 front status LEDs, bezel removed (power LED, fault LED, and power/cooling LEDs)

Table 2: Meaning of the SPE3 front status LEDs

  LED                  Quantity       State            Meaning
  Power                1              Off              SPE3 is powered down.
                                      Solid green      SPE3 is powered up.
  Fault                1              Off              SPE3 is operating normally.
                                      Solid amber      A fault condition exists in the SPE3. If the fault is not obvious from another fault LED on the front, look at the rear of the enclosure.
  Power/cooling fault  1 per module   Off              Power/cooling module is not powered up.
  (see note)                          Solid green      Power/cooling module is powered and operating normally.
                                      Solid amber      Power/cooling module is faulted.
                                      Blinking amber   A fault condition exists external to the power/cooling module.

  Note: This light is visible only with the bezel removed.

SPE3 rear status LEDs

Figure 13 shows the status LEDs that are visible from the rear of the SPE3. Table 3 describes these LEDs.

Figure 13: SPE3 rear status LEDs (power and fault LEDs and Fibre Channel link LEDs on SP A and SP B, and I/O module fault LEDs)


Table 3: Meaning of the SPE3 rear status LEDs

  LED               Quantity            State                     Meaning
  SP fault          1 per SP            Off                       SP is powered up and operating normally.
                                        Solid amber               SP is faulted.
                                        Blinking amber            SP is in the process of powering up.
  I/O module fault  1 per I/O module    Off                       I/O module is powered up and operating normally.
  (CX3-20c or                           Solid amber               I/O module is faulted.
  CX3-20f only)
  BE port link      1 per back-end      Off                       No link because of one of the following conditions: the cable is disconnected, or the cable is faulted or is not a supported type.
                    Fibre Channel port  Solid green               1 Gb/s or 2 Gb/s link speed.
                                        Solid blue                4 Gb/s link speed.
                                        Blinking green then blue  Cable fault.
  FE port link      1 per front-end     Off                       No link because of one of the following conditions: the host is down, the cable is disconnected, an SFP is not in the port slot, or the SFP is faulted or is not a supported type.
                    Fibre Channel port  Solid green               1 Gb/s or 2 Gb/s link speed.
                                        Solid blue                4 Gb/s link speed.
                                        Blinking green then blue  SFP or cable fault.

DAE3P status LEDs

This section describes the following status LEDs and indicators:
- Front DAE3P and disk module status LEDs
- Enclosure address and bus ID indicators
- LCC and power/cooling module status LEDs

Front DAE3P and disk module status LEDs

Figure 14 and Figure 15 show the location of the DAE3P and disk module status LEDs that are visible from the front of the enclosure. Table 4 describes these LEDs.

Figure 14: Front DAE3P and disk module status LEDs, bezel in place (power and fault LEDs)

Figure 15: Front DAE3P and disk module status LEDs, bezel removed (power LED (green or blue), fault LEDs (amber), and disk activity LED (green))

Table 4: Meaning of the front DAE3P and disk module status LEDs

  LED            Quantity            State                       Meaning
  DAE power      1                   Off                         DAE3P is not powered up.
                                     Solid green                 DAE3P is powered up and the back-end bus is running at 2 Gb/s.
                                     Solid blue                  DAE3P is powered up and the back-end bus is running at 4 Gb/s.
  DAE fault      1                   Solid amber                 On when any fault condition exists; if the fault is not obvious from a disk module LED, look at the back of the enclosure.
  Disk activity  1 per disk module   Off                         Slot is empty or contains a filler module, or the disk is powered down by command, for example, as the result of a temperature fault.
                                     Solid green                 Drive has power but is not handling any I/O activity (the ready state).
                                     Blinking green, mostly on   Drive is spinning and handling I/O activity.
                                     Blinking green at a         Drive is spinning up or spinning down normally.
                                     constant rate
                                     Blinking green, mostly off  Drive is powered up but not spinning; this is a normal part of the spin-up sequence, occurring during the spin-up delay of a slot.
  Disk fault     1 per disk module   Solid amber                 On when the disk module is faulty, or as an indication to remove the drive.

Enclosure address and bus ID indicators

Figure 16 shows the location of the enclosure address and bus ID indicators that are visible from the rear of the enclosure. In this example, the DAE3P is enclosure 2 on bus (loop) 1; note that the indicators for LCC A and LCC B always match. Table 5 describes these indicators.

Figure 16: Location of enclosure address and bus ID indicators (EA selection buttons on LCC A and LCC B)

Table 5: Meaning of enclosure address and bus ID indicators

  LED                State   Meaning
  Enclosure address  Green   The displayed number indicates the enclosure address.
  Bus ID             Blue    The displayed number indicates the bus ID. A blinking bus ID indicates invalid cabling: LCC A and LCC B are not connected to the same bus, or the maximum number of DAEs allowed on the bus is exceeded.

Power/cooling module status LEDs

Figure 17 shows the location of the status LEDs for the power supply/system cooling modules (referred to as power/cooling modules). Table 6 describes these LEDs.

Figure 17: Power/cooling module status LEDs (power LED (green), power fault LED (amber), and blower fault LED (amber))

Table 6: Meaning of power/cooling module status LEDs

  LED                  Quantity              State   Meaning
  Power supply active  1 per supply          Green   On when the power supply is operating.
  Power supply fault   1 per supply          Amber   On when the power supply is faulty or is not receiving AC line voltage. Flashing when either a multiple-blower or ambient over-temperature condition has shut off power to the system.
  (see note)
  Blower fault         1 per cooling module  Amber   On when a single blower in the power supply is faulty.
  (see note)

  Note: The DAE3P continues running with a single power supply and three of its four blowers. Removing a power/cooling module constitutes a multiple-blower fault condition, and will power down the enclosure unless you replace a blower within two minutes.

LCC status LEDs

Figure 18 shows the location of the status LEDs for a link control card (LCC). Table 7 describes these LEDs.

Figure 18: LCC status LEDs (power LED (green), fault LED (amber), and primary and expansion link active LEDs (2 Gb/s: green; 4 Gb/s: blue))

Table 7: Meaning of LCC status LEDs

  Light                  Quantity   State   Meaning
  LCC power              1 per LCC  Green   On when the LCC is powered up.
  LCC fault              1 per LCC  Amber   On when either the LCC or a Fibre Channel connection is faulty. Also on during the power on self-test (POST).
  Primary link active    1 per LCC  Green   On when a 2 Gb/s primary connection is active.
                                    Blue    On when a 4 Gb/s primary connection is active.
  Expansion link active  1 per LCC  Green   On when a 2 Gb/s expansion connection is active.
                                    Blue    On when a 4 Gb/s expansion connection is active.

SPS status LEDs

Figure 19 shows the location of the SPS status LEDs that are visible from the rear. Table 8 describes these LEDs.

Figure 19: 1000 W SPS status LEDs (active LED (green), on battery LED (amber), fault LED (amber), and replace battery LED (amber))


Table 8: Meaning of 1000 W SPS status LEDs

  LED              Quantity   State   Meaning
  Active           1 per SPS  Green   When this LED is steady, the SPS is ready and operating normally. When this LED flashes, the batteries are being recharged. In either case, the output from the SPS is supplied by the AC line input.
  On battery       1 per SPS  Amber   The AC line power is no longer available and the SPS is supplying output power from its battery. When battery power comes on, and no other online SPS is connected to the SPE, the file server writes all cached data to disk, and the event log records the event. Also on briefly during the battery test.
  Replace battery  1 per SPS  Amber   The SPS battery is not fully charged and may not be able to serve its cache-flushing function. With the battery in this state, and no other online SPS connected to the SPE, the storage system disables write caching, writing any modified pages to disk first. Replace the SPS as soon as possible.
  Fault            1 per SPS  Amber   The SPS has an internal fault. The SPS may still be able to run online, but write caching cannot occur. Replace the SPS as soon as possible.


Copyright © 2006-2007 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
All other trademarks used herein are the property of their respective owners.
