
Principles of SAN Design

(Russian translation, v1.0)

Copyright 2005 - 2008
Brocade Communications Systems, Inc.
All rights reserved.
Brocade Bookshelf (TM)

Principles of SAN Design
Author: Josh Judd

Advance Edition - 2005
First Edition - 2005, reprinted 2005 and 2006
Second Edition - 2007
Russian translation - 2008

Printed by:

Infinity Publishing
1094 New Dehaven St.
West Conshohocken, PA 19428
info@InfinityPublishing.com
www.InfinityPublishing.com
www.BuyBooksOnTheWeb.com
Toll-free: (877) BUY-BOOK
Local Phone: (610) 941-9999
Fax: (610) 941-9959
Copyright 2005 - 2008 Brocade Communications Systems, Inc. All rights reserved.

Brocade, Fabric OS, File Lifecycle Manager, MyView, StorageX, the Brocade B-wing symbol, DCX, and SAN Health are trademarks or registered trademarks of Brocade Communications Systems, Inc. in the United States and/or other countries. All other brands, products, or service names mentioned may be trademarks or service marks of their respective owners. For the current list of Brocade products, see:
http://www.brocade.com/products-solutions/products/index.page

Notice: this book is provided "AS IS," without warranty of any kind, expressed or implied, concerning any equipment, features, or services offered or to be offered by Brocade. Brocade reserves the right to make changes to this book at any time, without notice, and assumes no responsibility for its use.

Brocade Corporate Headquarters
San Jose, CA USA
T: +1 408 333 8000
info@brocade.com

Brocade European Headquarters
Geneva, Switzerland
T: +41 22 799 56 40
emea-info@brocade.com

Brocade Russia
T: +7 985 762-5486
russia@brocade.com

Brocade Asia Pacific Headquarters
Singapore
T: +65 6538 4700
apac-info@brocade.com
Acknowledgments

Thanks go first to Mike Klayko and Tom Buiocchi. Portions of this book were adapted from whitepapers and other materials published by Brocade and by McDATA (now part of Brocade), and from the work of many contributors: Jed Bleess, Jim Heuser, Lisa Guess, Steve Wynne, and Martin Skagen; Simon Gordon; AJ Casamento; and Tom Clark. Thanks also to Thomas Carroll, Derek Granath, Todd Einck, Michael O'Connor, Mike Schmitt, Mario Blandini, Kent Hansen, Robert Snively, and Sue Wilson, and to the many others at Brocade and its partners who reviewed drafts and contributed to the state of the art in SAN design.


About the Author

Josh Judd is a Principal Engineer at Brocade Communications Systems, Inc. In that role he provides design expertise for next-generation products and acts as a senior technical resource for the field, supporting systems engineers (SEs) and OEM partners worldwide. Before moving into engineering he worked in a variety of IT roles as an individual contributor, including systems administration and senior technical support for NetWare, Windows, and UNIX environments. He was hired as Brocade's first dedicated Senior SAN Architect and has worked with FC SANs since the technology first shipped. In addition to this book, he has written on multiprotocol SAN routing and file area networking and has contributed to IT industry publications.


Who Should Read This Book?

This book is for anyone who needs to understand storage area networks (SANs) built around Brocade products: how they work, why they are deployed, and how to design them. It discusses the business drivers behind SAN adoption, the underlying technology, and the design principles that apply when building new SANs or extending existing ones.

It is written to be useful both to readers new to storage networking and to experienced practitioners. Newcomers can read it cover to cover; experienced SAN designers can use it as a reference, reading individual chapters as needed. Background material on SANs, Fibre Channel, and related technologies is included so that the later design discussions stand on their own, along with material on broader trends such as Information Lifecycle Management and Utility Computing.

Much of the book originated as a frequently-asked-questions (FAQ) resource: in questions asked by customers, partners, and Brocade employees, and in the answers that proved useful in practice. It is not meant to be memorized; it is organized so that specific answers can be found when they are needed.
What Is New in This Edition?

This edition has been updated to reflect developments since the book was first published. Among other things, it covers:

- Products added to the Brocade family through the McDATA acquisition
- New Brocade platforms and the SAN design considerations for them
Who Is This Book For?

The intended audience includes:

- Candidates preparing for the BCSD certification exam
- IT managers and architects evaluating or planning SANs
- SAN administrators and operators
- OEM and channel partner engineers who design and deploy SANs
- Consultants and systems engineers who work with SANs
What Does This Book Have to Do with BCSD Certification?

BCSD stands for Brocade Certified SAN Designer, the Brocade certification for SAN design professionals. This book covers much of the material on which the BCSD exam is based and can be used as a study aid when preparing for it. It is not a substitute for the official courseware and hands-on experience: exam preparation resources are available from Brocade Education Services and through the Brocade Connect community (certifications such as BCSD and BCFP have their own formal requirements).

For details, see the Brocade Education Services web site:
http://www.brocade.com/education/index.page

Where Can You Get More Information?

For a hands-on treatment of day-to-day SAN administration, see Practical Storage Area Networking by Dan Pollack. Other books in the Brocade Bookshelf series complement this one: Multiprotocol Routing for SANs covers SAN routing technologies such as FC-FC routing, iSCSI, and FCIP in depth; Introducing File Area Networks covers File Area Network (FAN) technology; and Strategies for Data Protection covers backup, replication, and related topics.

The Brocade Bookshelf pages on www.brocade.com list the current titles:
http://www.brocade.com/data-center-bestpractices/bookshelf/Browse.page

Brocade customers can also use the Brocade Connect community (registration is available via www.brocade.com) to discuss these topics, and Brocade OEM partners can find additional material at http://partner.brocade.com.

Found an Error, or Want to Suggest Improvements?

Send e-mail to bookshelf@brocade.com. Comments, corrections, and suggestions are welcome, and reader feedback is one of the inputs used when preparing new editions.


About the Russian Edition

This translation covers Parts 1 and 2 in full, along with Part 3. Brocade terminology is handled in two ways: 1) widely used English terms and product names are left in English where Russian equivalents are not in common use; 2) where a translated term is used, the English original is given alongside it for clarity.

Questions or comments about the translation? Send e-mail to russia@brocade.com with the subject BOOKSHELF, or call +7 (985) 762-5486.



Contents

Acknowledgments ... iii
About the Author ... iv
Preface ... v
  Who Should Read This Book? ... vi
  What Is New in This Edition? ... vii
  Who Is This Book For? ... viii
  What Does This Book Have to Do with BCSD Certification? ... viii
  Where Can You Get More Information? ... ix
  Found an Error, or Want to Suggest Improvements? ... x
  About the Russian Edition ... x
Contents ... xi

Part One ... 1

Chapter 1: SAN Basics ... 3
  What Is a SAN? ... 3
  A Brief History of SANs ... 6
  Early SAN Market Development ... 11
  Switches and Hubs ... 12
  HBAs and NICs ... 17
  JBODs and SBODs ... 18
  RAID Arrays ... 20
  Tape Drives and Libraries ... 23
  Bridges and Gateways ... 26
  Multipathing Software ... 28
  Volume Management ... 31
  Storage Virtualization ... 32
  SAN Protocols ... 33
    SCSI ... 35
    Fibre Channel ... 36
    ATM and SONET/SDH ... 47
    IP and Ethernet ... 48
    iSCSI ... 51
    iFCP ... 57
    FCIP ... 58

Chapter 2: Why Build a SAN? ... 61
  Storage Consolidation ... 61
  High Availability Clustering ... 66
  Data Sharing ... 69
  LAN-Free Backup ... 72
  Performance ... 77
  Disaster Recovery and Business Continuance ... 79
  Chapter Summary ... 81

Chapter 3: UC and ILM ... 85
  Defining UC and ILM ... 86
  Utility Computing ... 88
    Benefits of Utility Computing ... 93
    Challenges of Utility Computing ... 95
    Deploying Utility Computing ... 98
  Information Lifecycle Management ... 101
    Benefits of ILM ... 106
    Challenges of ILM ... 108
    ILM in Practice ... 110
  Designing SANs for UC and ILM ... 112

Chapter 4: Planning a SAN ... 121
  Compatibility and Interoperability ... 122
  Topology Selection ... 128
  Reliability, Availability, and Serviceability (RAS) ... 129
    Reliability ... 129
    Availability ... 134
    Serviceability ... 136
  Distance Extension ... 138
  Scalability ... 139
  Security ... 140
  Manageability ... 141
  Planning Summary ... 142

Part Two ... 147

Chapter 5: Gathering Requirements ... 149
  The Design Process ... 150
  Return on Investment and Total Cost of Ownership (ROI and TCO) ... 170
  From Requirements to SAN Design ... 171

Chapter 6: Topologies ... 175
  Cascade ... 177
  Ring ... 180
  Mesh ... 182
  Core/Edge (CE) ... 185
  CE Variations ... 187
  Meta SAN Topologies ... 195
  Topology Summary ... 205

Chapter 7: Device and Switch Attachment ... 207
  Access Gateway ... 209
  ISLs and IFLs ... 212

Chapter 8: Performance ... 227
  Congestion and Oversubscription ... 229
  Head-of-Line Blocking (HoLB) ... 238
  ISL and IFL Provisioning ... 249
  LSANs ... 266
  Special Case: UC and ILM ... 268
  Routing: FSPF ... 275
  Load Sharing: DLS ... 277
  Exchange-Based Routing: DPS ... 283

Chapter 9: Availability ... 296
  SAN HA Architectures ... 296
  Resilient SAN Design ... 303
  Redundant Fabrics and Meta SANs ... 304
  Multipathing ... 306
  Meta SAN Availability ... 315
  LSANs ... 320

Chapter 10: Security ... 325
  Zoning ... 330
  Secure Fabric Operating System (SFOS) ... 336

Chapter 11: Distance Solutions for BC/DR SANs ... 339
  FC Buffer-to-Buffer Flow Control ... 346
  MAN/WAN Transports ... 352
  FastWrite and Tape Pipelining ... 360
  10Gbit DR/BC Solutions ... 364

Chapter 12: SAN Management ... 373

Part Three ... 395

Appendix A: Hardware and Software Reference ... 397
  Current Brocade Platforms ... 397
    Brocade 200E FC Switch ... 398
    Brocade 4100 ... 400
    Brocade 5000 ... 403
    Brocade 4900 ... 404
    Brocade 48000 Director ... 405
    Brocade AP7420 ... 410
    Brocade 7500 Multiprotocol Router ... 414
    Brocade 7600 ... 415
    FR4-18i Routing Blade ... 415
    FA4-18 Application Blade ... 416
    FC10-6 10Gbit Fibre Channel Blade ... 417
    FC4-16IP iSCSI-to-Fibre-Channel Blade ... 418
    Embedded Switches ... 419
    Brocade iSCSI Gateway ... 424
  Platforms from the McDATA Acquisition ... 425
    Brocade Mi10k ... 426
    Brocade M6140 ... 427
    Brocade M4400 and M4700 ... 427
    Brocade M1620 and M2640 ... 428
    Brocade Edge M3000 ... 428
    Brocade USD-X ... 429
  Legacy Brocade Platforms ... 430
    SilkWorm 1xx0 FC Switches ... 430
    SilkWorm 2xx0 FC Switches ... 432
    SilkWorm 3200 / 3800 ... 435
    SilkWorm 3250 / 3850 FC Switches ... 436
    SilkWorm 3900 and 12000 ... 437
    SilkWorm 24000 ... 440
    Embedded Switches ... 443
  Brocade Software and Features ... 444
    Fabric Node Attachment (F_Port) ... 445
    Loop Node Attachment (FL_Port) and QL/FA ... 445
    Switch-to-Switch Attachment (E_Port) ... 448
    FCIP FastWrite and Tape Pipelining ... 456
    FC FastWrite ... 459
    Advanced ISL Trunking (Frame-Level) ... 460
    Exchange-Level Trunking ... 461
    Fabric OS CLI ... 463
    WEBTOOLS ... 463
    Fabric Manager ... 464
    SAN Health ... 464
    Fabric Watch ... 466
    Advanced Performance Monitoring ... 466
    Extended Fabrics ... 467
    Remote Switch ... 467
    FICON / CUP ... 467
    Fibre Channel Routing ... 468
    FCIP ... 469
    Secure Fabric OS ... 470
  Calculating ROI ... 471
  Ethernet and IP Reference ... 488

Appendix B: Internals ... 493
  Routing Protocols ... 493
    FSPF ... 494
    FCRP ... 495
  FCR Internals ... 498
  SNS and Device Discovery ... 498
  FC Flow Control ... 501
  Brocade ASICs ... 502
    Stitch and Flannel ... 504
    Loom ... 504
    Bloom and Bloom-II ... 505
    Condor ... 506
    Goldeneye ... 508
    Egret ... 509
    FiGeRo / Cello ... 510
  Director Architectures ... 511
    SilkWorm 12000 and 3900 XY Architecture ... 513
    Brocade 24000 and 48000 CE Architecture ... 520
  Link Speeds ... 523
    1Gbit FC ... 525
    2Gbit FC ... 525
    4Gbit FC (Frame Trunked and Native) ... 525
    8Gbit FC (Frame Trunked and Native) ... 533
    10Gbit FC ... 534
    32Gbit FC (Frame Trunked) ... 535
    256Gbit FC (Frame and Exchange Trunked) ... 535
    1Gbit iSCSI and FCIP ... 535
    10Gbit iSCSI and FCIP ... 536

Appendix C ... 537
Appendix D ... 550
Index ... 561

Figures

Fig. 1 - Upper-layer protocols over Fibre Channel ... 6
Fig. 2 - DAS (point-to-point attachment) ... 8
Fig. 3 - SAN (networked storage) ... 10
Fig. 4 - SCSI-to-FC bridge ... 26
Fig. 5 - iSCSI gateway ... 27
Fig. 6 - Redundant A/B fabrics with multipathing ... 30
Fig. 7 - FC path to storage media ... 37
Fig. 8 - Fibre Channel frames, sequences, and exchanges ... 40
Fig. 9 - A Meta SAN ... 44
Fig. 10 - iSCSI and FC protocol stacks ... 52
Fig. 11 - iSCSI and FC frames ... 53
Fig. 12 - Protocol processing: FC vs. iSCSI ... 54
Fig. 13 - FCIP tunnel ... 60
Fig. 14 - DAS storage per server ... 62
Fig. 15 - White space in the DAS model ... 63
Fig. 16 - White space: DAS vs. SAN ... 65
Fig. 17 - Two-host SCSI cluster ... 67
Fig. 18 - SAN-attached cluster ... 68
Fig. 19 - Data sharing through shared storage ... 71
Fig. 20 - LAN-based backup ... 74
Fig. 21 - LAN-free backup ... 77
Fig. 22 - Business continuance SAN ... 81
Fig. 23 - Utility Computing architecture ... 90
Fig. 24 - The UC stack ... 91
Fig. 25 - The information lifecycle (ILM) ... 102
Fig. 26 - ILM infrastructure tiers ... 103
Fig. 27 - A SAN serving UC and ILM ... 113
Fig. 28 - Cascade topology ... 178
Fig. 29 - Ring topology ... 180
Fig. 30 - Mesh topology ... 183
Fig. 31 - Core/Edge concept ... 185
Fig. 32 - Core/Edge topology ... 185
Fig. 33 ... 191
Fig. 34 - CE ... 193
Fig. 35 - A/B fabrics ... 194
Fig. 36 - HA and non-HA designs ... 195
Fig. 37 - CE ... 196
Fig. 38 - CE Meta SAN ... 196
Fig. 39 - CE Meta SAN with CE fabrics ... 196
Fig. 40 - SAN ... 198
Fig. 41 - E_Port ... 200
Fig. 42 - NPIV ... 202
Fig. 43 - NPIV ... 204
Fig. 44 - CE ... 206
Fig. 45 - 3:1 ISL oversubscription ... 250
Fig. 46 ... 258
Fig. 47 ... 261
Fig. 48 - CE ... 269
Fig. 49 - CE ... 270
Fig. 50 ... 272
Fig. 51 ... 278
Fig. 52 - DLS ... 281
Fig. 53 - DPS ... 285
Fig. 54 - DPS ... 286
Fig. 55 - DPS ... 287
Fig. 56 ... 299
Fig. 57 - HA ... 300
Fig. 58 ... 309
Fig. 59 ... 313
Fig. 60 - Meta SAN ... 317
Fig. 61 - Meta SAN ... 317
Fig. 62 - Meta SAN backbone (BB) ... 319
Fig. 63 - Meta SAN + BB ... 319
Fig. 64 - Meta SAN ... 322
Fig. 65 - SCSI Write and FastWrite ... 362
Fig. 66 - SCSI Write and FastWrite ... 363
Fig. 67 - 10Gbit DR/BC ... 365
Fig. 68 - SFP distance example ... 369
Fig. 69 ... 376
Fig. 70 ... 378
Fig. 71 ... 378
Fig. 72 ... 379
Fig. 73 ... 380
Fig. 74 - Brocade 200E ... 398
Fig. 75 - Brocade 4100 ... 401
Fig. 76 - Brocade 5000 ... 403
Fig. 77 - Brocade 4900 ... 404
Fig. 78 - Brocade 48000 Director ... 405
Fig. 79 - FC16 Port Blade for Brocade 48000 ... 408
Fig. 80 - Brocade AP7420 ... 412
Fig. 81 - Brocade 7500 Multiprotocol Router ... 415
Fig. 82 - Brocade 7600 ... 415
Fig. 83 - FR4-18i Routing Blade ... 416
Fig. 84 - FA4-18 Application Blade ... 417
Fig. 85 - FC10-6 10Gbit FC Blade ... 418
Fig. 86 - FC4-16IP iSCSI Blade ... 419
Fig. 87 - Brocade 4020 Embedded Switch ... 421
Fig. 88 - Brocade 4016 Embedded Switch ... 422
Fig. 89 - Brocade 4018 Embedded Switch ... 422
Fig. 90 - Brocade 4024 Embedded Switch ... 423
Fig. 91 - Brocade 4012 Embedded Switch ... 424
Fig. 92 - Brocade iSCSI Gateway ... 424
Fig. 93 - SilkWorm II (1600) FC Fabric Switch ... 431
Fig. 94 - SilkWorm Express (800) FC Fabric Switch ... 431
Fig. 95 - SilkWorm 1xx0 Daughter Card ... 431
Fig. 96 - SilkWorm 2010/2040/2050 ... 433
Fig. 97 - SilkWorm 2210/2240/2250 ... 434
Fig. 98 - SilkWorm 2400 ... 434
Fig. 99 - SilkWorm 2800 ... 435
Fig. 100 - SilkWorm 3200 ... 436
Fig. 101 - SilkWorm 3800 ... 436
Fig. 102 - SilkWorm 3250 ... 437
Fig. 103 - SilkWorm 3850 ... 437
Fig. 104 - SilkWorm 3900 ... 438
Fig. 105 - SilkWorm 12000 Director ... 439
Fig. 106 - SilkWorm 24000 Director ... 441
Fig. 107 - SilkWorm 3016 Embedded Switch ... 443
Fig. 108 - SilkWorm 3014 Embedded Switch ... 444
Fig. 109 - VCs Partition ISLs into Logical Sub-Channels ... 450
Fig. 110 - Foundry EdgeIron 24 GigE Edge Switch ... 489
Fig. 111 - Tasman Networks WAN Router ... 490
Fig. 112 - Foundry Modular Router ... 490
Fig. 113 - WAN Router Usage Example ... 490
Fig. 114 - Copper to Optical Converter ... 491
Fig. 115 - SilkWorm 12000 Port Blades ... 513
Fig. 116 - SilkWorm 12000 ASIC-to-Quad Relationships ... 514
Fig. 117 - SilkWorm 12000 Intra-Blade CCMA Links ... 515
Fig. 118 - SilkWorm 12000 CCMA Abstraction ... 515
Fig. 119 - SilkWorm 12000 64-Port CCMA Matrix ... 517
Fig. 120 - Full-Mesh Traffic Patterns ... 519
Fig. 121 - Top-Level CE CCMA Blade Interconnect ... 521

Tables

Table 1 - UC and ILM compared ... 87
Table 2 ... 262
Table 3 ... 351
Table 4 - MAN/WAN transports ... 358

Part One

Chapter 1: SAN Basics
This chapter introduces the building blocks of Storage Area Networks: what a SAN is, the hardware and software components from which SANs are built, and the protocols that tie them together. Readers already familiar with the fundamentals can skim it; it provides the background assumed by the design chapters in Part Two.

What Is a SAN?

In this book, a SAN is a network whose primary purpose is to transfer data between servers and storage devices (such as RAID arrays and tape libraries), and between storage devices themselves. It is possible to carry storage traffic over other networks, such as IP/Ethernet, and such configurations are sometimes loosely called SANs too. However, IP networks were not designed for storage traffic: most production SANs do not use IP, are built from network equipment designed specifically for storage, and run Fibre Channel (FC) as the transport. A LAN carrying some storage traffic (for example, Windows file sharing; see Appendix C) does not thereby become a SAN; the distinction lies in what the network was designed and deployed to do. When this book says "SAN" without qualification, it means a Fibre Channel storage network.

Most SAN traffic is block-level SCSI carried over Fibre Channel; the mapping of SCSI onto FC is called the Fibre Channel Protocol (FCP). Other upper-layer protocols exist as well: Internet Protocol over Fibre Channel (IP/FC) is occasionally used (footnote 1), and protocols such as FC-VI support low-latency direct memory access between hosts. Fibre Channel is thus a general-purpose transport onto which several upper-layer protocols are mapped, as Fig. 1 illustrates.

Footnote 1: IP is the protocol underlying the Internet, and IP/Ethernet networks are general-purpose data networks. Since a SAN can carry IP/FC as well as FCP, and an IP network can carry storage protocols, the SAN/LAN distinction is about design intent and primary usage rather than an absolute technical boundary.

Fig. 1 - Upper-layer protocols over Fibre Channel

SANs are deployed because networking storage delivers concrete benefits: capacity can be pooled and allocated where needed, backup can bypass the LAN, storage can be placed and replicated at distance, and the same infrastructure supports high-availability (HA) clustering and improved I/O performance. The business case is the subject of Chapter 2.

A Brief History of SANs

Through the 1980s and most of the 1990s, storage was attached directly to servers, usually over parallel SCSI. This model is known as Direct Attached Storage (DAS). As deployments grew, the limitations of DAS became apparent:

- Parallel SCSI supports only short cable distances, which keeps storage physically tied to its server and makes Disaster Recovery (DR) configurations impractical.
- High Availability (HA) clustering requires multiple servers to access the same storage; parallel SCSI supports very few initiators per bus, and multi-initiator SCSI configurations were fragile and hard to service.
- Each SCSI bus supports a small number of devices, limiting scalability.
- Capacity was locked to individual servers, so free space ("white space") on one server could not be used by another, and overall utilization was poor.

DAS connections are point-to-point: each storage device is cabled to one server (see Fig. 2).

Fig. 2 - DAS (point-to-point attachment)

A SAN removes these limitations by placing a network between servers and storage (a more detailed comparison of DAS utilization begins on p. 61). Storage becomes a shared resource: any authorized server can reach any storage port. This enables DR configurations, because storage can be located and replicated at distance; it enables HA clusters, because multiple servers can access the same devices; and it allows capacity to be pooled and allocated where needed.

SANs built on Fibre Channel also improved raw performance. FC shipped at 1Gbit when Gigabit Ethernet was the state of the art for LANs, and has since moved through 2Gbit, 4Gbit, 8Gbit, and 10Gbit interface speeds, staying ahead of mainstream network technologies for storage workloads. With trunking, multiple FC links combine into logical channels currently reaching up to 256Gbit of aggregate bandwidth.

Fig. 3 - SAN (networked storage)

Fig. 3 shows the basic structure of an FC SAN. Early SANs were sometimes built with hubs using Fibre Channel Arbitrated Loop (FC-AL; see "Fibre Channel" on p. 36), but fabric switches quickly became the standard building block: with switched Fibre Channel, each device gets dedicated bandwidth, and the fabric grows by adding switches. A SAN can consolidate storage from many servers, reduce management overhead, and improve utilization, all of which translate into return on investment (ROI); this is treated in Chapter 2.

Early SAN Market Development

The first Fibre Channel fabric switches shipped in 1997, and the SAN market developed rapidly from there: SANs moved from early adopters to mainstream data centers and are now the standard way to deploy enterprise storage. The ROI analysis behind this adoption is discussed in Chapter 2. The remainder of this chapter describes the components from which SANs are built.
Switches and Hubs

The first SAN infrastructure devices were hubs, which implement shared-bandwidth media: all attached devices share the capacity of the hub, much as on early shared Ethernet segments (footnote 3). Hubs were soon displaced by fabric switches, which give each port dedicated bandwidth and can be networked together into larger fabrics (footnote 4).

Footnote 3: In a shared-bandwidth network (such as FC-AL), all attached devices compete for one unit of bandwidth with any-to-any connectivity. Footnote 4: In a switched network each conversation gets its own bandwidth, so aggregate fabric bandwidth grows with port count.

At the heart of most switches is a crossbar: a non-blocking internal interconnect that can carry traffic between all port pairs simultaneously. Congestion can still occur when multiple sources send to one destination (many-to-one traffic) or when a link is oversubscribed; congestion and oversubscription are covered in Chapter 8 ("Performance," p. 227). Footnote 5: Claims of fully non-blocking behavior should be tested; for example, a 256-port crossbar switch can be validated with a traffic generator (such as SmartBits) running full-mesh patterns.

A Fibre Channel fabric is more than a collection of crossbars: switches cooperate to provide distributed fabric services (naming, addressing, routing, zoning) that make the SAN plug-and-play. This is a key difference between purpose-built FC SAN equipment and general-purpose Ethernet gear. An iSCSI SAN built on generic Ethernet switches lacks these storage-specific fabric services; the equivalent functions must be supplied by the hosts and storage devices themselves, which is one reason iSCSI is positioned differently from FC (see "Fibre Channel," p. 44).

Ethernet switches operate at layer 2 (L2) and IP routers at layer 3 (L3); routers connect L2 networks into larger internetworks while containing faults and broadcasts within each network. The analogous devices exist for SANs: Brocade multiprotocol routers (the AP7420, 7500, and FR4-18i) connect Fibre Channel fabrics to each other and extend them over distance using FCIP, while keeping each fabric a separate fault domain.

One property of storage traffic makes SAN design more demanding than LAN design: storage protocols tolerate loss and delay poorly. An IP network can drop packets and let TCP retransmit; SCSI, by contrast, was designed for a reliable point-to-point bus, and SCSI timeouts and retries are expensive, sometimes catastrophic, for applications. A SAN must therefore be engineered for lossless, in-order, low-latency delivery, which is what Fibre Channel flow control provides. Performance design is treated in depth in Chapter 8 (p. 227).
HBAs and NICs

Servers attach to Fibre Channel SANs through Host Bus Adapters (HBAs). Current FC HBAs run at 2, 4, or 8 Gbit and offload most protocol processing (frame building, segmentation and reassembly, SCSI-to-FCP mapping) onto the adapter, so the host CPU is not burdened by storage traffic (footnote 6). HBA drivers are available for Unix/Linux and Windows platforms and are qualified against Brocade FC SANs.

Footnote 6: A rough rule of thumb for protocol processing in software is 1 GHz of CPU per 1Gbit of traffic; offloading this to the HBA frees that capacity for applications. This is one reason software-only iSCSI initiators compare poorly with Fibre Channel HBAs under load.

For iSCSI, hosts use either ordinary Network Interface Cards (NICs) with a software initiator, or specialized iSCSI HBAs that offload TCP/IP and iSCSI processing. iSCSI HBAs have tended to cost 50-75% as much as FC HBAs, which undermines much of the iSCSI cost argument: the economics work best when iSCSI runs over standard Gigabit Ethernet NICs, at the price of host CPU overhead and lower performance. Brocade's position, accordingly, is that iSCSI makes sense for low-end servers using standard NICs attached through FC-iSCSI gateways to an FC storage core, rather than as a replacement for FC.
JBODs and SBODs

JBOD stands for "Just a Bunch Of Disks": an enclosure of disks attached directly to the SAN without a storage controller. Internally, the disks of a JBOD are connected by a Fibre Channel Arbitrated Loop (FC-AL), and the enclosure typically attaches to a switch FL_Port. Because there is no controller, a JBOD provides no internal RAID (footnote 7); if protection or striping is needed, host software must provide it.

Footnote 7: Host software can layer RAID on top of a JBOD; the JBOD itself simply exposes each physical disk. For example, a RAID array might present up to 256 LUNs behind one port in the name server (SNS), whereas a JBOD would present its 256 disks individually.

JBODs are inexpensive per gigabyte, but they use SAN ports and loop bandwidth inefficiently, since all disks in the enclosure share one FC-AL loop. The SBOD ("Switched Bunch Of Disks") improves on this by replacing the internal loop with an embedded switch, so each disk gets a switched connection and the loop-sharing and loop-failure problems of FC-AL are eliminated.

Both JBODs and SBODs are niche devices compared with RAID arrays. They fit where cost per gigabyte dominates, and in architectures such as boot-over-SAN or Utility Computing designs where intelligence (RAID, volume management, replication) is provided elsewhere in the stack.
RAID Arrays

RAID stands for Redundant Array of Independent Disks (footnote 8). A RAID array combines many physical disks behind one or more controllers, which present virtual disks (LUNs) to servers and implement data protection and performance optimizations in hardware (footnote 9). RAID arrays are the dominant storage devices in SANs. Array ports attach to the fabric as N_Ports or NL_Ports (connecting to switch F_Ports or FL_Ports, respectively). Controllers typically add large caches, battery backup, and management features on top of the basic RAID levels:

RAID 0: Striping. Data is distributed round-robin across multiple disks, so reads and writes are spread over many spindles and throughput increases. RAID 0 provides no redundancy: losing any disk loses the volume.

RAID 1 (mirroring; footnote 10): Every write goes to two or more disks, so the volume survives a disk failure. Capacity efficiency is 50%, making RAID 1 the most expensive RAID level per usable gigabyte.

RAID 5: Striping with distributed parity (footnote 11). Data is striped as in RAID 0, but a parity block is added so the array can reconstruct the contents of a failed disk. Parity is computed with XOR arithmetic (footnote 12) by the controller. RAID 5 gives much better capacity efficiency than mirroring (one disk's worth of capacity is given up for parity regardless of stripe width), at the cost of a write penalty, which controllers mask with caching (footnote 13).

RAID 1+0 and combinations such as 5+1: Layered RAID combines levels, for example striping (RAID 0) across mirrored pairs (RAID 1). RAID 0+1 (mirroring two stripe sets) and RAID 1+0 (striping across mirrors) differ in rebuild and failure behavior. Layered schemes combine the performance of striping with the protection of mirroring or parity, at correspondingly higher cost.

Footnote 8: The "I" originally stood for Inexpensive, since an array of small disks was cheaper than one large disk; "Independent" is now the usual expansion. Footnote 9: In a SAN, the array performs these functions for many servers at once. Footnote 10: Mirroring can also run between arrays or between sites for business continuance. Footnote 11: RAID 5 rotates parity among all disks rather than dedicating a parity disk. Footnote 12: Exclusive-OR arithmetic lets any one missing block in a stripe be recomputed from the others. Footnote 13: Controllers use battery-backed RAM caches to absorb the RAID 5 write penalty.

Most arrays support hot spare disks: unused drives onto which the controller automatically rebuilds when a disk in a RAID 1 or RAID 5 group fails, restoring redundancy without operator intervention.

It is important to understand what RAID does not protect against. RAID 0 protects nothing, and even redundant levels protect only against disk and component failures within the array. RAID does not protect against loss of the whole array or site, or against deletion and corruption from above. Backup, replication, and business continuance (BC) designs, built on the any-to-any connectivity of the SAN, address those risks; this is one reason arrays are deployed in SANs rather than as isolated islands.
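To make the capacity trade-offs concrete, here is a small sketch of my own (Python, textbook formulas rather than any vendor's implementation) that computes usable capacity and efficiency for the RAID levels above.

```python
def raid_usable(level: str, disks: int, disk_gb: float) -> float:
    """Usable capacity in GB for common RAID levels.

    Assumes identical disks and two-way mirrors; these are the
    standard formulas, not a specific array's behavior.
    """
    if level == "0":            # striping only: all capacity, no protection
        return disks * disk_gb
    if level in ("1", "1+0"):   # two-way mirroring: half the raw capacity
        assert disks % 2 == 0
        return disks * disk_gb / 2
    if level == "5":            # one disk's worth of distributed parity
        assert disks >= 3
        return (disks - 1) * disk_gb
    raise ValueError(f"unknown RAID level: {level}")

for level in ("0", "1", "5", "1+0"):
    usable = raid_usable(level, disks=8, disk_gb=146)
    raw = 8 * 146
    print(f"RAID {level:>3}: {usable:6.0f} GB usable of {raw} GB raw "
          f"({usable / raw:.0%} efficiency)")
```

Running this for an eight-disk group shows the familiar pattern: RAID 0 yields 100% of raw capacity with no protection, mirrored levels yield 50%, and RAID 5 yields seven-eighths, which is why parity RAID dominates where capacity cost matters.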
Tape Drives and Libraries

Tape remains the standard medium for backup and archive, and tape drives and libraries are first-class SAN devices. Older parallel SCSI drives (for example, SCSI-2 devices) can be attached to the fabric through SCSI-to-Fibre-Channel bridges (p. 26), which present them as FC N_Ports or NL_Ports; modern drives and libraries attach natively to FC.

Tape interfaces lag disk interfaces in speed: while fabrics run at 4 and 8 Gbit, most tape interfaces are slower (2 Gbit is typical), and the drive mechanism, not the interface, usually limits throughput (footnote 14). This matters less than it may seem, because the value of putting tape on the SAN lies less in raw speed than in sharing: a library on the SAN can be shared by many servers, backups can run directly from disk to tape without crossing the LAN or loading the server's CPU, and media handling can be centralized.

Footnote 14: Real backup throughput depends on many things besides the interface: drive streaming rate, compression, block sizes, file system layout, and the ability of the source to feed data fast enough. Footnote 15: Even a slower transport can be adequate for tape; iSCSI at 1Gbit has been positioned against 2Gbit FC for backup, though FC retains the efficiency advantages discussed earlier.

SAN attachment also changes backup design. Point-in-time copy functions in arrays let a consistent snapshot of a volume be taken and streamed to tape while the application keeps running, shrinking or eliminating the classic "backup window" during which applications had to be quiesced. Because backup traffic is large, sequential, and scheduled, it benefits directly from SAN bandwidth: even 1Gbit Fibre Channel was a large improvement over LAN-based backup, and 4 and 8 Gbit fabrics extend the advantage.

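As a back-of-the-envelope illustration of backup-window arithmetic (my own example, with assumed sustained rates rather than figures from the book): the time to back up a data set is simply its size divided by sustained throughput.

```python
def backup_hours(data_gb: float, rate_mb_s: float) -> float:
    """Hours needed to move data_gb at a sustained rate_mb_s."""
    return data_gb * 1024 / rate_mb_s / 3600

# Assumed rates: ~25 MB/s for backup over a busy shared GbE LAN
# (protocol overhead, contention), ~150 MB/s for LAN-free backup
# over 2Gbit FC to a streaming tape drive.
for label, rate in (("LAN backup", 25), ("LAN-free SAN backup", 150)):
    print(f"{label:>20}: {backup_hours(2000, rate):5.1f} h for 2 TB")
```

The point of the comparison is the window: a data set that cannot finish overnight over the LAN fits comfortably in a few hours over the SAN.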
Fig. 4 - SCSI-to-FC bridge

Bridges and Gateways

Bridges and gateways connect devices or networks that speak different protocols. The classic example in Fibre Channel SANs is the SCSI-to-FC bridge (see Fig. 4): it attaches legacy parallel SCSI devices, most often tape drives, to the fabric, translating between SCSI bus operations and FCP so the device appears on the SAN as an ordinary FC target. This protects existing investments in SCSI equipment while the rest of the infrastructure moves to FC.

A more recent example is the iSCSI gateway, which performs a similar translation between FC and iSCSI: low-end hosts attach over IP, and the gateway presents them to FC storage (see Fig. 5). This lets servers that do not justify the cost of FC HBAs use SAN storage. The Brocade iSCSI Gateway and the FC4-16IP blade for the Brocade 48000 perform this function.

Fig. 5 - iSCSI gateway

Multipathing Software

Multipathing software lets a server use two or more independent paths to the same storage device, for availability and sometimes load sharing. A typical configuration gives each host two HBAs, each attached to a different fabric, with the storage likewise attached to both fabrics (footnote 16). If a path element fails (HBA, cable, switch, or array port), the multipathing driver fails I/O over to the surviving path. Multipathing is a cornerstone of highly available SAN design (see p. 128 and Chapter 9, "Availability," p. 296).

Footnote 16: The two paths need not always be physically separate networks; technologies such as LSANs can provide separation within a routed infrastructure. Physically separate A/B fabrics remain the most robust approach.

The best-practice HA design is the fully redundant A/B architecture: two independent fabrics, with every host and storage device connected to both. Redundancy inside a single component (dual power supplies, dual control processors in a director) protects against component faults, but only a second, independent fabric protects against fabric-wide events: a misbehaving device flooding the fabric, an operator error, a corrupted zoning change, or a software fault. With A/B fabrics plus multipathing, even such events leave applications running on the other fabric. This is also why stacking redundancy inside one physical network, as with VSAN partitioning of a single chassis, is not equivalent to two fabrics: the backplane, power, and software image remain shared failure points.

Fig. 6 illustrates a redundant A/B design in which hosts see each LUN through both fabrics. Resilient-fabric design is discussed further on p. 303 and in Chapter 9.

Fig. 6 - Redundant A/B fabrics with multipathing

Volume Management

Volume Management (VM) software runs on hosts and performs functions similar to a RAID controller, one level up. A RAID array presents LUNs; a volume manager combines LUNs (possibly from different arrays, or from JBODs) into larger logical volumes and can stripe or mirror across them. (See "RAID Arrays," p. 20.) This allows host-level RAID across arrays, even across arrays and non-RAID devices of different types: for example, mirroring between two arrays, or striping a volume across arrays for performance. It also lets volumes be grown and reorganized without application changes.

Volume management overlaps with array-based RAID, and the two are often layered: the array provides RAID within the enclosure while the volume manager aggregates and mirrors across enclosures. VM is also one of the places where storage virtualization functions can live, as discussed in the next section.
Storage Virtualization

Virtualization means inserting an abstraction layer between the consumers of a resource and its physical implementation. The concept is old: mainframes virtualized memory and devices back in the 1960s. In storage, virtualization presents hosts with logical volumes while hiding where and how the data is actually stored, so physical storage can be changed, moved, or migrated without reconfiguring applications.

Brocade uses a broad working definition: storage virtualization is any technology that creates such an abstraction layer, wherever it lives. By that definition, much of the preceding discussion is already virtualization: a RAID controller virtualizes its disks into LUNs, and a volume manager virtualizes LUNs into volumes. Virtualization functions can be placed in the host (volume managers), in the array (RAID, snapshots), or in the network itself (footnote 17).

Footnote 17: Network-resident virtualization is independent of any one host or array vendor and can operate on data paths between any initiator and target. Its challenge is that it sits in the performance-critical path, so it must be implemented in hardware or with hardware assists to avoid becoming a bottleneck.

Fabric-based platforms (for example, the Brocade 7600 and the FA4-18 application blade) host such services in the network, where they can implement volume aggregation, data migration, replication (including RAID-to-RAID copy between unlike arrays), and snapshots for all attached hosts at wire speed. Migration is a particularly useful application: with a virtualization layer in the path, data can be moved between arrays (for technology refresh, or for Information Lifecycle Management tiering) without application downtime. Array-to-array copy functions, such as those built on SCSI extended copy (xcopy), similarly move data directly between storage devices across the SAN without passing it through a server. Together, these services underpin the ILM and Utility Computing architectures of Chapter 3.

SAN Protocols

The remainder of this chapter surveys the protocols used in and around SANs: SCSI, Fibre Channel, ATM and SONET/SDH, IP and Ethernet, iSCSI, iFCP, and FCIP.
SCSI

The Small Computer Systems Interconnect (SCSI) protocol is the lingua franca of open-systems block storage. In its original parallel-bus form it was the foundation of Direct Attached Storage (DAS): a server connected point-to-point, or on a short multi-drop bus, to its disks and tapes (see Fig. 2). Parallel SCSI imposes severe limits on distance (meters of cable), device count per bus, and shared bandwidth, and multi-initiator configurations are fragile, which is precisely why storage networking emerged.

What survived the transition to SANs is the SCSI command protocol itself. Fibre Channel does not replace SCSI; it replaces the parallel bus underneath it. The SCSI command set, with its command, data, status, and sense machinery, is carried over the fabric, so operating systems and applications see ordinary SCSI disks. SANs built on Fibre Channel thus preserve software compatibility while removing the physical limits: FC carries SCSI farther, faster, and to many more devices than a parallel bus ever could.

Fibre Channel

Most SANs, and all SANs discussed at length in this book, are built on Fibre Channel, and Brocade has been central to the development of the Fibre Channel standards and market. Fibre Channel was designed from the start as a channel-grade network for storage: it combines the reliability and determinism of a channel (like SCSI) with the connectivity and distance of a network.

The first FC standards were approved in 1994 (footnote 18), several years before the SAN market took shape. Fibre Channel went on to win essentially the entire market for storage networking: by any reasonable count, 99% of SANs are FC SANs, with IP SANs making up the small remainder. FC interface speeds began at 1Gbit, with earlier 250Mbit and 500Mbit rates defined in the standard, and have progressed through 2Gbit, 4Gbit, and 8Gbit, with 10Gbit used mainly for inter-switch links (footnote 19).

Footnote 18: The FC-PH standard, defining 250Mbit, 500Mbit, and 1Gbit FC, was approved in 1994. FC standards are developed in the INCITS T11 committee; see http://www.t11.org. Footnote 19: The FCIA roadmap made 8Gbit the mainstream successor, backward compatible with 2Gbit and 4Gbit; 10Gbit FC uses a different encoding and is employed chiefly for ISLs rather than device attachment.

Fibre Channel is layered: upper-layer protocols such as SCSI and IP are mapped onto FC through FC-4 mappings, and FC itself can be carried over wide-area transports such as ATM, SONET/SDH, and IP (Fig. 1 on p. 6 shows the upper-layer relationships). Fig. 7 shows the path from application through FC to the non-volatile media inside a RAID array.

Fig. 7 - FC path to storage media
Why Fibre Channel?

The usual first answer is performance, and not only megabytes per second. FC offers high bandwidth, but just as importantly very low, predictable latency: switch cut-through latency is on the order of 2 microseconds at 1Gbit, about 1 microsecond at 2Gbit, and about 500 nanoseconds at 4Gbit, since a cut-through switch forwards a frame as soon as its header is processed and that time scales down with line rate. For transaction workloads, latency matters as much as throughput. In practice, SAN designers rarely need to treat switch latency as a constraint; congestion, not switching delay, is what requires design attention.

Efficiency is the second answer. FC moves protocol work out of software: the HBA builds and validates frames, handles segmentation and reassembly, and maps SCSI onto FCP in hardware and firmware (footnote 20), so a host can drive wire-rate storage traffic with minimal CPU load; on the target side, RAID controllers do the same. "Efficient" also describes the protocol itself: FC flow control prevents frame loss under congestion, so the costly timeouts and retransmissions that plague storage over lossy transports (consider RAID 5 write traffic over a dropping network) simply do not occur in a sound fabric.

Footnote 20: This is a fundamental difference between FC HBAs and plain iSCSI NICs: with a NIC and a software initiator, the host CPU does the protocol work. Footnote 21: FC can also carry IP (IP over FC, an FC-4 mapping), though in practice IP traffic stays on Ethernet; in SANs, the FCP mapping of SCSI carries virtually all traffic.

For distance, FC extends to MAN and WAN scales by mapping onto IP, SONET/SDH, or ATM, as discussed later in this chapter.

Fig. 8 - Fibre Channel frames, sequences, and exchanges

FC structures data in three tiers (Fig. 8). Frames, the basic unit on the wire, carry up to 2KB of payload each; they are roughly analogous to Ethernet packets but are built and processed by HBA hardware. Sequences group frames into one unidirectional transfer (a sequence can comprise up to 65,536 frames), and the HBA reassembles them without per-frame host interrupts. Exchanges group the sequences that make up one operation, so a SCSI command, its data, and its status map onto a single exchange identified by an exchange ID. (iSCSI structures its traffic differently; see p. 51.)
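Related arithmetic: the time to serialize a full frame onto the wire halves with each doubling of line rate. A sketch of my own, using the standard payload rates (after 8b/10b encoding, 1Gbit FC carries roughly 100 MB/s, scaling linearly with the rate multiplier):

```python
# Payload bandwidth of FC links after 8b/10b encoding overhead.
FC_MB_S = {"1Gbit": 100, "2Gbit": 200, "4Gbit": 400, "8Gbit": 800}

FRAME_BYTES = 2112  # maximum FC frame payload

for rate, mb_s in FC_MB_S.items():
    us = FRAME_BYTES / (mb_s * 1e6) * 1e6  # serialization time in microseconds
    print(f"{rate}: {us:5.2f} us to serialize a full frame")
```

Serialization of a full frame (about 21 us at 1Gbit, under 3 us at 8Gbit) is distinct from, and larger than, the cut-through switching latency quoted above, because a cut-through switch does not wait for the whole frame.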
ISLs and IFLs

An Inter-Switch Link (ISL) is a Fibre Channel link between two switches. ISLs are the glue of fabrics: they carry data traffic between devices attached to different switches, plus the fabric-internal traffic that keeps the distributed fabric services coherent. On Brocade switches, a port discovers its role automatically: a U_Port (universal port) that detects a switch at the other end becomes an E_Port and brings up an ISL. Brocade native ISLs also support trunking, in which several parallel ISLs behave as one logical link with frame-level load balancing, improving both utilization and RAS (footnote 22).

Footnote 22: This is distinct from carving one physical fabric into virtual fabrics (VSANs); trunking aggregates links, it does not partition them.

An Inter-Fabric Link (IFL) is the routed analogue of an ISL. Where an ISL merges two switches into one fabric, an IFL connects two separate fabrics through an FC-FC router without merging them: on the router side the link terminates on a routed port rather than a plain E_Port, and each fabric keeps its own independent fabric services. IFLs are the connections that build a Meta SAN (see below); ISLs build a fabric.

Both ISLs and IFLs can be extended over distance. Dark fiber and active wave-division equipment (for example, DWDM) carry native FC to around 200km, and ATM, SONET/SDH, or FCIP carry it farther still (see the distance discussion later in the book).
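A recurring design quantity for ISLs is the oversubscription ratio: the aggregate bandwidth of devices that may need to cross an ISL group, divided by the bandwidth of that group (Chapter 8 treats this in depth). A minimal sketch of the arithmetic, with made-up port counts of my own:

```python
def isl_oversubscription(edge_ports: int, port_gbit: float,
                         isls: int, isl_gbit: float) -> float:
    """Ratio of potential edge demand to ISL capacity (e.g. 3.0 means 3:1)."""
    return (edge_ports * port_gbit) / (isls * isl_gbit)

# A hypothetical edge switch: 24 host ports at 4Gbit, uplinked by
# two 4Gbit ISLs trunked toward the core.
ratio = isl_oversubscription(edge_ports=24, port_gbit=4, isls=2, isl_gbit=4)
print(f"ISL oversubscription: {ratio:.0f}:1")   # -> 12:1
```

Whether 12:1 is acceptable depends entirely on how busy the hosts actually are; that judgment, and the common 3:1 starting point, are covered in Chapter 8.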
Fabrics and Meta SANs

A fabric is one or more Fibre Channel switches joined by ISLs that together act as a single network: one address space, one routing domain, one set of fabric services. Every switch in a fabric shares a common view of the name server, zoning, and routing databases (footnotes 23, 24). Most SANs historically consisted of one fabric, or of a redundant A/B pair of fabrics.

Footnote 23: FC end devices attach as N_Ports or NL_Ports (N for node). Footnote 24: A fabric is therefore both a connectivity domain and a fault and management domain: events such as zoning changes and fabric reconfigurations propagate fabric-wide.

Scaling a single fabric has limits, both practical (qualified switch counts) and administrative (everything shares fate). FC-FC routing (footnote 26) addresses this: fabrics are connected through routers over IFLs, and each fabric remains autonomous. The resulting network of routed fabrics is a Meta SAN. Each fabric in a Meta SAN is assigned a Fabric Identifier (FID), and connectivity between devices in different fabrics is granted selectively by defining Logical Storage Area Networks (LSANs): zones that span fabrics and admit only the listed devices. The routers translate addresses between fabrics and keep each fabric's control plane isolated, so a fault or reconfiguration in one fabric does not propagate into the others (Fig. 9).

Footnote 25: Brocade routing scales to many interconnected fabrics; see the companion book Multiprotocol Routing for SANs for the full treatment. Footnote 26: FC-FC routing is performed by the FC routers described earlier (AP7420, 7500, FR4-18i).

Fig. 9 - A Meta SAN

Fabric Services

Fibre Channel fabrics are plug-and-play because the switches themselves provide a set of distributed services that in other network architectures must come from external servers or manual configuration. When a device plugs in, the fabric assigns it an address, registers it, and tells it what it may talk to, with no per-host configuration. The principal services include:

- Domain Address Manager: assigns each switch a domain ID and manages fabric-wide addressing
- Domain Controller: handles fabric logins (FLOGI) and per-switch control functions
- FSPF: the link-state routing protocol that computes paths across the fabric
- FCRP: the routing protocol used between fabrics for LSANs in a Meta SAN
- Name Server (SNS): the directory in which devices register and are discovered by WWN
- Zoning: the access-control service defining which devices may communicate

Contrast this with building a storage network from generic IP/Ethernet gear, where naming, discovery, and access control must come from external services (DNS, DHCP, and the like) or from the end devices themselves: with iSCSI, discovery and much of the service infrastructure is the responsibility of hosts and targets rather than of the network. The FC model keeps storage-specific intelligence in the network, which is a large part of what makes SANs manageable at scale.

A note on terminology: this book generally says "SAN" for the Fibre Channel network, and devices can attach to it natively over FC or, through gateways, over iSCSI (footnotes 27, 28); multiple fabrics joined by FC-FC routing form a Meta SAN (footnote 29). Whatever the attachment, the pattern at connection time is the same: an N_Port or NL_Port logs in to the fabric (FLOGI) and registers with the name server (SNS), after which Brocade switches enforce zoning and provide routing for the device (see p. 122).

Footnote 27: iSCSI-attached hosts reach FC storage through a gateway that translates between iSCSI and FCP. Footnote 28: An "IP SAN" built on iSCSI is a SAN only in the loose sense; here the unqualified term means the FC network, with RAID arrays, tapes, and gateways attached. Footnote 29: VSANs partition one physical fabric, whereas a Meta SAN joins separate physical fabrics through routers; Brocade FC routing keeps fabrics autonomous rather than subdividing a shared one.
FC Topologies

Fibre Channel defines three topologies: point-to-point, arbitrated loop (FC-AL), and switched fabric.

Point-to-point directly connects two devices, as in DAS: a RAID array port cabled straight to an HBA. It uses FC as a faster, longer SCSI cable and plays no networking role; point-to-point is not a SAN.

Arbitrated loop (FC-AL) connects devices in a shared loop, as used inside JBODs (p. 18) and by older hubs, HBAs (p. 17), and RAID array back ends. All loop devices share bandwidth and must arbitrate for the medium, and a loop reset affects every device on the loop; NL_Ports are the loop variant of N_Ports. FC-AL survives mainly inside storage enclosures and for legacy device attachment. Brocade switches implement "phantom logic" in their ASICs, a form of Network Address Translation (NAT) between loop and fabric addressing, so legacy FC-AL devices can participate in a switched fabric through FL_Ports.

Switched fabric is the topology of real SANs and of the rest of this book.

ATM and SONET/SDH

As noted under "Fibre Channel" (p. 36), FC carries upper-layer protocols (such as SCSI and IP), and FC itself can be carried over wide-area transports; ATM and SONET/SDH are the classic carriers for extending FC across MANs and WANs.

ATM stands for Asynchronous Transfer Mode, a cell-switched service technology once widely used by carriers for both LAN interconnect and WAN backbones; FC-over-ATM gateways encapsulate FC traffic for transport across an ATM service. SONET stands for Synchronous Optical Networks; together with its international twin SDH (Synchronous Digital Hierarchy), collectively SONET/SDH, it forms the time-division optical transport hierarchy on which most metro and long-haul carrier networks are built. FC-over-SONET/SDH gateways map FC onto these circuits with deterministic bandwidth and latency, which suits storage replication well.

Both options deliver carrier-grade links for SAN extension; the choice between them, and IP-based alternatives, is usually dictated by which services are available and affordable at the sites in question.
IP and Ethernet

The Internet Protocol (IP) is the network-layer protocol of the Internet, carrying everything from e-mail to the web; in LANs it almost always runs over Ethernet. Application protocols such as HTTP and FTP run over TCP, which runs over IP. IPv4 addresses are 32-bit numbers written in dotted decimal, for example 192.168.1.1. IP is routable: routers forward packets between networks hop by hop, which is what lets a packet cross town or cross the Internet (footnote 30).

Footnote 30: Routability is a property of IP, not of the applications above it. Fibre Channel fabrics are likewise routable today: FC-FC routing performs between fabrics the role that IP routers perform between subnets.

The strength of IP/Ethernet is universality: one network for every application, inexpensive interfaces, and enormous vendor choice. Its weakness for storage is that this generality brings overhead and nondeterminism. Consider what a block copy can look like when fully wrapped for a general-purpose network: [ Xcopy over SCSI over iSCSI over IPsec over TCP over IP over Ethernet over 1000baseT ]. Every layer adds headers, processing, and latency, and TCP's loss-recovery model assumes that dropping packets under congestion is acceptable, an assumption storage tolerates poorly.

For these reasons, IP networks complement rather than replace Fibre Channel SANs. Where IP excels in storage is distance: an organization that already buys IP WAN connectivity can extend its SAN between sites over it. Brocade supports this with FCIP (and with iSCSI for low-end host attachment), treating IP as a long-haul transport for the FC SAN rather than as the SAN itself; IP SAN equipment from other vendors generally follows the same division of labor.

iSCSI

iSCSI maps the SCSI protocol onto TCP/IP networks; it plays the role for IP that FCP, the FC-4 mapping of SCSI, plays for Fibre Channel. Brocade treats iSCSI as a complement to FC: a way to attach low-end hosts to FC storage through gateways, not a replacement for the FC SAN core.

Fig. 10 compares the iSCSI and FC protocol stacks, and Fig. 11 compares their frames on the wire. The differences are telling. An FC frame carries FCP with minimal overhead and is processed by HBA hardware. An iSCSI PDU is carried over TCP, over IP, optionally over IPsec, over Ethernet, and each layer adds headers and processing (footnote 31). A full-size iSCSI transfer spans several Ethernet frames unless jumbo frames (a larger MTU) are enabled end to end, and jumbo frames are exactly the feature much older Ethernet infrastructure does not support cleanly. Even at equal line rate, then, iSCSI delivers less payload per link than FC, and claims that 10Gbit Ethernet iSCSI will simply overtake 4Gbit FC ignore where the bottlenecks actually are: in protocol processing at the hosts (Fig. 12).

Footnote 31: TCP segmentation means iSCSI PDUs need not align with frame boundaries; reassembly is additional work for the receiver.

Fig. 10 - iSCSI and FC protocol stacks
Fig. 11 - iSCSI and FC frames
Fig. 12 - Protocol processing: FC vs. iSCSI

Host-side cost tells the same story. With a plain NIC and a software initiator, the host CPU performs all TCP and iSCSI processing; TCP offload engines and iSCSI HBAs move this into hardware, but an iSCSI HBA has tended to cost about 75% as much as an FC HBA, eroding the price advantage that motivated iSCSI in the first place. iSCSI therefore makes economic sense mainly for hosts whose I/O needs are modest enough to run over a standard Gigabit Ethernet NIC, precisely the hosts for which FC attachment was too expensive.

There are environments where iSCSI has real merit: large farms of low-duty-cycle servers (web front ends, for example), remote hosts reachable only over IP, and low-cost tiers built on Serial ATA storage. Note also that iSCSI is block storage and thus differs from NAS: NFS and CIFS serve files, iSCSI serves volumes, and the two solve different problems; iSCSI does not subsume Network Attached Storage any more than it subsumes the SAN. Looking forward, Fibre Channel over Ethernet (FCoE) on datacenter-class Ethernet is emerging as another convergence path; like iSCSI, it will complement rather than displace FC SANs for the foreseeable future.

The practical pattern Brocade recommends: keep storage on the FC SAN, and attach low-end iSCSI hosts through an iSCSI gateway (the Brocade iSCSI Gateway, or the FC4-16IP blade in a Brocade 48000, or iSCSI HBAs where qualified), so they use the same arrays, backup infrastructure, and management as hosts attached over FC.
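To illustrate the per-frame overhead argument with round numbers (my own arithmetic, using standard header sizes; Ethernet preamble and inter-frame gap ignored, one iSCSI PDU per frame assumed), this sketch compares the share of each wire frame that is user payload:

```python
# Approximate per-frame overhead in bytes.
FC_OVERHEAD = 4 + 24 + 4 + 4              # SOF + frame header + CRC + EOF
ISCSI_OVERHEAD = (14 + 4) + 20 + 20 + 48  # Ethernet+FCS, IP, TCP, iSCSI BHS

FC_PAYLOAD = 2112                          # max FC frame payload
ISCSI_PAYLOAD = 1500 - (20 + 20 + 48)      # data left in a 1500-byte MTU

def efficiency(payload: int, overhead: int) -> float:
    """Fraction of bytes on the wire that are user data."""
    return payload / (payload + overhead)

print(f"FC:    {efficiency(FC_PAYLOAD, FC_OVERHEAD):.1%} payload per frame")
print(f"iSCSI: {efficiency(ISCSI_PAYLOAD, ISCSI_OVERHEAD):.1%} payload per frame")
```

The per-frame gap looks small on paper; the practical difference comes from where the processing happens (HBA hardware versus host CPU) and from how each protocol behaves under congestion.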

iFCP

iFCP was a gateway protocol for carrying Fibre Channel traffic over IP networks: each FC device's sessions were terminated at an FC-to-iFCP gateway, translated, carried over TCP/IP, and re-presented as FC at the far side. Unlike FCIP, which tunnels a whole ISL transparently, iFCP operated per device, mapping FC addresses to IP addresses at the gateways.

In practice iFCP saw little adoption: it was implemented by essentially one vendor, and the problems it targeted are solved more cleanly by FCIP for WAN transport combined with the FC-FC Routing Service for fault isolation. It is mentioned here for completeness; new designs use FCIP and FC routing instead.

FCIP

Fibre Channel over Internet Protocol (FCIP) tunnels Fibre Channel between two E_Ports across an IP network: the tunnel behaves as a point-to-point virtual ISL, with FC frames encapsulated in TCP/IP at one end and unwrapped at the other (footnote 32). To the fabric, the two switches are simply joined by an ISL; the IP network in between, whether dedicated circuits or a corporate WAN, is invisible to FC.

Footnote 32: FCIP is strictly point-to-point per tunnel, unlike transports such as DWDM or Gigabit Ethernet infrastructure that can be meshed; connecting more than two sites means multiple tunnels, and since each tunnel is an ISL, the tunnels would merge fabrics unless FC-FC routing is used to keep them separate.

A typical deployment (Fig. 13) pairs FC directors (for example, Brocade 24000 or 48000) with FCIP-capable platforms (the Brocade 7500 or the FR4-18i blade) at each site: the FCIP device attaches to the local LAN/WAN routers, and the tunnel rides the IP WAN between sites.

Fig. 13 - FCIP tunnel

Because most organizations can procure IP bandwidth almost anywhere, FCIP is frequently the most practical SAN extension transport, alongside dark fiber, DWDM, ATM, and SONET/SDH. Brocade FCIP platforms add storage-specific optimizations for distance (including the FastWrite and Tape Pipelining acceleration described later). For the full treatment of FCIP design, see Multiprotocol Routing for SANs.
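One practical FCIP design check (my own illustration, generic TCP arithmetic rather than anything product-specific) is whether the TCP window is large enough to fill the pipe at the link's round-trip time, since a single stream's throughput is bounded by window size divided by RTT:

```python
def max_throughput_mbit(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on one TCP stream: window / RTT, in Mbit/s."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

# A hypothetical metro link (~1 ms RTT) and a long-haul link
# (~20 ms RTT), each limited by a 64 KB default TCP window.
for rtt in (1, 20):
    print(f"RTT {rtt:2d} ms: <= {max_throughput_mbit(65536, rtt):7.1f} Mbit/s "
          f"with a 64 KB window")
```

At 20 ms a default window caps one stream near 26 Mbit/s regardless of link speed, which is why FCIP platforms use large windows, multiple streams, and compression over long distances.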

Chapter 2: Why Build a SAN?

This chapter examines the business reasons for deploying SANs: the problems SANs solve and the returns they deliver. Each of the following sections takes one driver, from storage consolidation and high-availability clustering through data sharing, LAN-free backup, performance, and disaster recovery, and shows how networked storage changes the economics or the capability compared with direct-attached storage.

Storage Consolidation

Storage consolidation means pooling storage that would otherwise be scattered across individual servers into shared, centrally managed resources on a network.
In the DAS model (footnote 33), every server owns its storage outright, and every purchase of capacity is tied to one machine (Fig. 14).

Footnote 33: Directly Attached Storage; see Chapter 1.

Fig. 14 - DAS storage per server

The economic problem with DAS is white space: free capacity trapped on servers that do not need it. Because capacity on one server cannot be used by another, every server must be over-provisioned for its own worst case, and the unused margins add up across the data center (Fig. 15). Industry analyses at the time SANs emerged found DAS utilization around 30-50%; the frequently cited Merrill Lynch / McKinsey 2001 storage report put white space at roughly 70% of purchased DAS capacity (footnote 34). SAN-attached storage, by contrast, routinely runs at 80% to 90% utilization, which translates into 40-66% less capacity purchased for the same stored data.

Footnote 34: That is, most purchased capacity held no data at all. The white-space gap between DAS and SAN storage is, by itself, frequently enough to justify a SAN.

Fig. 15 - White space in the DAS model

To see the arithmetic, suppose a database occupies just over 100 gigabytes and grows slowly. With DAS, capacity must be bought up front on that server: giving it 300 gigabytes to cover future growth means roughly two-thirds of the purchase sits empty as white space (perhaps 102 gigabytes in use against 300 bought), while buying only 101 gigabytes means a disruptive upgrade within months. With a SAN, the server can be allocated about 101 gigabytes now and grown incrementally from the shared pool; the pool's free space serves the growth of all servers at once, so far less total margin is needed (Fig. 16).

Fig. 16 - White space: DAS vs. SAN

Consolidation also reduces operational cost: fewer, larger arrays are easier to manage, protect, and grow than dozens of server-attached disk packs, and capacity planning becomes one exercise for the pool rather than one per server. Consolidation does have prerequisites: the shared infrastructure must be sized and protected properly, since more applications now depend on it; the availability techniques for that are covered later in the book. With those in place, consolidated storage is both cheaper and easier to operate than the DAS it replaces.
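A toy model of the pooling effect (my own illustration with made-up numbers): provisioning each server for its own worst case versus provisioning a shared pool for the sum of actual use plus one shared margin.

```python
# Each server: (currently used GB, worst-case headroom it must reserve).
servers = [(102, 200), (40, 160), (250, 350), (10, 90)]

das_purchased = sum(used + headroom for used, headroom in servers)
used_total = sum(used for used, _ in servers)

# SAN pool: everyone draws from one pool, so one shared growth margin
# (assume 20% of total use) replaces per-server worst-case headroom.
san_purchased = used_total * 1.2

for label, gb in (("DAS", das_purchased), ("SAN", san_purchased)):
    print(f"{label}: {gb:6.0f} GB bought, {used_total / gb:.0%} utilized")
```

With these assumed numbers, DAS runs near 33% utilization and the pool near 83%, which is exactly the range of the industry figures cited above.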


High Availability Clustering

High Availability (HA) clusters keep an application running across the failure of a server: two or more hosts are configured so that if one fails, another takes over its workload. For this to work, every clustered host must be able to reach the application's storage.

With parallel SCSI, shared-storage clustering was barely possible: at most a pair of hosts could be cabled to one bus (Fig. 17), multi-initiator SCSI was notoriously fragile, and servicing one host could disturb the other. Cluster size and placement were dictated by cable limits rather than by application needs.

Fig. 17 - Two-host SCSI cluster

A SAN removes these constraints (Fig. 18). Any number of hosts can reach the shared volumes; cluster members can be physically separated for fault isolation; failed nodes can be replaced without touching the storage; and the same infrastructure carries multipathing for path-level redundancy. The storage itself is protected by array features such as RAID 5, so the cluster survives both server and disk failures.

Fig. 18 - SAN-attached cluster

Comparing Fig. 18 with Fig. 17, what was a rigid two-node arrangement becomes a flexible building block: clusters can grow, span rooms or sites, and share arrays with non-clustered hosts. SAN-enabled clustering and the shared-infrastructure models it supports are developed further in Chapter 3 ("UC and ILM," p. 85).
Data Sharing

Consolidating storage onto a network also lets data be shared among systems in ways DAS cannot. The crudest form of sharing is copying: with DAS, moving a data set between servers means FTP or similar transfers over the LAN, duplicating the data and consuming LAN bandwidth and time. File-serving protocols such as NFS and CIFS share data at the file level, but they interpose a file server and a LAN between the data and its consumers, which limits performance.

A SAN changes the mechanics: the data stays on shared arrays, and access is granted to whichever hosts need it, at block level and at storage speeds. Fig. 19 shows the pattern: multiple applications work against a shared pool rather than each keeping a private copy.

Fig. 19 - Data sharing through shared storage

Typical beneficiaries are multi-stage processing pipelines and data mining: one system writes a data set and another analyzes it, without bulk copies in between. (True simultaneous write sharing of one volume requires cluster-aware file systems or applications; the common patterns are serial hand-off and read sharing.) Shared access also composes with the HA techniques above: the same shared volumes that enable sharing enable failover, with RAID protecting the data itself.

LAN-Free Backup

Backup is the classic SAN driver. Through the 1990s, backup typically ran over the LAN (Fig. 20): each server pushed its data through its IP stack, across the production network, to a backup server that owned the tape drives. As data sets grew, this model broke down in several ways at once.

Fig. 20 - LAN-based backup

First, bandwidth: backup streams are huge, and on a shared, oversubscribed IP network they collide with production traffic. Backups had to be squeezed into nightly windows, and the windows kept shrinking as businesses moved toward 7x24 operation (footnote 35). Second, CPU: TCP/IP protocol processing in software taxes both the backed-up server and the backup server, so backup stole cycles from applications even when the LAN had capacity. Partial fixes, from dedicated backup VLANs and separate backup LANs to extra NICs in every server, added cost and complexity without attacking the root cause: the data path itself.

Footnote 35: Twenty-four hours a day, seven days a week; continuous operation leaves no natural idle window for backup.

Fig. 21 - LAN-free backup

LAN-free backup moves the bulk data onto the SAN (Fig. 21): servers stream data directly to SAN-attached tape, and only metadata crosses the LAN. The gains come from the nature of Fibre Channel: it is a lightweight protocol (footnote 36) processed largely in HBA hardware, so a server can saturate a tape drive at a fraction of the CPU cost of TCP/IP, and FC bandwidth (2, 4, and 8 Gbit, with trunking above that) comfortably exceeds what most tape paths can consume. Backup windows shrink accordingly, and the production LAN is left to production.

Footnote 36: "Lightweight" is architectural, not merely a speed claim: FC's lower layers (FC-0/FC-1) were designed together with its upper layers for channel traffic, whereas Ethernet's (802.2 LLC, 802.3 CSMA/CD) were designed for general messaging. Speed has also consistently favored FC: 2Gbit and 4Gbit FC shipped while Gigabit Ethernet (1Gbit) was the mainstream LAN rate, and FC HBAs offload far more than typical NICs. Raw comparisons also mislead when an about-to-ship Ethernet rate is set against a long-shipping FC rate; Brocade shipped 10Gbit (for ISLs) and then 8Gbit while GE remained the LAN norm.
Performance

Beyond backup, SANs improve day-to-day application I/O, and here the contrast with IP/Ethernet transports matters. Databases and On-Line Transaction Processing (OLTP) systems are sensitive to both storage latency and throughput, and moving them from DAS to a Fibre Channel SAN gives them dedicated multi-gigabit paths, deep arrays with large caches, and multipathing to spread load across links and controllers. Applications that had outgrown a single server's private disks can be striped across shared arrays sized for the aggregate workload.

iSCSI deserves a note here: block storage over the LAN inherits the LAN's contention and the host's TCP overhead, so while software iSCSI can reach respectable figures (on the order of 100MB/sec on Gigabit Ethernet), it does so at high CPU cost and shares the wire with everything else (see Fig. 10 and Fig. 11, p. 52). Moving from DAS to iSCSI is a sideways step for performance; moving from DAS to a Fibre Channel SAN is an upgrade.

Disaster Recovery and Business Continuance

The last major driver is protecting data and operations against site-level events: disaster recovery (DR), restoring service after a failure, and business continuance (BC), keeping service running through one. (The pair is sometimes called Business Continuity and Availability, BC&A.) Both require copies of data at distance, and distance is exactly what DAS cannot provide and SANs can.

Fibre Channel supports metro distances natively: long-wave SFPs drive tens of kilometers of dark fiber, and wave-division equipment such as DWDM multiplies and extends those links. For longer reach, FC maps onto carrier transports, SONET/SDH and ATM, or rides IP WANs via FCIP (footnote 37). Over these links, arrays replicate volumes between sites synchronously or asynchronously, and hosts at the recovery site can mount the replicas.

Footnote 37: The transport options and their distance and latency characteristics are treated in detail in Chapter 11.

Fig. 22 - Business continuance SAN

Fig. 22 shows the pattern: redundant A/B fabrics at each site (see Chapter 9, "Availability," p. 296), stretched between sites over distance links, with trunked Brocade ISLs sized for the replication traffic (see Chapter 8, "Performance," p. 227). The sites connect over dark fiber, DWDM, CWDM, SONET/SDH, ATM, or IP as available. Designs of this class, covered throughout Part Two, let organizations survive events up to and including the loss of an entire data center.
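Distance imposes physics on replication: light in fiber travels roughly 5 microseconds per kilometer one way, so a synchronous write that must be acknowledged by the remote site pays at least one full round trip. A sketch of that floor (my own arithmetic, ignoring equipment and protocol delays):

```python
US_PER_KM = 5.0  # approximate one-way light propagation in fiber

def sync_write_floor_us(km: float, round_trips: int = 1) -> float:
    """Minimum added latency for a synchronous remote write, in us."""
    return 2 * km * US_PER_KM * round_trips

for km in (10, 100, 1000):
    print(f"{km:5d} km: >= {sync_write_floor_us(km):7.0f} us per write")
```

This is why synchronous replication is a metro-distance technique, and why long-haul designs switch to asynchronous replication regardless of how much bandwidth the link provides.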
Chapter Summary

Few organizations deploy a SAN for a single reason; the drivers in this chapter compound. Consolidation reduces what is bought, clustering and DR protect what runs, and LAN-free backup and shared data paths reduce operating friction, all on the same infrastructure at once. The strategic effect is flexibility: once storage is a network resource, new servers, applications, and sites attach to existing capacity instead of creating new silos (footnote 38).

Footnote 38: This is also why SAN value grows with scale: each driver's savings apply to an ever-larger share of the environment as it is consolidated. The chapters that follow turn from why to how.
Chapter 3: UC and ILM

Utility Computing (UC) and Information Lifecycle Management (ILM) are architectural movements that shape how SANs are designed, and SANs are in turn the foundation on which both are built. Both terms suffer from marketing inflation, with vendors attaching them to almost anything, so this chapter begins by pinning down working definitions before examining what each model requires from the storage network.

The two are related but distinct. UC is about delivering computing resources, servers, storage, and applications, as a flexible, centrally pooled service; ILM is about managing data according to its value and age throughout its life. Both depend on the any-to-any connectivity, pooling, and mobility that SANs provide, and a SAN designed today should be evaluated against the requirements of both, whether or not either is being deployed yet.

Defining UC and ILM

Neither UC nor ILM can be bought in a box: both are strategies, implemented incrementally. A useful way to compare them is by what they manage and for whom, as Table 1 summarizes.
Table 1 - UC and ILM compared

UC: manages infrastructure resources (servers, storage, and network capacity); its consumers are applications and the business units that run them; its goal is efficient, flexible allocation of capacity.

ILM: manages data and information; its consumers are the owners of that information; its goal is matching the cost and protection of storage to the value of the data over its lifetime.

Both models serve the same end, aligning IT resources with business value, and both presume a networked storage foundation. A note on value before taking each in turn: information has measurable worth and measurable carrying cost, and both change over time. Some data (transaction records during a billing cycle, for example) is intensely valuable when fresh and nearly worthless later; other data (compliance archives) is rarely touched but must be kept and protected for years. Any strategy for infrastructure efficiency has to account for this variation, which is the thread connecting UC and ILM.
Utility Computing

Utility Computing (UC) means delivering computing capacity the way a utility delivers power: consumers draw what they need from a managed pool and are accounted for accordingly, rather than owning dedicated equipment sized for their worst case. Definitions across the industry share three elements:

1. Centralized pooling of IT resources (servers, storage, and the network between them).
2. Flexible, on-demand allocation of those resources to applications.
3. Measurement, so usage can be accounted and charged back.

The physical model of Utility Computing (Fig. 23) pools processing on one side, often blade servers or SMP machines, and storage on the other, with the SAN as the back end tying the pools together and the LAN carrying client traffic in front. Grid computing is a related expression of the same idea, aggregating many computers into one schedulable resource (footnote 39). In all variants the essential property is that resources are fungible: a workload can be placed wherever capacity exists, because every server can reach every storage volume.

Footnote 39: Terminology varies by vendor; "utility," "on-demand," "adaptive," and "grid" computing overlap heavily. This book uses UC for the family.

Fig. 23 - Utility Computing architecture

Fig. 24 shows the layered view of UC used in this chapter.

Fig. 24 - The UC stack

The network requirements of UC follow from the model. Resource pools may be distributed across the data center, the campus (MAN), or beyond (WAN), and any server may need to reach any storage, so the back end must provide any-to-any connectivity with consistent performance. Availability requirements are high: a shared utility concentrates many applications' fate onto common infrastructure, so HA design (redundant fabrics, multipathing, failover) is a precondition rather than an option; consider the difference between one web server failing and a shared utility failing. The SAN is the natural back end, since it already provides pooling, shared access, and the mobility that lets workloads move between servers without moving data. Front-end resources (client networks, application tiers) ride the LAN as before; UC changes the back end most, which is why automation and management tooling against the shared pools figure so prominently in it.

Deployment is incremental; a representative sequence:

1. Consolidate storage onto a SAN and establish shared pools
2. Standardize server platforms so capacity is interchangeable
3. Bring provisioning of storage and servers under common management
4. Instrument usage for measurement and chargeback
5. Layer UC management and automation on the pools (Fig. 24)
6. Migrate applications onto the utility in phases
7. Extend UC practices across sites as scale grows

Benefits of Utility Computing

The core benefit of UC is efficiency through sharing: capacity is purchased for the pool's aggregate need rather than for each application's worst case, raising utilization on servers exactly as SANs raise it on storage. Flexibility follows: new applications are deployed by allocation rather than procurement, and seasonal or bursty workloads borrow capacity instead of owning it year-round. Measurement brings accountability: IT can show business units what they consume and charge accordingly.

A concrete expression of UC is Application Resource Management (ARM): software that matches application workloads to pooled resources automatically. In a 7x24 operation, demand moves around the clock and the calendar, and ARM-style automation reallocates servers and storage to follow it. Its functions typically include:

- Monitoring application workloads and resource consumption
- Allocating and reclaiming server and storage capacity against policies
- Automating provisioning steps that would otherwise be manual
- Enforcing priorities so business-critical applications win contention (policy-based arbitration)
- Reporting usage for planning and chargeback

With such automation, the utility absorbs demand spikes (month-end processing, for example) by temporarily shifting capacity and then returning it, something no collection of dedicated silos can do.
Challenges of Utility Computing

UC also has costs and risks, and an honest design accounts for them.

Shared fate. Pooling concentrates risk: an infrastructure fault, or a sizing error, now touches many applications instead of one; the consolidation wave around 2001 taught this lesson repeatedly. The mitigations are the availability techniques of Chapter 9 (redundant fabrics, failover, careful fault-domain design) plus capacity headroom in the pool.

Organizational friction. Utility models change who owns what: business units accustomed to dedicated equipment must accept allocation from a pool, and IT must operate to service levels rather than to boxes. Chargeback definitions, service-level agreements, and internal politics are frequently harder than the technology.

Front-end limits. UC automation governs the back end well because the SAN makes back-end resources fungible; front-end and client-facing resources are harder to pool, so expectations must be set accordingly.

None of these are arguments against UC; they are arguments for deploying it incrementally, on infrastructure designed for it.
Deploying Utility Computing

Practical UC deployments start from the back end. Storage consolidation on a Fibre Channel SAN is the first and least risky step, and it delivers value on its own even if the rest of the utility program stalls; server pooling and automation then build on it. A few practical considerations:

Scope the utility to what pools well. Back-end resources (storage, and servers against the SAN) pool naturally; front-end resources pool poorly. Early phases should automate where fungibility is real.

Define the units of allocation and charging. Metering needs agreed units, and raw specifications mislead: a 1GHz processor in platform x is not the same resource as 1GHz in platform y. Define normalized units and publish them before chargeback begins.

Plan for growth in the fabric. A utility concentrates I/O, so the SAN must have the port counts, ISL bandwidth, and topology headroom to absorb workloads that move. Brocade's Application Resource Manager (ARM) is an example of provisioning automation built against the Brocade SAN: it coordinates server and storage allocation so applications can be deployed onto pooled infrastructure without manual rewiring.

Keep the availability story ahead of the consolidation story. Every step that pools more applications onto shared infrastructure must be matched by redundancy and failover design, or the utility becomes a single point of failure for the business.
Information Lifecycle Management

Unlike Utility Computing, ILM (footnote 40) centers on the data itself rather than on infrastructure; the SAN's role is to provide the mobility and tiering that ILM policies require. The Storage Networking Industry Association (SNIA) defines ILM broadly: the policies, processes, practices, and tools used to align the business value of information with the most appropriate and cost-effective infrastructure, from the time information is created through its final disposition.

Footnote 40: ILM is related to Data Lifecycle Management (DLM); the usual distinction is that DLM manages data as bits while ILM manages information, that is, data plus business meaning and context. Vendor usage varies; this book treats ILM as the broader term.

Two ideas are essential to ILM:

1. The value of information changes over time, and IT spending on a given piece of information should track its value.
2. Managing this by hand does not scale; ILM means policies and automation built on best practices, not heroic manual housekeeping.

The lifecycle view (Fig. 25) follows information from creation through active use, decline, archive, and disposition. At each stage the appropriate storage differs: active transactional data belongs on fast, highly protected arrays; aging data belongs on cheaper tiers; archival data belongs on the cheapest media that meet retention and compliance requirements, down to tape or optical.

Fig. 25 - The information lifecycle

Fig. 26 shows the infrastructure view: tiers of storage, from fast arrays through midrange arrays and near-line disk (including RAID built on inexpensive disks or JBOD-class enclosures) to tape, joined by a network that can move data between tiers without disturbing applications. The SAN's any-to-any connectivity is what makes tier migration an online operation rather than an outage.

Fig. 26 - ILM infrastructure tiers
Implementing ILM

ILM is implemented in degrees, and partial ILM is still useful ILM. Long before the term existed, administrators practiced its crudest form: watching for stale files and moving them off premium storage by hand, or archiving old projects to CD-R. What has changed is scale, since data grows faster than any manual process, and automation: modern ILM tools classify data and apply movement and retention policies automatically, and SAN infrastructure executes the movement without application outages. The SAN is what turns tiering from a filing exercise into an online operation: arrays of different classes, tape, and migration engines all share one network.

A representative deployment sequence mirrors the UC list:

1. Classify information and define its lifecycle stages
2. Define tiers of storage and map lifecycle stages onto them
3. Consolidate storage onto a SAN so data can move between tiers
4. Select tools that classify and move data by policy
5. Layer ILM management on the tiers (Fig. 26)
6. Migrate data sets into the managed lifecycle in phases
7. Review ILM policies as business value and regulation change

Benefits of ILM

The direct benefit of ILM is cost: most data is not hot, and moving cold data to cheaper tiers releases premium capacity, so the same budget holds far more information. Protection improves too, because protection effort concentrates where it matters: critical data gets replication and frequent backup, while archives get cheaper, retention-oriented protection. Compliance is the third driver: regulation increasingly dictates how long classes of records must be kept, how they must be protected, and how they are disposed of, and ILM's classification and policy machinery is the practical way to demonstrate this. Finally, operations improve: searching, backing up, and managing the active data set gets easier when it is no longer buried in archival bulk.

Challenges of ILM

ILM's difficulties are mostly about classification. Someone, or some tool, must decide what each data set is worth, and value is contextual: the same record may be worthless to one process and legally critical to another, and its value changes with age and with events such as litigation or audits. Policies must therefore be written by people who understand the business, not only the storage. Automated classifiers help but misclassify; manual classification does not scale; real deployments mix the two. There is also an infrastructure prerequisite: data must be mobile. ILM presumes storage tiers connected such that migration is routine, which returns the discussion to the SAN, since without networked storage, lifecycle "movement" means disruptive copies between servers.

ILM in Practice

An ILM policy ties a class of information to rules: where it lives at each age, how it is protected, when it moves, and when and how it is destroyed. Classes need not be elaborate; even three (critical, active, archival) capture most of the benefit. For each class the policy states the tier, the backup and replication regime, the retention period, and the disposition; compliance-driven classes add mandatory retention and audit requirements.

The supporting tools span discovery (finding and classifying data), movement (migration between tiers, transparently to applications), and enforcement (retention and deletion). The SAN is the substrate for the movement layer: because every tier is attached to the same network, migration engines, whether host-based, array-based, or fabric-based, can move volumes and files between tiers while applications run.

For the SAN designer, ILM translates into concrete requirements: connectivity between all tiers (including tape), bandwidth headroom for migration traffic on top of production I/O, and a topology that does not trap any tier behind a bottleneck. These feed directly into the design discussion that follows.
Designing SANs for UC and ILM

ILM and UC converge on the same infrastructure demands, which is convenient: a SAN designed for one largely serves the other (Fig. 27). Both need any-to-any connectivity among pooled resources; both need high availability, since shared infrastructure concentrates risk; both need headroom, because allocation and migration move load around in ways per-application silos never did; and both reward scalable topologies, since pools grow.

Fig. 27 - A SAN serving UC and ILM

IT organizations rarely adopt either model wholesale. The pragmatic question for the SAN designer is therefore not "is this a UC project?" but "will this SAN support ILM and UC when the organization gets there?" Concretely:

Capacity and growth. Pools must run utilization high (that is the point) while retaining margin; a pool run at, say, 50% utilization wastes the model, while one run with no margin fails its users under HA events. Design for measured growth with non-disruptive expansion.

Tiers and mobility. Provide connectivity among all storage classes and to tape, with bandwidth for migration on top of production traffic; 4Gbit and 8Gbit links and trunked ISLs make tier movement an online background activity.

Fault containment. Use redundant A/B fabrics and, at scale, FC routing with LSANs, so that pooling does not mean one giant fault domain.

Economics. ILM's case is often stated as cost per unit of storage: if premium storage costs $x per unit and an archival tier costs $y, the recurring saving is the difference times the volume moved, growing as data grows. Chapter 5 ("Gathering Requirements," p. 149) shows how to carry such business requirements, capacity and growth, compliance, and performance alike, into the SAN design.
. , ILM
:
I:
ILM

(
).


DR (Disaster Recov ery).
,

.
II: , ILM,

. ILM
- ,
$x .
DR
,
(com pliance)


.
III: ILM ,

,

116

SAN

C3: UC ILM

SAN
,

,
()
,
,
.

SAN
.
, S AN
(HBA, ,
) SAN
UC.

,

,

.
,
UC, 1 ( .
87).
,

.

SAN

. -
,
,
SAN

UC.


,
.
117

SAN

SAN


.
,
SAN
.

4/8Gbit Fibre
Channel

(LSAN),
,

.
:
1.
?
,
,

.
,
, -
any-to-any.

,
FC LSAN
.
2.
?

ILM
UC .
3.

, ,
,
118

SAN

C3: UC ILM


?
,
ILM
UC, ,
. . 5:
,

SAN any-toany.

119

SAN

C4: SAN

4
4: SAN

SAN.



.
,
,

.
SAN, IT,
,
, . ,


,
SAN .
,
.

SAN
,
,
.
SAN,


121

SAN

(best
practices).

SAN,
,
,

.
,
.
,
,

-
.

,
SAN, -


,
. ,

.

, ,

().
, HBA-

Windows,
Solaris,
,
.
HBA : HBA PCI,
SBUS.
122

SAN

C4: SAN

Fibre Channel 1990, , ,



,
. Brocade

, , Fibre
Channel .

,

HBA RAID.
SAN
,

FC,
. SAN
(, iSCSI) ,
SAN
, ,

-
?

, ,
,
. ,
,
SAN.
SAN
,
. , HBA
123

SAN

,
,
, SAN,

VL AN.
,

( )

?
,
FC?


FC,
.

?

,
.

.

(, )
( )
?

.

,
fabric login (FLOGI) (SNS).
?
, ,
.
,

124

SAN

C4: SAN

(

, 41 .)
(,
, )
SAN
.


RAID? HB A
JBOD ? 42

(
)
SA N
,
. SAN
Ethernet L3,
DNS, DHCP,
WINS, NIS, LDAP .. ,

41
Brocade
.
OEM qualification.
, SAN .
42
,
SAN,
. ,
iSCSI IP.
SAN ,
FC IP .

iSCSI , iSCSI
, FC.

125

SAN

,
SAN.

FC ,

VSAN,
,
,
. 43

,
( , FSPF )
,

.

,


.
,

. Brocade
,
, ,

, ,
. ,
Brocade

,

43
,
.
, ,
.

126

SAN

C4: SAN


..



, .
, ,
1Gbit FC HBA, ,
FC,

point-to-point

,
-

.
Brocade
, ASIC

ASIC hardware assist
. Brocade ,
HBA ,
.
HBA ,
.
Brocade
HBA
.

127

SAN

, ,
,

HBA.


.
SAN
Brocade.



S AN,
, Fibre Channel
#1


FL_Port. SAN,

,

,
.



.


. ,
,
SAN

.

, ,
.
128

SAN

C4: SAN

SAN,

.
SAN:

(mesh)
/ (Core / Edge, CE)


6: .

,
(RAS)
RAS.
RAS

, . SAN
RAS
.

, ,

, - .

, ,
.

,
. (reliability even t)
,
129

SAN

.
,

SAN


. ,
-
,
.
S AN
:
1. ,
,


,
-
, ,
, ,

.
2.

, .
(. ).

(The standard metrics here are Mean Time Between Failures, MTBF, and Mean Time To Repair, MTTR.)

, , .
SAN
MTBF MTTR. 44
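These two metrics combine into the standard steady-state availability formula (general reliability math, not a figure specific to this text):

Availability = MTBF / (MTBF + MTTR)

For instance, a component with an MTBF of 100,000 hours and an MTTR of 4 hours is available 100,000 / 100,004, or about 99.996% of the time; holding MTBF constant, reaching five nines (99.999%) would require cutting MTTR to roughly one hour.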

44
, ,
. , SAN

130

SAN

C4: SAN

MTBF
MTTR
.

, MTBF
. ,

, ,
,
-
. ( -


.)

,
.

SAN

Br ocade Brocade
SAN

-
. ,
, Brocade
, ,
.
.
,
.
.

.
Brocade

backplane.
backplane . ,

.

131

SAN

, , Brocade 48000
Brocade 4100 ,
(
16-
!),
.
.



.

Fibre Channel,
:
(1) FC
, IP SAN
.
iFCP,
, iSCSI
20
.

132

SAN

C4: SAN

(2) SAN

VSAN,

FC
,
.
VSAN
,
, .
,

.

FC
,
.

,

.
, Brocade S ilkWorm 3800
, SilkWor m 3850
. , 3800
RAS,


3800,

.

3850
, (
) MTBF,
3800.
3800 - ,
3850 -
.
133

SAN

RAS
MTBF

,

,
. ,
,
.
.
,
SAN

. ( .
28
310 - 321.)
HBA SFP,
, SF P
. SFP,
SFP.
SFP
-
HBA .
- SFP,
, HBA
.

SAN,

,
.
SFP
,

134

SAN

C4: SAN

(,

),


, .
SAN

.
,
.
HB A,

,
.
.

, , ,
, ..
99.999% . ,
0.0001%
,
.

SAN

A/B

.
,
best-practice .

9:

( 296).

135

SAN

(Serviceability)
,
.
.
RAS,
, ,
.
,
SAN:
1. ,
. ,
,


.
2.
. ,
.
,

.
MTTR
, .
,
? MTTR,
, , .
,
.
,
,
,
,
136

SAN

C4: SAN

.


,
.
, ping
,
, crash
dump
.
,
,
. ,


.
,
,
.
,
, .


, Brocade

, .
,

,
,
SAN

. ,
(,
backplane )
, SAN,

.
137

SAN

SAN

, ,
, ,
.
.
, SAN IP Brocade

FC, FC over DWDM
FC o
ver
SONET/SDH. 1Gbit Ethernet

IP SAN, ,
4Gbit FC,

Fibre Channel.

Bro cade FCIP ,
,
IP ,
FC. (
SAN"
33). IP SAN
, SAN

FC, IP.

, ,

,
IP
SAN.
,
SAN

. ,

,
138

SAN

C4: SAN

.

SAN

.
S AN
8:
( 227).


, , ,
RAID
(

RA ID-
,
).
SAN

,

( ,


).

FC

16 ,

.

,
, SAN,
SAN.
( SAN Meta SAN
42).
139

SAN

, SAN ,

,
,
. SAN

.
SAN

.

7:
207.


, -
, SAN
.
SAN
,
.
, SAN


HA,


.
SAN
,
,

. , ,
SAN

,
-.

140

SAN

C4: SAN

,

SAN.
, ,

.

SAN ,
SAN ,
SAN ,
ILM UC ( . . 85),
.


SAN.
,

SAN, .

HA.

,


. , ,

.

141

SAN

FC SAN,
.


( , xWDM SONET/SDH).
,
.
IP-
,
IP SAN.

11:
(. 339).


SAN

,
.
,
,
SAN ,
SAN
.
SAN
SAN.
SAN,
Brocade, Brocade
plug-and-play

. Brocad e

,
SAN
IP- , ..
142

SAN

C4: SAN

,
WEBTOOLS.
,

SAN -
.
, S AN,
, SAN
(. 310)
, ,
,
.
SAN, , Bro cade Fabric
Manager,
.
,
,
, , ,
,
45. SAN
,

45

VSAN.
VSAN
A B,
, VSAN.
, , -
, VSAN.
, VSAN
HA SAN, ,
.
, Brocade ,

SilkWorm,
.

143

SAN
,
46.


SAN
A/ B.
,
,
, .

,
.
LSAN, Brocade AP7420, 7500
FR4-18i,

LSAN

(. . 315)

.
,
SAN (

10:

, 325.)
S AN
.

46

VSAN.
VSAN CP.
,
SAN
. x y ,
CP, .
VSAN
2x , - y
. , VSAN

144

SAN

C4: SAN

,
SAN.
.

,

,
, , ,
.
,
.


.
,
.

, , ,
SAN, SAN,
.
SAN
,

.
(

.
,
. Brocade Fabric
Manager

,
145

SAN

Brocade SAN
Health SAN Health Pro
fessional

SAN
best-practices

, ,
.

12:

373.

146


SAN



SAN

147

C5:

5
5:
SAN
.
, SAN,
SAN,

.

,
, SAN .
, . -
SAN

.
SAN.
,

SAN (
). SAN
,

SAN

.

.

.

SAN ,
149

SAN


.


.

-, ,
SAN.
,

.
.
,
SAN,

,

.

SAN

SAN,

. ,
, ,
, .
SAN

( SAN
,
).

:
I:
II:
III:
IV: (ROI)
150

C5:


(TCO) (

V: SAN

SAN
, - , ,
,
, -,
,
-,
, SAN.
,
I ( ),

.
,

,

.
III

. , ,

, , Fibre
Channel
,

,
,
.

,
SAN. (Return on
Investment, ROI) .
151

SAN

SAN x
.


,
- 100
.
ROI ,
.
.
SAN
(TCO).

,

. , SAN
TCO 50%.
TCO ROI
.
,

,
, ,
SAN

,
.

SAN

,

.

SAN , SAN
.
-

152

C5:

SAN. ,
,
,
.
SAN ,
,

.


, ,
.

SAN,
Brocade SAN Health
.



, ,
SAN.

SAN
SAN
SAN
SAN.


.

,
.

153

SAN


.

SAN

,
SAN

.


SAN ,
SAN,


. ,


. ,
,
. SAN


:

SAN


IP-

,

SAN:
,
, ,
/ SAN, ,
,
154

C5:

SAN .
,
.
SAN
,

. ,
,
:

CEO, CTO, CIO CFO ( ,


, -
)


- ,

,
-

SAN
SAN
, - SAN.
, SAN,
web- ,
Internet, web.
SAN,
SAN. ,
web- Inte rnet,

, SAN ,

.
155

SAN

,
,
,
.
,
S AN

.
,
.



. SAN
, SAN
. SAN
(
Brocade),
-
.
,
SAN,
, , FAN.

, SAN,

,

.

,
SAN
. (
- ).
,
, S AN.
156

C5:

,

.


SAN
,
SAN.
,
,
SAN

, SAN
.

,
.
,

SAN ,
. ,
,
, .
, SAN

.

, ,
SAN,
. ,
:

?.
,
- ,

157

SAN

.
ISL

. ISL
47.


,

.

SAN
, .
,
SAN:


.
50%.

, .


.

47

, ISL 4Gbit 10Gbit.


ISL , ISL
10Gbit, , ,
,
10- ISL. ,
4Gbit
32Gbit, , 10Gbit.
Dynamic Path Selection
256Gbit. 10-
ISL,
. ,
,
.

158

C5:


.

,
.
,

.
.

24 x 7 ,
.



. ,
,
.

SAN. ,
:

, .

.

SAN:

SAN


x ,
.
,
SAN . SAN
159

SAN

,
. , SAN

,
:

,
,
.

20%

, ,
,
. , ,


, SAN

.
, ,
.
, SAN

80%

.
SAN:
:

80%,
x
y .
SAN:
160

C5:

SAN
(, ,
).
SAN
-,
x y
.
SAN

x ,
y
.

.
,

-
-
.
(,
)? ,

?
-
? (
,
).

SAN,

,

.

161

SAN


SAN
,
,
(,
,
SAN) ,
(, ,

).

,
SAN. ,
(..
,
SAN)
SAN
SAN, Brocade SAN Health

.
.

,
:
,

,
.

, / .


.

162

C5:

,
SAN:

(, HBA,
, ,
) ?
(,
HBA, , ..)?

?

?
?

, ? (
.)

?

?

,
-
. ,
, .
-


HA SAN.

,
, ,
FC, VSAN
HA.

163

SAN

,
,
,
SAN.

.
,

SAN,


. SAN- Brocade

, .

:

164


SAN?
o



o


SAN
o


( SC, ST, LC),
( 9, 50, 62.5 )

C5:

( SMF, MMF)


,

-
.
o
4

2 ?
,
,

o (
)
o

,

,
SAN, ..
SAN,

.
,
,
. ,

SAN ,

,
.
165

SAN

,
.

SAN, ,
, ,

SAN

- .


,
SAN . ,
SAN ,

.

SAN
MAN/WAN.



,
SAN.
,
.

SAN,

,
, ,
, ..
,
SAN

.

SAN, .
11
(. 339).
166

C5:

,

. ,
,
,
SAN.


SAN.
SAN,
WA N MAN ( IP- )

.

,
SAN ,

.
SAN
SAN

SAN,
, MAN WAN,
, ,

,
. SAN
3000 .

SAN.
SAN

/
,



. SAN

167

SAN

,
(,

,
,
).
9: (. 296).
, (
)

.
, .
, , SA N.
,
,

.

Brocade 48000
DCX,
Brocade.
,
.

, S AN
.

,

. S AN
,
,
.
,

168

C5:

.


SAN
ISL
IFL (
). SAN,
, 10% - 15%
.
SAN

48, .

.
, SAN
,
.
50% ISL, ISL
. , ISL
IFL .

SAN,
, ,
, ,


(A/B) .
(
),
,

48
.. .

,
.

169

SAN

LSAN.

me tro
SAN

ISL IP-.

FC ( ) 49.


SAN
SAN.
SAN,

,
SAN, , ,
, , HBA, , , ,
..
,
.

,

.

(ROI TCO)

.
SAN ,
.

49

LSAN
.
Multiprotocol Routing for SANs.

170

C5:

SAN ,
.
,

(ROI)
(TCO).
ROI TCO
-
, SAN

-
SAN .

LAN ,
, LAN
, ROI
TCO LAN. S AN
.
SAN
,

. ,
SAN,
.
SAN,
.

SAN

SAN

SAN
.
,
,
, ,
171

SAN



( ).
SAN SAN.
-
ISL,
.

SAN. Brocade


. ,
Brocade
Fabric Manager.

Brocade SAN Health

.

Web- Brocade.

,


SAN
.

:
1. SAN
.
2.
SAN .

DNS
.
.
1 - ,
.
( 2 : 1).

172

C5:

,

.
3.
.

, -
.
4.
SAN SAN Health
.
,
, SAN,
,
.

, .

, ,
.
,
.
,

.
,

.
-
,
.

SAN,

SAN.

Brocade
S AN LSAN ,
173

SAN

,

.
SAN

,
.

SAN

-.
, ,
,
, -
,
.

174

C6:

6
6:
,
.. ,
.

,
.
SAN M eta SAN, ,
,
,

SAN.
Brocade Fabric Operating
System
SAN

. ,
,
.
50
,
.

50
,
:
,
,

175

SAN

,
. :
,
, , m esh ( )
/ (core/edge, CE ).
SAN
.
, CE
mesh. CE mesh
.


. (
, )
CE ,
.


, .



,
, ,
ISL IF L.
LUN
, ,

LUN.

, ,
.
, 2 RAID- JBOD,
176

C6:

,
,
.

SAN
SAN,

.
HBA .
, ,
,
, (
).
,


,
.

,

.

ISL

. . 28
.
,

SAN
,

. ,

.
177

SAN

SAN,
.

. 28 .

, ISL

. , A D . 28
A F,
ISL A B, B C, C D
E F.
ISL

,

.
. 28. SAN,
, A D
,
ISL AB, BC CD. E F
,
B E, C F. ,
178

C6:


,
. B-E
ISL, A-D:
BC CD.
C-F CD DE, .
,

.
,
,
, ..
. D
ISL C D,
A-D, . ISL
.
-

, .
, ,

,
,
.



.

(. 258)
, IS L
/
.

179

SAN


,
. . 29
SAN .

. 29



,

. ,
A,

,
E F.
F C, B
D 51.
,
D,

.
,
. FSPF
(Fabric Shortes t Path First) Fibre Channel

. ,
, A,
,
F, ISL AF, ,
51

180

C6:

B E.

,
, .
,
, ,

, .
(hop)
,

.

. ,

.
, . 29
,
ISL,
/, . ,
F,
ISL A F
.
,
,
. SAN
(),
.
MAN/ WAN,
MAN/ WAN
SAN.

181

SAN



. ARCNet,
FIDDI Token Ring .


. (
/.)

mesh
me sh

ISL
52. . 30 me sh
. Mesh

,
mesh
.



.
, , ,
mesh .

52

, full
mesh ( ). mesh ISL .
mesh
,
.

182

C6:

. 30 mesh

, me sh
.
. mesh

ISL

.
- ,
, mesh


ISL.
ISL mesh
,
,
,
.. mesh

,
ISL.
, 16-
mesh,
.
,
.
.
ISL,
183

SAN


. A
B . 30
ISL AB.
A
, B,
A
53.
ISL A B,

ISL ,
, . ISL mesh
,

.
SAN.
-
full m esh .
, 384- ,
. Mesh

MAN/WAN,
mesh. CE me sh
.
,
MAN/WAN me sh
CE.

53

ISL AB.
A B
. (
)
mesh, - .

184

C6:

/
/ (co re-to-edge, CE)

. . 31 . 32
(. 308) CE Fibre Channel.

,

. . 31
A - D, E F.

. 31 CE

. 32 /

CE SAN
:
,

,
Brocade

185

SAN

, SAN,
core / edge
.


.

.
.


. ,
A B . 31
ISL,
C D.

.


, ISL
/
SAN.



.
,
. mesh
CE .





.


186

C6:

SAN,
CE.

CE
CE
, CE

,

. Ethernet


ac tive/passive
Spanning Tree Protocol (STP). STP


,

.



. , full mesh

.
Full mesh
.
ISL,
,
,

.

mesh.

187

SAN


,
.
,

.
, FSPF,
active /active ( .
272)

,
,
ISL, (
). FSPF

.
CE,
Brocade,
,
,

.
,
8:
(. 227 ).

:

-
,
ISL.

.
ISL.

188

C6:

, ,
.

,
- .

CE
CE


.
ISL IFL

. ,
,

,
ISL/IFL.
Figure 33 shows one way this plays out in a core/edge fabric. The SAN is built from 16-port edge switches designed around a 7:1 host-to-storage fan-out, with an ISL oversubscription ratio of 7:1 as well.54 Dedicating two of the sixteen ports on each edge switch to ISLs leaves fourteen node ports per switch (fourteen nodes sharing two ISLs is exactly 7:1), for a total of 224

54 There is nothing magical about 7:1; many designers use a 3:1 ISL oversubscription ratio instead, and either more aggressive or more conservative values can be appropriate. The right ratio depends on the actual workload, so treat any fixed number as a starting point rather than a rule.

189

SAN

ISL 55,
24
SAN 80 56.

.

,
,
ISL, SAN
.

ISL,

55 Sixteen 16-port edge switches with 12 node ports each yield 192 node ports. At a 7:1 fan-out, 24 of those ports attach storage and 168 attach hosts.
56 For example, 66 host ports and 10 storage ports spread across 11 edge switches: 66:10 is approximately 7:1.
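The arithmetic in footnotes 55 and 56 generalizes easily. The sketch below is illustrative (the helper function and its parameter names are mine, not from the text); it computes node ports, host-to-storage fan-out, and the ISL oversubscription ratio for a core/edge fabric built from fixed-port edge switches:

def ce_ratios(ports_per_switch, isls_per_switch, edge_switches, storage_ports):
    # Node-facing ports left over after dedicating some ports to ISLs.
    node_ports = (ports_per_switch - isls_per_switch) * edge_switches
    host_ports = node_ports - storage_ports
    fan_out = host_ports / storage_ports            # hosts per storage port
    isl_ratio = (ports_per_switch - isls_per_switch) / isls_per_switch
    return node_ports, host_ports, fan_out, isl_ratio

# Footnote 55's numbers: 16 switches x 12 node ports = 192 node ports,
# of which 24 attach storage and 168 attach hosts -> 7:1 fan-out.
print(ce_ratios(16, 4, 16, 24))   # (192, 168, 7.0, 3.0)

# Two ISLs per switch instead gives 14 node ports each (224 total)
# and a 7:1 ISL oversubscription ratio (14:2).
print(ce_ratios(16, 2, 16, 24))   # (224, 200, 8.33..., 7.0)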

190

C6:

. 33

,

,
, , Brocade 48000 DCX Backbone.

,
.
,

191

SAN

/ ,

/
,

. /
( )
,
.


ISL/IFL.


Brocade 200E, 4100, 4900, 5000, 7500,
7600, 48000
Dynamic Path Selection (DPS),
CE. (. 272)

, ASIC,

. DPS
, ASIC

.

CE
CE .
, -

HA.
,
fan-out,

192

C6:

,
.
HA
. SAN

(. 310)


. ,

.

, . 34.
,
,
.

. 34 CE

,
HA SAN
,

. 9:
,

,
.

A, B.
,
,
multipathing A.
, /
193

SAN

HA,
CE B.
. 35 .

HA SAN, - .
CE, A, ,
HA,
B.
, HA,
B A
.

. 35 A/B


CE,

-
.
SAN
(. . 36).

194

C6:

. 36 HA -HA

,
CE

, ..
SAN,
, CE

.

Meta SAN /
,
CE SAN,
Fibre Chann el.
SAN iSCSI,
Ethernet .

F C-to-FC router (FCR) Brocade AP7420, 7500
FR4-18i Meta SAN 57, LSAN

57
. SAN Meta SAN . 42
.

195

SAN

Meta SAN
CE.

. 37 CE

. 37

CE , . 38 CE
Meta SAN. ,

, CE,
CE Meta SAN n CE
.

. 38 - CE Meta SAN

. 38 n .
37 (. . 39).

. 39 - CE Meta SAN CE

196

C6:

,
, Meta
SAN. Meta SAN
Multiprotocol Routing for SANs,
Brocade SAN Administrators Bookshelf.


Brocade ,

-
.

SAN

-
SAN. . 40
.
-
.

. -
HBA,
backplane
. -

HBA
.

197

SAN

. 40
.

SAN

(
)
,

.
, -
,

. A B
-.
LUN
,
.

IP-
.

SAN

198

C6:

.
( 10 )

CE.

ISL,

.

. ,
100 - (
10 ), 1000
,
.
100
, .
. 41 , -

.

ISL
E_Port.
( )

.

pass-through ,
,
, HBA,
,
.

,
..

,
, . ,
- ,
199

SAN

passthrough .

. 41 E_Port


: (1)
, ,
FC, (2)
Brocade Access Gateway
,
-
,
.

,
,
,
,
200

C6:



-
. ,

.

,
,
.
, Brocade ,
. Access Gateway Fabric
OS 5.2.1b. NPIV (N_Port Id Vir tualization)

,
SAN -.

201

SAN

. 42
NPIV .

. 41 . 42 .
. 42 F_ Ports
E_Ports. Access Gateway

CE,

Access Gateway
Access Gateway
CE.
Access Gateway ,
E_Ports
.
, Access Gateway,
Access Gateway
,
HBA.
202

C6:

HBAs ( , Na me Server) Access


Gateway ,

switch-to-switch Access
Gateway . NPIV
. 43.


(rebuild) ,

,
. ,
,
. ,

,
FC .

203

SAN

. 43 NPIV

SAN

.
,
Access Gateway

, ASIC Goldeneye, ,
4Gbit Brocade 200E.
, Access Gateway

,
AS IC. , Brocade 200E

Access Gateway
Brocade 48000
McDATA, ,
,
Access Gateway
Goldeneye, ,
, ,
NPIV.
204

C6:


CE

.
CE,
, ,
.
full mesh, me sh
, WA N
(. . 44).
.
CE
A/B. A CE

1
A 2.

FC-to-FC (

-
).
CE

CE.

F ibre Channel.
FC ISL.


Brocade Extended Fabrics,

(buffer-to-buffer) .
Extended Fabrics


Brocade.

205

SAN

. 44 CE


11: (. 339 ).

206

C7:

7
7:

SAN
,

.
,
.



SAN ,
,
,
SAN

3000-
384- ,
64-
32- .
,
.
,
, ,
207

SAN

, .

,
ISL, SFP E_Port , , ,
.. 300-
32-
Brocade 48000,

.


/ (.
258), ISL
.

,
/ -
.

-



ISL.

.
, ISL
/,
SAN ISL.

, ,
, Brocade Fabric Manager SAN Health,
,
.


,
208

C7:

,

FC.

. ,
100 10
,

, . ,
,

.
,
.


Access Gateway

, Brocade
48000, -
,
pass-through. ,

.
Brocade Access
Gateway ( . 197),
HB A,
.

S AN
,

209

SAN

, ,
, .


,

, ,
IP- . SAN
,
SAN
.
,
.
, 10
5
. ,
25 ,
.

. SAN
SAN A/B

,
. B
A,

A, B,
,
, . 12,5

.


SAN, SAN
, .. ,
210

C7:


LUN ,
.
, RAID 20 ,

. 12,5
20 1250
.
10 A 10 B
.
-
A B,
LSAN Meta SAN A B.

SAN
.
A/B
, ,
,
, .



SAN?
,
,

SAN. SAN
, ,
, ,

SAN,
,
,

.
211

SAN


SAN ,
,
SAN.

,
SAN
.
SAN

ISL IFL
SAN
, -
ISL IFL,
ISL.
:

,

?
ISL IFL.

?

.



, .




. ,

,
-

212

C7:


.


ISL (
).
ILM UC, SAN
. , Brocade
4Gbit SAN
1Gbit 256Gbit/s
.

ISL IFL
SAN
.
ISL IFL,

,
(,

).

,
SAN
.

(
)
/
,

,
, .
213

SAN

,
,

. 8-
SilkW orm 3250
Brocade 200E

8 ,
.
,
,
, SAN.
200E
8
.
,

.
( ),

.


.
SilkW orm 3850
.

ISL. ,



/ ISL.
-
SAN
. SAN
,

214

C7:

. ,

,

S AN

.


SAN
,

. ,
SAN
,
.

, ,
,
.


, Brocad e
,

,
,
.
FC

.
, ,
,
(
215

SAN

FC-AL
(, JBOD).

, 16.7
(256 3).
FC
, , ,
-
FC-AL, 7.7
(239
256 127 FC-AL).


FC-AL, ,

1.2 .
,
60 .



58. ,

FC
Brocade.
,
. ,
31
256 ,
7000 . Bro-

58

Brocade
600 , 100
.

216

C7:

cade,
.

S
AN

.
, Brocade,
,
-
.
, Brocade

,
.
, ,

.



SAN

.


SAN,

, -

,
.

, .

.
217

SAN

,
-


.
,
,
,

. ,
.
,
, ,

- ,
SAN

,
.



,
.
SAN,
, SAN,
, ,

.
Ethernet

.
,
Ethernet

- .

Ethernet ,
218

C7:

, ,
,
-
.

,
.
Ethernet,
IP- C, ..
250 , , 250
,
.


FC, , Brocade 7500.


,
, . Brocade

Fabric Operating System , OEM Brocade
. ,
Brocade,

,

.

,
219

SAN

. (a)
(b )
.

,
,
,
. SAN
,
,
-
, .

SAN,

.

( . 215),
Fibre Channel
,
.
.
-

,
SAN

.

SAN, ,
Fibre Chann el iSCSI.

.
220

C7:


, SAN
,

. 59
, Brocade Zoning

switch-to-switch zone transfer protocol


.
,
,
,
.


.
,
60,

.

,
.
, NS

,

59

.
,
, . iSCSI
.
60
,
,
.

.

221

SAN

.
,
-
.
-
, ..


0.01%.
, SAN

.

,
.

( ),
NS
,
,
S AN.
.


, ,

, ,

.
, Brocade
222

C7:

FC
-

. SAN
,
,
SAN, Brocad e

,
.

,

SAN.


VSAN ,

,
,

VSAN ,
. ,
VS AN


VSAN.
, -
,
SAN. Brocade
Virtua l Fab rics,
,
.

Brocade LSAN

223

SAN


.
SAN .

Meta
SAN

,
. ( .
SAN Meta SAN . 42.)
LSAN

,

.
, IP
Ethernet,

. , S AN

,

.

FC Ethernet,
IP-
. ,

Ethernet
IP,
SAN
, VSAN.
IP
Layer 3 VL AN, DNS,
DHCP, WI NS, NIS+, NTP, LDAP, iSNS,
RADIUS
STP, RIP
OSPF. IP ,

224

C7:

FC SAN Brocade .

FC,

,
.

, S AN

, .. ,
SAN,

A/B
(


A B).

(A
B),

FC

.
-, Access Gateway.
,
.


SAN
,
. ,
mesh 100
, (a)
, (b) ,
(c)

(d)

,
225

SAN

.
.

SAN
, .
,

.

, ..
.

(,
A/B),
SAN.
LUN
,
(. 176)
.
DR BC.

HB A. ,


.
,

.
,

/.

226

C8:

8
8:
SAN

,
.

,



, SAN
Fibre Channel.

,

IP Ethernet.


, ,
.
IP- web-
,
9600 baud.
IP-
, web227

SAN

web- ,

.
SAN ( ERP-
LUN )
SAN ,

SCSI

,
, , SCSI
,
. SAN
, SAN
,


.
-

.
, SAN
. Fibre Channel SAN


-
.

.
,

, SA N
228

C8:

,
SAN,

SAN,

, .

SAN SAN,
.
,

SAN ,
.
, 4Gbit
Fibre Chan nel 800MHz.
8Gbits (
full duplex),
SAN 16Gbits, 800


.
SAN.

, SAN,
229

SAN

,
Fibre Channel,

FC.
,
4Gbit FC HBA, , Fibre
Channel S AN
.
Fibre Channel
, RAID-
1Gbit,
2Gbit 4Gbit,
.
SAN
,
200Mbit,
, SAN.
FC HBA
1Gbit 2Gbit
.

iSCSI, ,

, , RAID-
JBOD
,
SAN,
RAID- .

230

C8:

SAN

SAN, ..

.
,
-
.
Fibre Channel

,



(full-duplex). Brocade 5000

Fibre
Channel - (1U)
, Brocade 5000

256Gbits,

SAN.
IP SAN, iSCSI, (
),
.
,

iSCSI,
4Gbits 10 Gbit
Ethernet.
4Gbit FC
(

iSCSI FC
SAN 1: SAN, . 33).
231

SAN

, SAN, -
, iSCSI

-
FC.
, SAN


. iSCSI,



iSCSI.
,
.
, ,

.
Fibre
Channel , FC

SAN.
. SAN

iSCSI,
, Fibre Channel. iSCSI
NAS ( ,
CIFS NFS), FC
.



- 1Gbit,
2Gbit, 4Gbit, 8Gbit, 10Gb it, 16Gbit, 32Gbit, 256Gbit
- .
,
232

C8:
Brocade

256Gbit

Brocade
1Gbit. SAN
iSCSI
,
, SAN,
.
.
,

.
,
10Gbit
, 4Gbit - (. .
381 384).
,
.
4Gbit ,
10Gbit .


4Gbit, 10Gbit, 10 Gbit

(. 10Gbit
DR/BC . 364.)

. SAN

1Gbit, ,

,
233

SAN

, ILM UC ( 85)
.
,

, ,
Brocade 48000
1Gbit

4Gbit,
, .

.
Brocade

.

Brocade ISL Dynamic Path Se lection (DPS)
256Gbit .



SAN. ,
iSCSI,
Fibre Channel,
iSCSI SAN.
SAN


(over-subscription)
,

, .
234

C8:
,
,
. ,

, -
,

61, ,
- .
SAN,

ISL. ISL,

. ,
, ,
.
,
.

,



.

,
.
,
, .

61

over sold,
.

235

SAN

(
),
/
.


,

. ,
Internet,
.
,
.


SAN.

4Gbits/sec ISL
4Gbit/sec SAN,
,
.
SAN
( Inte rnet),
SAN


,
,

,
,

.
,
SAN. 16-
236

C8:

/,

SAN

ISL ,
14 . ,

ISL.
SAN

,

, SAN.

,

.
:


,

ISL, .

, Brocade 48000
- 384
4Gbit Brocade
48000. 10

384- 48000
, .
.
, ,
16- 32-
.
4Gbit FC
ISL , ,
237

SAN

1Gbit FC
Ethernet.

.
DR BC, ISL 10Gbit.

32Gbit
4Gbit IS L

Dynamic Path Selection,
256Gbit. ,
ISL,

.

(HoLB)

,
,

.
.
,

.
,
.
Head of Line
Blocking ( , HoLB)
,
.
,

238

C8:
,
.

, ,

.
Brocade
, SAN
.

HoLB
crossbar .
,
,

.



.
FC (
), FC
CRC

,
SAN ,
.

,
(,
SCSI),

-
.
Fibre Chan nel SAN

( -
239

SAN

).
. (..


,
.)

S AN,

-
IP WAN,
.


. SAN
,
,
. ,
Brocade,
,
.

cut-through


.
store and forward,

,
.


. ,
,
. ,
, ,
240

C8:
,
/ HBA, ,
.
,
, , ,
.

-
,

.
( )
, ,
, . ,

.

,
ISL IFL . ISL
IFL, ,
(, hop count) .


Brocade ISL

. ,

. Brocade
, 700 .
, ,
,
14 ,

241

SAN

. -
FC
, .
, SAN
,
,
IS L ,
,
ISL.
,
- ,
.
,
store-and-forward,
.
,
. ,

, ,
,
.

.
Fibre Channel
,
,
,
.

.

xWDM,
, (a)
(b)
.
242

C8:

Brocade Brocade Exten ded Fabrics,




,
. Brocade
FastW rite
.
, FC SAN
Brocade

.
IP SAN .
, SAN
FCIP - ,
FC- IP,
. , IP
WAN
,
, FC,
- SAN
.


, ,


.

243

SAN

,
.

,

, ,
, SAN.
,

. ,
,
,
,
,
S AN
. ,
, ISL
,
. ,

ISL 7:1
ISL.
,
,
244

C8:
ISL
.

, SAN.
, SAN

.
,
,

ISL
SAN .




.
, ISL

,

.

,


. - ,

.

- .

-
245

SAN

ISL,
.
( ,
)

.

,
SAN
.
SAN,
(
),

.

,
. ,
50

, ..
100 /.
50
1
Data Mining

, ..
6 /
LAN iSCSI. Data
Mining 4Gbit
FC HBA
.
SAN
246

C8:
,

.
SAN


,
,

, SAN,

ISL IFL
. ISL
,

,
.

.
,
,

,
.

, , ,

,
ISL IFL.



,
247

SAN


.
,
, ,
.


/,
- .
.
-
,
,
,

,
,
.

(, HBA )
. ,

.
,

CE

, IP, .
,
, ,
ISL
248

C8:
CE ISL.
ISL
, f ull mesh.
4Gbit ISL
,
8Gbit,
.
ISL
,
4Gbit.

.
CE , 4 CE ISL

.

,
50%
.

ISL IFL

,
, ..

, .
,
ISL
IFL.
ISL IF L
.

, .
249

SAN

,
.

.
SAN ,

,
. . . 45


ISL, .. .

,
3:1.
12
ISL , ..
ISL.
ISL

,
,

.

Figure 45 - An ISL oversubscription ratio of 3:1

The ISL oversubscription ratio is defined as the number of node ports contending for a set of ISLs versus the number of ISLs they share:

Io = Nn : Ni

In Figure 45, twelve hosts share four ISLs, so the ratio is 12:4, which reduces to 3:1. The same calculation can be done in bandwidth terms rather than port counts: if the hosts in Figure 45 have 1Gbit HBAs while the ISLs run at 4Gbit, the comparison is 12x1Gbit of node bandwidth against 4x4Gbit of ISL bandwidth - 12:16, or 3:4. Dividing both sides so that one side equals 1 gives about 1:1.3; in other words, the ISLs collectively offer a third more bandwidth than the hosts can drive.
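As a quick sketch (the function names are illustrative), both the port-count and the bandwidth forms of the ratio can be computed like this:

from math import gcd

def isl_oversubscription(nodes: int, isls: int) -> str:
    g = gcd(nodes, isls)
    return f"{nodes // g}:{isls // g}"          # port-count form, Io = Nn:Ni

def bandwidth_ratio(nodes, node_gbit, isls, isl_gbit) -> float:
    # Bandwidth form; < 1.0 means the ISLs have spare capacity.
    return (nodes * node_gbit) / (isls * isl_gbit)

print(isl_oversubscription(12, 4))     # "3:1"
print(bandwidth_ratio(12, 1, 4, 4))    # 0.75 -> hosts can fill only 3/4
                                       # of the ISL bandwidth (about 1:1.3)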
CE ,
ISL. ,
CE 16
, 14
- ISL, ISL .

7:1.
1:1
63:1.

,
. ,
1:1 SAN, ..
IS L
,
251

SAN


,
,

.


.
, SAN

4Gbit,
,
, ISL
,
ISL
,
,
.

/ SAN.
SAN
,
/,
(p258). ,
ISL IF L

.

ISL
.
, SAN
,

.

252

C8:

.


.
,
, .
, ,

, ,
.
,
, Data
Mining, ,

.

IFL
,
,


.
,
,
. ,

( ,
,
PCI).
,
100 15
253

SAN

.
,


ISL

.
- 2Gbit
Fibre Channel,

ISL ( 2x

+ 2x ) x 2Gbit = 8Gbits/sec
.
2Gbit 4Gbit.

,
,
,

,
.

.

,
.


,

.

,
,
-.
.
254

C8:
. ,


0.5Gbits ( ,

1Gbit
,
).
1Gbit.

.

,

. ,
, ,
.


,

.

,


,
.
,

(,
), ,
255

SAN

.

(.. ,

)
.
,
.

.
IS L
/
,
. ,
/
.

,

6:1 7:1 ,
ISL .
Meta SAN
,
,
IF L.
IFL ,
ISL
LSAN.
IFL
:
1.
IFL .

256

C8:
2.
IFL
.
3. ,

/ Meta SAN,
.
Meta SAN 1000 100
, 10:1.
100 , 10 IFL
.
90%,
IFL IFL,
1 2.
, IFL
:

IFL = ( _ + 1 ) +
(__ /
:__ ) * ( 1
_ ) )

,
Meta SAN
, IFL

.
(. 268),

Meta SAN.
,
SAN

,

.
257

SAN




.
,
,
,

. .
, CE

. /


ISL. . 46 , CE
.

. 46


, SAN

. ,

258

C8:
(any-to-any)
. , SAN

,

.
SAN, ..

,
SAN .
, SAN 100%.
, , 0%.


,
100%,

- ,
.

, RAS,



. ,
ISL
,
, MTBF.
,


.


259

SAN

SAN. ,
SAN

.
,
SAN
Direct Attached Storage (DAS).
DAS
100%, DAS SAN
.
SAN

,
,
,
.


--.

Me ta SAN 62,
ASIC
.
-
(. . 47),

RAS
,
SAN.
,
RAS
,
.


. ,

,
,

62

. SAN Meta SAN . 42.

260

C8:

,
,
, , SAN

, Da ta M ining

.

. 47


,
.
MAN WA N.
Meta SAN 100,
.
ISL
IFL,

0%,
- 100%.
2.

261

SAN

ASIC

5%

5%

10%

15%

40%

55%

25%

80%

10%

90%

BB

5%

95%

Meta SAN

5%

100%


,
.
,
,
, , ,
, backbone
Meta SAN.
,
,
, . ,

( ASIC),
.
262

C8:
-
(, ASIC)

.

.
.
,

, ISL,

(any-to-any) .
, Meta SAN
,

. ,

,
backbone,
.


,
.
IFL.

. Brocade

.
263

SAN

, :
ASIC

.
Brocade 63
ASIC .


.
ASIC ,

.
, Brocade 48000
FC-.
ASIC
. ,
Brocade 48000,

,
ASIC,
.
ASIC ASIC
,
.
Brocade ,
AS IC

backplane . 16 Brocade 48000
. 32-

63

,
McDATA.

264

C8:
16- ,
48- 24-.
SilkW orm 12000
4- ,
(quad). , 1
1, 2 1,

backplane

. ,
/
4- ,
. SilkWor m 3900
24000 8 , .
, , Brocade 3250, 3850
4100 ,
.

32
Brocade 5000 24- 48 Brocade 48000.
,
.

0.7 µs 0.8 µs (700-800


, ).
backplane Brocade 48000

2.1 µs - 2.4 µs.


265

SAN


.


.
,
.
, ,

. SAN-
HBA ,
.
.

,
,
.
,


, .
,
.

LSAN
,
,
.
,
Meta SAN.
,

.
266

C8:
SAN
(. 220)
.
Meta SAN
.

. ,
Meta SAN

SAN,
100%. , Brocade
Fibre Channel (routing),

Meta SAN,
SAN,

100.
,
Meta SAN ,
.
,
Meta
SAN LSAN. LSAN (Logical Storage
Area Network)
,
Meta SAN. LSAN
Meta SAN
. , ,
,
, -
.
LSAN ,

.. .

267

SAN

: UC ILM
, Utility
Computing Information Lifecycle Management ( . 85)

SAN, UC ILM
-
. ILM UC
,
,



,
ILM UC .
,
,
ILM
UC,
SAN.

CE SAN
-
( , tier)
.


.
/.

( )
( ).
. 48 CE

268

C8:
, .
.

. 48 CE



. . 49
.
,

(. CE . 189.).
,
, .
. 48,

.
LUN
,
LUN ISL ISL
.
. 49
,
ISL,
LUN . Brocade
269

SAN

DLS
DPS

/ ISL (p 272),
SAN.
/
LUN
, ISL

.
SA N

ISL ISL
.

. 49 CE


, ,

RAS -
ISL.

SAN


270

C8:
.
.

, SAN
, data m over ,
,
. ,
Utility Computing / Information Lifecycle Management
S AN
, .
,
SAN ,
,


. . 50
SAN .


,
Brocade 7600
,
, Brocade FA18,

.

, ,
IS L ,
.

271

SAN

. 50



, ,
.

LUN . ,


.
,

.
,
.
,
.
272

C8:
, Spanning Tree Protocol (STP),
Ethernet,

/ (active/passive) 64 /
.
, STP


. ,

.
, I P/Ethernet

1990-

, , Fibre C hannel. FC


,
FC .
,
.
Brocad
e

- (source-port)
FSPF (Fabric Shortest Path First).
Dynamic Load Sharing (DLS).
DLS,

ASIC.

64
STP,
. Ethernet
STP.

273

SAN

. Advanced ISL Trunking.65 Multiprotocol Router (exchange), Dynamic Path Selection (DPS).

.

.
,
failover
. ,
, ,
(
,
.)


/.
,
/.
,

, .
CE,
.
75% - 90%
, ISL .
,

65

, Condor.

274

C8:

. ,

DLS DPS

.

:
FSPF
Fibre Channel
FSPF66 (Fabric Shortest Path First).

E_Port. FSPF

.
, ,
/ (. 185)

.
.

FSPF


.
Brocade FSPF
.
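As a sketch of the computation FSPF performs (the protocol itself is distributed - switches exchange link-state records - and the topology and link costs below are invented for illustration; 500 is a common default cost for a 2Gbit ISL), each switch effectively runs a least-cost shortest-path calculation over the fabric graph:

import heapq

def fspf_routes(adjacency, source):
    # Least-cost distance from 'source' to every reachable switch.
    dist = {source: 0}
    done = set()
    heap = [(0, source)]
    while heap:
        d, sw = heapq.heappop(heap)
        if sw in done:
            continue
        done.add(sw)
        for nbr, cost in adjacency.get(sw, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# Four switches, equal-cost links. The two routes from A to D tie,
# which is exactly where DLS/DPS load sharing comes into play.
fabric = {"A": [("B", 500), ("C", 500)],
          "B": [("A", 500), ("D", 500)],
          "C": [("A", 500), ("D", 500)],
          "D": [("B", 500), ("C", 500)]}
print(fspf_routes(fabric, "A"))   # {'A': 0, 'B': 500, 'C': 500, 'D': 1000}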
Dynam ic Load Sharing (DLS)


.

66

Brocade
FSPF
.
275

SAN

DLS (best effort )


/
. ,
, ,
DLS ,
. ,
,


67. ,
,
/, DLS

.
load sharing ( ),
load balancing ( ).
DLS ,
,

Brocade. DLS

,
,



control plane.

67

Fibre Channel

/,
, FC ,
, .

276

C8:


ISL . Brocade

(exchange).
,
ISL, -
,
. . 51
SilkWorm 3850.
,

.

, ,
,


,
(buffer-to-buffer credits)
.


.




.
- , ISL
.
(skew) ,
277

SAN

, .. ASIC

68.

. 51

ASIC

,
.
, ,

68
,
. ,
VSAN.

278

C8:

DWDM metro,
.
,
,
,

ISL
DLS DPS.


,

. ,

.

Bloom 69


2Gbit
4Gbit - 8Gbit. . 52
3850

24000,
2- .

, , Brocade 4100,
4900, 5000 48000.

69

24000.

279

, SilkWorm 3200, 3250, 3600, 3800, 3850, 3900, 12000

SAN


ASIC , ASIC

(. . 51).
Bloom
4 ,
(quad). , SilkWor m 3250,
: 03 47.
Condor 70, 8
71.
Brocade 4100 : 07, 8
15, 1623 2431.
,
ISL.
Condor ASIC

,
4Gbit. . 51,
4
:
8Gbit Condor
32Gbit. (64 Gbit full-duplex).
Condor
Bloom

4x
2Gbit 8x 4Gbit.

,
DLS,

70

, SilkWorm 4100, 4900, 5000 48000.


SilkWorm 3900 24000
(. , . 258 ).

71

280

C8:
.
,
. 52,
, DLS
. 4
ISL ,
,

.

. 52 DLS


. , SilkWorm 3850
12-13,
14-15.
CE
.

SAN
,
,

Extended Fabrics.
281

SAN

ASIC
buffer-to-buffer. Bloom-II
quad
72,

. , Bloom-II 4-
25 , 50 2-
,
73. 100
,

100-
Bloom-II quad,
, DLS

, .
Condor ASIC

. Condor
,
quad . ,
8 4Gbit 50 (
32Gbit) 4 4Gbit 100 (
16Gbit).
2Gbit, ,

DWDM,

4Gbit.

100- 8 2Gbit.

72

25 25
2Gbit.
73

.

282

C8:

:
Exchange
Dynamic Path Selection (DPS)
,
4Gbit.

Brocade 4100, 4900 200E, Brocade 48000,
Brocade 7500
.
Exchange

DPS balances traffic at the granularity of Fibre Channel exchanges rather than individual frames or whole source ports. An FC exchange corresponds roughly to one upper-level operation, such as a single SCSI command. DPS assigns each exchange to one of the available equal-cost paths using the source PID (SID), the destination PID (DID), and the originator exchange ID (OXID): every frame with the same [ SID, DID, OXID ] triple follows the same path, so frames within an exchange stay in order while different SCSI operations74 spread across all equivalent links.
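A toy sketch of the idea (the hash and the path list are illustrative only, not Brocade's actual algorithm, which runs in ASIC hardware):

def dps_path(sid: int, did: int, oxid: int, paths: list) -> int:
    # All frames of one exchange share (sid, did, oxid), so they all
    # map to the same path; different exchanges spread across paths.
    return paths[hash((sid, did, oxid)) % len(paths)]

equal_cost_paths = [0, 1, 2, 3]                 # e.g. four 4Gbit ISLs
print(dps_path(0x010200, 0x020300, 0x1234, equal_cost_paths))
print(dps_path(0x010200, 0x020300, 0x1235, equal_cost_paths))  # may differ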
SCSI
, .
,
. ,

74
, ,
SCSI OXID ,
FICON, .

283

SAN

SCSI,

.
,
(..
, )
.


. ex change

Brocade, DPS
, ,
,
.
Exchange


, DPS
.
DPS

ASIC. DL S, DP S
CE
. DLS, DPS
,
best -effort
. . 53

Brocade 4100
/.

284

C8:

. 53 DPS

DPS, ,

. , DPS
.


,
.
(
) (DPS
HA),

. 53.
DPS /

,
.

.



. , CE
, Brocade
285

SAN

4100 CE,
DPS,
.
, DP S,
,

. ,

DPS,

DL S, ,
DPS (. 54)

. 54 - DPS

, DPS /
-
,

(. 55.) ,
, -

. DPS ,
(skew)
.

286

C8:

. 55 DPS Fiber

ISL
, DPS (exchange).
FC
SCSI. 1:
SAN, ,

, DPS.

,
DPS
. ,
, Fibre Channel
SCSI.
Fibre Channel ,
60 2 .
SCSI 2 ,
287

SAN

SCSI
.
FC .
, SCSI 8
FC
SCSI read
, 2

.

.
FC exchange ID
, ,
SCSI. SCSI

, exchange ID.
read ,
20 . DPS
10
, 10 .
/
, SCSI,

. /.


,
.
,
.
/
SCSI,
288

C8:

.

DPS
, SAN
.
,
DLS,
, DPS.
DLS-
,

.
HBA /
performance enhancing.

DPS
, .. DPS ,
Brocade, Brocade
.
, DPS ,
.
,
.
,
.
,
289

SAN

,
.


,
.
(transient)

.

,
SAN Health Advanced Performance Monitoring. ,

. , DP S
.
, DPS

.
,
,
. , ,

2-
.

.
.
,


290

C8:

,
. Brocade
,
,
.


, ISL,

-

DLS DPS .
SAN

,
.


, -
.

12:

( . 373 )

SAN,

.

Brocade

(BB)
FC.
Brocade
291

SAN

.

Brocade.
SAN,
.

SAN , ..
.

,

-
R_RDY.
Brocade
ASIC .

BB,

ASIC,

. F-port
N-port
,
( Brocade
16).


FLOGI/PLOGI,
Brocad
e


, 16
, .

,
Fibre Channel,
,
,
, (, 64 )
(
292

C8:
)
16
.
,

, .
,
, :

, , 80


.
,
, ,
.

,
,
( ).
,
80 .
,


.
,



.
/,
, ,
,
293

SAN

( ,
( )
.
,
, , ,
.

.

,
.
/
, .
, 99.9%
. Br ocade AS ICs
2 . 100%
,

:
, ,

. ,
/, ,
2 ,

, .

,

Fibre
Channel,
294

C8:
.

, .
,

.
,
,
, ,
,
.

295

SAN

9
9:
,
.
SAN

:
SAN
. ,

,

, .

SAN HA


,
SAN .

,
SAN
.
SAN

(Highly Available, HA)


HA.
, ,
HBA,
(multipath ing),

.
296

C9:

, SAN

SAN .
,
(, SFP) ,
. SAN

,
.


HA SAN
: -
, HA.
SAN (
, ),
,
.
,
(
), ,
,
.
,
. HA
,
. Bro cade ,
, , ,
..
Brocade


.

,
. ,
,
297

SAN

.
, .
Denial of Service (DoS)
,
, Brocade. ,
Brocade ,

DoS
,

DoS .
, SAN
DoS
. Brocade
FC

.

.
,
,
.
, SAN
HA
, Brocade, , ,
CE,


,
(
).
,
,
,
.
,

.

298

C9:

HA.

,
.
HBA
(. 17, 28 306)
HA, (. . 56).

. 56

SAN
. ,
,
HA.
HA.

HA,

299

SAN

.
,
,
HA.

,
, .
,
,
,
.

,

,
, .
,
?

HA . 56, . 57.

. 57 HA

300

C9:

HA 75,
, ,
.
(PS), , (CP)
(core cards) Brocade
.
,
, ,
. ,



HA.

,

, , SAN
PS,
PS - .
PS
.
, . 56,
,
HBA. HBA

m
ultipathing,

HBA.

?

75

,
,
.

301

SAN

,
-

(failover).
,

.
S AN

.
- ,
SAN
,

. (, -
m ultipathing)
.
, ,
, HA
. SFP 2,
,
1,
,
,
.
,
HA , ,

,
HA. multipathing
,
. ,
, ,


. ,
302

C9:

,
.
HA :

.

(,
)
,
,
, .

Brocade

,
, .

SAN
SAN

- ,
.
,
SAN . ,
, ,
S AN
.
- ,
SAN,

.

SAN HA:

.
303

SAN
,
.

SAN
,
:


SAN Meta SAN
,

. SAN

, . 28 ( . 178).
.


SAN Meta SAN
,
, -
.

, mesh / (. 180 - 185)

.
,
,
HA.


SAN Meta SAN
SAN
,

. (
A/B Meta SAN
,
304

C9:

.)
.


m ultipathing,


. SAN
( . 310 ).
HA,

-

multipathing, .

SAN
Meta SAN
SAN
. SAN

A, B
( ),
, -
.

,

,
.
.

SAN
. SAN

(. 310 ).

305

SAN


Multipathing
SAN
,
, , HBA (. . 56), RAID-
,
.

,
, SAN,
-
.

.
,

m
ultipathing.
m
ultipathing
HBA
. HB A
L UN,

, LUN
.
m ultipathing
HBA

LUN.
.
SAN
, m ultipathing
- . m ultipathing


306

C9:

,

. S AN ,
,

.
-

,
SAN .
multipath ing
active/standby, HBA
/,
.
active/active: /
SCSI Fibre
Channel exchange DPS (p 272).
multipathing SAN
:
1.Active/active
, active/standby,
.
2. Active/standby
, .
active/active
.
,
SAN ,
.

ac tive/active.

active/standby.

307

SAN
100%
.

multipathing
(proprietory) .

.

,

,
m ultipathing
.
m ultipathing
:
multipathing
HBA
HA SAN.


(Encyclopedia
Britannica 76) res ilient ( )

.
ISL


.


(. . 58
).

76

Encyclopedia Britannica 2004 Ultimate Reference Suite DVD.

308

C9:

Brocade Fabric Shortest Path


First (FSPF). ,
Brocade,

FC

FC.

. 58 -

309

SAN

Brocade

. ,
. 58 - ,
Brocade
. Brocade
48000

HA
, ,
FSPF.
,
.

Brocade

.

,
-
denial of service, ,
,
.

,
.
HA : HA
SAN

,
.

(redundant)


(, ) -
. SAN.
310

C9:


,
,
SAN.

,
SAN.
,
,

HA,
- ,
.

: A/B
SAN, dual-fabric SAN.


(,
HA

).


.
,
SAN.

,

SAN.


,
. ,
VSAN
SAN. . 59

311

SAN

HA

SAN .
VSAN
. , VSAN


.

312

C9:

. 59

, , , VSAN,
. Brocade

313

SAN

SAN,

. Brocade
, Virtual Fabrics

Brocade.
,
HB A,

SAN. HBA
DoS ,

.
SAN.
,
Virtual Fabrics, ,
.



.

.
,
-,
, SAN.


, , .

,
HBA, RAID-

multipath
ing,
S AN,
. . 59

,
314

C9:

.
,

. . 59



.
, :
HA SAN

A/B

,
.

Meta SAN
Meta SAN
.
, Meta SAN
/ ,
,
/ .
,

. Meta SAN
, (
) .

-

.

Meta SAN
SAN
.
,
- .
,
315

SAN


,
.
Meta SAN
, ,
,
.
A/B
. . 60 Meta SAN
.
,

A/B
,
.

multipathing ,

Meta SAN. ,


,
.

. 60 - Meta SAN

316

C9:


(
) ,
-
Meta SAN.

Meta SAN
. 61 - Meta SAN

Meta
SAN.


SAN, Meta
SAN (A/B) Meta SAN.

Meta SAN.

, , HBA
Meta SAN Meta
SAN.

. 61 - Meta SAN

317

SAN

BB Meta
SAN


,
backbone BB,

E_Ports
switch-to -switch.
Fibre Channel Router
Protocol (FCRP)

Meta SAN.

backbone.
backbone,
. backbone

- LSAN.

backbone . 62. ,

BB

,

backbone
BB-1, BB-2

backbone , .. .
backbone
.
Meta SAN backbone

, .
, . 61 - Meta SAN

318

C9:

Meta SAN.
. 63
Meta SAN + BB
.

. 62 Meta SAN
BB

. 63 Meta SAN + BB

319

SAN

backbone.
FCIP, .63,
backbone IP -,
. FC-FC
Routing Service FCIP Tunneling Service,

WAN.

SAN : Meta
SAN ,
.

LSAN

( )
( ),
Fibre Channel

LSAN (.
. 215 .)

.
, .
, ,
.

. LSAN
-
, FC ISL,
FCIP Brocade Multip rotocol
Router
.
WAN WAN.
WAN ,
(,
320

C9:

WAN).

Meta SAN
(. 258) LSAN.

HA
.
Meta SAN

:
Meta
SAN

SAN
SAN Meta
SAN . ,
SAN
A
CE,
(. . 35, . 194.)
. ,
Meta SAN


, Meta SAN A

,
Meta SAN B
- ( .
35). Meta SAN
Meta SAN,

, .

,

IFL.
. 64.
321

SAN

,
, .


,
RAS.

. 64 Meta SAN

(resiliency) ,
(redundancy).
A/B ,
,
.


SAN

( ,
Meta SAN),
322

C9:


.
,
.

SAN


,
,
.
-

(,
-

),
.
, Brocade 48000,

323

C10:

10:

10

,
SAN,
,
.

,
.
SAN.


Secure Fabric OS 2001 Brocade SAN.
Secure Fabric OS

SAN

. Secure
Fabric OS
Access Control List (A CL) Fibre Channel
Fibre Channel
PKI,

DH-CHAP.
, SAN
2001 .
Secure Fabric OS
Fabric OS,
325

SAN

5.3.0, Brocade
.

,

,
.
SAN ,

.
.
-
SAN
,
,
SAN,
SAN .
SAN
:


,
-
. ,
326

C10:

SAN,
denial of service attack
.

, SAN,
.


, ,

. SAN,

,

,
, SAN.
SAN
, ,
,

. ,
,
,
, .

,

, ,

,

.

.
,
, ,
,
327

SAN

,
.
,
.

,
, ,
,
.

.
SAN



SAN

out-of-band.

,
,
.

(
SAN). ,
S AN,

,

,
.
LAN,

SAN,
. VLAN
,
328

C10:

(
).
LAN SAN,
.
,
.


SAN. , Brocade
secure shell (ssh) telnet.


. Brocade .

.
SMTP.




. Virtual Fabrics
SAN
,
.
strong password
.


SAN

.
, S AN
.
,
,
E_Ports
WWN
329

SAN

.
Brocade
CL I
GUI.
DGHCHAP
.
WWNspoofing.


()
,
.

,

.
VLAN 77 IP- ,

,
.
,
.
VLAN
,

.

,

77

Brocade 1990-
SAN ,
.

330

C10:


/ , .
,

,
.



.
WEBTOOLS GUI, Fabric Manager
SAN
( API).

.

,

,
.



, ..
, ,

,
1
1

2 3.
In Fibre Channel, zoning is enforced by the fabric itself; Brocade switches implement hardware-enforced zoning (hard zoning), filtering frames in the ASIC data path rather than merely hiding devices from name-server queries. Zone members are usually identified by WWN, so a device's zone membership follows its WWN rather than the physical port it happens to be plugged into.
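As a toy illustration of zone-membership checks (a sketch only: the zone name and WWNs are invented, and real enforcement happens in switch hardware, not in host software like this):

ZONES = {
    "zone_host1_array1": {"10:00:00:00:c9:aa:bb:01",   # host 1 HBA
                          "50:06:01:60:10:20:30:01"},  # array 1 port
}

def may_communicate(wwn_a: str, wwn_b: str) -> bool:
    # Two devices may talk only if at least one zone contains both.
    return any(wwn_a in z and wwn_b in z for z in ZONES.values())

print(may_communicate("10:00:00:00:c9:aa:bb:01",
                      "50:06:01:60:10:20:30:01"))      # True
print(may_communicate("10:00:00:00:c9:aa:bb:01",
                      "50:06:01:60:10:20:30:02"))      # False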
331

SAN

,

JBOD.
,
WWN,
,

.
,


WWN ,
.
,
.

-

.


,
,

(
).
, ,


,

.
Brocade
,
.

ASIC
.
.
332

C10:

,
.



S AN, SA N
, ,
,
.

,

.
,

SAN.
,

( ).

,
(
SAN ).
SAN
,
ASCII .
, WWN, I D
.

,
, ,
?
,

333

SAN

.
, .
,

multipathing
.
,

, .

, .
,

, .

, SAN

.
.
HBA

,
.
,

.
WWN

( point-topoint).
,
,

,
.

point-to-point ,
334

C10:



.
,

.
,

.

.
,
RS CN ,
.

( )
point-to-point
.


. ,
,
, ,
.

, PID WWN ,

. ,
,
,
,
. ,

335

SAN

SAN.


,
HBA LUN
. ,
,
ASIC,
SAN

RSCN. ,

SAN.

Secure Fabric Operating System (SFOS)


SAN


.
Brocade Secure Fabric Operating S ystem (SFOS),
,
,
, ,

, . SFOS
2.6.1, 3.1 4.1
Brocade.
SAN
SFOS ,
,
Brocade
SFOS .
:
1. Fabric Configuration Server (FCS)
336

C10:


.
2. IP Filters
,
SAN
.
, ,
IP-
.
3. Switch Connection Control (SCC)

,
E_Ports.
4. Device Connection Control (DCC)

(
WWN).

Fabric Manager, WEBT OOLS


Fabric OS CLI.

337

C11: -. SAN

11
11:
- SAN
2: SAN
4: SAN,
Disaster Recovery (DR) Business Continuity (BC),


,
. DR
BC,

,
SAN, ,
-
,



SAN,
SAN
.

,
.

FC.
, ,
339

SAN

xWDM SONET/SDH.
,
FCIP.

,

,
SAN,

WA
N.

,

- SAN.


,
SAN
.
:
?
o
. ,
(native) FC (dark fiber)

-
78.

78

FC-
(extension) (buffer-to-buffer credits),
FC .
Brocade
FC ,
100 .

340

C11: -. SAN
o
,
,
, .
o FCIP ,
.
WAN?
o
. native FC
,
WAN
.
o
. ,
WAN,

?
?
o - SAN
, SAN
, ,
. ,
WAN .
o
.

?
, WAN
?
o
,

341

SAN

WAN
.
o ,

, FC, xWDM
SONET/SDH.
WAN?
o WAN ,
.

.
o WAN management-plane?
IP SAN,
.
,
SAN.


WAN:
?
o WAN,
DR?
?
o 1990-

,
,

. DR WAN

.
342

C11: -. SAN
o WAN
, WAN
DR. -
native FC.
, SONET/SDH,
, FCIP.
:
.
o ,
.

?
o ,
?
FCIP ,
IP-,

.


,
.
,

,
- SAN.




. ,

native FC ,
343

SAN

, FCIP.

.

, DR
-
,

:
:
?
o

A/B
.
o
WAN

SAN, WAN
LSAN ,
DR.
,
WAN
.
:
,
.
o
.
,

WAN


344

C11: -. SAN
?
o
WAN, ,


.
:
,
?
o
,
,
.
o ,
.

,
SAN
native Fibre Channel,
xWDM, FC over SONET/SDH.
o :
- SAN
(write
acceleration),

,
.


,
,
.
,

345

SAN


.
:

.
o
WAN. ,

, FCIP
WAFS,
,
native FC xWDM.
:
,
.
o ,
,
.
o ,
. Fibre
Channel LSAN

.

FC Buffer-to-Buffer
(,
FCIP
SONE
T/SDH)

,
Fibre Channel
.
,

native
Fibre Channel.
346

C11: -. SAN
Fibre Channel ISL
IFL .

.
,

ISL
SAN

( buffer-to buffer, B B B2B). , ,

. 44 ( . 206)

1 2
.
BB
.

(, ),
Fibre Channel
, , .
, -
,
.
SAN ,
,
,
, , ..
,
. (Flow control)
,
.

Fibre Channel
(buffer-t o-buffer credit s).
347

SAN

Fibre Channel (
,
, HB A, JBOD-
) BB.
,
,
,
.

. Fibre
Channel 60 2148 (~2k).
2k,
,
HBA

2k
. Brocade ,
95% 2k,

SCSI FC F
( , RSCN,
..),
Fibre Channel 2k.

, ,
,
.
BB ,
16 .
,
. ,
.
,
79
79


. Brocade cut-thru switching,

348

C11: -. SAN

, .
.
, ,
,
. ,
B2B ,
,

. - ( ) ,



.
Brocade FC

.
, , F-port FLport
. , Brocade 4G
bit
(advertise) F
FL,
4Gbit -
.
Fibre Channel

(,
500
)
BB ,
(10 500 )


, .
BB. FC
store and forward ,

.

349

SAN

, WD M
SONET/SDH,

.
,
,


. , 10Gbit 120

.
,
,

.
SAN

,
ISL.
For example, consider a long-distance link running at 2Gbit. A 30-km link at that speed needs about 30 buffer-to-buffer credits to stay full; at 1Gbit the same 30 km needs only about 15, because each frame occupies twice as much of the data rate's time on the wire, and at 4Gbit the requirement doubles again relative to 2Gbit. The rule of thumb for native FC distance links is:

credits = [distance in km] * ( [link speed in Gigabits] / 2 )

A shortage of credits does not break the link, it just caps throughput: a 100-km link at 2Gbit needs roughly 100 credits, so with only 75 available it delivers about ~1.5Gbit. The rule assumes full-size (~2k) frames; if the average frame is closer to 1k, roughly twice as many credits are needed for the same distance. Extra credits beyond the requirement are simply not used: a 30-km ISL at 2Gbit/sec that has 100 credits available consumes only about 30 of them.
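A minimal sketch of this rule of thumb in Python (the function names are mine; the full-size-frame assumption is the one stated above):

def credits_needed(distance_km: float, speed_gbit: float) -> float:
    # Rule of thumb: one ~2k FC frame spans about 1 km of fiber at 2Gbit.
    return distance_km * (speed_gbit / 2.0)

def effective_throughput(distance_km, speed_gbit, credits_available):
    # Throughput degrades proportionally when credits run short.
    needed = credits_needed(distance_km, speed_gbit)
    return speed_gbit * min(1.0, credits_available / needed)

print(credits_needed(30, 2))              # 30.0 credits
print(effective_throughput(100, 2, 75))   # 1.5 -> ~1.5Gbit on a 2Gbit link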

Extended Fabrics long-distance modes
Brocade switches support several long-distance (LD) link modes (Table 3):

L0 - normal E_Port mode for standard-distance links; no Extended Fabrics license required
L0.5 - static long-distance mode, links up to 25 km
L1 - static long-distance mode, links up to 100 km
L2 - static long-distance mode, links up to 200 km
LE - extended mode, links up to 10 km
LD - dynamic mode; the link distance is discovered automatically
LS - static mode; the link distance is specified by the administrator
L0 is the ordinary E-port / EX-port setting and needs no special configuration, while LE covers links up to 10 km even at 4Gbit. The static modes L0.5, L1, and L2 reserve a fixed pool of credits sized for their rated distance, whether or not the link actually needs that many. The more flexible options are LD and LS. In LD (dynamic distance discovery mode) the switch itself measures the round-trip time of the link and allocates just enough credits for the distance it finds. LS, introduced in FOS 5.1, is a static mode in which the administrator specifies the effective distance explicitly. That matters when the average frame size is small: for a 110-km link at 2Gbit carrying 1k frames instead of 2k, the administrator would configure the link as though it were 220 km, so that twice the usual credits are reserved.

MAN/WAN
352

C11: -. SAN

- SAN.
FC Over Dark Fiber. native Fibre Channel,
E_ Ports EX_Ports,
,

,
80 . FC
(over dark fiber)
,

, ,

,
SAN.


81
Brocade 4
8 Gbit,
.

FC Over xWDM. native Fibre Channel



(DWDM CWDM,
xWDM.)
,

80



, ( ).
,
.
81
SFP /
.

353

SAN


, .

,

, ,
(
WDM).
, ,
WDM .
,
FC
BB
WDM .

FC Over SONET/SDH. Fibre


Channel Synchronous Optical Networks (
Synchronous Digital Hierarchy). ,
, OC3, OC12
native FC. ,
E3/T3,
. SONET/SDH
,
FC,
,
.

FC Over ATM. Fibre Channel ATM



- SAN.
ATM ,
.
,
, FCIP ,
ATM
.
354

C11: -. SAN

FC Over IP. FCIP



MAN/WAN.
,
- , IP WAN

( )
. ,
,
FC SAN. , FCIP
IP/Ethernet
- (point-to-point) Gigabit Ethernet.
.
point-to-point, native
FC.
FCIP
.
MAN/ WAN
:

.
? ,
,

.
FCIP
IP- ,
IP-

(SLA).

RAS.
,
,
,
SLA
355

SAN

.
RAS native
Fibre Channel, xWDM,

.
native FC ,

.
,
. RAS
FCIP, SONET/SDH ATM
.

.

,
.


, WAN
. ,

.
SONET/SDH, ATM,
xWDM, ;
FCIP, -
WAN
(SLA). ,

. IP SAN- ISDN
128k ,

.
native FC,
xWDM
, FC over SONET/SDH
ATM .
356

C11: -. SAN

.
(, xWDM)
MAN
WAN, , , FCIP iSCSI,
,
.
SONET/SDH
ATM.

.

,
? ,
- SONET/SDH,
,

. FCIP ,

IP-,
-
.

native FC, ATM, SONET/SDH IP


.
4
MAN/WAN .

357

SAN

Table 4 - MAN/WAN link technologies and their nominal bandwidths

ISDN BRI: 128 kbits
FracT1: up to 1.5 Mbps
DS1/T1: 1.5 Mbps
ISDN PRI (NA): 1.5 Mbps
ISDN PRI (E): 2 Mbps
E1: 2 Mbps
Ethernet: 10 Mbps
E3: 34 Mbps
DS3/T3: 45 Mbps
Fast ENet: 100 Mbps
OC3: 155 Mbps
STM1: 155 Mbps
OC12: 622 Mbps
STM4: 622 Mbps
Native GE (1): 1 Gbps
Native FC (1): 1 Gbps
Native FC (2): 2 Gbps
OC48: 2.5 Gbps
STM16: 2.5 Gbps
Native FC (4): 4 Gbps
Native FC (8): 8 Gbps
OC192: 10 Gbps
STM64: 10 Gbps
Native GE (10): 10 Gbps
Native FC (10): 10 Gbps
ADSL: varies82
As a rule of thumb, links slower than 100Mbps are of limited use for SAN extension.83 OC3/STM1 is generally the practical minimum, and OC12/STM4 or faster is preferable.84 Note that the figures in the table are raw line rates; usable throughput is lower once transport overhead (e.g. IP encapsulation) is subtracted.
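To put these rates in perspective, a back-of-the-envelope calculation helps (a sketch: the 80% efficiency figure is an assumed allowance for protocol overhead, not a number from the table):

LINKS_MBPS = {"DS3/T3": 45, "OC3/STM1": 155, "OC12/STM4": 622,
              "Native GE (1)": 1000, "Native FC (4)": 4000}

def hours_to_copy(terabytes: float, raw_mbps: float,
                  efficiency: float = 0.8) -> float:
    bits = terabytes * 8e12           # 1 TB = 8e12 bits (decimal units)
    return bits / (raw_mbps * 1e6 * efficiency) / 3600.0

for name, mbps in LINKS_MBPS.items():
    print(f"{name:14s} ~{hours_to_copy(1.0, mbps):6.1f} h per TB")
# OC3/STM1 needs ~18 hours per terabyte, which is why it is
# considered the practical minimum for replication traffic.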

82 ADSL rates vary widely by provider and direction; ADSL links are generally not suitable for SAN extension.
83

.
84

FCIP .

358

C11: -. SAN


BC/DR

SAN


,
MAN/ WAN.

,

.
-
SAN MAN/WAN

LSAN.

FC

SAN (Multiprotocol Routing for SANs)


. ,
EX_Port
(bracket)
MAN/WAN
. ,
1 (Site1)
2 (Site2 ) : Site 1
Device Fabric1 Fabric1 E_Port Router1 EX_Port
Router1 E_Port MAN/ WAN Transport (Backbone
Fabric) Router2 E_Port Router2 EX_Port Fabric2
E_Port Fabric2 Site2 Device.
, ,
SAN

MAN/WAN .

359

SAN

FastWrite Tape Pipelining



,
.
Distance hurts performance because each SCSI operation requires one or more initiator-to-target round trips before it can complete.85
port-to-port Brocade

, ,
, , ,
-
MAN/ WAN,

.
,
.
DWDM FC 100 ,

.
- ,
SCSI
(Round
Trip Tim e).
, ,

.

,

85
FCP ,
SCSI.

360

C11: -. SAN
.
,

MAN/WAN ( .
65 . 362).


WAN, . ,

,

.
Round Trip Time (RTT).

SCSI
SAN
.
Brocade,
For example, Brocade provides FastWrite and Tape Pipelining for FCIP links, and FC FastWrite for native FC ISLs at MAN distances.
FCIP FastWrite and Tape Pipelining, and FC FastWrite, work by cutting the number of RTTs each SCSI write consumes (compare Figure 65, a SCSI write without FastWrite, with Figure 66, the same write with acceleration).
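The cost of round trips is easy to quantify. This sketch (all parameter values are illustrative assumptions) estimates sustained throughput for a latency-bound stream of single outstanding SCSI writes:

def write_throughput_mbs(rtt_ms: float, io_kb: float = 64.0,
                         round_trips: float = 2.0) -> float:
    # One write completes per (round_trips * RTT); link bandwidth is
    # ignored, i.e. this models a purely latency-bound workload.
    seconds_per_io = round_trips * rtt_ms / 1000.0
    return (io_kb / 1024.0) / seconds_per_io

print(write_throughput_mbs(2.0))                  # 2 ms RTT: ~15.6 MB/s
print(write_throughput_mbs(2.0, round_trips=1))   # FastWrite-like: ~31 MB/s

Halving the round trips per write doubles throughput, which is the essence of what write acceleration buys on long links.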

361

SAN

Figure 65 - SCSI Write without FastWrite

362

C11: -. SAN

Figure 66 - SCSI Write with FastWrite

363

SAN

SAN ,
FCIP
FCIP. native FC

FCIP, FC FastWrite


.
(), ,

, ,
- FR4-18i.
FC FastWrite.

10Gbit DR/BC

(DR)
(BC)
.

,

.


xWDMs ,

. ,
10Gbit
,
4Gbit,
10Gbit



WDM
.

.

364

C11: -. SAN

. 67 10Gbit DR/BC


DWDMs 10Gbit
FC, Brocade 10Gbit FC,
FR4-18i FC
FR4-18i .
-

. DR/BC
365

SAN

. 67.
.

, ,
,

.

. :

SAN, .. "Meta SAN A" ( "B" ).

( )
Meta SAN,
DW DM.
480Gbits

Meta SAN.

active/active,
960Gbits (1.9Tbits )
.
(
(A) )
FR4-18i

, E_Port IFL (K)


48- (F ).
, , (B),
,
- 192 64
.

366

C11: -. SAN
86 (C) FR4-18i
,
quad, IFL,

, IFL.



.



(D),

(E).
(, (F))
,

LSAN.

/

DR/BC.
,


, , (H) (I),
.
(, (A) (B))
4-
IFL (J).
: 4 IFL
. IF L
FR4
,

86

367

SAN

E_Ports (K), (L)


EX_Ports.
FR4 ,
,

FR4-18i,


).
IFL
DPS
.

10 ISL (M). (N)
120 DWDM (O)
. (12) 10Gbit ISL
(24)

-.
DPS Condor ASIC
FR4-18i,


backplane
. ,

backbone .

.
,
, ,

/ Brocade Professional Services.


368

C11: -. SAN

LW

SW

SFP
SFP/
SFP+
SFP/
SFP+
SFP+
XFP
SFP
SFP
SFP+
XFP

, Gbps


Supported distances depend on link speed, media type, and the grade of fiber used; Figure 68 summarizes typical maximums for native FC.

Speed | OM1 62.5µm/200MHz (SW, MMF) | OM2 50µm/500MHz (SW, MMF) | OM3 50µm/2000MHz (SW, MMF) | 9µm SMF (LW)
1Gbit | 300m | 500m | 860m | 10km
2Gbit | 150m | 300m | 500m | up to 30km
4Gbit | 70m | 150m | 380m | up to 80km
8Gbit | 21m | 50m | 150m | up to 25km
10Gbit | 33m | 82m | 300m | 10km

The optics are SFPs at 1Gbit, SFP or SFP+ at 2-4Gbit, SFP+ at 8Gbit, and XFP at 10Gbit; the longer LW figures assume extended-reach optics.

Figure 68 - Maximum link distances, by speed and media type (see the SFP discussion below)


,
369

SAN

.
FC,

. 1990- 1Gbit FC
(SW )
300 MultiMode Fiber (MMF) 62.5
m OM1,

,
2Gbit 150
.
,
50m
OM2 2- FC

300 , 4Gb it FC

OM3,

380 .


8Gbit 10Gbit.
native FC

. MMF
Single-Mode
Fiber (SM F) 9 m.
(L W)
ISL .
,
.
,
CWDM
ISL

.
, FC

370

C11: -. SAN
DWDM,
FC

DWDM. DWDM
.
.
. 381.

371

C12:

12
12:
SAN
.

,
,

.
SAN ,
,

,
SAN,
SAN.


,

SAN. ,

.

(rack)
, , ,

,
, .
, SAN-
,
373

SAN

,

.
,

SAN-
, .

(racks, lifts, cabinets) SAN.
,
,
, .
(front-to-back),
(back-to front).
,
,
,

.


.
, Brocade ,
OEM, Brocade SAN
.
,
.
, SAN
,
(side-to-side).

IP- .

, ,
374

C12:

. ,

. ,
-


,
.

,
. . 69 ,
side-to-side

front-to-back / back-to-front.

375

SAN

. 69

SAN ,
,
(. 303).

,
, .


,

(. ). ,
,

.
376

C12:

,
,
.. ,
,
,

.
, SAN
, :


HA SAN


SAN
, .
HA SAN

. . 70 73
.

377

SAN

. 70

. 71 , .

378

C12:


, .

. 72

,

.

379

SAN

. 73


HA, ..
. A/B
, ,
(, - )

(,

, ).
SAN
.

,
, .

SAN


380

C12:

,
10%
,
50%.
,
"
".
,
.
SAN,

.
, SAN


SAN. SAN, 100 ,
Brocade 4900
5000. SANs 100
,
,
Brocade 48000.


SAN,
,
(medi a, SFP, GBIC),
.




.
381

SAN

,
, , ,
. :
GBIC ,

SFP. -,
GBIC SFP.
SAN ,
.
MMF
, ,
SMF, .
, 10Gbit
,
1Gbit, 2Gbit 4Gbit
. , .
SC SFP
. , LC
GBIC. -
,
,
-.
,

.
87

. SFP GBIC

.

87
() ,
,
HBA ,
.

382

C12:

. SFP
,
.

SAN
(SFP GBIC) SAN.

. , ,

,
,

.
,
SAN ,
,
.
,
.
.
SWL

Short Wavelength Laser (SWL).


SFP GBIC.

, Gigabit Ethernet
Fibre Channel

.
,

, ,
DWDM. SWL-
MMF.
383

SAN

SWL SFP GBIC
- .
LWL ELWL

Long Wavelength Laser (LWL) Extended Long Wavelength Laser


(ELWL) native FC

xWDM .
SWL, SFP GBIC
Fibre Channel Gigabit Ethernet.
,
, SWL
100
. LWL ELWL
SMF.

FC ,
SW L. ,

.
-
LWL
ELW L, OEM ,
,
SWL.
MMF

Multi-Mode Fiber (MMF


)
.
SMF
.
, 88 - 50/125 µm 62.5/125 µm. Brocade

88

-
, .
, 50/125 µm MMF 50 µm.

384

C12:

. MMF

SWL GBIC SFP.

50/125 µm. ,
OM3

.
(50/125 µm 62.5/125 µm) MMF.
,
.

,
,
.
SMF

Single-Mode Fiber

,
-

LWL, ELWL xWDM.
SMF 9/125 µm.



-, ,
,
.
SAN

,
,
SAN.
SAN

385

SAN

.


SAN,

.

SAN:


,
.
. , ISL
,

.
ISL .
S AN
,
,
,
, .

,
,

.
,

- .
386

C12:

,
(Field Rep laceable Un its, FRU).
.
.

.

,

.

,
.

. ,
,

.
, ,
,
.


,
, IP ,
,
IP- .

,
.
SAN

387

SAN

, ,
SAN ( , A
B)
(
).

Fabric OS SAN (
). Fabric OS

OE M SAN /
Fabric OS
.



.
ID .
Brocade
,
. ,
,
ID

,
ID,

ID.

ID
.
ID

,
, ID
,
A/B. ( ,
388

C12:

PID .)



, SAN .
,

.

,
.
SAN

/,
. ,
SAN

, ,
.

,

/
.



, SAN,
,

, .


,

389

SAN

.
:
1.
2.

3.
4.

5.


SAN 4 4:
SAN (. 121 ).


, .
,
,

.

.
.
SAN.
, ..

,

. ISL
IFL ,
.
,
,
,
. ,
60
, 30 ,

.




,
390

C12:


.

, ,


.
,

,


, .
,
,
A/B.



.
A
B.

/ A/B ( . 303)

.
,
-.
,


.

391

SAN



. ,

,
. ,

.

,
.
,
,
SAN .
-

SAN ,
.
, ,

.
Brocade ,
, , Fabric
Manager ,
SAN Heath
.

,
,

, , ID
. Fabric Manager


.
392

C12:

, SAN

,

, .
SAN ,
.
,
, .
Meta SAN,

,

.


,
.
-

/
ASIC.
backplane,
.


.

CE

. CE
,

S AN,
.
393

SAN

.
(
)
. 89
Brocade SAN Health W eb Tools.
portLogDump CLI
FC. ( ,
.)

Fibre Channel.
Brocade

,
,
FC.

Brocade ,
,
, Fibre Chan nel
.
(

portLogDump.
FC


,
.
FC,
.

89

, ,
. ,
.

394

395

A
Appendix A: Reference
This chapter provides reference material for readers who may be less familiar with either Fibre Channel or IP/Ethernet technology, or advanced readers who just occasionally need to look up certain details. Topics covered include an overview of some of the more notable items in the Brocade hardware and software product lines, and some of the external devices that might be connected to SAN infrastructure equipment.

Brocade Hardware90
Brocade offers a full range of SAN infrastructure equipment, including switches and routers ranging from entry-level 8-port platforms up to 384-port enterprise-class fully-modular directors. The networking capabilities of the platforms allow solutions with up to about 10,000 ports in a single network today, with the potential to scale much higher in the future.91 Brocade currently offers products with Fibre Channel, FICON, iSCSI, and FCIP. The Brocade Fabric Application Platforms deliver switching at all levels of the protocol stack up to and including the application layer.

90 Shipping to OEMs for sale as of the date of first printing of this edition of this book. Check with the appropriate sales channel for product availability.
91 Very large solutions generally require FC-FC routers as well as switches.


All currently shipping FC fabric switch platforms run a version of Brocade Fabric OS 5.x or higher.92 The use of a common code base enables compatibility between switches and nodes, and consistent management between platforms. It also allows a common set of value-added software features. (See the Brocade software section on p444.)

FC Switches

Brocade 200E
The Brocade 200E (below) is the entry point into the
Brocade FC product portfolio.

Figure 74 - Brocade 200E

This platform provides enterprise-class features, performance, and scalability, at an affordable price point for the entry market. Features include:

Sixteen 4Gbit93 non-blocking / uncongested interfaces to support the most performance-intensive applications: enterprise-class performance at an affordable price. It is the highest-performing 8-to-16-port SAN switch in the industry.

92 Products brought into the Brocade family from the recent acquisition of McDATA are an exception to this rule. Brocade intends to converge these into a common director platform running Fabric OS in the future. Former McDATA customers are encouraged to discuss any concerns they may have regarding the roadmap with their local Brocade sales team.
93 See also 4Gbit FC on p525.


Investment protection for existing SAN infrastructure to reduce deployment cost and complexity. This means forward and backward compatibility with other Brocade switches, routers, and directors at 1Gbit, 2Gbit, and 4Gbit.94

Enterprise-class features and high-availability characteristics such as hot-swappable FRUs and hot code load and activation. The switch is ideal for mission-critical SAN environments too small or cost-sensitive to allow director deployments.

Ports on demand via optional software license keys allows the switch to be used in configurations starting at stand-alone 16-port solutions, but it can also be used as a core in small to medium CE fabrics, and as an edge in medium to large solutions.

The Brocade 200E was intended to replace the Brocade 3250 and 3850 (p436). In many respects, these switches are similar. All have fixed fans and power supply(s). All support hot code load and activation. All are compatible with Fabric OS 5.0.1 and later. All three use SFP media.
However, the Brocade 200E also improves on the older switches in many ways. For example, the 200E uses more modern and highly integrated technology, resulting in a more reliable switch and lower power consumption.

94 It is never possible for a technology company to perform regression testing for all firmware released on new products in all combinations with all firmware releases on all old products. This would result in a virtually infinite number of tests needing to be passed before any new products could be qualified for shipping. Since this is impractical, Brocade will periodically end support for very old platforms. For example, the SilkWorm 1000 series (which has not been shipping this century) has never been supported in combination with the Brocade 48000. Customers running products which have been at end of life for multiple years should explicitly check for compatibility before using them with newer platforms, and should consider upgrading in any case.


Most notably, the 200E is the first entry platform to use the fourth-generation 4Gbit Goldeneye ASIC. (p502) In addition to the Brocade Fabric OS 5.x features available on other platforms, the Brocade 200E enables the next-generation features of the Goldeneye ASIC, including but not limited to:

4Gbit Fibre Channel interfaces
Each port is an autosensing U_Port interface, supporting F_Port, FL_Port, and E_Port
Auto-negotiates 4Gbit on ISLs and Trunks with other Goldeneye and Condor based switches
Capable of running all ports at 4Gbit line rate simultaneously - that is 128Gbits of cross-sectional bandwidth per switch
4-way frame-based trunking, and DPS
Cut-through routing to minimize latency
Centralized pool of 288 buffer-to-buffer credits
Hardware offload support for node login, which improves control-plane scalability
Centralized hardware zone tables allow more flexible deployment scenarios; up to 256 hardware zones are supported per ASIC
8 VCs per E_Port to prevent head-of-line blocking (HoLB) in larger networks; this can be used for advanced QoS features in the future

Brocade 4100
The Brocade 4100 Switch, shown in Figure 75, provides
enterprise-class features, performance, and scalability.

Figure 75 - Brocade 4100


Some of its features include:


4Gbit non-blocking / uncongested interfaces to
support the most performance-intensive applications,
yielding enterprise-class performance at a midrange price.
It is the highest-performing 16-to-32-port SAN switch in
the industry.
Investment protection for existing SAN infrastructure to reduce deployment cost and complexity. This
means forward and backward compatibility with other
Brocade switches, routers, and directors. All ports can operate at 1Gbit and 2Gbit, as well as 4Gbit, and Fabric Services behaviors are consistent.
Enterprise-class features and high-availability
characteristics such as hot-swappable FRUs and hot code
load and activation. The switch is ideal for mission-critical SAN environments too small or cost-sensitive to
allow director deployments.
Ports on demand via optional software license
keys allows the switch to be used in configurations starting at stand-alone 16-port solutions, but it can also be
used as a core in small to medium CE fabrics, and as an
edge in medium to large solutions.
The Brocade 4100 replaced the Brocade 3900 (p436)
in late 2004. In many respects, the two switches are very
similar. Both provide up to 32 ports in high-density fixed
configuration. Both have hot swappable fans and power
supplies. Both support hot code load and activation. Both
run Fabric OS. Both use SFP media.
However, the Brocade 4100 also improved on the
3900 in many ways. For example, the Brocade 3900 was
50% larger than the 4100, so the new platform supports
higher density rack configurations. There were corner
cases in which the Brocade 3900 could exhibit internal


congestion under certain traffic configurations.95 (See the SilkWorm 12000 / 3900 XY architecture discussion on p513 for more information about the 3900 internal architecture.) The 4100 uses more modern and highly integrated technology, resulting in a more reliable switch and lower power consumption.
Most notably, the Brocade 4100 is the first platform to use the fourth-generation 4Gbit Condor ASIC. (See Condor on p506.) In addition to the Brocade Fabric OS features available on other platforms, the Brocade 4100 enables the next-generation features of the Condor ASIC, including but not limited to:
including but not limited to:

4Gbit Fibre Channel interfaces


Each port is an autosensing U_Port interface, supporting F_Port, FL_Port, and E_Port
Auto-negotiates 4Gbit on ISLs with 4Gbit switches. 96
Capable of running all ports at 4Gbit line rate simultaneously. That is 256Gbits of cross-sectional
bandwidth per chip.
8-way frame-based trunking, and DPS.
Cut-through routing to minimize latency
Centralized pool of 1024 buffer-to-buffer credits
Up to 255 buffers allocated to any given port
Native FC connectivity up to 500 km

95
Note that traffic patterns consisting of large percentages (e.g. 90%) of
small (e.g. 64-byte) frames will have lower throughput. This is not caused by
congestion. It is because the ratio of frame header and inter-frame gap to payload is less favorable with small frames. All networking technologies behave
this way to some extent if they support variable frame sizes. Fortunately,
there are no known bandwidth-sensitive applications that produce large percentages of small frames on all ports in a network simultaneously, which is
the only scenario in which the switch would exhibit degraded performance.
Typical SAN traffic patterns lean much more heavily towards 2k frames than
towards 64-byte frames, and the average frame size is very close to 2k.
96
At the time of this writing, there are few generally available 4Gbit nodes.
The intent is for F_Ports also to auto-negotiate as the 4Gbit node market develops in much the same way that 1Gbit/2Gbit is auto-negotiated today.


Hardware offload support for node login. This improves control-plane scalability.
Centralized hardware zone tables allow more flexible
deployment scenarios. Up to 256 hardware zones are
supported per ASIC.
16 VCs per E_Port to support non-blocking (HoLB)
operations in larger networks. This can be used for
advanced QoS features in the future.

Brocade 5000
The Brocade 5000 fabric switch is shown in Figure 76. This platform provides enterprise-class features, performance, and scalability, delivering high value at an affordable price point. This product functionally replaces the Brocade 4100, and entirely replaces the M4700.
In many respects, these switches are very similar. All provide up to 32 ports in a high-density fixed configuration. All three have hot swappable fans and power supplies. Each can support hot code load and activation. Both the 4100 and 5000 run Fabric OS. Both use SFP media. One minor difference is that the 5000 has a combined fan/power-supply FRU, whereas the 4100 had separate FRUs for each of those parts. Since this has no impact whatsoever on availability, this is considered an academic difference.

Figure 76 - Brocade 5000

However, the 5000 is not simply a replacement for the


4100; it also improves on the 4100 in many ways. For example, the 4100 was twice as deep as the 5000. Because

of the shallower rack footprint, it is possible to mount the 5000 without a rail kit. In some configurations, the 5000 supports higher density rack configurations in that it can be mounted back to back in a cabinet, provided that the overall airflow is appropriate. That is, it can be mounted on the direct opposite side of a cabinet vs. other equipment, or even behind another Brocade 5000. The 5000 uses more modern and highly integrated technology, resulting in a more reliable switch and lower power consumption: it is about 20% more efficient than the 4100.
From a software feature set viewpoint, the 5000 is identical to the 4100 with the exception that, at the time of this writing, the 4100 does not have a near-term roadmap to support native interoperability with McDATA fabrics whereas the Brocade 5000 does have this.

Brocade 4900
The Brocade 4900 fabric switch is shown in Figure 77. This platform is essentially identical to the Brocade 4100 (p400) and 5000 in terms of features supported. The difference is that it has twice as many ports, and takes up 2U instead of 1U (i.e. the port density is identical). The ports on demand feature ranges from 32 to 48 to 64 ports. Like the Brocade 4100, the Brocade 4900 has sufficient internal bandwidth to support all ports at full-speed / full-duplex operation simultaneously in all traffic configurations (i.e. it is fully non-blocking and uncongested).

Figure 77 - Brocade 4900


Brocade 48000
The Brocade 48000 (below) is a fully-modular 10-slot enterprise-class director, and can be populated with up to eight port-blades and two Control Processors (CPs).

Figure 78 - Brocade 48000 Director

This platform first shipped in mid 2005. It can be configured from 32 to 384 ports in a single domain using 16-, 32-, and 48-port 4Gbit FC blades. Using the Virtual Fabrics feature, it can be carved up into multiple virtual chassis. The platform has industry-leading performance and high availability characteristics. Each blade is hot-pluggable, as are the fans, WWN card, and power supplies. The chassis has redundant control processors (CPs) with redundant active-active uncongested and non-blocking switching elements, which run Fabric OS 5.0.1 or higher and support HCL/A. To support 48-port blades,

Fabric OS 5.2.0 or higher is required, and some advanced function blades may require higher OS releases.
The Brocade 48000 is an evolution of the Brocade 12000 and 24000 design. The blades can even use the same chassis as their predecessors in some cases: the power supplies, fans, backplane, and sheet metal enclosure are generally compatible. As a result, it is possible to upgrade an existing 12000 chassis all the way to the 48000 in the field by replacing just the CP and port blades.97 Similar procedures can work with the 12000 to 24000, or 24000 to 48000. Look between Figure 78 and Figure 105 (p439) and the similarity will be apparent.
There are also differences between the directors. Some of the differences are minor. For example, the 24000 and 48000 chassis and blade set has an improved rail glide system that makes blade insertion / extraction easier compared to the 12000. Larger ejector levers help by providing greater mechanical advantage. The 48000 also has a redesigned cable management system to accommodate the larger number of ports.
There are also much more important differences in the underlying technology. For example, the 24000 uses the 2Gbit Bloom-II ASIC, while the 48000 uses the 4Gbit Condor chipset. (See Bloom / Bloom-II on p505 and Condor on p506.) The overall chassis power consumption and cooling requirements have been lowered drastically, with the result that ongoing operational costs

97 As a practical matter, this is almost never done. It's virtually always easier, less risky, and even less expensive to deploy a new director vs. upgrading an existing chassis. Also, not all OEMs can support upgrading chassis for administrative reasons. For example, it may be that the chassis serial number is used to define the support contract for a platform, and it may not be administratively practical to change it from a 12000 to a 48000 in the support system, even if it is technologically possible from a hardware and software viewpoint. The bottom line is that field upgrades are rarely performed.


are reduced and MTBF is incr eased substantially as well.


Further im provements in MT BF are achieved through
component integration: fewer com ponents means less frequent failures, and the Condor chipset is the most tightly
integrated in the industry.
Performance has been improved from the 12000 by changing the multistage chip layout from an XY topology to a CE arrangement. (See page 511 for more information.) This allows the 48000 to present all of its ports in a single fully-internally-connected domain. The 12000, in contrast, presented two 64-port domains and required external ISLs if traffic was required to flow between the domains. In addition, the 48000 runs its internal links faster than the 12000 or 24000. Using the advanced trunking capabilities of Condor, the 48000 maintains an evenly balanced 1:1 relationship of front-end to back-end bandwidth on the 16-port 4Gbit blades. By taking advantage of local switching and high-port-count blades, it is not only possible but actually practical to sustain 1.5 Tbits (3 Tbits cross-sectional) of throughput in the chassis.
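As a rough cross-check, the headline throughput figure follows directly from the port count and line rate quoted above (back-of-the-envelope arithmetic, not an independent specification):

$$384\ \text{ports} \times 4\,\text{Gbit/s} = 1536\,\text{Gbit/s} \approx 1.5\,\text{Tbit/s}, \qquad 1.5\,\text{Tbit/s} \times 2\ (\text{full duplex}) \approx 3\,\text{Tbit/s}$$

As the text notes, actually sustaining this on the 32- and 48-port blades depends on keeping traffic locally switched within each blade.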
When making design trade-offs, availability is usually considered the most important factor. This is especially true for customers of director-class products. Of course, the 48000 has the usual director-class feature set, but it also has a more subtle characteristic related to the reliability of port blades, which translates to availability of connections. The 48000 has the most efficient component integration of any FC director built to date.


Figure 79 - FC16 Port Blade for Brocade 48000

Note the highlighted section of the figure in the middle of the blade. This is the blade's Condor ASIC. It is the "brain" of the blade, containing the FC protocol logic, the serdes98 functions, buffer memory, zoning enforcement memory and logic, performance counters, and so on. Having all of these functions in a single chip drastically reduces the complexity of the blade vs. competing approaches, which improves MTBF and lowers power and cooling requirements. Compare the single-ASIC approach of the FC16 to any other director in the industry, and the difference will be immediately apparent. It is certainly apparent when comparing the refinement of this blade to the port blade designs from its predecessors.

98 A Serializer / Deserializer function, or serdes for short, is required by all FC switches to convert frames from a parallel mode (such as being held in a buffer) to a serial format suitable for transmission. In non-Brocade switches, serdes functions are generally on separate chips, which increases power draw and lowers reliability.
Perhaps the most important difference between the 48000 and its predecessors is that the Brocade 48000 is the "go forward" platform for the Brocade Enterprise roadmap. This means that purchasing a Brocade 48000 today is a strategic investment that will still have value for years to come. Brocade shipped the FR4-18i blade for FC routing and FCIP some time ago, and recently shipped several additional blades such as:

• iSCSI port blade
• 10Gbit FC blade
• Application Processor (AP) blade

The advanced feature blades are discussed in more detail later in this section.
The intention is to be able to populate the chassis with many different combinations of port blades.99 For example, the system should support a configuration with a combination of e.g. 128 4Gbit fabric ports plus two LSAN router blades plus two iSCSI blades.

99 It is possible that some combinational restrictions could apply, and support may vary between OEMs.
The Brocade 48000 Fibre Channel Director provides the following features today:

• 384 ports per chassis configured in 16-, 32-, or 48-port increments
• Current port blades support 1Gbit, 2Gbit, and 4Gbit Fibre Channel on a per-port basis
• FR4-18i router blade supports LSANs and FCIP
• FC4-16IP blade with FC and iSCSI support
• FA4-18 Application Blade with 16 virtualization ports
• FC10-6 10Gbit Fibre Channel blade
• Management access via 10/100Base-T RJ45 Ethernet ports and DB9 serial ports
• 14U rack-mountable enclosure <30 inches deep. This allows up to 768 ports (two 384-port chassis) in a single rack.100
• High-availability features include hot-swappable FRUs for port blades, redundant power supplies and fans, and redundant CP blades
• Extensive diagnostics and monitoring for high Reliability, Availability, and Serviceability (RAS)
• Non-disruptive software upgrades (HCL/A)
• Non-blocking architecture enables 128 ports to operate at full 4Gbit line rate in full-duplex mode
• Forward and backward compatibility within fabrics with all Brocade 3000-series and later switches
• Brocade 12000s are upgradeable to 48000s
• Small Form-Factor Pluggable (SFP) optical transceivers allow any combination of supported Short and Long Wavelength Laser media (SWL, LWL, ELWL), as well as CWDM colored laser media
• Cables, blades, and PS are serviced from the cable side and fans from the non-cable side
• Air is pulled into the non-cable-side of the chassis and exits cable-side above the port and CP blades and through the power supplies to the right

100 Not all racks can support high-density configurations. The rack must be at least 42U high. There may need to be space between chassis for cable management. Power and cooling infrastructure, cable management, and structure of the floor must be sufficient. The organization supporting the SAN must often approve the installation of extremely high density deployments.

Brocade AP7420
Routers are used to connect different networks together, as opposed to bridging segments of the same network. In this context, "multiprotocol" means connecting networks using different protocols, generally at the lower levels of the stack.
For example, one network could use SCSI over Fibre Channel (i.e. FCP) and another could use SCSI over IP (e.g. iSCSI). In general usage, a router that can merely handle multiple Upper Layer Protocols (ULPs) but not different lower layer protocols is not considered multiprotocol.101 For example, the ability to handle both SCSI/FC and FICON/FC would not qualify a product as a multiprotocol router, whereas handling both SCSI/FC and SCSI/IP might qualify for the term.

101 For differing ULPs, a router may not need any special capabilities. For example, an IP/Ethernet router can handle both HTTP/IP/Ethernet and Telnet/IP/Ethernet without being multiprotocol per se: the ULP is transparent to the router. In some cases, a switch may need special upper layer services support for a ULP, such as CUP support on a FICON/FC switch. Even this does not qualify the switch as a multiprotocol router; it is simply an FC switch with enhanced FICON support.
In the context of SANs, a multiprotocol router must connect Fibre Channel fabrics to each other and/or to some other networking protocols. Fibre Channel is mandatory since it is by far the leading protocol for use in SANs. Other protocols that a router may connect to include IP storage protocols such as FCIP and the emerging iSCSI standard.

Side Note
To learn more about SAN routing in general and the Brocade routers in particular, read the book Multiprotocol
Routing for SANs, by Josh Judd.
Brocade has created a multiprotocol SAN router which provides three functions critical to modern enterprise SAN deployments, and is designed to provide more in the future. At the time of this writing, the multiprotocol router software provides:

• FC-FC Routing Service for greater connectivity than traditional Fibre Channel SANs provide
• FCIP Tunneling Service for extending FC fabrics over distance using IP wide area networks

• iSCSI Gateway Service for sharing Fibre Channel storage resources with iSCSI initiators

In addition to running these three services, the Brocade router is also a high-performance FC fabric switch.
The first platform the multiprotocol router software was delivered on was the Brocade AP7420 Multiprotocol Router (Figure 80). This platform first became generally available in early 2004. Multiprotocol router capabilities were added to the Brocade 48000 director in early 2006 via the FR4-18i blade, and the 4Gbit Brocade 7500 router shipped in the same timeframe. For most routing deployments, these two platforms have supplanted the 7420. For most application-layer deployments, the Brocade 7600 and FA4-18 blade have replaced the 7420. However, this device is still useful in some cases.

Figure 80 - Brocade AP7420

Multiprotocol routing is a subset of the AP7420's capabilities: as well as performing its role as a multiprotocol router, it was designed to handle storage application processing requirements (a.k.a. virtualization) for the full range of environments from small business to large-scale enterprises.
At only two RETMA units (2U) in height, the AP7420 allows deployment of fabric-based applications and multiprotocol routing using very little space. With ports-on-demand licensing, a single platform can be purchased with as few as eight ports and is scalable to sixteen ports with only the addition of a license key. Furthermore, its advanced networking capabilities allow scalability far beyond that level.
The AP7420 can make switching decisions using any protocol layers up to the very top of the protocol stack. This means that the platform hardware is able to function as a standard Fibre Channel fabric switch, an FC-FC router, or a virtualizer. Similarly, it could theoretically function as an Ethernet or IP switch from layer 2 to layer 4.102 The platform has considerable flexibility, since every port has its own ASIC with multiple embedded CPUs and both Ethernet and Fibre Channel protocol support.

102 Of course, as a practical matter, not all features and combinations of features will be supported in software just because the platform hardware is capable of delivering them. For combinations not explicitly called out in this book, discuss them with support and sales personnel.
The Brocade AP7420 Fabric Application Platform provides the following features:

• 16 ports with software-selectable modes, including auto-sensing 1Gbit/2Gbit FC, and 1Gbit Ethernet
• 2U rack-mountable enclosure ~25 inches deep
• HA features including redundant hot-swappable power supply and fan FRUs
• Compatibility with all Brocade 2000-series and later Brocade switches within fabrics
• Management access via dual 10/100Base-T RJ45 Ethernet ports and one RJ45 serial port
• When in Fibre Channel mode:
• Auto-sensing ports negotiate to the highest speed supported by attached devices
• FC ports auto-negotiate as E_ or F_Ports. Any port may be configured as an FL_Port to permit an NL_Port device to be attached, or as an EX_Port for FC-FC routing
• Exchange-based ISL and IFL routing
• When in Gigabit Ethernet mode:
• Hardware acceleration through offloading TCP to the port ASICs' ARM processors
• FCIP for delivering TCP/IP distance extension
• iSCSI initiator to FC target protocol mediation
• Per-port XPath ASICs for rapid data manipulation
• The XPath Fabric ASIC provides non-blocking connectivity between port ASICs
• SFP optical transceivers allow any combination of supported SWL, LWL, and ELWL media
• Latency is minimized through the use of storage application processors inside each port ASIC
• Each port has LEDs to indicate behavior and status
• Air is pulled into the non-cable-side of the chassis, and exits cable-side above the SFPs

Brocade 7500
The Brocade 7500 (Figure 81) is a fixed-configuration 16-port 4Gbit FC router/switch with two additional ports for FCIP connectivity. The FCIP ports have the same capabilities as in the FR4-18i (p415). In addition to being able to perform all standard FC switching functions, it can route FC (i.e. to form IFLs) on all sixteen FC ports, and route tunneled FC IFLs across the FCIP ports. This is a multiprotocol routing switch running Fabric OS 5.1 or above. In almost all cases, this is considered to be a replacement for the AP7420.

Figure 81 - Brocade 7500


Like the FR4-18i, the sixteen Fibre Channel ports will negotiate 1Gbit, 2Gbit, or 4Gbit speeds. The internal switching architecture is fully non-blocking and uncongested at full-speed / full-duplex. The internal switching fabric supports up to 256Gbits of bandwidth: more than enough to handle all ports at full speed.
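A quick back-of-the-envelope check of that claim, using only the port counts and speeds quoted above:

$$16\ \text{FC ports} \times 4\,\text{Gbit/s} \times 2\ (\text{full duplex}) = 128\,\text{Gbit/s} < 256\,\text{Gbit/s}$$

which leaves ample internal headroom for the two FCIP ports as well.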

Brocade 7600
The Brocade 7600 is a fixed-configuration 16-port
Application Platform / Virtualization Switch. In addition
to the 16 FC ports it has two additional 1000baseT ports
for application management connectivity. This platform
requires Fabric OS 5.3 or above.

Figure 82 - Brocade 7600

The sixteen Fibre Channel ports will negotiate 1Gbit, 2Gbit, or 4Gbit speeds. The internal architecture is fully non-blocking and uncongested at full-speed / full-duplex. The internal switching fabric supports all ports at full speed. In addition to the switching bandwidth, it also has 128Gbit of virtualization bandwidth and the ability to support more than 1 million IOPS.

FR4-18i
The FR4-18i (Figure 83) is a multiprotocol routing blade for the Brocade 48000 director (p405). There are sixteen FC ports on the blade, and two FCIP ports.


Figure 83 - FR4-18i Routing Blade

Each of the FC ports may be used for attachment of 1Gbit, 2Gbit, or 4Gbit FC devices such as hosts or storage, connection of FC Inter-Switch Links (ISLs), or for FC to FC routing via Inter-Fabric Links (IFLs). It is also possible to use this blade to enable FC write-acceleration features generally applicable to DWDM or FCIP deployments in enterprise DR or BC solutions.

The FCIP ports may tunnel IFLs or ISLs. They support advanced distance extension features such as compression, encryption, and FastWrite acceleration, as well as having hardware acceleration for TCP headers to ensure top performance and standards compliance.

FA4-18
The FA4-18 (Figure 84) is an Application / Virtualization blade for the Brocade 48000 director. There are sixteen (16) FC ports on the blade, and two (2) GE ports.


Figure 84 - FA4-18 Application Blade

Each of the FC ports may be used for attachment of 1Gbit, 2Gbit, or 4Gbit FC devices such as hosts or storage, or for FC ISLs. The internal architecture of the blade is fully non-blocking and uncongested at full-speed / full-duplex. The internal switching fabric supports up to 128Gbits of bandwidth: more than enough to handle all ports at full speed. In addition to the switching bandwidth, it also has 128Gbit of virtualization bandwidth and the ability to support more than 1 million IOPS. The two 1000baseT ports are used to connect to external application management servers.

FC10-6 10Gbit Fibre Channel


The FC10-6 (Figure 85) is a 10Gbit FC blade for the Brocade 48000 director. There are six (6) 10Gbit FC ports on the blade. Each of the ports may be used for attachment of 10Gbit FC ISLs. The internal architecture of the blade is fully non-blocking. There are two Condor ASICs to handle backplane connectivity, and six Egret ASICs (p509) to operate the 10Gbit ports. Each Egret has 720 buffer-to-buffer credits, which is sufficient to support a full-speed connection at 120km.
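That credit count is consistent with the standard buffer-to-buffer credit estimate for long links. Assuming roughly 5 microseconds per kilometer of propagation delay in fiber and full-size (about 2148-byte) FC frames at roughly 10 Gbit/s (rule-of-thumb values, not Brocade specifications):

$$RTT \approx 120\,\text{km} \times 2 \times 5\,\mu\text{s/km} = 1200\,\mu\text{s}$$

$$t_{\text{frame}} \approx \frac{2148 \times 8\ \text{bits}}{10\,\text{Gbit/s}} \approx 1.7\,\mu\text{s}, \qquad \text{credits} \approx \frac{1200\,\mu\text{s}}{1.7\,\mu\text{s}} \approx 700$$

so 720 credits is just enough to keep a 120km 10Gbit link full of frames in both directions of the round trip.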


Figure 85 - FC10-6 10Gbit FC Blade

The 10Gbit FC ports are only supported for ISL connectivity at this time, as no 10Gbit Fibre Channel devices (hosts or storage) are widely available. Brocade does not expect 10Gbit devices to become widely available, because 10Gbit is not cost-efficient for nodes, and less expensive 8Gbit FC will be available before nodes could take full advantage of higher speeds in any case. 10Gbit Fibre Channel is targeted for MAN/WAN deployments over xWDM or dark fiber networks.

FC4-16IP iSCSI to Fibre Channel


The FC4-16IP (Figure 86) has a combination of 8 x 1Gbit iSCSI/Ethernet ports (GE) and 8 x 4Gbit Fibre Channel (FC) ports. It is designed for use in the Brocade 48000 director (p405).

The GE ports are used to connect to external iSCSI initiators directly, or (more typically) via external Ethernet switches for fan-in. The blade can support up to 64 iSCSI initiators per port (512 per blade).


Figure 86 - FC4-16IP iSCSI Blade

At the time of this writing, iSCSI initiators for Microsoft Windows, HP-UX, Solaris, Linux (RedHat and SuSE), and AIX are supported. The iSCSI initiators can take advantage of advanced features such as LUN masking and re-mapping. Additional features include Error Recovery Levels 0, 1 and 2, iSCSI load balancing, CHAP support, and many other iSCSI protocol-specific features.

Each of the FC ports may be used for attachment of standard 1Gbit, 2Gbit, or 4Gbit FC devices such as hosts or storage, or connection of FC Inter-Switch Links (ISLs). The internal architecture of the blade is fully non-blocking and uncongested at full-speed / full-duplex.


Embedded Switches

In addition to stand-alone platforms, Brocade ASIC and software technology is used within products from a number of partners and OEMs. For example, Brocade FC switch ASICs are embedded into blade server products offered by some of the industry's top OEMs. This allows the connection of high-density server blades into either existing fabrics or directly to storage. Brocade technology is also embedded within storage array controllers, providing a server fan-in capability integrated into the array. In effect, the OEM host or storage product contains some or all of the SAN internally, which tends to improve manageability and reliability, and also lowers power, cooling, and rack space requirements.
Historically, connecting a large number of platforms with embedded switches to a larger SAN created scalability and manageability problems. If each storage device or blade-chassis also had one or more switch domains inside it, the size of the FC fabric could get out of hand quickly. Brocade developed the Access Gateway feature (p452) to eliminate this effect. Now, most embedded switches are capable of connecting to Brocade fabrics as F_Ports instead of E_Ports, so that they do not show up as switches in the fabric. Instead, they are projected as one or more nodes, which is actually what they really are, so that tends to work out well. To support Access Gateway, it is necessary to run appropriate code levels in the fabric as well as on the embedded switch, and OEM support is also required. Consult your local support organization to see if you can benefit from this feature.
Brocade 4020 Embedded FC Switch

The Brocade 4020 was designed for the IBM eServer BladeCenter and the Intel Blade Server. It is powered by the Goldeneye ASIC (p502). The product is a single-stage central memory switch. It has a cross-sectional bandwidth sufficient to support all ports full-speed full-duplex at once in any traffic configuration. Fabric OS 5.0.2 or later is required.

The Brocade 4020 (Figure 87) has six outbound ports (to the SAN) and 14 inbound ports (one to each blade server), all of which are non-blocking and uncongested 4Gbit (8Gbit full-duplex) Fibre Channel fabric U_Ports. This platform was introduced in 2006 by Brocade and IBM. The 4020 is available with software packages ranging from an entry-level (10 ports enabled) package up to the full enterprise-class Fabric OS 5.x feature set (as well as all 20 ports enabled). This allows the platform to be purchased with the right balance of cost vs. features for a wide range of customers, from small businesses to major enterprises. Regardless of licensed options, the 4020 has enterprise features such as HCL/A and the Fabric OS CLI.

Figure 87 - Brocade 4020 Embedded Switch

Brocade 4016 Embedded FC Switch

The Brocade 4016 was designed for the Dell PowerEdge blade server and for the Fujitsu-Siemens PRIMERGY Server Blade. It is powered by the Goldeneye ASIC (p502). The product is a single-stage central memory switch. It has a cross-sectional bandwidth sufficient to support all ports full-speed full-duplex at once in any traffic configuration. Fabric OS 5.0.4 or later is required.

The Brocade 4016 (Figure 88) has six outbound ports (to the SAN), which are 4Gbit, and 10 inbound ports (one to each blade server), which are 2Gbit non-blocking and uncongested Fibre Channel fabric U_Ports. This platform was introduced in 2006 by Brocade and Dell. The 4016 is available with software packages ranging from an entry-level (12 ports enabled) package up to the full enterprise-class Fabric OS 5.x feature set and all 16 ports enabled via Ports-On-Demand. (See Brocade Software on p354.)


Figure 88 - Brocade 4016 Embedded Switch

Brocade 4018 Embedded FC Switch

The Brocade 4018 (Figure 89) was designed for the Huawei Blade Server Chassis, and was introduced in 2006 by Brocade and Huawei. It is powered by the Goldeneye ASIC (p502). The product is a single-stage central memory switch. It has bandwidth sufficient to support all ports full-speed full-duplex at once in any traffic configuration. Fabric OS 5.0.5 or later is required.

Figure 89 - Brocade 4018 Embedded Switch

The 4018 has four outbound ports (to the SAN) and 14 inbound ports (one to each blade server). All ports are non-blocking and uncongested 4Gbit (8Gbit full-duplex) Fibre Channel fabric U_Ports. This board is typically factory-installed, since, unlike other blade switches, it is a daughter board for an already existing controller module.
Brocade 4024 Embedded FC Switch

The Brocade 4024 was designed for the HP c-Class BladeSystem. The Brocade 4024 is powered by the Goldeneye ASIC (p502) and is a single-stage central memory switch. It has a cross-sectional bandwidth sufficient to support all ports full-speed full-duplex at once. Fabric OS 5.0.5 or later is required.

The Brocade 4024 (Figure 90) has eight outbound ports (to the SAN) and 16 inbound ports (one to each blade server); all ports are non-blocking and uncongested 4Gbit (8Gbit full-duplex) Fibre Channel fabric U_Ports. This platform was introduced in 2006 by Brocade and HP. The 4024 is available with software packages ranging from entry level (12-port configuration) up to the full enterprise-class Fabric OS 5.x feature set with all 24 ports enabled via Ports-On-Demand.
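Note the fan-in implied by those port counts: if every blade server drove full line rate toward the SAN at once, the uplinks would be oversubscribed (simple arithmetic on the numbers above, not a vendor specification):

$$\frac{16 \times 4\,\text{Gbit/s}\ \text{(inbound)}}{8 \times 4\,\text{Gbit/s}\ \text{(outbound)}} = 2{:}1$$

In practice, blade servers rarely all run at line rate simultaneously, which is why this ratio is generally acceptable.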

Figure 90 - Brocade 4024 Embedded Switch

Brocade 4012 Embedded FC Switch

The Brocade 4012 was introduced in 2005 by Brocade and HP. It was the industry's first 4Gbit switch for the embedded blade server market. The Brocade 4012 was specifically designed for the HP p-Class BladeSystem. It is powered by the Goldeneye ASIC. It has a cross-sectional bandwidth sufficient to support all ports full-speed full-duplex at once. Fabric OS 5.0.1 or later is required. The Brocade 4012 (Figure 91) has four outbound ports (to the SAN) and 8 inbound ports (one to each blade server); all outbound ports are non-blocking and uncongested 4Gbit (8Gbit full-duplex), and the inbound ports are all non-blocking and uncongested 2Gbit (4Gbit full-duplex) FC fabric U_Ports.


Figure 91 - Brocade 4012 Embedded Switch

Brocade iSCSI Gateway
The Brocade iSCSI Gateway is an iSCSI-optimized product, designed to connect enterprise FC fabrics to low-cost edge servers (Figure 92).

Figure 92 - Brocade iSCSI Gateway

Because this platform is smaller and offers fewer features than the FC4-16IP (p419), it can be less expensive, and may be adequate for users who desire an entry point into the iSCSI bridging market. However, there are differences between the platforms besides cost and port count which must be considered when making a selection.

The iSCSI Gateway product is not capable of providing FC fabric switching. It has fewer features and lower performance than the bladed version. The Gigabit Ethernet interfaces on the iSCSI product are low-end copper, whereas the FC4-16IP uses more reliable optical ports capable of spanning greater distances. Because the iSCSI Gateway has RJ45 copper GE interfaces on the gateway itself, rather than just on the iSCSI hosts, users need to make sure that their IT networking group provides the correct interface.

This solution should be considered for customers who need a low-cost entry point into the iSCSI bridging market above all else. Otherwise, a native Fibre Channel solution or the FC4-16IP will likely provide better results.

Classic McDATA Products
In 2007, Brocade purchased McDATA: one of its long-time rivals. However, this was not the first time that the two companies had enjoyed a partnership-style relationship. In fact, McDATA was one of Brocade's first customers, having purchased intellectual property from Brocade with which to implement its line of FC directors. Many McDATA installed-base platforms still run Brocade ASICs and code-chunks to this day. In addition, some of the companies that McDATA acquired prior to being purchased by Brocade had equivalently long-term partnerships with Brocade. For example, Brocade had a long-standing relationship with CNT in which CNT resold Brocade switches, and Brocade supported CNT for DR and BC solutions requiring certain distance extension methods.

Upon the close of the acquisition, Brocade announced end of sale for a subset of McDATA products in cases where they directly overlapped with Brocade offerings. For example, the McDATA "pizzabox" edge switches were superseded by the Brocade 5000. They had no value-added features beyond those available on the Brocade switches, so it was not necessary to continue to ship them for much longer after the close of the acquisition. Brocade announced that it intended to stop shipping these platforms at the end of 2007.

However, Brocade has a firm commitment to McDATA customers, and has not stopped shipping products such as the 140- or 256-port directors. It is expected that these platforms will converge with the Brocade director strategy at some point, but even when that happens they will be supported in Brocade networks via routed connections and compatible software releases for the foreseeable future. Also, Brocade intends to honor the support lifecycle commitments made by McDATA, which means that even products which Brocade no longer intends to actively sell are still being supported. Typically, support continues for five years after end of sale is announced.

This section discusses a few of the more notable classic McDATA products, and indicates how they may be integrated into a Brocade environment.

Brocade Mi10k
The Brocade Mi10K offers up to 256 1-, 2-, and 4Gbit FC ports in a 14U chassis. 10Gbit FC interfaces are also available for DR and BC solutions. It offers exceptional performance and availability. In some cases, it can even outperform the Brocade 48000, although in most deployments the 48000 has 50% more usable bandwidth103 as well as 50% greater rack density, and much lower power and cooling requirements. Brocade is actively selling the Mi10k platform and has no immediate plans to stop doing so.

While this director is built using somewhat limited technology compared to the Brocade 48000, costs quite a bit more, and requires considerably more power and cooling resources, it is still the best option for Classic McDATA customers who already have extensive Mi10k deployments to transparently grow those environments. It is expected that Brocade will converge the applicable portions of the Mi10k feature set with Brocade native director technology at some point in the future. In the meantime, the Mi10k is still being sold and supported, and can co-exist with Brocade-classic platforms using a number of strategies such as compatible firmware, routers, and storage-centric network topologies.

103 The cases in which the Mi10k can outperform the 48000 are those in which little or no flow locality is achievable, and the host-to-storage port ratio is near 1:1. If either of those statements is false, then the 48000 will outperform the Mi10k by a considerable margin.

Brocade M6140
The 140-port Brocade M6140 provides a high-availability, high-performance, flexible building block for large SAN deployments. It is a single-stage, 140-port director supporting 1Gbit to 10Gbit FC interfaces. It can meet the connectivity demands of both open-systems and mainframe FICON environments. Brocade is actively selling this platform and has no immediate plans to stop doing so.

While this director is built using somewhat outdated technology compared to the Brocade 48000, for Classic McDATA customers who already have extensive M6140 or 6064 deployments, the M6140 is still the best option for transparently growing those environments.

Brocade M4400 and M4700
The M4400 has 16x 4Gbit FC ports in a 1U form factor. The M4700 has 32x 4Gbit FC ports in 1U, and takes a full rack width. These two platforms are still shipping at the time of this writing. Since the Brocade 5000 offers a superset of their capabilities, Brocade will stop selling the M4400 and M4700 at the end of 2007. Support is expected to continue for five years after the final shipment date.

Brocade M1620 and M2640


The M1620 has two GE ports for SAN extension, and two FC ports for local E_Port connectivity. The platform can be deployed to support lower-end DR and BC environments. The M2640 has a similar architecture and use case, but with 12x FC ports and 4x GE ports.

These platforms used the now-defunct iFCP protocol for SAN extension. Since no other vendors ever implemented iFCP besides McDATA, and even McDATA had an FCIP roadmap, the iFCP protocol has actually been considered a dead end by the industry at large for several years. As a result, Brocade intends to stop selling these two platforms at the end of 2007 in favor of extension solutions using the Brocade 7500 router and FR4-18i blade, which support the FCIP protocol.

Brocade Edge M3000


The Edge M3000 interconnects Fibre Channel SAN islands over an IP, ATM or SONET/SDH infrastructure. Brocade is actively selling this platform and has no immediate plans to stop doing so.

The M3000 enables many cost-effective, enterprise-strength data replication solutions, including both disk mirroring and remote tape backup/restore, to maximize data availability and business continuity. Its any-to-any connectivity and multi-point SAN routing capability provide a flexible storage infrastructure for remote storage applications.

In most cases, the Edge M3000 has been superseded by the Brocade 7500 router and FR4-18i blade. However, in some cases the M3000 provides a superior fit. For example, depending upon the nature of the payload, the M3000 can compress data by up to 20:1, dramatically reducing bandwidth costs. With this compression technology, customers can achieve gigabit-per-second throughput using existing 100Mb Ethernet infrastructure at a fraction of the cost. It also implements tape pipelining, which can provide a considerable performance benefit for remote tape vaulting solutions.
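The gigabit-over-100Mb claim is just the compression ratio at work (illustrative arithmetic; real ratios depend entirely on how compressible the payload is):

$$100\,\text{Mbit/s} \times 10{:}1 = 1\,\text{Gbit/s effective}, \qquad 100\,\text{Mbit/s} \times 20{:}1 = 2\,\text{Gbit/s effective}$$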
Of course, not all customers have such highly compressible data, and equivalent features enabled or planned for the Brocade 7500 and FR4-18i may provide equivalent benefits, so the market for the M3000 is considered to be limited where compression and tape pipelining in particular are concerned. But the M3000 does have ATM and SONET/SDH connectivity advantages which are likely to keep it in the product portfolio for quite some time to come.

Brocade USD-X
The USD-X is a high-performance platform that connects and extends mainframe and open-systems storage-related data replication applications for both disk and tape, along with remote channel networking for a wide range of device types. Brocade is actively selling this platform and has no immediate plans to stop doing so.

While it is possible to use this platform in pure open-systems environments, the primary current use cases for this product are mixed and pure mainframe environments, as other products solve the extension problem more cost-effectively for most open-systems customers.

This multi-protocol gateway and extension platform interconnects host-to-storage and storage-to-storage systems across the enterprise regardless of distance to create a high-capacity, high-performance storage network using the latest high-speed interfaces. It supports Fibre Channel, FICON, ESCON, Bus and Tag, or mixed-environment systems. The intermediate WAN may be ATM, IP, DS3, or many other technologies.

The Brocade Installed Base
One of the advantages that Brocade has in the SAN marketplace is its large installed base. Brocade has millions of ports running in mission-critical production environments around the world, representing literally billions of hours of production operation to date. Brocade has a policy of prioritizing backwards compatibility with the installed base for new products.104 This allows customers buying Brocade products to get a long useful life out of them, to achieve high ROI before needing to upgrade.

This subsection describes many of the platforms in the Brocade installed base. SAN designers may encounter any of these products, and must know their capabilities when designing solutions that involve them.

104 Some restrictions apply, of course. For example, it may be necessary to run certain firmware versions and design solutions within scalability constraints to fit within a vendor support matrix. Also, it is not possible to continue support for an installed-base platform literally forever. The typical case is to continue support for five years after the last sale date of a product line.

SilkWorm 1xx0 FC
The first platform-group that Brocade shipped was the Brocade 1xx0 series (Figure 93 and Figure 94). Shipped first in early 1997, this design was simply called "the SilkWorm switch" as there were no other Brocade platforms to differentiate between. Over time, other platforms were added. The first 16-port switch became known as the "SilkWorm I", with its successor being the "SilkWorm II". In early 1998, a lower-cost 8-port "SilkWorm Express" platform was shipped based on the same architecture, but with half of the ports removed. By the time that the SilkWorm 2000 series shipped, Brocade had enough platforms that the first-generation switches became known as the SilkWorm 1xx0 series.

Figure 93 - SilkWorm II (1600) FC Fabric Switch

Figure 94 - SilkWorm Express (800) FC Fabric Switch

These switches could be configured at the time of manufacture to support either FC-AL or FC fabric devices (Flannel or Stitch ASICs respectively, p503) using combinations of 2-port daughter cards (Figure 95).

Figure 95 - SilkWorm 1xx0 Daughter Card

All SilkWorm 1xx0 switches ran Fabric OS 1.x. The product line consisted of 8- and 16-port FC fabric switches, with all ports running at 1Gbit. (8-port = SilkWorm Express and SilkWorm 800; 16-port = SilkWorm II and SilkWorm 1600.) Ports could accept either optical or copper GBICs. Management tasks could be performed using buttons on the front panel on most models. All models had RJ45 IP/Ethernet and DB9 serial interfaces.

This Brocade platform group is considered to be entirely obsolete. The 1xx0 switches are simply not compatible with many of the new features released from Brocade over the past few years, and the hardware predated some of the FC standards. Brocade recommends that SilkWorm 1xx0 series switches be upgraded to newer Brocade products and technologies in all cases.

SilkWorm 2xx0 FC
The SilkWorm 2xx0 series consisted of several platforms, all using the Loom ASIC (p504) and running Fabric OS 2.x. The first platforms in this group, the SilkWorm 2400 and 2800, shipped in the middle of 1999. At the time of this writing, the SilkWorm 2xx0 platform group has reached the end of its supportable life. Most OEMs have declared these switches to be unsupported, and the rest are expected to do so by the end of the year. Users should consider 2xx0 switches to be obsolete, and should plan for upgrading in the near future.

Figure 96 through Figure 99 show the most popular 2xx0 series platforms. All of these products operated at 1Gbit Fibre Channel, and had a single-stage central memory architecture for non-blocking and uncongested operation. All of the switches in this series had an IP/Ethernet management port. Most had a DB9 serial port for initial configuration, emergency access, and out-of-band management, with the 2800 being the exception to that rule. (It had a push-button control panel and screen for initial configuration.)

The 2xx0 series has been superseded by other Brocade products. However, these switches are still widely deployed. Brocade has found that the number of SilkWorm 2800 platforms still in production is close to the number that originally shipped: something on the order of a million ports in production. As a result, Brocade anticipates that many customers will need to perform 1Gbit to 4Gbit migrations over the next year, now that these switches have reached the end of their lifecycle.
SilkWorm 20x0

The entry-level SilkWorm 20x0 (Figure 96) was a 1U 8-port switch, with seven fixed ports (GLMs) and one port with removable media (GBIC).

The platform could be purchased in three varieties, depending on the software keys that were loaded at the factory. The third digit in the platform product ID (20x0) indicated these software options, not any difference in hardware. The 2010 came with support only for QuickLoop, so only FL_Ports could be attached, not F_Port fabric devices or E_Ports. The 2040 supported fabric nodes but only one E_Port, and the 2050 had unlimited fabric support. Both the 2010 and 2040 provided customers with complete investment protection, as either could be upgraded to the full-fabric 2050 with license keys available through all channel partners. Power input was provided by a single fixed supply, and fans were fixed as well, so the entire platform was considered a FRU.

Figure 96 - SilkWorm 2010/2040/2050


SilkWorm 22x0

The 1.5U 16-port SilkWorm 22x0 (Figure 97) brought higher rack density to the entry-level switch market.

Figure 97 - SilkWorm 2210/2240/2250

It had a single fixed power supply, like the 20x0, and could be purchased with the same three software license variations. Also like the 20x0, the entire platform was considered a single FRU. However, all 16 media on the 22x0 were removable GBICs.

This platform was also used as the basic building block for the SilkWorm 6400, which consisted of a sheet metal enclosure containing six SilkWorm 2250 switches, configured and wired together at the factory to form a Core/Edge fabric, manageable as a single platform. That arrangement yielded sixty-four usable ports.
SilkWorm 2400

The SilkWorm 2400 (Figure 98) was targeted at the midrange segment. Like the 20x0, it was an 8-port switch, but had redundant hot-swappable power supplies and fans.

Figure 98 - SilkWorm 2400

SilkWorm 2800

The SilkWorm 2800 (Figure 99) was a 16-port switch like the 22x0, but had enterprise-class RAS features like the 2400. This was by far the most popular of the 2xx0 series. In many environments, the number of 2800 switches installed today still rivals the number of later platforms. This was the only platform in the series that did not have an externally-accessible serial port. Instead, the initial switch configuration could be performed using buttons and a screen built into the cable-side panel.

Figure 99 - SilkWorm 2800

SilkWorm 3200 / 3800


In 2001, the SilkW orm 2xx0 product fam ily was superseded by the SilkWorm 3200 and 3800 switches. They
were both powered by the Bloom ASIC (p505), which increased the port speed to 2Gb it and added a range of new
features including trunking, a dvanced performance monitoring, and more advanced zoning. Both platform s had
IP/Ethernet and DB9 serial m anagement interfaces, and
both ran Fabric OS 3.x. Another major difference between
these and prior Brocade platform s was that th e SilkWorm
3200 and 3800 used SFPs, whereas all prior platforms had
used GBICs.
At the tim e of this wri ting, the S ilkWorm 3200 has
been superceded by the SilkW orm 3250 (p 436), and the
SilkWorm 3800 has been largely superceded by the SilkWorm 3850. (The Silk Worm 3800 is still ship ping, but
most users are expected to transition to the 3850 in the
near future because of its many improvements.)
SilkWorm 3200

This platform had eight 2Gbit FC ports in a 1U enclosure. It was targeted at the entry market. Like its predecessor, the SilkWorm 20x0, this switch had a single fixed power supply and fixed fans: the entire platform was considered a FRU.

Figure 100 - SilkWorm 3200

SilkWorm 3800

The SilkWorm 3800 was targeted at the midrange and enterprise markets. It had RAS features equivalent to the SilkWorm 2800.

Figure 101 - SilkWorm 3800

SilkWorm 3250 / 3850 FC


These platforms represented th e entry level of the Fibre Channel fabric switchi ng market. They each had nonremovable power supplies. Both were powered by the
Bloom-II ASIC (p 503). The ASIC arrangem ent in both
platforms yielded a single-stage central m emory switch.
They both had a cross-secti onal b andwidth su fficient to
support all ports full-speed fu ll-duplex at once. Fabric OS
4.2 or later was required. The SilkW orm 3250 ( Figure
102) had eight non-blocking and uncongested 105 2Gbit

105

There has been debate in the industry about the definition of blocking.
When Brocade uses the word, it refers to Head of Line Blocking (HoLB). For
example, the SilkWorm 24000 is not subject to HoLB because it uses virtual
channels on the backplane. It is therefore non-blocking. All ports can run
full-speed full-duplex at the same time, which is uncongested operation.

436

(4Gbit full-duplex) Fibre Channel fabric U_Ports.


SilkWorm 3850 (Figure 103) had sixteen ports.

106

The

These two platforms were introduced in 2004 to replace the popular SilkWorm 3200 and 3800 switches. Both were available with software packages ranging from the lowest entry-level ("Value Line") package up to the full enterprise-class Fabric OS 4.x feature set. (See Brocade Software on p444.) This allowed the platforms to be purchased with the right balance of cost vs. features for a wide range of customers, from small businesses to major enterprises. Regardless of licensed options, both switches had enterprise features such as hot (non-disruptive) code load and activation (HCL/A) and the Fabric OS CLI.

Figure 102 - SilkWorm 3250

Figure 103 - SilkWorm 3850

SilkWorm 3900 and 12000


The SilkWorm 3900 (Figure 104) delivered 32 ports of 2Gbit Fibre Channel in a 1.5U rack-mountable enclosure.

Figure 104 - SilkWorm 3900

First shipped in 2002, this platform was targeted at the midrange SAN market, but had many features appropriate for the enterprise market as well. In many ways, the SilkWorm 3900 was more like a small director than like a switch. Like the SilkWorm 12000, this platform had an XY topology CCMA multistage architecture. (See page 511 for more information.) Like the 12000, it supported FICON (a mainframe protocol), had redundant and hot-swappable power and cooling FRUs, and ran Fabric OS 4.x with hot code load and activation.

Typical usage cases for the 3900 included stand-alone applications for small fabrics, edge deployments in small to large Core/Edge (CE) fabrics, and core deployments in small to medium CE fabrics.
The SilkWorm 12000 (Figure 105) was Brocade's first fully-modular 10-slot enterprise-class director. This system first shipped in 2002.


Figure 105 - SilkWorm 12000 Director

The chassis was rack-mountable in 14U, and could be populated with up to eight port blades and two CPs. Overall, the chassis could be configured starting with 32 and going up to 128 2Gbit Fibre Channel ports. Each blade was hot-pluggable, as were the fans and power supplies. The redundant CPs ran Fabric OS 4.x and supported HCL/A. Typical usage cases for the 12000 included stand-alone applications, edge deployments in large CE fabrics, and core deployments in medium to large CE fabrics.

The backplane interconnected the port blades with each other to form two separate 64-port domains. The interconnection employed an XY topology CCMA multistage architecture, much like the SilkWorm 3900. The two 64-port domains were both controlled by the same redundant CP blades, and resided in the same chassis, but had no internal data path between them. They could be used separately in redundant fabrics, or could be used together in the same fabric by connecting them with ISLs.
At the time of this writing, the SilkWorm 3900 has been superseded by the SilkWorm 4100 (p400), and the SilkWorm 12000 has been superseded, first by the SilkWorm 24000 (p403), and then the 48000 (p403). For the foreseeable future, the older platforms will continue to be supported in networks with more advanced platforms. In addition, the SilkWorm 12000 chassis can be upgraded in the field to become a SilkWorm 24000 or 48000.107

107 Of course, not all OEMs support this procedure.

SilkWorm 24000
The SilkWorm 24000 (Figure 106) was a fully-modular 10-slot enterprise-class director, and could be populated with up to eight port blades and two Control Processors (CPs). This platform first shipped in early 2004. It could be configured from 32 to 128 ports in a single domain using 16-port 2Gbit Fibre Channel blades. The platform had industry-leading performance and high availability characteristics. Each blade was hot-pluggable, as were the fans and power supplies. The chassis had redundant control processors (CPs) with redundant active-active uncongested and non-blocking switching elements, which ran Fabric OS 4.2 or higher and supported HCL/A.

Figure 106 - SilkWorm 24000 Director

The SilkWorm 24000 was an evolution of the SilkWorm 12000 design. It could use the same chassis as the 12000: the power supplies, fans, backplane, and sheet metal enclosure were all compatible. As a result, it was possible to upgrade an existing 12000 chassis to the 24000 in the field by replacing just the CP and port blades. Look between Figure 106 and Figure 105 (p439) and the similarity will be apparent. It can also support 16-port 4Gbit FC Brocade 48000 blades in some combinations with existing SilkWorm 24000 blades.
Even though the chassis were mechanically compatible, there were differences between the SilkWorm 24000 and the SilkWorm 12000.

Some of the differences were minor. For example, the 24000 chassis and blade set had an improved rail glide system that made blade insertion and extraction easier. Larger ejector levers helped by providing greater mechanical advantage. The 24000 CP blades had a blue LED to indicate which CP was active.

There were also more important differences in the underlying technology. For example, the 24000 used the Bloom-II ASIC, while the 12000 used the original Bloom chipset. (See "Bloom / Bloom-II" p505.) The overall chassis power consumption and cooling requirements were lowered by more than 60%, with the result that ongoing operational costs were reduced and MTBF increased by more than 25%. Further improvements in MTBF were achieved through component integration: fewer components means less frequent failures. Performance was improved by changing the multistage chip layout from an XY topology to a CE arrangement. (See page 511 for more information.) This allowed the 24000 to present all of its ports in a single internally-connected domain. The 12000, in contrast, presented two 64-port domains and needed external ISLs if traffic was required to flow between the domains.
The SilkWorm 24000 Fibre Channel Director provided the following features:

• 128 ports per chassis in 16-port increments
• Port blades are 1Gbit/2Gbit Fibre Channel
• Management access via Ethernet and serial ports
• High-availability features include hot-swappable FRUs for port blades, redundant power supplies and fans, and redundant CP blades
• Extensive diagnostics and monitoring for high Reliability, Availability, and Serviceability (RAS)
• Non-disruptive software upgrades (HCL/A)
• 14U rack-mountable enclosure allows up to 384 ports in a single rack
• Non-blocking architecture allows all 128 ports to operate at line rate in full-duplex mode
• Forward and backward compatibility within fabrics with all Brocade 2000-series and later switches
• SilkWorm 12000s are upgradeable to 24000s
• Small Form-Factor Pluggable (SFP) optical transceivers allow any combination of supported Short and Long Wavelength Laser media (SWL, LWL, ELWL), as well as CWDM media
• Cables, blades, and PS are serviced from the cable side and fans from the non-cable side
• Air is pulled into the non-cable-side of the chassis and exits cable-side above the port and CP blades and through the power supplies to the right


SilkWorm 3016 FC

The SilkWorm 3016 was specifically designed for the IBM eServer BladeCenter. It was powered by the Bloom-II ASIC. It had a cross-sectional bandwidth sufficient to support all ports full-speed full-duplex at once. The SilkWorm 3016 (Figure 107) had two outbound ports (i.e. facing the SAN) and 14 inbound ports (one to each blade server), all of which were non-blocking and uncongested 2Gbit (4Gbit full-duplex) Fibre Channel fabric U_Ports. This platform was introduced in 2004 by Brocade and IBM. The 3016 was available with software packages ranging from an entry-level ("Value Line") package up to the full enterprise-class Fabric OS 4.x feature set.

Figure 107 - SilkWorm 3016 Embedded Switch


SilkWorm 3014 FC

The SilkWorm 3014 was specifically designed for the Dell PowerEdge blade server. It was powered by the Bloom-II ASIC. It had a cross-sectional bandwidth sufficient to support all ports full-speed full-duplex at once. The SilkWorm 3014 (Figure 108) had four outbound ports (to the SAN) and 10 inbound ports (one to each blade server), all of which were non-blocking and uncongested 2Gbit (4Gbit full-duplex) Fibre Channel fabric U_Ports.

Figure 108 - SilkWorm 3014 Embedded Switch

This platform was introduced in late 2004 by Brocade and Dell. The 3014 was available with software packages ranging from an entry-level ("Value Line") package up to the full enterprise-class Fabric OS 4.x feature set.

Brocade Software
Brocade adds value in its products with both hardware (i.e. ASICs) and software. This subsection describes some of the most popular software features Brocade offers. It only covers features developed internally by Brocade Engineering; it does not, for example, discuss third-party management tools which use one of the supported APIs.

Brocade Software Features
Some features are basic components of the operating system and platform ASICs, such as support for nodes using N_Port (i.e. support for F_Port on a switch). These generally do not require purchasing a license key, but do add value. Some Brocade competitors (i.e. loop-switch vendors) do not offer products that support F_Port, so even though it seems like this should be a basic building block of all switches, it is worth calling it out explicitly to show its value.
Other features, such as the FC-FC Routing Service, require much higher-value enhancements to both ASIC and OS support. Routing features and more advanced fabric service options require the purchase and installation of license keys to enable them. On all platforms, the CLI108 command licenseShow can be used to determine which keys are installed. If a desired feature is missing, work with the appropriate sales channel to purchase the key, and then use the licenseAdd command to install it on the switch or router.

108 There are also equivalent GUI commands in WEBTOOLS and Fabric Manager. CLI commands are generally used for examples because all platforms include the CLI as part of the base OS, while some do not include the GUI tools.

Fabric Node (F_Port)


At the time of this writing, most Fibre Channel nodes (e.g. host and storage devices) use the N_Port topology. "Node Port" is a set of standards-defined behaviors that allow a node to access a fabric and its services most cleanly. In order to connect an N_Port to a switch, the switch must support the corresponding Fabric Port, or F_Port topology as defined in the standards. Every Brocade platform ever shipped supports F_Port, although in a few of the older platforms (e.g. the SilkWorm 2010) this feature required purchasing a separate license key. This is the preferred method for connecting nodes into fabrics.

Loop Node (FL_Port) (QL/FA)


Early in the evolution of Fibre Channel, there was debate about whether or not fabrics were necessary. Some vendors believed that FC-AL hubs and loop switches provided sufficient connectivity. The argument went something like, "How many people will ever need more than a dozen or so devices in a SAN? Nobody!" It turned out that the real answer was, "Just about everybody," so the vastly more scalable and flexible fabric switches rapidly eroded the hub market.
To accomplish this market transition gracefully, it was necessary for nodes designed for FC-AL hubs to attach to fabric switches. The Fibre Channel standards defined a switch port type to accomplish this: the FL_Port (Fabric Loop port). This allowed, for example, HBA drivers written for hubs to present NL_Ports (Node Loop ports) and plug into switch FL_Ports. Brocade developed the Flannel ASIC (p503) to address this need. Platforms using Flannel needed to be configured with loop ports at the factory, but in subsequent products with more advanced ASICs, any port could support loop nodes.109

109 Throughout the remainder of this subsection, the obsolete SilkWorm 1xx0 series will not be considered. E.g. statements about "all platforms" may actually refer to all platforms except the SilkWorm 1xx0.
There are some important variables that affect how loop devices connect to a fabric:

• Does the loop device know how to talk to the name server, and does it know how to address devices using all three bytes of the fabric PID address? (Public vs. private loop.)
• If the device uses private loop, is it an initiator or a target? Private loop initiators need more help to use fabric services, i.e. the name server.
• Is there just one loop device directly attached to a switch port (like an NL_Port HBA) or are there many loop devices on that port (like a JBOD)?

Public loop support for a directly attached NL_Port is the easiest case for a switch to handle. The switch ASIC needs to be able to support FC-AL loop primitives, which is the protocol used for loop initialization and control. All ports on all Brocade platforms today have the hardware and software to support this mode of operation as part of the base OS.

Public loop support for multiple nodes on a single switch port is slightly more complex. At the time of this writing, all platforms except the AP7420 Multiprotocol Router support this mode as part of the base OS. The major application for this is JBODs: it is not currently possible to attach a JBOD directly to the AP7420, but JBODs can coexist in a fabric or Meta SAN with that platform.
Private loop storage devices require still more advanced ASIC functionality known as "phantom logic", and corresponding software enhancements. This allows Network Address Translation (NAT) between the one-byte private loop and three-byte fabric address spaces. This needs ASIC hardware support because every frame needs to be rewritten without performance penalty. Trying to implement multi-gigabit NAT in software would not be practical. Brocade began to provide support for private loops with the Flannel ASIC.
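To make the address translation concrete, here is a toy Python illustration of the phantom-logic concept, not Brocade's actual implementation. It relies only on the standard FC addressing facts above: a fabric Port ID is 24 bits (Domain, Area, Port/AL_PA fields of one byte each), while a private loop device only understands 8-bit AL_PA addresses, so source and destination IDs must be rewritten in each frame. All specific domain, area, and AL_PA values below are hypothetical.

    # Sketch of phantom-logic NAT (illustrative only; values are hypothetical).

    def make_pid(domain: int, area: int, al_pa: int) -> int:
        """Compose a 24-bit fabric Port ID from its three one-byte fields."""
        return (domain << 16) | (area << 8) | al_pa

    # Table pairing a "phantom" AL_PA (what the private device sees on its
    # loop) with the real 24-bit PID of an off-loop fabric device.
    phantom_to_pid = {0x51: make_pid(0x02, 0x0A, 0xEF)}
    pid_to_phantom = {v: k for k, v in phantom_to_pid.items()}

    LOOP_DOMAIN, LOOP_AREA = 0x01, 0x04  # address prefix of the loop's switch port

    def outbound(src_al_pa: int, dst_al_pa: int):
        """Private device -> fabric: expand 1-byte addresses to 3-byte PIDs."""
        return (make_pid(LOOP_DOMAIN, LOOP_AREA, src_al_pa),
                phantom_to_pid[dst_al_pa])

    def inbound(s_id: int, d_id: int):
        """Fabric -> private device: collapse 3-byte PIDs to 1-byte loop addresses."""
        return pid_to_phantom[s_id], d_id & 0xFF

    s_id, d_id = outbound(0x02, 0x51)
    print(hex(s_id), hex(d_id))  # 0x10402 0x20aef

The point of the sketch is why this must be done in ASIC hardware: the lookup and header rewrite happen on every single frame, in both directions, at multi-gigabit rates.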
Private loop technology has been declining rapidly, so Brocade has not prioritized phantom logic for future platforms. All ASICs through Bloom-II (p505) support this, but subsequent ASICs like FiGeRo (p510) and Condor (p506) do not. Platforms like the Multiprotocol Router and the Brocade 4100 cannot accept direct private storage attachment, but can co-exist seamlessly in networks with private storage attached to Loom, Bloom, and Bloom-II switches. Switches with private storage support include it as part of the base OS.

Private loop initiators (hosts) are the hardest case to solve. Not only do they require loop primitives and phantom logic, but they also require much more advanced fabric services enhancements.
An initiator normally queries the fabric name server for targets, and then sends IO to them. With public initiators talking to private targets, a switch can notice the IO from the initiator and automatically set up phantom logic NAT entries as needed. Private initiators do not know how to talk to the name server; they learn about available targets by probing their loop. They cannot send IO to a target until after NAT has been set up, so the automatic learning mechanism does not work.
The Quick Loop / Fabric Assist optionally licensed feature set is designed to address this need. Users explicitly define which devices a private host needs to access using zoning, and the switch creates the required NAT entries on that basis. QL/FA is supported as an optionally licensed feature on the SilkWorm 2xx0 series and the SilkWorm 3200/3800 switches, i.e. all Fabric OS 2.x and 3.x platforms. QL/FA only applies to private initiators, not to any other usage case, and private initiators are the most rapidly declining segment of the SAN market. As a result, Brocade has not prioritized porting the feature to 4.x or beyond, except to support QL/FA on 2.x/3.x switches in the same fabric as 4.x switches. At the time of this writing, even that level of QL/FA support is essentially obsolete.
Switch Interconnection (E_Port)
The E_Port (Expansion Port) protocol allows switches to be interconnected to form a larger fabric: a single region of connectivity built from multiple discrete switching components.110
This feature allows SAN solutions to be built using a "pay as you grow" approach, adding switches to a fabric as needed. It also allows much more flexible network designs, including support for geographical separation of components. Without this feature, the maximum scalability of a connectivity model would be limited to the number of ports on a single switch, and the maximum geographical radius of a network would be the distance supported by a node connected to that switch.
Today, the ability to network switches together to form a fabric seems commonplace, but when Brocade started selling switches for production use in 1997, it was a key differentiator. Most competitors could not do this at all, and the few that had the feature had many configuration constraints. Brocade was not just a pioneer in this space; Brocade was the pioneer. This is reflected in the fact that FSPF111 was authored and given to the standards bodies by Brocade. Without this and other Brocade-authored protocols, it would not be possible, much less commonplace, to form multi-switch fabrics today.
Virtual Channels
A unique feature available in every Brocade 2Gbit and 4Gbit fabric switch, Brocade Virtual Channel (VC) technology represents an important breakthrough in the design of large SANs.112 To ensure reliable ISL communications, VC technology logically partitions bandwidth within each ISL into many different virtual channels, as shown in Figure 109, and prioritizes traffic to optimize performance and prevent head of line blocking.
110 This also requires the interaction of other fabric services, such as the name server and zoning database processes, but Brocade keys the feature off of the E_Port.
111 The protocol used by all vendors to determine topology and path selection.
112 Actually, even the SilkWorm 1xxx series of switches had a form of VC support, but it was quite different and not particularly relevant to SAN design today. But it is interesting to note that Brocade has already gone through four generations of VC development: it's a well-baked feature.
Fabric Operating System automatically manages VC configuration, eliminating the need to manually tune links for performance. This technology also works in conjunction with trunking to improve the efficiency of switch-to-switch communications and simplify fabric design.

Virtual Channels Mapped to ISLs

Switch
E_Port

VC0

Physical Inter-Switch
Link (ISL)

VCs provide separate queues for


different traffic streams. This
prevents head of line blocking
(HoLB), and allows QoS between
different classes of traffic.

Switch
E_Port

(2Gb/ Sec Switches)

VC7
Multiple logical Virtual Channels (VCs) exist
within a single physical ISL or trunk group.
Figure 109 - VCs Partition ISLs into Logical Sub-Channels

In 2Gbit Brocade products, there were a to tal of 8


VCs (0-7) assigned to any li nk. This could be internal
links, ISLs, or trunk groups. Each VC had its own independent flow control mechanisms and buffering scheme.
In Brocade's 4Gbit products, the Virtual Channel infrastructure has been greatly enhanced, and some of the automatic assignment mechanisms have been improved. There are now 17 VCs assigned to any given internal link: one for class F traffic and sixteen for data. Each data VC now has 8 sub-lists or sub-Virtual Channels; each of those has its own credit mechanism and independent flow control. SID/DID pairs are assigned in a round-robin fashion across all the VCs, but with these new enhancements, a better distribution is made. Of course, when connecting 4Gbit switches together with 2Gbit switches, the ISLs and trunk groups still use 8 VCs. This is done to avoid potential backwards compatibility issues.
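As a rough illustration of the assignment described above, the following Python sketch spreads SID/DID pairs across data VCs in round-robin fashion. The VC numbering and data structures are assumptions for illustration; the real assignment is performed by the switch hardware and Fabric OS.

    from itertools import cycle

    # 4Gbit internal links: one class F VC plus sixteen data VCs (per the text).
    DATA_VCS = list(range(1, 17))

    def assign_vcs(flows, data_vcs):
        # Round-robin each (SID, DID) pair onto the next data VC.
        vc_cycle = cycle(data_vcs)
        return {flow: next(vc_cycle) for flow in flows}

    flows = [(0x010100, 0x020200), (0x010200, 0x020300), (0x010300, 0x020400)]
    print(assign_vcs(flows, DATA_VCS))

Each flow lands in its own VC queue with independent credits, so one congested flow cannot head-of-line block the others.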
In the near future, Brocade will be releasing a QoS feature which allows 4Gbit switches to use the increased VC capabilities to prioritize some flows above others in congested networks. As a practical matter, this feature is expected to apply almost exclusively to long distance connections in DR or BC solutions, since, for local-distance ISLs and IFLs, it is generally better to avoid congestion in the first place than to manage which devices are most harmed by congestion.
Buffer-to-Buffer Credits
Buffer-to-buffer (BB) credits are used by switch ports to determine how many frames can be sent to the recipient port, thus preventing a source device from sending more frames than can be received. The BB credit model is the standard method of controlling the flow of traffic within a Fibre Channel fabric.
Like VCs, BB credits are handled automatically by the Fabric Operating System in most cases. For extremely long distance links, it may be desirable to manually increase the number of credits on a port to maximize performance. (This may require an Extended Fabrics license.)
In the context of host or storage connections to a switch, the number of BB credits on a link will be negotiated between the device and the switch at initialization time. For ISL connections, each Virtual Channel will receive its own share of BB credits. In this case, credits are handled the same way whether the port is part of a trunk group or operating independently.
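To see why long links need extra credits, consider a back-of-the-envelope calculation. This sketch uses assumed round numbers (roughly 5 microseconds per km of propagation in glass, 2112-byte frame payloads); it is an estimate, not a vendor specification.

    # Estimate the BB credits needed to keep a long-distance link full.
    def credits_needed(distance_km, rate_gbps, frame_bytes=2112):
        round_trip_us = 2 * distance_km * 5.0          # credit (R_RDY) must return
        frame_time_us = frame_bytes * 8 / (rate_gbps * 1000.0)
        return round_trip_us / frame_time_us           # frames in flight

    for rate in (1, 2, 4):
        print(f"{rate}Gbit, 100 km: ~{credits_needed(100, rate):.0f} credits")

At 2 Gbit/sec this works out to roughly one credit per km of link distance (about 118 credits for 100 km), which is why Extended Fabrics assigns long-distance E_Ports a much larger credit pool than a normal port receives.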
This topic is discussed in more detail starting on page 346.
Access Gateway
Access Gateway uses the N_Port ID Virtualization (NPIV) standard to present blade server FC connections as logical nodes to fabrics. This eliminates entire categories of traditional heterogeneous switch-to-switch interoperability challenges. Attaching through NPIV-enabled switches and directors, Access Gateway seamlessly connects server blades to Brocade, classic-McDATA, or even to other vendors' SAN fabrics.
Traditionally, when blade server chassis have been connected to SANs, each enclosure would add one or two more switch domains to the fabric, which had a potentially disastrous effect on scalability. Increasing the number of blade enclosures also meant additional switch domains to manage, increasing the day-to-day SAN management burden. These additional domains created complexity and could sometimes disrupt fabric operations during the deployment process. Finally, fabrics with large numbers of switch domains created firmware version compatibility management challenges: sometimes it was impossible to find a firmware version which was supported by all devices in the fabric.
To address these challenges, Access Gateway presents blade server NPIV connections rather than switch domains to the fabric. This means that Access Gateway can support a much larger fabric, and that switch firmware on the Access Gateway does not interact with the other switches in the fabric as a switch. Rather, it interacts as a node, which greatly reduces firmware dependencies. Unlike FC pass-through solutions, it can do all of this without substantially increasing the number of switch ports required.
To enhance availability, Access Gateway can automatically and dynamically fail over the preferred I/O connectivity path in case one or more fabric connections fail. This approach helps ensure that I/O operations finish to completion, even during link failures. Moreover, Access Gateway can automatically fail back to the preferred fabric link after the connection is restored, helping to maximize bandwidth utilization.

Value Line
The Value Line software license packages reduce the cost of acquiring and deploying an entry-level SAN, while allowing software-key upgrades to full enterprise-class functionality. Designed for small and medium sized organizations, the Value Line integrates innovative hardware and software features that make it easy to deploy, manage, and integrate into a wide range of IT environments. These powerful yet flexible capabilities enable organizations to start small and grow their storage networks in a scalable, non-disruptive, and efficient manner. This is especially beneficial for organizations that need to upgrade their existing SAN environment with minimal disruption. In addition, they simplify administration through embedded Brocade WEBTOOLS software.
The main thing that SAN designers need to be aware of is that a Value Line switch might not have full fabric capabilities. In exchange for a substantially reduced acquisition cost, the buyer of a Value Line switch would give up features such as fabric scalability (number of domains supported) or the number of E_Ports allowed. When deploying a Value Line switch into a larger solution, it might therefore be necessary to upgrade its license key to a full fabric key.
Virtual Fabrics
Virtual Fabrics allows the partitioning of one physical fabric into multiple logical fabrics that can be managed by separate Admin Domain administrators. Virtual Fabrics are characterized by hierarchical management, granular and flexible security, and fast and easy reconfiguration to adapt to new infrastructure requirements. They allow IT administrators to manage separate corporate functions separately, use different permission levels for SAN administrators, provide storage for teams in remote offices without compromising local SAN security, and increase levels of data and fault isolation without increasing SAN cost and complexity. Once Fabric OS 5.2.0 or later is installed in the SAN, Virtual Fabrics can be implemented on the fly with no physical topology changes and no disruption.
The Administrative Domains feature is the key enabler for Virtual Fabrics technology. Admin Domains create partitions in the fabric. Admin Domain membership allows device resources in a fabric to be grouped together into separately managed logical groups. For example, a SAN administrator might have the Admin role within one or more Admin Domains, but be restricted to the Zone Admin role for other Admin Domains.
Although they are part of the same physical fabric, Virtual Fabrics are separate logical entities because they are isolated from each other via several mechanisms, such as:
Data isolation: Although data can pass from one Virtual Fabric to another using device sharing, and links can be shared among multiple Virtual Fabrics, no data can be unintentionally transferred, even when Virtual Fabrics are not zoned.
Control isolation: Within Virtual Fabrics, fabric services are independent and are secured from unwanted interaction with other Virtual Fabric services. This includes zoning, RSCNs, and so on.
Management isolation: Switches in a Virtual Fabric provide independent management partitions. If a switch is a member of more than one Virtual Fabric, it has multiple, independent management entities. Administrators are authenticated to manage one or more Virtual Fabrics, but they cannot access management objects in other, unauthorized Virtual Fabrics.
Fault isolation: Data, control, or management failures in one Virtual Fabric will not impact any other Virtual Fabric's services.
Admin Domain administrators can manage one or more Admin Domains, while Virtual Fabric administrators have administrative permissions on all Admin Domains. Separate Admin Domains can be created for different operating systems (FICON, Z-Series, and open systems FCP, for example).
Devices can easily be shared among different Admin Domains without any special routing requirements. Admin Domain administrators can configure and manage their own zones; they can configure all rights and devices as long as they have the Admin role for that particular Admin Domain. The Admin Domain feature is backwards compatible with the millions of Brocade SAN ports already deployed, and no new hardware is required.
Implementing Virtual Fabrics is straightforward, and fits into existing SAN management models. The management and best practices used today in a pre-Fabric OS 5.2.0 physical fabric with zoning can be implemented in the same way in a Fabric OS 5.2.0 fabric with Admin Domains and zoning.
FCIP FastWrite and Tape Pipelining
FCIP is a method of transparently tunneling FC ISLs between two geographically distant locations using IP as a transport. Storage is often sensitive to latency, and throughput is a great concern as well. Unfortunately, IP networks tend to have high latency and low throughput compared to native FC solutions. Tape Pipelining and FastWrite are features available on the Brocade 7500 router and FR4-18i blade that improve throughput and mitigate the negative effects of IP-related delay.
Tape Pipelining refers to writing to tape over a Wide Area Network (WAN) connection. FastWrite refers to Remote Data Replication (RDR) between two storage subsystems. Tape is serial in nature, meaning that data is steadily streamed byte by byte, one file at a time, onto the tape from the perspective of the host writing the file. Disk data tends to be bursty and random in nature. Disk data can be written anywhere on the disk at any time. Because of these differences, tape and disk are handled differently by extension acceleration technologies.
Tape Pipelining accelerates the transport of streaming data by maintaining optimal utilization of the IP WAN. Tape traffic without an accelerator mechanism can result in periods of idle link time, becoming more inefficient as link delay increases.
When a host sends a write command, a Brocade 7500/FR4-18i sitting in the data path intercepts the command and responds with a transfer ready. The router buffers the incoming data and starts sending that data over the WAN. The data is sent as fast as possible, limited only by the bandwidth of the link or the committed rate limit. On the heels of the write command is the write data that was enabled by the proxy target's transfer-ready reply. After the remote target receives the command, it responds with a transfer ready. The remote router intercepts that transfer ready, acts as a proxy initiator, and starts forwarding the data arriving over the WAN.
The host is on a high-speed FC network, and most often will have completed sending the data to the local router by this time. The local router returns an affirmative response. While the buffers are still transmitting data over the link, the host sends the next write command, and the process is repeated on the host side until the host is ready to write a filemark. This process maintains a balance of data in the remote router's buffers, permitting a constant stream of data to arrive at the tape device.
On the target side, the transfer ready indicates the allowable amount of data that can be received, which is generally less than what the host sent. The transfer ready on the host side, from the proxy target, is for the entire quantity of data advertised in the write command. The transfer ready the proxy target responds with for the entire amount of data does not have to be the same as the transfer ready the tape device responds with, which may be for a smaller amount of data, that is, the amount that it was capable of accepting at that time. The proxy initiator parses out the data in sizes acceptable to the target per the transfer ready from the tape device. This may result in additional write commands and transfer readies on the tape side compared to the host side. Buffering on the remote side helps to facilitate this process.
The command to write the filemark is not intercepted by the routers and passes unfettered from end to end. When the filemark is complete, the target responds with the status. A status of OK indicates to the host that it can move on.
FastWrite works in a somewhat different manner. FastWrite is an algorithm that reduces the number of round trips required to complete a SCSI write operation. FastWrite can maintain throughput levels over links that have significant latency. The Remote Data Replication (RDR) application still experiences latency, but reduced throughput due to that latency is minimized.
There are two steps to a SCSI write:
1. The write command is sent across the WAN to the target. This is essentially asking permission of the storage array to send data. The target responds with an acceptance (FCP_XFR_RDY).
2. The initiator waits until it receives that response from the target before starting the second step, which is sending the actual data (FCP_DATA_OUT).
With the FastWrite algorithm, the local SAN router intercepts the originating write command and responds immediately, requesting the initiator to send the entire data set. This happens in a couple of microseconds. The initiator starts to send the data, which is then buffered by the router. The buffer space in the router includes enough to keep the pipe full, plus additional memory to compensate for links with up to 1% packet loss.113 The Brocade 7500/FR4-18i has a continuous supply of data in its buffers that it can use to completely fill the WAN, driving optimized throughput.
The Brocade 7500/FR4-18i sends data across the link until the committed bandwidth has been consumed. The receiving router acts on behalf of the initiator and opens a write exchange with the target over the local fabric or direct connection. Often, this technology allows a write to complete in a single round trip, speeding up the process considerably and mitigating link latency by 50%.
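As a rough model of why removing a round trip matters, here is a minimal Python sketch; the RTT, link rate, and write size are assumed values for illustration only.

    # A normal SCSI write over a WAN needs two round trips (command/XFR_RDY,
    # then data/status). FastWrite's local proxy answers XFR_RDY immediately,
    # leaving roughly one WAN round trip per write.
    def write_time_ms(rtt_ms, data_mb, wan_mbps, round_trips):
        transfer_ms = data_mb * 8 / wan_mbps * 1000.0
        return round_trips * rtt_ms + transfer_ms

    rtt = 50       # assumed WAN round-trip time in ms
    data = 0.064   # one 64 KB write, expressed in MB
    print(f"without FastWrite: {write_time_ms(rtt, data, 100, 2):.1f} ms")
    print(f"with FastWrite:    {write_time_ms(rtt, data, 100, 1):.1f} ms")

Cutting the round trips from two to one roughly halves the per-write latency, which is the 50% mitigation described above.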
113 If a link has more than 1% packet loss, there are serious network issues that must be resolved prior to a successful implementation of FastWrite.
There is no possibility of undetected data corruption with FastWrite because the final response (FCP_RSP) is never spoofed, intercepted, or altered in any way. It is this final response that the receiving device sends to indicate that the entire data set has been successfully received and committed. The local router does not generate the final response in an effort to expedite the process, nor does it need to. If any single FC frame were to be corrupted or lost along the way, the target would detect the condition and not send the final response. If the final response is not received within a certain amount of time, the write sequence times out (REC_TOV) and is retransmitted. In any case, the host initiator knows that the write was unsuccessful and recovers accordingly.

FC FastWrite
For native FC links or FC over xWDM, delay and congestion are typically one or more orders of magnitude better than with FCIP. However, the speed of light through glass still creates noticeable latency over long distance connections. As a result, it is possible for FC links over MAN/WAN distances to benefit from the same algorithms used in FCIP FastWrite. Brocade has added support for this feature to its 4Gbit router portfolio.
For example, it is possible to deploy FR4-18i blades into chassis at each side of a DR or BC solution, and attach storage ports directly to these blades. (This is illustrated starting on page 364.) After configuring appropriate zoning policies, any replication or mirroring traffic between the storage ports will be accelerated using a similar mechanism to the one described in the previous section. This can sometimes result in massive increases in throughput, with the exact improvement depending on the distance, congestion of the network, block size, and the number of devices sharing the inter-site links.
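The underlying arithmetic is simple enough to sketch. Assuming a refractive index of about 1.5, light in fiber covers roughly 200 km per millisecond:

    # Propagation delay through glass (illustrative round numbers).
    GLASS_KM_PER_MS = 300000 / 1000 / 1.5   # ~200 km per millisecond

    def rtt_ms(distance_km):
        return 2 * distance_km / GLASS_KM_PER_MS

    for km in (100, 500, 1000):
        print(f"{km} km link: ~{rtt_ms(km):.1f} ms round trip")

At 100 km, each round trip costs about a millisecond, so a two-round-trip write adds about 2 ms per IO; FC FastWrite cuts this roughly in half, exactly as in the FCIP case.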
Hot Code Load and Activation
Hot code load and activation supports the stringent availability requirements of mission-critical environments by enabling firmware upgrades to be downloaded and activated without disrupting other operations or data traffic in the SAN. The switch continues to route frames and provide full fabric services while new firmware is loaded onto its non-volatile storage. Once the download is complete, the new image is activated. During the activation process, the switch still continues to route frames, without losing even a single bit of data traffic.

Advanced ISL Trunking (Frame-Level)
Brocade ISL Trunking is ideal for optimizing performance and simplifying the management of a multi-switch SAN fabric containing Brocade switches. When two, three, or four adjacent ISLs are used to connect two Brocade 2Gbit FC switches, the switches automatically group the ISLs into a single logical ISL, or trunk. With 4Gbit switches, it is possible to trunk up to eight adjacent links. Traffic will be balanced across these links, while still guaranteeing in-order and on-time delivery.
ISL Trunking is designed to significantly reduce traffic congestion in storage networks. When up to eight 4Gbit ISLs are combined into a single logical ISL, the aggregated link has a total bandwidth of 32 Gbit/sec, which can support a large number of simultaneous full-speed conversations between devices.
To balance the workload across all of the ISLs in the trunk, each incoming frame is sent across the first available physical ISL in the trunk. As a result, transient workload peaks for one system or application are much less likely to impact the performance of other parts of the SAN fabric. Because the full bandwidth of each physical link is available, bandwidth is not wasted by inefficient traffic routing. As a result, the entire fabric is utilized more efficiently.
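A minimal sketch of the per-frame selection logic (a hypothetical model; the real decision is made in the ASIC while preserving frame ordering):

    # Frame-level trunking: send each frame on the first member with credit.
    def pick_link(trunk_credits):
        # trunk_credits: available transmit credits per trunk member.
        for link, credits in enumerate(trunk_credits):
            if credits > 0:
                return link
        return None  # no credit anywhere: the frame waits (flow control)

    print(pick_link([0, 3, 2, 1]))  # -> 1: the first member with free credit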

Dynamic Path Selection (Exchange-Level)
Dynamic Path Selection (DPS) may also be referred to as exchange-level trunking. Like Advanced ISL Trunking, DPS balances traffic across multiple ISLs. Unlike trunking, DPS does not require that the ISLs be adjacent. It uses the industry standard Fabric Shortest Path First (FSPF) algorithm to select the most efficient route for transferring data in multi-switch environments. Any paths which are deemed by FSPF to have equal cost will be evenly balanced by the DPS software and hardware. This is a particular advantage in core/edge networks with multiple core switches, since DPS can distribute load between different cores, while Advanced ISL Trunking cannot do so.
DPS matches or outperforms all similar features from any vendor except for Brocade Advanced ISL Trunking. However, because DPS can be combined with frame-level trunking, organizations can achieve both maximum performance and availability.

Zoning
Brocade Zoning is a feature of all switch models. Using zoning, organizations can automatically or dynamically arrange fabric-connected devices into logical groups (zones) across the physical configuration of the fabric. It is functionally similar to VLANs from the IP networking world, though considerably more advanced in many ways. In fact, zones could be thought of as a combination of VLAN controls plus firewall-like ACLs.
Providing secure access control over fabric resources, Zoning prevents unauthorized data access, simplifies heterogeneous storage management, segregates storage traffic, maximizes storage capacity, and reduces provisioning time.
The need for this kind of access control relates to the roots of SAN technology: the SCSI DAS model. Storage devices directly attached to hosts (DAS) have no need for network-based access control features: access by other hosts is precluded by the limitations of the DAS architecture. In contrast, SANs allow a potentially large number of hosts to access all storage in the network, not just the systems that they are intended to access. If each host is allowed to access every storage array, the potential impact of user error, virus infection, or hacker attacks could be immense. To prevent unintended access, it is necessary to provide access control in the network and/or the storage devices themselves.
There are many mechanisms for solving the SAN-based access control problem. All of them have some form of management interface that allows the creation of an access control policy, and some mechanism for enforcing that policy. Brocade switches and routers use a set of methods collectively referred to as Brocade Advanced Zoning. Brocade Advanced Zoning requires a license key on all platforms, but all currently shipping platforms bundle this key with the base OS.
Using this key allows the creation of many zones within a fabric, each of which may be comprised of many zone objects, which are storage or host PIDs or WWNs. These objects can belong to zero, one, or many zones. This allows the creation of overlapping zones. Every switch in the fabric then enforces access control for its attached nodes. Zone objects are grouped into zones, and zones are grouped into zone configurations. A fabric can have any number of zone configurations. This provides a comprehensive and secure method for defining exactly which devices should or should not be allowed to communicate.
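The hierarchy described above (zone objects grouped into zones, zones grouped into configurations) can be modeled in a few lines of Python. This is an illustrative data model only, with made-up WWNs, not Brocade's implementation:

    # Access is allowed only if some zone in the active config holds both nodes.
    zones = {
        "oracle_zone": {"10:00:00:00:c9:aa:bb:01", "50:06:0e:80:00:11:22:33"},
        "backup_zone": {"10:00:00:00:c9:aa:bb:02", "50:06:0e:80:00:44:55:66"},
    }
    configs = {"prod_cfg": ["oracle_zone", "backup_zone"]}
    active_config = "prod_cfg"

    def can_communicate(wwn_a, wwn_b):
        return any(wwn_a in zones[z] and wwn_b in zones[z]
                   for z in configs[active_config])

    print(can_communicate("10:00:00:00:c9:aa:bb:01", "50:06:0e:80:00:11:22:33"))  # True
    print(can_communicate("10:00:00:00:c9:aa:bb:01", "50:06:0e:80:00:44:55:66"))  # False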

Fabric OS CLI
All Brocade switches provide a comprehensive Command Line Interface (CLI) which enables manual "lowest common denominator" control, as well as task automation through scripting mechanisms, via the switch serial port or telnet interfaces.

WEBTOOLS
Brocade WEBTOOLS is a web-browser-based Graphical User Interface (GUI) for element and network management of Brocade switches. WEBTOOLS uses a set of processes (e.g. httpd) and web pages that run on all Fabric OS switches in a fabric. Once a switch or router has an IP address configured, it is possible to manage most functions simply by pointing a Java-enabled web browser at that address.
This product simplifies management by enabling administrators to configure, monitor, and manage switch and fabric parameters from a single online access point. Organizations may configure and administer individual ports or switches as well as small SAN fabrics. User name and password login procedures protect against unauthorized actions by limiting access to configuration features.
Web Tools provides an administrative control point for Brocade Advanced Fabric Services, including Advanced Zoning, ISL Trunking, Advanced Performance Monitoring, Fabric Watch, and Fabric Manager integration. For instance, administrators can utilize timesaving zoning wizards to step them through the zoning process.
While this is technically a licensed feature, like zoning, WEBTOOLS is included with all currently shipping Brocade platforms.
Fabric Manager
Fabric Manager is a flexible and powerful tool that provides rapid access to critical SAN information and configuration functions. It allows administrators to efficiently configure, monitor, provision, and perform other daily management tasks for multiple fabrics or Meta SANs from a single location. Through this single-point SAN management architecture, Fabric Manager lowers the overall cost of SAN ownership. It is tightly integrated with other Brocade SAN management products, such as Web Tools and Fabric Watch, and enables third-party product integration through built-in menu functions and the Brocade SMI Agent. Organizations can use Fabric Manager in conjunction with other leading SAN and storage resource management applications as the drill-down element manager for single or multiple Brocade fabrics, or use Fabric Manager as the primary SAN management interface.

SAN Health
SAN Health is a powerful tool that helps optimize a SAN and track its components in an automated fashion. The tool greatly increases SAN manager productivity, since it automates many mandatory recurring SAN management tasks. It simplifies the process of data collection for audits and change tracking, uses a client/server "expert systems" approach to identify potential issues, and can be run regularly to monitor fabrics over time. This is especially useful to SAN designers in three ways:
• When designing changes to existing environments, the tool can help to audit the target environment before finalizing a design
• In any design context, it can help to document a SAN after implementation
• It can be specified in the SAN project plan as an ongoing proactive maintenance and change-control tool to satisfy manageability requirements
The tool has two software components: a data capture application and a back-end report processing engine. SAN managers may run the data capture application as often as needed. After SAN Health finishes capturing diagnostic data, the back-end reporting process automatically generates a point-in-time snapshot of the SAN, including a Visio topology diagram and a detailed report on the SAN configuration. This report contains summary information about the entire SAN as well as specific details about fabrics, switches, and individual ports. Other useful items in the report include alerts, historical performance graphs, and any recommended changes based on continually updated best practices.
The SAN Health program is powerful and flexible. For example, it is possible to configure many different fabrics in a single audit set, and schedule them to run automatically on a recurring basis. These audits can run in unattended mode, with automatic e-mailing of captured data to a designated recipient.
The tool also has enhanced change-tracking features to show how a fabric has evolved over time, or to facilitate troubleshooting if something goes wrong. This can be an invaluable addition to the change-tracking process, both for post-mortem analysis and for proactive management. For instance, SAN Health can track traffic pattern changes in weekly or monthly increments. This can help to identify looming performance problems proactively, and take corrective action before end-users are affected.
SAN Health is currently available to SAN end-users and to Brocade OEM and reseller channel partners. It can be used with Brocade install-base fabrics, and with fabrics using equipment from selected other infrastructure vendors as well. The tool is available for download on the public Brocade web site (www.brocade.com/sanhealth). For partners, Brocade also provides a co-branded version.

Fabric Watch
Brocade Fabric Watch provides advanced monitoring capabilities for Brocade products. Fabric Watch enables real-time proactive awareness of the health, performance, and security of each switch, and automatically alerts network managers to problems in order to avoid costly failures. Monitoring fabric-wide events, ports, and environmental parameters permits early fault detection and isolation as well as performance measurement.
With Fabric Watch, SAN administrators can select custom fabric elements and alert thresholds, or they can choose from a selection of preconfigured settings for gathering valuable health, performance, and security metrics. In addition, it is easy to integrate Fabric Watch with enterprise systems management solutions.
By implementing Fabric Watch, storage and network managers can rapidly improve SAN availability and performance without installing new software or system administration tools.

Advanced Performance Monitoring
Brocade Advanced Performance Monitoring is a comprehensive tool for monitoring the performance of networked storage resources. It enables administrators to monitor both transmit and receive traffic from source devices to destination devices, enabling end-to-end visibility into the fabric. Using this tool, administrators can quickly identify bottlenecks and optimize fabric configuration resources to compensate.
Extended Fabrics
Extended Fabrics software enables native Fibre Channel ISLs to span extremely long distances. Extended Fabrics optimizes switch buffering (BB credits) to ensure the highest possible performance on these long-distance ISLs. When Extended Fabrics is installed on gateway switches, the ISLs (E_Ports) are configured with a large pool of buffer credits. The enhanced switch buffers help ensure that data transfer can occur at full or near-full bandwidth to efficiently utilize the connection over the extended links. As a result, organizations can use Extended Fabrics to implement strategic applications such as wide area data replication, high-speed remote backup, cost-effective remote storage centralization, and business continuance strategies.

Remote Switch
Remote Switch is a now largely obsolete feature which enabled fabric connectivity between two switches over long distances by supporting external gateways to encapsulate Fibre Channel over ATM. Connecting SAN islands over a Fibre Channel-to-ATM device enabled organizations to extend their solutions over a WAN. This type of configuration could be used for solutions such as remote disk mirroring and remote tape backup. While ATM extension may still be used, this method has largely been superseded by FC over SONET/SDH and by native FC links using Extended Fabrics. For all such configurations, Brocade now supports an Open E_Port mode to support Gateway/Bridge devices. Customers may simply use the portCfgISLMode CLI command, which is now part of the base OS: there is no need for a license anymore.

FICON / CUP
The Brocade directors and selected switches support the FICON protocol for mainframe environments, enabling organizations to utilize a single platform for both open systems and mainframe storage networks. FICON-certified Brocade platforms support the ability to run both open systems Fibre Channel and FICON traffic on a port-by-port basis within a single platform. The Brocade FICON implementation also supports cascaded FICON fabrics at 1 and 2 Gbit/sec FICON speeds.
With Fabric OS version 4.4, Brocade fully supports CUP in-band management functions, which enable mainframe applications to perform configuration, management, monitoring, and error handling for Brocade directors and switches. CUP support also enables advanced fabric statistics reporting to facilitate more efficient network performance tuning.

Fibre Channel Routing
The Brocade FC-FC Routing Service provides connectivity between two or more fabrics without merging them. Any platform it is running on can be referred to as an FC router, or FCR for short. At the time of this writing, the feature is available on the Brocade AP7420, the Brocade 7500, and the FR4-18i blade.
The service allows the creation of Logical Storage Area Networks, or LSANs, which provide connectivity that can span fabrics. It is most useful to think of an LSAN in terms of zoning: an LSAN is a zone that spans fabrics. The fact that an FCR can connect autonomous fabrics without merging them has advantages in terms of change management, network management, scalability, reliability, availability, and serviceability, to name just a few areas.
The customer needs for this product are similar to those that brought first routers and then Layer 3 switches to the data networking world. An FC router is to an FC fabric as an IP router is to an Ethernet subnet. Early efforts were made to create large, flat Ethernet LANs without routers. These efforts hit a ceiling beyond which they could not grow effectively. In many cases, Ethernet broadcast storms would create reliability issues, or it would become impossible to resolve dependencies for change control. Perhaps merging Ethernet networks that grew independently would involve too much effort and risk. An analogous situation exists today with flat Fibre Channel fabrics. Using an FCR with LSANs solves that problem, while other proposed solutions such as VSANs just move the problem around in a shell-game effort to confuse users.
For more information about this feature, see the book Multiprotocol Routing for SANs by Josh Judd.

FCIP
Fibre Channel over IP (Internet standard RFC 3821) is one of several mechanisms available to extend FC SANs across long distances. FCIP transparently tunnels FC ISLs across an intermediate IP network, making the entire IP MAN or WAN appear to be an ISL from the viewpoint of the fabric. This is available as a fully-integrated feature on the Brocade AP7420 Multiprotocol Router, the Brocade 7500 router, and the FR4-18i blade.
It is important to note that FCIP is neither the only nor always the best approach to distance extension. The major advantages of FCIP are cost and the ubiquitous availability of IP MAN and WAN services. However, for users interested in reliability and performance, it is theoretically impossible for FCIP (or any other IP SAN technology, for that matter) to match native FC solutions. Generally speaking, SAN designers prefer distance extension solutions in the following order:
1. Native FC over dark fiber or xWDM
2. FC over SONET/SDH
3. FC over ATM
4. FC over IP

Many of the shortcomings of FCIP can be mitigated, though not eliminated, by using FastWrite and/or Tape Pipelining (p. 456). In fact, before the advent of FC FastWrite, it was sometimes even possible to achieve better performance on a 1Gbit FCIP link than on a 4Gbit FC link. FCIP should therefore almost always be used in combination with some form of write acceleration technology.
For more information about this feature, see the book Multiprotocol Routing for SANs by Josh Judd.

Secure Fabric OS
As organizations interconnect larger and larger SANs over longer distances and through existing networks, they have an ever greater need to effectively manage their security and policy requirements. To help these organizations improve security, Secure Fabric OS, a comprehensive security solution for Brocade-based SAN fabrics, provided policy-based security protection for more predictable change management, assured configuration integrity, and reduced risk of downtime. Secure Fabric OS protected the network by using the strongest enterprise-class security methods available. With its flexible design, Secure Fabric OS allowed organizations to customize SAN security in order to meet specific policy requirements. All Secure Fabric OS features have now been made available in the base OS for free as of Fabric OS 5.3.0. It is recommended that customers migrate to that solution, as it provides additional features such as DH-CHAP to end devices (HBAs) and is also more scalable.
Calculating ROI
This section provides guidance on ways to calculate the Return on Investment (ROI) for a SAN project. For a more comprehensive evaluation of the benefits of a SAN, it is better to perform a Total Cost of Ownership (TCO) analysis. However, TCO is harder to calculate, and ROI analysis may be sufficient in many cases, so this is usually where a designer would start.
In fact, even doing a detailed ROI analysis is not needed in most cases. This should be done only if the stakeholders responsible for signing off on the SAN budget have asked for it. For example, if the SAN is being deployed in order to meet a legal requirement for a disaster recovery solution, the implementation is mandatory, so analyzing the financial ROI could be meaningless. After all, if the legal requirement is not met, it could cause an organization-wide disaster, so most stakeholders would agree that the deployment is needed regardless of the financial ROI analysis. Many organizations also put in a SAN based on a total cost of ownership justification, which may not require ROI justification.
For installations which do require it, the ROI analysis method below will provide a useful guideline for how to approach the project. It is not intended to be viewed as a hard and fast procedure set, indicating the only "right way" of calculating ROI, but simply as a starting point. In many organizations, there is already an established methodology for ROI calculations, in which case the following guidelines can be mapped into the existing processes.
Some of the sources of SAN ROI include:
• Additional revenue or productivity gains generated during backups that, prior to the SAN, required taking systems off line.
• Similar gains generated through higher average system or application uptime.
• Lower IT management costs and increased productivity generated through the centralization of resources.
• Significantly shorter process time for adding and reconfiguring storage.
• Reduced capital spending through improved utilization of space on shared storage.
To perform an ROI analysis for a SAN, the following steps can be used:
• Identify the servers and applications which will participate in the SAN. (This should already have been done previously in the planning process. Refer to Chapter 5, starting on page 149.)
• Select ROI scenarios. These are the primary functions that the SAN is expected to serve, such as storage consolidation or backups.
• Determine the gross business-oriented benefits of each scenario. E.g., how much money will the company save by purchasing fewer storage arrays?
• Determine the costs required to achieve this benefit. (Again, this should already have been done in a previous step in the planning process.)
• Calculate the net benefits. Essentially, this means subtracting the costs from the benefits; a minimal worked example follows this list.

ROI Themes
An ROI analysis can focus on specific themes which generally have business relevance. This will help IT organizations demonstrate the financial value of the SAN. The Brocade ROI model clarifies in non-technical terms the benefits of SANs, quantifying the financial benefits to demonstrate real-world ROI. Five key SAN benefit themes which are often used for ROI analysis are:

Improved storage utilization: SAN-enabled access


to enterprise storage will result in economies of scale
Improved availability of information: Enterprises
are increasingly relying on information to control
costs and improve their competitive advantage. SANenabling access to storage (where the information resides) will make that information more available by
keeping the systems processing the information running longer. Backups (and restores) will finish quicker
in SAN-enabled environments. The result is that mission critical information is at the disposal of the
enterprise more of the time.
Improved availability of applications: SAN solutions dramatically reduce application downtime both
scheduled and unscheduled. Global enterprises can
profit from the extra availability.
More effective storage management: SAN-based solutions are easier to manage because they tend to be
centralized. Centralization translates to increased operational control and management efficiencies. These
are directly related to cost reductions.
Foundation for disaster tolerance: Certain elements
of SAN-enabled solutions create the opportunity for
improved disaster tolerance as a by-product of the architecture. Examples include remote backups, disk-todisk-to-tape backups, data mirroring or replication,
and inter-site applications failovers.

Step 1: Define Server Groups
The first step is to define the important servers, their applications, and their associated storage. This should have been done during the requirements gathering phase of the SAN planning process. Then group them according to the role they play. For example, an organization might have back-end database servers, front-end application servers, email servers, web servers, and servers hosting network file systems such as NFS or CIFS.
Using data from the inventory of existing equipment, define groups of servers performing similar tasks. For each server group, define the average amount of direct-attached storage they currently have configured. Also define for each server group how fast their storage capacity is growing and how much space they need to leave unoccupied on storage arrays to grow into for a given year (i.e., how much headroom each requires). Also define the availability requirements for each server group, if you have not already done so.

Step 2: Select ROI Scenarios
In the beginning of this chapter we discussed the business requirements of the SAN. The requirements define a set of ROI scenarios. This next section illustrates how to process three common scenarios: storage consolidation, backup and restore, and high availability clustering. (These and other scenarios are discussed in Chapter 2, starting on page 61.) In your own analysis, include all business-oriented benefits which the SAN will provide.
Storage Consolidation
The goal of this scenario is to migrate from traditional Directly Attached Storage (DAS) to SAN-based storage. Two benefits to consider are (1) a reduced need for storage headroom (a.k.a. "white space"), and (2) reduced downtime associated with storage adds, moves, and changes. See the discussion starting on page 61 for a description of this scenario.
Backup and Restore
This scenario addresses backup and restore savings opportunities based on performance. It is assumed that an existing enterprise network-based distributed backup/restore facility is already in place, e.g. sending backup data to a tape server via a LAN. If that is not true, then the ROI will be greater. See the discussion of backup over the LAN starting on page 72 for a description of this scenario.

High Availability (HA) clustering is a m ethod of i mproving of the ava ilability of application s. Nor mally in
HA configurations, a standby server stands at the ready to
step in for a failing producti on server. If the production
server fails, the applications are transferred to the standby
server throu gh partially or totally autom ated m eans. In
addition to protecting against failures, HA clusters can be
used to reduce planned downtime for upgrades or changes
to a server hardware platfor m. In this case, an ad ministrator would m anually trigger an application failover
(usually called a switchover in this context) to the
standby server, perform maintenance on the prim ary, and
then manually move the application back on ce the m aintenance was complete and verified.
Most HA configurations have a dedicated standby
server for every production se rver they are protecting.
One reason for is the inability to atta ch m ore than two
computers to exte rnal SCSI disk ar rays. The r esulting 1:1
ratio of prim ary to hot standby servers m eans a very
costly HA facility, which in practice m eans that m ost
applications are no t in cluded in HA clusters, and are
therefore exposed to outages during failures or planned
hardware m aintenance operations. See

Send feedback to bookshelf@brocade.com

475

SAN

starting on page 66 for a m ore comprehensive


discussion of this topic.

Step 3: Calculate the Benefits
Once you have decided which scenarios apply to your SAN by looking at the business problems which it will address, it is time to calculate the benefits of those scenarios. When calculating ROI, benefits are commonly divided into two types: hard benefits and soft benefits.
Hard benefits include any benefits for which a specific monetary savings or revenue increase can be identified with a high degree of confidence. For example, it is often relatively easy to assign specific values to reduced capital expenditures, operational budget savings, and gains through some kinds of staff productivity increases.
Soft benefits include items for which specific monetary savings are more difficult to define. One typical example is opportunity costs. It may be difficult to assign an exact value to the opportunity cost of degraded performance, system downtime for repairs, lengthy backup windows, or lengthy data restoration times. The characterization of a benefit as "soft" does not imply that it is less important; just that it is harder to prove exactly how much money it is worth.
Remember while reading the remainder of this section that each of the benefits listed below can be classified as either hard or soft. Also remember that costs will be calculated in a subsequent step; this section is only about benefits.
Storage Consolidation Benefits
Benefits of storage consolidation can be calculated by evaluating the savings of eliminating unused "white space" on storage (a.k.a. excess headroom), which is a hard benefit, and the savings obtained by eliminating some of the downtime associated with upgrading server-attached storage, which is usually a soft benefit.
Headroom savings are deferred savings, which means that the organization will get benefits in the future, and will continue to get the benefits perpetually instead of merely having a one-time savings. If the overall storage capacity keeps expanding in an organization, so will the requirement for storage headroom. Of course, this is true of both SAN and DAS environments. The difference is that the demand for storage headroom will always be proportionally lower in a SAN. So as long as the need for storage grows over time, the benefits of the SAN will keep growing, too.
The benefit of reduced downtime includes the savings obtained by eliminating much of the downtime associated with upgrading storage. If an administrator adds a new storage array to a SAN, configuring servers to access it can be completely non-disruptive, and much of the configuration can be performed by management software. Adding storage in a DAS environment usually requires rebooting or even disassembling servers, which is costly in administrative time as well as causing an application outage.
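The headroom argument is easy to demonstrate with numbers. In this sketch (all figures hypothetical), DAS strands free space on individual servers, while a SAN pools it:

    # Per-server (used_GB, provisioned_GB) under DAS; values are invented.
    servers = [(990, 1000), (50, 1000), (400, 1000), (700, 1000)]
    used  = sum(u for u, _ in servers)
    total = sum(p for _, p in servers)
    print(f"Pool utilization: {used / total:.0%}")  # 54%

    # DAS view: the first server is 99% full and must buy disk now, despite
    # ~46% free space elsewhere. SAN view: the shared pool defers the purchase
    # until the whole pool approaches its headroom threshold.
    headroom = 0.20
    print("Pooled purchase needed:", used > total * (1 - headroom))  # False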

Side Note
It is possible to achieve ROI through improved management of storage, or through economies of scale in purchasing power achieved by using few large arrays instead of many small units.
Here is an example of how storage consolidation ROI might be discussed in the SAN project planning document:
ROI Example
In our current environment, we have 60% unused space on our storage arrays, on average. This ranges from 1% free space on some arrays, to over 95% free space on others. I estimate that we will need to spend $x to purchase new arrays over the next year, if we continue to use directly attached storage. This is because the servers currently at the 1% free end of the spectrum will need to grow their storage pool, but cannot access the arrays attached to the servers at the 95% free end. I.e., we have plenty of free space, but no way to get the servers which need it to the arrays which have it. By putting in a SAN, we should be able to avoid all of the new array purchases this year, and for most of next year as well. This means that we will directly save more than $x through implementing a SAN.
In addition, the SAN will increase the uptime of each server. Today, each time a server runs out of space, we need to schedule an outage to add another disk, controller, or array. In some cases, this is no problem, but in others, it is extremely disruptive to our business. For example, the manufacturing line relies on several of the servers which are currently almost out of space. It may be necessary to shut down the line to add more disk. Shutting down the line costs $y per hour. Last year, we had to take four hours of manufacturing line outages for storage upgrades, and next year is projected to be even higher. Therefore we will save in excess of $4y per year in downtime by putting in the SAN.
Total First Year Benefit: $x due to reduced array purchases because of white space optimization, plus $4y from reduced downtime on the manufacturing line.
An ROI benefit expressed as a dialogue such as the one above will often be translated into another form to satisfy an accountant. This is often just a spreadsheet, with little or no supporting text. However, it is usually not the responsibility of the SAN designer to do this translation. Rather, the technical team would normally provide this kind of dialogue to an accounting department member.
Backup/Restore Benefits
The backup scenario contracts the backup window, thus reducing the amount of time the servers are unavailable or have degraded performance because their data is being backed up. Shrinking the backup window creates savings for the organization through increased productivity, whether or not the applications need to be taken off line. Even if they are still online during the operation, performance is often degraded quite a bit. This is often a soft benefit, though it might be quantifiable for mission-critical applications.
In addition to speeding up backups, a SAN will speed up restore operations. A restore will occur when data is lost or corrupted, and in most cases, operations at the organization will be disrupted while waiting for this to complete. The ROI to an organization for improved restore time is the reduced opportunity cost of being unable to operate between the time of a data loss and the final restoration of data. Typically, the metrics for quantifying this will involve productivity decreases and lost revenue during the outage.
In many cases, it is easy to determine the cost of an outage to a system. The previous scenario gave the example of a manufacturing line, which had a defined cost of downtime. However, in that example, the SAN project manager had a good idea of how many outages could be avoided. By looking at historical growth for storage array data, it is possible to make defensible projections about future growth. This told the SAN project manager which arrays were likely to run out of space. It is harder to predict which systems will have corrupted filesystems, or in which cases user error will require a restoration. Avoidance of unplanned downtime has to be calculated based on statistical probabilities: what is the percentage chance that a restoration will need to happen on any given server? How long is that likely to take without a SAN? How long will it take with a SAN? Once you know how much time a SAN would save in restoring from a hypothetical downtime event, and how much each hour of system uptime is worth, you multiply the savings by the probability of the event occurring to get the benefit of the shorter restore time.
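That expected-value arithmetic looks like this in practice (all figures hypothetical):

    # benefit = probability of the event * hours saved per event * value of an hour
    p_restore_needed  = 0.5      # chance a major restore is needed this year
    hours_without_san = 4.0      # estimated restore downtime without a SAN
    hours_with_san    = 1.0      # estimated restore downtime with a SAN
    cost_per_hour     = 20_000   # value of one hour of uptime ($z)

    expected_benefit = p_restore_needed * (hours_without_san - hours_with_san) * cost_per_hour
    print(f"Expected restore-time benefit: ${expected_benefit:,.0f}")  # $30,000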
This example calculates the savings realized through improved backup and restore performance alone. Another possibility is consolidating many small tape drives onto fewer larger libraries. This can create a significant economy of scale when buying new tape libraries, and can reduce management costs as well. Yet another way to achieve backup savings via a SAN is to consolidate white space on tapes, in much the same way that the previous scenario consolidated space on disk drives. Each tape in a backup set is only partially used. Depending on the backup software used, it may be possible to put backups from multiple servers onto a single tape, thus filling it more completely. This is generally not possible with DAS tape solutions. Over time, the savings achieved by using up fewer tapes could be significant.
For example, take the manufacturing line SAN again. That SAN might be performing backups as well as consolidating storage arrays. The SAN project manager might make an entry in the planning document like this:
SAN ROI Planning Document Entry
The manufacturing line has to run backups once a day. When we do this, the server response time drops by 50%, and as a result, the line runs 50% slower. That window currently lasts one hour. 50% performance degradation for one hour on the line costs at least $x in lost revenue. The SAN will reduce that window to six minutes, eliminating 90% of it. In addition, the SAN-enabled software is more efficient, and will lower the performance impact to the application during the remaining window, though it will not be possible to quantify that until implementation time. This means that we will directly save more than 0.9 times $x through implementing a SAN.
In addition, using centralized tape libraries will allow us to compress white space out of backup tapes. Currently, our average tape utilization is 50%. With the SAN, our utilization will increase enough to use 10% fewer tapes. We currently spend $y per month on tapes, so the SAN will save 0.1 times $y each month. Since our storage needs increase over time, this benefit will increase as the SAN ages.
Finally, the SAN will reduce downtime during data restorations. Last year, the manufacturing line had two hours of downtime for restores. If we assume that the same things will happen next year, the higher performance of SAN-enabled restorations will reduce the restoration time by 75% or more. Total downtime for the line costs $z per hour, so the SAN will save an estimated 0.75 times 2 times $z. Another way to estimate the potential for needing to restore data is to look at the overall odds of a failure occurring. By taking the mean time between failures (MTBF) and mean time to repair (MTTR) of all components in the manufacturing systems into account, I estimate the probability being 50% that we will have four hours of downtime due to component failures. A 50/50 chance of four hours of downtime means that avoidance of the risk is worth 50% of the cost of the outage. This is 0.5 times 0.75 times 4 times $z, which reduces down to the same equation as the two-hour estimate above.
Total First Year Benefits: 0.9 times $x due to increased productivity on the manufacturing line from backup window reduction, plus 0.1 times 12 times $y from reduced monthly tape consumption, plus an estimated 0.75 times 2 times $z from hypothetical reduced restoration times.
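For readers who want to see the arithmetic spelled out, here is a minimal sketch of that first-year total in Python. The dollar amounts are hypothetical stand-ins for $x, $y, and $z; only the multipliers come from the entry above.

    # Hypothetical placeholders for the scenario's $x, $y, and $z.
    x = 100_000  # revenue lost to the one-hour degraded backup window
    y = 5_000    # monthly tape spend
    z = 10_000   # cost of one hour of manufacturing line downtime

    backup_window_benefit = 0.9 * x   # 90% of the window eliminated
    tape_benefit = 0.1 * 12 * y       # 10% fewer tapes, twelve months
    restore_benefit = 0.75 * 2 * z    # 75% of two hours of restore downtime

    print(backup_window_benefit + tape_benefit + restore_benefit)  # -> 111000.0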
In general, the cost of downtime will vary by server class. The example above showed only one class of server: the platforms running applications critical to the manufacturing line. In most large-scale SANs, there will be more than one class of server attached. For example, the SAN might connect both the manufacturing servers above and also the corporation's email servers and fileservers. Each class of server should have its own separate valuation for uptime. Even servers for which hard uptime numbers are unavailable should be included in the ROI analysis as a soft benefit, with an indication that the exact financial value is unknown.

High Availability Clustering

SANs and complementary software products eliminate many of the restrictions traditionally associated with HA solutions, and allow solutions based on clusters of more than two servers. Larger clusters allow a single platform to serve as a standby for more than one primary server. (I.e., SANs allow an n:1 ratio of primary to hot standby servers.) This dramatically reduces the cost of protecting applications. This is illustrated in Figure 17 and Figure 18 starting on page 67.
In addition to achieving ROI through lowered cost of protecting mission-critical applications, SANs also expand the number of applications which can be cost-justified to participate in an HA solution. That means that an organization can achieve ROI through increased uptime and associated productivity / revenue gains for the services which would otherwise not have been protected.
The benefits of HA clustering can be found by calculating the savings on both planned and unplanned downtime for all protected server classes, and the savings on equipment obtained by implementing an n:1 HA cluster instead of using a 1:1 primary/standby design. Additional savings can be calculated by accounting for the reduced maintenance cost for all protected server classes over a year. I.e., having fewer total platforms in the solution means buying maintenance contracts on fewer machines, and statistically reducing the number of repairs needed.
Once again, take the manufacturing line SAN as an example. There could be four critical applications required to support the line, one of which (application number a4) spans two platforms. An outage to either platform causes an outage to a4. The project manager might make an entry in the planning document like this:
SAN ROI Planning Document Entry
The manufacturing line has four critical applications: a1, a2, a3, and a4. The value of protecting these applications via an HA solution is increased uptime for the manufacturing operation. Previously, the value of uptime for the line was shown to be $x per hour. Last year, we had two outages to the line caused by failures of these applications, which could have been avoided by clustering them. The total avoidable downtime to repair them was four hours. Assuming that the same events occurred over the next year, the avoided cost would be 4 times $x. Using the MTBF and MTTR of all related components to calculate the statistical probability of a failure in the line shows that there is a 25% chance of eight hours of avoidable downtime. 0.25 times 8 times $x reduces to 2 times $x, which is lower than the previous estimate. We will use the midpoint for this analysis, and say that HA protection will likely save the company more than 3 times $x in downtime for each year of operation. This is a conservative estimate, particularly since our business is growing. This means that the number of servers requiring protection will increase, which will increase the likelihood of an avoidable failure and the cost of failures not avoided, so the benefit of clustering will increase substantially in subsequent years. In addition to unplanned failures, we had to take four hours of planned downtime last year, and expect the same for next year. Half of that would be avoidable with a cluster, so the total downtime reduction is 5 times $x for both planned and unplanned downtime.
There are two approaches to building this HA solution: we can dedicate a hot standby server for each application platform, or we can use a SAN to allow one standby platform to protect all of the production servers.
The a4 application spans two hosts, so there are a total of five servers which need to be protected. This will require five standby servers in the first method, or just one in the second. The difference is four extra platforms, vs. installing a SAN. Accounting for software package and operating system licenses, maintenance contracts, and projected staff time for performing maintenance, each extra server costs $y, so the SAN will save 4 times $y on hardware, software, and maintenance.
This benefit will accelerate with time. The clustering package we propose to use allows up to z platforms to be protected by a single hot standby server, so we will include several lower-tier applications in the cluster as well. This will still leave room for projected increases in the number of manufacturing line servers required for the next year, so we can add to the cluster without increasing its cost.
Total First Year Benefits: 5 times $x due to increased productivity on the manufacturing line from downtime reduction, plus 4 times $y from reduced hardware, software, and maintenance cost. We would also receive soft benefits from having lower-tier applications protected by the cluster.
It is also worth mentioning that one SAN can support many clusters. The benefits of protecting the manufacturing line might easily justify the cost of the SAN by themselves, but whether they do or not, it would often be possible to connect other mission-critical hosts to the same SAN even if they are in a different cluster, or even if they use a completely different kind of clustering software. While evaluating SAN ROI, look at all of the applications which could benefit from SAN attachment, whether or not they are the immediate focus of the project.

Combined SAN Solutions

This brings up the topic of combined SAN solutions. Historically, almost all SANs were built as application-specific islands. However, today's SANs are increasingly heterogeneous, with one SAN supporting not just different applications, but indeed supporting hardware and software from different vendors. The SAN used in the preceding examples could support a storage consolidation solution, a tape backup / restore solution, and an HA clustering solution. The benefits of the SAN would come from all three use cases, but the cost of the infrastructure would only need to be paid once. Always look for other applications which could benefit from SAN attachment even if one particular application is driving the project. To the extent that any can be identified, see if they can be quantified as hard benefits. In most cases, even if the SAN is initially envisioned only to host one application, over time more and more uses will inevitably come to light. Even if there are no initial plans to include other uses, it is appropriate to include some discussion of this principle in the ROI analysis as a soft benefit.

Step 4: Determine Costs

The next step is to determine the costs required to achieve the benefits. Since this cost determination is being done in the early stages, the costs used will be preliminary estimates. If you are following the overall SAN project plan discussed in this chapter, you will already have a good idea about the costs of the project at this stage. If this is the case, utilize this information and proceed to step five on page 487.
If you do not already have a cost estimate, you will need to make one. To create an estimate, the top-level SAN architecture must be defined. The architecture need not be correct in every detail: for a SAN of any complexity, it will have to be refined as the project progresses. It only needs to be sufficient for budgetary purposes, which means knowing more or less how many ports you will need to buy, and their HA characteristics.
Create an estimate of costs for each scenario. Treat each scenario independently and create discrete ROI calculations for each. This will allow you to determine the most effective strategy for justifying the SAN infrastructure. However, this means that the analysis will contain duplicated elements. For example, one switch port used for an ISL will support traffic from a backup solution, a storage consolidation solution, and an HA cluster solution. Therefore you should present an aggregate ROI, as opposed to the sum of the individual ROI analysis numbers, to show the real cost savings.

Step 5: Calculate ROI
In step three, you showed the gross benefits of a SAN. I.e., you showed how much money the SAN would save or help to produce, but did not take into account the costs to achieve those benefits. In this step, you will produce an estimate of the net benefit that the SAN will deliver: the benefits minus the costs.
There are a number of ways to calculate ROI. Two of the most common methods are Internal Rate of Return (IRR) and Net Present Value (NPV). Here are commonly used accountant definitions of the two methods:
IRR: The discount rate that equates the project's present value of inflows to the present value of investment costs.
NPV: The sum of a project's discounted net cash flows (present values including inflows and outflows, discounted at the project's cost of capital).
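Both metrics are straightforward to compute once the projected cash flows are written down. The sketch below is illustrative only: the cash flows and the 10% cost of capital are hypothetical, and IRR is found by simple bisection rather than by any accounting-grade method.

    def npv(rate, cash_flows):
        """Net present value; cash_flows[0] is the (negative) initial investment."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    def irr(cash_flows, lo=0.0, hi=10.0, tol=1e-6):
        """Internal rate of return via bisection: the rate at which NPV is zero."""
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if npv(mid, cash_flows) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Hypothetical SAN project: $200k up front, $110k net benefit per year for 3 years.
    flows = [-200_000, 110_000, 110_000, 110_000]
    print(round(npv(0.10, flows)))  # NPV at a 10% cost of capital -> 73554
    print(round(irr(flows), 3))     # IRR -> roughly 0.3 (30%)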
What do you actually do to calculate either of those? One answer is, "get an accountant to do it." In fact, most organizations have a preferred method for performing an ROI calculation, and have accounting departments which would insist on being the ones to perform the analysis in any case, so this is the answer that most SAN designers will use.
However, it is sometimes useful for the SAN project team to estimate the ROI of the project before discussing it with accounting. To do a rough ROI estimate, simply subtract any identified costs from any quantified benefits. In the example used throughout the previous sections, the manufacturing line would receive benefits from three different sources. Add all three up to get a total first-year figure. Then add up the costs of the project as estimated in previous steps. Subtract the second number from the first, and that is how much hard benefit the SAN will provide in the first year of operations. An accountant would also need to take equipment depreciation into account, and might look at ROI over a longer timeframe, but this should at least give the SAN design team an idea of how the ROI analysis will come out.
The key to ROI is to be sure you have identified and accounted for all of the benefits. Many things in life tend to have hidden costs, such as the maintenance problems associated with buying a used car. However, some things also have hidden benefits, such as the reduction in administrative overhead inherently associated with implementing a SAN. As long as the ROI analysis includes all costs and all benefits, both hard and soft, it will give you a good idea about whether or not a SAN is right for your organization.

Ethernet and IP Equipment
This section does not provide a comprehensive tutorial on Ethernet or IP equipment. Nor is it intended to supplement the manuals for those products. It is simply a high-level discussion of how such equipment relates to the Brocade AP7420 Multiprotocol Router and other Brocade platforms.


Ethernet L2 Switches
It is possible to use commodity 10/100baseT hubs and/or switches to attach to the Ethernet management ports of an FC switch or router. It is not recommended to use hubs for data links to iSCSI hosts or for FCIP connections, since performance on hubs is rarely sufficient for even minimal SAN functionality.
When connecting to iSCSI hosts, it is possible to use accelerated Gigabit Ethernet NICs with optical transceivers to connect hosts directly to the router. However, this is not recommended: this approach has much higher cost and much lower performance than attaching the host to a Fibre Channel switch using a Fibre Channel HBA. The value proposition of iSCSI vs. Fibre Channel only works if the low-end hosts are attached via already existing software-driven NICs to a low-cost Ethernet edge switch. Many iSCSI hosts then share the same router interface. There are many vendors who supply Ethernet edge switches. Figure 110 shows an example from Foundry Networks. (http://www.foundrynetworks.com)

Figure 110 - Foundry EdgeIron 24 GigE Edge Switch

IP WAN Routers
When connecting to a WAN in an FCIP solution, it is usually necessary to use one or more IP WAN routers. These devices generally have one or more Gigabit Ethernet LAN ports and one or more WAN interfaces, running protocols such as SONET/SDH, frame relay, or ATM. They almost always support one or more IP routing protocols like OSPF and RIP. Packet-by-packet path selection decisions are made at layer 3 (IP).
Figure 111 (p490) shows an IP WAN router from Tasman Networks. (http://www.tasmannetworks.com) There are many other vendors who supply IP WAN routers, such as Foundry Networks (Figure 112).
Make sure that the WAN router and service are both appropriate for the application. Two considerations to keep in mind when selecting a WAN router for SAN extension are performance and reliability. Most WAN technologies were not intended for either the performance or reliability needs of SANs.

Figure 111 - Tasman Networks WAN Router

Figure 112 - Foundry Modular Router

Finally, for redundant deployments it is strongly desirable for a WAN router to support a method such as the IETF standard VRRP. Such methods can allow redundantly deployed routers to fail over to each other and load balance WAN links while both are online. Figure 113 shows one way that an IP WAN router might be used in combination with the Multiprotocol Router.

Figure 113 - WAN Router Usage Example

In this example, there are two sites connected across a WAN using FCIP. The Multiprotocol Routers each have two FCIP interfaces attached to enterprise-class Ethernet switches. These are connected redundantly to a pair of WAN routers, which are running VRRP.

Gigabit Ethernet Media Converters
Some IT organizations supply Gigabit Ethernet connections using copper 1000baseT instead of 1000baseSX or LX. It is not possible to connect copper Ethernet ports directly to optical FCIP or iSCSI ports, e.g. on a Brocade AP7420. One solution is to use a Gigabit Ethernet switch with both copper and optical ports, attaching the router to the optical ports and the IT network to the copper ports. A product such as the Foundry switch shown in Figure 110 (p489) could be used in this manner. Alternately, a media converter (sometimes called a MIA) can be used. There are a number of vendors who supply such converters. TC Communications is one example. (www.tccomm.com)

Figure 114 - Copper to Optical Converter

Appendix B: Advanced Topics
This chapter provides advanced material for readers who need the greatest possible in-depth understanding of Brocade products and the underlying technology. It is not necessary for the vast majority of Brocade users to have this information. It is provided for advanced users who are curious, for systems engineers who occasionally need to troubleshoot very complex problems, and for OEM personnel who need to work with Brocade on new product development.


Routing Protocols

This subsection is intended to clarify the uses for the different routing protocols associated with the multiprotocol router, and how each works at a high level. Broadly, there are three categories of routing protocol used: intra-fabric routing, inter-fabric routing, and IP routing. The router uses different protocols for each of those functions.
To get from one end of a Meta SAN to another may require all three protocol groups acting in concert. For example, in a disaster tolerance solution, the router may connect to a production fabric with FSPF, use OSPF to connect to a WAN running other IP routing protocols, and run FCRP within the IP tunnel.

FSPF: Intra-Fabric Routing
Fabric Shortest Path First (FSPF) is a routing protocol designed to select paths between different switches within the same fabric. It was authored by Brocade and subsequently became the FC standard intra-fabric routing mechanism.114,115
FSPF Version 1 was released in March of 1997. In May of 1998 Version 2 was released, and has completely replaced Version 1 in the installed base. It is a link-state path selection protocol. FSPF represents an evolution of the principles used in IP and other link-state protocols (such as PNNI for ATM), providing much faster convergence times and optimizations specific to the stringent requirements of storage networks.
The protocol tracks link states on all switches in a fabric. It associates a cost with each link and computes paths from each port on each switch to all the other switches in the fabric. Path selection involves adding the cost of all links traversed and choosing the lowest cost path. The collection of link states (including cost) of all the switches in a fabric constitutes the topology database.
FSPF has four major components:
- The FSPF "hello" protocol, used to identify and establish connectivity with neighbor switches. This also exchanges parameters and capabilities.
- The distributed fabric topology database, and the protocols and mechanisms to keep the databases synchronized between switches throughout a fabric.
- The path computation algorithm.
- The routing table update mechanisms.
114 Much of the content in this subsection was adapted from "Fabric Shortest Path First (FSPF) v0.2" by Ezio Valdevit.
115 This and other Fibre Channel standards can be found on the ANSI T11 web site, http://www.t11.org.

The first two items must be implemented in a specific manner for interoperability between switches. The last two are allowed to be vendor-unique.
The Brocade implementation of FSPF allows user-settable static routes in addition to automatic configuration. Other options include Dynamic Load Sharing (DLS) and In-Order Delivery (IOD). These affect the behavior of a switch during route recalculation, as, for example, during a fabric reconfiguration.
This feature works in concert with Brocade frame-by-frame trunking mechanisms. Each trunk group balances traffic evenly on a frame-by-frame basis, while FSPF balances routes between different equal-cost trunk groups.
The Brocade Multiprotocol Router further enhances FSPF by providing an optionally licensed exchange-based dynamic routing method that balances traffic between equal-cost routes on an OX_ID basis. (OX_ID is the field within a Fibre Channel frame that uniquely defines the exchange between a source and destination node.) While this method does not provide as even a balance as frame-by-frame trunking, it is more even than DLS.
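The idea behind exchange-based balancing can be shown with a toy sketch: hash the fields that identify an exchange so that all frames of one exchange follow one path, while different exchanges spread across the equal-cost paths. The XOR-and-modulo mix below is purely illustrative, not the router's actual hash, and the addresses are invented.

    def pick_path(s_id, d_id, ox_id, equal_cost_paths):
        """Keep every frame of one exchange on the same equal-cost path.

        s_id, d_id: 24-bit source and destination FC addresses (PIDs)
        ox_id: originator exchange ID from the frame header
        """
        index = (s_id ^ d_id ^ ox_id) % len(equal_cost_paths)
        return equal_cost_paths[index]

    paths = ["ISL-A", "ISL-B", "ISL-C", "ISL-D"]
    # All frames of exchange 0x1234 between these two nodes take one ISL:
    print(pick_path(0x010200, 0x020300, 0x1234, paths))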

FCRP: Inter-Fabric Routing
The Fibre Channel Router Protocol (FCRP) is used for routing between different fabrics. It was designed to select paths between different FC Routers on a backbone fabric, to coordinate the use of xlate domains and LSAN zoning information, and to ensure that exported devices are presented consistently by all routers with EX_Ports into a given edge fabric. Like FSPF, this protocol was authored by Brocade. At the time of this writing it is in the process of being offered to the appropriate standards bodies. (T11)
Within FCRP, there are two sub-protocols: FCRP Edge and FCRP Backbone.
The FCRP Edge protocol first searches the edge fabric for other EX_Ports. If it finds one or more, it communicates with them to determine what other fabrics (FIDs) their routers have access to, and to determine the overall Meta SAN topology. It checks the Meta SAN topology, looking for duplicate FIDs and other invalid configurations. Assuming that the topology is valid, the routers hold an election to determine ownership of xlate phantom domains for FIDs that they have in common.
For example, if several routers with EX_Ports into the FID 1 fabric each have access to FID 5, one and only one of them will own the definition of network address translation to FID 1 from FID 5. This router will request a domain ID from the fabric controller for the xlate domain intended to represent FID 5, and will assign PIDs under that domain for any devices in LSANs going from FID 5 to FID 1. All of the other routers with FID 5 to FID 1 paths will coordinate with the owner router and will present the xlate domain in exactly the same way. If the owner router goes down or loses its path to FID 5, another election will be held, but the new owner must continue to present the translation in the same way as the previous owner. (In fact, all routers save all translation mappings to non-volatile memory, and even export the mappings if their configurations are saved to a host.)
Note that the owner of the FID 5 to FID 1 mapping does not need to be the same as the owner of, e.g., the FID 4 to FID 1 mapping. Each xlate domain could potentially have a different owner.
It is important to stress that the Fibre Channel standard FSPF protocol works in conjunction with FCRP. Existing Fibre Channel switches can use FSPF to coordinate with and determine paths to the phantom domains projected by the router, but only because FCRP makes the phantom domain presentation consistent.
On the backbone fabric, FCRP operates using ILS 0x44. It has a similar but subtly different set of tasks. It still discovers all other FC Routers on the backbone fabric, but instead of operating between EX_Ports it operates between domain controllers. For each other FCR found, a router will discover all of its NR_Ports and the FIDs that they represent, each of which yields a path to a remote fabric. It will determine the FCRP cost of each path. Finally, it will transfer LSAN zoning and device state information to each other router.
When the initial inter-fabric route database creation is complete, routers will be consistently presenting EX_Ports with xlate domains into all edge fabrics, each with phantom devices for the appropriate LSAN members. Into the backbone fabrics, routers will present one NR_Port for each EX_Port. This is another situation in which FCRP and FSPF work together: FCRP allows the NR_Ports to be set up and their activities coordinated. Once traffic starts to flow across the backbone, it will flow between NR_Ports. FSPF controls the path selection on the standard switches that make up the backbone.

Side Note
FSPF and FCRP are not the only protocols that complement each other. On an FCIP connection in a Meta SAN, all routing protocol types plus layer 2 protocols like trunking and STP can apply to a single connection. STP works outside the tunnel on LANs between FCIP gateways and WAN routers, IP protocols like OSPF work through the WAN outside the tunnel, FSPF operates at the standard FC level inside the tunneled backbone fabric, and FCRP operates above FSPF but still within the tunnel.
FCR Frame Headers
The FC-FC Routing Service defines two new frame headers: an encapsulation header and an Inter-Fabric Addressing (IFA) header. These are used to pass frames between NR_Ports of routers on a backbone fabric. These extra headers are inserted by the ingress EX_Port, and interpreted and removed by the egress EX_Port.
The format for these headers is going to be submitted for review in the T11 FC Expansion Study Group and is subject to change. Since frame handling is performed by a programmable portion of the port ASIC on router platforms, header format changes can be accommodated without hardware changes.
The Inter-Fabric Addressing (IFA) header provides routers with information used for routing and address translation. The encapsulation header is used to wrap the IFA header and data frame while it traverses a backbone fabric. This header is formatted exactly like a normal FC-FS standard header, so an encapsulated frame is indistinguishable from a standard frame to switches on the backbone. This ensures that the router is compatible with existing switches, unlike proprietary tagging schemes proposed by other vendors.


Zoning Enforcement

This subsection discusses three different enforcement mechanisms used in zoning, including when each is used, and what the significance is in each case. For a high-level discussion of zoning, see the zoning overview on p461.

SNS Zoning
When an HBA logs into a Fibre Channel fabric, it queries the name server to determine the fabric addresses of all storage devices. The most basic form of zoning is to limit what the name server tells a host in response to this inquiry. Hosts cannot access storage devices without knowing their addresses, and the SNS116 inquiry is the only way they should have of obtaining that information. If the name server simply does not tell a host about any storage devices other than the ones it is allowed to access, then it will never try to violate the access control policy.
SNS zoning works well unless the HBA driver is defective in a significant and specific way and/or the host is under control of a very skilled attacker. It does rely on each host to be a good citizen of the network, but in most cases this is a safe assumption.
SNS zoning is always used in Brocade SANs if zoning is enabled at all, but it is always supplemented by one or both of the two hardware methods below.117
116 Both Fibre Channel and iSCSI support automatic device discovery through a name server. In Fibre Channel, the service is known as the Storage Name Server, or SNS. In iSCSI, it is known as the iSNS. This subsection discusses FC SNS zoning, but a similar mechanism works with the iSNS.
117 The exceptions are the SilkWorm 1xx0 and 2xx0 series switches. The SilkWorm 1xx0 switches did not support hardware zoning at all, and the SilkWorm 2xx0 switches only supported hardware zoning for policies defined by PID, not by WWN. All 200, 3xx0, 4xx0, 12000, 24000, and 48000 products support one or both hardware zoning methods in all usage cases. In other words, all Brocade switches shipped in this century.



Per-Frame Hardware Zoning

In the per-frame hardware zoning method, switches program a table in destination ASIC ports with all devices allowed to send traffic to that port. This is in addition to SNS zoning, not instead of it.
For example, if the access control policy for a fabric allows a host to talk to a storage device, then the ASIC to which the storage is attached will be programmed with a table entry for that host. It will drop any frame that does not match an address in the table.118 This method is very secure. Even if a host tries to access a device that the SNS does not tell it about (extremely rare but theoretically possible), hardware zoning will prevent frames from that host from reaching the storage port.
118 Note that there is no performance penalty for hard zoning with Brocade ASICs.
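As a mental model, the destination-port table can be pictured as an allow-list consulted for every arriving frame. The sketch below is conceptual only; real ASIC tables are fixed-size hardware structures, and the addresses shown are invented.

    class DestinationPortFilter:
        """Conceptual model of a per-frame hardware zoning table on one port."""

        def __init__(self, allowed_sources):
            # PIDs (or WWNs) of every device zoned to reach this port.
            self.allowed = set(allowed_sources)

        def admit(self, frame_source_id):
            """True if the frame may be delivered; otherwise it is dropped."""
            return frame_source_id in self.allowed

    # A storage port zoned to two hosts; a third host's frames are dropped.
    port = DestinationPortFilter({0x010200, 0x010300})
    print(port.admit(0x010200))  # True  - zoned host
    print(port.admit(0x020400))  # False - dropped in hardware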
However, in very large configurations it is possible to exceed the table size for a destination port.119 If this happens on a particular storage port, the per-frame hardware zoning method will usually still be in force on the host port, which is sufficient to prevent access. Even if all ports in a fabric were to exceed zoning table size limitations (highly unlikely), all now-shipping Brocade switches can fall back to the Session Hardware Zoning method.
Another limitation on hardware zoning is related to WWN zoning vs. Domain,Port or PID zoning. In the older Loom switches, WWN zones were software enforced, and only PID zones would be enforced by hardware. With all currently shipping switches, full hardware enforcement is available whether using WWN or PID zoning definitions, but only for zones that contain exclusively WWNs or exclusively PIDs. If a single zone uses both WWNs and PIDs, that zone will use session hardware zoning.
119 Each generation of Brocade ASIC has improved the zoning subsystem, but it is never possible to support infinitely large tables within an ASIC.

Session Hardware Zoning

If the fabric access control policy results in a zoning table larger than a destination ASIC can support, or if a zone contains both WWNs and PIDs, then some ports on the affected chip(s) will use the second hardware zoning method. In addition to SNS enforcement, certain command frames (e.g. PLOGI) will be trapped by the port hardware and filtered by the platform control processor.
This is effectively like the previous method, except that hardware filtering is not done on all data frames, which is why it is called session hardware zoning. This works because Fibre Channel nodes require command frames to allow communication: data frames sent without command frames will be ignored by destination devices. For example, if a host cannot PLOGI into a storage device, the storage should not accept data from the host, since PLOGI is needed to set up a session context in the storage controller.120 Any frames that managed to get past both SNS zoning and hardware-based session command filtering should be dropped by the destination node.
Since this is based on a category of frame rather than a device address, there is no theoretical limit to the number of devices supportable with this method, short of the main system memory and CPU resources on the platform CP. Since the trap is implemented in hardware, it is still secure and efficient.
120 This is effective unless the storage device has a serious driver defect. That small chance is the main reason why Brocade implements full hardware zoning whenever possible, but as a practical matter the command version works fine. There has never been a reported case of an initiator accessing a storage device protected by command zoning, even in a lab environment in which experts were trying to achieve that effect.

FC Standards
All Brocade products adhere to applicable standards wherever possible. In some cases, there may not be a ratified standard. For example, there is no standard for upper-level FC-FC routing protocols at this time, so Brocade created FCRP in much the same way that Brocade created FSPF when there was a vacuum in the standards for switch-to-switch routing. Brocade has in fact either authored or co-authored essentially every standard used in the Fibre Channel marketplace. While Brocade tends to offer such protocols to the standards bodies, there is no guarantee that they will be adopted by competitors.
Some of the applicable standards include FC-SW-x, FC-FLA, FC-AL-x, FC-GS-4, FC-MI-2, FC-DA, FCP-x, FC-FS, and FC-PI-x.
For more information on these and other Fibre Channel standards, visit the ANSI T11 website, www.t11.org.

Side Note
Gigabit Ethernet was created by bolting some of the existing Ethernet standards on top of 1Gbit FC layers. Few IP network engineers realize it, but all optical Gigabit Ethernet devices still use Fibre Channel technology today.

Brocade ASICs
Brocade adds value as a SAN infrastructure manufacturer by developing custom software and hardware. Much of the hardware value-add comes from the development of Application-Specific Integrated Circuits (ASICs) optimized for the stringent performance and reliability requirements of the SAN market.121 Brocade has been building best-in-class custom silicon for SAN infrastructure equipment since 1995. This also enables greater software value-add, since custom silicon is required to enable many software features like hardware zoning, frame-filtering, performance monitoring, QoS, and trunking. This subsection discusses several122 Brocade ASICs, and shows how their feature sets evolved over the years.
121 ASICs are customized microchips designed to perform a particular function very well. Brocade uses ASICs developed in-house as opposed to using generic off-the-shelf technology designed to perform different tasks such as IP switching. Most other FC vendors use off-the-shelf technology.
122 Brocade has developed a number of ASICs that are not yet being shipped, and thus are not included in this work. Register on the SAN Administrator's Bookshelf website to receive updated content as additional chips become generally available.

ASIC Evolution
Brocade takes an "evolution, not revolution" approach to ASIC engineering. This balances the need to add as much value as possible with the need to protect customer investments and de-risk new deployments. Each generation of Brocade ASICs builds upon the lessons learned and features developed in the previous generation, adding features and refinements while maintaining consistent low-level behaviors to ensure backward and forward compatibility with other Brocade products, as well as hosts and storage. Brocade has been developing ASICs for a decade now, with each generation becoming more feature-rich and reliable than the last.

Side Note
The ASIC names used in this subsection are the internal-use Brocade project codenames for the chips. Brocade codenames generally follow a theme for a group of products. There have been three different themes for ASICs to date: fabric-related, bird-related, and music-related. Platforms and software packages also have codenames, but their external marketing names are used throughout this book. This is not done with ASICs because Brocade does not have external-use names for ASICs.

Stitch and Flannel
The first ASIC that Brocade developed was called Stitch. Development on Stitch began in 1995. It was initially introduced to the market in the SilkWorm 1xx0 series of Fibre Channel switches in 1997. (See the SilkWorm 1xx0 FC switches on p430.)
Stitch had a dual personality: it could act as either a 2-port front-end Fibre Channel fabric chip or a back-end central memory switch. The SilkWorm 1xx0 motherboards had a set of back-end Stitch chips, and accepted 2-port daughter cards that each had one front-end Stitch. The ASIC could support F_Port and E_Port operations on those cards. However, it could not support FL_Port.
To address that gap, Brocade developed the Flannel ASIC. Flannel could act as a front-end loop chip on a daughter board, but could only act as an FL_Port. It was therefore necessary to configure a SilkWorm 1xx0 switch at the factory for some number of fabric ports and some number of loop ports. Once deployed, the customer would need to live with the choices made at the time the switch was ordered. Furthermore, there was no way to make device attachment entirely auto-magic; it could matter which port a user plugged a device into.

Loom
The second-generation Brocade ASIC, Loom, was designed to replace both Stitch and Flannel. The new ASIC lowered cost, improved reliability, and added key features. The first Loom-based products were introduced in 1999.
The port density of the chip was increased from 2 ports to 4 ports, and each Loom had the personalities of both Stitch and Flannel. Four Looms could be combined to form a single non-blocking and uncongested 16-port central memory switch. This substantially lowered the component count in the SilkWorm 2xx0 series platforms, improving reliability as well as lowering cost. (See the SilkWorm 2xx0 FC switches on p432.)
Feature improvements were made in many areas, including PID-based hardware zoning, larger routing tables, and improved buffer management. Updated phantom logic was introduced to support private loop hosts. (The QL/FA feature.) Virtual channels were added to eliminate blocking on inter-switch links.
One of the most important features that Loom introduced was the U_Port. All three port types (F, FL, and E) could exist on any interface, depending on what kind of device was attached to the other end of the link. Switches using Loom could auto-detect the port type of the remote device: a substantial advance in plug-and-play usability. Auto-detecting switch ports came to be known as Universal Ports (U_Ports), and the SilkWorm 2800 running the Loom ASIC was the first in the industry to support this feature.
Loom enjoyed remarkable success and longevity. Brocade shipped well over a million Loom ports, and still has a very high percentage of them active in the field, despite the length of time for which the chip has been shipping. Brocade has therefore continued to support backwards compatibility with Loom-based products in all subsequent ASICs and platforms.

Bloom and Bloom-II
Bloom was designed to replace Loom, again lowering cost, improving reliability, and adding features.
Bloom first appeared in 2001 in the SilkWorm 3800 switch. It had eight ports per ASIC, and two Blooms could be combined to form a single non-blocking and uncongested 16-port central memory switch called a Bloom ASIC-pair. (One ASIC-pair is what powered the SilkWorm 3800, for example.) Because this ASIC had more ports than its predecessor, Brocade named the chip by adding a "B" in front of "Loom" to indicate that it was "Bigger than Loom."
Bloom also increased the port speed to 2Gbit, doubling performance vs. Loom. In addition, the new ASIC added better hardware-enforced zoning (both PID- and WWN-based), frame-level trunking to load-balance groups of up to four ports, frame filtering, end-to-end performance monitoring, and enhanced buffer management to support longer distances on extended E_Ports. The chip also had routing table support allowing many chips to be combined to form a 128-port single-domain director (SilkWorm 24000).
The Bloom-II ASIC has such minor changes from Bloom that it is considered a simple refinement, not a new generation. A new process was used in its design to shrink the size of each chip, lowering power and cooling requirements. Additional test interfaces were added to improve manufacturing yield and reliability. Buffer management was improved to allow longer-distance links at full 2Gbit speed.
At the time of this writing, Bloom is still shipping in the SilkWorm 12000 port blade and the SilkWorm 3800 switch. It was also used in the SilkWorm 3200 and 3900 switches, and in a number of OEM embedded products. Bloom-II is still shipping in the SilkWorm 3250 and 3850 switches, and in the SilkWorm 24000 blade set. (See the Brocade product sections on p397 and p430.)

Condor
The fourth-generation ASICs from Brocade have codenames related to birds. Condor is the fourth-generation Fibre Channel fabric ASIC, and the first of its generation to become generally available. It builds upon the previous three ASIC generations, adding significant features and improving reliability to an unprecedented degree. At the time of this writing, Condor is shipping in the Brocade 4100 and 4900 switches, and the Brocade 48000 director.
Like previous Brocade ASICs, Condor is a high-performance central memory switch, is non-blocking, and does not congest. It builds on top of the advanced features that Brocade added to Bloom-II.123 However, Condor has many major enhancements as well, and is not simply a "Bloom-III." It is truly a fourth-generation technology.
Condor has thirty-two ports on a single chip, with each port able to sustain up to 4Gbits per second (8Gbits full-duplex) in all traffic configurations. Each chip has 256Gbits of cross-sectional bandwidth. It was designed to support single-domain director configurations much larger than the Bloom-II-based SilkWorm 24000, in which case the platform cross-sectional bandwidth will be massively higher. For example, if the Brocade 48000 is configured with 128 4Gbit Condor ports, its internal cross-sectional bandwidth is 1Tbit. The number of virtual channels per port has also been increased to allow non-blocking operation in larger products and networks.
123 Except for private loop support. This is near end of life based on declining customer demand, so priority was given to other features. Private loop devices are almost entirely out of circulation already, and the little remaining demand can be met by using Bloom-based switches in the same network as Condor platforms.
The doubling in port speed is only the beginning of Condor's performance enhancements. Frame-based trunking has been expanded to support 8-way trunks, yielding 32Gbits (64Gbits full-duplex) per trunk. Exchange-based load balancing (DPS) is possible between either trunked or non-trunked links. (See the discussion starting on page 272.) Two Condor ASICs networked together with half of their ports could sustain 64Gbits (128Gbits full-duplex) between them, and far more bandwidth could be sustained between Condor-based blades in directors. In fact, combining multiple Condor ASICs running 4Gbit links with frame and exchange trunking can yield 256Gbit evenly balanced paths.
Condor also improves control-plane performance. Each ASIC can offload the platform CP from many node login tasks. When a Fibre Channel device attempts to initialize its connection to the fabric, previous ASICs would forward all login-related frames to the CP. Condor is capable of performing much of this without involving the CP, which improves switch and fabric scalability as well as response time for nodes.
The ASIC memory systems have also been improved. Buffer management and hardware zoning tables are the primary beneficiaries of this. A centralized buffer pool allows better long-distance support: any port can receive over 200 buffers out of the pool. Centralized zoning memory allows more flexible and scalable deployments using full hardware zoning. (See "Zoning Enforcement" on p498 for more information.)

Goldeneye
Goldeneye, like Condor, is part of the fourth-generation Fibre Channel fabric ASIC set from Brocade, and the second of its generation to become generally available. It builds upon the previous three ASIC generations, adding significant features and improving reliability to an unprecedented degree. At the time of this writing, Goldeneye is shipping in the embedded switches and the Brocade 200E switch.
Like previous Brocade ASICs, Goldeneye is a high-performance central memory switch, is non-blocking, and does not congest. It builds on top of the advanced features that Brocade added to Bloom-II. However, Goldeneye has many major enhancements as well, and is not simply a "Bloom-III." It is truly a fourth-generation technology.


Goldeneye has 24 ports on a single chip, with each port able to sustain up to 4Gbits per second (8Gbits full-duplex) in all traffic configurations. Each chip has 192Gbits of cross-sectional bandwidth. It was designed to support highly dense products such as the embedded blade server switches.
The doubling in port speed is only the beginning of Goldeneye's performance enhancements: frame-based trunking can support up to 4-way trunks, yielding 16Gbits (32Gbits full-duplex) per trunk. Exchange-based load balancing (DPS) is possible between either trunked or non-trunked links.
Goldeneye also improves control-plane performance. Each ASIC can offload the platform CP from many node login tasks. When a Fibre Channel device attempts to initialize its connection to the fabric, previous ASICs would forward all login-related frames to the CP. Goldeneye is capable of performing much of this without involving the CP, which improves switch and fabric scalability as well as response time for nodes.
The ASIC memory systems have also been improved. Buffer management and hardware zoning tables are the primary beneficiaries of this. A centralized buffer pool allows better long-distance support: any port can receive over 200 buffers out of the pool. Centralized zoning memory allows more flexible and scalable deployments using full hardware zoning.

Egret
Egret is a bridge chip which takes three internal 4Gbit FC ports on a blade and converts them into a single external 10Gbit FC interface. At the time of this writing, it is used only on the FC10-6 blade (p418), which has six Egret chips connected to two Condor ASICs. From a performance standpoint, an Egret-Egret ISL can be thought of as functionally identical to a three-port by 4Gbit frame-level trunk.
There are differences, however. The Egret approach uses 1/3rd of the number of fiber optic strands or DWDM wavelengths, which can produce substantial cost savings in some long distance solutions. On the other hand, 10Gbit FC requires more expensive XFP media, more complex and thus more expensive blades, and single-mode cables, which can increase cost massively for shorter-distance ISLs. As a result, it is expected that Egret will only be used for DR and BC solutions. In addition to aggregating three interfaces into one, the Egret chip also contains its own buffer-to-buffer credit memory, allowing each and every 10Gbit port to support a full-speed connection over dark fiber or xWDM of up to 120km.
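The reason distance consumes buffer-to-buffer credits can be seen with a little arithmetic: a credit is held for a frame's full round trip, so the sender needs enough credits in flight to keep the pipe full. The constants below are approximations (roughly 5 microseconds per km of propagation in fiber, a 2112-byte payload, nominal data rates), so the results are rough estimates rather than vendor-exact figures.

    def credits_needed(distance_km, rate_gbps, frame_bytes=2112):
        """Approximate B2B credits required to run a link at full speed."""
        round_trip_us = 2 * distance_km * 5.0                 # ~5 us per km, one way
        frame_time_us = frame_bytes * 8 / (rate_gbps * 1000)  # serialization time
        return round_trip_us / frame_time_us

    print(round(credits_needed(50, 4)))    # 4Gbit at 50 km   -> ~118 credits
    print(round(credits_needed(120, 10)))  # 10Gbit at 120 km -> ~710 credits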

FiGeRo / Cello
The FiGeRo and Cello chips power the Brocade Multiprotocol Router (AP7420). Both ASICs were acquired when Brocade bought Rhapsody Networks. The platform consists of sixteen FiGeRo chips (one per port) interconnected via one Cello that acts as a cell switching fabric.
FiGeRo was codenamed to follow a music theme. (As in, "The Marriage of Figaro.") The "Fi" and "Ge" components of the name refer to the fact that a FiGeRo ASIC can act as either a Fibre Channel port or a Gigabit Ethernet port. Cello got its name by being a cell switching ASIC.
Each FiGeRo ASIC has fixed gates to perform frame-level functions efficiently, and three embedded RISC processors plus external RAM to give each port exceptional flexibility for higher-level routing and application processing functions. Currently, the Multiprotocol Router running FiGeRo supports FC fabric switching, FC-to-FC routing, FCIP tunneling, and iSCSI bridging. More advanced fabric applications are being developed by Brocade and its partners. In fact, at the time of this writing, several ILM and UC applications for this architecture are just beginning to ship. Similar functionality is expected to be available throughout the Brocade product line by the end of 2005.


Director Internal Connectivity

Modular switches like SAN directors always require internal connectivity between discrete components over a midplane or backplane. It is not possible, or even desirable, to have, for example, a single-ASIC director. Some of the major benefits of a bladed architecture are that customers can select different blade types for different applications, swap out old blades one at a time during upgrades, and have the overall system continue to operate even in the face of failures of some component. A single-chip solution would prevent all of these features and more from working. As a result, all such products from all vendors have some chips on port blades, some other chips on control processor blades, and (typically) some chips on back-end data-plane switching blades. The Brocade directors are no exception.
There are many different approaches that can provide the required chip-to-chip connectivity. It is possible to use shared memory, a crossbar, a cell switch, or a bus, to name just a few approaches that have been used in the networking industry. A director might have connectivity between front-end protocol blades via a crossbar using off-the-shelf commodity chips, or it might use native Fibre Channel connections between blades using SAN-optimized ASICs. High-speed packet switches for both Ethernet and Fibre Channel use shared memory designs for highest performance. Commodity Ethernet switches often use crossbars to lower research and development costs, thus increasing short-term profits for investors at the expense of long-term viability and customer satisfaction. It is also possible for more than one option to be combined within the same chassis, which is often known as a multistage architecture.
Most Brocade products are single-stage central memory switches, often consisting of just one fully integrated chip. However, some of the larger products use multistage designs to support the required scalability and modularity. All internal-connectivity approaches from all vendors have an internal topology, a set of performance characteristics, and a set of protocols, much like a network.124 The arrangement of the chips and traces on the backplane or midplane creates the topology, and the chips connected to this topology have link speed and protocol properties. Indeed, it is possible to make many analogies between networks and internal director designs, no matter what connectivity method is used.
Brocade multistage switches use central memory ASICs with back-end connections based on the same protocol as the front-end ports. This avoids the performance overhead associated with protocol conversions that affects other designs like crossbars. The back-end connectivity is an enhanced Fibre Channel variant called the Channeled Central Memory Architecture (CCMA). The connections between ASICs are therefore called CCMA Links. While these are enhanced beyond standard FC links in a number of ways, the payload and headers of frames carried by the CCMA Links use an unmodified, native Fibre Channel frame format. This allows the director to operate efficiently and reliably.
The use of CCMA links defines protocol characteristics, but there are variations in terms of other performance characteristics and topology depending on how CCMA connections are made. (I.e., the back-end topology of a director is the geometrical arrangement of the back-end ASIC-to-ASIC links, much the same way as the topology of a SAN is the arrangement of ISL connections.) The remainder of this subsection discusses two variations on the Brocade CCMA multistage architecture in detail.
124 While the internal connectivity in a chassis does not work exactly the same way that an external network works, they do have enough in common that this provides a useful analogy.

The SilkWorm 12000 and 3900 XY Architecture


The Brocade SilkWorm 12000 is a highly available Fibre Channel director with two domains of 64 ports each, and the SilkWorm 3900 is a high-performance 32-port midrange switch. Both platforms can deliver full-duplex line-rate switching on all ports simultaneously, using a non-blocking CCMA multistage internal architecture. This section discusses the details of how ASICs are interconnected inside the two products, and provides some analysis of how that structure performs.
SilkWorm 12000

The SilkWorm 12000 chassis (Figure 105, p439) is comprised of up to two 64-port domains, each of which may contain up to four 16-port cards. Each card is divided into four 4-port groups known as quads. Viewed from the front and the side, a blade is constructed as depicted in Figure 115.

Figure 115 - SilkWorm 12000 Port Blades

Figure 116 - SilkWorm 12000 ASIC-to-Quad Relationships

The SilkWorm 12000 uses a distributed switching architecture. Each quad is a self-contained 16-port central memory switching element, comprised of two ASICs. Four ports of each quad are exposed outside the chassis, and may be used to attach FC devices such as hosts and storage arrays, or for Inter-Switch Links (ISLs) to other domains in the fabric. The remaining twelve ports are used internally, to interconnect the quads together, both within and between blades. This means that the SilkWorm 12000 actually has three ports of internal bandwidth for each port of external bandwidth: a 1:3 undersubscribed design. Viewed logically from the side, the ASIC-to-quad relationship on a blade can be viewed in either of the ways shown in Figure 116.
The interconnection mechanism used to tie the quads together involves connecting each quad directly to every other quad in the same row and column with one internal 4Gbit CCMA link. Each link uses two internal ports plus frame-level trunking to achieve 4Gbit full-duplex bandwidth on its path. Three of the six links are vertical (within a blade) and three are horizontal (between blades). Within a blade, the connection pattern is as shown in Figure 117.

Figure 117 - SilkWorm 12000 Intra-Blade CCMA Links

Each of the four quads has four ports for front-end connections, and six ports (three 4Gbit VC links) going to the other quads within that blade. (Each of the lines with a "2" in the figure represents 2x2Gbits balanced with frame trunking.) Figure 118 provides a more abstract depiction of this.

Figure 118 - SilkWorm 12000 CCMA Abstraction

Each curved vertical line represents a 4Gbit internal trunk. Each numbered box is a quad, which has four external connections, represented by the four pins attached to quad 0. The diagram represents one SilkWorm 12000 port blade.
In addition to the three vertical back-end 4Gbit CCMA links within the blade, each quad has three horizontal back-end 4Gbit links to the other three blades in the domain. The overall interconnection within a SilkWorm 12000 64-port domain can be viewed like Figure 119.
This m atrix connection m ethod is known as the XY
method, since the internal CCM A links follow a grid. The
name com es from m athematics. The horizon tal connection s
are called X connections, since that is the v ariable traditional used to represent the horizontal axis on a graph. The
vertical connections are called Y links.
If the source and destination quads are in th e same row,
the director will us e one X-axis internal CCMA hop to get
between them, since there is a direct connection available.
This adds just 700 or so nanosec onds of latency. If they are
in the same column, it will use one Y-axis hop. Look back at
the figure. See how any two qua ds in the sam e row or co l516

umn are d irectly connected? This shortest path will alway s


be used if it is availab le. If the source and destination are in
different rows and columns, there is no direct connection. In
that case, in the default shi pping configuration, the platform
will route traffic between any two quads using an X-then-Y
formula: first the frame will traverse a horizontal CCMA link
to an intermediate ASIC, then it will take th e vertical link to
the destination ASIC.
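To make the routing rule concrete, the following minimal Python sketch models the X-then-Y decision. The (slot, quad) coordinate scheme and the function name are illustrative assumptions for this book, not the platform's actual routing tables:

    # Sketch of the X-then-Y rule on a SilkWorm 12000 64-port domain.
    # A quad is addressed as (slot, quad): slot = blade position on the
    # X axis, quad = position within the blade on the Y axis.
    # (Hypothetical model for illustration, not the real ASIC tables.)

    def xy_path(src, dst):
        """Return the quads a frame traverses from src to dst."""
        (s_slot, s_quad), (d_slot, d_quad) = src, dst
        if src == dst:
            return [src]                 # same quad: switched locally
        if s_quad == d_quad:             # same row: one X (inter-blade) hop
            return [src, dst]
        if s_slot == d_slot:             # same column: one Y (intra-blade) hop
            return [src, dst]
        # Different row and column: X first, then Y, via an intermediate quad.
        return [src, (d_slot, s_quad), dst]

    print(xy_path((0, 1), (3, 1)))  # [(0, 1), (3, 1)]          one X hop
    print(xy_path((0, 0), (3, 2)))  # [(0, 0), (3, 0), (3, 2)]  X, then Y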

Figure 119 - SilkWorm 12000 64-Port CCMA Matrix

SilkWorm 3900

SilkWorm 3900 internal connections are similar to those in the SilkWorm 12000 port blade. The platform consists of four ASIC-pairs wired together in an XY topology. Since there are no other blades to connect to, all of the links are used to connect the ASIC-pairs into a square. Each ASIC-pair has eight external ports and eight internal ports. Like the 12000, traffic will take a direct path if it is available, and will take an X-then-Y path if moving diagonally.

XY Performance

There are three ways to evaluate performance of a network product: theoretical analysis, empirical stress-testing, and real-world performance testing.

From a theoretical standpoint, both XY products have more than adequate performance. There is more bandwidth used to interconnect the quads together on a 12000 than there is input bandwidth on the front end of the switch. This is referred to as an under-subscribed architecture: for each quad, there are fewer ports subscribed to the back end than there is bandwidth on the back end, by a ratio of one to three, usually written 1:3. (Four front-end connections to twelve back-end ports reduces to a ratio of 1:3.) This is 8Gbits of front-end bandwidth feeding into 24Gbits of total back-end bandwidth per quad. The SilkWorm 3900 has a 1:1 subscription relationship: 16Gbits of input feeding into 16Gbits of back-end CCMA link capacity.
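The subscription arithmetic quoted above is easy to verify directly. A small illustrative Python calculation, using the port counts and the 2Gbit rate from this section:

    # Back-end subscription arithmetic for the two XY platforms.
    # Half-duplex figures, matching the ratios quoted in the text.

    def subscription(front_ports, back_ports, gbit_per_port=2):
        front = front_ports * gbit_per_port  # external input bandwidth
        back = back_ports * gbit_per_port    # internal CCMA bandwidth
        return front, back, front / back

    # SilkWorm 12000 quad: 4 external ports, 12 internal ports.
    print(subscription(4, 12))   # (8, 24, 0.33...): 8Gbit into 24Gbit, 1:3

    # SilkWorm 3900 ASIC-pair: 8 external ports, 8 internal ports.
    print(subscription(8, 8))    # (16, 16, 1.0):    16Gbit into 16Gbit, 1:1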

Side Note
For almost all users, all Brocade multistage platforms have plug-and-play performance, and the information in this section is only provided to satisfy curiosity. However, for advanced users who need to tune their applications for ultimate performance, the topology information below can be relevant. The rule of thumb is this: it is worth taking the time to understand the internal topology of a multistage product only if it is necessary to run all ports on the platform full-speed, full-duplex, for sustained periods, and there will be a business impact if even a few of the ports run slower than the theoretical maximum possible line rate.
While the front-end ports cannot generally flood all of the back-end bandwidth on the SilkWorm 12000, it is theoretically possible for certain traffic patterns to exhibit congestion due to an imbalanced usage of this bandwidth. To determine if theoretical limits of a platform can be exhibited in the real world, empirical testing can be performed. This has been done extensively by Brocade, by third parties such as networking magazines, major customers, and independent laboratories, and of course by other switch vendors. In every case, the conclusion was the same: the XY products produce uncongested operation in any real-world and most purely contrived traffic patterns. Even incredibly stressful traffic configurations such as a full mesh test will produce no congestion.
For example, it is possible to connect all 32 ports of a SilkWorm 3900 to a SmartBits™ traffic generator. Using their management tool, the SmartBits can be configured to send traffic flows from every port on the switch to every other port. This is known as a full mesh traffic pattern, and is generally acknowledged as one of the most stressful traffic configurations possible. Figure 120 illustrates an eight-node full mesh and a sixteen-node full mesh. Each box represents a port on the switch, and each line a pair of flows.

Figure 120 - Full-Mesh Traffic Patterns

Clearly, there are quite a few simultaneous traffic flows in these configurations. When testing the SilkWorm 3900 with a 32-port full mesh, far more connections are in play, and yet all 32 ports show full-speed, full-duplex performance. Similarly, the SilkWorm 12000 will perform at peak with a 64-port full mesh.
It seems unlikely, based on this, that any given environment would experience internal performance bottlenecks related to the XY CCMA architecture. If that ever did happen, there are a number of options for tuning XY performance. For example, following Brocade's tradition of supporting localized switching, each group of four ports (quad) on the 12000 and eight ports (octet) on the 3900 can switch locally without even using the XY traces. This gives users who take advantage of known locality the opportunity to optimize performance still further.

Brocade 24000 and 48000 (CE)


The Brocade 24000 and 48000 chassis (Figure 106 p 441 and Figure 78 p 405, respectively) are functionally equivalent to the SilkWorm 12000. Both are CCMA multistage directors, though the products use different backplane traces. Both of the newer directors can exhibit uncongested operation, both in theory and in empirical testing.
In the Brocade 24000, each port blade has two Bloom-II (p 505) ASIC-pairs, each of which exposes eight ports to the user and has equivalent bandwidth used for backplane CCMA links: any given octet has 16Gbits (32Gbits full-duplex) of possible external input, and the same bandwidth available to connect to any other octet. Local switching can be done within an 8-port group.
The Condor-based (p 506) Brocade 48000 has 16-, 32-, and 48-port blades. Local switching is possible within a 16-port group on the first two, and a 24-port group on the 48-port blade. In each case, the director has 64Gbits of internal bandwidth per slot (128Gbits full-duplex) in addition to the local switching bandwidth. This means that the 16-port blade has a 1:1 subscription ratio even if all external ports are connected to 4Gbit devices and no traffic is localized. The larger blades also have 4Gbit interfaces, and are uncongested in most real-world scenarios. However, it is important to realize that the larger blades can exhibit internal congestion if (a) traffic on enough ports is sustained at or near full speed, and (b) none of the flows are localized. Most environments have some degree of burstiness and/or some degree of locality, so the oversubscription of the two high-port-count blades is largely academic.
The characteristics of the two newer directors are similar to the SilkWorm 12000 in some respects, but radically different in others. This is because the two newer platforms use a Core/Edge (CE) ASIC layout instead of the XY layout. The CE layout is more symmetrical: all ports have equal access to all other ports. In addition, local switching is allowed within an octet rather than a quad on the 24000, which doubles the opportunity to tune connection patterns for absolute maximum performance if locality is known. The 48000 doubles that again for two blades, and triples it for the 48-port blade.

Figure 121 shows how the blade positions in the Brocade 24000 director are connected to each other. On the left is a somewhat abstract cable-side view of the director, showing the ten blade slots. Each of the port cards has four quads depicted. Quad boundaries are still relevant for things like ISL trunking. The top two and bottom two quads on each blade each form an octet for local switching.

Figure 121 - Top-Level CE CCMA Blade Interconnect

On the right is a high-level diagram of how the slots interact with each other over the backplane. Each thick line represents a set of eight 2Gbit CCMA links connecting the port blades with the CP blades. The CP blades contain the ASICs that switch between octets. Every port blade is connected to every CP blade, and the aggregate bandwidth of these CCMA links is equal to the aggregate bandwidth available on external ports. Each port blade has 16 2Gbit FC ports going outside the box, and 2x8=16 2Gbit CCMA links going to the backplane.
As this diagram illustrates, the internal connectivity looks similar to a resilient core/edge fabric design. This is no accident: the geometry of the core/edge design has been universally accepted as the best practice for high-performance, highly scalable, high-availability SAN designs, and is currently recommended by all vendors. By using the same geometry for the internal layout of its directors, Brocade has achieved the same benefits within the chassis that users have adopted for external connections. The "every port blade to every CP blade" mesh is what makes it a CE layout, and the 1:1 internal-to-external bandwidth ratio makes it a fat-tree, or non-oversubscribed, layout.
The Brocade 48000 has the same top-level connectivity diagram when populated with 16-port blades. The difference is that each unit represents a 2Gbit connection in the 24000 and a 4Gbit connection in the 48000. So, for example, the 8-unit link between s1 and s5 represents 16Gbits of aggregate bandwidth in the Brocade 24000, and 32Gbits in the Brocade 48000.
Of course, the two directors are not really Core/Edge networks of discrete switches, but thinking of them that way does provide a useful visualization. Because they are fully integrated single-domain FC directors and not merely "networks in a can," the two platforms also:
- Are easier to manage than the analogous network of individual switches.
- Take up less rack space than a network would use.
- Are easier to deploy.
- Simplify the cable plant by eliminating the large number of ISLs and media required for a network.
- Are far more scalable, as they do not consist of a large number of independent domains.
- Are much less expensive, both in terms of initial and ongoing costs.
- Have higher reliability due to having far fewer active components.
- Do not run switch-to-switch protocols internally.
- Are capable of achieving even greater performance due to internal routing optimizations.

When frames enter a port blade on either director, under normal working conditions the blade can select between either of the two CP blades to switch the traffic. This provides redundancy in case one CP blade should fail, and also allows full performance. For example, the Brocade 48000 uses frame-level and exchange-level trunking to balance IO between the two CPs in much the same way Condor-based switches can balance traffic in a core/edge fabric. The net result is that no empirical test has ever shown congestion within either director: testing from Brocade, independent laboratories, networking magazines, and other vendors alike has confirmed that these two platforms are simply the highest-performing SAN products in the world today.


Link Speeds

Storage networks may operate at a variety of speeds. Fibre Channel standards define speeds including 1Gbit, 2Gbit, 4Gbit, 8Gbit, and 10Gbit. [125] Ethernet defines 10Mbit, 100Mbit, 1Gbit, and 10Gbit, though only 1Gbit and 10Gbit are relevant to storage networking.

This subsection discusses each link speed. More detail is provided for 4Gbit FC than for the other speeds, since it is the newest of the link rates from an implementation perspective. (Although it predates 10Gbit from a standards point of view.)


Encoding

Each of the link speeds discussed in this section has an encoding format. Encoding is used on the signal to make it transition between zero and one more often, thus allowing the high and low signal states to be distinguished from each other. If long periods were allowed to elapse between transitions, a link might not be able to tell the difference between minor signal variations (i.e. noise) and real 0/1 transitions. It could begin treating noise as if it were data, which could cause link failures and even data corruption in extreme cases. Encoding formats ensure that this will not occur. As a side benefit, encoding provides an error detection method, somewhat like parity bits in a modem protocol.

There are many formulas that can be used to encode a signal. Some encoding formats are referred to by the number of bits on the link required to represent a certain number of data bits, such as 8b/10b. The ratio indicates the amount of user data in a given data unit.

[125] FC-PH also defines 250Mbit (1/4 speed) and 500Mbit (1/2 speed) Fibre Channel interfaces. However, 1/4 speed has been obsolete for about a decade, and 1/2 speed was never implemented. It is also possible to run Fibre Channel at other speeds on intra-platform links. For example, the Condor ASIC is capable of forming 3Gbit FC connections to other Brocade ASICs, even though there is no standard defined for this.


8b/10b requires that ten bits be sent down the line to represent eight data bits. This affects throughput: 8b/10b carries 20% encoding overhead.

In contrast, the 64b/66b encoding format has only about 3% overhead, which means more payload can be moved for a given link speed. However, it also means that the link can be less effective at detecting errors, and could be subject to more frequent failures.

The bottom line is that encoding is necessary and present on all technologies discussed below. It is also necessary that devices on both ends of a connection use the same encoding format, i.e. 8b/10b or 64b/66b. It is not possible to have an 8b/10b device talk to a 64b/66b device natively; one or the other would need to be converted before communication would be possible. This caveat only applies to 10Gbit, since all other speeds use 8b/10b encoding.
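As a worked example of what these ratios mean in practice, the sketch below converts raw signaling rates into half-duplex payload rates. The baud rates used are the standard FC signaling rates (1Gbit FC actually signals at 1.0625 Gbaud, which is why the subsections below quote "just over" 100, 200, and 400 Mbytes/sec); framing overhead is ignored, so these are upper bounds:

    # Payload throughput implied by an encoding ratio, ignoring framing.

    def payload_mbytes_per_sec(gbaud, data_bits, line_bits):
        """Half-duplex payload rate for a given encoding ratio."""
        return gbaud * 1e9 * (data_bits / line_bits) / 8 / 1e6

    print(payload_mbytes_per_sec(1.0625, 8, 10))     # 1Gbit FC:  ~106 MB/s
    print(payload_mbytes_per_sec(2.125, 8, 10))      # 2Gbit FC:  ~212 MB/s
    print(payload_mbytes_per_sec(4.25, 8, 10))       # 4Gbit FC:  ~425 MB/s
    print(payload_mbytes_per_sec(10.51875, 64, 66))  # 10Gbit FC: ~1275 MB/s

Note that the 64b/66b result is roughly three times the 4Gbit figure, which is where the "10Gbit is equivalent to 12Gbit of 8b/10b" comparison later in this section comes from.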

1Gbit FC
1Gbit Fibre Channel was defined in the FC-PH standard in 1994. All Brocade platforms ever shipped support this speed. It was considered the "sweet spot" in the industry for many years, and is still viable today for many customers. Links running at this speed use 8b/10b encoding, and can achieve a user-data throughput of just over 100Mbytes/sec (200Mbytes/sec full duplex). Both copper and optical media are defined by the standard. 1Gbit interfaces most often use GBICs, although 2Gbit Fibre Channel SFPs also support this rate to maintain backwards compatibility.

2Gbit FC
2Gbit Fibre Channel was defined in the FC-PH-2 standard in 1996, though no vendor implemented it for some time after that. All Brocade platforms more recent than the SilkWorm 2xx0 series support auto-negotiation between 1Gbit and 2Gbit FC. This is considered to be the "sweet spot" in the industry today, although 4Gbit is expected to replace 2Gbit in 2005. Links running at this speed use 8b/10b encoding, and can achieve a user-data throughput of just over 200Mbytes/sec (400Mbytes/sec full duplex). Both copper and optical media are defined by the standard. 2Gbit interfaces most often use SFPs.

4Gbit FC (Frame Trunked and Native)


For several years now, Brocade has offered frame-level trunking (p 460) on all 2Gbit products. This can be used to combine two 2Gbit interfaces into one evenly balanced 4Gbit channel.
Recently, Brocade introduced a native 4Gbit interface, in which each individual port can run at that speed. These ports still may be trunked to form even higher rate pipes. This allows node connections at 4Gbit, as well as higher speeds and lower costs for ISL connections. Native 4Gbit is expected to become the "sweet spot" in the SAN industry for 2005 and beyond.
Like 2Gbit Fibre Channel, native 4Gbit was defined in the FC-PH-2 standard in 1996. The first Brocade platform to support this standard is the Brocade 4100 (p 400). It supports auto-negotiation down to 1Gbit and 2Gbit FC on all ports for backwards compatibility. While other 4Gbit vendors may not support trunking, on Brocade platforms up to eight 4Gbit links can be trunked to form a single 32Gbit channel (p 535), and multiple trunks can be balanced into a single 256Gbit pipe.
Links running at 4Gbit use the same 8b/10b encoding as existing 1Gbit/2Gbit infrastructure, and can achieve real-world payload throughput of over 400Mbytes/sec (over 800Mbytes/sec in full-duplex mode). 4Gbit interfaces use the same SFP standard and optical cabling as 1Gbit and 2Gbit interfaces, which allows 4Gbit products to be backwards compatible with installed-base switches, routers, nodes, and data center cable plants.
Despite the fact that the 4Gbit standard was ratified at the same time as the 2Gbit standard, no 4Gbit products were built until 2004. There was a debate in the FC industry about whether to build 4Gbit products at all, or to go straight to 10Gbit. The debate ended when the Fibre Channel Industry Association voted to adopt 4Gbit, and all major FC vendors began to add 4Gbit products to their roadmaps. The factors that motivated the industry in this direction included both economic and technological trends.
Technical Drivers for Native 4Gbit FC

Two of the most critical questions in the 4Gbit vs. 10Gbit debate were whether higher-than-2Gbit speeds were needed at all, and if so, which of the candidates could be widely deployed in the most practical way.
Higher speeds were deemed desirable for several reasons. For example, some hosts and storage devices - e.g. large tape libraries - were running fast enough to saturate their 2Gbit interfaces. In some cases, this was causing a business impact for customers: if a backup device could stream data faster, then backup windows could be reduced and/or fewer tape devices could be purchased. Furthermore, running faster ISLs would mean needing fewer of them, thus saving cost on switches and cabling. For long-distance applications running over xWDM or dark fiber, the reduction in number of links could yield substantial ongoing cost savings.

For these and many other reasons, the industry acknowledged that 2Gbit speeds were no longer sufficient for storage networks. The choice was to use 4Gbit or 10Gbit. It turned out that 4Gbit had substantial technical advantages related to deployment, and provided at least the same performance benefits as 10Gbit.
Hosts and storage devices that were exceeding their 2Gbit interface capacity were not doing so by a large amount. Some tape drives were designed to stream at between 3Gbit and 4Gbit, and some hosts could match these speeds, but only a handful of the highest-end systems in the world could exceed 4Gbit, and even these could not generally sustain 10Gbit streams. 4Gbit interfaces could be marketed at cost parity with 2Gbit, but 10Gbit interfaces demanded a massive price premium due to architectural differences in the interfaces, so there was no point in using the more expensive 10Gbit interface in a node that could not even saturate a 4Gbit interface. Actual performance on nodes would be identical whether using 4Gbit or 10Gbit, and 10Gbit cost more across the board.
The biggest barrier to wide deployment of 10Gbit was its innate incompatibility with existing infrastructure. It required different optical cables, used different media, and was not backwards compatible with 1Gbit or 2Gbit. Needing to rip and replace all HBAs and storage controllers at once, not to mention an entire data center cable plant, would be not only prohibitively expensive but operationally impossible in the "always on" data centers that power today's global businesses.

It became clear because of these factors that the optimal speed for nodes would be 4Gbit. However, there was still a case to be made for ISLs at 10Gbit.
Replacing the optical infrastructure would be less of a technical issue for backbone connections, because there are typically far fewer of them than there are node connections. Additionally, some high-end installations really do require their switch-to-switch connections to run faster than 4Gbit. Indeed, some networks require backbones to run at far higher than 10Gbit speeds. No matter how fast an individual interface can be made, there always seems to be an application that needs more bandwidth. Brocade decided to solve this with trunking for 4Gbit interfaces, giving 4Gbit networks performance parity with 10Gbit (and indeed beyond) while still lowering costs and simplifying deployments.
Another technical factor to consider is network redundancy. Most users configure links in pairs, so that there will be no outage if one link should fail. With a single 10Gbit link, any component failure will result in an outage, which means that the minimum realistic configuration between two switches is 20Gbits (2x 10Gbit links). Relatively few applications require so much bandwidth between each pair of switches, and given the cost of 10Gbit interfaces, redundancy would be harder to justify to management when purchasing a SAN.
To fully appreciate this, consider the performance parity case. If three 4Gbit links are configured and one fails, the channel is 33% degraded. For a network with the exact same performance requirement, a single 10Gbit link is needed, which is more expensive than the three 4Gbit interfaces and requires more expensive single-mode optical infrastructure. If that link fails, the network has an outage because 100% of bandwidth is lost, thus requiring a second expensive 10Gbit link to be provisioned, even though the additional performance is not required. If a 10Gbit proponent were to argue that two times the performance were really needed, the 4Gbit proponent could configure six 4Gbit links, which would still cost less, have higher availability, and perform identically.
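The availability side of this argument reduces to a one-line calculation: what fraction of bandwidth survives a single link failure? A trivial sketch:

    # Surviving bandwidth fraction after one link in a group fails.

    def surviving_fraction(links):
        return (links - 1) / links if links > 1 else 0.0

    print(surviving_fraction(3))  # 3x 4Gbit trunk: 0.67 -> 33% degraded
    print(surviving_fraction(1))  # 1x 10Gbit link: 0.0  -> total outage
    print(surviving_fraction(6))  # 6x 4Gbit trunk: 0.83 -> 17% degraded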
All of this adds up to substantial technical advantages for 4Gbit over 10Gbit. Until mainstream nodes can saturate 4Gbit channels, this is likely to remain the mainstream interface speed for storage networks.
Economic Drivers for Native 4Gbit FC

In the final years of the 20th century, companies were buying technology for its own sake, regardless of proven value proposition. In the early 21st century, however, the overall global economic downturn caused the high-tech industry to adapt: any new technology had to provide end users with a proven Return on Investment (ROI) in order to be adopted, so technology companies began to reevaluate their value propositions before going to market with new products. Since 4Gbit interfaces could provide more real technical benefit than 10Gbit in most cases, it became a question of which technology could lower the total cost of ownership the most, thus providing the highest ROI.
When using 10Gbit interfaces, the lowest speed possible on a link is, obviously, 10Gbit. If a network designer feels that less performance is needed, and that less cost would be appropriate, there is no way to install part of a 10Gbit pipe. With 4Gbit trunked interfaces, the granularity of configuration is much finer: a designer can start with one 4Gbit link and add more links as needed, if real performance data justifies the added cost.
4Gbit interfaces use the same low-level technology and standards as 1Gbit and 2Gbit across the board: the encoding format is just one example. One way to think of a 4Gbit switch is that it is like running a 2Gbit switch with a higher clock rate. The net result is that 4Gbit products can be marketed at about the same price as the existing 2Gbit products. 10Gbit, on the other hand, is fundamentally different: it uses technology that requires different components, which are all much lower volume. This is true to such an extent that current price projections indicate that three 4Gbit links will cost quite a bit less than one 10Gbit link, so even deploying equal bandwidth is more economical with 4Gbit.
With 4Gbit, redundancy and performance can be decoupled to a greater extent than with 10Gbit: redundant configurations can start at 8Gbit (2x 4Gbit) at a fraction of the cost of a non-redundant 10Gbit link, and can scale up to trunked configurations supporting far more bandwidth than 10Gbit: Brocade 4Gbit ASICs support up to 256Gbit configurations using frame-based plus exchange-based trunking algorithms.
Not only were 10Gbit interfaces more expensive, but the optical infrastructure users already installed for 1Gbit and 2Gbit would not work with 10Gbit devices. 10Gbit interfaces require expensive single-mode fiber, and the vast majority of data centers today are wired with multi-mode fiber. 4Gbit, on the other hand, could use the existing cable plant, and could support the same SFP interface used for 1Gbit and 2Gbit. This meant that media and cable plants could be designed to run at all three speeds, providing backwards compatibility, whereas 10Gbit installations would require forklift upgrades. Since 4Gbit products cost less than 10Gbit even at performance parity, and installation would be less expensive as well, the economic debate came out firmly on the side of 4Gbit, just as had the technical discussion.
Adoption of Native 4Gbit FC

At every point in the price / performance / redundancy / reliability map, 4Gbit is more desirable than 10Gbit. All major Fibre Channel vendors have 4Gbit on their roadmaps, including switch, router, HBA, and storage manufacturers. The Fibre Channel Industry Association has officially backed this movement, and it is expected that most FC equipment shipping by the end of 2005 will run at this speed. Indeed, at the time of this writing, Brocade has already been shipping 4Gbit products since late 2004.

Even though the benefits are clear and numerous, 4Gbit will not fully penetrate the Fibre Channel market immediately. Like any new technology, 4Gbit FC is expected to follow a curve of adoption, with different market penetration extents and different end-user benefits at different points on the timeline.
During the early-adoption time, 2Gbit native switches will still be in high-volume production. At first, the 4Gbit technology will be available only in selected "pizza box" switches like the SilkWorm 4100. It is usual for director-class products to follow behind switches by at least several months, since modular platforms are by nature harder to engineer, test, and market. This is why the Brocade 48000 shipped later than the 4100. During the interim period, 4Gbit switches will be deployed in stand-alone configurations, as the cores and/or edges of small to medium CE networks, and as edge switches in larger SANs.
Once 4Gbit blades begin to ship in higher volume, SilkWorm 24000 2Gbit directors at the edge of fabrics will simply have all net-new blades purchased with SilkWorm 48000 4Gbit chips. There is probably no real incentive for most users to throw out their existing 2Gbit blades, so it is likely that 4Gbit ports will simply sit alongside the existing 2Gbit interfaces within existing chassis. [126] The new 4Gbit blades will replace 2Gbit ISLs going to the core. Directors at the core of large SANs will either have their blades upgraded (4Gbit blades purchased and old blades transferred to edge chassis) or, in some cases, the entire core chassis may be migrated to the edges of a fabric.

The time lag between edge switches and directors is not considered to be a problem: the industry does not believe that 2Gbit is by any means obsolete. Most customers do not immediately require 4Gbit interfaces, and many customers will be able to use their 2Gbit switches for years to come. In fact, it is likely that 2Gbit switches will still be shipping for all of 2005 and even into 2006; they will simply decline in volume over that time.

[126] Brocade will offer 4Gbit blades that can co-exist with SilkWorm 24000 2Gbit blades in the same chassis, but at least two other vendors require forklift chassis upgrades. Be sure to ask if a 2Gbit chassis purchased today will support 4Gbit and 10Gbit blades in the future, and if these can co-exist with existing blades in an existing chassis.


Some time after the first 4Gbit switches ship, node vendors will start to come out with 4Gbit interfaces. Most users will not have an immediate need for, e.g., 4Gbit HBAs, so it is likely that only net-new installations will use this speed. (This is why backwards compatibility with 1Gbit and 2Gbit was so important: it will take years for the installed base to become purely 4Gbit.)

By the end of 2005, it is expected that all major vendors will ship 4Gbit interfaces by default on products in every segment, and that the vast majority of green-field deployments will use this speed almost exclusively.

8Gbit FC (Frame Trunked and Native)


Brocade offers 8Gbit FC trunks on all of its 2Gbit platforms today. 8Gbit trunks are created by striping data across four 2Gbit channels to form one 8Gbit pipe. It is also possible to trunk two native 4Gbit interfaces on products which support that link rate; this has the same effect. Trunking can be used to resolve or proactively prevent performance bottlenecks in the network, which is where high-speed links are most needed.
In the future, it is expected that storage controllers and some hosts will need higher speeds on their network interfaces as well, and trunking cannot easily be used to solve this challenge. Unfortunately, the theory that 10Gbit would be the next logical step for node interconnects has run into cost and technology problems, as discussed under "10Gbit FC" later. As a result, the FCIA announced that its members have ratified the extension of the Fibre Channel roadmap to include native 8Gbit speeds on a single interface.
This should allow each interface on a node or switch to support 1Gbit, 2Gbit, 4Gbit, or 8Gbit, all using the same media and cable types. The intent is to allow customers to preserve their existing infrastructure investments and avoid costly forklift upgrades, which would be needed to support 10Gbit technology.
In fact, at the time of this writing, 8Gbit products are already in late stages of development, and so some additional details are now available about this technology. It is expected that 8Gbit products will sell at a premium above 4Gbit, and that they will of course require new SFP media to operate at that speed. In general, 8Gbit can operate over the same optical infrastructure as 4Gbit, but it is advisable to run some tests - e.g. for dB loss - to make sure that the cable plant is sufficiently reliable. For a given cable quality, 8Gbit may support a shorter distance than 4Gbit, in the same way that 2Gbit supported shorter distances than 1Gbit. Finally, it seems almost certain that 8Gbit-capable media will not auto-negotiate all the way down to 1Gbit; they will support 2Gbit, 4Gbit, and 8Gbit negotiation. The SFP industry realized that it would be costly and complex to add 1Gbit support, and did not expect customers to pay a premium for 8Gbit media only to connect them to 1Gbit devices. There is a simple workaround for this: if you intend to connect 1Gbit devices to an 8Gbit switch, use 1Gbit, 2Gbit, or 4Gbit SFPs to do so.

10Gbit FC
10Gbit FC uses a different low-level encoding format (p 524) than any of the other port speeds - 64b/66b instead of 8b/10b - so a 10Gbit FC link has the throughput of three 4Gbit links: 10Gbit can be thought of as equivalent to 12Gbit from a payload-carrying standpoint. On the other hand, at the time of this writing, three 4Gbit links cost much less than one 10Gbit link, and have higher availability: if a 10Gbit link fails, the connection is 100% down, whereas if a 4Gbit link fails in a 3-port trunk, the link is just degraded.
Perhaps more to the point, 10Gbit has fundamentally different requirements vs. any of the other link speeds across the board. 1Gbit, 2Gbit, 4Gbit, and 8Gbit can all use SFPs and multi-mode fiber, but 10Gbit uses XFPs and more expensive single-mode fiber. Most existing data center infrastructure is designed with multi-mode fiber, and virtually all existing SAN components are designed to receive the 8b/10b format; substantial reengineering is required for 64b/66b, both at the product and data center levels. This adds a total cost of ownership burden far beyond the massive price premium that 10Gbit interfaces currently demand.
This has kept 10Gbit adoption slow. In fact, there is widespread speculation that 10Gbit FC will simply never be implemented in hosts or storage devices, and that the industry will bypass it by adopting 8Gbit and then 16Gbit or faster link speeds based on the 8b/10b encoding method. However, there is a case to be made in favor of 10Gbit links for DWDM extension, since these products already have 10Gbit interfaces today. Brocade has therefore developed a 10Gbit FC blade for the Brocade 48000 director to support these distance extension applications. See the sections "Brocade 48000" on page 405 and "FC10-6 10Gbit Fibre Channel" on page 417 for more information. The section starting on page 364 has an extended example of this use case.

32Gbit FC (Frame Trunked)


All of the Condor-based platforms support 32Gbit FC trunks. These are evenly balanced paths, so that one 32Gbit trunk is truly equivalent to a single link operating at that speed. The major difference is that trunks are comprised of multiple physical interfaces, and therefore have an inherent element of redundancy built in: if one link fails in a 32Gbit trunk, the remaining seven links will still deliver 28Gbits of bandwidth, so more than 87% of the original capacity will remain. A single physical 32Gbit link would have failed down to 0% in a similar scenario.


256Gbit FC (Frame and Exchange Trunked)


Up to eight 8-port frame-level trunks can be balanced at the exchange level by DPS to form a single 256Gbit path. In this case, a single link failure will still leave in excess of 98% of the aggregate capacity. This is most likely only applicable to large-scale CE networks formed from Brocade 48000 directors at both the core and edge layers.
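The degradation figures for both trunked configurations fall out of the same arithmetic, as this short illustrative sketch shows:

    # Remaining capacity after one physical 4Gbit link fails, for the
    # two trunked configurations described above (illustration only).

    LINK_GBIT = 4

    def remaining(trunks, links_per_trunk, failed_links=1):
        total = trunks * links_per_trunk * LINK_GBIT
        left = total - failed_links * LINK_GBIT
        return left, left / total

    print(remaining(1, 8))  # 32Gbit frame trunk:      (28, 0.875)     -> 87.5%
    print(remaining(8, 8))  # 256Gbit DPS of 8 trunks: (252, 0.984375) -> 98.4%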

1Gbit iSCSI and FCIP


In theory, it should be possible to achieve about 1/4 the performance of a Fibre Channel link by using commodity Ethernet equipment instead of purpose-built storage network gear. If this were true, it might allow organizations to deploy their SANs at a lower cost, if performance were not a factor. As it turns out, neither iSCSI nor FCIP can achieve anywhere near 1Gbit of real throughput on a 1Gbit interface. See "iSCSI" on page 51 for some of the reasons behind this.

10Gbit iSCSI and FCIP


Some industry commentators make an argument which goes something like this:

"1Gbit iSCSI cannot meet requirements for performance in today's SANs, much less meet requirements for future datacenter architectures involving ILM or UC. However, deploying 10Gbit interfaces with hardware iSCSI and TCP engines will allow 10Gbit iSCSI to almost match 4Gbit Fibre Channel performance. Therefore 10Gbit iSCSI shall have a market."
On the one hand, Brocade does carry numerous iSCSI and FCIP products, and is investing substantial R&D money in improving them. There are use cases for SAN technologies which do not require the performance of Fibre Channel, and Brocade intends to support them.

On the other hand, just as with 10Gbit FC, this is not expected to form a substantial percentage of the overall SAN market, because arguments like the one above are unlikely to convince many users. It is currently possible to implement 3x 4Gbit FC ports for about the same price as a single non-accelerated optical 10Gbit Ethernet link, and iSCSI protocol acceleration typically adds up to an order of magnitude to the cost of an interface. With Fibre Channel maintaining that kind of lead in price/performance, and also having about a decade's lead in maturity and market adoption, IP SAN interfaces are likely to remain a fringe market for the foreseeable future.

Appendix C: Study Guide
This study guide is divided into two sections: a set of questions, and a corresponding set of answers. After reading the main body of the book, go through the questions below and, on a separate sheet of paper, write your answers. If you cannot think of an answer, first try looking it up in the preceding chapters. If you cannot find the answer there, also try looking in Appendix D, starting on page 550.

Once you have completed the questions, double-check your answers by looking at the Answers section on page 546. You can also use that section as a last resort if you cannot think of an answer and cannot find it by looking it up in the main body of the book or in the FAQ.


Questions

5. Storage Area Networks (SANs) are primarily intended to provide _____ level connectivity between hosts and storage devices.

6. _______________ is by far the most common technology used for SANs today.

7. The traditional _______________ architecture failed to meet increasing storage performance and asset utilization requirements, which paved the way for SANs.

8. Existing network technologies like ________ were too slow and unreliable to support SANs, which prompted the SAN industry to invent the __________ protocol.

9. ____________ is a SAN solution category which allows improved asset utilization through reduced white space on storage arrays.
10. __________ is the industry leader in SAN infrastructure, carrying FC, iSCSI, FCIP, virtualization, and SAN management products.

11. __________ is a set of processes and procedures related to managing the way the business value of information changes over time.

12. Switches are distinguished from hubs in that switches do not have a ________ architecture.

13. When deploying a SAN to support mission-critical systems, industry best practices mandate a ____________ SAN architecture with redundant HBAs and multipathing software.

14. When communication between port-pairs in a switch or network of switches impairs communication between other ports, it is known as _____. This is distinguished from blocking, which actually prevents communication and is a typical characteristic of crossbar switches.

15. In order to optimize compute resources such as CPU cycles, a _________ solution should be considered.

16. The last step in the SAN planning process is to create a more detailed _________ document and _______ plan.

17. The ILM and UC trends intersect in the _________.

18. To justify the cost of a SAN, the design team should compare the hard and soft benefits of the SAN to the costs as part of a ________________ analysis.

19. When considering which protocol to use for a SAN, it is important to understand that the ________ protocol is vastly more efficient and mature than _______.

20. The first step in designing a SAN is to ___________.

21. The ______________ has the responsibility of coordinating the entire SAN effort and usually has the SAN project plan as a deliverable.

22. In order to optimize ___________, it is best to move tape systems onto the SAN.

23. SAN-enabled ____________ are a good way to increase application uptime by allowing a standby node to take over if a production node fails.

24. The mapping of SCSI over Fibre Channel is called ___, whereas the mapping of SCSI over IP is called _____.

25. Looking at Gigabit Ethernet and Fibre Channel from a maturity standpoint, one factor to consider is that ____ came first, and ____ was actually built on top of the ____ protocol layers.

26. Originally invented by Brocade, ____ is now the industry-standard protocol for routing between FC switches in a fabric.

27. The time during which the backup runs is called the _______, and its maximum size is determined by the length of time that the business can tolerate the associated performance degradation or application outage.

28. _________ is the fundamental storage protocol that lies under both FC and IP SAN technologies.

29. To connect a host to a Fibre Channel fabric, a card called a _________ is required.

30. To achieve even a fraction of FC performance, iSCSI hosts require an expensive _____________.

31. _______ are sets of processes and overall design and management philosophies, not specific products.

32. Currently shipping Fibre Channel products support the following link rates: _____________________.
33. The FC standards also provide for the following link rates: ________________, some of which are obsolete and some of which are expected to ship in the future.

34. Two important concepts for SAN designers moving forward are __________, both of which are related to virtualizing resources, and neither of which is currently available in feature-complete solutions.

35. In order for devices on a SAN to discover each other, they need to register with and inquire from the _____, which is built in to FC switches but generally requires external hardware in an iSCSI network.

36. ____________ is a solution category related to moving data between storage subsystems, e.g. when old systems are coming off of lease.

37. The Fibre Channel equivalent of an Ethernet hub uses the rather limited ________ protocol.

38. In order to achieve faster performance between switches than a single ISL can support, Brocade supports two link aggregation methods: ____ and ____.

39. Almost all companies use _____ or _____ instead of iSCSI when they want to support storage over IP.

40. Regulatory requirements and fiduciary duty to investors are increasingly driving IT departments to implement ________ solutions, which are facilitated by SANs mapped over a MAN or WAN.

41. __________ is a category of SAN solution used in most other SAN solutions, which results in more efficient utilization of storage assets.

42. ____ is the concept that resources such as CPU power, RAM, and storage capacity could be provided in a manner similar to an electric power grid.

43. In an HA cluster or UC solution, compute nodes need access to each other's data sets to enable application mobility. This means building the cluster onto a ____.

44. JBODs and SBODs are almost never used as primary storage in mission-critical applications. Such needs are usually better met by ____ arrays.

45. _______ in the context of SANs are behaviors that devices must follow in order to communicate.

46. SANs have been used to connect multiple processing nodes to scale ____________, either through parallel operations or sequential workflow optimization.

47. Running backups over ____ robs hosts of needed CPU power, whereas running them over ____ is even more efficient than DAS.

48. Using the FC protocol guarantees _________ and timely frame delivery with negligible error rates.

49. _______ pose the greatest challenge for compatibility testing within storage networks, regardless of protocol.

50. In a formulaic resilient CE fabric, ____ core switches interconnect many edge switches.

51. Fibre Channel SANs almost always outperform DAS, but ______ most often does not.

52. FC links can be extended across up to a hundred kilometers or so of dark fiber using long-wavelength ____.

53. ____ allows an organization to determine where data belongs at any point in time.

54. UC is being driven primarily by three factors: ______.

55. There are five phases to the SAN planning process for green-field deployments: _______________________.

56. There are five layers to the UC and ILM data center architectures: __________________.

57. The place where ILM and UC intersect is the _____.

58. Specific _______ requirements must be gathered to determine what the SAN is supposed to accomplish for the organization.

59. Compatible devices are capable of being _____.

60. If devices are not compatible, further analysis is _____ because the network will simply not function.

61. Designers should try to support initial performance requirements, and also _________.

62. ______ is a measure of how often service personnel need to touch a system.

63. ______ is a measure of how much time a system is able to perform its higher-level functions.

64. ______ is a somewhat subjective measure of, among other things, how easy it is to fix problems in a SAN.

65. ______ allows multiple fabrics to be controlled from a single management point.

66. ______ automatically checks the SAN against evolving best practices and has automated housekeeping features such as looking for unused zones.

67. ______ refers to how large a network can become without needing to be fundamentally restructured.

68. The most common SAN topology is _____.

69. ______ allows native FC ISLs to cross very long distances while maintaining full performance.

70. The rule of thumb is that it takes one _____ per kilometer of distance for full-speed 2Gbit operation.

71. Performance in a network will ______ over time.

72. _____ are the most common performance-limiting factor in a SAN.

73. The mechanism which carries traffic across a SAN between edge devices is known as the SAN ______. FC and iSCSI are two examples.

74. ______ is a condition in which more devices might need a resource than that resource can serve.

75. ______ is a condition in which devices actually are trying to use a path beyond its capacity, so some of the traffic destined for that path must be delayed.

76. ______ refers to a queuing problem, not merely to contention for bandwidth on a link.

77. ______ is how long it takes to forward a frame.

78. ______ is often matched to the ratio of storage to hosts.

79. Using the ______ product will help to automate UC and other advanced solutions by managing the complex relationships between hosts, storage, operating systems, and applications.

80. ______ is the practice of optimizing traffic by putting ports that communicate close together.

81. ______ is the practice of connecting hosts to one group of switches, and storage to a different group.

82. ______ are two features which allow traffic to be balanced across ISLs while preserving in-order delivery.

83. The process of taking a design from paper all the way through release to production is ________.

84. Avoid single points of failure when selecting racks for switches by ______.

85. The most effective access control mechanism for a SAN is ______, because it is enforced by both the Name Server and the ASIC.

86. It is important to ____ a SAN before releasing it to production to verify that all switches, routers, devices and applications are capable of recovering from faults.
87. Maintaining a _____ can help with tasks such as switch and fabric maintenance, troubleshooting, and recovery.

88. Users interested in clean, stable fabric environments should run _____ regularly.

89. It is possible to use the _____ product to optimize storage performance at branch offices.

90. When evaluating candidate SAN designs, it is appropriate to consider which of the following factors:
    a. Compatibility
    b. RAS
    c. Scalability
    d. Performance
    e. Manageability
    f. Total solution cost
    g. All of the above

91. Any SAN design should meet or exceed all requirements, but most designers consider _____ to be the most important consideration when making trade-offs.

92. If a fabric has a single point of failure, and the SAN has only one fabric in it, then the overall architecture is considered to be ______.

93. Connecting a host to the same switch as its primary storage is an example of the use of ______.

94. ILM and UC are two trends which are likely to increase the use of _____ fabric topologies, in which hosts are connected to one group of switches and storage to a different group.

95. To maximize fabric scalability, compatibility, and reliability, when planning zoning for a fabric it is best to zone HBAs so that:
    a. All HBAs accessing a given storage port are in the same zone.
    b. Hosts with a common OS type are all zoned together, and separated from all other OSs.
    c. Each HBA is in its own dedicated zone.
    d. All devices in the fabric are in one zone.
    e. If possible, zoning should be avoided, since it is hard to manage.

96. If every switch in a fabric is directly connected to every other switch, this is an example of a _____ topology.

97. The most reliable way to connect fabrics across MAN or moderate WAN distances is by using ____ connections, either over dark fiber or xWDM equipment.

98. The FCIA has approved the _____ line rate, which has now replaced 2Gbit as the basic rate for FC fabrics.

99. Dividing a director into two or more partitions - using zoning, VSANs, or a similar scheme such as the dual-domain capability of a Brocade director - will make it into a highly available system. (True/False)

100. Some of the options available for increasing the performance of a fabric include ________.

101. It is necessary for a SAN designer or project manager to prepare and maintain proper _____ to ensure that future administrators will know what has been done and why various decisions were made.

102. The simplest fabric design is the _____ topology, but this is only suitable for very small deployments, due to its limited scalability, performance, and reliability.

103. Proper use of zoning will improve fabric services scalability and reliability through Brocade's automatic use of ______ scoping.

104. The maximum number of ports currently supported by Brocade inside a single-domain director is _____. The smallest switch offered by Brocade has ______ ports.
105. The single biggest factor in determining how vulnerable a SAN is to DoS attacks or failures is whether or not the SAN uses a ______ design.
Answers

106. block
107. Fibre Channel (FC)
108. Directly Attached Storage (DAS)
109. Ethernet and IP ; Fibre Channel
110. storage consolidation
111. Brocade
112. Information Lifecycle Management (ILM)
113. shared bandwidth
114. Redundant (A/B) fabrics
115. congestion
116. Utility Computing (UC)
117. SAN design ; implementation plan
118. Storage Area Network (SAN)
119. Return on Investment (ROI)
120. Fibre Channel ; iSCSI
121. gather business-oriented requirements
122. SAN Project Manager
123. Backup, restore, and LAN performance
124. HA clusters
125. FCP ; iSCSI
126. Fibre Channel ; Gigabit Ethernet ; FC-0 and FC-1
127. Fabric Shortest Path First (FSPF)
128. backup window
129. SCSI
130. Host Bus Adapter (HBA)
131. iSCSI hardware accelerated HBA
132. Utility Computing (UC) and Information Lifecycle Management (ILM)
133. 1Gbit, 2Gbit, 4Gbit
134. 133Mbaud, 266Mbaud, 531Mbaud, 8Gbit, 10Gbit
135. ILM and UC
136. Name Server
137. data migration
138. Fibre Channel Arbitrated Loop (FC-AL)
139. frame-level trunking ; Dynamic Path Selection (DPS)
140. NFS ; CIFS


141. Disaster Tolerance (DT), Disaster Recovery (DR), or Business Continuity and Availability (BC&A)
142. storage consolidation
143. UC
144. SAN
145. Redundant Array of Independent Disks (RAID)
146. Protocols
147. compute power
148. TCP/IP
149. On-time and in-order
150. Storage-related services, such as FC fabric services
151. two or more
152. iSCSI
153. SFPs, GBICs, or other similar laser media
154. ILM
155. Lowering capital costs, increasing management efficiency, and improving application performance
156. gathering requirements, developing technical specifications, estimating cost, performing an ROI analysis, and creating a detailed design and rollout plan
157. clients, LAN, compute nodes, SAN, storage
158. SAN
159. business-oriented
160. connected to each other directly or across a network
161. irrelevant
162. all anticipated future increases in performance demand
163. Reliability
164. Availability
165. Serviceability
166. Fabric Manager
167. SAN Health
168. Scalability
169. Core/Edge (CE)
170. Extended Fabrics
171. BB credit
172. increase

173. Hosts and storage devices


174. protocol
175. Over-subscription
176. Congestion
177. Blocking, or Head of Line Blocking (HoLB)
178. Latency
179. ISL over-subscription
180. Tapestry Application Resource Manager (ARM)
181. Locality
182. Tiering
183. Frame-level trunking and exchange-level Dynamic Path Selection (DPS)
184. SAN implementation
185. separating redundant fabrics into different racks and providing separate power grids and UPSs
186. hard zoning
187. stage and validate
188. configuration log
189. SAN Health
190. Tapestry Wide Area File Services (WAFS)
191. G; all of the above
192. Application availability
193. Non-resilient and non-redundant
194. Locality
195. Tiered
196. C; each HBA should have its own zone
197. full mesh
198. Native FC
199. 4Gbit
200. False: one of anything is not HA
201. adding ISLs or IFLs, increasing line rates, using trunking and/or DPS, localizing flows
202. SAN documentation
203. cascade
204. Registered State Change Notification (RSCN)
205. 256; 8
206. redundant (A/B) fabric
Appendix D: Frequently Asked Questions

Q: What SAN planning process does Brocade use?
A: There are five phases in the recommended SAN planning process: gather the requirements of the SAN through interviews, develop preliminary technical specifications, estimate the project cost, calculate ROI, and finally create a detailed SAN design and rollout plan.
Q: What is a SAN project plan?
A: The SAN Project Plan may be very similar to other IT project planning tools used within your company. The key items it should include are: notes and documents to support collected data, such as interviews and device surveys; interpretations of the data; the design which emerges from the data; a list of required equipment and associated costs; and a plan for implementing, testing, releasing to production, and managing the SAN.
Q: Generally, who is included on the project team?
A: The SAN Project Manager and SAN Designer are arguably the two most important roles. The project manager will coordinate the effort and the designer will translate business needs into technical requirements. It is not uncommon for both roles to be filled by the same person. The technical team will consist of SAN Administrators, System Administrators, Storage Administrators, IP Network Administrators, Database Administrators, and Application Specialists. The members of the team should have a strong interest in, or decision-making authority related to, the project.
Q: What is the difference between a business requirement and a business problem?
A: A business problem is a statement about what needs to be fixed, or at least improved, to help the organization accomplish its mission. For example, "Backups are interfering with customer service." A business requirement will state a direction for the solution to one or more business problems, and can be used as a guideline for choosing the appropriate solution. For example, "The SAN must complete the backup in no more than x hours, and remain online during the process. This will save $y by increasing productivity."
Q: What should be included in business requirements?
A: Be sure to gather specific business requirements, with each requirement statement including what needs to happen, when it needs to happen, and how much money or mission impact is involved if the requirement is not met. This answers "what," "when," and "why." "How" is answered by a subsequent step. "Where" is generally self-evident.
Q: How do I develop technical specifications for a SAN?
A: The specification document will be created in the planning phase. A number of factors must be taken into consideration in addition to the business requirements statement. The locations of SAN equipment, the mechanisms for connecting the locations together, estimated bandwidth, uptime, and the number of attached devices must all be analyzed when creating the specifications document.
Q: How do I justify my project?
A: As part of the ROI analysis you will have to produce an estimated net benefit. This is done by subtracting the estimated cost of equipment from the projected gross benefits. The projected benefits may include things like increased productivity, lower management costs, reduced capital spending, and revenue gains. This task may be best suited for your accounting department, or at least should be taken on in partnership with them.
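To make the arithmetic concrete, the net-benefit calculation can be sketched in a few lines of Python. All figures below are invented placeholders, not Brocade guidance; substitute the numbers from your own surveys and accounting data.

    # Sketch of a net-benefit (ROI) estimate; all dollar figures are
    # hypothetical placeholders.
    projected_gross_benefits = {
        "increased_productivity": 250_000,   # $/year, assumed
        "lower_management_costs": 120_000,   # $/year, assumed
        "reduced_capital_spending": 80_000,  # $/year, assumed
    }
    estimated_equipment_cost = 300_000       # one-time cost, assumed

    net_benefit = sum(projected_gross_benefits.values()) - estimated_equipment_cost
    print(f"Estimated first-year net benefit: ${net_benefit:,}")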
Q: What is the most commonly used SAN technology?
A: Fibre Channel. Period.
Q: iSCSI is supposed to be cheaper, but there do not seem to be many real-world deployments. Why is it not being used extensively?
A: Although many vendors, including Brocade, offer iSCSI solutions, it is an immature and unreliable protocol with marginal ROI and many hidden costs. FC products have had price reductions which eroded the iSCSI value proposition, and serial ATA is available in the low-end market. This is squeezing out iSCSI from both ends of the market, and its long-term viability is now in question.
Q: What is the difference between an ISL and an IFL?
A: An Inter-Switch Link, or ISL, is the connection between two FC switches in a fabric. An Inter-Fabric Link, or IFL, is the connection between an FC switch and an FC-FC router. LSANs cross IFLs. An IFL allows traffic to flow between different fabrics in a Meta SAN, whereas an ISL allows traffic and services to flow between switches within a single fabric.
Q: How can SANs be extended over long distances?
A: There are many options to extend an FC network over long distances including SONET/SDH, xWDM, ATM, and native FC over dark fiber. With some limitations, IP may also be an option. Both ATM and SONET/SDH solutions have very high performance and reliability compared to IP SAN solutions, but also tend to cost more.
Q: What services do Fibre Channel switches provide?
A: Unlike IP SAN switches, all Brocade FC switches have a robust group of built-in services. Fabric services include a name service, management services, high-speed routing services, auto-discovery and configuration, and so on.
Q: What is driving the increased Fibre Channel speeds?
A: There are always increasing demands for performance in networking. One example is the need to reduce backup windows. Another is the increasing need for high-speed long-distance connections to support disaster recovery. ILM and UC architectures are also drivers.
Q: Will my SAN support HA clustering?
A: All modern clustering methods have one thing in common: in order for one node to be able to take over an application if another node fails, it needs to have access to the data set that the failed node was using just before the crash. As long as your SAN provides that connectivity, it should be a good basis for building HA clusters.
Q: What is SAN implementation?
A: This is the process of taking your paper design to
physical setup, through staging and testing, all the way
through release to production.
Q: I am designing dual fabrics; what are the implementation considerations?
A: The concept of dual fabrics is to avoid any single point of failure. For high-availability fabrics, ensure that you have separate power circuits available, and mount redundant devices into different racks.
Q: What is the difference between hard and soft zoning?
A: Hard zoning is enforced by ASICs, while soft zoning is enforced by the name server. All Brocade platforms shipped since about the turn of the century support some form of hard zoning in all usage cases. Older switches supported hardware zoning only when zones were defined by PID.
Q: How do I prepare my SAN to go into production after it has been cabled and configured?
A: Prior to transitioning your fabric to production, it is important to validate the SAN by establishing a profile and injecting faults into the fabric to verify that the fabric and the edge devices are capable of recovering.
Q: Will keeping a change management log be helpful?
A: A diligently maintained configuration log can help you
with many tasks such as switch and fabric maintenance as
well as troubleshooting and recovery.
Q: Zoning is backed up to every switch, but what about the rest of the configuration parameters?
A: The best practice is to create a backup of each switch configuration on a host when implementing a new SAN, changing a switch configuration, or adding or replacing a switch in the SAN.
Q: With so many protocols available, which should be used in my SAN?
A: Fibre Channel is the dominant SAN transport because of the importance for even lower-tier storage networks to have high performance and reliability. Brocade supports other options, but FC should be the default choice unless there is a comprehensive business case showing why another option should be used, and proving that it will actually work properly.
Q: What are common performance limitations in a SAN?
A: SAN attached devices, the SAN protocol, and link
speeds are usually the bottlenecks.
Q: What is the impact of protocol selection on the SAN?
A: It affects performance, reliability, scalability, manageability, cost, and indeed most other aspects of SAN design. The best approach is to use a protocol with a long and proven track record of production deployment.
Q: My SAN will initially be used as a low-end SAN but I would like to scale in the future; is Fibre Channel an appropriate choice?
A: Fibre Channel networks can be configured to meet any performance requirement. Also, Brocade SANs can be designed for scalability and investment protection.
Q: What are some of the cost issues I should think about when designing ISLs and IFLs?
A: The cost-to-performance ratio is probably the most obvious, but some designers may forget to consider the total cost of a connection. This means the cost of cables and connectors. It also means the cost of downtime if redundant links are not used, and the cost of lost productivity if links are allowed to congest massively.
Q: What is over-subscription?
A: Over-subscription refers to a condition in which more devices might need to access a resource than that resource could fully support. In many instances, over-subscription is deliberately engineered into a SAN to reduce cost.
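A simple port-count ratio is often used as a first-order measure of over-subscription. The sketch below assumes the simplest case, an edge switch whose host ports and ISLs all run at the same rate; the port counts are hypothetical.

    # First-order over-subscription estimate: host-facing bandwidth
    # divided by ISL (uplink) bandwidth. Counts and rates are assumed.
    host_ports, host_rate_gbit = 24, 4   # 24 hosts attached at 4Gbit
    isl_ports, isl_rate_gbit = 4, 4      # 4 ISLs at 4Gbit

    ratio = (host_ports * host_rate_gbit) / (isl_ports * isl_rate_gbit)
    print(f"Over-subscription ratio: {ratio:.0f}:1")   # prints 6:1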
Q: Does over-subscription cause congestion?
A: No. However, it does create the potential for congestion. Congestion is a condition in which devices are actually trying to use a path beyond its capacity, so some of the traffic destined for that path must be queued and transmitted after a delay.
Q: What can I do to avoid congestion in my SAN?
A: The most common approaches for dealing with congestion include using locality, using faster links such as 4Gbit or 10Gbit interfaces, or using hardware trunking to combine individual links into higher-rate paths.
Q: Do Brocade switches have Head of Line Blocking?
A: No. Head of Line Blocking occurs on poorly designed switches. Brocade does not ship products which are capable of exhibiting this misbehavior. However, other SAN infrastructure vendors do.
Q: How do Brocade switches have such low latency?
A: Brocade uses cut-through routing, which allows a frame to be transmitted out the destination switch port while it is still being received into the source port.
Q: How do I determine the amount of bandwidth that will be required for any given path?
A: Analyze how much data each application will need to move over that path, and then apply one of several calculation methods. For example, it is possible to add up all application peak loads, or to take their average loads, or simply to apply a rule of thumb such as using the ratio of hosts to storage ports.
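The two calculation methods differ mainly in how conservative they are. The sketch below contrasts them using made-up per-application loads; real numbers would come from measurement or vendor sizing data.

    # Sizing a path two ways, with hypothetical application loads in MB/s.
    peak_loads = {"backup": 180, "oltp": 60, "reporting": 40}
    avg_loads = {"backup": 90, "oltp": 25, "reporting": 10}

    worst_case = sum(peak_loads.values())   # conservative: all peaks coincide
    typical = sum(avg_loads.values())       # optimistic: average utilization
    print(f"Sum of peaks: {worst_case} MB/s; sum of averages: {typical} MB/s")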
Q: In addition to increasing SAN performance, what other benefits does locality provide?
A: Locality improves RAS as there are fewer links and therefore fewer total components in the network, thus reducing cost and improving reliability numbers like MTBF.
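The MTBF effect follows from series reliability: for components in series, failure rates add, so removing links raises the system MTBF. A minimal sketch, with arbitrary component figures:

    # System MTBF for components in series: the reciprocal of the
    # summed failure rates. The hour figures are arbitrary examples.
    component_mtbf_hours = [200_000, 200_000, 500_000]  # e.g. two links, one switch

    system_failure_rate = sum(1.0 / m for m in component_mtbf_hours)
    system_mtbf = 1.0 / system_failure_rate
    print(f"System MTBF: {system_mtbf:,.0f} hours")  # lower than any single part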
Q: Do Brocade switches offer load balancing?
A: Brocade switches have an option that allows FSPF to reallocate routes whenever a fabric event occurs. This feature is called Dynamic Load Sharing (DLS) because it allows routes to be reset dynamically under conditions that can still guarantee in-order delivery. Also, Brocade platforms support one or more forms of hardware trunking.
Q: Does trunking work well over long distances?
A: Yes, although different trunking methods work over different distances, or work best in different ways.
Q: What factors affect compatibility?
A: Protocols, frame formats, node-to-node compatibility, node-to-switch storage services behaviors, and switch-to-switch services exchange.
Q: How important is it to plan for future expansion?
A: Always consider performance and scalability requirements of the initial deployment, and all anticipated future increases in demand. Network requirements tend to increase rather than decrease over time, and so all SAN protocol and topology choices should be able to accommodate a wide range of scenarios.
Q: What can impact SAN performance?
A: Areas to consider when thinking about SAN performance include protocols, link rates, congestion, blocking, and latency.
Q: Should I be more concerned with congestion or blocking?
A: Congestion does not stop communication between endpoints entirely; it just slows it down somewhat for a period of time. Blocking, more properly called Head of Line Blocking (HoLB), can actually stop communication for an extended period of time and is therefore an area of concern. Brocade does not sell any product which exhibits HoLB, and any such product should be avoided.
Q: How should I prioritize RAS?
A: Application availability is the most important consideration in SAN designs overall because an availability issue can have an impact at the end-user level. Reliability should be considered second because of the potential impact of a failed component to the SAN. Serviceability is usually of least concern; however, it should be considered.
Q: What SAN management tasks should be expected on a day-to-day basis?
A: Day-to-day management tasks generally include monitoring the health of the network, and performing adds, moves, and changes to the SAN itself and to the attached hosts and storage devices. Using Fabric Manager will simplify tasks associated with coordinating day-to-day management of multiple fabrics. SAN Health will vastly simplify proactive management, since it automatically checks the SAN against evolving best-practices and has automated housekeeping features such as looking for unused zones.
Q: When planning my SAN for scalability, what is the best approach?
A: To maximize the scalability of a SAN, it is always best to break it down into smaller fabrics. Use an A/B redundant model first, then split off other fabrics by function, geographical location, administrative groups, or by spreading storage ports.
Q: When planning for scalability, what limitations should be considered in the SAN design?
A: Limitations can be classified into five categories: manageability, fault containment, vendor support matrices, storage networking services, and the protocol itself.
Q: Which topologies are the most commonly used?
A: Just a few topologies are typically used as the basis for SANs, and these are combined or varied to fit the needs of specific deployments. The most common topologies for SANs include cascades, rings, meshes, and various core/edge designs.
Q: What is the best way to prevent denial of service attacks against a SAN?
A: It is never possible to make a system completely proof against deliberate or accidental DoS attacks. However, it is possible to make such events far less likely. Following security best-practices is a good start. Implementing sound management procedures helps, too. However, the single biggest factor in determining vulnerability to this form of attack is whether or not the SAN uses physically isolated redundant fabrics, with redundant HBA connections.
Q: What is the best long-distance method in a SAN?
A: Extended native Fibre Channel ISLs or IFLs over long distances are generally the easiest extension solutions to manage and have the highest performance. Long-distance ISLs require that the SAN designer have an understanding of buffer-to-buffer credits (BB credits).
Q: What are buffer to buffer credits (BB credits)?
A: In order to prevent frames from dropping, no port can transmit frames unless the port with which it is directly communicating has the ability to receive them. It is possible that the receiving port will not be able to forward the frame immediately, in which case it will need to have a memory area reserved to hold the frame until it can be sent on its way. This memory area is called a buffer. All devices in a SAN have a limited number of buffers, and so they need a mechanism for telling other devices if they have free buffers before a frame is transmitted to them. This mechanism is the exchange of BB credits.
Q: How do BB credits impact long distance links?
A: When using FC over long-distance links, BB credits become important. The rule of thumb is that it takes one credit per kilometer for full-speed 2Gbit operation. Given a fixed number of BB credits, a link can go twice as far at 1Gbit as at 2Gbit. With 4Gbit links, twice as many buffers per kilometer are required as with 2Gbit links. However, it is important to note that all currently shipping Brocade platforms support more BB credits than are needed to go the maximum distance supported by today's optical components. Realistically, it is necessary to move to a DWDM architecture to go beyond a hundred kilometers or so, regardless of how many credits a switch can supply, and the leading DWDM vendors also provide a credit mechanism which supersedes that of the switches. Note that BB credits do not apply to FCIP or other protocol-tunneled links in any significant way.

Glossary
Access Gateway: A switch operating mode based on NPIV in which the switch presents N_Ports to the fabric instead of E_Ports.
AL_PA (Arbitrated Loop Physical Address): The address that identifies a device on an arbitrated loop.
American National Standards Institute: See ANSI.
ANSI: The American National Standards Institute, a body that oversees standards development in the United States. Its T11 committee is responsible for FC standards.
AP (Application Platforms): Platforms designed to host storage applications.
API (Application Programming Interface): A defined interface through which one program can invoke the services of another; APIs allow software components to interact programmatically.
Application Platform: See AP.
Application Programming Interface: See API.
Application-Specific Integrated Circuit: See ASIC.
Application Resource Manager: A product for automating server and application provisioning in a Utility Computing model on a Brocade SAN. Also known as Tapestry Application Resource Manager, or Tapestry ARM.
Arbitrated Loop: A Fibre Channel topology in which up to 126 node devices share a loop.
ARM: See Application Resource Manager.
ASIC (Application-Specific Integrated Circuit): A chip designed for one particular function, as opposed to a general-purpose processor.
Asynchronous Transfer Mode: See ATM.
ATM (Asynchronous Transfer Mode): A cell-switched transport used in CAN, MAN, and WAN environments. ATM can carry higher-level protocols such as IP.
Backbone Fabric: See BB Fabric.
Bandwidth: The data-carrying capacity of a link or network.
BB_Credit: Buffer-to-buffer credit; the flow-control mechanism that determines how many frames a port may transmit before receiving notice that receive buffers are free.
BB Fabric (Backbone Fabric): A fabric containing one or more FCRs that connects edge fabrics together in a Meta SAN. Routers attach to a backbone fabric through their E_Ports.
Bloom: A Brocade FC ASIC with sixteen ports, used in the SilkWorm 3000 series and SilkWorm 12000, and in products (such as RAID arrays) sold by Brocade OEM partners. Supports 1Gbit and 2Gbit FC.
Bloom-II: The successor to Bloom, with lower power consumption. Used in the SilkWorm 3250, 3850, and 24000 and in Brocade OEM products.
Broadcast: Transmission of a frame to all devices on a network.
Bridge: A device that connects networks or devices that use different protocols or media.
Brocade: Founded in 1995, Brocade is the market leader in Fibre Channel networking infrastructure.
Buffer-to-Buffer Credits: See BB_Credit.
CAN (Campus Area Network): A network covering distances on the order of a campus, roughly one kilometer. Larger than a LAN (around 100 meters) but smaller than a MAN; CAN equipment is typically owned by the organization itself.
Carrier Sense Multiple Access with Collision Detection: See CSMA/CD.
Class Of Service: See COS.
CLI (Command Line Interface): A text-based management interface. The FCR and Brocade Fabric OS switches provide a CLI, among other management methods.
Coarse Wave Division Multiplexer: See CWDM.
Command Line Interface: See CLI.
Condor: A Brocade FC ASIC with 32 ports, first used in the Brocade 4100. Supports 1Gbit, 2Gbit, and 4Gbit FC; its companion ASIC, Egret, provides 10Gbit FC.
COS (Class Of Service): The designation of the delivery characteristics, such as acknowledgment and flow-control behavior, that apply to a frame.
CRC (Cyclic Redundancy Check): An error-detection code carried in each frame. Brocade ASICs check the CRC of frames in flight.
Credit: A unit of flow control: the number of frames an F/FL_Port can accept from an N/NL_Port, and vice versa, before buffers must be freed.
CSMA/CD (Carrier Sense Multiple Access with Collision Detection): The arbitration method used by Ethernet network interface cards (NICs) to share a collision domain.
CWDM (Coarse Wave Division Multiplexer): A wavelength multiplexer that carries fewer wavelengths per fiber than DWDM, at lower cost. See WDM and DWDM.
Cyclic Redundancy Check: See CRC.
Dark Fiber: Fiber-optic cable that is in place but not lit by transmission equipment; it can be leased and lit by the customer, for example for native FC links.
DAS (Direct Attached Storage): Storage connected directly to a host rather than through a network. DAS can still use Fibre Channel HBAs in a point-to-point configuration.
Denial of Service: See DoS.
Dense Wave Division Multiplexer: See DWDM.
Destination Fabric ID: See DFID.
Destination Identifier: See DID.
DID (Destination Identifier): The address in a Fibre Channel frame header that identifies the port to which the frame is being sent. For example, DID 010100 indicates domain 1, port 1. Frames are routed according to the DID.
Direct Attached Storage: See DAS.
DLS (Dynamic Load Sharing): A feature that allows FSPF to redistribute routes across equivalent paths when a fabric event occurs.
Domain ID: A number from 1 to 239 that uniquely identifies an FC switch within a fabric; it forms the high byte of the PIDs of devices attached to that switch.
DoS (Denial of Service): An attack, or accident, that makes a system unavailable to its users. A SAN's exposure to DoS is reduced by following security best-practices and by deploying physically isolated redundant (A/B) fabrics.
DWDM (Dense Wave Division Multiplexer): A wavelength multiplexer supporting many more wavelengths per fiber than CWDM, at correspondingly higher cost. See WDM and CWDM.
Dynamic Load Sharing: See DLS.
E_D_TOV (Error-Detect Time Out Value): The time a port waits for an expected response before declaring an error condition.
E_Port: Expansion port, the switch port type used to connect one switch to another. A connection between two E_Ports forms an ISL; an E_Port connected to an EX_Port forms an IFL.
Edge Fabric: A Fibre Channel fabric attached to an FCR through an EX_Port (not to be confused with the edge tier of a core/edge topology). Edge fabrics are components of a Meta SAN.
Egret: A Brocade ASIC that complements the 4Gbit Condor by providing 10Gbit ports.
ELWL (Extended Long Wavelength Laser): A ~1550 nm laser used for the longest-distance Fibre Channel links, beyond the reach of LWL. Used with SMF.
Error-Detect Time Out Value: See E_D_TOV.
Ethernet: The IEEE 802.3 network standard. Ethernet LANs originally ran at 10Mbps using CSMA/CD arbitration; Fast Ethernet runs at 100 Mbps and Gigabit Ethernet at 1 Gbps, with 10Gbps variants also available.
EX_Port: An enhanced E_Port on an FC-FC router, used to attach the router to an edge fabric. To the edge fabric an EX_Port looks like a standard Fibre Channel E_Port as implemented by Brocade, but unlike an E_Port it does not merge the fabrics it connects: fabric services terminate at the EX_Port, isolating each edge fabric.
Exchange: The FC construct within which N_Ports converse; an exchange contains one or more sequences.
Expansion Port: See E_Port.
Exported Device: A device in one FC fabric that is made visible to another fabric through an LSAN; for example, a device in Fabric 1 exported to Fabric 2 appears there as a proxy device.
Extended Long Wavelength Laser: See ELWL.
F_Port (Fabric Port): A switch port that connects to an N_Port, such as a host HBA or storage port.
Fabric: (1) A Fibre Channel network in which N_Ports attach to F_Ports. (2) A set of Fibre Channel switches connected by ISLs. (3) The ISLs and Fibre Channel switches considered together as one network. (4) The set of fabric services, such as the Storage Name Server, Management Server, and FSPF routing.

Fabric Identifier: See FID.
Fabric Loop Port: See FL_Port.
Fabric Operating System: See FOS.
Fabric Port: See F_Port.
Fabric Shortest Path First: See FSPF.
FC (Fibre Channel): The dominant transport for SANs. Unlike IP and Ethernet, FC was designed from the start for storage traffic.
FC-0: The physical layer of Fibre Channel.
FC-1: The encoding layer of Fibre Channel: 8b/10b for 1G, 2G, and 4G; 64b/66b for 10G.
FC-2: The layer defining framing, flow control, and ordered sets in Fibre Channel.
FC-3: The common services layer of Fibre Channel.
FC-4: The layer that maps upper level protocols (ULPs), such as SCSI and IP, onto FC.
FC-FC Routing Service: Also called the FCR service. Connects Fibre Channel fabrics through LSANs so that selected devices can communicate without the fabrics merging.
FCIP Tunneling Service: Tunnels FC traffic through TCP/IP networks, typically to connect SAN islands across an IP WAN. The tunnel endpoints present themselves to the fabric as E_Ports, so the fabrics on either side of an FCIP tunnel merge into one.
FC-NAT (Fibre Channel Network Address Translation): Address translation analogous to NAT in IP networking; it allows devices in fabrics with different or overlapping address spaces to communicate.
FCP (Fibre Channel Protocol): The mapping of SCSI onto Fibre Channel, used, for example, between host HBAs and storage arrays.
FCR (Fibre Channel Router): A device that performs FC-FC routing. Depending on the platform, an FCR may also provide other services such as FCIP tunneling or iSCSI gateway functions.
FCRP (Fibre Channel Router Protocol): The Brocade protocol that FCRs use to coordinate with one another, for example across a backbone fabric.
Fibre Channel: See FC.
Fibre Channel Router: See FCR.
Fibre Channel Router Protocol: See FCRP.
FID (Fabric ID): A number identifying a fabric within a Meta SAN. See Global Header, SFID, and DFID.
Field Programmable Gate Array: See FPGA.
Field Replaceable Unit: See FRU.
Flannel: A Brocade ASIC that performs FC-AL translation for FC fabrics; used in the SilkWorm 1000 series together with the Stitch ASIC.
FL_Port: Fabric loop port; the switch port type that attaches NL_Ports on a loop to a fabric.
FOS (Brocade Fabric Operating System): The operating system running on Brocade switches, from Fabric OS 4.x through 6.1 at the time of writing. See also XPath.
FPGA (Field Programmable Gate Array): Unlike an ASIC, an FPGA can be reprogrammed in the field; an ASIC offers higher performance for a fixed function.
Frame: The unit of data transmission, consisting of a Start-of-Frame (SoF) delimiter, a header, a payload, a CRC (Cyclic Redundancy Check), and an End-of-Frame (EoF) delimiter. The payload can be 0 to 2112 bytes (2048 bytes when crossing an EX_Port), and the CRC is 4 bytes.
FRU (Field Replaceable Unit): A component that can be replaced in the field without returning the system to the factory.
FSPF (Fabric Shortest Path First): The routing protocol used in Fibre Channel fabrics, originated by Brocade.
Full Duplex: Simultaneous transmission and reception on a link.
G_Port (Generic Port): A switch port that has not yet assumed a specific role; it can come up as an E_, F_, or FL_Port.
GBIC (Gigabit Interface Controller, or Converter): A removable transceiver module; the predecessor of the SFP.
Generic Port: See G_Port.
Gigabit Interface Controller: See GBIC.
Global Header: The addressing header used across the BB fabric of a Meta SAN. This interfabric addressing (IFA) header, defined in FC-FS, carries the SID and DID together with fabric identifiers so that PIDs can be translated between fabrics. It is used within the BB fabric.
HBA (Host Bus Adapter): The interface card that connects a host to a Fibre Channel SAN.
Host Bus Adapter: See HBA.
Hot Swappable: Able to be removed and replaced while the system remains powered on and operational.
IEEE (Institute of Electrical and Electronics Engineers): A standards body responsible for, among other things, the Ethernet standards.
IETF (Internet Engineering Task Force): The body that defines Internet standards.

iFCP Internet Fibre Channel Protocol - ,


FCIP
Fibre Channel IP WAN.

,
FCIP.
IFL Inter-Fabric L
ink

.
ISL. EX_Port
E_Port EX_Port EX_Port,
. . EX-IFL
EX2-IFL.
ILM Inform ation Life cycle Managem ent (
)
,


,

In Order Delivery . IOD
In-Band
Fibre Channel. FSPF FCRP -
in-band.
Initiator () Fibre Channel,
.
. HBA.
Information Lifecycle Management . ILM
Institute of Electrical & Electronics Engineers .
IEEE
Inter-Fabric Link . IFL
Internet Engineering Task Force . IETF
Internet Fibre Channel Protocol . iFCP
Send feedback to bookshelf@brocade.com

575

SAN

Internet Protocol . IP
Internet Storage Name Server . iSNS
Inter-Switch Link . ISL
IOD (In Order Delivery): The guarantee that frames are delivered in the order in which they were sent. Some devices cannot tolerate out-of-order delivery, so fabrics may be configured to enforce IOD.
IP (Internet Protocol): The network-layer protocol of the TCP/IP suite.
IPsec (Internet Protocol Security): A protocol suite for authenticating and encrypting IP traffic; commonly used to build VPNs.
iSCSI Gateway Service: A service that bridges iSCSI (SCSI over IP) hosts to Fibre Channel storage.
ISL (Inter-Switch Link): The connection between two switch E_Ports.
iSNS (Internet Storage Name Server): A name service for iSCSI, analogous to the Fibre Channel SNS.
JBOD (Just a Bunch Of Disks): An enclosure of disks without a RAID controller, often attached via Arbitrated Loop.
Just a Bunch Of Disks: See JBOD.
L_Port: Node loop port; a port that participates in an FC-AL loop.
LAN (Local Area Network): A network confined to short distances, typically spanning no more than about five kilometers.
Latency: The time it takes for a frame or operation to traverse a device or network (for example, the time for a frame to cross a switch).
LED (Light Emitting Diode): A status-indicator light on ports and equipment.
Logical Storage Area Network: See LSAN.
Loom: An early Brocade FC ASIC with sixteen ports, used in the SilkWorm 2000 series. Supports 1Gbit FC.
LSAN (Logical Storage Area Network): A SAN that spans fabrics. An LSAN allows devices in different FC fabrics, connected through FCRs and optionally a BB fabric, to communicate without merging the fabrics.
LSAN Zone: The zone that defines an LSAN. It is a regular FC zone whose name begins with the prefix "LSAN_" and whose members are specified by WWN; the FCRs in a Meta SAN use matching LSAN zones in each edge fabric, together with FC-NAT, to determine which devices to export between fabrics.
Local Area Network: See LAN.
Long Wavelength Laser: See LWL.
LUN (Logical Unit Number): The SCSI sub-address of a logical unit behind a target, below the SCSI target ID. In Fibre Channel, LUNs identify individual logical units behind a WWN/PID.
LWL (Long Wavelength Laser): A ~1310 nm laser used for longer-distance FC links; used with SMF.
MAC (Media Access Control): The lower sublayer of the OSI data link layer; MAC addresses identify NICs on a network.
MAN (Metropolitan Area Network): A network sized between a LAN and a WAN, covering metropolitan distances. As with WAN links, MAN links are often leased from a service provider.
Mean Time Between Failures: See MTBF.
Mean Time To Repair: See MTTR.
Media Access Control: See MAC.
Meta SAN: The collection of all fabrics, routers, BB fabrics, LSANs, and devices that together form a routed storage network; devices communicate across it through LSANs and FCRs without the fabrics merging. Analogous to an internetwork in the IP world.
Metropolitan Area Network: See MAN.
MMF (Multimode Fiber): Fiber supporting distances up to about 500 meters, with a 50 or 62.5 micron core; used with SWL transceivers.
MTBF (Mean Time Between Failures): The average time between failures of a component or system; a reliability metric. Higher is better.
MTTR (Mean Time To Repair): The average time required to restore a failed component or system to service.
Multicast: Transmission to a defined group of recipients (as opposed to unicast to one recipient, or broadcast to all).
Multimode Fiber: See MMF.
Multiprotocol: Supporting more than one protocol, for example both Ethernet and Fibre Channel on the same platform.
N_Port (Node Port): The port of a Fibre Channel device; it attaches point-to-point to another N_Port or to a fabric F_Port.
Name Server/Service: See SNS.
NAS (Network Attached Storage): Storage accessed at the file level over a network protocol such as CIFS or NFS; for example, a UNIX server exporting NFS file systems.
Network Attached Storage: See NAS.
Network Interface Card: See NIC.
NIC (Network Interface Card): The card that attaches a host to an IP network; analogous to an HBA in a SAN.
NL_Port: Node loop port; an N_Port variant that uses the FC-AL protocol.
Node Loop Port: See L_Port and NL_Port.
NPIV (N_Port ID Virtualization): Allows multiple N_Port IDs to share one physical N_Port; used, for example, by Access Gateway.
OEM (Original Equipment Manufacturer): A partner that sells Brocade equipment under its own brand, typically integrated with its own storage or server products and backed by its own support.
Open Shortest Path First: See OSPF.
Original Equipment Manufacturer: See OEM.
OSPF (Open Shortest Path First): A link-state routing protocol for IP networks.
PID (Port ID): The Fibre Channel network address of a port. A PID consists of Domain_ID, Area_ID, and Port_ID fields; on Brocade switches the low byte carries the FC-AL AL_PA where applicable. Example PID: 010f00.
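Since the three PID fields occupy one byte each, a PID can be unpacked with simple bit shifts. A minimal sketch in Python, using the example PID above:

    # Decode a 24-bit Fibre Channel PID into Domain_ID, Area_ID, Port_ID.
    def decode_pid(pid: int) -> tuple[int, int, int]:
        return (pid >> 16) & 0xFF, (pid >> 8) & 0xFF, pid & 0xFF

    domain, area, port = decode_pid(0x010F00)
    print(f"domain={domain}, area={area}, port={port}")  # domain=1, area=15, port=0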
Point-to-Point: A direct Fibre Channel connection between two devices, without a fabric.
Port Identifier: See PID.
Proxy Device: Also called an xlate (translate) device; the representation of a remote device within a local fabric. A proxy device has a PID in the local fabric, through which the real remote device is reached.
QoS (Quality of Service): Mechanisms for delivering predictable service levels, for example guaranteeing bandwidth, bounding latency, or prioritizing some traffic over other traffic.
Quality of Service: See QoS.
R_A_TOV (Resource Allocation Time Out Value): The timeout governing how long fabric resources may remain allocated before being released.
RAID (Redundant Array of Independent, or Inexpensive, Disks): Techniques for combining disks for redundancy and/or performance. RAID subsystems protect data against the failure of individual disks and can improve throughput.
RAS (Reliability, Availability, and Serviceability): The collective term for how rarely a system fails, how well it keeps applications online, and how easily it can be repaired. RAS is characterized with metrics such as MTBF and MTTR.
Redundancy: Duplication of components so that no single failure causes loss of service.
Redundant Array of Independent Disks: See RAID.
Registered State Change Notification: See RSCN.
Reliability, Availability and Serviceability: See RAS.
Resource Allocation Time Out Value: See R_A_TOV.
RETMA (Radio Electronics Television Manufacturers Association): The association whose 19-inch rack standard is used for mounting network and server equipment. A RETMA rack unit (U) is 1.75 inches high.
Route: (1) The path through a fabric selected by FSPF. (2) The path across a Meta SAN selected by FCRP.
Router: A device that forwards traffic between networks or fabrics.
RSCN (Registered State Change Notification): The notification the fabric sends to registered devices when the state of the fabric or of other devices changes.
SAN (Storage Area Network): A network connecting hosts to storage devices. Most SANs today are built on Fibre Channel.
SAN Island: A SAN fabric that is not connected to other SANs, for example one deployed for a single project or department. Islands can be consolidated with FC-FC routing without merging their fabrics.
SCR (State Change Registration): The registration by which a device asks the fabric to send it RSCNs.
SCSI (Small Computer Systems Interface): A parallel storage bus supporting up to 15 devices over distances up to about 25 meters, in SCSI-2 and SCSI-3 variants. The SCSI protocol has outlived the parallel bus: it is now carried over serial transports such as FC and IP.
SCSI Inquiry: The SCSI command by which an initiator asks a device to identify itself; it plays a discovery role loosely analogous to the SNS in Fibre Channel. The iSCSI Gateway Service relates IQNs on the IP side to SNS entries on the FC side.
SDH: See SONET/SDH.
Sequence: A set of frames within an exchange, sent by one N_Port.
Serial: Transmission of data one bit at a time over a single lane.
SFP (Small Form-Factor Pluggable): A removable transceiver for Fibre Channel and Gigabit Ethernet; smaller than the GBIC it replaced.
SID (Source Identifier): The address in a Fibre Channel frame header identifying the port that sent the frame (the frame's origin). For example, SID 010100 indicates domain 1, port 1.
SilkWorm: The product-family name formerly used for Brocade switches; the branding was retired around the time of the Brocade/McDATA merger.
Simple Name Server: See SNS.
Single Mode Fiber: See SMF.
SMF (Single Mode Fiber): Fiber with a 9-micron core supporting distances of 10 kilometers and more; used with LWL and ELWL transceivers.
Small Computer Systems Interface: See SCSI.
SNS (Simple, or Storage, Name Server or Service): The directory service through which devices register with, and are discovered in, a Fibre Channel fabric.
SONET/SDH (Synchronous Optical Networks): Optical transports used in MANs and WANs; FC traffic can be carried over SONET/SDH links. SDH (Synchronous Digital Hierarchy) is the international counterpart of SONET.
Source Identifier: See SID.
State Change Registration: See SCR.
Stitch: An early Brocade FC ASIC, used in the SilkWorm 1000 series together with Flannel.
Storage Area Network: See SAN.
Storage Subsystem: See Subsystem.
Storage Virtualization: See Virtualization.
Subsystem: A storage device comprising disks and controllers, such as a RAID array attached to a SAN.
SWL (Short Wavelength Laser): An 850 nm laser used for shorter-distance links over multimode fiber.
Synchronous Digital Hierarchy: See SDH.
Synchronous Optical Networks: See SONET/SDH.
T11: The ANSI committee responsible for Fibre Channel standards.
Tapestry: A family of Brocade products layered above the traditional SAN infrastructure (circa 2007), including File Area Networks (FAN) offerings.
Target: A storage device in a SAN that responds to initiator commands.
TCP/IP (Transmission Control Protocol over Internet Protocol): The protocol suite of the Internet.
TCP (Transmission Control Protocol): The reliable, connection-oriented transport protocol that runs over IP, providing retransmission and in-order delivery. Many application protocols run on top of TCP; for example, HTTP is carried over TCP between web browsers and web servers.
TCP Offload Engine: See TOE.
TOE (TCP Offload Engine): A NIC that offloads TCP processing from the host CPU. iSCSI NICs use TOEs to approach the protocol offload that Fibre Channel HBAs provide natively.
Topology: The arrangement and interconnection of the elements of a network.
Transceiver: A combined transmitter and receiver; in SANs, typically a module that converts between electrical and optical signals.
Transmission Control Protocol: See TCP.
Tunneling: Encapsulating one protocol's traffic inside another protocol for transport.
UC (Utility Computing): A model in which IT resources are provisioned and consumed like a utility, allocated to applications on demand.
U_Port (Universal Port): A port that has not yet assumed a role; it can come up as a G/E/F/FL_Port. Brocade switches from the SilkWorm 2xxx series onward implement Universal Ports.
ULP (Upper Level Protocols): Protocols carried on FC via the FC-4 layer, such as SCSI, IP, and VI.
Unicast: Transmission to a single recipient; compare broadcast and multicast.
Universal Port: See U_Port.
Upper Level Protocol: See ULP.
Utility Computing: See UC.
Virtual Local Area Network: See VLAN.
Virtual Private Network: See VPN.
Virtual Router Redundancy Protocol: See VRRP.
Virtual Storage Area Network: See VSAN.
Virtualization: An abstraction layer between storage consumers and physical resources. Examples include RAID, which virtualizes disks into arrays and LUNs, and LUN over-provisioning (presenting a LUN larger than the physical capacity behind it).
VLAN (Virtual Local Area Network): A partitioning of a LAN into isolated logical networks, used in IP/Ethernet networks to contain broadcast storms, among other things. Fibre Channel fabrics address the same need with their own mechanisms.
VPN (Virtual Private Network): A private network built over a public network; VPNs typically use encryption such as IPsec.
VRRP (Virtual Router Redundancy Protocol): A protocol that lets multiple routers back each other up behind a shared virtual address. It is used on IP interfaces (for example, Multiprotocol Router iSCSI and FCIP interfaces) alongside routing protocols such as OSPF and RIP.
VSAN (Virtual SAN): A scheme for partitioning a single physical switch or fabric into logical fabrics. Sometimes compared to LSANs, but a VSAN subdivides one physical network, whereas an LSAN connects devices across separately administered fabrics.
WAN (Wide Area Network): A network spanning large distances such as cities, countries, or continents; WAN links are typically leased from carriers.
WAFS (Tapestry Wide Area File Services): File services optimized for access across a WAN; part of the Brocade Tapestry family (status as of 2008).
Wavelength Division Multiplexer: See WDM.
WDM (Wavelength Division Multiplexer): Equipment that carries multiple wavelengths over a single fiber pair, multiplying its capacity.
Wide Area Network: See WAN.
World-Wide Name: See WWN.
WWN (World-Wide Name): A 64-bit name that uniquely identifies a Fibre Channel node or port. Example WWN: 10:00:00:60:69:51:0e:8b.
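Because a WWN is simply 64 bits, converting the raw value to the conventional colon-separated notation is mechanical, as this minimal Python sketch shows:

    # Format a 64-bit WWN in the conventional colon-separated hex notation.
    def format_wwn(wwn: int) -> str:
        return ":".join(f"{b:02x}" for b in wwn.to_bytes(8, "big"))

    print(format_wwn(0x1000006069510E8B))  # prints 10:00:00:60:69:51:0e:8b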
XPath: A Fabric OS variant from Brocade that ran on the AP7420 Multiprotocol Router.
xWDM: See DWDM and CWDM.
Zoning: The mechanism for controlling which devices in a fabric may communicate. Zone members can be specified by PID or by WWN.
