
TASK0094296

==================
Volume LUN  Size(GB) Initiator        Host
------ ---- -------- ---------------- ---------
1008   5008 48.0     50000972084A39A5 VMAX_10g1
1010   5010 48.0     50000972084A39A5 VMAX_10g1
1112   5112 18.0     50000972084A39A5 VMAX_10g1
120c   520c 18.4     50000972084A39A5 VMAX_10g1
1236   5236 21.0     50000972084A39A5 VMAX_10g1
1237   5237 20.0     50000972084A39A5 VMAX_10g1
130e   530e 48.0     50000972084A39A5 VMAX_10g1
130f   530f 48.0     50000972084A39A5 VMAX_10g1
1310   5310 18.0     50000972084A39A5 VMAX_10g1
1324   5324 25.0     50000972084A39A5 VMAX_10g1
1325   5325 25.0     50000972084A39A5 VMAX_10g1
1326   5326 25.0     50000972084A39A5 VMAX_10g1
1327   5327 34.0     50000972084A39A5 VMAX_10g1
1408   5408 48.0     50000972084A39A5 VMAX_10g1
1409   5409 48.0     50000972084A39A5 VMAX_10g1
140a   540a 48.0     50000972084A39A5 VMAX_10g1
140b   540b 48.0     50000972084A39A5 VMAX_10g1
1410   5410 18.4     50000972084A39A5 VMAX_10g1
150a   550a 48.0     50000972084A39A5 VMAX_10g1
1511   5511 18.0     50000972084A39A5 VMAX_10g1
1512   5512 32.0     50000972084A39A5 VMAX_10g1
1603   5603 48.0     50000972084A39A5 VMAX_10g1
160b   560b 48.0     50000972084A39A5 VMAX_10g1
160c   560c 18.0     50000972084A39A5 VMAX_10g1
1635   5635 25.0     50000972084A39A5 VMAX_10g1
1636   5636 25.0     50000972084A39A5 VMAX_10g1
163d   563d 25.0     50000972084A39A5 VMAX_10g1
1702   5702 48.0     50000972084A39A5 VMAX_10g1
1707   5707 48.0     50000972084A39A5 VMAX_10g1
170b   570b 48.0     50000972084A39A5 VMAX_10g1
170c   570c 30.0     50000972084A39A5 VMAX_10g1
170d   570d 48.0     50000972084A39A5 VMAX_10g1
172f   572f 25.0     50000972084A39A5 VMAX_10g1
1737   5737 25.0     50000972084A39A5 VMAX_10g1

ESS3:
================================================================================
esscli -u ar032893 -p passw0rd -s 162.86.3.152 delete volumeaccess -d "ess=2105.20449 host=VMAX_10g1 volume=1008,1010,1112,120c,1236,1237,130e,130f,1310,1324,1325,1326,1327,1408,1409,140a,140b,1410,150a,1511,1512,1603,160b,160c,1635,1636,163d,1702,1707,170b,170c,170d,172f,1737"
================================================================================
esscli -u ar032893 -p passw0rd -s 162.86.3.152 delete volumeaccess -d "ess=2105.20449 host=VMAX_11g1 volume=1008,1010,1112,120c,1236,1237,130e,130f,1310,1324,1325,1326,1327,1408,1409,140a,140b,1410,150a,1511,1512,1603,160b,160c,1635,1636,163d,1702,1707,170b,170c,170d,172f,1737"
================================================================================
esscli -u ar032893 -p passw0rd -s 162.86.3.152 delete volumeaccess -d "ess=2105.20449 host=VMAX_6g1 volume=1008,1010,1112,120c,1236,1237,130e,130f,1310,1324,1325,1326,1327,1408,1409,140a,140b,1410,150a,1511,1512,1603,160b,160c,1635,1636,163d,1702,1707,170b,170c,170d,172f,1737"
================================================================================
esscli -u ar032893 -p passw0rd -s 162.86.3.152 delete volumeaccess -d "ess=2105.20449 host=VMAX_7g1 volume=1008,1010,1112,120c,1236,1237,130e,130f,1310,1324,1325,1326,1327,1408,1409,140a,140b,1410,150a,1511,1512,1603,160b,160c,1635,1636,163d,1702,1707,170b,170c,170d,172f,1737"
================================================================================
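The four ESS3 deletions differ only in the host name, so they can be generated from one loop instead of being typed by hand. A dry-run sketch (it only prints the commands for review; it assumes esscli is on the PATH when the printed commands are actually run):

```shell
#!/bin/sh
# Dry-run generator for the ESS3 (2105.20449) volumeaccess deletions.
# Prints each command instead of executing it, so the list can be
# reviewed before anything is removed.
VOLS="1008,1010,1112,120c,1236,1237,130e,130f,1310,1324,1325,1326,1327,1408,1409,140a,140b,1410,150a,1511,1512,1603,160b,160c,1635,1636,163d,1702,1707,170b,170c,170d,172f,1737"

ess3_delete_cmd() {
  printf 'esscli -u ar032893 -p passw0rd -s 162.86.3.152 delete volumeaccess -d "ess=2105.20449 host=%s volume=%s"\n' "$1" "$VOLS"
}

for host in VMAX_10g1 VMAX_11g1 VMAX_6g1 VMAX_7g1; do
  ess3_delete_cmd "$host"   # review output, then pipe to sh to execute
done
```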

ESS4:
162.86.3.154
2105.26930
esscli -u ar032893 -p passw0rd -s 162.86.3.154 delete volumeaccess -d "ess=2105.26930 host=VMAX_10g1 volume=ALL"
esscli -u ar032893 -p passw0rd -s 162.86.3.154 delete volumeaccess -d "ess=2105.26930 host=VMAX_11g1 volume=ALL"
esscli -u ar032893 -p passw0rd -s 162.86.3.154 delete volumeaccess -d "ess=2105.26930 host=VMAX_6g1 volume=ALL"
esscli -u ar032893 -p passw0rd -s 162.86.3.154 delete volumeaccess -d "ess=2105.26930 host=VMAX_7g1 volume=ALL"
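The ESS4 case is simpler because every mapping for these hosts is removed (volume=ALL), so the same loop idea applies. A dry-run sketch (prints only; the password string is the one from the task record and should be rotated after use):

```shell
#!/bin/sh
# Dry-run generator for the ESS4 (2105.26930) volumeaccess deletions:
# drop ALL volume mappings for each VMAX front-end host.
ess4_delete_cmd() {
  printf 'esscli -u ar032893 -p passw0rd -s 162.86.3.154 delete volumeaccess -d "ess=2105.26930 host=%s volume=ALL"\n' "$1"
}

for host in VMAX_10g1 VMAX_11g1 VMAX_6g1 VMAX_7g1; do
  ess4_delete_cmd "$host"   # review output before executing
done
```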

================================================================================
usehes5b
162.86.3.159 IBM.2107-75AAVK1
VMAX_EMC V3
chvolgrp -action remove -volume 2002,2003,2102,2103,2202,2203,2302,2303,3103,3203,3303,3304,3403 V3
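Before running the chvolgrp removal on usehes5b, it is cheap to sanity-check the volume list against the 13 volumes shown by lsfbvol. A sketch (dscli itself is not invoked here; the command is only printed for pasting into the dscli prompt):

```shell
#!/bin/sh
# Volumes to remove from volume group V3 on IBM.2107-75AAVK1
# (list taken from the task record above).
VOLS="2002,2003,2102,2103,2202,2203,2302,2303,3103,3203,3303,3304,3403"

# Count comma-separated entries in a volume list.
count_vols() {
  printf '%s\n' "$1" | tr ',' '\n' | wc -l | tr -d ' '
}

echo "volumes to remove: $(count_vols "$VOLS")"    # expect 13
echo "chvolgrp -action remove -volume $VOLS V3"    # paste at the dscli> prompt
```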

dscli> lsfbvol -volgrp V3

Date/Time: March 7, 2013 14:10:07 ARST IBM DSCLI Version: 5.4.20.27 DS: IBM.2107-75AAVK1
Name          ID   accstate datastate configstate deviceMTM datatype extpool cap (2^30B) cap (10^9B) cap (blocks)
=================================================================================================================
usehux69_2002 2002 Online   Normal    Normal      2107-900  FB 512   P0      40.0        -           83886080
usehux69_2003 2003 Online   Normal    Normal      2107-900  FB 512   P0      40.0        -           83886080
usehux69_2102 2102 Online   Normal    Normal      2107-900  FB 512   P1      40.0        -           83886080
usehux69_2103 2103 Online   Normal    Normal      2107-900  FB 512   P1      40.0        -           83886080
usehux69_2202 2202 Online   Normal    Normal      2107-900  FB 512   P2      40.0        -           83886080
usehux69_2203 2203 Online   Normal    Normal      2107-900  FB 512   P2      40.0        -           83886080
usehux69_2302 2302 Online   Normal    Normal      2107-900  FB 512   P3      40.0        -           83886080
usehux69_2303 2303 Online   Normal    Normal      2107-900  FB 512   P3      40.0        -           83886080
usehux69_3103 3103 Online   Normal    Normal      2107-900  FB 512   P21     40.0        -           83886080
usehux69_3203 3203 Online   Normal    Normal      2107-900  FB 512   P22     40.0        -           83886080
usehux69_3303 3303 Online   Normal    Normal      2107-900  FB 512   P23     40.0        -           83886080
usehux69_3304 3304 Online   Normal    Normal      2107-900  FB 512   P23     40.0        -           83886080
usehux69_3403 3403 Online   Normal    Normal      2107-900  FB 512   P24     40.0        -           83886080
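When checking how much capacity goes away with volume group V3, the lsfbvol output can be totaled directly. A sketch, not part of the task record, assuming this listing's layout where the datatype column ("FB 512") splits into two whitespace fields, putting cap (2^30B) at awk field $10:

```shell
#!/bin/sh
# Sum cap (2^30B) for the V3 volumes from a saved copy of lsfbvol output.
# Assumption: rows start with the volume name prefix "usehux69_" and the
# capacity in 2^30-byte units is the 10th whitespace field.
sum_v3_caps() {
  awk '/^usehux69_/ { total += $10 } END { printf "%.1f\n", total }' "$1"
}

# Example with two rows of the listing (the full V3 group is 13 x 40.0 GiB):
cat > /tmp/lsfbvol_v3.txt <<'EOF'
usehux69_2002 2002 Online Normal Normal 2107-900 FB 512 P0 40.0 - 83886080
usehux69_2003 2003 Online Normal Normal 2107-900 FB 512 P0 40.0 - 83886080
EOF
sum_v3_caps /tmp/lsfbvol_v3.txt
```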

Hello Miroslav,
Here are my comments:
a) New hardware should be ordered no matter which option is chosen.
b) First, delivering space to a production server through ISLs does not follow best practices. This option is also unsuitable because all storage traffic between the SAN switches would have to pass over the ISL links. These ISLs currently carry the storage replication traffic (demb_310svc01 => demb_H1svc01); adding host traffic could create a bottleneck on those connections, causing performance issues both on the LUNs delivered through the ISLs and on the whole Metro Mirror replication, on top of SAN issues. In addition, the customer requires space replicated between the SVCs, and in this scenario the mirrored space could not be delivered.
c) This is the most suitable option: we can assign 2 x 400 GB LUNs with mirroring to demb_H1svc01, and the customer can start working on this new space while the upgrade arrives. Afterwards we will be able to deliver the 2.2 TB (1.1 TB for each server) on demb_301svc01 and the mirrored space on demb_H1svc01.
d) I am not aware of any decommission scheduled or planned for this site.
