
EMC CLARiiON Storage system types

AX150
Dual storage processor enclosure with Fibre-Channel interface to host and SATA-2 disks.

AX150i
Dual storage processor enclosure with iSCSI interface to host and SATA-2 disks.

AX100
Dual storage processor enclosure with Fibre-Channel interface to host and SATA-1 disks.

AX100SC
Single storage processor enclosure with Fibre-Channel interface to host and SATA-1 disks.

AX100i
Dual storage processor enclosure with iSCSI interface to host and SATA-1 disks.

AX100SCi
Single storage processor enclosure with iSCSI interface to host and SATA-1 disks.

CX3-80
SPE2 - Dual storage processor (SP) enclosure with four Fibre-Channel front-end ports and four back-end ports per SP.

CX3-40
SP3 - Dual storage processor (SP) enclosure with two Fibre Channel front-end ports and two back-end ports per SP.

CX3-40f
SP3 - Dual storage processor (SP) enclosure with four Fibre Channel front-end ports and four back-end ports per SP.

CX3-40c
SP3 - Dual storage processor (SP) enclosure with four iSCSI front-end ports, two Fibre Channel front-end ports, and two back-end ports per SP.

CX3-20
SP3 - Dual storage processor (SP) enclosure with two Fibre Channel front-end ports and a single back-end port per SP.

CX3-20f
SP3 - Dual storage processor (SP) enclosure with six Fibre Channel front-end ports, and a single back-end port per SP.

CX3-20c
SP3 - Dual storage processor (SP) enclosure with four iSCSI front-end ports, two Fibre Channel front-end ports, and a single back-end port per SP.

CX600, CX700
SPE-based storage system with model CX600/CX700 SP, Fibre-Channel interface to host, and Fibre Channel disks.

CX500, CX400, CX300, CX200
DPE2-based storage system with model CX500/CX400/CX300/CX200 SP, Fibre-Channel interface to host, and Fibre Channel disks.

CX2000LC
DPE2-based storage system with one model CX200 SP, one power supply (no SPS), Fibre-Channel interface to host, and Fibre Channel disks.

C1000 Series
10-slot storage system with SCSI interface to host and SCSI disks.

C1900 Series
Rugged 10-slot storage system with SCSI interface to host and SCSI disks.

C2x00 Series
20-slot storage system with SCSI interface to host and SCSI disks.

C3x00 Series
30-slot storage system with SCSI or Fibre Channel interface to host and SCSI disks.

FC50xx Series
DAE with Fibre Channel interface to host and Fibre Channel disks.

FC5000 Series
JBOD with Fibre Channel interface to host and Fibre Channel disks.

FC5200/5300 Series
iDAE-based storage system with model 5200 SP, Fibre Channel interface to host, and Fibre Channel disks.

FC5400/5500 Series
DPE-based storage system with model 5400 SP, Fibre Channel interface to host, and Fibre Channel disks.

FC5600/5700 Series
DPE-based storage system with model 5600 SP, Fibre Channel interface to host, and Fibre Channel disks.

FC4300/4500 Series
DPE-based storage system with either model 4300 SP or model 4500 SP, Fibre Channel interface to host, and Fibre Channel disks.

FC4700 Series
DPE-based storage system with model 4700 SP, Fibre Channel interface to host, and Fibre Channel disks.

IP4700 Series
Rackmount Network-Attached storage system with 4 Fibre Channel host ports and Fibre Channel disks.

Host To CLARiiON Configuration


Here we look at only three of the possible ways a host can be attached to a CLARiiON; from talking with customers in class, these seem to be the three most common. The key points are:

1. The LUN (the disk space created on the CLARiiON that will eventually be assigned to the host) is owned by one of the Storage Processors, not both.
2. The host must be physically connected via fibre, either directly attached or through a switch.

CONFIGURATION ONE
In Configuration One, a host with a single Host Bus Adapter (HBA) is attached to a single switch. From the switch, one cable runs to SP A and one to SP B. The host is zoned and cabled to both SPs to survive a LUN trespass: if SP A were to go down, reboot, etc., the LUN would trespass to SP B, and because the host is cabled and zoned to SP B, the host would still have access to the LUN via SP B. The problem with this configuration is the number of single points of failure. If you lose the HBA, the switch, or the connection between the HBA and the switch (the fibre, the GBIC on the switch, etc.), you lose access to the CLARiiON, and thereby access to your LUNs.

CONFIGURATION TWO

In Configuration Two, the host has two Host Bus Adapters. HBA1 is attached to one switch, and from there the host is zoned and cabled to SP B; HBA2 is attached to a separate switch, and from there the host is zoned and cabled to SP A. The path from HBA2 to SP A is shown as the "Active Path" because that is the path data takes from the host to the LUN, which is owned by SP A. The path from HBA1 to SP B is shown as the "Standby Path" because the LUN does not belong to SP B; the host would only use the standby path in the event of a LUN trespass. The advantage of Configuration Two over Configuration One is that there is no single point of failure.

Now, let's say we install PowerPath on the host. With PowerPath, the host can do two things. First, it can initiate the trespass of the LUN: if there is a path failure (a bad HBA, a downed switch, etc.), the host issues the trespass command to the SPs, and the SPs move the LUN, temporarily, from SP A to SP B. Second, PowerPath allows the host to load-balance data leaving the host. Again, this has nothing to do with load balancing the CLARiiON SPs; we will get there later. However, in Configuration Two we have only one connection from the host to SP A, so this is the only path the host has, and will use, to move data for this LUN.
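With PowerPath installed you can see the active/standby split from the host itself. powermt is PowerPath's standard command-line tool; its output format varies by platform and version:

powermt display dev=all   # lists each PowerPath pseudo device with its paths and their active/standby state
powermt restore           # retests dead paths once the failed HBA, cable, or switch is repaired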

CONFIGURATION THREE
In Configuration Three, the hardware is the same as in Configuration Two, but notice that a few more cables run from the switches to the Storage Processors. HBA1 is cabled into its switch and zoned to both SP A and SP B; HBA2 is cabled into its switch and zoned to both SP A and SP B as well. This gives HBA1 and HBA2 each an 'Active Path' to SP A and a 'Standby Path' to SP B. The host can now route data down both active paths to the CLARiiON, which gives it true load-balancing capability. Also, the only time a LUN should trespass from one SP to another is on a Storage Processor failure: if the host loses HBA1, it still has HBA2 with an active path to the CLARiiON, and the same goes for a switch failure or a connection failure.

Types and Benefits of MetaLUNs

The purpose of a MetaLUN is to let a CLARiiON grow the size of a LUN on the fly. Let's say a host is running out of space on a LUN. From Navisphere, we can expand that LUN by adding more LUNs to it. To the host, we are not adding more LUNs; all the host will see is that its LUN has grown in size. (We will explain later how to make the new space available to the host.) There are two types of MetaLUNs, concatenated and striped. Each has its advantages and disadvantages, but whichever you use, the end result is that you are growing, expanding, a LUN.

A Concatenated MetaLUN is advantageous because the LUN can be grown quickly and the space made available to the host rather quickly as well. The other advantage is that the component LUNs added to the host's LUN can be of a different RAID type and a different size. The host writes to cache on the Storage Processor, and the Storage Processor then flushes out to disk. With a Concatenated MetaLUN, the CLARiiON writes to only one LUN at a time: it writes to LUN 6 first, and once LUN 6 is full, it begins writing to the next LUN in the MetaLUN, LUN 23. It continues writing to LUN 23 until that is full, then writes to LUN 73. Because of this writing process there is no performance gain; the CLARiiON is still writing to only one LUN at a time.

A Striped MetaLUN is advantageous because, if set up properly, it can enhance performance as well as protection. Let's look first at how the MetaLUN is set up and written to, and how performance can be gained. With a Striped MetaLUN, the CLARiiON writes to all of the LUNs that make up the MetaLUN, not just one at a time. The advantage of this is more spindles/disks. The CLARiiON stripes the data across all of the LUNs in the MetaLUN, and if those LUNs are on different RAID Groups on different buses, the application can be striped across fifteen (15) disks and, in the example above, three back-end buses of the CLARiiON. The workload of the application is spread out across the back-end of the CLARiiON, thereby possibly increasing speed. As illustrated above, the first data stripe (Data Stripe 1) the CLARiiON writes out to disk goes across the five disks of RAID Group 5, where LUN 6 lives. The next stripe (Data Stripe 2) goes across the five disks that make up RAID Group 10, where LUN 23 lives. And finally, the third stripe (Data Stripe 3) goes across the five disks that make up RAID Group 20, where LUN 73 lives. Then the CLARiiON starts the process over again with LUN 6, then LUN 23, then LUN 73. This gives the application 15 disks and three buses to be spread across.

As for data protection, this configuration is similar to, but better than, building a 15-disk RAID group. The problem with a 15-disk RAID group is that if one disk were to fail, it would take a considerable amount of time to rebuild the failed disk from the other 14. Also, if two disks in that RAID group were to fail and it was RAID 5, data would be lost. In the drawing above, each of the LUNs is on a different RAID Group, which means we could lose a disk in RAID Group 5, RAID Group 10, and RAID Group 20 at the same time and still have access to the data. The other advantage of this configuration is that rebuilds occur within each individual RAID Group: rebuilding from four disks is going to be much faster than from the 14 disks of a 15-disk RAID group.

The disadvantage of a Striped MetaLUN is that it takes time to create. When a component LUN is added, the CLARiiON must restripe the existing data across the existing LUN(s) and the new LUN. This takes time and CLARiiON resources, and there may be a performance impact while the restriping runs. Also, the new space is not available to the host until the MetaLUN has finished restriping the data.
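A hedged command-line sketch of the two expansion types, using the LUN numbers from the example above and a hypothetical SP address (metalun is the Navisphere Secure CLI verb for this; verify the exact switch spellings, including -type, against the CLI reference for your FLARE release):

# Stripe expansion: restripes existing data across base LUN 6 plus LUNs 23 and 73
naviseccli -h 10.1.1.10 metalun -expand -base 6 -lus 23 73 -type S

# Concatenate expansion: appends the new LUNs as a separate component, no restriping
naviseccli -h 10.1.1.10 metalun -expand -base 6 -lus 23 73 -type C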

What is a MetaLUN?
A metaLUN is a type of LUN whose maximum capacity can be the combined capacities of all the LUNs that compose it. The metaLUN feature lets you dynamically expand the capacity of a single LUN (the base LUN) into a larger unit called a metaLUN; you do this by adding LUNs to the base LUN. You can also add LUNs to an existing metaLUN to further increase its capacity. Like a LUN, a metaLUN can belong to a Storage Group and can participate in SnapView, MirrorView, and SAN Copy sessions. MetaLUNs are supported only on CX-series storage systems.

A metaLUN may include multiple sets of LUNs, and each set of LUNs is called a component. The LUNs within a component are striped together and are independent of the other LUNs in the metaLUN. Any data written to a metaLUN component is striped across all the LUNs in that component. The first component of any metaLUN always includes the base LUN. The number of components within a metaLUN and the number of LUNs within a component depend on the storage system type, as the following table shows:

Storage System Type    LUNs per metaLUN Component    Components per metaLUN
CX700, CX600           32                            16
CX500, CX400           32                            8
CX300, CX200           16                            8

You can expand a LUN or metaLUN in two ways: stripe expansion or concatenate expansion. A stripe expansion takes the existing data on the LUN or metaLUN you are expanding and restripes (redistributes) it across the existing LUNs and the new LUNs you are adding; a stripe expansion may take a long time to complete. A concatenate expansion creates a new metaLUN component that includes the new expansion LUNs and appends this component to the existing LUN or metaLUN as a single, separate, striped component; there is no restriping of data between the original storage and the new LUNs, so the concatenate operation completes immediately.

During the expansion process, the host can still process I/O to the LUN or metaLUN and access any existing data. It does not, however, have access to any added capacity until the expansion is complete. When you can actually use the increased user capacity of the metaLUN depends on the operating system running on the servers connected to the storage system.
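As a worked example of the trade-off: concatenating a 50 GB RAID 1/0 LUN onto a 200 GB RAID 5 base LUN gives a 250 GB metaLUN whose added space is usable almost immediately, whereas stripe-expanding that same 200 GB base with two more 200 GB LUNs gives a 600 GB metaLUN only after the restripe completes.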

How does a CLARiiON array flush cache?


If you open Navisphere Manager, select any frame/array, and open its properties, you will see a Cache tab showing the cache configuration. There you set values such as the Low Watermark and the High Watermark. Have you ever wondered how the CLARiiON behaves at these percentages? Let's look closely at the flushing methods the CLARiiON uses. There are many situations in which the CLARiiON Storage Processor has to flush cache to keep some free space in cache memory. (Cache memory sizes differ across the CLARiiON series.)

There are three levels of flushing:


IDLE FLUSHING (LUN is not busy and user I/O continues)
Idle flushing keeps some free space in write cache when I/O activity to a particular LUN is relatively low. If data immediacy were the only concern, idle flushing would be sufficient; if idle flushing cannot maintain free space, though, watermark flushing takes over.

WATERMARK FLUSHING
The array allows the user to set two levels, called watermarks: the High Water Mark (HWM) and the Low Water Mark (LWM). The base software tries to keep the number of dirty pages in cache between those two levels. If the number of dirty pages in write cache reaches 100%, forced flushing is used.

FORCED FLUSHING
Forced flushes also create space for new I/Os, though they dramatically affect overall performance: when forced flushing takes place, all read and write operations are halted to clear space in the write cache. The time taken for a forced flush is very short (milliseconds), and the array may still deliver acceptable performance even if the rate of forced flushes is in the range of 50 per second.
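The watermarks can also be read and tuned from Navisphere CLI. A minimal sketch with a hypothetical SP address; getcache and setcache are standard navicli verbs, but the -hw/-lw watermark switch names here are an assumption to verify against the CLI reference for your release:

navicli -h 10.1.1.10 getcache                 # shows current watermarks, dirty pages, and flush statistics
navicli -h 10.1.1.10 setcache -hw 80 -lw 60   # assumed switches: set an 80% high / 60% low watermark split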

CLARiiON SAN fan-in and fan-out configuration rules


Fan-In Rule: A server can be zoned to a maximum of four storage systems.

Fan-Out Rule:

For FC5300 with Access Logix software - 1 to 4 servers (eight initiators) to 1 storage system.

For FC4500 with Access Logix - 15 servers to 1 storage system; each server with a maximum of one (single) path to an SP.

For FC4700 with Base or Access Logix software 8.42.xx or higher - 32 initiators per SP port for a maximum of 128 initiators per FC4700. Each port on each SP supports 32 initiators. Ports 0 and 1 on each SP in an FC4700 handle server connections. Port 1 on each SP in an FC4700 with MirrorView also handles remote mirror connections. In a remote mirror configuration, each path between SP A port 1 on one storage system and SP A port 1 on another storage system counts as one initiator for each port 1; likewise, each path between SP B port 1 on one storage system and SP B port 1 on another storage system counts as one initiator for each port 1.

For FC4700 with Base or Access Logix software 8.41.xx or lower - 15 servers to 1 storage system; each server with a maximum of one (single) path to an SP.

For CX200 - 15 initiators per SP, each with a maximum of one (single) path to an SP; maximum of 15 servers.

For CX300 - 64 initiators per SP for a maximum of 128 initiators per storage system.

For CX400 - 32 initiators per SP port for a maximum of 128 initiators per CX400. Each port on each SP supports 32 initiators. Ports 0 and 1 on each SP in a CX400 handle server connections. Port 1 on each SP in a CX400 with MirrorView also handles remote mirror connections. In a remote mirror configuration, each path between SP A port 1 on one storage system and SP A port 1 on another storage system counts as one initiator for each port 1; likewise, each path between SP B port 1 on one storage system and SP B port 1 on another storage system counts as one initiator for each port 1.

For CX500 - 128 initiators per SP and a maximum of 256 initiators per CX500 available for server connections. Ports 0 and 1 on each SP handle server connections. Port 1 on each SP in a CX500 with MirrorView/A or MirrorView/S enabled also handles remote mirror connections. Each path used in a MirrorView or SAN Copy relationship between two storage systems counts as an initiator for both storage systems.

For CX600 - 32 initiators per SP port and a maximum of 256 initiators per CX600 available for server connections. Ports 0, 1, 2, and 3 on each SP in any CX600 handle server connections. Port 3 on each SP in a CX600 with MirrorView also handles remote mirror connections. In a remote mirror configuration, each path between SP A port 3 on one storage system and SP A port 3 on another storage system counts as one initiator for each port 3; likewise, each path between SP B port 3 on one storage system and SP B port 3 on another storage system counts as one initiator for each port 3.

For CX700 - 256 initiators per SP and a maximum of 512 initiators per CX700 available for server connections. Ports 0, 1, 2, and 3 on each SP in any CX700 handle server connections. Port 3 on each SP in a CX700 with MirrorView/A or MirrorView/S enabled also handles remote mirror connections. Each path used in a MirrorView or SAN Copy relationship between two storage systems counts as an initiator for both storage systems.

An initiator is any device with access to an SP port. Each port on each SP supports 32 initiators. Check with your support provider to confirm that the above rules are still in effect.
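As a worked example of initiator counting: a server with two HBAs, each zoned to two ports on each SP, logs in to four SP ports per HBA and therefore consumes 2 x 4 = 8 initiator records on the array. On a CX600, with 256 initiators available for server connections, roughly 32 such servers could be attached before reaching the limit.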

Different Vendor Worldwide Name (WWN) Details


Twenty-four bits of the sixty-four-bit World Wide Name are unique to each vendor. Below is a partial listing of the vendors most familiar to EMC with regard to Symmetrix Fibre Channel connectivity. To decode an HBA WWN, issue the 8F command to view the WWN in the FA login table. Bytes 1-3 of the World Wide Name contain the unique vendor code. Note that if there is a switch connected between the FA and the host bus adapter, the name and fabric servers of the switch will also log in to the FA; those WWNs can be decoded in the same way as the HBA WWNs. In the following example the unique vendor code is 0060B0, which indicates that the attached HBA was supplied by Hewlett-Packard.

UTILITY 8F -- SCSI Adapter utility : TIME: APR/23/01 01:23:30
HARD LOOP ID : 000 (ALPA=EF) LINK STATE : ONLINE: LOOP CHIP TYPE/REV: 00/00
Q RECS TOTAL: 3449 CREDIT: 0 RCV BUFF SZ: 2048
IF FLAGS : 01/ TAGD/NO LINK/NO SYNC/NO WIDE/NO NEGO/NO SOFT/NO ENVT/NO CYLN
IF FLAGS1: 08/NO PBAY/NO H300/NO RORD/ CMSN/NO QERR/NO DQRS/NO DULT/NO SUNP
IF FLAGS2: 00/NO SMNS/NO DFDC/NO DMNQ/NO NFNG/NO ABSY/NO SQNT/NO NRSB/NO SVAS
IF FLAGS3: 00/NO SCI3/NO ..../NO ..../NO ..../NO ..../NO ..../NO ..../NO ....
FC FLAGS : 57/ ARRY/ VOLS/ HDAD/NO HDNP/ GTLO/NO PTOP/ WWN /NO VSA
FC FLAGS1: 00/NO VCM /NO CLS2/NO OVMS/NO ..../NO ..../NO ..../NO ..../NO ....
FC FLAGS2: 00/NO ..../NO ..../NO ..../NO ..../NO ..../NO ..../NO ..../NO ....
FC FLAGS3: 00/NO ..../NO ..../NO ..../NO ..../NO ..../NO ..../NO ..../NO ....

HOST SID PORT NAME (WWN)  NODE NAME         RCV BUF CREDIT CLASS
000001   50060B0000014932  50060B0000014933  992     EE 4   3
PRLI REQ: IFN RXD DONE.

The following are common HBA vendor codes; refer to the open systems host matrix if you need to know whether these HBAs are supported for specific hosts.

00-00-D1 (hex)  ADAPTEC INCORPORATED
00-30-D3 (hex)  Agilent Technologies
00-60-69 (hex)  BROCADE COMMUNICATIONS SYSTEMS
00-02-A5 (hex)  Compaq Computer Corporation
00-60-48 (hex)  EMC CORPORATION
00-00-C9 (hex)  EMULEX CORPORATION
00-E0-24 (hex)  GADZOOX NETWORKS
00-60-B0 (hex)  HEWLETT-PACKARD CO.
00-50-76 (hex)  IBM
00-E0-69 (hex)  JAYCOR NETWORKS, INC.
08-00-88 (hex)  MCDATA CORPORATION
08-00-0E (hex)  NCR CORPORATION
00-E0-8B (hex)  QLOGIC CORP.
00-00-6B (hex)  SILICON GRAPHICS INC./MIPS
00-10-9B (hex)  VIXEL CORPORATION

This information will help you identify the vendor behind a particular HBA's WWN.
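A quick way to pull the vendor code out of a WWN from a host shell. Plain bash, using the example port WWN from the listing above; note that this offset applies to NAA type-5 WWNs (the kind beginning with 5), where the OUI follows the leading NAA nibble:

wwn=50060B0000014932      # port WWN from the 8F output above
oui=${wwn:1:6}            # skip the leading NAA nibble '5'; the next six hex digits are the vendor OUI
echo "$oui"               # prints 0060B0, which the table above maps to HEWLETT-PACKARD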

What are the differences between failover modes on a CLARiiON array?


A CLARiiON array is an Active/Passive device and uses a LUN-ownership model. In other words, when a LUN is bound it has a default owner, either SP-A or SP-B. I/O requests traveling to a port on SP-A can only reach LUNs owned by SP-A, and I/O requests traveling to a port on SP-B can only reach LUNs owned by SP-B. Different failover methods are necessary because in certain situations a host will need to access a LUN through the non-owning SP.

The following failover modes apply:

Failover Mode 0 - LUN Based Trespass Mode
This failover mode is the default and works in conjunction with the Auto-trespass feature. Auto-trespass is a mode of operation set on a LUN-by-LUN basis. If Auto-trespass is enabled on the LUN, the non-owning SP reports that the LUN exists and is available for access, and the LUN trespasses to whichever SP the I/O request is sent to. Every time the LUN is trespassed, a Unit Attention message is recorded. If Auto-trespass is disabled, the non-owning SP reports that the LUN exists but is not available for access; an I/O request sent to the non-owning SP is rejected and the LUN's ownership does not change. Note: The combination of Failover Mode 0 and Auto-trespass can be dangerous if the host sends I/O requests to both SP-A and SP-B, because the LUN will need to trespass to fulfill each request. This combination is most commonly seen on an HP-UX server using PV-Links; the Auto-trespass feature is enabled through the Initiator Type setting of HP-AutoTrespass. A host with no failover software should use the combination of Failover Mode 0 and Auto-trespass disabled.

Failover Mode 1 - Passive Not Ready Mode
In this mode of operation the non-owning SP reports that all non-owned LUNs exist and are available for access, but any I/O request made to the non-owning SP is rejected. A Test Unit Ready (TUR) command sent to the non-owning SP returns with a status of device not ready. This mode is similar to Failover Mode 0 with Auto-trespass disabled. Note: This mode is most commonly used with PowerPath. To a host without PowerPath that is configured with Failover Mode 1, every zoned passive path (for example, a path to SP-B for a LUN owned by SP-A) shows up as Not Ready. This appears as offline errors on a Solaris server, SC_DISK_ERR2 errors with sense bytes 0102, 0700, and 0403 on an AIX server, or buffer-to-I/O errors on a Linux server. If PowerPath is installed, these types of messages should not occur.

Failover Mode 2 - DMP Mode
In this mode of operation the non-owning SP reports that all non-owned LUNs exist and are available for access; this is similar to Failover Mode 0 with Auto-trespass enabled. Any I/O request made to the non-owning SP causes the LUN to be trespassed to the SP receiving the request. The difference between this mode and Auto-trespass mode is that Unit Attention messages are suppressed. Note: This mode is used for some Veritas DMP configurations on some operating systems. Because of the similarities to Auto-trespass, this mode has been known to cause trespass storms: if a server runs a utility that probes all paths to the CLARiiON (for instance, format on a Solaris server), each LUN trespasses to the non-owning SP when the I/O request is sent there, and if this occurs for multiple LUNs, a significant amount of trespassing will occur.

Failover Mode 3 - Passive Always Ready Mode
In this mode of operation the non-owning SP reports that all non-owned LUNs exist and are available for access, and any I/O request sent to the non-owning SP is rejected; this is similar to Failover Mode 1. However, any Test Unit Ready command sent from the server returns a success message, even on the non-owning SP. Note: This mode is used only on AIX servers under very specific configuration parameters and was developed to better handle a CLARiiON non-disruptive upgrade (NDU) when AIX servers are attached.
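On the array side, the failover mode is set per registered host. A hedged sketch with a hypothetical SP address and host name; I recall storagegroup -sethost with -failovermode and -arraycommpath as the relevant switches, but confirm them against the Navisphere CLI reference for your release before running:

naviseccli -h 10.1.1.10 storagegroup -sethost -host host28 -failovermode 1 -arraycommpath 1   # Failover Mode 1 for a PowerPath host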

Creating the Navisphere Agent file: agentID.txt


If you have a multihomed host running IBM AIX, HP-UX, Linux, Solaris, VMware ESX Server (2.5.0 or later), or Microsoft Windows, you must create a parameter file for Navisphere Agent named agentID.txt.

About the agentID.txt file: this file (case sensitive) ensures that the Navisphere Agent binds to the correct HBA/NIC for registration and therefore registers the host with the correct storage system. The agentID.txt file must contain the following two lines:

Line 1: Fully-qualified hostname of the host
Line 2: IP address of the HBA/NIC port that you want Navisphere Agent to use

For example, if your host is named host28 on the domain mydomain.com, contains two HBAs/NICs (HBA/NIC1 with IP address 192.111.222.2 and HBA/NIC2 with IP address 192.111.222.3), and you want the Navisphere Agent to use NIC 2, you would configure agentID.txt as follows:

host28.mydomain.com
192.111.222.3

To create the agentID.txt file, continue with the appropriate procedure for your operating system.

For IBM AIX, HP-UX, Linux, and Solaris:
1. Using a text editor that does not add special formatting, create or edit a file named agentID.txt in either / (root) or a directory of your choice.
2. Add the hostname and IP address lines as described above. The file should contain only these two lines, with no formatting.
3. Save the agentID.txt file.

4. If you created the agentID.txt file in a directory other than root, set the environment variable EV_AGENTID_DIRECTORY to point to that directory so that Navisphere Agent restarts after a system reboot using the correct path to the file.
5. If a HostIdFile.txt file is present in the directory shown for your operating system, delete or rename it. The HostIdFile.txt file is located in the following directory for your operating system:
AIX: /etc/log/HostIdFile.txt
HP-UX: /etc/log/HostIdFile.txt
Linux: /var/log/HostIdFile.txt
Solaris: /etc/log/HostIdFile.txt
6. Stop and then restart the Navisphere Agent. NOTE: Navisphere may take some time to update; however, it should update within 10 minutes.
7. Once the Navisphere Agent has restarted, verify that it is using the IP address entered in the agentID.txt file by checking the new HostIdFile.txt file (in the directory listed above for your operating system). You should see the IP address that is entered in the agentID.txt file.

For VMware ESX Server 2.5.0 and later:
1. Confirm that Navisphere Agent is not installed.
2. Using a text editor that does not add special formatting, create or edit a file named agentID.txt in either / (root) or a directory of your choice.
3. Add the hostname and IP address lines as described above. The file should contain only these two lines, with no formatting.
4. Save the agentID.txt file.
5. If you created the agentID.txt file in a directory other than root, set the environment variable EV_AGENTID_DIRECTORY to point to that directory so that subsequent Agent restarts use the correct path.
6. If a HostIdFile.txt file is present in the /var/log/ directory, delete or rename it.
7. Reboot the VMware ESX server.
8. Install and start Navisphere Agent and confirm that it has started. NOTE: Before installing Navisphere Agent, refer to the EMC Support Matrix and confirm that you are installing the correct version. NOTE: Navisphere may take some time to update; however, it should update within 10 minutes.
9. Once the Navisphere Agent has restarted, verify that it is using the IP address entered in the agentID.txt file by checking the new HostIdFile.txt file, located in the /var/log/ directory. You should see the IP address that is entered in the agentID.txt file.

For Microsoft Windows:
1. Using a text editor that does not add special formatting, create a file named agentID.txt in the directory C:\Program Files\EMC\Navisphere Agent.
2. Add the hostname and IP address lines as described above. The file should contain only these two lines, with no formatting.
3. Save the agentID.txt file.
4. If a HostIdFile.txt file is present in the C:\Program Files\EMC\Navisphere Agent directory, delete or rename it.
5. Restart the Navisphere Agent.
6. Once the Navisphere Agent has restarted, verify that it is using the correct IP address entered in the agentID.txt file. Either:
a. In Navisphere Manager, verify that the host IP address is the same as the IP address you entered in the agentID.txt file. If the address is the same, the agentID.txt file is configured correctly.
b. Check the new HostIdFile.txt file. You should see the IP address that is entered in the agentID.txt file.
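For the UNIX/Linux case, the whole procedure condenses to a few shell commands. A minimal sketch for a Linux host using the example hostname and IP from above; the naviagent init-script name is an assumption that varies by platform and agent version:

cat > /agentID.txt <<'EOF'
host28.mydomain.com
192.111.222.3
EOF
rm -f /var/log/HostIdFile.txt                 # remove the stale ID file; the agent rewrites it on restart
/etc/init.d/naviagent restart                 # assumed script name; use your platform's equivalent
grep 192.111.222.3 /var/log/HostIdFile.txt    # after restart, confirm the agent bound to the intended NIC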

What is the Navisphere Host Agent?


We have discussed the CLARiiON and how to create LUNs, RAID Groups, and so on. Before I discuss adding storage to a host, I must discuss the Navisphere Host Agent. This is a very important service/daemon: it runs on the host and communicates with the CLARiiON. Without the Host Agent you cannot register the host with a storage group automatically; you would instead have to register the host manually.

The Host Agent registers the server's HBAs (host bus adapters) with the attached storage system when the Agent service starts. This action sends the initiator records for each HBA to the storage system; initiator records are used to control access to storage-system data. The Agent can then retrieve information from the storage system automatically at startup or when requested by Navisphere Manager or CLI. The Host Agent can also:
- Send drive mapping information to the attached CLARiiON storage systems.
- Monitor storage-system events and notify personnel by email, page, or modem when any designated event occurs.
- Retrieve LUN WWN (world wide name) and capacity information from Symmetrix storage systems.
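Before troubleshooting registration, it is worth confirming the daemon is actually running. A tiny sketch for a UNIX/Linux host; the process name naviagent is an assumption and differs slightly by platform and version:

ps -ef | grep -i [n]aviagent    # the [n] keeps grep from matching its own process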

How do I change the Navisphere Manager password?


You can change the storage system password in Navisphere Manager as follows:
1. Open Navisphere Manager.
2. Click Tools > Security > Change Password.
3. In the Change Password window, enter the old (current) password in the Old Password text box.
4. Enter the new password in the New Password text box, and then enter it again in the Confirm New Password text box.
5. Click OK to apply the new password, or Cancel to keep the current password.
6. In the confirmation popup, click Yes to change the password or No to cancel the change and retain your current (old) password.
Note: If you click Yes, you will briefly see "The operation successfully completed" and then you will be disconnected. You will need to log back in using the new password.

CLARiiON LAB Exercise - Session I


I am going to demonstrate a full LAB exercise on the CLARiiON. If anybody is interested in a specific LAB exercise, please send me a mail and I will try to help and provide it. There are many exercises, such as:
1) Create a RAID Group
2) Bind the LUN
3) Create a Storage Group
4) Register the Host
5) Present the LUN to the Host
6) Create a MetaLUN
etc.

I will try to cover all of these exercises, plus anything extra you need. The easiest way to allocate storage is the Allocation wizard, provided everything is connected and visible to the CLARiiON.

CLARiiON LAB Session I: I am going to demonstrate allocating storage to a host from a CX array using the Allocation Wizard of Navisphere Manager. I will also demo other methods, such as allocating storage without the wizard, because sometimes the host will not log in to the CX frame, and I will discuss the command line for those who are more interested in scripting (see the CLI sketch at the end of this exercise).

Step 1: Log in to Navisphere Manager (take the IP address of any SP in your domain and type it into a browser).

You can see all the CLARiiON arrays listed under each domain.

Step 2: Click Allocation in the menu tree on the left side.

Step 3: Select the host name (the host to which you are going to present the LUN) and click Next. You can select "Assign LUN to this server" or continue without assigning.

Step 4: Click Next and select the CX frame where you want to create the LUN.

Step 5: Click Next. If you have already created a RAID Group, it will be listed here; otherwise you can create a new one by selecting New Raid Group. (I will discuss later how to create the different RAID Group types.)

Step 6: Select a RAID Group ID and, depending on the RAID type, select the number of disks; for example, if you are creating RAID 5 (3+1), select 4 disks. Once you have created the RAID group, it will be listed in the RAID Group dialog box. Click Next and select the number of LUNs you want to create on that RAID Group. For example, a RAID Group created as 3+1 with 500 GB disks gives you roughly 500 GB x 4 x 70% of usable space (one disk's worth of capacity goes to parity, and formatting overhead takes a bit more). You can then carve LUNs of different sizes out of the same RAID Group.

Step 7: Once you have selected the number of LUNs and the size of each LUN, you can verify the configuration before you click the Finish button.

Step 8: Once you click the Finish button, you can watch the status. The system will create a Storage Group named after the server (you can change the storage group name later) and add the newly created LUNs to that Storage Group.

You can verify the entire configuration by clicking the storage group name. This is the end of the first CLARiiON LAB exercise; I hope it is useful for beginners. I will try to cover as much as I can for all the EMC product lab exercises, and if anybody is interested in preparing for an EMC Proven Professional foundation exam, send me a mail and I will try to help.
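As promised above, here is the same allocation done from the command line instead of the wizard. A hedged sketch using classic navicli verbs, with a hypothetical SP address (10.1.1.10), RAID group ID (10), LUN number (23), and host name (host28); verify the switch spellings against the Navisphere CLI reference for your FLARE release:

# Step 6 equivalent: create a four-disk RAID group (disks are addressed as bus_enclosure_slot)
navicli -h 10.1.1.10 createrg 10 0_0_4 0_0_5 0_0_6 0_0_7

# Bind a 100 GB RAID 5 LUN, number 23, on that RAID group
navicli -h 10.1.1.10 bind r5 23 -rg 10 -cap 100 -sq gb

# Step 8 equivalent: create the storage group, connect the registered host, and present the LUN
navicli -h 10.1.1.10 storagegroup -create -gname host28_sg
navicli -h 10.1.1.10 storagegroup -connecthost -host host28 -gname host28_sg
navicli -h 10.1.1.10 storagegroup -addhlu -gname host28_sg -hlu 0 -alu 23   # host sees HLU 0, array LUN 23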
