
SAN Host Installation and Configuration Guide

Version 1.0

Cranford Community College

Title: SAN Host Installation and Configuration Guide
Description: This document covers the process of SAN host installation and configuration, and allocating SAN space to hosts.
Content:
i. Upgrading a Server and making it a SAN Host Diagram (page 2)
ii. Step by Step Guide of Upgrading a Server and making it a SAN Host (page 3)
iii. SAN host installation information (Cranford version) (page 10)
iv. Allocating SAN space to Hosts (page 35)

Version Control

Version: 1.0
Date: 16-06-2008
Reason and Change:
Owner: ESK

Cranford Community College Confidential


i. Upgrading a Server and making it a SAN Host Diagram

The original page shows a flowchart. Its steps, in order, are:

1. Take the server OFFLINE.
2. Stop the Services (step 2).
3. Install BESR on the Server.
4. Server 2000? If Yes: Backup System State (NTBackup Backup SystemState /F C:\systemstate.bkf). If No: skip this step.
5. Take a backup of C: on an external drive via BESR.
6. Take a backup of the remaining drives (step 6).
7. Install new Server Hardware.
8. Use the BESR Boot CD to boot the new server from the CD.
9. Restore Drive C: (step 9).
10. Upgrading the Server? If Yes: do the Server Upgrade and install the necessary drivers. If No: skip this step.
11. Make the Server a SAN Host (see SAN Host Installation Procedure).
12. Restore the remaining drives (step 12).
13. Start all stopped services.
14. Bring the server ONLINE.

For the detailed procedure see: Step by Step Guide of Upgrading a Server and making it a SAN Host.


ii. Step by Step Guide of Upgrading a Server and making it a SAN Host

1. Take the server OFFLINE.
2. Stop the following services:
   - RM Services
   - Backup Exec
   - Symantec Antivirus
   - SQL Server/Agent
3. Install Backup Exec System Restore (BESR) on the server.
4. Take a backup of the System State using the command line below. The backup file name must be systemstate.bkf. (This step is needed ONLY if you are upgrading a 2000 server to 2003; there is no need to run this step if the server is already 2003.)
   - NTBackup Backup SystemState /F C:\systemstate.bkf
5. Take a backup of drive C using BESR and save the backup file on an external FireWire drive.
6. Take a backup of the other drives D, F, G, H using NT Backup or Veritas on an external drive (exclude 1st option).

7. Put the new server hardware in place.
8. Use the BESR Boot CD to boot the new server from the CD.
9. Restore drive C to the new server. (You must load the RAID drivers for the new server at this stage so that the C: partition is recognised for the restore.) The C drive can be resized at this stage. Ignore installing any other drivers, because you will be upgrading to 2003 and those drivers can be installed then.


a. Load the RAID drivers.

b. Open the Command Shell window.


c. Prepare the partition before restoring drive C using the following commands:
   - diskpart
   - list disk
   - select disk 1 (the disk number may be different)
   - clean
   - exit
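As an alternative to typing the commands in step (c) interactively, diskpart can consume a script file via `diskpart /s <file>`. A minimal sketch that generates such a script — the disk number 1 is an assumption; always confirm it with `list disk` first, because `clean` destroys all data on the selected disk:

```python
# Sketch: generate a diskpart script for "diskpart /s clean_disk.txt".
# Assumes the target disk number is 1 -- always confirm with "list disk"
# first, because "clean" destroys all data on the selected disk.

def diskpart_script(disk_number: int) -> str:
    """Return the command sequence from step (c) as a script body."""
    return "\n".join([
        f"select disk {disk_number}",
        "clean",
        "exit",
    ]) + "\n"

if __name__ == "__main__":
    with open("clean_disk.txt", "w") as f:
        f.write(diskpart_script(1))
```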

d. Start the recovery process.


e. View by filename.

f. Select the drive C backup file.


g. Select the partition you want to restore to (that will become drive C).

h. On the next screen press Ctrl+Shift together to allow all 3 boxes to appear.


i. Resize drive C and select Set drive active.

j. Check that all settings are correct.


k. Select Yes to start the recovery.

l. The following window will appear for all the drivers. Install the RAID drivers only and ignore the rest, because they will be installed once the new OS (Server 2003) is installed.

10. Upgrade Windows 2000 Server to Windows Server 2003 and install all the remaining drivers.
11. Make the new server a SAN Host (see SAN Host Installation Procedure).
12. Restore drives D, F, G and H to the new server (because the new server is a SAN Host, the drives appear to be on the server but the data is stored on the SAN). During the restore, ensure that the Restore Security box under advanced options is ticked.
    Important: Because we are upgrading from 2000 to 2003, the RMUsers folder location will have to be changed from drive D to drive H. So when restoring drive D, do not restore the RMUsers folder. Instead, restore this folder separately to the drive H that we have created. Once the RMUsers folder is restored on drive H, the following 2 fixes must be done:
    - Import the drive D quotas from the old server to drive H on the new server and enable quota management (ensure you have exported the old server's drive D quotas before destroying the server).
    - In all registry entries, the target drive must be changed from D:\RMUsers to H:\RMUsers. (To achieve this easily, use the RegReplace.exe tool, but ensure the Server service is stopped before making any registry changes.)
13. Start all stopped services.
14. Bring the server ONLINE.
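The registry fix in step 12 can also be rehearsed offline against an exported .reg file. A hedged sketch only — RegReplace.exe (with the Server service stopped) remains the recommended tool for the live registry, and the sample registry values in the test are made up:

```python
# Sketch: rewrite D:\RMUsers -> H:\RMUsers in an exported .reg file,
# illustrating the step-12 fix on an offline export only. Use
# RegReplace.exe (with the Server service stopped) on the live registry.

def rewrite_rmusers(reg_text: str) -> str:
    """Replace every D:\RMUsers target with H:\RMUsers."""
    # .reg exports often escape backslashes, so handle both forms.
    return (reg_text
            .replace(r"D:\\RMUsers", r"H:\\RMUsers")
            .replace(r"D:\RMUsers", r"H:\RMUsers"))
```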


iii. SAN host installation information (Cranford version as per RM Notes)

Use of this document

Where this document is used by anyone other than those directly employed by RM, please note the following: Whilst every care has been taken to ensure the accuracy of the installation procedure set out in the attached documents and any related materials, they do not constitute advice and should not be relied upon in making (or refraining from making) any decision. Errors and omissions do occur and you should not take the accuracy of the information for granted. Because RM does not have control over the use of the information, any warranty, express or implied, as to the quality, accuracy, timeliness, completeness, performance or fitness for a particular purpose of the materials is excluded. RM will not be liable for any damages (including without limitation damages for loss of profit, revenue or anticipated savings) arising in contract, tort or otherwise from the use of or inability to use the materials or from any action taken (or refrained from being taken) as a result of using the materials.


Table of Contents

Use of this document .......... 10
Table of Contents .......... 11
Product Description .......... 12
Document Scope .......... 12
Terms .......... 12
Information - SAN Setup Overview .......... 14
Procedure - Testing additional network connections .......... 15
Procedure - iSCSI initiator installation .......... 16
Procedure - Tweak TCP parameters .......... 16
Procedure - PowerPath installation .......... 17
Procedure - Install PowerPath Engineering Fix .......... 18
Procedure - iSCSI Initiator configuration .......... 18
Procedure - Navisphere host agent installation .......... 21
Procedure - Navisphere host agent configuration .......... 22
Information - The Navisphere Manager console .......... 23
Procedure - Register the server with storage array .......... 23
Information - LUNS, RAID Groups and Storage Groups .......... 25
Information - LUNS .......... 25
Information - RAID Groups .......... 25
Information - Storage Groups .......... 25
Information - LUN ID and naming conventions .......... 26
Procedure - Check to see created LUNS and Storage Groups .......... 26
Procedure - Create LUNs and Storage Groups .......... 27
Procedure - Load balance new LUNs across Storage Processors .......... 28
Procedure - Create Storage Group .......... 28
Procedure - Add LUNs to Storage Group .......... 28
Procedure - Add host to storage group .......... 29
Procedure - Disk initialisation .......... 30
Procedure - Partition creation .......... 31
Procedure - iSCSI initiator Bound / Persistent settings .......... 32
Procedure - Set service dependencies .......... 32
Procedure - Final tasks .......... 34


Product Description
This document describes the process used by RM engineers to attach a Windows 2003 Server to an iSCSI Storage Area Network (SAN), turning it into a SAN host. The purpose of this is to allow sections of the SAN storage array to be presented to the server so that they can be treated as local drives.

Document Scope
This document is provided to RM's SAN customers for information only. Should you need to carry out any of the processes described in this document, it is strongly recommended that you seek advice from RM Support.

The following points are covered in this document:
- iSCSI SANs with an EMC Clariion CX3-10 or CX300i disk array
- Windows 2003 Server CC3 SR6 and above servers as SAN hosts

The following are not covered:
- Configuration of SAN hardware
- OS versions other than Windows Server 2003 SP1 and above
- Fibre Channel based SAN architectures
- Clustering, virtualisation

Terms
SAN host - A server that is connected into the SAN
Disk Array or Storage Array - The EMC Clariion CX3 series disk storage array
SAN Switches - The Gigabit Ethernet switches that make up the SAN architecture


Windows drives hosted on the SAN array must be configured as Basic Disks. Do not convert to Dynamic Disks under any circumstances.


Information - SAN Setup Overview

Figure 1 - iSCSI SAN overview

The diagram above depicts the iSCSI SAN. iSCSI is SCSI over IP; it uses Ethernet components. The IP details of the Clariion storage array are as follows (some installations may vary):

Port on storage array          IP Address      Switch connection
SPA iSCSI Port 0 (SPA FE0)     172.31.1.150    1
SPA iSCSI Port 1 (SPA FE1)     172.31.2.150    2
SPB iSCSI Port 0 (SPB FE0)     172.31.1.151    1
SPB iSCSI Port 1 (SPB FE1)     172.31.2.151    2
SPA management port            172.31.1.200    1
SPB management port            172.31.1.201    1

The subnet mask is 255.255.255.0 for all SAN components. SAN switch 1 and SAN switch 2 are on separate subnets and there is no connectivity between them; they are completely independent switches that make up the resilient SAN architecture.
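The two-fabric addressing can be sanity-checked programmatically. A small sketch only — the subnets and addresses are taken from the table above and may vary per installation:

```python
# Sketch: verify the SAN addressing plan from the table above.
# Fabric 1 (SAN switch 1) lives on 172.31.1.0/24, fabric 2 on 172.31.2.0/24;
# addresses are from this guide and may vary per installation.
import ipaddress

FABRIC1 = ipaddress.ip_network("172.31.1.0/24")   # SAN switch 1
FABRIC2 = ipaddress.ip_network("172.31.2.0/24")   # SAN switch 2

ARRAY_PORTS = {
    "SPA FE0": ("172.31.1.150", 1),
    "SPA FE1": ("172.31.2.150", 2),
    "SPB FE0": ("172.31.1.151", 1),
    "SPB FE1": ("172.31.2.151", 2),
    "SPA mgmt": ("172.31.1.200", 1),
    "SPB mgmt": ("172.31.1.201", 1),
}

def switch_for(ip: str) -> int:
    """Return which SAN switch (subnet) an address belongs to."""
    addr = ipaddress.ip_address(ip)
    if addr in FABRIC1:
        return 1
    if addr in FABRIC2:
        return 2
    raise ValueError(f"{ip} is not on either SAN subnet")

# Every port in the table should sit on the switch the table claims.
for name, (ip, switch) in ARRAY_PORTS.items():
    assert switch_for(ip) == switch, name
```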


It is vital that the IP and subnet details are entered accurately on all components of the SAN; errors will cause IP routing issues leading to instability.

Procedure - Testing additional network connections


1. On the server, rename the network connections to SAN Switch 1 and SAN Switch 2 as appropriate.
2. Change the IP addresses on the network connections as follows:

   SAN host          Connection into SAN switch 1   Connection into SAN switch 2
   First SAN host    172.31.1.101                   172.31.2.101
   Second SAN host   172.31.1.102                   172.31.2.102
   Third SAN host    172.31.1.103                   172.31.2.103
   nth SAN host      172.31.1.10n                   172.31.2.10n

   The subnet mask is 255.255.255.0. No gateway or DNS settings. If the server is already a DNS server (i.e. you are migrating an existing CC3 DC) you will receive a warning about DNS; OK this.
3. On each of the SAN network connections, open IP properties / Advanced / DNS tab and untick "Register this connection's addresses in DNS".
4. If this server is already a DNS server (i.e. you are migrating an existing CC3 DC), do the following:
   i. Open Administrative Tools, DNS.
   ii. Right-click the server name, Properties.
   iii. On the Interfaces tab, select the "Only the following IP addresses" option.
   iv. Highlight and remove each of the SAN IP addresses. Only the LAN IPs should remain. Apply and OK.
   v. Expand the forward lookup zones. Sort by data, and delete entries that relate to the SAN connections.
   vi. Close the DNS console.
5. Identify SAN switch 1 and SAN switch 2. (SAN switch 1 has the connections to the array management ports.) Connect the server into the switches.
6. From a command prompt, ping all six IPs on the array as per the table above (i.e. the four iSCSI ports and the 2 management ports). Ensure you have connectivity before continuing. If you only have a single network card connecting into the SAN, this must connect into SAN switch 1. Servers with only one connection into the SAN are not suitable for critical or disk-intensive applications, i.e. CC3.
7. Open Network Connections from Control Panel. From the Advanced menu, select Advanced Settings.
8. On the Adapters and Bindings tab, in Connections (the top box), move the active LAN connection to the top of the list using the up and down arrows next to the connection box. On a standard CC3 network, this may be called Default LAN


Connection. If adapter teaming or similar is in use on the server, the connection relating to the teamed adapter should be at the top of the list, not the team members. Move any disabled or inactive connections to the bottom of the list. This stage is necessary to support APC's PowerChute UPS software, as the standard version will only support agents on a single subnet.
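The host-numbering scheme in step 2 of this procedure can be expressed as a small helper. A sketch only, assuming the 172.31.x.10n convention shown in the table (host numbers beyond 99 would need a different scheme):

```python
# Sketch: derive the SAN connection IPs for the nth SAN host, following
# the 172.31.x.10n convention (first host .101, second .102, ...).
# Assumption: host numbers run 1-99, giving final octets 101-199.

def san_host_ips(n: int) -> tuple[str, str]:
    """Return (switch-1 IP, switch-2 IP) for SAN host number n."""
    if not 1 <= n <= 99:
        raise ValueError("host number must be 1-99 under this scheme")
    return (f"172.31.1.{100 + n}", f"172.31.2.{100 + n}")
```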

Procedure - iSCSI initiator installation

This section requires a server restart. You will require the Microsoft iSCSI initiator software version 2.04.

1. On the server, run the Microsoft iSCSI initiator version 2.04 MSI.
2. When the Software Update Installation Wizard opens, click Next.
3. On the Installation Options screen, select the following (as per the picture):
   - Initiator Service
   - Software Initiator
   - Microsoft MPIO Multipathing Support for iSCSI

Figure 2 - iSCSI initiator installation

4. Click Next.
5. Read and agree to the license agreement and click Next to install the software.
6. Click Finish, allowing the server to reboot.

Procedure - Tweak TCP parameters

This section requires a server restart.

1. Log on to the server. Run Regedit.
2. Locate and then click the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces


3. The interfaces will be listed below the Interfaces folder by automatically generated GUIDs, for example {064A622F-850B-4C97-96B3-0F0E99162E56}.
4. Identify the relevant GUIDs by the IP address value contained within them. You only want to add the registry entries into the 2 GUIDs that relate to the SAN connections. (Note there may be more GUIDs present than physical network connections in the server; this is not a problem.)
5. For each of the 2 SAN interface GUIDs, select the GUID and perform the following steps: select Edit > New > DWORD Value, name the new value TcpAckFrequency and assign it a value of 1.
6. Repeat for the other SAN connection.

Figure 3 - TcpAckFrequency settings

7. Exit the Registry Editor.
8. Restart Windows for this change to take effect.
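The TcpAckFrequency change can also be captured as a .reg file for repeatable application. A sketch that generates one — the GUID used in the example is a placeholder; substitute the two SAN interface GUIDs identified in step 4:

```python
# Sketch: emit .reg file text that sets TcpAckFrequency=1 on the SAN
# interface GUIDs. Any GUIDs passed in are placeholders -- use the ones
# identified in step 4 of the procedure, not invented ones.
KEY = r"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces"

def tcp_ack_reg(guids: list[str]) -> str:
    """Build .reg file text setting TcpAckFrequency=1 for each GUID."""
    lines = ["Windows Registry Editor Version 5.00", ""]
    for guid in guids:
        lines.append(f"[{KEY}\\{guid}]")
        lines.append('"TcpAckFrequency"=dword:00000001')
        lines.append("")
    return "\n".join(lines)

if __name__ == "__main__":
    print(tcp_ack_reg(["{064A622F-850B-4C97-96B3-0F0E99162E56}"]))
```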

Procedure - PowerPath installation

This section requires a server restart. You will require the EMC PowerPath CD and a PowerPath license code.

1. Log on to the server.
2. Right-click My Computer, Properties, Hardware, Driver Signing and change the setting to Ignore. OK, OK.
3. Insert the PowerPath CD. Close the Autorun explorer window.
4. Browse the CD to \PP460\w2003\ and run EMCPP.W2003_32.4.6.0.GA.exe. It takes a few seconds to launch.


5. In the Choose Language Setup window, select English (United States) as the language you want for this installation and click OK.
6. You may receive an information screen about MPIO framework versions. Read it and click OK.
7. In the setup wizard Welcome window, click Next.
8. In the CLARiiON AX-Series window, select No and click Next.
9. In the Customer Information dialog box, enter your name and organization and click Next.
10. If the Destination Folder dialog box opens, click Next to install PowerPath in the default directory.
11. In the Ready to Install the Program dialog box, click Install.
12. In the EMC PowerPath Licensing Tool window, enter your 24-digit registration number in the License Key field, and click Add. Make a note of the entry in the Capabilities column (e.g. All or Base). Then OK. Note: if you are installing a server with only one NIC connection into the SAN, you can leave the code blank.

Figure 4 - PowerPath Licensing

13. Finish. Yes to reboot.

Procedure - Install PowerPath Engineering Fix

This section requires a server restart. You will require the EMC PowerPath Engineering Fix 1 file.

1. Log on to the server.
2. Right-click My Computer, Properties, Hardware, Driver Signing and change the setting to Ignore. OK, OK. (This is necessary as the setting may have been reset by a policy.)
3. Run EMCPower.W2003_32.4.6.0.EF1.GA.exe.
4. Next, Install, Finish, Yes to reboot.


Procedure - iSCSI Initiator configuration

The iSCSI initiator applet can be accessed via Control Panel. The installation may also have put a shortcut on the desktop. Note: during the installation we are not using iSNS.

1. Log on to the server.
2. Launch the Microsoft iSCSI initiator applet (from the desktop or Control Panel).
3. Select the Discovery tab. Under the Target Portals (top) section, select Add.
4. Enter the IP address of iSCSI Port 0 on Storage Processor A (SPA FE0). This will normally be 172.31.1.150. Leave the port at 3260 and select Advanced. In the Local adapter box, select Microsoft iSCSI Initiator from the drop-down. Leave the other settings alone.

Figure 5 - iSCSI initiator setup

5. Select OK and OK to return to the iSCSI Initiator Properties, Discovery tab.
6. Don't add any other IPs from the storage array into the Target Portals box.

Figure 6 - iSCSI initiator setup

7. On the Targets tab, four entries should be present relating to the four iSCSI front end (FE) ports on the storage array. They will all show as inactive.


Figure 7 - iSCSI initiator setup

8. Select each entry in turn and select Log On.

Figure 8 - iSCSI initiator logon

9. In the Log On to Target dialogue, select both options (Automatically restore and Enable multi-path) and select OK.
10. Repeat the above for each of the 4 entries. If you make a mistake and miss either option, highlight the entry, select Details, tick the box in the Identifier field and select Log Off and OK. You can then repeat the logon process correctly.

Figure 9 - iSCSI initiator connected

11. When all connections are connected, select OK to close the iSCSI initiator applet.


Procedure - Navisphere host agent installation

This section requires a server restart. You will require the EMC CX-Series Server Support Products CD.

The Navisphere host agent installation cannot overwrite an existing agent. If a version of the product is already installed, you will need to uninstall it before proceeding.

1. Log on to the server.
2. Insert the CX-Series Server Support Products CD in the server drive. It will autorun.
3. The CX-Series Server Support Products menu opens.
4. From the main menu select Install Products on Server. The INSTALL PRODUCTS page opens.
5. Select Navisphere Host Agent.
6. If you have not already removed a previous version of the Agent, a dialog box opens to inform you that you must remove the installed version.
7. In the Navisphere Agent Setup dialog box, click Next at the Welcome screen.
8. Continue; Yes to create the directory structure; Next.
9. In the License Agreement dialog box, read the license agreement, and click Yes to accept the terms.
10. In the Customer Information dialog box, enter the appropriate information, and click Next.
11. In the Choose Destination Location dialog box, click Next to select the default location.
12. When you are prompted about using the Microsoft iSCSI Initiator, click Yes, as we are using the Microsoft iSCSI Software Initiator.
13. Depending on your operating system, the default location is one of the following:
    - On Windows Server 2003 (32-bit): C:\Program Files\EMC\Navisphere Agent
    - On Windows Server 2003 (64-bit): C:\Program Files (x86)\EMC\Navisphere Agent
14. The setup program copies files to the destination folder, and then displays the message "Navisphere Agent Service installed successfully".
15. In the Navisphere Agent Installer dialog box, click OK.
16. The Initialize Privileged User List dialog box opens so you can add privileged users to a new or existing Host Agent configuration file.
17. Add two users into the Privileged User List. To add a user to the list, click Add. The Add Privileged User dialog box opens:
    - In User Name, enter system (lowercase).
    - In System Name, enter the IP address of the management port of Storage Processor A (172.31.1.200).
    - Click OK.
18. Repeat the above to add a user relating to the management port of Storage Processor B (172.31.1.201).


Figure 10 - Navisphere Host Agent setup

19. Click OK to save the new privileged user list and OK at the confirmation screen. The privileged user information is stored in the agent.config file.
20. The program saves the Host Agent configuration file with the new privileged user entries and starts the Host Agent.
21. In the Navisphere Agent Setup dialog box, click Finish. A command line window opens indicating that the Host Agent service is starting.
22. If you are prompted to reboot the server, click Yes. Otherwise, reboot anyway.

Procedure - Navisphere host agent configuration


1. Log on to the server and browse to C:\Program Files\EMC\Navisphere Agent\
2. Create a text file called agentid.txt in this directory.
3. Edit the file in Notepad to contain the following information:
   - On the first line, type the fully qualified domain name of the server, e.g. Server1.school.internal
   - On the second line, type the IP address of the network connection into SAN switch 1, i.e. the one in the 172.31.1.x subnet (172.31.1.101, 172.31.1.102 etc.)

   The IP address of the connection with access to the array management ports must be provided (i.e. the connection into SAN switch 1). This information is used to provide the array with the server's IP details for communicating via the array management ports.
4. Ensure there is a carriage return at the end of the second line.
5. Save the file and exit Notepad. Ensure it is named agentid.txt.
6. Stop and restart the Navisphere Agent service.
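The agentid.txt format above is easy to get subtly wrong (line order, trailing newline, wrong subnet). A hedged sketch that builds the two-line file content — the FQDN and IP in the test are examples only:

```python
# Sketch: build agentid.txt content per the procedure above --
# line 1: the server's FQDN, line 2: its SAN switch 1 IP, with a
# trailing newline after the second line (step 4).
import ipaddress

def agentid_txt(fqdn: str, san1_ip: str) -> str:
    """Return agentid.txt content; san1_ip must be on 172.31.1.0/24."""
    if ipaddress.ip_address(san1_ip) not in ipaddress.ip_network("172.31.1.0/24"):
        raise ValueError("agentid.txt needs the SAN switch 1 (172.31.1.x) address")
    return f"{fqdn}\n{san1_ip}\n"
```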


Information - The Navisphere Manager console

The management of the Clariion storage array is done through the Navisphere Manager console; this is the Clariion's web-based management interface. Navisphere Manager requires the Java Runtime to be installed on the server or (optionally) a management workstation on the SAN. One or more servers may have Java installed as part of the SAN hardware commissioning service; if so, do not install it onto further servers without consulting the customer. You may need to make proxy exceptions or disable the proxy to connect to the Navisphere Manager console.

The address is http://ipaddress_of_Management_interface:2162, for example http://172.31.1.200:2162

Note: On some installations the port number is 80 instead of 2162. Either management interface (SPA or SPB) can be used. It is necessary to type the http:// if port 2162 is used. You may need to add the site to trusted sites / allow downloads etc. Usernames and passwords are both case sensitive.

Procedure - Register the server with storage array


1. Log on to the Navisphere Manager console.
2. Ensure that the Storage tab is selected in the Enterprise Storage window (the main window).
3. On the Storage tab, right-click the array (this is the first level under Domain and will typically be named after the school/college). Select Connectivity Status. A Connectivity Status window will open.
4. Locate the four connections relating to the iSCSI initiator on the server. Ensure that they show as Yes in both the Registered and Logged In columns. The information in this screen is not real-time, so it may be necessary to wait a couple of minutes if the Navisphere agent service or server has just been restarted. If necessary, close the Connectivity Status window, right-click the array and select Update Now before relaunching the Connectivity Status window.
5. The four initiators should show as registered and logged in, and the server name should match the entry in the agentid.txt file. If this is not the case, wait and refresh. Do not continue.


Figure 11 - Navisphere manager - Connectivity Status

6. If all shows OK, cancel the Connectivity Status window and select the Hosts tab (next to the Storage tab, not the Hosts entry under the array).
7. The server should be shown here with the FQDN as typed into the agentid.txt file, and the symbol should not have a U (for unmanaged) through it. Do not continue if this is not the case.

Figure 12 - Navisphere Manager - Hosts tab


Information - LUNS, RAID Groups and Storage Groups

Information - LUNS

Disk space on the SAN storage device is divided into sections referred to as LUNs (Logical Units or Logical Unit Numbers). Each SAN host is configured to see one or more LUNs; these appear to the host operating system as local hard drives. The process of creating a LUN is referred to as binding. The process of destroying a LUN is called unbinding; it is destructive to all data on that LUN, so use it only with extreme caution. On the Clariion CX arrays, LUNs are owned by one or other Storage Processor, SPA or SPB. On a resilient switched architecture (which we have), the ownership is only important for load balancing purposes, since all hosts can communicate with both storage processors. Some LUNs will have been configured as part of the SAN hardware commissioning service.

Information - RAID Groups

The physical disks in the storage array are grouped into RAID Groups. Key points:
- The LUNs reside on top of the RAID groups. Each RAID group can hold multiple LUNs.
- Each LUN within a RAID group has the same RAID level (type). An array can comprise multiple RAID groups, each of which can have a different RAID level.
- A Clariion array may contain 2 different disk technologies, Fibre Channel (FC) and SATA.
- FC disks are high performance, so LUNs on RAID groups on FC disks should be used for primary data, e.g. Exchange mail stores, CC3 user home directories and data.
- SATA disks are lower performance and should be used mainly for secondary data, e.g. backup to disk (part of a hybrid D2D2T solution) or multimedia images.
- The first enclosure of disks will always be FC. Subsequent enclosures (if present) can be FC or SATA. Disk technologies cannot be mixed within an enclosure.
- Disks within a RAID group must be of the same technology, i.e. FC or SATA.
- Disks within a RAID group should be of the same size.
- We recommend RAID 5 (default recommendation) or RAID 10.

The RAID groups will have been configured as part of the SAN hardware commissioning service.


Information - Storage Groups

The LUNs are tied to their owning servers by placing them both into a storage group. Golden rules:
- Each server will have its own storage group; only one server per storage group. Do not attempt to add a server to more than one storage group; data corruption will occur.
- A LUN must appear in only one storage group. Do not attempt to put a LUN into more than one storage group; data corruption will occur.
- A storage group can (and will) contain multiple LUNs.
- A LUN should exist for each server drive that will be hosted on the SAN, i.e. for a new CC3 server, a total of 4 LUNs would exist (for D:, F:, G: and H:).

Some storage groups and LUNs will have been configured as part of the SAN hardware commissioning service.

Information - LUN ID and naming conventions

The LUNs and LUN properties can be viewed in several places within the Navisphere Manager console. For example, a LUN is visible under its RAID group and under its owning Storage Processor (SPA or SPB); it may also appear under user-customisable folders on some versions of Navisphere.
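The storage-group golden rules above lend themselves to a simple pre-flight check on a planned layout. A sketch only — the host and LUN names in the test are hypothetical, and real assignments come from Navisphere, not a script:

```python
# Sketch: validate a planned storage-group layout against the golden
# rules above: one server per storage group, and each LUN in exactly
# one storage group. Example data is hypothetical.

def validate_storage_groups(groups: dict[str, tuple[str, list[int]]]) -> list[str]:
    """groups maps storage-group name -> (server, [LUN IDs]); returns rule violations."""
    errors = []
    seen_servers: dict[str, str] = {}
    seen_luns: dict[int, str] = {}
    for group, (server, luns) in groups.items():
        if server in seen_servers:
            errors.append(f"server {server} is in both {seen_servers[server]} and {group}")
        seen_servers[server] = group
        for lun in luns:
            if lun in seen_luns:
                errors.append(f"LUN {lun} is in both {seen_luns[lun]} and {group}")
            seen_luns[lun] = group
    return errors
```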

Figure 13 - LUN naming conventions

The above diagram depicts 3 LUNs on RAID Group 0. With reference to this diagram, the format of the LUN string is as follows:
A. [MINS-SVR-001 D:] The LUN Name, as manually entered on the General tab of the LUN Properties window. This can be changed; the default LUN Name is the same as the LUN ID.
B. [0;] The LUN ID; a unique number between 0 and 1023. The LUN ID is specified when the LUN is bound and cannot be changed.
C. [RAID 5;] The RAID type of the RAID group that the LUN is housed on.
D. [MIN-SVR-001.MINSTER.INTERNAL D:\;] This information is updated by the Navisphere Host Agent when the storage group is configured and the drives are partitioned. It is not updated in real time; restarting the Navisphere Host Agent service will prompt an update.
E. [FC] The disk technology of the RAID group that houses the LUN. It may show FC, or blank, for Fibre Channel; ATA for SATA.
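As a quick sanity check of the format above, the display string can be split back into its parts programmatically. This is an illustrative sketch only: it assumes the exact "Name [ID; RAID type; host info; tech]" layout shown in Figure 13, which may vary between Navisphere versions.

```python
# Sketch: parse the Navisphere LUN display string into its parts (A-E above).
# Assumes the "Name [ID; RAID type; host info; tech]" layout from Figure 13.

def parse_lun_string(s):
    name, _, rest = s.partition("[")
    fields = [f.strip() for f in rest.rstrip("]").split(";")]
    return {
        "name": name.strip(),
        "lun_id": int(fields[0]),          # unique, 0-1023, fixed at bind time
        "raid_type": fields[1],
        "host_info": fields[2],
        "disk_tech": fields[3] or "FC",    # blank also means Fibre Channel
    }

lun = parse_lun_string(r"MINS-SVR-001 D: [0; RAID 5; MIN-SVR-001.MINSTER.INTERNAL D:\; FC]")
print(lun["lun_id"], lun["raid_type"], lun["disk_tech"])
```

Breaking the string apart like this is handy when auditing a large array export against the naming convention.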


Procedure - Check existing LUNs and Storage Groups


1. On the Storage tab in Navisphere Manager, expand the Storage System, browse down the tree and expand the Storage Groups node.
2. If the required storage group already exists for the server (this should be obvious from the storage group name), carry out the following:
   i. Right-click the storage group and select Properties.
   ii. Check the LUNs tab to see which LUNs have been assigned to this storage group (the Selected LUNs section).
   iii. OK and confirm.

Procedure - Create LUNs and Storage Groups
This section is only necessary if the LUNs and storage groups were not created as part of the SAN hardware commissioning service.
Ensure that you have read and understood the information on LUNs, RAID Groups and Storage Groups. If you are unsure, contact your support provider for advice.

To create a LUN:
1. Within Navisphere Manager, identify a RAID group of the required RAID type and disk technology with sufficient capacity to host the LUN. Highlighting the RAID group and viewing its properties will assist with this. Note that when creating a LUN, the maximum size permitted is the contiguous free space on the RAID group, not the total free space.
2. Once the correct RAID group has been identified, right-click it and select Bind LUN. The Bind LUN window will open. The Bind LUN dialogue can be misleading, as it shows the available options for the parameters chosen; take care to ensure that the LUN is bound onto the intended RAID group.
3. Check the RAID type and RAID group for the new LUN.
4. Select the LUN size required and select an appropriate LUN ID from the dropdown box. Note that the LUN ID cannot be changed once the LUN is created, so be sensitive to the customer's LUN numbering convention, e.g. RAID Group 0 may house LUN IDs 0-49, RAID Group 50 may house LUN IDs 50-99, and so on. This is a recommended naming convention.


Figure 14 - Bind LUN window

5. Click Apply and OK to create the LUN. This process will be instant on new systems; on established systems the LUN may show as transitioning for a period of time.
6. Navisphere Manager returns to the Bind LUN screen. Create any further LUNs required, or cancel the Bind LUN window.
7. Locate the new LUN under the relevant RAID group and open its properties. On the General tab, the default LUN name will be the same as the LUN ID. Change the LUN name to comply with the customer's naming convention; for example, the LUN name may indicate the intended server and drive letter.

Figure 15 - LUN Properties General tab
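The same bind operation can also be scripted with EMC's Navisphere CLI rather than the GUI. The sketch below only assembles the command string: the flag layout shown (-rg, -cap, -sq) follows the classic navicli/naviseccli bind syntax but may differ between CLI and FLARE versions, and the SP address is a placeholder, so verify against your array's CLI documentation before running anything.

```python
# Sketch: assemble the equivalent CLI bind command for steps 2-4 above.
# "naviseccli" is EMC's Navisphere CLI; the flags shown may vary between
# CLI/FLARE versions -- verify with the array's CLI documentation first.
# This code only builds the string; it runs nothing against the array.

RAID_TYPES = {"RAID 5": "r5", "RAID 1/0": "r1_0"}

def bind_command(sp_address, raid_type, lun_id, raid_group, size_gb):
    if not 0 <= lun_id <= 1023:               # LUN IDs are 0-1023 (see above)
        raise ValueError("LUN ID must be between 0 and 1023")
    return (f"naviseccli -h {sp_address} bind {RAID_TYPES[raid_type]} "
            f"{lun_id} -rg {raid_group} -cap {size_gb} -sq gb")

# Bind a 100 GB RAID 5 LUN with ID 52 on RAID Group 50:
print(bind_command("172.31.1.200", "RAID 5", 52, 50, 100))
```

Generating the command per LUN from a small table of planned LUNs helps keep the IDs inside the per-RAID-group ranges recommended in step 4.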

Procedure - Load balance new LUNs across Storage Processors
Once the new LUNs are created, ensure that ownership of the LUNs is load balanced across the storage processors, SPA and SPB, by opening these two folders under the LUNs folder.


Note: All LUNs on a SATA disk RAID group should have the same owner, i.e. SPA or SPB. LUNs on Fibre Channel disk RAID groups can be shared across SPA and SPB. Only move LUNs that you have just created, if necessary.
1. Identify any newly created LUNs that need relocating to the other storage processor.
2. Open the LUN properties and change the default owner on the General tab. Apply, confirm and OK. This changes the default ownership.
3. Right-click the LUN and select Trespass; confirm and OK. This moves the current ownership to the other storage processor.
4. Repeat as necessary.
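The balancing rule in the note above can be sketched as a small helper: LUNs on SATA RAID groups all stay on one SP, while FC LUNs alternate between the two. This is an illustrative simplification only; a real assignment would also consider existing LUN ownership and per-RAID-group placement, and the assignment itself is still made manually in Navisphere as in steps 1-3.

```python
# Sketch: pick default owners for newly created LUNs so load is spread
# across SPA and SPB -- SATA LUNs share one owner, FC LUNs alternate.
# Illustrative only; not EMC tooling.

def assign_owners(luns):
    """luns: list of (lun_id, disk_tech) tuples; returns {lun_id: owner}."""
    owners, fc_count = {}, 0
    for lun_id, tech in luns:
        if tech == "SATA":
            owners[lun_id] = "SPA"            # keep all SATA LUNs together
        else:
            owners[lun_id] = "SPA" if fc_count % 2 == 0 else "SPB"
            fc_count += 1
    return owners

print(assign_owners([(0, "FC"), (1, "FC"), (50, "SATA"), (51, "SATA")]))
# -> {0: 'SPA', 1: 'SPB', 50: 'SPA', 51: 'SPA'}
```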

Procedure - Create Storage Group


1. Navigate to the Storage Groups folder in Navisphere Manager.
2. Right-click Storage Groups and select Create Storage Group.
3. Enter a name for the storage group. Ensure the name complies with the customer's naming convention; it is strongly advised that the storage group name identifies the server, for example "Server1 Storage Group".
4. OK and confirm to create the storage group.

Procedure Add LUNs to Storage Group


1. Open the properties of the Storage Group and select the LUNs tab.

Figure 16 - Storage Group Properties, LUNs tab

2. Expand SPA and SPB in the Available LUNs section and select the required LUNs. (Note that the version of Navisphere Manager on the CX300 has a slightly different interface to the one shown here.)


Do not change the Show LUNs dropdown from "Not in other Storage Groups". This is a safeguard to prevent potential data corruption.

3. On the version of Navisphere depicted above, it is possible to set the Host ID manually, so that you can dictate the order in which the LUNs are presented in Disk Manager. This option is only available when you add a LUN to the storage group; it cannot be done retrospectively. In the Selected LUNs section, highlight the space in the Host ID column and select an ID from 0 to 7 from the dropdown. Repeat for each LUN. This is the order in which the disks will show in Disk Manager.
4. OK and confirm.
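Choosing Host IDs by hand is error-prone when several LUNs are added at once. The sketch below assigns IDs 0-7 in drive-letter order, so the disks surface in Disk Manager as D:, F:, G:, H:. This is an illustrative helper only; in Navisphere the ID is still picked from the dropdown in step 3.

```python
# Sketch: assign Host IDs 0-7 so LUNs appear in Disk Manager in
# drive-letter order. Illustrative planning helper, not Navisphere code.

def host_id_order(drive_letters):
    ordered = sorted(drive_letters)          # alphabetical = intended order
    if len(ordered) > 8:
        raise ValueError("only Host IDs 0-7 are available")
    return {letter: host_id for host_id, letter in enumerate(ordered)}

print(host_id_order(["G:", "D:", "H:", "F:"]))
# -> {'D:': 0, 'F:': 1, 'G:': 2, 'H:': 3}
```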

Procedure - Add host to storage group


1. Open the properties of the Storage Group and select the Hosts tab.

Figure 17 - Storage Group Properties, Hosts tab

2. On the Hosts tab, the required server should be visible in the Available Hosts section. Move it to the Hosts to be Connected section. DO NOT ADD MORE THAN ONE SERVER INTO A STORAGE GROUP. Do not change the Show Hosts dropdown from "Not Connected"; this is a safeguard to prevent potential data corruption.
3. OK and confirm.

With the LUNs and the host both in the storage group, the disks will be available to the server.

Procedure - Disk initialisation


1. Log on to the server.
2. Run Computer Management and select Disk Management.


3. The LUNs should appear as Unknown disks. If they are not showing, wait a few moments, then right-click Disk Management and select Rescan Disks.

Figure 18 - Disk Management

4. Right-click one of the uninitialised disks and select Initialise. (To access this option you need to select the section where the disk ID and the red circle are, i.e. Disk 1, Disk 2.)
5. Select all of the uninitialised disks. If prompted, do NOT convert the disks to dynamic.

Drives hosted on the SAN array must be configured as Basic Disks. Do not convert to Dynamic Disks under any circumstances.

Procedure - Partition creation
The partition on each basic disk must be created through the Diskpart.exe command-line utility, as this utility contains required functionality that is not available through Disk Manager. The required functionality is only available in the Windows 2003 SP1 or later versions of Diskpart.
This document is for hosts with Windows 2003 SP1 or later. If your host doesn't have SP1 or later, you can use the Windows resource kit tool Diskpar (without the final "t") to achieve the same result. The syntax for Diskpar differs from Diskpart; instructions for using Diskpar are not included here.

1. In Disk Management, make a note of the numbers of the new disks, i.e. Disk 1, Disk 2, etc.
2. Open a command prompt on the server and run diskpart.exe.


3. At the Diskpart prompt, type "select disk X" and press Enter, where X is the number of the disk you want to partition. The change of selected disk will be confirmed.
4. At the Diskpart prompt, type "create partition primary align=64" and press Enter.
5. You will see the change reflected in Disk Management: a primary partition will have been created using the full capacity of the disk.
6. Repeat the above for the other disks. When complete, type "exit" to leave Diskpart.
7. Return to Disk Management, format the partitions with NTFS and assign drive letters.
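The align=64 figure is worth understanding rather than copying blindly: it asks Diskpart for a partition offset aligned to 64 KB, which keeps host I/O from straddling the array's stripe element boundaries (64 KB is the usual Clariion element size, but check your RAID group's actual element size). A quick sketch of the arithmetic, assuming 512-byte sectors:

```python
# Sketch: the arithmetic behind "create partition primary align=64".
# A 64 KB alignment expressed in 512-byte sectors.

SECTOR_BYTES = 512

def aligned_offset_sectors(align_kb):
    return align_kb * 1024 // SECTOR_BYTES

print(aligned_offset_sectors(64))   # -> 128 sectors
```

This is also why the older Diskpar tool, which takes its offset in sectors rather than kilobytes, needs the value 128 to achieve the same alignment.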

Procedure - iSCSI initiator Bound/Persistent settings
These settings ensure that the SAN-based partitions are initialised in a timely fashion during server boot.
1. Open the iSCSI Initiator applet from Control Panel or the desktop.
2. Select the Bound Volumes/Devices tab and select the Bind All option. The window should then populate with the SAN-based partition drive letters. If this is the case, select OK.

Figure 19 - iSCSI initiator settings

Procedure - Set service dependencies
This procedure makes the Server service dependent on the iSCSI Initiator service. This ensures that the server does not attempt to recreate shares until the iSCSI Initiator is fully started and the partitions are available. This procedure is only required on CC3 servers.


There are Microsoft articles indicating that this modification is not required on the current version of the iSCSI initiator; however, we have evidence to the contrary.

1. Log on to the server and run regedit.
2. Navigate to HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\lanmanserver
3. Add a new multi-string value (REG_MULTI_SZ) and name it DependOnService
4. Set the value of the string to MSiSCSI
5. Close regedit.
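The same change can be applied by importing a .reg file, which is less error-prone to repeat across several CC3 servers. The hex below is "MSiSCSI" encoded as a REG_MULTI_SZ. Note that importing this replaces any existing DependOnService value on lanmanserver, so check the key first as described above; treat this as a sketch to verify before use.

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver]
"DependOnService"=hex(7):4d,00,53,00,69,00,53,00,43,00,53,00,49,00,00,00,00,00
```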

Figure 20 - Setting service dependencies

6. Open Computer Management and select Services. Open the properties for the Server service and select the dependencies tab. If the modification to the registry was successful, the Microsoft iSCSI initiator service will show in the top section.


Figure 21 - Setting service dependencies

7. Reboot the server and ensure that there are no errors at startup.

Procedure - Final tasks
This section contains details of any additional post-commissioning or post-migration tasks.

DNS
If this server is a DNS server (i.e. a CC3 DC) and this has not already been done, carry out the following:
1. Administrative Tools, DNS.
2. Right-click the server name and select Properties.
3. On the Interfaces tab, select the "Only the following IP addresses" option.
4. Highlight and remove each of the SAN IP addresses; only the LAN IPs should remain. Apply and OK.
5. Expand the forward lookup zones. Sort by data, and delete entries that relate to the SAN connections.
6. Close the DNS console.


iv. Allocating SAN space to Hosts

1. Log on to a SAN host (e.g. Cranford-w2k-4) and enter 172.31.1.200 in the IE browser address bar.
2. Log in using the correct credentials.

3. Go to the RAID Group.


4. Right-click and select Bind LUN.

5. On the Bind LUN window select:
- RAID Group (0 or 50)
- LUN ID (the next available ID will be picked up automatically)
- SPA or SPB
- LUN size (MB or GB)


6. The new LUN appears under the RAID group.

7. Go to Storage Groups


8. Go to the correct server and select LUNs.

9. There are two ways of increasing the space of a drive:
A - Expand (the extra space appears immediately)
B - Migrate (this procedure takes longer, and the LUN being migrated to must be the same size or larger)

A - Expand Method

10A. Right-click the drive you want and select Expand.


11A. Click Next on the new wizard.

12A. Select Concatenation.


13A. Select the new available LUN created earlier.

14A. Click Next.


15A. Ensure the information is correct before clicking Next.


16A. Click Finish.


17A. Going back to Storage Groups\Server Name\LUNs and expanding the drive you just increased, you can see the components that make up the drive.

B - Migrate Method (note: the LUN you are migrating to must be the same size or larger)

10B. Right-click the partition you are migrating and select Migrate.


11B. Select the new LUN created earlier.

12B. Click Yes


13B. Click OK.

14B. As you can see, the status of the drive is Migrating.


15B. While in the migrating stage, the drive also appears under LUN Folders\MetaLUNs.

16B. Once the migration process finishes, this is how the status appears.

17B. End of migration process.
18. End of document.
