
Present NetApp iSCSI LUN to Linux host

Consider the following scenario (which is in fact a real case). You have a High Performance Computing (HPC) cluster where users generate a lot of research data, and the local hard drives on the frontend node are almost always insufficient. There are two options. The first is presenting an NFS share to both the frontend and all compute nodes. Since compute nodes usually connect only to a private network for communication with the frontend and don't have public IP addresses, that means a lot of reconfiguration, not to mention possible security implications. The simpler solution here is iSCSI. Unlike NFS, which requires direct communication, with iSCSI you can mount a LUN on the frontend, and the compute nodes will then work with it as an ordinary NFS share over the private network.

This implies configuring an iSCSI LUN on a NetApp filer and bringing up an iSCSI initiator in Linux. The iSCSI configuration consists of several steps. First of all you need to create a FlexVol volume where your LUN will reside, and then create a LUN inside it. The second step is creating an initiator group, which enables connectivity between the NetApp and a particular host. As a last step you map the LUN to the initiator group, which lets the Linux host see the LUN. In case you disabled iSCSI, don't forget to enable it on the required interface.

vol create scratch aggrname 1024g
lun create -s 1024g -t linux /vol/scratch/lun0
igroup create -i -t linux hpc
igroup add hpc linux_host_iqn
lun map /vol/scratch/lun0 hpc
iscsi interface enable if_name

The Linux host configuration is simple. Install the iscsi-initiator-utils package and add it to init on startup. The iSCSI IQN which the OS uses to connect to iSCSI targets is read from /etc/iscsi/initiatorname.iscsi at startup. After the iSCSI initiator is up and running, initiate the discovery process, and if everything goes fine you will see a new hard drive in the system (I had to reboot). Then you just create a partition, make a file system and mount it.

iscsiadm -m discovery -t sendtargets -p nas_ip
fdisk /dev/sdc
mke2fs -j /dev/sdc1
mount /dev/sdc1 /state/partition1/home

I use it for the home directories in the ROCKS cluster suite. ROCKS automatically exports /home over NFS to the compute nodes, which in turn mount it via autofs. If you intend to use this volume for other purposes, you will need to configure your own custom NFS export.
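A couple of sketches to fill in the gaps above. On a reasonably recent open-iscsi stack you can usually log in to the discovered target without a reboot; the target IQN below is a placeholder, take the real one from the discovery output:

iscsiadm -m node -T target_iqn -p nas_ip --login

And if ROCKS is not exporting the directory for you, a minimal export to the private cluster network could look like the following. The 10.1.0.0/255.255.0.0 subnet is an assumption, substitute your own:

/state/partition1/home 10.1.0.0/255.255.0.0(rw,async,no_root_squash) (add this line to /etc/exports)
exportfs -ra (re-export everything listed in /etc/exports)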

Presenting LUNs to hosts for NetApp FAS

Perform the following steps to present LUNs to hosts:

1. Log on to the NetApp FAS.
2. Go to FilerView and authenticate.
3. Click LUNs on the left panel.
4. Click Manage. A list of LUNs is displayed.
5. Click the LUN that you want to map.
6. Click Map LUN.
7. Click Add Groups to Map.
8. Select the name of the host or initiator group from the list and click Add.
   Notes:
   a. You can leave the LUN ID section blank. A LUN ID is assigned based on the information the controllers are currently presenting.
   b. If you are re-mapping the LUN from one host to another, you can also select the Unmap box.
9. Click Apply. (For the equivalent CLI commands, see the sketch below.)
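For reference, the same mapping can be done from the ONTAP 7-mode CLI. A minimal sketch; the volume, LUN and igroup names are placeholders, substitute your own:

igroup show (list existing initiator groups and their WWPNs/IQNs)
lun show -m (list current LUN-to-igroup mappings and LUN IDs)
lun map /vol/myvol/lun0 my_igroup (map the LUN; omit the LUN ID to let ONTAP assign one)
lun unmap /vol/myvol/lun0 old_igroup (only if re-mapping the LUN away from another host)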

Adding a NetApp LUN on the SAN to a RHEL 5 host

April 15, 2011, by willsnotes

This note provides the steps to add NetApp LUNs to a running RHEL 5 Linux host.

First, ensure the LUNs have been shared/zoned correctly per NetApp requirements. This doc also assumes the NetApp Linux Host Utilities software has been installed.

Next, scan the HBAs for the new LUNs:

echo "- - -" > /sys/class/scsi_host/<host listing>/scan

<host listing> refers to the SCSI host instances (for the HBAs). If you do a listing of the scsi_host directory, you will see something similar to the following:

[root@r08u6 scsi_host]# ll
total 0
drwxr-xr-x 2 root root 0 Aug 4 15:52 host0
drwxr-xr-x 2 root root 0 Aug 4 15:52 host1
drwxr-xr-x 2 root root 0 Aug 4 15:52 host2

As you can see, this server has 3 SCSI host entries. Using the echo command above on each of these directories' scan files forces a re-read of that specific SCSI bus. This procedure has been tested on RHEL 5 successfully without interruption to the system.

Next, run:

# sanlun lun show all
controller:  lun-pathname                 device filename  adapter  protocol  size                lun state
IRVFAS01B:   /vol/BI_TEST/BI_TEST_LUN_2   /dev/sdz         host3    FCP       2t (2199023255552)  GOOD
IRVFAS01B:   /vol/BI_TEST/BI_TEST_LUN_1   /dev/sdaa        host3    FCP       2t (2199023255552)  GOOD
IRVFAS01B:   /vol/BI_TEST/BI_TEST_LUN_2   /dev/sdab        host3    FCP       2t (2199023255552)  GOOD
IRVFAS01B:   /vol/BI_TEST/BI_TEST_LUN_1   /dev/sdac        host3    FCP       2t (2199023255552)  GOOD
IRVFAS01B:   /vol/BI_TEST/BI_TEST_LUN_2   /dev/sdap        host4    FCP       2t (2199023255552)  GOOD
IRVFAS01B:   /vol/BI_TEST/BI_TEST_LUN_1   /dev/sdaq        host4    FCP       2t (2199023255552)  GOOD
IRVFAS01B:   /vol/BI_TEST/BI_TEST_LUN_2   /dev/sdbd        host4    FCP       2t (2199023255552)  GOOD
IRVFAS01B:   /vol/BI_TEST/BI_TEST_LUN_1   /dev/sdbe        host4    FCP       2t (2199023255552)  GOOD

In the example above, we see 2 LUNs mapped with 4 paths to each.
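Going back to the rescan step for a moment: if the server has several HBAs, a short shell loop saves repeating the echo. A minimal sketch, assuming the standard sysfs layout shown above:

for h in /sys/class/scsi_host/host*; do
    echo "- - -" > "$h/scan"   # wildcard channel/target/LUN rescan on each HBA
done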

Now, verify using multipath:

# multipath -ll

Note the WWIDs that are shown; you can use them below to map the devices to an alias device name of your choosing. If the new LUNs aren't listed, reload the configs:

# service multipathd reload

If you want to use an alias name for the device, modify /etc/multipath.conf and add a stanza similar to:

multipath {
    wwid 360a98000486e642f50346331694a7247
    alias mynewLUN1
    path_grouping_policy failover
}

This should be part of the multipaths section.
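Once the alias is in place, the multipath map appears under /dev/mapper and can be used like any other block device. A minimal sketch, assuming the mynewLUN1 alias from the stanza above and an arbitrary mount point:

multipath -ll mynewLUN1                      # confirm the aliased map and its paths
mkfs.ext3 /dev/mapper/mynewLUN1              # or partition it first with fdisk/parted if you prefer
mkdir -p /mnt/mynewLUN1
mount /dev/mapper/mynewLUN1 /mnt/mynewLUN1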

Comments

Ravindra Patel, March 8, 2012 at 5:30 pm:
Hi, thanks for the helpful article. At the end of your article you wrote "wwid 360a98000486e642f50346331694a7247" in /etc/multipath.conf; where did you get this WWID number from? I checked your output from "sanlun lun show all", but this WWID number does not show up anywhere in that output. Once again, thanks for your helpful article.

willsnotes, March 8, 2012 at 10:49 pm:
Hi Ravindra, you can use the command multipath -ll to find the WWID of any particular LUN. Will
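As a side note, on RHEL 5 the WWID of a single underlying path can usually also be pulled with scsi_id; the device name below is a placeholder, use one of the paths reported by sanlun:

/sbin/scsi_id -g -u -s /block/sdz (prints the WWID of that path, which should match the multipath -ll output)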

Masking and Presenting LUNs in NetApp Filers



I'm documenting the easy method for this process, having just gone through the more roundabout backdoor method. The proper way to mask and present LUNs on a NetApp array is through the creation of LUNs and iGroups (the mask), which is wizard-driven inside NetApp System Manager. This can all be done via the CLI, of course, which I have done, but this way is much simpler. This all assumes that you have already properly set up FCP or iSCSI on your array and that all requisite Fibre Channel zoning is complete. My filers are running ONTAP 7.3.2 and I have two Fibre Channel fabrics powered by Brocade switches.

Open System Manager and navigate to the controller node that will be hosting your LUN. Expand Storage and click LUNs. Click Create under LUN Management. Enter a name for your new LUN along with the size and presentation type.

Next, allow ONTAP to create a new volume or choose an existing container. I'll create a new volume:

The next step masks the LUN to a host of your choosing via the creation of an iGroup. Click Add Initiator Host and select the WWPN of the host HBA you wish to connect to the LUN. You can only add a single initiator in this step. Once complete, move your new iGroup from Known initiator hosts to Hosts to connect.

Review the changes to be made in the summary and click Next to start the process. When complete you will see your new LUN and iGroup with the host's WWPN. By default NetApp likes to put each initiator port in its own iGroup. Because I am building a SQL cluster and need all hosts to see all of the same LUNs, I will be adding all cluster members to the same iGroup. This can be done either by creating a new iGroup, selecting the WWPN you want to add and then reassigning it to the proper iGroup, or by copying and pasting the port name into the iGroup you just created.

Make sure to enable ALUA on the iGroup if your array and host multipathing support it.

Verify the iGroup via the CLI if you wish by issuing the igroup show command:
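For reference, the same masking can be done entirely from the CLI. A sketch with placeholder names; the igroup name, WWPN and LUN path below are assumptions, substitute your own:

igroup create -f -t windows sqlcluster (create an FCP igroup with a Windows host type)
igroup add sqlcluster 50:0a:09:81:xx:xx:xx:xx (add each cluster node's WWPN to the group)
igroup set sqlcluster alua yes (enable ALUA on the igroup, if supported)
lun map /vol/myvol/mylun sqlcluster (present the LUN to every initiator in the group)
igroup show (verify the igroup membership)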

Now, assuming you have ONTAP DSM (MPIO) installed, you should see the new LUN on your host.

Posted by Weestro on Tuesday, February 23, 2010

Windows Server Forums: iSCSI Initiator Setup?


Tuesday, January 31, 2012 10:23 PM

lkubler

Hi, I have a Windows 2003 server as a backup media server running Backup Exec. I want to retire this server and use a new one running Windows Server 2008 R2. The old server is using the iSCSI Initiator to connect to our SAN, and I'm trying to get that working on the new server. I think I have the iSCSI Initiator set up on the new server, but when I go into Disk Management it prompts me to initialize a disk before Logical Disk Manager can access it. What kind of initialization is this? Will it wipe all the data on the disks I want to connect to on my SAN? My goal was to have the system all set up, so that when I installed Backup Exec everything would be ready to configure jobs and start backing up my systems. But I don't want to break my existing system until the new one is running. This is my first attempt at setting up an iSCSI initiator, so any help is greatly appreciated. Thanks in advance, Linn


All Replies

Wednesday, February 01, 2012 2:18 AM

Vincent Hu (MSFT CSG)

Hi,

You can configure the iSCSI initiator on Windows Server 2008 R2 as you did on Windows Server 2003.

However, according to the description, it seems that you want to use the Windows Server 2003 LUN on Windows Server 2008 R2 too. This is possible; however, you will not be able to access the LUN on Windows Server 2008 R2 before you take the LUN offline on the Windows Server 2003 computer, because accessing the same LUN from different computers is not allowed.

By the way, as an alternative, you can create a new LUN on your SAN, attach it to the Windows Server 2008 R2 computer, set everything up, and then detach the old LUN from the Windows Server 2003 server and attach it to the Windows Server 2008 R2 computer as an additional disk.

Best Regards, Vincent Hu

Marked as answer by Vincent Hu (Microsoft Contingent Staff, Moderator), Tuesday, February 07, 2012 10:00 AM


Wednesday, February 01, 2012 1:56 PM

lkubler

Hi Vincent, unfortunately I didn't set up the original initiator on the Win 2003 server. This is my first attempt at setting this up, so forgive my lack of understanding. I did not know you could not access one LUN from more than one computer using the iSCSI Initiator; I just assumed it was a shared resource. Unfortunately I don't have any available disk space on my SAN to create a new LUN, so I guess I will have to break the old system before I can get the new system running. Thanks for the response, Linn

Wednesday, February 01, 2012 5:23 PM

Dave Guenthner [MSFT]

The storage device/SAN should have a zone containing a piece of unique information about the Windows 2003 box so the LUN is presented to it only. For example, most vendors use the IQN name, which can be found in the iSCSI GUI. What you can do is gracefully take the volume offline as Vincent indicated and log off the iSCSI initiator on Windows 2003. Update the zone on the storage so that the IQN from Windows 2008 R2 is the only one present. Log in from the iSCSI initiator on the Windows 2008 R2 machine and you should be able to bring the disk online.

Perhaps doing this scenario with a couple of VMs would give you a dress rehearsal of the change and familiarize yourself with iSCSI. You never want to expose a disk to more than one Windows system; the exception would be failover clustering, which is out of scope here.

Dave Guenthner [MSFT] This posting is provided "AS IS" with no warranties, and confers no rights. http://blogs.technet.com/b/davguents_blog

Marked as answer by Vincent Hu (Microsoft Contingent Staff, Moderator), Tuesday, February 07, 2012 10:00 AM


Thursday, February 02, 2012 3:15 PM

lkubler

Thanks for the response, Dave. I eventually did get some direct help from the SAN vendor's tech support. They had me configure it all from the SAN's control program, all very confusing, but the technician did say he knew very little about setting up the iSCSI Initiator on Windows. The good news is I can now see my two drives from Computer on the new server. The bad news(?) is that when I go into Disk Management on the computer I still get a pop-up asking me to initialize a handful of disks. This still has me concerned because I'm not sure what it is asking me to do. Does this initialization basically reformat the drives? The options are MBR and GPT. It seems odd that it is asking me to initialize 4 disks, Disks 5 through 8, when I only have 3 drives: the local C: drive and the two iSCSI drives E: and F:. How can I figure out what these other drives are? Thanks, Linn

Thursday, February 02, 2012 6:55 PM

Dave Guenthner [MSFT]


Fortunately there aren't a lot of options in the iSCSI initiator client. Basically, point it to the iSCSI target and log in. Document the IQN from the iSCSI initiator and, on the storage side, verify that the volumes which were presented to the W2K3 box are now pointing to the new machine. It sounds like the storage vendor may have presented *new* disks to Windows, which is why it's prompting to initialize them; this is expected behavior for new disks, not existing ones. Initializing will remove data from the drive: it's really writing a new disk signature before giving you the option to format it. Were the disks being presented to Windows 2003 basic MBR or GPT disks? Were they dynamic disks? Again, in this scenario, you shouldn't have anything pointing to the W2K3 server now. Also, are you using multipathing? You can go into Device Manager and click on Disk Drives to get a little more information. Likely, you will need to continue your investigation with the storage vendor to ensure the zoning is correct.

Dave Guenthner [MSFT] This posting is provided "AS IS" with no warranties, and confers no rights. http://blogs.technet.com/b/davguents_blog

Thursday, February 9, 2012

NetApp Command Line Cheat-Sheet

I recently had the opportunity to work on a NetApp storage implementation project. As always I really wanted to get my hands dirty, so I tried to learn as much about their CLI as possible. It also helps when the NetApp System Manager GUI has crazy bugs like Bug ID 548923, which prevents you from doing any FC-related configs. Anyhow, here is a list of commands which should get you up and running in no time. I compiled this from a couple of web sources.

The Basics
setup (Re-run initial setup)
halt (Reboots controller into bootrom)
reboot (Reboots the connected controller)
sysconfig -a (Dumps the system configuration)
storage show disk (Shows physical information about disks)
passwd (Changes the password for the current user)
sasadmin shelf (Shows a graphical layout of your shelves with occupied disk slots)
options trusted.hosts x.x.x.x or x.x.x.x/nn (Hosts that are allowed telnet, http, https and ssh admin access. x.x.x.x = IP address, /nn is network bits)
options trusted.hosts * (Allows all hosts admin access)

Diagnostics
Press DEL at boot up during the memory test, followed by boot_diags, and select all
priv set diags (Enter diagnostics CLI mode from the ONTAP CLI)
priv set (Return to normal CLI mode from diagnostics mode)

Software
software list (Lists software in the /etc/software directory)
software delete (Deletes software in the /etc/software directory)
software update 8.1RC2_e_image.zip -r (Installs software. The -r prevents it rebooting afterwards)

Aggregates
aggr create aggregate_name (Creates an aggregate)
aggr destroy aggregate_name (Deletes an aggregate)
aggr offline aggregate_name (Takes an aggregate offline)
aggr online aggregate_name (Brings an aggregate online)
aggr status (Shows status of all aggregates)
aggr status aggregate_name (Shows status of a specific aggregate)
aggr show_space aggregate_name (Shows specific aggregate space information)

Volumes
vol create volume_name (Creates a volume)
vol status (Gives the status of all volumes)

Snapshots
snap create volume_name snapshot_name (Creates a snapshot)
snap list volume_name (Lists snapshots for a volume)
snap delete volume_name snapshot_name (Deletes a snapshot on a volume)
snap delete -a volume_name (Deletes all snapshots for a volume)
snap restore -s snapshot_name volume_name (Restores a snapshot on the specified volume)
options cifs.show_snapshot on (Sets the snapshot directory to be browseable via CIFS)
options nfs.hide_snapshot off (Sets the snapshot directory to be visible via NFS)

SnapMirror
options snapmirror.enable on (Turns on SnapMirror. Replace on with off to toggle)
vol restrict volume_name (Performed on the destination. Makes the destination volume read-only, which must be done for volume-based replication)
snapmirror initialize -S srcfiler:source_volume dstfiler:destination_volume (Performed on the destination. This is for a full volume mirror, for example snapmirror initialize -S filer1:vol1 filer2:vol2)
snapmirror status (Shows the status of SnapMirror and replicated volumes or qtrees)
snapmirror status -l (Shows much more detail than the command above, i.e. snapshot name, bytes transferred, progress, etc.)
snapmirror quiesce volume_name (Performed on the destination. Pauses the SnapMirror replication. If you are removing the SnapMirror relationship this is the first step)
snapmirror break volume_name (Performed on the destination. Breaks or disengages the SnapMirror replication. If you are removing the SnapMirror relationship this is the second step, followed by deleting the snapshot)
snapmirror resync volume_name (Performed on the destination. Used when data is out of date, for example when working off the DR site and wanting to resync back to primary; only performed when the SnapMirror relationship is broken)
snapmirror update -S srcfiler:volume_name dstfiler:volume_name (Performed on the destination. Forces a new snapshot on the source and performs a replication, only if an initial replication baseline has already been done)
snapmirror release volume_name dstfiler:volume_name (Performed on the destination. Removes a SnapMirror destination)

Cluster
cf enable (Enables the cluster)
cf disable (Disables the cluster)
cf takeover (Takes over resources from the other controller)
cf giveback (Gives back controller resources after a takeover)

Autosupport
options autosupport.support.enable on (Turns AutoSupport on, toggle with off)

Hot Spares
vol status -r (Gives a list of spare disks)

Disks
disk show (Shows disk information)
disk show -n (Shows unowned disks)

Luns
lun setup (Runs the CLI LUN setup wizard)
lun create -s 10g -t windows_2008 -o noreserve /vol/vol1/lun1 (Creates a LUN of 10 GB with type Windows 2008, sets no reservation and places it in the given volume or qtree)
lun offline lun_path (Takes a LUN offline)
lun online lun_path (Brings a LUN online)
lun show -v (Verbose listing of LUNs)

Fiber FCP
fcadmin config -t target 0a (Changes an adapter from initiator to target)
fcadmin config (Lists adapter state)
fcadmin start (Starts the FCP service)
fcadmin stop (Stops the FCP service)
fcp show adapters (Displays adapter type, status, FC nodename, FC portname and slot number)
fcp nodename (Displays the Fibre Channel nodename)
fcp show initiators (Shows Fibre Channel initiators)
fcp wwpn-alias set alias_name (Sets a Fibre Channel alias name for the controller)
fcp wwpn-alias remove -a alias_name (Removes a Fibre Channel alias name for the controller)
igroup show (Displays initiator groups with WWNs)

Cifs
cifs setup (CIFS setup wizard)
cifs restart (Restarts CIFS)
cifs shares (Displays CIFS shares)
cifs status (Shows the status of CIFS)
cifs domain info (Lists information about the filer's connected Windows domain)
cifs testdc ip_address (Tests a specific Windows domain controller for connectivity)
cifs prefdc (Displays configured preferred Windows domain controllers)
cifs prefdc add domain address_list (Adds a preferred DC for a specific domain, i.e. cifs prefdc add netapplab.local 10.10.10.1)
cifs prefdc delete domain (Deletes a preferred Windows domain controller)
vscan on (Turns virus scanning on)
vscan off (Turns virus scanning off)
vscan reset (Resets virus scanning)

HTTP Admin
options httpd.admin.enable on (Enables web admin)

SIS (Deduplication)
sis status (Shows SIS status)
sis config (Shows SIS config)
sis on /vol/vol1 (Turns on deduplication on vol1)
sis start -s /vol/vol1 (Runs deduplication manually on vol1)
sis status -l /vol/vol1 (Displays deduplication status on vol1)
df -s vol1 (Views space savings with deduplication)
sis stop /vol/vol1 (Stops deduplication on vol1)
sis off /vol/vol1 (Disables deduplication on vol1)

DNS
dns flush (Flushes the DNS cache)
/etc/resolv.conf (Edit this file to change your DNS servers)
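Pulling a few of these together, an end-to-end provisioning run on 7-mode might look like the sketch below. All names (aggr1, vol1, lun1, esx_hosts), the WWPN and the sizes are placeholders, and the igroup steps assume FCP is already licensed and zoned:

aggr create aggr1 -t raid_dp 12 (create an aggregate from 12 spare disks using RAID-DP)
vol create vol1 aggr1 500g (create a 500 GB flexible volume on that aggregate)
lun create -s 400g -t vmware /vol/vol1/lun1 (carve a 400 GB VMware-type LUN inside the volume)
igroup create -f -t vmware esx_hosts (create an FCP igroup for the ESX hosts)
igroup add esx_hosts 50:01:43:80:xx:xx:xx:xx (add each host WWPN; repeat per initiator)
lun map /vol/vol1/lun1 esx_hosts (present the LUN to the igroup)
lun show -m (confirm the mapping and the assigned LUN ID)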
