
NETAPPS Run Book

Document Revision History

Version #   Doc Name               Changes Made    Prepared By    Approved By   Approved Dt
1.0         NETapps Run Book.doc   Base Document   MphasiS Team
1.1

Internal Page 1

Table of Contents

VMS NETAPPS ENVIRONMENT
UNIX Level 2 Team Responsibilities
PROVISIONING
AGGREGATE CREATION
VOLUME CREATION
QTREES
DEFINING QUOTAS
LUN ASSIGNMENT
CREATING INITIATOR GROUP (IGROUP)
MANAGING SNAPSHOTS
PROCEDURE THROUGH COMMAND LINE
Creation of an aggregate in NetApp
Creating aggregate 2 with RAID dual parity using 4 disks
MODIFICATION OF VOLUMES
MODIFICATION OF LUNS
STORAGE AVAILABILITY AND PERFORMANCE MONITORING
STORAGE ALLOCATION REPORT

VMS NETAPPS ENVIRONMENT
In VMS (Palo Alto) there are two 2-node clusters. These NetApp systems provide space allocation to VMware ESX servers. Space is allocated from the NetApp systems over the NFS and CIFS protocols, so NetApp is also used as a file server for Windows clients. For Windows clients, quotas are implemented after creating a qtree on a volume, so that users do not have access to the whole volume; instead, each user is bound to a respective folder within the assigned limit. If a client requests more space, the quota limit can be increased.

For instant recovery of user data, Snapshot copies are enabled on every node. SnapMirror is also implemented for quick recovery in case of a disaster. Most tasks on NetApp are performed through the command line, but this is not mandatory; FilerView can also be used when required.

Netapps Node Information

NODE NAME     IP ADDRESS      ONTAP VERSION   CLUSTER
Pa-netapp1    132.190.50.5    7.3.2P7         Yes, with 132.190.50.6
Pa-netapp2    132.190.50.6    7.3.2P7         Yes
Pa-netapp3    132.190.50.7    7.3.2           Yes, with 132.190.50.8
Pa-netapp4    132.190.50.8    7.3.2           Yes
Pa-netapp5    10.0.120.15     8.0.1           Yes
Pa-netapp6    10.0.120.16     8.0.1           Yes

In the VMS environment, NetApp acts as a file server over the CIFS protocol, while NFS is used to allocate space to ESX (VMware) servers. Occasionally there are also requests for LUN assignment.

UNIX Level 2 Team Responsibilities

PROVISIONING
Storage provisioning is the process of assigning storage, usually in the form of server disk drive space, in
order to optimize the performance of a storage area network. In Netapps provisioning can be executed
by either Filer View or command line. Below steps/procedure show you provisioning from scratch to
end in both scenario. Firstly we are going with Filer view then repeat same step with command line.

AGGREGATE CREATION
Open the FilerView navigation pane and go to Aggregates → Manage; this shows the current aggregate status on the filer. As there are already three aggregates (aggr0, aggr1, aggr2), we are going to create aggr3.

In the same section (Aggregates), click Add and provide the aggregate name in the new box. Also tick "Double Parity" for disk protection.

AggregateAdd

On the next screen, select the RAID group size, which specifies the maximum number of disks that can exist per RAID group. For NetApp storage systems the default is 16.

Then select how many disks should be allocated to the aggregate. We can go with 4: since we already selected "Dual Parity", two disks are used for parity and the remaining two disks are used for data.


Once you have reviewed the settings, click Commit to save the changes; the aggregate is created successfully.


To confirm, go to the Manage section of Aggregates, where you can view the recently created aggregate.

VOLUME CREATION
In the next activity, we create volumes on the recently created aggregate. Switch to the Volumes section in FilerView. Click Manage to show previously created volumes.

Go to Volumes → Add.

Select the volume type Flexible, which is suitable for most file systems. Flexible volumes represent a significant administrative improvement over traditional volumes. Using multiple flexible volumes enables you to do the following:

 Perform administrative and maintenance tasks (for example, backup and restore) on individual
flexible volumes rather than on a single, large file system.
 Set services (for example, snapshot schedules) differently for individual flexible volumes.

 Minimize interruptions in data availability by taking individual flexible volumes offline to perform
administrative tasks on them while the other flexible volumes remain online.

 Save time by backing up and restoring individual flexible volumes instead of all the file systems
an aggregate contains.

Assign a name to the volume.


Select the aggregate on which you are going to create the volume.

Enter the size of the volume; you can give it the complete space of the aggregate or a chunk of it. Here we are creating 250M including the snapshot reserve, which is 20% by default. If we want to give more space to Snapshot copies we can increase this value, but it decreases the usable size of the volume.


Once the settings are finalized, click Commit to save the changes.


You can go to Manage again to view the recently created volume.

QTREES

Click Add in the Qtrees section, give the qtree a name, and assign a security style. As we are implementing this qtree for Windows clients, we set it to NTFS mode.


We are not using quotas in VMS…

DEFINING QUOTAS
Go to Quotas and provide the name of the volume on which the quota should be implemented.

Define the soft limit, hard limit, and threshold value for the quota.
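On the command line, the same limits can be written to the /etc/quotas file. A minimal sketch, assuming a hypothetical volume vol1 and qtree qtree_1; the columns are target, type, disk hard limit, file limit, threshold, soft disk limit, and soft file limit:

```
## target            type  disk  files  thold  sdisk  sfile
/vol/vol1/qtree_1    tree  100M  -      90M    80M    -
```

After editing the file, enable enforcement with "quota on vol1".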


Go to the next screen and turn quotas on.


Click the volume on which quotas should be enabled.

LUN ASSIGNMENT
We can check the status below for the current state of already defined LUNs.


Go to ADD LUN and provide path of lun(completely) along with size.

It will show you newly created lun.


CREATING INITIATOR GROUP (IGROUP)
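The screenshots for this section were not preserved. As a hedged sketch, the equivalent command-line steps would be along these lines (the igroup name, OS type, and WWPN are hypothetical placeholders; the filer prompt is illustrative):

```
pa-netapp1> igroup create -f -t vmware esx_igroup 21:00:00:e0:8b:12:34:56
pa-netapp1> igroup show
```

Here -f creates an FCP igroup and -t sets the initiator OS type; the same procedure is shown again in the command-line section later in this document.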

MANAGING SNAPSHOTS

A Snapshot copy is a read-only image of the entire Data ONTAP file system. It reflects the state of the file system at the time the Snapshot copy is created.

Snapshot copy files carry the same permissions as the original file. A user who has permission to read a file in the
active file system can still read that file in the Snapshot copy. A user without read permission to the same file
cannot read that file in a Snapshot copy. Write permission does not apply because Snapshot copies are read-only.
Snapshot copies can be accessed by any user with the appropriate permissions.

Every directory in the storage system's active file system contains a directory named .snapshot through which
users can access old versions of files in that directory.

FilerView enables you to configure and manage all Snapshot copies in all volumes on the storage system.

Give the name of the volume on which Snapshot copies are to be implemented.


Define the schedule for Snapshot copies (Weekly, Nightly, Hourly) according to the requirement.
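On the command line, the equivalent schedule is set with "snap sched". A sketch, assuming a hypothetical volume vol1, keeping 2 weekly, 6 nightly, and 8 hourly copies taken at 08:00, 12:00, 16:00, and 20:00:

```
pa-netapp1> snap sched vol1 2 6 8@8,12,16,20
```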

Disabled by default: whether to enable scheduled copies depends on the application; they are normally left disabled.


PROCEDURE THROUGH COMMAND LINE
The sysconfig command gives you hardware information for the NetApp system. Different switches are available with sysconfig; the command "sysconfig -r" displays RAID configuration information. Its output prints information about all aggregates, volumes, file system disks, spare disks, maintenance disks, and failed disks.

For more information about sysconfig, type "man sysconfig" on the command line.

Creation of an aggregate in NetApp


Before creating an aggregate, check the status of your disks on the NetApp filer with the command "storage show disk". In the output above, disks 16, 17, and 18 are used for aggr0 and the remaining disks are free.

Creating aggregate 2 with RAID dual parity (RAID-DP) using 4 disks.
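The screenshot of the command itself was not preserved. Based on the description, the command would be along these lines (the filer prompt is illustrative):

```
pa-netapp1> aggr create aggr2 -t raid_dp 4
```

Here -t raid_dp selects RAID double parity and the trailing 4 is the number of disks to include.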


Check with "df -Ah"; it will show you the newly created aggregate. The switches "A" and "h" with df indicate aggregates and human-readable output, respectively.

To show the complete hierarchy of aggregate 2, run "sysconfig -r" again; you will now see the newly created aggregate 2 with its RAID-DP layout and disk shelf hierarchy.


Now create a volume of 700M. Here we are not giving the volume the complete size of the aggregate (854M), as we can resize it later. Execute the command as in the screenshot: "vol create <vol_name> <aggr_name> <size>".
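A sketch of the concrete command for this step, using the volume name "solaris" from the text:

```
pa-netapp1> vol create solaris aggr2 700m
```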

Now check the status of the newly created volume named "solaris" under aggregate 2, as in the screenshot below.

Now create a LUN of 100M from the volume "solaris", named L001, as below:

In the command, -s specifies the size and -t the OS type.
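A sketch of the command as described:

```
pa-netapp1> lun create -s 100m -t solaris /vol/solaris/L001
```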

Now create an igroup, but before that check which igroups are already present on the filer with the command "igroup show". Here one igroup with the initiator name ESX_SRV is present, with a WWN that is currently not logged in.

Now create an igroup with the following command:

The switches with igroup create are: -f (FCP protocol type), -t (OS type), followed last by the WWN.
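A sketch of the command; the WWN shown is a hypothetical placeholder, since the original value is only visible in the lost screenshot:

```
pa-netapp1> igroup create -f -t solaris DB_Server 21:00:00:e0:8b:12:34:56
```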

Now check the status of igroup mapping with the command "lun show -m". Here we see only one igroup, ESX_SRV, which is mapped with LUN ID 1. So now we map our newly created igroup "DB_Server" with LUN ID 2, as shown in the second command. Check with "lun show -m" again to confirm.


Up to here we have created an aggregate → volume → LUN → igroup → LUN-to-igroup mapping. After this you can assign the LUN to the requester, who can use it according to their requirements.

MODIFICATION OF VOLUMES

As we did not allocate the complete aggregate space to the volume, we can now resize the volume by growing it by 100M. But before that, check once again the current space hierarchy with the command shown below.

Here the complete size of aggregate 2 is 855M, of which we already allocated 700M earlier, leaving about 151M. Now from that 151M we allocate 100M to our current volume "solaris" with the command:

"vol size <vol_name> +<size>"
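For example:

```
pa-netapp1> vol size solaris +100m
```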

Note: if a datastore or volume runs low on space, expand the volume or create a new datastore. For ESX volumes (and when using de-duplication), keep a volume to 4 TB or less; if more than 4 TB is needed, go for new volume creation instead.

Now check again with the "df -Ah" command; it will show you the increased space.


MODIFICATION OF LUNS
Just as we grew the volume, we can resize the LUN as well. Use the command in the following screenshot, which grows the LUN by 250M; it was 100M earlier and now becomes 350M.
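A sketch of the resize command as described:

```
pa-netapp1> lun resize /vol/solaris/L001 +250m
```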

A qtree is similar in concept to a partition. It creates a subset of a volume to which a quota can be applied to limit its size. As a special case, a qtree can be the entire volume. A qtree is more flexible than a partition because you can change the size of a qtree at any time. In addition to a quota, a qtree possesses a few other properties. A qtree enables you to apply attributes such as oplocks and security style to a subset of files and directories rather than to an entire volume. Single files can be moved across a qtree without moving the data blocks. Directories cannot be moved across a qtree. However, since most clients use recursion to move the children of directories, the actual observed behavior is that directories are copied and files are then moved.

We can also share space from NetApp through NFS; i.e., instead of creating a LUN we can directly give a volume to the requester, and they can use it as they choose. A qtree is the mechanism through which we can allocate space to either CIFS clients or NFS clients, and with a qtree we can restrict users by imposing quotas.

"qtree status" is the command that shows the status of already created qtrees on the filer. As shown below, there is a qtree named "qtree_1" with NTFS security enabled. When the security style is NTFS, we give access over CIFS (Common Internet File System), that is, to Windows users.

We can also give space with the UNIX security style; in that case the sharing mode is NFS, which is the UNIX-based sharing protocol.

Now we create a qtree on the volume "solaris" and name it "sol_qtree".
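A sketch of the equivalent commands, setting the security style explicitly:

```
pa-netapp1> qtree create /vol/solaris/sol_qtree
pa-netapp1> qtree security /vol/solaris/sol_qtree unix
pa-netapp1> qtree status
```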

Now, as this qtree has UNIX-style security, open the file /etc/exports to view the current entries:

vm-netapp01> rdfile /etc/exports
#Auto-generated by setup Sat Dec 4 21:11:06 GMT 2010
/vol/vol0 -sec=sys,ro,rw=192.168.1.5,root=192.168.1.5,nosuid
/vol/vol0/home -sec=sys,rw,root=192.168.1.5,nosuid
/vol/Vol_VM1 -sec=sys,rw,root=192.168.1.5,nosuid
/vol/solaris -sec=sys,rw=192.168.1.6,root=192.168.1.4,nosuid
/vol/solaris/sol_qtree -sec=sys,rw=192.168.1.6,root=192.168.1.4,nosuid


As we can see, there are entries for the previously created exports along with the recently created qtree sol_qtree on the volume solaris. Here we could also grant access at the volume level, because the security style is UNIX: once UNIX users mount the NFS share, they can set permissions, other security settings, and quotas as they choose. This is not common with CIFS shares, where we create an individual qtree for each user and define the quota from the storage end.
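To add or adjust such an export persistently from the command line, exportfs -p can be used; a sketch based on the entries shown above:

```
pa-netapp1> exportfs -p sec=sys,rw=192.168.1.6,root=192.168.1.4 /vol/solaris/sol_qtree
pa-netapp1> exportfs
```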

STORAGE AVAILABILITY AND PERFORMANCE MONITORING

Useful commands:

sysstat -x 1
sysstat
Advanced mode: ps

NetApp sysstat is like vmstat and iostat rolled into one command. It reports filer performance statistics such as CPU utilization, the amount of disk traffic, and tape traffic. When run without options, sysstat prints a new line every 15 seconds with just a basic amount of information. You have to use Control-C (^C) or set the iteration count (-c count) to stop sysstat. For more detailed information, use the -u option. For information specific to one particular protocol, you can use the other options, listed here:

 -f FCP statistics

 -i iSCSI statistics

 -b SAN (blocks) extended statistics

 -u extended utilization statistics

 -x extended output format. This includes all available output fields. Be aware that this produces output longer than 80 columns and is generally intended for "off-line" analysis rather than real-time viewing.

 -m Displays multi-processor CPU utilization statistics. In addition to the percentage of time that one or more CPUs were busy (ANY), the average (AVG) is displayed, as well as the individual utilization of each processor. This is only useful on multi-processor systems; it won't work on single-processor machines.
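Typical invocations combining the options above (the prompt is illustrative):

```
pa-netapp1> sysstat -x 1        # extended output, 1-second interval, stop with ^C
pa-netapp1> sysstat -u -c 10 5  # extended utilization, 10 samples at 5-second intervals
```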

Here are some explanations of the columns of the sysstat command.

Cache age: The age in minutes of the oldest read-only blocks in the buffer cache. This column indicates how fast read operations are cycling through system memory; when the filer is reading very large files, the buffer cache age will be very low. If reads are random, the cache age will also be low. If you have a performance problem where read performance is poor, this number may indicate that you need more memory, or that you should analyze the application to reduce the randomness of the workload.

Cache hit: The WAFL cache hit rate percentage: the percentage of times WAFL tried to read a data block from disk and found the data already cached in memory. A dash in this column indicates that WAFL did not attempt to load any blocks during the measurement interval.

CP Ty: The Consistency Point (CP) type is the reason a CP started in that interval. The CP types are as follows:

 - No CP started during the sampling interval (no writes happened to disk at this point in time)

 number Number of CPs started during the sampling interval

 B Back-to-back CPs (CP generated CP) (the filer is having a tough time keeping up with writes)

 b Deferred back-to-back CPs (CP generated CP) (the back-to-back condition is getting worse)

 F CP caused by full NVLog (one half of the NVRAM log was full, and so was flushed)

 H CP caused by high water mark (rare to see; the filer was halfway full on one side of the NVRAM logs, so it decides to write to disk)

 L CP caused by low water mark

 S CP caused by snapshot operation

 T CP caused by timer (every 10 seconds filer data is flushed to disk)

 U CP caused by flush

 : Continuation of a CP from the previous interval (a CP is still in progress during the 1-second intervals)

The type character is followed by a second character which indicates the phase of the CP at the end of the sampling interval. If the CP completed during the sampling interval, this second character will be blank. The phases are as follows:

 0 Initializing

 n Processing normal files

 s Processing special files

 f Flushing modified data to disk

 v Flushing modified superblock to disk

CP util: The Consistency Point (CP) utilization, the percentage of time spent in a CP. 100% time in CP is a good thing: it means that 100% of the CPU time dedicated to writing data was actually used. 75% means that only 75% of the time allocated to writing data was utilized, i.e., 25% of that time was wasted. A good CP percentage is at or near 100%.


STORAGE ALLOCATION REPORT


Weekly reports of activities and operational status

