POST: power-on self test. It detects the hardware (host ID, serial number, architecture type, memory
and Ethernet address) and loads the primary boot program called bootblk.
Init phase : it is started by executing the /sbin/init program, which starts the other processes by
reading the /etc/inittab file, as directed by the entries in /etc/inittab.
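Each /etc/inittab line follows the format id:rstate:action:process. A few illustrative entries (the exact contents vary by Solaris release, so treat these as a sketch):

```
# id : run-levels : action : process
ap::sysinit:/sbin/autopush -f /etc/iu.ap
is:3:initdefault:
s3:3:wait:/sbin/rc3 >/dev/msglog 2<>/dev/msglog </dev/console
```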
Is it possible to edit the crontab using vi?
It is not recommended, but it is possible by editing:
# vi /var/spool/cron/crontabs/root
Explain inode
It contains the metadata of a file or directory: file type, permissions, owner and group, size,
timestamps (access/modification/change dates), link count and pointers to the data blocks. The file
name itself lives in the directory entry, not in the inode.
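Most of that metadata can be seen on any Unix system; a small portable sketch (the temporary path is created on the fly, not a Solaris-specific location):

```shell
# ls -i prints the inode number; ls -l prints metadata stored in the inode
tmp=$(mktemp -d)
touch "$tmp/file1"
ls -i "$tmp/file1"    # inode number, then the name
ls -l "$tmp/file1"    # permissions, link count, owner, size, mtime
rm -rf "$tmp"
```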
How many files must be modified to change the hostname without rebooting the system?
There are 6 files:
#vi /etc/hosts
#vi /etc/nodename
#vi /etc/hostname.hme
#vi /etc/net/ticlts/hosts
#vi /etc/net/ticots/hosts
#vi /etc/net/ticotsord/hosts
This will be quite complicated, because the kernel is the core of the operating system (an image of
the OS), whereas /etc/path_to_inst stores the mapping between the physical device tree and instance
numbers for the enabled hardware.
How to find the hardware configuration
OK banner --> from the OpenBoot prompt
# prtconf
# sysdef
# /usr/platform/sun4u/sbin/prtdiag
How will you see the version of the patches?
# showrev -p
# patchadd -p
What is UMASK
UMASK is a Unix environment setting that determines the permissions masked off newly created files.
The default value is 022, so new files are created 644 and new directories 755.
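A quick sketch of the effect in portable shell (not Solaris-specific): files start from mode 666 and directories from 777, with the umask bits removed.

```shell
umask 022                        # mask write permission off group and other
tmp=$(mktemp -d)
touch "$tmp/demo"                # 666 & ~022 = 644
mkdir "$tmp/dir"                 # 777 & ~022 = 755
ls -l  "$tmp/demo" | cut -c1-10  # -rw-r--r--
ls -ld "$tmp/dir"  | cut -c1-10  # drwxr-xr-x
rm -rf "$tmp"
```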
Hardlink : a link within the same file system; the inode number is the same for both names.
(eg) # ln /U3/file1 /U3/file2
Softlink : can span file systems and has its own inode.
(eg) # ln -s /U3/file1 /U3/file2
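The inode behaviour can be checked directly; a sketch using a temporary directory as a stand-in for /U3:

```shell
# Hard link: same inode, link count rises to 2. Soft link: its own inode.
tmp=$(mktemp -d)                 # stand-in for /U3
echo data > "$tmp/file1"
ln "$tmp/file1" "$tmp/file2"     # hard link
ln -s file1 "$tmp/file3"         # soft link
ls -li "$tmp"                    # file1 and file2 share an inode; file3 does not
rm -rf "$tmp"
```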
Explain setuid, setgid and stickybit
Setuid : when the setuid permission is set on an executable file, the user who runs the file is
granted the access permissions of the owner of the file.
# find / -perm -4000
Setgid : permission similar to setuid; the process runs with the group of the file.
# find / -perm -2000
Stickybit : a special permission that protects the files within a publicly writable directory.
With the sticky bit set on the shared directory, any user can create files or directories,
but only the owner of a file (or the owner of the directory) can modify or delete it.
# find / -perm -1000
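A sketch of the sticky bit on a scratch directory (the classic live example is /tmp, mode 1777):

```shell
tmp=$(mktemp -d)
chmod 1777 "$tmp"                  # rwxrwxrwt: world-writable, sticky
ls -ld "$tmp" | cut -c1-10         # drwxrwxrwt, the trailing 't' is the sticky bit
find "$tmp" -prune -perm -1000     # prints the directory: sticky bit is set
rm -rf "$tmp"
```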
# /usr/dt/bin/dtconfig -e (enable)
# /usr/dt/bin/dtconfig -d (disable)
We have edited the /etc/passwd file and, while modifying a user, forgot to give the shell. Will the
user be able to log in?
Yes; if the shell field is empty, the default /bin/sh is used.
If the passwd -f option is given, in which files will it update?
After creating a swap file and updating the same in /etc/vfstab, what will be the fstype?
swap
How will you find out the installed memory?
# /usr/platform/sun4u/sbin/prtdiag
# prtconf | grep -i mem
Explain FSCK
A utility for checking and repairing file system inconsistencies caused by an abnormal shutdown.
It has 5 phases:
Phase 1 : Check blocks and sizes
Phase 2 : Check pathnames
Phase 3 : Check connectivity
Phase 4 : Check reference counts
Phase 5 : Check cylinder groups
Soft mount: it allows automatic unmounting if the filesystem is idle for a specified timeout
period. It is mainly used for network filesystems such as NFS; using autofs, a network
filesystem can be soft mounted.
For a 32-bit kernel:
# eeprom boot-file=/kernel/unix
or
OK printenv boot-file
OK setenv boot-file /kernel/unix
How do you check the run level
# who -r
What are the things you must ensure to secure the system?
1. Latest patches
2. Access to the system:
/etc/default/login
/etc/ssh/sshd_config
3. Limited su access
4. Stop unnecessary services at the run level
/etc/inetd.conf : finger, discard, daytime, chargen, tftp, spray, etc.
What is nslookup
It is used to find the hostname and IP address:
to resolve a hostname into an IP address and an IP address into a hostname.
How to find the network card speed
# ndd -get /dev/hme link_speed
1 = 100mbps, 0 = 10mbps
How to modify network card speed
# ndd -set /dev/hme instance 0
# ndd -get /dev/hme link_status
# ndd -get /dev/hme link_mode
To modify
# ndd -set /dev/eri instance 0
# ndd -set /dev/eri adv_100T4_cap 0
# ndd -set /dev/eri adv_100fdx_cap 1
# ndd -set /dev/eri adv_100hdx_cap 0
# ndd -set /dev/eri adv_10fdx_cap 0
# ndd -set /dev/eri adv_10hdx_cap 0
# ndd -set /dev/eri adv_autoneg_cap 0
link_speed: 1 = 100 Mbps, 0 = 10 Mbps
link_mode: 1 = full duplex, 0 = half duplex
adv_autoneg_cap: 1 = autonegotiation, 0 = forced
(for bge, link_duplex: 2 = full duplex, 1 = half duplex)
root on BUILD kirkbiz06 # ndd -set /dev/bge3 adv_autoneg_cap 0
root on BUILD kirkbiz06 # ndd -get /dev/bge3 link_speed
100
root on BUILD kirkbiz06 # ndd -get /dev/bge3 link_status
1
root on BUILD kirkbiz06 # ndd -get /dev/bge3 link_duplex
2
root on BUILD kirkbiz06 # ndd -get /dev/bge3 link_autoneg
0
root on BUILD kirkbiz06 # ndd -set /dev/bge3 adv_autoneg_cap 1
root on BUILD kirkbiz06 # ndd -get /dev/bge3 link_duplex
2
root on BUILD kirkbiz06 # ndd -get /dev/bge3 link_autoneg
1
root on BUILD kirkbiz06 #
1. mountd: handles file system mount requests from remote systems and provides access control (server)
2. nfsd: handles client file system requests (both client and server)
3. statd: works with the lockd daemon to provide crash recovery functions for the lock manager (server)
4. lockd: supports record-locking operations on NFS files (both client and server)
5. nfslogd: provides filesystem logging; runs only if one or more filesystems is mounted with the log attribute
biod: on the client side, handles asynchronous I/O for blocks of NFS files.
How to start / stop the nfs server
# /etc/init.d/nfs.server start
# /etc/init.d/nfs.server stop
What are performance tool used
iostat, vmstat, prstat, sar, netstat, top
How to find out the shared file systems from the server and client
Server : # share and # dfmounts
Client : # showmount -e <hostname> and # dfshares
#vi /etc/ssh/sshd_config
PermitRootLogin no or Yes
# cat /etc/dumpadm.conf
# Configuration parameters for system crash dump.
# Do NOT edit this file by hand -- use dumpadm(1m) instead.
DUMPADM_DEVICE=/dev/dsk/c0t10d0s3
DUMPADM_SAVDIR=/var/crash/isd250
DUMPADM_CONTENT=kernel
DUMPADM_ENABLE=yes
$ pwd
/etc
$ ls -l sav*
-r-xr-xr-x 1 root bin 1112912 Jun 4 2004 save
-r-xr-xr-x 62 root bin 10044 Jan 23 2005 savecore
lrwxrwxrwx 1 root other 6 Mar 29 2006 savepnpc -> ./save
SDS
Concatenation: joining two or more disk slices to add up the disk space. Concatenation is serial in
nature, i.e. sequential data operations are performed on the first disk, then the second disk, and
so on. Due to this serial nature, new slices can be added without having to back up the entire
concatenated volume, add the slice and restore the backup.
Striping: spreading data over multiple disk drives, mainly to enhance performance by distributing
data in alternating chunks (a 16k interleave across the stripes). Sequential data operations are
performed in parallel on all the stripes by reading/writing 16k data blocks alternately from the
disk stripes.
Mirroring: provides data redundancy by simultaneously writing data to two submirrors of a mirrored
device. A submirror can be a stripe or a concatenated volume, and a mirror can have up to three
submirrors. The main concern here is that a mirror needs as much space as the volume to be mirrored.
RAID 5: provides data redundancy plus the advantage of striping, and uses less space than mirroring.
A RAID 5 volume is made up of at least three disks, which are striped with parity information
written alternately on all the disks. In case of a single disk failure, the data can be rebuilt
using the parity information from the remaining disks.
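The rebuild step can be sketched with XOR arithmetic (a toy illustration with small integers; real RAID 5 XORs whole stripe units):

```shell
d1=83 d2=172 d3=45            # data chunks on three of the disks
parity=$(( d1 ^ d2 ^ d3 ))    # written to the parity stripe unit
# The disk holding d2 fails; rebuild its data from parity plus the survivors.
rebuilt=$(( parity ^ d1 ^ d3 ))
echo "$rebuilt"               # 172, identical to the lost chunk
```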
Creating New FS in LUNs and new mount point to the Oracle filesystem
# metainit d111 -p d200 20G
d111: Soft Partition is setup
# newfs /dev/md/rdsk/d111
newfs: construct a new file system /dev/md/rdsk/d111: (y/n)? y
# mkdir /ora13data
# chown oracle:dba /ora13data
# ls -la /ora13data
# mount /dev/md/dsk/d111 /ora13data
#df -k
Found Enclosure(s):
SUNWGS INT FCBPL Name:FCloop Node WWN:50800200001bcf28
Logical Path:/dev/es/ses0
Logical Path:/dev/es/ses1
or
# /usr/sbin/luxadm insert_device <enclosure_name,sx>
luxadm insert_device /dev/rdsk/c1t49d0s2
where sx is the slot number
or
# /usr/sbin/luxadm insert_device (if enclosure name is not known)
Note: In many cases, luxadm insert_device does not require the enclosure
name and slot number.
Use the following to find the slot number:
#metadevadm -u c1t0d0
Attach mirrors:
#metattach d0 d30
#metattach d1 d31
# metadb
flags first blk block count
a m p luo 16 8192 /dev/dsk/c1t0d0s7
a p luo 8208 8192 /dev/dsk/c1t0d0s7
a p luo 16400 8192 /dev/dsk/c1t0d0s7
The following file systems are not able to open; while using df -k it shows an I/O error.
Step 1
[root drcs1] ksh$
[root drcs1] ksh$ metastat -s meter d18
meter/d18: Trans
State: Hard Error
Size: 4087280 blocks
Master Device: meter/d17
Logging Device: meter/d5
meter/d17: Mirror
Submirror 0: meter/d15
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 4087280 blocks
meter/d5: Mirror
Submirror 0: meter/d3
State: Okay
Submirror 1: meter/d1
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 132240 blocks
Device Start Block Dbase State Hot Spare
c1t4d0s6 0 No Okay
Step 2:- Analyzed both disks; no errors were found, the disks are okay.
analyze> test
Ready to analyze (won't harm data). This takes a long time,
but is interruptable with CTRL-C. Continue? yes
#umount /oraredo/METR
#umount /redoarch/METR
Check with df -k whether the file systems are unmounted.
Step 8:- Verify that all the Trans device configuration has been cleared.
[root drcs1] ksh$ metastat -s meter -p
Step 10:- Attach the mirror device meter/d5 with submirror meter/d1
[root drcs1] ksh$ metattach meter/d5 meter/d1
meter/d5: submirror meter/d1 is attached
#vi /mnt/etc/vfstab
After making changes, boot the clone disk -----Done
Backups
How will you take ufsdump and ufsrestore in a single command line?
# ufsdump 0f - /dev/rdsk/c0t0d0s6 | (cd /mnt/prasad; ufsrestore xf -)
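The same dump-to-restore pipe idiom can be tried anywhere with tar (hypothetical scratch paths; ufsdump itself needs a UFS device):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
echo hello > "$src/file1"
# The writer streams to stdout; the subshell cd's and reads stdin, mirroring
# ufsdump 0f - /dev/rdsk/... | (cd /mnt/prasad; ufsrestore xf -)
( cd "$src" && tar cf - . ) | ( cd "$dst" && tar xf - )
cat "$dst/file1"              # hello
rm -rf "$src" "$dst"
```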
Tar:
1. Used for single or multiple file backups.
2. Cannot back up character- and block-special device files.
3. Works only on a mounted file system.
Options in ufsdump
S = size estimate: the amount of space needed on the tape
l = autoload the next tape
o = offline: once the backup completes, take the drive offline and, if possible, eject the media
u = update the /etc/dumpdates file (records the name of the file system, the level of the backup 0-9, and the date)
f = specify the tape device name
Options in ufsrestore
t = list the contents of the media
r = restore the entire file system
x = restore only the files named on the command line
i = interactive mode
v = verbose mode
f = specify the tape device name
How will you comment out an error line in the /etc/system file?
# vi /etc/system (to comment out the error line in the /etc/system file, use *)
How will you come to know whether it is hme or eri when configuring the network card?
Based on the Ethernet card.
Disaster recovery steps if the OS is corrupted
OK boot cdrom -s
# newfs /dev/rdsk/c0t0d0s0
# mkdir /a
# mount /dev/dsk/c0t0d0s0 /a
# cd /a
# ufsrestore rf /dev/rmt/0
# rm restoresymtable
# cd /usr/platform/`uname -m`/lib/fs/ufs
# installboot bootblk /dev/rdsk/c0t0d0s0
# cd /
# umount /a
# fsck /dev/rdsk/c0t0d0s0
# init 6
How will you find the process ID and disk utilization?
# prstat
# sudo -l
Veritas
How to find the daemons?
# ps -ef | grep vx
To recover a disk:
# vxrecover -sn -g dgname newdiskname
How to unencapsulate the root disk
Run the following command to prevent VxVM from starting up after reboot:
touch /etc/vx/reconfig.d/state.d/install-db
Detach second mirror
# vxplex -o rm dis opt-02 rootvol-02 swapvol-02 usr-02 var-02
To unencapsulate the root disk:
#/etc/vx/bin/vxunroot
Reboot system #init 6
How to remove the root mirror disk permanently
#vxunroot
Reboot the system (it will remove the VxVM entries from /etc/system and the filesystems from /etc/vfstab)
Remove the plexes of the new root disk:
#vxplex dis rootvol-02
Mirror all the other volumes from the current root disk to the new root disk. Do not mirror swap
volumes; swap slices will be created on the new disk manually. In this example, the volumes to
mirror are var and opt.
# vxassist -g rootdg mirror var newroot
# vxassist -g rootdg mirror opt newroot
How will you identify how many diskgroup versions a particular VxVM version supports?
root on BUILD kirkcmis3 # vxdctl support
Support information:
vxconfigd_vrsn: 21
dg_minimum: 10
dg_maximum: 120
kernel: 15
protocol_minimum: 40
protocol_maximum: 60
protocol_current: 0
Run:
vxconfigrestore -l /etc/vx/cbr/bk/ -c devdg ==> to commit the restoration.
vxconfigrestore -l /etc/vx/cbr/bk/ -d devdg ==> to abort the restoration.
What are the steps to follow to add a disk in Veritas, and what steps should be followed
before adding the disk?
Before adding the disk,
take an output from the format command.
take an output of vxdisk list
after the disk is added, do the following:
#devfsadm
#format --> label the disk
#vxdctl enable
#vxdiskadm : choose option 1; it will ask for the diskgroup. Once the disk has been added it will
ask about encapsulation (say no), then it will ask for the device name; assign the name and that's it.
#vxdisk list : this will show the status of the newly added disk as online.
How will you remove a subdisk and a plex?
To dissociate a subdisk:
vxsd dis disk##-##
To remove a subdisk:
vxedit rm disk##-##
#vxassist shrinkto vol_name 1000
will shrink the volume to 1000 sectors.
Make sure you don't shrink a volume below the current
size of the filesystem.
This approach can be used for both first time complete refresh and ongoing mirroring
process
1. Should know the volume name
2. Give new temporary snapshot volume name
3. Find the disk available space to copy the snapshot volume.
Command to execute
Take a copy of
#vxprint -Aht | more
#vxprint list
Verify the snapshot is completed (it will show 2 plexes for the volume):
# vxprint -g <dg name> snapdb1
# vxprint -g <dg name> snapdb2
# vxprint -g <dg name> snapdb3
# vi /etc/dfs/dfstab
share -F nfs -o rw=<server_name> /snap-db1
share -F nfs -o rw=<server_name> /snap-db2
share -F nfs -o rw=<server_name> /snap-db3
Mount the file system on the client, or you can put the entry in /etc/vfstab on the client side.
----------------End-----------------
If you want to take a backup of the snapshot files, follow the procedure below.
Solution:-
veritas volume made stale & cleaned
7001 vxvea
7004 vxrecover -s -g cusmarp2_dg vol_ora1data
7005 vxrecover -v -g cusmarp2_dg vol_ora1data
7006 vxprint -Ath | more
7009 datapath query device | more
7010 vxprint -Ath | more
7011 mount -F vxfs /dev/vx/dsk/cusmarp2_dg/vol_ora1data /ora1data/CUSMARP2
7012 vxdiskadm
7015 vxdisk list
7016 vxprint -Ath | more
7021 ./vxse &
7027 vxdiskadm
7049 mount -F vxfs /dev/vx/dsk/cusmarp2_dg/vol_ora1data /ora1data/CUSMARP2
7050 vxprint -Ath
7051 vxmend -g cusmarp2_dg fix stale vol_ora1data-01
7052 vxprint -Ath
7053 vxmend -g cusmarp2_dg fix clean vol_ora1data-01
7054 vxprint -Ath
7055 vxvol -g cusmarp2_dg start vol_ora1data
7056 vxprint -Ath
7057 mount -F vxfs /dev/vx/dsk/cusmarp2_dg/vol_ora1data /ora1data/CUSMARP2
7058 fsck -F vxfs /dev/vx/rdsk/cusmarp2_dg/vol_ora1data
7059 mount -F vxfs /dev/vx/dsk/cusmarp2_dg/vol_ora1data /ora1data/CUSMARP2
How to bring an existing data disk under VERITAS Volume Manager control?
Through the encapsulation method.
How to change the layout from mirroring (RAID 0+1) to RAID 5, and how?
#vxassist -g dgname relayout volname layout=raid5
How to find the plex, subdisk, disk group, disk status, free space, disk controller and
volume controller?
Displays info about plexes
#vxprint -lp
#vxprint -l plex_name
What is the difference between VERITAS 3.0 and VERITAS 4.0?
In VERITAS 3.0, rootdg is present by default.
In VERITAS 4.0, rootdg has to be created manually.
In VERITAS 4.0 the cdsdisk format was introduced, meaning the disk can be exported to any OS.
How to rename the old root disk. In this example, rootdisk is being renamed as rootold.
# vxedit -g rootdg rename rootdisk rootold
What is a resource?
Resources are hardware or software entities that work together to provide a service to clients in a
client/server environment. They are monitored and controlled by VCS.
What is HA?
HA --> Highly Available: two or more systems are connected with the same configuration; if one
fails, the other will take over its resources. The cluster runs in active/passive form,
i.e. no load balancing.
How to clear the failing flag?
#vxedit set failing=off mydg02
#vxdisk list
Initialize the new disk:
vxdisksetup -i <diskname>
Add the disk to the disk group:
vxdg -g oradg adddisk oradg05=<diskname>
vxdg -g oradg adddisk oradg06=<diskname>
Command Syntax
You can now back up the snapshot volume by whatever means you prefer. To avoid wasting space, you
can then remove the snapshot volume, which occupies as much space as the original volume
2-node cluster
Minimum 2 nodes, 2 Ethernet addresses, shared disk and HA applications (e.g. Oracle)
#/etc/VRTSvcs/conf/config/main.cf
#/etc/VRTSvcs/conf/config/sysname
How to bring the resource to online and offline
# /opt/VRTSvcs/bin/hagrp -online (service_group) -sys (system_name)
# /opt/VRTSvcs/bin/hagrp -offline (service_group) -sys (system_name)
System node1
System node2
Snmp mycluster
#hacf -verify .
#hacf -cftocmd .
#hastart
#hastatus -sum
Now Start the Cluster on this terminal first by using the following command and use same
commands on each node
# hastart -force
T3 Storage
1)Vol add volname data undn raid n standby undn
2)Vol stat
3)Vol init volname data
4)Vol mount vol name
5)Vol list
6)Mkdir /dev/es
7)Luxadm insert
8)if above solaris 7 exclude the steps 6 & 7
9) format and partition .
What is WWN on storage
WWN: World Wide Name, a unique identifier assigned to each Fibre Channel device.
To view the LUNs on a Solaris host, you need to use cfgadm.
For example, if you remove the SB3 board on a 6800 server to replace faulty memory or a faulty CPU:
cfgadm -c unconfigure N0.SB3 ---- to unconfigure the entire (only SB3) CPU board
root@kbl-db-02 # cfgadm -c disconnect N0.SB3 ---- disconnecting it from the physical path
root@kbl-db-02 # cfgadm -al | more ---- to confirm whether it was removed
/N0/SB3/P2/B1/d2
cfgadm -c configure N0.SB3 ---- after replacing, to configure the same board
SSAADM:- the ssaadm command is now a link to the luxadm command.
LUXADM:-
The luxadm program is an administrative command that manages both the Sun StorEdge A5000 and
SPARCstorage Array disk arrays. luxadm performs a variety of control and query tasks, depending on
the command-line arguments and options used.
# vxdctl -c mode
mode: enabled: cluster active - MASTER
master: bits
# vxdctl upgrade
NOTE: All nodes need to be joined in the Cluster Volume Manager cluster before running the above
command.
To confirm that the protocol version has been updated, the following can be run:
# vxdctl protocolversion
Cluster running at protocol 50
What is multipathing
Multipathing is the use of redundant storage network components responsible for the transfer of data
between the server and storage. It allows two or more data paths to be used simultaneously for
read/write operations, enhancing performance by automatically and equally dispersing data access
across all the available paths.
Splitbrain : if the private network fails, there will be no connectivity between the nodes. The
quorum device takes over in this scenario; the quorum device holds the information of both nodes.
It will evict one node and make the other node the owner of the service group.
(or)
Splitbrain : Enables only the partition (subcluster) with a majority of votes to run as the cluster (only
one partition can exist with such a majority). After a node loses the race for quorum, that node panics.
Seeding: it is used to protect the cluster against a pre-existing network partition; one seeded
system can run VCS.
Automatic seeding: #gabconfig -c -n <no of nodes>
Manual seeding: #gabconfig -c -x
Amnesia: Guarantees that when a cluster is booted, it has at least one node that was a member of the
most recent cluster membership (and thus has the latest configuration data).
Jeopardy Defined
The design of VCS requires that a minimum of two heartbeat-capable channels be available between nodes to protect
against network failure. When a node is missing a single heartbeat connection, VCS can no longer discriminate
between a system loss and a loss of the last network connection. It must then handle loss of communications on a
single network differently from loss on multiple networks. This procedure is called "jeopardy." As mentioned
previously, low latency transport (LLT) provides notification of reliable versus unreliable network communications to
global atomic broadcast (GAB). GAB uses this information, with or without a functional disk heartbeat, to delegate
cluster membership. If the system heartbeats are lost simultaneously across all channels, VCS determines the system
has failed. The services running on that system are then restarted on another. However, if the node was running with
one heartbeat only (in jeopardy) prior to the loss of a heartbeat, VCS does not restart the applications on a new node.
This action of disabling failover is a safety mechanism that prevents data corruption.
I/O Fencing SCSI III Reservations - I/O Fencing (VxFEN) is scheduled to be included in the VCS 4.0 version. VCS
can have parallel or failover service groups with disk group resources in them. If the cluster has a split-brain, VxFEN
should force one of the subclusters to commit suicide in order to prevent data corruption. The subcluster which
commits suicide should never gain access to the disk groups without joining the cluster again. In parallel service
groups, it is necessary to prevent any active processes from writing to the disks. In failover groups, however, access to
the disk only needs to be prevented when VCS fails over the service group to another node. Some multipathing
products will be supported with I/O Fencing.
The cluster resource group and resources showing ERROR_STOP_FAILED, then follow the below
mentioned steps.
1. -- Resource Groups --
Group Name Node Name State
---------- --------- -----
Group: pspd-rg phys-pspd1 Error--stop failed
Group: pspd-rg phys-pspd2 Offline
=======================================================================
For clearing the STOP_FAILED flag (-c clears the flag, -h gives the node name, -j the
resource name, -f the error flag):
root@phys-pspd1 # scswitch -c -h phys-pspd1 -j pspd-oralisten-res -f STOP_FAILED
To bring down the resource group (bringing it down clears the STOP_FAILED error and the group
goes to the Offline state):
root@phys-pspd1 # scswitch -F -g pspd-rg
=======================================================================
2. root@phys-pspd1 # scstat -g
-- Resource Groups and Resources --
-- Resource Groups --
Group Name Node Name State
---------- --------- -----
Group: pspd-rg phys-pspd1 Offline
Group: pspd-rg phys-pspd2 Offline
Resource: pspd-oralisten-res phys-pspd1 Offline Offline
root@phys-pspd1 #
=======================================================================
To bring up the resource group--
root@phys-pspd1 # scswitch -Z -g pspd-rg
=======================================================================
root@phys-pspd1 # scstat -g
-- Resource Groups and Resources --
Resources: pspd-rg pspd pspd-hastorageplus-res pspd-orasrv-res pspd-
oralisten-res
Resource: pspd-oralisten-res phys-pspd1 Online Online
Resource: pspd-oralisten-res phys-pspd2 Offline Offline
Communicate to OPS to ignore the alerts on these servers: phys-hhdc1 & phys-hhdc2.
ii) Switch back the resource group "hhda-rg" from phys-hhdc1 to phys-hhdc2 using the command shown below:
scswitch -z -g hhda-rg -h phys-hhdc2
iii) Check if the resource group is available on phys-hhdc2.
iv) Communicate to OPS to start monitoring the alerts on these servers: phys-hhdc1 & phys-hhdc2.