
SPARC: Setting Up Disks for ZFS File Systems (Task Map)
The following task map identifies the procedures for setting up a ZFS root pool disk for a ZFS root file system or a
non-root ZFS pool disk on a SPARC based system.

Disk for a ZFS Root File System

Task 1: Set up the disk for a ZFS root file system.
Description: Connect the new disk or replace the existing root pool disk and boot from a local or remote Oracle Solaris DVD.
For instructions: SPARC: How to Set Up a Disk for a ZFS Root File System

Task 2: Create a disk slice for a ZFS root file system.
Description: Create a disk slice for a disk that is intended for a ZFS root pool. This is a long-standing boot limitation.
For instructions: SPARC: How to Create a Disk Slice for a ZFS Root File System

Task 3: Install the boot blocks for a ZFS root file system, if necessary.
Description: If you replace a disk that is intended for the root pool by using the zpool replace command, then you must install the boot blocks manually so that the system can boot from the replacement disk.
For instructions: SPARC: How to Install Boot Blocks for a ZFS Root File System

Disk for a ZFS File System

Task 4: Set up a disk for a ZFS file system.
Description: Set up a disk for a ZFS file system.
For instructions: SPARC: How to Set Up a Disk for a ZFS File System

SPARC: Setting Up Disks for ZFS File Systems


Although the procedures that describe how to set up a disk can be used with a ZFS file system, a ZFS file system is
not directly mapped to a disk or a disk slice. You must create a ZFS storage pool before creating a ZFS file system.
For more information, see Oracle Solaris Administration: ZFS File Systems.
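For example, the following is a minimal sketch of that ordering; the pool name tank, the file system name tank/data, and the disk c1t0d0 are illustrative assumptions only, not part of the procedures in this chapter:

# zpool create tank c1t0d0
# zfs create tank/data
# zfs list -r tank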
The root pool contains the root file system that is used to boot the Oracle Solaris OS. If a root pool disk becomes damaged and the root pool is not mirrored, the system might not boot. If a root pool disk becomes damaged, you have two ways to recover:

- You can reinstall the entire Oracle Solaris OS.
- You can replace the root pool disk and restore your file systems from snapshots or from a backup medium.

You can reduce system downtime due to hardware failures by creating a redundant root pool. The only supported redundant root pool configuration is a mirrored root pool.
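You can check whether the root pool is already mirrored by reviewing its status. The following is a quick sketch, assuming the default root pool name rpool:

# zpool status rpool

A mirrored root pool lists its disks under a mirror vdev in the config section; a single disk listed directly under the pool name indicates a non-redundant configuration.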

A disk that is used in a non-root pool usually contains space for user or data files. You can attach or add another disk to a root pool or a non-root pool for more disk space. Or, you can replace a damaged disk in a pool in the following ways:

- A disk can be replaced in a non-redundant pool if all the devices are currently ONLINE.
- A disk can be replaced in a redundant pool if enough redundancy exists among the other devices.
- In a mirrored root pool, you can replace a disk, or attach a larger disk and then detach the failed or smaller disk to increase the pool's size (see the sketch after this list).
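The following sketch illustrates the last case with hypothetical device names, where c0t0d0s0 is the existing smaller root pool disk and c1t0d0s0 is the larger replacement:

# zpool attach rpool c0t0d0s0 c1t0d0s0
# zpool status rpool
# zpool detach rpool c0t0d0s0

Wait until zpool status reports that resilvering of the new disk has completed before detaching the old one, and remember that a replacement root pool disk also needs boot blocks, as described in SPARC: How to Install Boot Blocks for a ZFS Root File System.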

In general, setting up a disk on the system depends on the hardware, so review your hardware documentation when adding or replacing a disk on your system. If you need to add a disk to an existing controller and the system supports hot-plugging, it might just be a matter of inserting the disk in an empty slot. If you need to configure a new controller, see Dynamic Reconfiguration and Hot-Plugging.
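If you are not sure whether a hot-plugged disk was recognized, one quick check is to list the attachment points. This is a general sketch; the controller and disk names on your system will differ:

# cfgadm -al

A newly inserted disk appears as a disk attachment point; if it is shown as unconfigured, configure it with the cfgadm -c configure command as shown in the procedures that follow.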

SPARC: How to Set Up a Disk for a ZFS Root File System
Refer to your hardware installation guide for information on replacing a disk.
1. Disconnect the damaged disk from the system, if necessary.

2. Connect the replacement disk to the system and check the disk's physical connections, if necessary.

3. Follow the instructions in the following table, depending on whether you are booting from a local Oracle Solaris DVD or a remote Oracle Solaris DVD from the network.

   Boot Type: From an Oracle Solaris DVD in a local drive
   Action:
     1. Make sure the Oracle Solaris DVD is in the drive.
     2. Boot from the media to single-user mode:

        ok boot cdrom -s

   Boot Type: From the network
   Action: Boot from the network to single-user mode:

        ok boot net -s

4. After a few minutes, the root prompt (#) is displayed.

After You Set Up a Disk for a ZFS Root File System ...
After the disk is connected or replaced, you can create a slice and update the disk label. Go to SPARC: How to
Create a Disk Slice for a ZFS Root File System.

SPARC: Creating a Disk Slice for a ZFS Root File System


You must create a disk slice for a disk that is intended for a ZFS root pool. This is a long-standing boot limitation.
Review the following root pool disk requirements:

- Must contain a disk slice and an SMI (VTOC) label. An EFI label is not supported for a root pool disk.
- Must be a single disk or part of a mirrored configuration. Neither a non-redundant (striped) configuration nor a RAID-Z configuration is supported for the root pool.
- All subdirectories of the root file system that are part of the OS image, with the exception of /var, must be in the same dataset as the root file system.
- All Oracle Solaris OS components must reside in the root pool, with the exception of the swap and dump devices.

In general, you should create a disk slice with the bulk of the disk space in slice 0. Sharing the disk among different operating systems, or with a different ZFS storage pool or storage pool components, by using separate slices is not recommended.
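As a quick way to confirm the resulting layout without re-entering the format utility, you can print the disk's VTOC. The following is a brief sketch, assuming the c2t1d0 disk used in the procedure below:

# prtvtoc /dev/rdsk/c2t1d0s0

The output should show slice 0 holding the bulk of the disk.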

SPARC: How to Create a Disk Slice for a ZFS Root File System
In general, the root pool disk is installed automatically when the system is installed. If you need to replace a root pool
disk or attach a new disk as a mirrored root pool disk, see the steps below.
1. Become an administrator.

2. Offline and unconfigure the failed disk, if necessary.
   Some hardware requires that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:

   # zpool offline rpool c2t1d0s0
   # cfgadm -c unconfigure c2::dsk/c2t1d0
3. Physically connect the new or replacement disk to the system, if necessary.
   a. Physically remove the failed disk.
   b. Physically insert the replacement disk.
   c. Configure the replacement disk, if necessary. For example:

      # cfgadm -c configure c2::dsk/c2t1d0

   On some hardware, you do not have to reconfigure the replacement disk after it is inserted.
4. Confirm that the disk is accessible by reviewing the format output.
   For example, the format command sees four disks connected to this system.

   # format -e
   AVAILABLE DISK SELECTIONS:
          0. c2t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
             /pci@1c,600000/scsi@2/sd@0,0
          1. c2t1d0 <SEAGATE-ST336607LSUN36G-0307-33.92GB>
             /pci@1c,600000/scsi@2/sd@1,0
          2. c2t2d0 <SEAGATE-ST336607LSUN36G-0507-33.92GB>
             /pci@1c,600000/scsi@2/sd@2,0
          3. c2t3d0 <SEAGATE-ST336607LSUN36G-0507-33.92GB>
             /pci@1c,600000/scsi@2/sd@3,0
5. Select the disk to be used for the ZFS root pool.

6. Confirm that the disk has an SMI label by displaying the partition (slice) information.
   For example, the partition (slice) output for c2t1d0 shows that this disk has an EFI label because it identifies the first and last sectors.

   Specify disk (enter its number): 1
   selecting c2t1d0
   [disk formatted]
   format> p

   PARTITION MENU:
           0      - change `0' partition
           1      - change `1' partition
           2      - change `2' partition
           3      - change `3' partition
           4      - change `4' partition
           5      - change `5' partition
           6      - change `6' partition
           expand - expand label to use whole disk
           select - select a predefined table
           modify - modify a predefined partition table
           name   - name the current table
           print  - display the current table
           label  - write partition map and label to the disk
           !<cmd> - execute <cmd>, then return
           quit
   partition> p
   Current partition table (original):
   Total disk sectors available: 71116508 + 16384 (reserved sectors)

   Part      Tag    Flag     First Sector        Size        Last Sector
     0        usr    wm            256         33.91GB          71116541
     1 unassigned    wm              0               0                 0
     2 unassigned    wm              0               0                 0
     3 unassigned    wm              0               0                 0
     4 unassigned    wm              0               0                 0
     5 unassigned    wm              0               0                 0
     6 unassigned    wm              0               0                 0
     8   reserved    wm       71116542          8.00MB          71132925

   partition>
7. If the disk contains an EFI label, relabel the disk with an SMI label.
   For example, the c2t1d0 disk is relabeled with an SMI label, but the default partition table does not provide an optimal slice configuration.

   partition> label
   [0] SMI Label
   [1] EFI Label
   Specify Label type[1]: 0
   Auto configuration via format.dat[no]?
   Auto configuration via generic SCSI-2[no]?
   partition> p

   Current partition table (default):
   Total disk cylinders available: 24620 + 2 (reserved cylinders)

   Part      Tag    Flag     Cylinders         Size            Blocks
     0       root    wm       0 -    90      128.37MB    (91/0/0)       262899
     1       swap    wu      91 -   181      128.37MB    (91/0/0)       262899
     2     backup    wu       0 - 24619       33.92GB    (24620/0/0)  71127180
     3 unassigned    wm       0                0         (0/0/0)             0
     4 unassigned    wm       0                0         (0/0/0)             0
     5 unassigned    wm       0                0         (0/0/0)             0
     6        usr    wm     182 - 24619       33.67GB    (24438/0/0)  70601382
     7 unassigned    wm       0                0         (0/0/0)             0
   partition>
8. Create an optimal slice configuration for a ZFS root pool disk.
   Set the free hog partition so that all the unallocated disk space is collected in slice 0. Then, press Return through the slice size fields to create one large slice 0.

   partition> modify
   Select partitioning base:
           0. Current partition table (default)
           1. All Free Hog
   Choose base (enter number) [0]? 1

   Part      Tag    Flag     Cylinders         Size            Blocks
     0       root    wm       0                0         (0/0/0)             0
     1       swap    wu       0                0         (0/0/0)             0
     2     backup    wu       0 - 24619       33.92GB    (24620/0/0)  71127180
     3 unassigned    wm       0                0         (0/0/0)             0
     4 unassigned    wm       0                0         (0/0/0)             0
     5 unassigned    wm       0                0         (0/0/0)             0
     6        usr    wm       0                0         (0/0/0)             0
     7 unassigned    wm       0                0         (0/0/0)             0

   Do you wish to continue creating a new partition
   table based on above table[yes]?
   Free Hog partition[6]? 0
   Enter size of partition '1' [0b, 0c, 0.00mb, 0.00gb]:
   Enter size of partition '3' [0b, 0c, 0.00mb, 0.00gb]:
   Enter size of partition '4' [0b, 0c, 0.00mb, 0.00gb]:
   Enter size of partition '5' [0b, 0c, 0.00mb, 0.00gb]:
   Enter size of partition '6' [0b, 0c, 0.00mb, 0.00gb]:
   Enter size of partition '7' [0b, 0c, 0.00mb, 0.00gb]:

   Part      Tag    Flag     Cylinders         Size            Blocks
     0       root    wm       0 - 24619       33.92GB    (24620/0/0)  71127180
     1       swap    wu       0                0         (0/0/0)             0
     2     backup    wu       0 - 24619       33.92GB    (24620/0/0)  71127180
     3 unassigned    wm       0                0         (0/0/0)             0
     4 unassigned    wm       0                0         (0/0/0)             0
     5 unassigned    wm       0                0         (0/0/0)             0
     6        usr    wm       0                0         (0/0/0)             0
     7 unassigned    wm       0                0         (0/0/0)             0

   Okay to make this the current partition table[yes]?
   Enter table name (remember quotes): "c2t1d0"
   Ready to label disk, continue? yes
   partition> quit
   format> quit
9. Let ZFS know that the failed disk is replaced. For example:

   # zpool replace rpool c2t1d0s0
   # zpool online rpool c2t1d0s0

   On some hardware, you do not have to online the replacement disk after it is inserted.

10. If you are attaching a new disk to create a mirrored root pool or attaching a larger disk to replace a smaller disk, use syntax similar to the following:

   # zpool attach rpool c0t0d0s0 c1t0d0s0
11. If a root pool disk is replaced with a new disk, apply the boot blocks after the new or replacement disk is resilvered.
    For example:

    # zpool status rpool
    # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c2t1d0s0
12. Verify that you can boot from the new disk.
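    One way to test this is to boot the replacement disk explicitly from the OpenBoot PROM. The following is a sketch only; the device path is an assumption, and the exact path for your disk can be listed with the show-disks command at the ok prompt:

    ok nvalias newrootdisk /pci@1c,600000/scsi@2/disk@1,0
    ok boot newrootdisk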

13. If the system boots from the new disk, detach the old disk.
    This step is only necessary if you attach a new disk to replace a failed disk or a smaller disk.

    # zpool detach rpool c0t0d0s0
14. Set up the system to boot automatically from the new disk, either by using the eeprom command or the setenv command from the SPARC boot PROM.
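    For example, the following sketch uses the eeprom command from the running OS, or the setenv command at the ok prompt; the alias newrootdisk is an assumption and should be replaced with the device path or alias for your new disk:

    # eeprom boot-device=newrootdisk

    ok setenv boot-device newrootdisk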

After You Have Created a Disk Slice for a ZFS Root File System ...

After you have created a disk slice for the ZFS root file system, if you need to restore root pool snapshots to recover your root pool, see How to Replace a Disk in a ZFS Root Pool in Oracle Solaris Administration: ZFS File Systems.

SPARC: How to Install Boot Blocks for a ZFS Root File System
1. Become an administrator.

2. Install a boot block for a ZFS root file system. For example:

   # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/cwtxdys0

   For more information, see installboot(1M).

3. Verify that the boot blocks are installed by rebooting the system to run level 3.

   # init 6
Example 12-1 SPARC: Installing Boot Blocks for a ZFS Root File System
If you physically replace the disk that is intended for the root pool and the Oracle Solaris OS is then reinstalled, or you
attach a new disk for the root pool, the boot blocks are installed automatically. If you replace a disk that is intended for
the root pool by using the zpool replace command, then you must install the boot blocks manually so that the
system can boot from the replacement disk.
The following example shows how to install boot blocks for a ZFS root file system.
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0

SPARC: How to Set Up a Disk for a ZFS File System


If you are setting up a disk to be used with a non-root ZFS file system, the disk is relabeled automatically when the
pool is created or when the disk is added to the pool. If a pool is created with whole disks or when a whole disk is
added to a ZFS storage pool, an EFI label is applied. For more information about EFI disk labels, see EFI Disk Label.
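For example, the following minimal sketch assumes a hypothetical pool named tank and a whole disk c1t1d0; creating the pool applies an EFI label to the disk automatically:

# zpool create tank c1t1d0
# zpool status tank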
Generally, most modern bus types support hot-plugging. This means you can insert a disk in an empty slot and the
system recognizes it. For more information about hot-plugging devices, see Chapter 6, Dynamically Configuring
Devices (Tasks).
1. Become an administrator.

2. Connect the disk to the system and check the disk's physical connections.
   Refer to the disk's hardware installation guide for details.

3. Offline and unconfigure the failed disk, if necessary.
   Some hardware requires that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:

   # zpool offline tank c1t1d0
   # cfgadm -c unconfigure c1::dsk/c1t1d0
   <Physically remove failed disk c1t1d0>
   <Physically insert replacement disk c1t1d0>
   # cfgadm -c configure c1::dsk/c1t1d0

   On some hardware, you do not have to reconfigure the replacement disk after it is inserted.
4. Confirm that the new disk is recognized.
   Review the output of the format utility to see if the disk is listed under AVAILABLE DISK SELECTIONS. Then, quit the format utility.

   # format
5. Let ZFS know that the failed disk is replaced, if necessary. For example:

   # zpool replace tank c1t1d0
   # zpool online tank c1t1d0

6. Confirm that the new disk is resilvering.

   # zpool status tank

7. Attach a new disk to an existing ZFS storage pool, if necessary.
   For example:

   # zpool attach tank c1t0d0 c2t0d0

   Confirm that the new disk is resilvering.

   # zpool status tank
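As a final check after replacing or attaching a disk, you can confirm that the pool is healthy and that its file systems are available. The following is a brief sketch, assuming the pool name tank used in the examples above:

# zpool status tank
# zfs list -r tank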
