The AI installer provides the flexibility of installing a ZFS root pool on the default boot disk or on a target
disk that you identify. You can specify the logical device, such as c1t0d0s0, or the physical device path. In
addition, you can use the MPxIO identifier or the device ID for the device to be installed.
After the installation, review your ZFS storage pool and file system information, which can vary by
installation type and customizations. For example:
# zpool status
pool: rpool
state: ONLINE
scrub: none requested
config:
In the beadm list output shown below, the Active field indicates whether the BE is active now (N), active on reboot (R), or both (NR).
https://docs.oracle.com/cd/E23824_01/html/821-1448/gjtuk.html# 1/6
1/23/2018 Managing Your ZFS Root Pool - Oracle Solaris Administration: ZFS File Systems
# beadm list
BE Active Mountpoint Space Policy Created
-- ------ ---------- ----- ------ -------
solaris NR / 8.41G static 2011-01-13 15:31
In the above output, NR means the BE is active now and will be the active BE on reboot.
You can use the pkg update command to update your ZFS boot environment. If you update your ZFS BE by
using the pkg update command, a new BE is created and activated automatically, unless the updates to the
existing BE are very minimal.
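For example, a typical update sequence looks like the following (the BE names, sizes, and dates are illustrative):

```
# pkg update
# beadm list
BE        Active Mountpoint Space Policy Created
--        ------ ---------- ----- ------ -------
solaris   N      /          8.41G static 2011-09-26 08:37
solaris-1 R      -          3.92G static 2011-09-26 09:32
```

Before the reboot, the current BE is still active now (N), and the newly created BE is marked active on reboot (R).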
2. Reboot the system to complete the BE activation. Then, confirm your BE status.
# init 6
.
.
.
# beadm list
BE Active Mountpoint Space Policy Created
-- ------ ---------- ----- ------ -------
solaris - - 6.25M static 2011-09-26 08:37
solaris-1 NR / 3.92G static 2011-09-26 09:32
3. If an error occurs when booting the new BE, activate and boot back to the previous BE.
# beadm activate solaris
# init 6
1. Become an administrator.
2. Mount the alternate BE.
# beadm mount solaris-1 /mnt
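After the BE is mounted, you can, for example, recover a file from it and then unmount it. The file path below is only an illustration:

```
# cp /mnt/etc/system /var/tmp/system.solaris-1
# beadm unmount solaris-1
```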
For information about replacing a disk in a root pool, see How to Replace a Disk in a ZFS Root Pool.
SPARC: Confirm that the disk has an SMI (VTOC) disk label and a slice 0. If you need to
relabel the disk and create a slice 0, see Creating a Disk Slice for a ZFS Root File System in
Oracle Solaris Administration: Devices and File Systems.
x86: Confirm that the disk has an fdisk partition, an SMI disk label, and a slice 0. If you need to
repartition the disk and create a slice 0, see Creating a Disk Slice for a ZFS Root File System in
Oracle Solaris Administration: Devices and File Systems.
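One way to confirm the label and slice layout is with the prtvtoc command; the device name below is a placeholder for your root pool disk:

```
# prtvtoc /dev/rdsk/c2t1d0s0
```

If the command reports a valid VTOC that includes a slice 0, the disk is ready for use in the root pool.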
At this point, the resilvering process might not yet be complete. Resilvering is complete when the zpool status output reports a scan line showing that the pool was resilvered with 0 errors.
5. Verify that you can boot successfully from the new disk.
6. Set up the system to boot automatically from the new disk.
SPARC: Set up the system to boot automatically from the new disk, either by using the eeprom
command or the setenv command from the boot PROM.
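For example, from a running system you can set the boot device with eeprom (the device path below is a placeholder for your new disk's path):

```
# eeprom boot-device=/pci@1f,700000/scsi@2/disk@1,0
```

Alternatively, use setenv boot-device from the boot PROM's ok prompt.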
You might need to replace a root pool disk for the following reasons:
- The root pool is too small and you want to replace it with a larger disk.
- The root pool disk is failing. In a non-redundant pool, if the disk is failing so that the system won't boot, you must boot from alternate media, such as a CD or the network, before you replace the root pool disk.
In a mirrored root pool configuration, you might be able to replace a disk without having to boot from alternate media. You can replace a failed disk by using the zpool replace command or, if you have an additional disk, the zpool attach command. The steps below show an example of attaching an additional disk and then detaching the original root pool disk.
Systems with SATA disks require that you take the disk offline and unconfigure it before attempting the zpool replace operation on a failed disk.
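On such systems, the sequence typically looks like the following; the controller and device names are placeholders for your hardware:

```
# zpool offline rpool c1t0d0s0
# cfgadm -c unconfigure c1::dsk/c1t0d0
<Physically replace the failed disk c1t0d0>
# cfgadm -c configure c1::dsk/c1t0d0
# zpool replace rpool c1t0d0s0
# zpool online rpool c1t0d0s0
```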
On some hardware, you do not have to online or reconfigure the replacement disk after it is inserted.
For information about relabeling a disk that is intended for the root pool, see How to Label a Disk in
Oracle Solaris Administration: Devices and File Systems.
For example:
# zpool attach rpool c2t0d0s0 c2t1d0s0
Make sure to wait until resilver is done before rebooting.
For example:
state: ONLINE
scan: resilvered 5.36G in 0h2m with 0 errors on Thu Sep 29 18:11:53 2011
config:
5. Verify that you can boot from the new disk after resilvering is complete.
Identify the boot device pathnames of the current and new disks so that you can test booting from the replacement disk and, if the replacement disk fails, manually boot from the existing disk. In this example, the current root pool disk (c2t0d0s0) is:
/pci@1f,700000/scsi@2/disk@0,0
For example, boot from the new disk:
ok boot /pci@1f,700000/scsi@2/disk@1,0
6. If the system boots from the new disk, detach the old disk.
For example:
# zpool detach rpool c2t0d0s0
SPARC: Set up the system to boot automatically from the new disk, either by using the eeprom
command or the setenv command from the boot PROM.
After you activate and boot from the new BE in the second root pool, it will have no information about the
previous BE in the first root pool. If you want to boot back to the original BE, you will need to boot the
system manually from the original root pool's boot disk.
1. Create a second root pool with an SMI (VTOC)-labeled disk. For example:
# zpool create rpool2 c4t2d0s0
3. Set the bootfs property on the second root pool. For example:
# zpool set bootfs=rpool2/ROOT/solaris2 rpool2
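You can verify that the property was set; the output below is illustrative:

```
# zpool get bootfs rpool2
NAME    PROPERTY  VALUE                 SOURCE
rpool2  bootfs    rpool2/ROOT/solaris2  local
```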
5. Boot from the new BE. You must boot specifically from the second root pool's boot device.
ok boot disk2
7. Update the /etc/vfstab entry for the new swap device. For example:
/dev/zvol/dsk/rpool2/swap - - swap - no -
10. Reset your default boot device to boot from the second root pool's boot disk.
SPARC: Set up the system to boot automatically from the new disk, either by using the eeprom command or the setenv command from the boot PROM.
11. Reboot to clear the original root pool's swap and dump devices.
# init 6
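After the reboot, you can confirm that the swap and dump devices now come from the second root pool by listing them; for example:

```
# swap -l
# dumpadm
```

The swap -l output should list the rpool2 swap volume, and dumpadm should report a dump device in the second root pool.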