
How can I add a filesystem to an existing zone?

There are four methods. The following list uses UFS examples, but other file system
types, such as HSFS and VxFS, can be used with the zonecfg "fs" resource type or
attached with mount(1M).

1. Create and mount the filesystem in the global zone and use LOFS to mount it
into the non-global zone (very safe)

global# mkdir -p /zones/data/au11qsn01rtels2/optapporacle


global# zonecfg -z au11qsn01rtels2
zonecfg:au11qsn01rtels2> add fs
zonecfg:au11qsn01rtels2:fs> set dir=/opt/app/oracle
zonecfg:au11qsn01rtels2:fs> set special=/zones/data/au11qsn01rtels2/optapporacle
zonecfg:au11qsn01rtels2:fs> set type=lofs
zonecfg:au11qsn01rtels2:fs> add options [ro,nodevices]
zonecfg:au11qsn01rtels2:fs> end
zonecfg:au11qsn01rtels2> verify
zonecfg:au11qsn01rtels2> commit
zonecfg:au11qsn01rtels2> exit

global# mount -F lofs /zones/data/au11qsn01rtels2/optapporacle /zones/au11qsn01rtels2/root/opt/app/oracle

2. Create the filesystem in the global zone and use zonecfg to mount the
filesystem into the zone as a UFS filesystem (very safe)
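
For example, a minimal sketch of this method, assuming a UFS filesystem has already
been created on the hypothetical slice c1t0d0s6 and should appear inside the zone
my-zone at /data:

global# zonecfg -z my-zone
zonecfg:my-zone> add fs
zonecfg:my-zone:fs> set dir=/data
zonecfg:my-zone:fs> set special=/dev/dsk/c1t0d0s6
zonecfg:my-zone:fs> set raw=/dev/rdsk/c1t0d0s6
zonecfg:my-zone:fs> set type=ufs
zonecfg:my-zone:fs> add options [logging]
zonecfg:my-zone:fs> end
zonecfg:my-zone> commit
zonecfg:my-zone> exit

With the fs resource in place, the filesystem is mounted automatically the next time
the zone boots.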

3. Export the device associated with the disk partition to the non-global
zone, create the filesystem in the non-global zone and mount it. Security
consideration: If a _block_ device is present in the zone, a malicious user
could create a corrupt filesystem image on that device, and mount a
filesystem. This might cause the system to panic. The problem is less
acute with raw (character) devices. Disk devices should only be placed into
a zone that is part of a relatively trusted infrastructure.
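
For example, a minimal sketch of this method, assuming the hypothetical slice
c1t1d0s6 is dedicated to the zone my-zone:

global# zonecfg -z my-zone
zonecfg:my-zone> add device
zonecfg:my-zone:device> set match=/dev/rdsk/c1t1d0s6
zonecfg:my-zone:device> end
zonecfg:my-zone> add device
zonecfg:my-zone:device> set match=/dev/dsk/c1t1d0s6
zonecfg:my-zone:device> end
zonecfg:my-zone> commit
zonecfg:my-zone> exit

After the zone is rebooted, the filesystem can be created and mounted from inside
the zone:

my-zone# newfs /dev/rdsk/c1t1d0s6
my-zone# mount -F ufs /dev/dsk/c1t1d0s6 /mnt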

4. Mount a UFS filesystem directly into the non-global zone's directory
structure (allows dynamic modifications to the mount without rebooting the
non-global zone)
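
Section D below walks through this method in detail; in short, assuming the zone's
zonepath is /zones/my-zone and a UFS filesystem exists on the hypothetical slice
c1t0d0s7:

global# mount -F ufs /dev/dsk/c1t0d0s7 /zones/my-zone/root/mnt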

A. How to Import Raw and Block Devices by Using zonecfg


This procedure uses the lofi file driver, which exports a file as a block device.

1. Become superuser, or assume the Primary Administrator role.


To create the role and assign the role to a user, see Using the Solaris Management
Tools With RBAC (Task Map) in System Administration Guide: Basic
Administration.

2. Change directories to /usr/tmp.

global# cd /usr/tmp

3. Create a new file to use as backing store for the block device.

global# mkfile 10m fsfile

4. Attach the file as a block device.

The first available slot, which is /dev/lofi/1 if no other lofi devices have been
created, is used.

global# lofiadm -a `pwd`/fsfile
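
lofiadm prints the name of the block device it attached; with no other lofi devices
in use, the output should be similar to this:

/dev/lofi/1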

The required character (raw) device, /dev/rlofi/1, is created as well.

5. Import the devices into the zone my-zone.

global# zonecfg -z my-zone


zonecfg:my-zone> add device
zonecfg:my-zone:device> set match=/dev/rlofi/1
zonecfg:my-zone:device> end
zonecfg:my-zone> add device
zonecfg:my-zone:device> set match=/dev/lofi/1
zonecfg:my-zone:device> end
zonecfg:my-zone> exit

6. Boot the zone. (If the zone is already running, reboot it so that the new device
configuration takes effect.)

global# zoneadm -z my-zone boot

7. Log in to the zone and verify that the devices were successfully imported.

my-zone# ls -l /dev/*lofi/*

8. You will see a display that is similar to this:


brw------- 1 root sys 147, 1 Jan 7 11:26 /dev/lofi/1
crw------- 1 root sys 147, 1 Jan 7 11:26 /dev/rlofi/1

B. How to Mount the File System Manually


You must be the zone administrator and have the Zone Management profile to perform
this procedure. This procedure uses the newfs command, which is described in the
newfs(1M) man page.

1. Become superuser, or have the Zone Management rights profile in your list of
profiles.
2. In the zone my-zone, create a new file system on the disk.

my-zone# newfs /dev/lofi/1

3. Respond yes at the prompt.

newfs: construct a new file system /dev/rlofi/1: (y/n)? y

4. You will see a display that is similar to this:

/dev/rlofi/1: 20468 sectors in 34 cylinders of 1 tracks, 602 sectors


10.0MB in 3 cyl groups (16 c/g, 4.70MB/g, 2240 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 9664, 19296,

5. Check the file system for errors.

my-zone# fsck -F ufs /dev/rlofi/1

6. You will see a display that is similar to this:

** /dev/rlofi/1
** Last Mounted on
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
2 files, 9 used, 9320 free (16 frags, 1163 blocks, 0.2% fragmentation)
7. Mount the file system.

my-zone# mount -F ufs /dev/lofi/1 /mnt

8. Verify the mount.

my-zone# grep /mnt /etc/mnttab

9. You will see a display similar to this:

/dev/lofi/1 /mnt ufs rw,suid,intr,largefiles,xattr,onerror=panic,zone=foo,dev=24c0001 1073503869

C. How to Place a File System in /etc/vfstab to Be Mounted When the Zone Boots

This procedure is used to mount the block device /dev/lofi/1 on the file system path /mnt.
The block device contains a UFS file system. The following options are used:

• logging is used as the mount option.


• yes tells the system to automatically mount the file system when the zone boots.
• /dev/rlofi/1 is the character (or raw) device. The fsck command is run on the raw
device if required.

1. Become superuser, or have the Zone Management rights profile in your list of
profiles.
2. In the zone my-zone, add the following line to /etc/vfstab:

/dev/lofi/1 /dev/rlofi/1 /mnt ufs 2 yes logging
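
Because the entry records the block device, the raw device, the mount point, the file
system type, and the options, the file system can also be mounted right away, without
waiting for the next zone boot, by referring to the mount point alone:

my-zone# mount /mnt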

D. How to Mount a File System From the Global Zone Into a Non-Global Zone

Assume that a zone has the zonepath /export/home/my-zone. You want to mount the disk
/dev/lofi/1 from the global zone into /mnt in the non-global zone.

You must be the global administrator in the global zone to perform this procedure.

1. Become superuser, or assume the Primary Administrator role.


To create the role and assign the role to a user, see Using the Solaris Management
Tools With RBAC (Task Map) in System Administration Guide: Basic
Administration.

2. To mount the disk into /mnt in the non-global zone, type the following from the
global zone:

global# mount -F ufs /dev/lofi/1 /export/home/my-zone/root/mnt
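
The file system is then visible inside the zone at /mnt; a quick check from the
non-global zone (the exact output will vary):

my-zone# df -h /mnt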

How to Export VxVM Volumes to a Non-Global Zone


A volume device node can be exported for use in a non-global zone using the
zonecfg command. The following procedure makes a volume vol1 available
in the non-global zone myzone.

Caution: Exporting raw volumes to non-global zones has implicit security risks. It
is possible for the zone administrator to create malformed file systems
that could later panic the system when a mount is attempted. Directly
writing to raw volumes exported to non-global zones with utilities
such as dd can also lead to data corruption in certain scenarios.

To export VxVM volumes to a non-global zone

1  Create a volume vol1 in the global zone:

global# ls -l /dev/vx/rdsk/rootdg/vol1
crw------- 1 root root 301, 102000 Jun 3 12:54 /dev/vx/rdsk/rootdg/vol1
crw------- 1 root sys 301, 102000 Jun 3 12:54 /devices/pseudo/vxio@0:rootdg,vol1,102000,raw
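
The listing above assumes vol1 already exists; if it does not, it can be created
first. A minimal sketch, assuming the disk group rootdg and an illustrative size
of 100 MB:

global# vxassist -g rootdg make vol1 100m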

2  Add the volume device vol1 to the non-global zone myzone:

global# zonecfg -z myzone


zonecfg:myzone> add device
zonecfg:myzone:device> set match=/dev/vx/rdsk/rootdg/vol1
zonecfg:myzone:device> end
zonecfg:myzone> commit

3  Ensure that the devices will be seen in the non-global zone:

global# zoneadm -z myzone halt


global# zoneadm -z myzone boot

4  Verify that /myzone/dev/vx contains the raw volume node and that the
non-global zone can perform I/O to the raw volume node.

The exported device can now be used for performing I/O or for creating file
systems. Symantec recommends using VxFS file systems because of the increased
fault tolerance provided by VxFS.

To finish, log in to the non-global zone, create the file system, add an entry to
/etc/vfstab, and mount it, as recommended by Symantec.
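
A minimal sketch of those final steps, assuming the corresponding block device node
/dev/vx/dsk/rootdg/vol1 has also been exported to the zone (it is not added by the
procedure above) and /data is the desired mount point:

myzone# mkfs -F vxfs /dev/vx/rdsk/rootdg/vol1
myzone# mkdir -p /data
myzone# echo "/dev/vx/dsk/rootdg/vol1 /dev/vx/rdsk/rootdg/vol1 /data vxfs 1 yes -" >> /etc/vfstab
myzone# mount /data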
