
Author: ZFS    [SUN] ZFS Pool Management    Views: 9349


Using Disks in a ZFS Storage Pool
The most basic element of a storage pool is a piece of physical storage. Physical storage can be any block device of at least 128 Mbytes in size. Typically, this device is a hard drive that is visible to the system in the /dev/dsk directory.

A storage device can be a whole disk (c1t0d0) or an individual slice (c0t0d0s7). The recommended mode of operation is to use an entire disk, in which case the disk does not need to be specially formatted. ZFS formats the disk using an EFI label to contain a single, large slice. When used in this way, the partition table that is displayed by the format command appears similar to the following:



Current partition table (original):
Total disk sectors available: 71670953 + 16384 (reserved sectors)

Part        Tag    Flag    First Sector        Size    Last Sector
  0         usr     wm               34     34.18GB       71670953
  1  unassigned     wm                0           0              0
  2  unassigned     wm                0           0              0
  3  unassigned     wm                0           0              0
  4  unassigned     wm                0           0              0
  5  unassigned     wm                0           0              0
  6  unassigned     wm                0           0              0
  7  unassigned     wm                0           0              0
  8    reserved     wm         71670954      8.00MB       71687337
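
For example, a whole-disk pool could be created as follows (a minimal sketch; the pool name tank and the device c1t0d0 are placeholders for your own names). After the pool is created, running format against the disk shows an EFI layout similar to the table above, with a single large slice 0 and a small reserved slice 8:

# zpool create tank c1t0d0
# zpool status tank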

To use whole disks, the disks must be named using the standard Solaris convention, such as /dev/dsk/cXtXdXsX. Some third-party drivers use a different naming convention or place disks in a location other than the /dev/dsk directory. To use these disks, you must manually label the disk and provide a slice to ZFS.

ZFS applies an EFI label when you create a storage pool with whole disks. Disks can be labeled with a traditional Solaris VTOC label when you create a storage pool with a disk slice.

Slices should only be used under the following conditions:

The device name is nonstandard.

A single disk is shared between ZFS and another file system, such as UFS.

A disk is used as a swap or a dump device.

Disks can be specified by using either the full path, such as /dev/dsk/c1t0d0, or a shorthand name that consists of the device name within the /dev/dsk directory, such as c1t0d0. For example, the following are valid disk names:

c1t0d0

/dev/dsk/c1t0d0

c0t0d6s2

/dev/foo/disk
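
Any of these forms can be passed to zpool create. As a sketch with hypothetical devices, the first two commands below are equivalent ways of naming the same whole disk, and the third builds a pool on an individual slice:

# zpool create tank c1t0d0
# zpool create tank /dev/dsk/c1t0d0
# zpool create tank c0t0d6s2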

Using whole physical disks is the simplest way to create ZFS storage pools. ZFS configurations become progressively more complex, from management, reliability, and performance perspectives, when you build pools from disk slices, LUNs in hardware RAID arrays, or volumes presented by software-based volume managers. The following considerations might help you determine how to configure ZFS with other hardware or software storage solutions:

If you construct ZFS configurations on top of LUNs from hardware RAID arrays, you need to understand the relationship between ZFS redundancy features and the redundancy features offered by the array. Certain configurations might provide adequate redundancy and performance, but other configurations might not.

You can construct logical devices for ZFS using volumes presented by software-based volume managers, such as Solaris Volume Manager (SVM) or Veritas Volume Manager (VxVM). However, these configurations are not recommended. While ZFS functions properly on such devices, less-than-optimal performance might be the result.

For additional information about storage pool recommendations, see the ZFS best practices site:

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

Disks are identified both by their path and by their device ID, if available. This method allows devices to be reconfigured on a system without having to update any ZFS state. If a disk is switched between controller 1 and controller 2, ZFS uses the device ID to detect that the disk has moved and should now be accessed using controller 2. The device ID is unique to the drive's firmware. While unlikely, some firmware updates have been known to change device IDs. If this situation happens, ZFS can still access the device by path and update the stored device ID automatically. If you inadvertently change both the path and the ID of the device, then export and re-import the pool in order to use it.
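
For example, if a pool's devices can no longer be found because both the path and the device ID have changed, exporting and re-importing the pool forces ZFS to rescan and rediscover the devices (tank is a placeholder pool name):

# zpool export tank
# zpool import tank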

Using Files in a ZFS Storage Pool
ZFS also allows you to use UFS files as virtual devices in your storage pool. This feature is aimed primarily at testing and enabling simple experimentation, not for production use. The reason is that any use of files relies on the underlying file system for consistency. If you create a ZFS pool backed by files on a UFS file system, then you are implicitly relying on UFS to guarantee correctness and synchronous semantics.

However, files can be quite useful when you are first trying out ZFS or experimenting with more complicated layouts when not enough physical devices are present. All files must be specified as complete paths and must be at least 128 Mbytes in size. If a file is moved or renamed, the pool must be exported and re-imported in order to use it, as no device ID is associated with files by which they can be located.
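
As a sketch of such an experiment (the file path and pool name are hypothetical), a 128-Mbyte backing file can be created with mkfile and then given to ZFS as a virtual device by its full path:

# mkfile 128m /export/zfs/file1
# zpool create testpool /export/zfs/file1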

Identifying Virtual Devices in a Storage Pool
Each storage pool consists of one or more virtual devices. A virtual device is an internal representation of the storage pool that describes the layout of physical storage and its fault characteristics. As such, a virtual device represents the disk devices or files that are used to create the storage pool.

Two top-level virtual devices provide data redundancy: mirror and RAID-Z virtual devices. These virtual devices consist of disks, disk slices, or files.

Disks, disk slices, or files that are used in pools outside of mirrors and RAID-Z virtual devices function as top-level virtual devices themselves.

Storage pools typically contain multiple top-level virtual devices. ZFS dynamically stripes data among all of the top-level virtual devices in a pool.
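
For example, a pool created from two whole disks (placeholder names) contains two top-level virtual devices, and ZFS dynamically stripes writes across both of them; zpool status lists each disk at the top level of the configuration:

# zpool create tank c1t0d0 c2t0d0
# zpool status tank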

Replication Features of a ZFS Storage Pool
ZFS provides data redundancy, as well as self-healing properties, in a mirrored and a RAID-Z configuration.

Mirrored Storage Pool Configuration

RAID-Z Storage Pool Configuration

Self-Healing Data in a Redundant Configuration

Dynamic Striping in a Storage Pool

Mirrored Storage Pool Configuration
A mirrored storage pool configuration requires at least two disks, preferably on separate controllers. Many disks can be used in a mirrored configuration. In addition, you can create more than one mirror in each pool. Conceptually, a simple mirrored configuration would look similar to the following:



mirror c1t0d0 c2t0d0

Conceptually, a more complex mirrored configuration would look similar to the following:



mirror c1t0d0 c2t0d0 c3t0d0 mirror c4t0d0 c5t0d0 c6t0d0
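
As a minimal sketch with placeholder device names, the simple and the more complex conceptual configurations above correspond to the following commands:

# zpool create tank mirror c1t0d0 c2t0d0
# zpool create tank mirror c1t0d0 c2t0d0 c3t0d0 mirror c4t0d0 c5t0d0 c6t0d0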

For information about creating a mirrored storage pool, see Creating a Mirrored Storage Pool.

RAID-Z Storage Pool Configuration
In addition to a mirrored storage pool configuration, ZFS provides a RAID-Z configuration. RAID-Z is similar to RAID-5.

All traditional RAID-5-like algorithms (RAID-4, RAID-5, RAID-6, RDP, and EVEN-ODD, for example) suffer from a problem known as the "RAID-5 write hole."
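
Conceptually, a RAID-Z virtual device is specified with the raidz keyword in place of mirror. A minimal sketch, using three placeholder disks:

# zpool create tank raidz c1t0d0 c2t0d0 c3t0d0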

Related posts: none    Posted: 2007/10/23 13:06 from 218.38.35.251
