Volume Managers for Data Storage
A volume manager is a tool that lets you create units of disk storage known as storage groups. Storage groups contain logical volumes for use on single systems and in high availability clusters. In Serviceguard clusters, storage groups are activated by package control scripts (a minimal sketch follows this paragraph).

Serviceguard supports two types of shared data storage: mirrored individual disks (also known as JBODs, for “just a bunch of disks”) and external disk arrays that configure redundant storage in hardware, using RAID levels such as RAID 1 (mirroring) or RAID 5 (striping with distributed parity). The two methods differ in cost, in how redundancy is provided (software mirroring for individual disks, hardware RAID for arrays), and in how multiple paths to the data are configured.
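For example, a package control script typically activates an LVM volume group in exclusive mode when the package starts and deactivates it when the package halts. A minimal sketch (the volume group name /dev/vgpkgA is illustrative, the volume group must have been made cluster-aware with vgchange -c y, and in practice these commands are issued by the generated control script rather than by hand):

    # Activate the volume group exclusively on the node where the package runs
    vgchange -a e /dev/vgpkgA

    # ... start the application that uses logical volumes in /dev/vgpkgA ...

    # Deactivate the volume group when the package halts, freeing it for another node
    vgchange -a n /dev/vgpkgA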
HP-UX releases up to and including 11i v2 use a naming convention for device files that encodes their hardware path. For example, a device file named /dev/dsk/c3t15d0 indicates SCSI controller instance 3, SCSI target 15, and SCSI LUN 0. HP-UX 11i v3 introduces a new nomenclature for device files, known as agile addressing (sometimes also called persistent LUN binding). Under agile addressing, the hardware path is no longer encoded in a storage device’s name; instead, each device file name reflects a unique instance number, for example /dev/[r]disk/disk3, that does not need to change when the hardware path does. Agile addressing is the default on new 11i v3 installations, but the I/O subsystem still recognizes the pre-11i v3 nomenclature. This means that you are not required to convert to agile addressing when you upgrade to 11i v3, though you should seriously consider its advantages. For instructions on migrating a system to agile addressing, see the white paper Migrating from HP-UX 11i v2 to HP-UX 11i v3 at http://docs.hp.com.
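On an 11i v3 system, ioscan can show both nomenclatures and the mapping between them. A minimal sketch (device names are illustrative):

    # List disk devices using the agile (persistent) view
    ioscan -fnNC disk

    # Display the mapping between persistent and legacy device special files
    ioscan -m dsf

    # Show the legacy device special file(s) corresponding to one persistent name
    ioscan -m dsf /dev/rdisk/disk3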
For more information about agile addressing, see the documents posted at http://www.docs.hp.com, the HP-UX 11i v3 intro(7) manpage, and “About Multipathing” in this manual.

Figure 3-20 “Physical Disks Within Shared Storage Units” shows an illustration of mirrored storage using HA storage racks. In the example, node1 and node2 are cabled in a parallel configuration, each with redundant paths to two shared storage devices. Each of the two nodes also has two (non-shared) internal disks, which are used for the root file system, swap, and so on. Each shared storage unit has three disks. The device file names of the three disks on one of the two storage units are c0t0d0, c0t1d0, and c0t2d0; on the other, they are c1t0d0, c1t1d0, and c1t2d0.
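Before building volume groups on storage like this, it is common to confirm from each node that the shared disks are visible under the expected device files. A minimal sketch using the legacy names above:

    # List all disk devices and the device files assigned to them
    ioscan -fnC disk

    # Confirm that one of the shared disks is accessible from this node
    diskinfo /dev/rdsk/c0t0d0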
Figure 3-21 “Mirrored Physical Disks” shows the individual disks combined in a multiple-disk mirrored configuration. Figure 3-22 “Multiple Devices Configured in Volume Groups” shows the mirrors configured into LVM volume groups, shown in the figure as /dev/vgpkgA and /dev/vgpkgB. The volume groups are activated by Serviceguard packages for use by highly available applications; a command-level sketch of building such a mirrored volume group follows this paragraph. Figure 3-23 “Physical Disks Combined into LUNs” shows an illustration of storage configured on a disk array. Physical disks are configured by an array utility program into logical units, or LUNs, which are then seen by the operating system.
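The following sketch shows one way such a mirrored volume group might be built with LVM and Mirrordisk/UX, mirroring a logical volume across disks in the two storage units. Names, sizes, and the group-file minor number are illustrative:

    # Initialize one disk from each storage unit as an LVM physical volume
    pvcreate -f /dev/rdsk/c0t0d0
    pvcreate -f /dev/rdsk/c1t0d0

    # Create the volume group directory and group device file
    mkdir /dev/vgpkgA
    mknod /dev/vgpkgA/group c 64 0x010000

    # Create the volume group with one disk from each storage unit
    vgcreate /dev/vgpkgA /dev/dsk/c0t0d0 /dev/dsk/c1t0d0

    # Create a logical volume and add a mirror copy on the other disk (requires Mirrordisk/UX)
    lvcreate -L 1024 -n lvol1 /dev/vgpkgA
    lvextend -m 1 /dev/vgpkgA/lvol1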
Figure 3-24 “Multiple Paths to LUNs” shows LUNs configured with multiple paths (links) to provide redundant pathways to the data.
Finally, the multiple paths are configured into volume groups as shown in Figure 3-25 “Multiple Paths in Volume Groups”; a command-level sketch follows this paragraph. Serviceguard allows a choice of volume managers for data storage: HP-UX Logical Volume Manager (LVM), Veritas Volume Manager (VxVM), and Veritas Cluster Volume Manager (CVM). Separate sections in Chapters 5 and 6 explain how to configure cluster storage using all of these volume managers. The rest of the present section explains some of the differences among these available volume managers and offers suggestions about appropriate choices for your cluster environment.
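On systems using the legacy naming convention, LVM treats an additional path to a physical volume that already belongs to a volume group as an alternate link (PV link), which is one way the multiple paths shown in Figures 3-24 and 3-25 can be configured. A minimal sketch, assuming /dev/dsk/c1t0d0 and /dev/dsk/c2t0d0 are two paths to the same LUN (on 11i v3 with agile addressing, native multipathing makes this step unnecessary):

    # Initialize the LUN and create the volume group using its primary path
    pvcreate -f /dev/rdsk/c1t0d0
    mkdir /dev/vgpkgB
    mknod /dev/vgpkgB/group c 64 0x020000
    vgcreate /dev/vgpkgB /dev/dsk/c1t0d0

    # Add the second path; LVM recognizes it as another route to the same
    # physical volume and records it as an alternate link
    vgextend /dev/vgpkgB /dev/dsk/c2t0d0

    # vgdisplay -v lists the alternate link under the physical volume
    vgdisplay -v /dev/vgpkgB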
Logical Volume Manager (LVM) is the default storage management product on HP-UX. Included with the operating system, LVM is available on all cluster nodes. It supports the use of Mirrordisk/UX, an add-on product that allows disk mirroring with up to two mirrors (for a total of three copies of the data). Currently, the HP-UX root disk can be configured as an LVM volume group. (Note that, in this case, the HP-UX root disk is not the same as the Veritas root disk group, rootdg, which must be configured in addition to the HP-UX root disk on any node that uses Veritas Volume Manager 3.5 products. The rootdg is no longer required with Veritas Volume Manager 4.1 and later products.) The Serviceguard cluster lock disk is also configured as a disk in an LVM volume group. LVM continues to be supported on HP-UX single systems and on Serviceguard clusters.

The Base Veritas Volume Manager for HP-UX (Base-VxVM) is provided at no additional cost with HP-UX 11i. It includes basic volume manager features, including a Java-based GUI known as VEA. It is possible to configure cluster storage for Serviceguard with only Base-VxVM, but only a limited set of features is available.

The add-on product, Veritas Volume Manager for HP-UX, provides a full set of enhanced volume manager capabilities in addition to basic volume management, including mirroring, dynamic multipathing for active/active storage devices, and hot relocation. VxVM can be used in clusters that:
A VxVM disk group can be created on any node, whether the cluster is up or not. You must validate the disk group by trying to import it on each node. With VxVM, each disk group is imported by the package control script that uses the disk group. This means that cluster startup time is not affected, but individual package startup time might be increased because VxVM imports the disk group at the time the package starts up.
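For example, you might create and populate a disk group on one node, deport it, and then confirm that each of the other nodes can import it, deporting it again afterward so the package control script can import it when the package starts. A minimal sketch, with illustrative disk and disk group names (the location of vxdisksetup can vary by VxVM version):

    # On the node where the disk group is created:
    /etc/vx/bin/vxdisksetup -i c1t2d0      # initialize the disk for VxVM use
    vxdg init dg01 dg01_01=c1t2d0          # create the disk group
    vxassist -g dg01 make vol01 1024m      # create a volume in the group
    vxdg deport dg01                       # deport it so other nodes can test the import

    # On each of the other nodes, validate the import and release the group again:
    vxdg import dg01
    vxdg deport dg01

At package startup, the control script then imports the disk group (typically as a temporary, non-persistent import) and deports it again when the package halts.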
You may choose to configure cluster storage with the Veritas Cluster Volume Manager (CVM) instead of the Volume Manager (VxVM). The Base-VxVM provides some basic cluster features when Serviceguard is installed, but there is no support for software mirroring, dynamic multipathing (for active/active storage devices), or numerous other features that require the additional licenses. VxVM supports up to 16 nodes, and CVM supports up to 8. CFS 5.0 also supports up to 8 nodes; earlier versions of CFS support up to 4.

The VxVM Full Product and CVM are enhanced versions of the VxVM volume manager specifically designed for cluster use. When installed with the Veritas Volume Manager, the CVM add-on product provides most of the enhanced VxVM features in a clustered environment. CVM is truly cluster-aware, obtaining information about cluster membership from Serviceguard directly. Cluster information is provided via a special system multi-node package, which runs on all nodes in the cluster. The cluster must be up and must be running this package before you can configure VxVM disk groups for use with CVM. Disk groups must be created from the CVM Master node. The Veritas CVM package for version 3.5 is named VxVM-CVM-pkg; the package for CVM version 4.1 and later is named SG-CFS-pkg.

CVM allows you to activate storage on one node at a time, or you can perform write activation on one node and read activation on another node at the same time (for example, allowing backups). CVM provides full mirroring and dynamic multipathing (DMP) for clusters. CVM supports concurrent storage read/write access between multiple nodes by applications which can manage read/write access contention, such as Oracle Real Application Cluster (RAC). CVM 4.1 and later can be used with Veritas Cluster File System (CFS) in Serviceguard. Several of the HP Serviceguard Storage Management Suite bundles include features to enable both CVM and CFS. CVM can be used in clusters that:
Heartbeat is configured differently depending on whether you are using CVM 3.5 or 4.1 and later; see “Redundant Heartbeat Subnet Required”. HP recommends that you configure all subnets that connect cluster nodes as heartbeat networks; this increases protection against multiple faults at no additional cost. You can create redundancy in the following ways: 1) dual (multiple) heartbeat networks; 2) a single heartbeat network with standby LAN card(s).

Shared storage devices must be connected to all nodes in the cluster, whether or not a given node accesses data on the device. All shared disk groups (DGs) are imported when the control script of the system multi-node package starts up CVM. Depending on the number of DGs, the number of nodes, and their configuration (number of disks, volumes, and so on), this can take some time (the current timeout value for this package is 3 minutes, but for larger configurations it may have to be increased). Any failover package that uses a CVM DG will not start until the system multi-node package is up. Note that this delay does not affect package failover time; it is a one-time overhead cost at cluster startup.

CVM disk groups are created on one cluster node, known as the CVM master node; a brief command sketch follows this paragraph. CVM verifies that each node can see each disk and will not allow invalid DGs to be created.
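For example, once the system multi-node package is running, you can check which node is the CVM master and create a shared disk group from it. A minimal sketch with illustrative disk and group names:

    # Determine whether this node is the CVM master or a slave
    vxdctl -c mode

    # On the master node, initialize a disk and create a shared disk group
    /etc/vx/bin/vxdisksetup -i c2t3d0
    vxdg -s init cvm_dg01 c2t3d0

    # Create a volume in the shared disk group
    vxassist -g cvm_dg01 make vol01 1024m

    # Set the activation mode (shared write in this example) on a node that needs access
    vxdg -g cvm_dg01 set activation=sw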
The following table summarizes the advantages and disadvantages of the volume managers.

Table 3-4 Pros and Cons of Volume Managers with Serviceguard