The following sections describe how to configure
a VM Host as a Serviceguard node. In this configuration, if any of
the resources used by a guest fail on the primary VM Host system,
the guest fails over to an adoptive VM Host system, as illustrated
in Figure 11-4.

To configure VMs as Serviceguard packages:
1. Create the Serviceguard package, as described in "Creating Guests as Packages".
2. Modify the Serviceguard package configuration files to match your guest environment, as described in "Modifying the Package Configuration Files".
3. Start the Serviceguard package, as described in "Starting the Distributed Guest".
NOTE: When using AVIO networking devices for guests
that are configured as Serviceguard packages, be sure that all Serviceguard
standby LANs are configured using PPA devices supported by AVIO.
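For example, you can list the LAN interfaces on the VM Host, including their PPA numbers, with the lanscan command (shown here only as a starting point; consult the AVIO documentation for the list of supported devices):

# lanscan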
Creating Guests as Packages
Serviceguard A.11.18 provides a new process for
configuring packages called modular packages. This new process is
simpler and more efficient, because it allows you to build packages
from smaller modules and eliminates the separate package control script
and the need to distribute it. Packages created using Serviceguard
A.11.17 or earlier are referred to as legacy packages. If you need
to reconfigure a legacy package or create a new one, see the Managing Serviceguard manual. The hpvmsg_package script
can repackage your virtual machine as either a legacy or modular package.

On the VM Host, use the following procedure to create a package configuration file and control script for the guest:
1. Install Integrity VM and create the guest with all necessary virtual storage devices and vswitches. Repeat this procedure on each node in the multiserver environment.
2. Install, configure, and run Serviceguard on every node in the multiserver environment. Because Serviceguard can be bundled with the OE, bring up the virtual machines and manually remove the Serviceguard product from the guests.
3. Start the guest on the primary node using the hpvmstart command.
4. Use the hpvmstatus command to verify the guest name and to make sure that it is running. (See the example after the following note.)
5. Create a Serviceguard package by running the hpvmsg_package script from the HP Serviceguard for Integrity VM Toolkit, which is installed in the /opt/cmcluster/toolkit/hpvm/ directory when you install Integrity VM.

NOTE: The default KILLTIME of 10 seconds may be too aggressive in
some environments and can result in file system corruption on Linux
guests. HP recommends that you tune this value so that the file systems
on the guests are successfully unmounted before the guest is powered
off.
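For example, to start this chapter's example guest, compass1, and verify that it is running (a minimal sketch; substitute your own guest name):

# hpvmstart -P compass1
# hpvmstatus -P compass1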
Use the following command to create a package:

# /opt/cmcluster/toolkit/hpvm/hpvmsg_package [-VQLs] [-m {0|1}] [-P] vm_name

Where:
-P vm_name — Indicates the virtual machine name.
-m — Specifies whether maintenance mode is enabled or disabled: 1 — enabled, 0 — disabled.
-L — Creates a legacy package. The default is a modular package.
-Q — Quietly performs the command, taking default actions without additional prompts.
-s — Sanity checks the specified command but does not perform the requested action.
The following command creates a modular package for the virtual
machine named compass1:
# /opt/cmcluster/toolkit/hpvm/hpvmsg_package compass1
This is the HP Virtual Machine Serviceguard Toolkit Package Template Creation
script.
This script will assist the user develop and distribute a set of Serviceguard
package configuration template files and associated start, stop and monitor scripts.
The templates generated by these scripts will handle many guest configurations,
but it is only a template and may not be appropriate for your particular
configuration needs. You are encouraged to review and modify these template
files as needed for your particular environment.
Do you wish to continue? (y/n):y
[Virtual Machine Details]
Virtual Machine Name VM # OS Type State #VCPUs #Devs #Nets Memory Runsysid
==================== ===== ======= ========= ====== ===== ===== ======= ========
compass1 1 HPUX Off 1 5 1 512 MB 0
[Storage Interface Details]
Guest Physical
Device Adaptor Bus Dev Ftn Tgt Lun Storage Device
====== ========== === === === === === ========= =========================
disk scsi 0 0 0 0 0 disk /dev/rdisk/disk0
disk scsi 0 0 0 1 0 lv /dev/vgsglvm/rlvol1
disk scsi 0 0 0 2 0 file /hpvm/g1lvm/hpvmnet2
disk scsi 0 0 0 3 0 lv /dev/vx/rdisk/sgvxvm/sgvxvms
disk scsi 0 0 0 4 0 file /hpvm/g1vxvm/hpvmnet2
disk scsi 0 0 0 5 0 disk /dev/rdisk/disk5
[Network Interface Details]
Interface Adaptor Name/Num Bus Dev Ftn Mac Address
========= ========== ========== === === === =================
vswitch lan vswitch2 0 1 0 ea-5c-08-d3-70-f2
vswitch lan vswitch5 0 2 0 f2-c7-0d-09-ac-8f
vswitch lan vswitch6 0 4 0 92-35-ed-1f-6c-67
Would you like to create a failover package for this Virtual Machine summarized above? (y/n):y
Would you like to distribute the package to each cluster member? (y/n):y
The failover package template files for the Virtual Machine were successfully created.
The script asks you to confirm the following actions:
Creating a failover package
Distributing the package to all the cluster nodes

Respond to both prompts by entering y. The hpvmsg_package script creates the virtual machine package
template files in the /etc/cmcluster/guest-name/ directory. The set of template files it creates
depends on whether the package is a modular package or a legacy package.

The hpvmsg_package script is a utility that you
can use to configure a guest as a Serviceguard package. The utility
uses the guest name that you supply as an argument to create and populate
the /etc/cmcluster/guest-name/ directory with a set of template files that contain
basic Serviceguard parameter settings. HP recommends that you review
and modify these template files as needed for your specific multiserver
environment. For more information, see "Modifying the Package Configuration Files" and the Managing Serviceguard manual.

Stop the guest by entering the appropriate operating
system command, or use the hpvmstop -F command
on the VM Host system. (Because the guest has been configured as a
Serviceguard package, the -F option is necessary.)
For example, enter the following command on the guest:

# /usr/sbin/shutdown -g now

Or enter the following command on the VM Host system:

# hpvmstop -P guest-name -F -g
Unmount all file backing stores, and deactivate any LVM
logical volumes or deport any VxVM disk groups used as backing stores for
the guests, as shown in the example below.
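For example, for the backing stores listed in the storage interface details shown earlier (assuming /hpvm/g1lvm and /hpvm/g1vxvm are the mount points holding the file backing stores; adjust all names for your configuration):

# umount /hpvm/g1lvm
# umount /hpvm/g1vxvm
# vgchange -a n /dev/vgsglvm
# vxdg deport sgvxvm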
Verify that the package is set up correctly by entering the following command:

# cmcheckconf -v -C /etc/cmcluster/cluster-name.config \
-P /etc/cmcluster/guest-name/guest-name.config
where:
cluster-name is the name of the Serviceguard cluster.
guest-name is the name of the guest.
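For example, using the cluster and guest names from this chapter:

# cmcheckconf -v -C /etc/cmcluster/cluster1.config \
-P /etc/cmcluster/compass1/compass1.config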
Update and redistribute the binary configuration
files to the /etc/cmcluster/guest-name/ directory on all cluster nodes:

# cmapplyconf -v -C /etc/cmcluster/cluster-name.config \
-P /etc/cmcluster/guest-name/guest-name.config
At the prompt that asks whether to modify the
cluster configuration, enter y. For example:
# cmapplyconf -v -C /etc/cmcluster/cluster1.config \
-P /etc/cmcluster/compass1/compass1.config
Checking cluster file: /etc/cmcluster/cluster.config
Checking nodes ... Done
Checking existing configuration ... Done
Gathering configuration information ... Done
Gathering configuration information ... Done
Gathering configuration information ..
Gathering storage information ..
Found 10 devices on node host1
Found 10 devices on node host2
Analysis of 20 devices should take approximately 3 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 7 volume groups on node charm
Found 7 volume groups on node clowder
Analysis of 14 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
.....
Gathering Network Configuration ......... Done
Cluster cluster1 is an existing cluster
Parsing package file: /etc/cmcluster/compass1/compass1.config.
Package hpvmnet2 already exists. It will be modified.
Checking for inconsistencies .. Done
Cluster cluster1 is an existing cluster
Maximum configured packages parameter is 10.
Configuring 3 package(s).
7 package(s) can be added to this cluster.
200 access policies can be added to this cluster.
Modifying configuration on node host1
Modifying configuration on node host2
Modify the cluster configuration ([y]/n)? y
Marking/unmarking volume groups for use in the cluster
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Modifying the cluster configuration for cluster cluster1.
Modifying node host1 in cluster cluster1.
Modifying node host2 in cluster cluster1.
Modifying the package configuration for package compass1.
Completed the cluster creation.
If the package configuration file contains the
appropriate settings, start the Serviceguard service as described
in "Starting the Distributed Guest".

Modifying the Package Configuration Files
The Serviceguard for Integrity VM toolkit creates
templates that supply basic arguments to Serviceguard parameters.
Review and modify the Serviceguard parameters based on the information
for your Serviceguard cluster and the information supplied in the Managing Serviceguard manual. Make the appropriate changes
to the guest-name.config and guest-name files.

Edit the package configuration file to add any
LVM volume groups that are used by the distributed guest. Include
a separate VOLUME_GROUP parameter for each cluster-aware volume group.
These volume groups are initialized with the cluster ID when the cmapplyconf command is used, as in the example below.
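For example, the configuration file might contain entries such as the following; vgsglvm appears in the storage details earlier in this chapter, and vgguest2 is a hypothetical additional volume group:

VOLUME_GROUP /dev/vgsglvm
VOLUME_GROUP /dev/vgguest2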
Starting the Distributed Guest

To start the distributed guest, enter the cmrunpkg command. For example, to start the guest named compass1, enter the following command:

# cmrunpkg -v compass1
Running package compass1 on node host1.
cmrunpkg : Successfully started package compass1.
cmrunpkg : Completed successfully on all packages specified.
Verify that the guest is on and running. Use both
the Integrity VM hpvmstatus command and the Serviceguard cmviewcl command to verify the status. For example:

# hpvmstatus -P compass1
[Virtual Machines]
Virtual Machine Name VM # OS Type State #VCPUs #Devs #Nets Memory Runsysid
==================== ===== ======= ========= ====== ===== ===== ======= ========
compass1 1 HPUX On 1 5 1 512 MB 0
# cmviewcl -v -p compass1
CLUSTER STATUS
cluster1 up
NODE STATUS STATE
host1 up running
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 0/2/1/0/4/1 lan7
PRIMARY up 0/2/1/0/6/1 lan9
PRIMARY up 0/5/1/0/7/0 lan6
STANDBY up 0/1/2/0 lan1
STANDBY up 0/2/1/0/4/0 lan2
STANDBY up 0/2/1/0/6/0 lan8
STANDBY up LinkAgg0 lan900
STANDBY up 0/0/3/0 lan0
PACKAGE STATUS STATE AUTO_RUN NODE
compass1 up running disabled host1
Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover configured_node
Failback manual
Script_Parameters:
ITEM STATUS MAX_RESTARTS RESTARTS NAME
Service up 0 0 host1
Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled host1 (current)
Alternate up enabled host2
NODE STATUS STATE
host2 up running
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 0/2/1/0/4/1 lan7
STANDBY up 0/1/2/0 lan1
STANDBY up 0/2/1/0/4/0 lan2
STANDBY up 0/2/1/0/6/0 lan8
STANDBY up LinkAgg0 lan900
PRIMARY up 0/5/1/0/7/0 lan6
PRIMARY up 0/2/1/0/6/1 lan9
STANDBY up 0/0/3/0 lan0
To enable autorun and failover, enter the cmmodpkg command, as shown in the following example.
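For example, to enable switching for the package named compass1 (the same command used in the failover procedure later in this section):

# cmmodpkg -e compass1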
Starting the vswitch Monitor

The vswitch monitor monitors the activities of
the Serviceguard network monitor and, when appropriate, moves the
vswitch configuration between primary and standby network interfaces.
The vswitch monitor does not require user configuration and is installed
as part of the Integrity VM product. If Serviceguard is running and
distributed guests are configured, the vswitch monitor starts automatically
when the VM Host system boots. To start the vswitch monitor manually,
enter the following command:

# /sbin/init.d/vswitchmon start
To verify that the vswitch monitor is running, enter the following command:

# ps -ef | grep vswitchmon
Verifying That Distributed Guests Can Fail Over
To verify that the guests configured as Serviceguard
packages and the multiserver environment are working properly, use
the following commands to perform a manual failover:

1. On the original node (host1), verify that the package named compass1 is running:
host1# cmviewcl -v -p compass1
2. Halt the compass1 package on host1:
host1# cmhaltpkg compass1
Halting package compass1.
3. Start the package on the other VM Host system (host2):
host2# cmrunpkg -n host2 compass1
4. Enable the package:
host2# cmmodpkg -e compass1
5. On the adoptive node, verify that the compass1 package has started:
host2# cmviewcl -v -p compass1
6. On the adoptive node, verify that the guest named compass1 is on:
host2# hpvmstatus -P compass1
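After verifying the failover, you can move the package back to the original node using the same commands; this sketch assumes the node and package names used above:

host2# cmhaltpkg compass1
host1# cmrunpkg -n host1 compass1
host1# cmmodpkg -e compass1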
Managing Distributed Guests
To start, stop, or monitor distributed guests,
use the Serviceguard commands described in this section. Do not use
the Integrity VM commands (hpvmstart, hpvmstop, and hpvmmigrate) to manage
distributed guests.

Starting Distributed Guests

To start a distributed guest, enter the following command:

# cmrunpkg -v guest-name

Stopping Distributed Guests

To stop a distributed guest, enter the following command:

# cmhaltpkg guest-name

Monitoring Distributed Guests

To monitor the distributed guest, enter the following command:

# cmviewcl -v -p guest-name
Modifying Distributed Guests

You can modify the resources for the distributed
guest using the hpvmmodify command. However, if
you modify the guest on one VM Host server, you must make the same
changes on the other nodes in the multiserver environment. After you modify vswitches, logical volumes, or
file backing stores used by distributed guests, make sure that Serviceguard
can continue to monitor the guests. To update the Serviceguard information,
run the hpvmsg_package script and restart the guest
packages, as shown in the example below.
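For example, after making the same modifications on every node, the update might look like the following sketch, using this chapter's example names:

# cmhaltpkg compass1
# /opt/cmcluster/toolkit/hpvm/hpvmsg_package compass1
# cmapplyconf -v -C /etc/cmcluster/cluster1.config \
-P /etc/cmcluster/compass1/compass1.config
# cmrunpkg -v compass1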
Monitoring Network Connections

The vswitch monitor runs the vswitchmon script on the VM Hosts in the multiserver environment and monitors
the Serviceguard Network Manager by watching the syslog.log file. When it detects that Serviceguard is failing over a primary
network to a standby network, the vswitch monitor halts, deletes,
re-creates, and boots the vswitch associated with the primary network
on the standby network. When the primary network is restored,
Serviceguard and the vswitch monitor move the network and associated
vswitch back to the primary network.