Re: Hypothetical Question
I am in the middle of testing a two-node cascading HACMP 4.3 cluster under
AIX 4.3.2 with Netview 5.1 and Tivoli framework 3.6 as highly available
resources. The other cascading resource group is the Netview database
option, using Oracle as the DBMS.
Once the installation of Netview 5.1 and the Framework 3.6 has been done on
both nodes (this is decidedly nontrivial), the failover works fine except
for the following bug I just discovered.
I have HACMP set up for hardware address takeover as well as IP address
takeover. The hardware address takeover option uses a fake service adapter
MAC address. Unfortunately, the Tivoli framework uses the boot adapter's
MAC address as the host node's identifier. This means that when Netview
fails over to the alternate node, there is a flood of authentication
failure traps in the control desk until the router refreshes its ARP cache.
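To see why the traps flood in, here is a toy Python sketch of the failure mode (all names, MAC values, and functions here are hypothetical illustrations, not Tivoli code): if a manager keys node identity on the boot adapter's MAC address, then after HACMP's hardware address takeover the service IP answers from the fake service MAC, which the manager has never registered, so every request is rejected until the mapping is refreshed.

```python
# Toy illustration (hypothetical names, not Tivoli code): a manager that
# records each node's boot adapter MAC at install time and uses it as the
# node identifier will reject traffic arriving from the takeover MAC.

# Manager's registry: boot adapter MAC -> node name
registered_nodes = {
    "02:60:8c:aa:01:01": "nodeA",   # nodeA boot adapter MAC
    "02:60:8c:bb:02:02": "nodeB",   # nodeB boot adapter MAC
}

# Fake MAC configured in HACMP for hardware address takeover
SERVICE_MAC = "42:00:00:00:99:99"

def authenticate(source_mac):
    """Return the node name if the MAC is registered, else raise."""
    if source_mac not in registered_nodes:
        raise PermissionError(
            "authentication failure trap: unknown MAC %s" % source_mac)
    return registered_nodes[source_mac]

# Before takeover: nodeA talks from its boot adapter MAC -> accepted.
assert authenticate("02:60:8c:aa:01:01") == "nodeA"

# After failover: traffic for the service IP arrives from the takeover
# MAC, which was never registered -> one failure trap per request.
try:
    authenticate(SERVICE_MAC)
except PermissionError as e:
    print(e)
```

The flood stops once the identifier mapping is refreshed to the adapter actually carrying the service address, which in the cluster corresponds to the router's ARP cache catching up.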
Aside from this bug, which Tivoli support is addressing, most of the
questions and activities that you listed in your original question are
automatically performed by HACMP.
One advantage of using HACMP rather than running two copies of Netview, as
Mr. Shanks suggested, is that fewer machine resources are dedicated to
Netview under HACMP: if you run two copies of Netview on two machines,
those hosts do nothing else, whereas under the HACMP cluster one host runs
Netview while the other runs something else and serves as the failover
node.
Incidentally, when using HACMP, the framework and Netview filesystems are
placed on twin-tailed, shared external disk drives (in my case, 7135 model
110 RAID SCSI arrays).
If you have any followup questions, feel free to ask.
Archive operated by Skills 1st Ltd
See also: The NetView Web