That's a well-thought-out and well-executed HACMP setup. I don't think I
have ever seen a better one. And I think it will work just fine once you
get a fix for those authentication errors. Good job.
I usually don't recommend HACMP for NetView and prefer the two-machine
approach instead. The reason is that hardware is a one-time expense and
people are not. They require training and an understanding of what they
are doing. I find that many customers leave NetView in the hands of
relatively unskilled folks, and expecting them to do the right things, like
running reset_ci at takeover time, is not always a good bet. But anyone can
understand a backup procedure called "walk over to the other box".
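To illustrate, that "walk over to the other box" procedure could even be left on the backup box as a one-command checklist. This is only a sketch under assumptions about the local setup: it presumes a default /usr/OV install, and the ovstop/ovstart daemon commands alongside reset_ci; the run helper is hypothetical.

```shell
#!/bin/sh
# Hypothetical takeover checklist for the backup NetView box.
# Assumes the usual NetView for UNIX install root; adjust per site.

NV_BIN=/usr/OV/bin

run() {
    # Run a step if the binary is present here; otherwise just
    # report what the step would be, so the script is safe to read.
    if [ -x "$NV_BIN/$1" ]; then
        "$NV_BIN/$1"
    else
        echo "would run: $NV_BIN/$1"
    fi
}

run ovstop      # stop any NetView daemons left running on this box
run reset_ci    # clear cached identifiers before taking over
run ovstart     # bring the NetView daemons up on the backup
```

Even a relatively unskilled operator can be told "log in and run this one script", which is the point of the two-machine approach.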
Just my two cents.
Tivoli (NetView for UNIX) L3 Support
"Ken Garst." <KGarst@GIANTOFMARYLAND.COM> on 02/04/99 02:19:57 PM
Please respond to Discussion of IBM NetView and POLYCENTER Manager on
cc: (bcc: James Shanks)
Subject: Re: Hypothetical Question
I am in the middle of testing a two-node cascading HACMP 4.3 cluster under
AIX 4.3.2, with NetView 5.1 and Tivoli Framework 3.6 as highly available
resources. The other cascading resource group is the NetView database
option, using Oracle as the DBMS.
Once NetView 5.1 and Framework 3.6 have been installed on both nodes (this
is decidedly nontrivial), failover works fine except for the following bug
I just discovered.
I have HACMP set up for hardware address takeover as well as IP address
takeover. The hardware address takeover option uses a fake service-adapter
MAC address. Unfortunately, the Tivoli Framework uses the boot adapter's
MAC address as the host node's identifier. This means that when NetView
fails over to the alternate node, there is a flood of authentication-failure
traps in the control desk until the router refreshes its ARP cache.
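Until a fix arrives, the window can sometimes be shortened by announcing the new MAC address from the takeover node right after the service address moves. The sketch below is an assumption about the local network, not part of the reported setup: the interface name, the 192.0.2.x address, and the availability of an arping utility are all placeholders.

```shell
#!/bin/sh
# Sketch: after takeover, broadcast a gratuitous ARP so upstream
# routers relearn the service address's new MAC sooner.
# All names here (en0, 192.0.2.10) are site-specific placeholders.

SERVICE_IP=${SERVICE_IP:-192.0.2.10}  # the taken-over service address
IFACE=${IFACE:-en0}                   # adapter now holding that address

refresh_arp() {
    if command -v arping >/dev/null 2>&1; then
        # Gratuitous ARP: announce "SERVICE_IP is at this MAC".
        arping -c 3 -U -I "$IFACE" "$SERVICE_IP" 2>/dev/null || true
    else
        echo "arping not available; waiting for ARP cache timeout"
    fi
}

refresh_arp
```

This only shortens the flood of traps; the real fix still has to come from Tivoli, since the Framework should not be keying on the boot adapter's MAC in the first place.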
Aside from this bug, which Tivoli support is addressing, most of the
questions and activities that you listed in your original question are
performed automatically by HACMP.
One advantage of using HACMP rather than running two copies of NetView, as
Mr. Shanks suggested, is that fewer machine resources are dedicated to
NetView. If you run two copies of NetView on two machines, those hosts do
nothing else; under an HACMP cluster, one host runs NetView while the other
runs something else and serves as the failover node.
Incidentally, when using HACMP, the Framework and NetView filesystems are
placed on twin-tailed, shared external disk drives (in my case I am using
7135 model 110 SCSI RAID arrays). The NetView shared filesystems I am using
If you have any follow-up questions, feel free to ask.