
Re: Implementing Netview on HA

To: nv-l@lists.tivoli.com
Subject: Re: Implementing Netview on HA
From: Ken Garst <Ken.Garst@KP.ORG>
Date: Tue, 22 Feb 2000 11:41:13 -0800

"Ferreira, Linda" wrote:
>
> Has anyone implemented NetView successfully on an HA (high availability)
> platform?  Particularly NetView 5.1.1.
> Any thoughts, suggestions, or comments would be greatly appreciated.
>
"chris.cowan" wrote:

Yes, I've done it because a customer insisted.

Having said that, I would never architect an HA configuration for
NetView.  If you're going to buy multiple machines, you're much better
off using multiple managers in a Backup Manager configuration.  (It's
described in Chapter 6 of the NetView Admin guide.)

Backup Managers give you load balancing, along with more flexibility in
terms of topology, and better fault resilience.



Ken Garst writes:

I have two points to make about Chris' idea:

(1)  James Shanks suggested a year ago, and I agree, that the simplest way
to get high availability in NetView is to just run two instances on two
different hosts.  (The trick here is that the sysadmin effort is exactly
the same, so by copying databases between the hosts to sync up the data,
there is little additional effort to maintain the two instances.)
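
A minimal sketch of that database copy, assuming ovstop/ovstart to quiesce
the daemons and a hypothetical peer host named nv2 (rcp/rsh shown;
substitute your copy tool of choice):

    #!/bin/ksh
    # quiesce the local NetView so the databases are consistent on disk
    /usr/OV/bin/ovstop
    # bundle the flat-file databases and push them to the peer instance
    cd /usr/OV
    tar -cf /tmp/nvdb.tar databases
    rcp /tmp/nvdb.tar nv2:/tmp/nvdb.tar
    # unpack on the peer while its daemons are down, then restart both sides
    rsh nv2 '/usr/OV/bin/ovstop; cd /usr/OV && tar -xf /tmp/nvdb.tar; /usr/OV/bin/ovstart'
    /usr/OV/bin/ovstart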

(2)  On the other hand, if NetView is just one of your network management
products and you are using others (our group is using both Cisco Essentials
& CWSI and Optivity), then HACMP is the ideal solution for high
availability for ALL the products.


Here are some former posts about HACMP and NetView that might help you.

Date: Thursday, October 28, 1999 10:03 AM

>I have installed a 2-node cascading cluster on 39H hosts using HACMP 4.3,
>AIX 4.3.2, NetView 5.1.2, Tivoli Framework 3.6.1, and Oracle, with the RIM
>server, TMR server, and NetView server on one node and the Oracle DBMS on
>the other node.
>
>First, the cluster does NOT have to be concurrent access.  It most
>definitely can be either rotating or cascading.
>
>Second, what event did you define the pre-event scripts to be triggered
>from?  I am not sure what your pre-event scripts are to do, but HACMP
>takes care of the varyon of all shared external filesystems.  To make
>NetView highly available, the /usr/OV filesystem should be located on the
>shared external disks.  I did the same for the Tivoli Framework.
>(Incidentally, in my case the alternate node was NOT a Tivoli managed node
>or an endpoint client.)  I also configured my cluster to use IPADDR
>takeover AND hardware address takeover.
>
>Third, although I did not do it, an application CAN be registered with
>HACMP so that the cluster manager monitors it, and when the application
>fails, HACMP can initiate a node failover if you promote the application
>failure to a node failure.
>
>Fourth, although you didn't mention it, I activated Tivoli's kerberos for
>my cluster but not the HACMP enhanced security.  The Tivoli kerberos is a
>manual setup but works slick.  Unfortunately for us, the Tivoli kerberos
>daemon is twice the size in bytes of the SP's PSSP kerberos that HACMP
>enhanced security uses.  One distinct difference between the two kerberi
>is that the Tivoli kerberos does not support multiple realms while the
>PSSP kerberos does.
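
As a sketch of that third point: an HACMP application server is just a
pair of start and stop scripts that the cluster manager runs during
takeover and release.  A minimal pair for NetView, assuming hypothetical
paths under /usr/local/cluster and that HACMP has already varied on the
shared volume group and mounted /usr/OV by the time the start script runs:

    #!/bin/ksh
    # /usr/local/cluster/netview_start -- HACMP application server start
    /usr/OV/bin/ovstart              # start the registered NetView daemons
    /usr/OV/bin/ovstatus -c          # record daemon status for the logs

    #!/bin/ksh
    # /usr/local/cluster/netview_stop -- run before HACMP unmounts /usr/OV
    # and releases the service address
    /usr/OV/bin/ovstop               # stop the NetView daemons cleanly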

Date: Friday, October 29, 1999 8:18 AM
Subject: Re: netview & HACMP


>For a two-node cascading HACMP cluster running highly available NetView,
>TMR server, and RIM host on one node and a highly available Oracle DBMS on
>the other node, here are the filesystems I used on shared external disks
>that were twin-tailed to each node:
>
>node 1 = NetView host, TMR server, RIM host
>nvservervg
>        lvnetview
>                /usr/OV
>        lvoptivity
>                /opt
>        lvtivoli
>                /tivoli
>        lvdynatext
>                /usr/ebt
>        lvdatabases
>                /usr/OV/databases
>        lvoraclient
>                /oracle/client (for Oracle's SQL*Net client when the
>                Oracle server node is different from the NetView server
>                node)
>
>node 2 = Oracle DBMS server
>nvdbasevg
>        lvorahome
>                /oracle/product/7.3.4
>        lvoradmin
>                /oracle/admin
>        lvoradata1
>                /oracle/oradata1
>        lvoradatau01
>                /u01/oradata
>        lvoradatau02
>                /u02/oradata
>        lvoradatau03
>                /u03/oradata
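
For reference, a sketch of how one of the shared filesystems above might
be created on AIX before being handed to HACMP (volume group and logical
volume names from the lists; hdisk2 is a stand-in for your twin-tailed
disk, and the flags are from stock AIX 4.3, so verify against your level):

    # create the shared VG but do NOT auto-varyon at boot;
    # HACMP performs the varyon during takeover
    mkvg -y nvservervg -n hdisk2
    # carve out the logical volume and build the filesystem on it,
    # with auto-mount disabled for the same reason
    mklv -y lvnetview -t jfs nvservervg 64      # 64 LPs; size to suit
    crfs -v jfs -d lvnetview -m /usr/OV -A no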
>
>Recall that for the Oracle server and client to work correctly in a 2-node
>cascading cluster, the SQL*Net client must be running on the NetView
>server node, but after failover to the Oracle server node SQL*Net is not
>used, so the database talks directly to the application.  The Oracle
>TWO_TASK environment variable must be set correctly for this to occur.
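
A sketch of that switch in a ksh startup script, assuming a hypothetical
SQL*Net alias of nvprod and using the varied-on volume groups to decide
whether Oracle is local:

    # if the Oracle volume group is varied on here, the DBMS is local
    # (post-failover) and a direct bequeath connection is wanted
    if lsvg -o | grep -q nvdbasevg ; then
        unset TWO_TASK
    else
        export TWO_TASK=nvprod      # SQL*Net alias from tnsnames.ora
    fi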
>
>NetView resolves the hostname to the IP adapter label, so after failover,
>reset the hostname in the NetView startup script.  This hostname should be
>the same as the IP adapter label for the HACMP service IP address for this
>highly available application.  Do NOT use reset_ci.  In addition, for
>Tivoli to work correctly, use the same hostname but set it in the
>wlocaltemp file on both nodes.  Oracle doesn't care where it is running
>vis-a-vis hostnames, unames, or IP adapter labels so long as the Oracle
>environment variables are set correctly for the TNS listener.
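
A minimal sketch of that reset at the top of the NetView startup script,
assuming a hypothetical service IP label of nvserve:

    # make the OS hostname match the HACMP service adapter label so the
    # NetView daemons resolve to the takeover address, not the boot address
    /usr/bin/hostname nvserve
    /usr/OV/bin/ovstart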
>
>Regards,
>ken
>ken.garst@kp.org
>
>(P.S.:  If you are running NetView clients via NFS mounts for maps to the
>HACMP cluster node for NetView, certain special things have to be done for
>the failover to occur correctly, such as defining a pre-event to the
>node_down event where NetView is located.  The pre-event issues an
>nvstopall forceall on the NetView client node that correctly stops all
>client sessions so the NFS files can be unmounted.  In addition, there is
>a modification to the NFS unmount script in HACMP that must be made for
>this to work, which is documented in the HACMP installation manual.)
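
A sketch of such a pre-event script, keeping the nvstopall invocation from
the post above and assuming a hypothetical client host nvclient reachable
via rsh:

    #!/bin/ksh
    # pre-event to node_down on the NetView node: stop the client
    # sessions remotely so their NFS mounts of /usr/OV can be released
    rsh nvclient 'nvstopall forceall'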

Date: Friday, October 29, 1999 12:14 PM
Subject: Re: netview & HACMP


>I forgot to respond to question 2, about installing NetView/Tivoli/Optivity
>in a cluster environment.
>
>The hard way but the sure way is to do double installs, one on each node
>for every application, deleting the installed files from the shared disks
>in between the installs.  There is one trick, though.  For highly
>available applications like NetView/Tivoli/Optivity, always do the first
>install on the node that is NOT going to be the primary server and then do
>the last install on the node that IS going to be the primary server.
>Doing the installs this way guarantees that any licensing or install
>problems will occur on the primary server and will be resolved there.  If
>you do it the other way around, you might not discover any problems until
>an HACMP failover fails.
>
>However, there is a better way: do the installs on the primary node for
>everything and then clone the system's rootvg via a mksysb.
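
A sketch of that cloning step, assuming a tape drive at /dev/rmt0:

    # on the primary node: build a bootable system backup of rootvg
    mksysb -i /dev/rmt0      # -i regenerates the /image.data file first
    # boot the alternate node from this tape to restore, then fix up the
    # hostname, adapters, and HACMP definitions for that node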
>
>Regards,
>ken
>ken.garst@kp.org
>

To:     pezzutti@banespa.com.br
Subject:        Re: HACMP and Netview

Here is my attempt to answer your questions:

(1)  HAView is part of the HACMP lpp and is installed AFTER NetView is
installed; the HACMP manuals explain how to start up HAView.

(2)  The integration of HAView and NetView is supposed to be automatic;
just follow the instructions in the HACMP manuals.

(3)  As for the need to know about cluster status changes via SNMP traps,
do the following:
        (3.1)  On the HACMP cluster nodes, make certain that
/etc/snmpd.conf has an entry for the NetView server as the trap
destination (a sample entry follows the trap list below).
        (3.2)  On the NetView server, select the option to load the HACMP
MIB.
        (3.3)  Select the options for adding/customizing SNMP traps.
        (3.4)  Add an enterprise ID using the HACMP OID.
        (3.5)  Under the HACMP enterprise ID, add as many of the following
traps as you think are needed; they have specific numbers and definitions
given in the /usr/sbin/cluster/hacmp.my file:

-- State Event traps
        trapSwapAdapter
        trapSwapAdapterComplete
        trapJoinNetwork
        trapFailNetwork
        trapJoinNetworkComplete
        trapFailNetworkComplete
        trapJoinNode
        trapFailNode
        trapJoinNodeComplete
        trapFailNodeComplete
        trapJoinStandby
        trapFailStandby
        trapEventNewPrimary
        trapClusterUnstable
        trapClusterStable
        trapConfigStart
        trapConfigComplete
        trapClusterConfigTooLong
        trapClusterUnstableTooLong

        trapEventError
        trapDareTopology
        trapDareTopologyStart
        trapDareTopologyComplete
        trapDareResource
        trapDareResourceRelease
        trapDareResourceAcquire
        trapDareResourceComplete
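
For step 3.1, a sample trap destination entry, assuming a hypothetical
NetView server named nvserver and the stock AIX snmpd.conf trap line
format (community, destination, view, trap mask):

    # append the trap destination and have snmpd re-read its config
    echo 'trap public nvserver 1.2.3 fe' >> /etc/snmpd.conf
    refresh -s snmpd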

Regards,
ken
ken.garst@kp.org

