I'd consider having another NetView as a backup before I would do anything
this complex. Then you could configure the agents to send traps to both
NetViews, which would save lots of time. And you would always know when
your backup was working. I know of other customers who in fact implement
3 virtually identical NetViews for exactly this reason (backup), but they
also found that this allowed them to farm out snmpCollect to the least
busy box, to "stage in" config changes, and a host of other minor
benefits. Duplicate hardware is still the easiest and ultimately cheapest
backup strategy once you factor in setup time, testing time, and ongoing
maintenance.
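For example, on an AIX agent the trap destinations are just "trap" lines
in /etc/snmpd.conf, so sending to two or three NetViews is one extra line
per box. Something like this (community, hostnames, and mask here are
examples only; check the snmpd.conf format for your platform):

     trap  public  netview1.yourdomain.com  1.2.3  fe
     trap  public  netview2.yourdomain.com  1.2.3  fe

Then "refresh -s snmpd" and every trap goes to both boxes.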
James Shanks
Tivoli (NetView for UNIX) L3 Support
Gast <gast@MUS.DE> on 02/04/99 07:40:33 AM
Please respond to Discussion of IBM NetView and POLYCENTER Manager on
NetView <NV-L@UCSBVM.ucsb.edu>
To: NV-L@UCSBVM.ucsb.edu
cc: (bcc: James Shanks)
Subject: Re: Hypothetical Question
Hi Chris,
there is no problem for NetView to start on a new "Managed Node", but
you will run into problems if you want to configure it or to
update/install/uninstall it.
NetView doesn't need the Framework to work correctly once it is installed.
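You can see this yourself: stop the oserv on the NetView box and NetView
still comes up, for example (the dispatcher number is a placeholder):

     odadmin shutdown <dispatcher>   # stop this node's oserv ("odadmin odlist" shows the number)
     /usr/OV/bin/ovstart             # NetView daemons start without the Framework
     /usr/OV/bin/ovstatus            # all daemons should report RUNNING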
regards Stefan
Chris Cowan wrote:
>
> Scenario:
>
> A multi machine AIX 4.3.1 HACMP cluster.
>
> Machine 1 - TMR 3.6 Server
> Machine 2 - TEC 3.6 Server
> Machine 3 - NetView 5.1
>
> Tivoli Filesystems (/usr/local/Tivoli, /var/spool/Tivoli).
> Endpoints presently in the default /opt/Tivoli
> All 3 machines have access to a shared disk cabinet (Right now it's an
> IBM SSA, later it may be EMC).
>
> We are probably going to go with unique filesystems like:
> /machine1/usr/local/Tivoli
> /machine1/var/spool/Tivoli
> /machine2
> .
> .
> .
> /machine3/var/spool/Tivoli
>
> And then switch by making and breaking symlinks to /usr/local/Tivoli and
> /var/spool/Tivoli.
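>
> In shell terms, the switch to machine1's role (say) would just be
> something like this, assuming /usr/local/Tivoli and /var/spool/Tivoli
> are already symlinks rather than real directories:
>
>     rm -f /usr/local/Tivoli /var/spool/Tivoli
>     ln -s /machine1/usr/local/Tivoli /usr/local/Tivoli
>     ln -s /machine1/var/spool/Tivoli /var/spool/Tivoli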
>
> The failover scenario is to move either the TMR (with priority) or
> the TEC server over to the NetView machine.
> The sequence would be:
> - Detect the machine going down
> - Stop NetView (ovstop)
> - Stop NetView's oserv
> - Umount NetView's MN filesystems
> - Assume the failed-over machine's IP address
> - Mount failed machine's MN Filesystems
> - Start oserv
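>
> Roughly, as a ksh sketch (details unverified; HACMP event scripts would
> drive this, and the single-argument mounts assume stanzas in
> /etc/filesystems):
>
>     /usr/OV/bin/ovstop                     # stop NetView's daemons
>     odadmin shutdown <dispatcher>          # stop NetView's oserv
>     umount /machine3/var/spool/Tivoli      # drop NetView's MN filesystems
>     umount /machine3/usr/local/Tivoli
>     ifconfig en0 alias <failed-ip> netmask <mask>   # assume the failed IP
>     mount /machine1/usr/local/Tivoli       # pick up the failed machine's filesystems
>     mount /machine1/var/spool/Tivoli
>     # ... re-point the /usr/local/Tivoli and /var/spool/Tivoli symlinks as above ...
>     /etc/Tivoli/oserv.rc start             # start oserv on the failed machine's databases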
>
> The billion dollar question is:
> Can NetView be brought up again with a "new" Managed Node running
> underneath it?
>
> Obviously, we would have to do the "reset_ci" procedure since the
> hostname/IP address of the NetView manager will have changed. How easy
> would it be to change the underlying MN configuration and keep the
> Administrative and NV Event Adapter pieces running?
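>
> (I assume that boils down to running reset_ci once the IP address and
> hostname have moved, e.g.
>
>     /usr/OV/service/reset_ci
>
> but I'll verify the exact procedure; that path is from memory.)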
>
> The following things come to my mind:
> - SNMP traps from agents would have to be retargeted (or all agents
> should be configured for multiple targets from the outset).
> - All three MNs would have to have a NetView Server object installed on
> them.
> - The local snmpd and snmpd.conf may have to be messed with.
> - These three systems are also running Endpoints (for Event Adapter
> support). Presently, they are configured for /opt/Tivoli. I can
> reinstall them to run in /usr/local/Tivoli/lcf. (Probably a good idea
> for several reasons).
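>
> (For the trap item, a quick check of what each agent targets today,
> e.g. on an AIX agent:
>
>     grep "^trap" /etc/snmpd.conf
>
> run on every agent box, would tell us how much retargeting is needed.)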
>
> Also, are there any implications to reversing this procedure?
>
> Please don't tell me it's a bad idea; that's not my call. I may have to
> implement this regardless.
>
> Thanks
> Chris