I love API questions; I might be the only one.
I don't understand some of what you have explained, but I'll give it a
shot. Also, I can't comment on your internal daemons; I don't think anyone
can unless you explain them in greater detail. However, I
think this is the real question: "Is there a way to set up NetView to
status (via SNMP) multiple IP addresses for the same node?"
Yes, though I think this will depend on the SNMP stack on the remote
systems. Did you write the stack yourself, or are you using some COTS
agent? In the past I have configured NetView to manage hundreds of end
systems with multiple Network Interface Cards (NICs). NetView knows about
this because a demand poll for the two interfaces will return the same
Node Name. NetView then internalizes this as one "Node" on the map with
one or many NICs for that asset. This functionality is all built into
NetView and has worked very well for me in the past. Now, if the SNMP
stack you have on those assets isn't quite "to standard," it may be
reporting erroneous information about the asset. In our specific
configuration, we had two entries for each asset in a localized hosts file
on the NetView machine (NMS). NetView would status both of those
hostnames from the hosts file but determined that they both represented
the same device. NetView would magically choose one of the hostnames from
the hosts file (I never quite figured out the logic behind which one it
chose; sometimes it seemed random) and name the node on the map after it.
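For illustration only (the addresses and names here are made up, not our
real ones), the hosts entries on the NMS looked something like this, two
per device:

    192.168.10.5    router1-eth0
    192.168.20.5    router1-eth1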
We were then able to have status through both of the interfaces within
NetView, and everything worked well. During a failure of one of the NICs,
the failed NIC would change to red and the node would change to yellow
(compound propagation). From your email below, it seems you are not
seeing this. Is that correct? I have seen it work in the past and would
think the issue lies somewhere in your configuration.
My guess would be that you have a routing issue. Are you able to get
NetView to discover both NICs, and do both of them reside under the same
node? Is the same NIC receiving both SNMP requests?
I don't think you need to go the route of a respawning daemon. I have
written a number of stand-alone applications (they reside on the NetView
device) that use NetView's API to status remote devices and take action
in different scenarios. It sounds like this is the direction you want to
go, though I am not sure. If you have more information on exactly what
you are trying to do, we may be able to help. Also, can you explain in
more detail what you mean by "Virtual IP"? Does this IP route to a valid
SNMP-enabled device?
Thanks,
Jason Allison
Principal Engineer
ARINC Incorporated
-----Original Message-----
From: owner-nv-l@lists.us.ibm.com [mailto:owner-nv-l@lists.us.ibm.com]
On Behalf Of James Shanks
Sent: Wednesday, March 30, 2005 12:55 PM
To: nv-l@lists.us.ibm.com
Subject: Re: [nv-l] Monitoring nodes running ospf
Custom daemons using the OVsnmp APIs? I presume that you've done a lot of
reading in the Programmer's Guide, and especially the Programmer's
Reference, about using these routines. I can't say I quite follow what
you are trying to do with them, however.
In any case, OVsnmpOpen uses the NetView SNMP database, the one you
maintain with xnmsnmpconf, to obtain the peer address for the open.
Before opening the session it does an OVsnmpResolveConfEntry() to get the
IP address or proxy address to use. That means that if there's already an
IP address in the xnmsnmpconf cache, he'll use that one. If there isn't
one, he'll try gethostbyname to determine one, and take the first one he
gets back.
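If you want to see which address that would be on your box, a quick
stand-alone check like the one below (standard resolver calls only,
nothing NetView-specific; the build line is a guess for your platform)
prints the addresses in the order gethostbyname returns them -- the first
one is what a "take the first address" caller ends up using:

    /* resolvecheck.c -- print the addresses gethostbyname() returns, in order.
     * Build: cc -o resolvecheck resolvecheck.c  (some UNIXes need -lnsl -lsocket)
     */
    #include <stdio.h>
    #include <string.h>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(int argc, char *argv[])
    {
        struct hostent *hp;
        int i;

        if (argc != 2) {
            fprintf(stderr, "usage: %s hostname\n", argv[0]);
            return 1;
        }
        hp = gethostbyname(argv[1]);
        if (hp == NULL) {
            fprintf(stderr, "%s: lookup failed\n", argv[1]);
            return 1;
        }
        /* The first entry in h_addr_list is the one a simple caller would use. */
        for (i = 0; hp->h_addr_list[i] != NULL; i++) {
            struct in_addr addr;
            memcpy(&addr, hp->h_addr_list[i], sizeof(addr));
            printf("%s address %d: %s\n", argv[1], i + 1, inet_ntoa(addr));
        }
        return 0;
    }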
So, offhand, my advice would be to try clearing the xnmsnmpconf cache
before you establish your session. That would force a re-evaluation. You
could even dump it and see what it contains. But if that doesn't do it,
then the issue has to be name resolution.
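(If I remember the command line correctly, clearing the cache is
something like

    xnmsnmpconf -clearCache

but check the xnmsnmpconf man page on your level of NetView for the exact
option.)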
HTH
James Shanks
Level 3 Support for Tivoli NetView for UNIX and Windows
Tivoli Software / IBM Software Group
From: jeff.ctr.vandenbussche@faa.gov
Sent by: owner-nv-l@lists.us.ibm.com
To: nv-l@lists.us.ibm.com
Sent: 03/29/2005 04:07 PM
Subject: [nv-l] Monitoring nodes running ospf
Please respond to nv-l
I have a small network with ~12 boxes. Each box consists of 2 Ethernet
ports, plus a 3rd "virtual" interface, with OSPF running on each box. My
problem is with several custom polling daemons that status MIB variables
on these various nodes. I have each node defined in the hosts file on the
NetView box with the virtual IP. The polling daemons use NetView API
calls (OVsnmpOpen) to open an SNMP session with each node using the
virtual IP/hostname. The problem occurs during failures. If I fail the
active interface on the remote box, the SNMP polling responses time out
and don't switch over to the other route to the nodes. It seems as if the
SNMP session binds to the active interface, rather than the virtual IP
that is defined.
Also, if I shut down the active interface on the NetView box, I lose all
statuses from all nodes. I am not sure if this is a NetView
configuration, a NetView API, or an SNMP issue. Any suggestions?
Is there a way to set up NetView to status (via SNMP) multiple IP
addresses for the same node? Can a daemon/lrf be set up to respawn? I
could then have the session time out, then respawn.
Thanks,
Jeff VanDenbussche
JSA/ATO-E
HNL Support
(609) 485-4200