This thread seems to have two subjects going on, and of course I have
opinions on both of them.
The netmon option (undocumented) that controls how many outstanding
ping requests you can have is -q. It is 16 by default in V5, and may be set
as high as 32. You can tell what it is set to by the number of entries at the
top of the output (in netmon.trace) of netmon -a 3. Note that the increase
does NOT seem to apply to the number of outstanding SNMP requests
(wouldn't THAT be nice!). Just add -q 32 to the netmon.lrf file and process it
manually (ovdelobj, ovaddobj, stop/start netmon).
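In case it helps, here is roughly what that procedure looks like on a UNIX
install (the /usr/OV paths are the usual defaults, so check yours, and edit
netmon.lrf with whatever you normally use):

    # See the current setting: dump netmon's ping list and count the
    # entries at the top of /usr/OV/log/netmon.trace
    netmon -a 3

    # After adding -q 32 to the parameter string in /usr/OV/lrf/netmon.lrf,
    # re-register the daemon and bounce it
    ovdelobj /usr/OV/lrf/netmon.lrf
    ovaddobj /usr/OV/lrf/netmon.lrf
    ovstop  netmon
    ovstart netmon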
Usually the red things you are talking about are considered false alarms,
and are due to having too short a timeout or retry setting. However, there
was recently a lengthy discussion on this forum about network characteristics
that lead to lost pings which tuning the timeouts won't fix. A bunch of clues,
but nothing conclusive, I think. You might want to search the archives for the
last 15-30 days, unless it clears up once you tune the timeouts.
I've got one customer where this is a continuous problem. I went so far as
to set up a rule to ping the interface when there is an interface down event,
and found that the script actually had to sleep about 30 seconds before
pinging for the ping to get through. I have no idea why. I'm a software
person. But NetView is doing its job in reporting that there is SOME kind
of problem.
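If anyone wants to try the same experiment, the action script was not much
more than this (a rough sketch; I'm assuming the node name or address comes
in as the first argument and that the log file is just somewhere convenient,
and the ping count flag is the AIX/Linux-style -c, so adjust for your
platform):

    #!/bin/sh
    # Sketch of a check script fired by an interface/node down rule.
    # $1 = node name or address (assumption; depends on how your ruleset
    # launches the action). The sleep is the interesting part: without
    # it the ping never got through at this customer.
    NODE=$1
    sleep 30
    if ping -c 3 "$NODE" >/dev/null 2>&1; then
        echo "`date`: $NODE answered ping after 30 second wait" >> /usr/OV/log/pingcheck.log
    else
        echo "`date`: $NODE still not answering" >> /usr/OV/log/pingcheck.log
    fi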
Cordially,
Leslie A. Clark
IBM Global Services - Systems Mgmt & Networking
We are experiencing a problem where a node will appear to go down, and it will
stay red. If I go in and manually ping the node then Netview will send a node
up message and the node will go green again. Can anyone suggest some
diagnostics?
Thanks
Bill Painter
william.t.painter@lmco.com