For the record, since I forgot again...
Cordially,
Leslie A. Clark
IBM Global Services - Systems Mgmt & Networking
Detroit
"Westphal,
Raymond" To: Leslie
Clark/Southfield/IBM@IBMUS
<RWestphal@era cc:
c.com> Subject: RE: [nv-l] ovobjprint -S
and netmon
?
03/26/02 10:15
AM
Thanks very much.
-----Original Message-----
From: Leslie Clark [mailto:lclark@us.ibm.com]
Sent: Tuesday, March 26, 2002 12:16 AM
To: Westphal, Raymond
Subject: Re: [nv-l] ovobjprint -S and netmon ?
Yes, what you are seeing is probably normal. Your seedfile probably
contains @OID entries or name wildcards. These allow the creation of
tiny stub objects in the database, placeholders that mean "don't
discover this object". They get checked daily to see if they have
turned into something that we want to discover. These stubs show
up in the count provided by 'ovobjprint -S', and they also show up
in netmon's configuration polling check - that daily check I mentioned.
When you run ovtopofix, these stubs get deleted. As soon as netmon
starts back up, though, they begin to be rediscovered. This is normal.
There is a certain amount of overhead associated with it, though,
so what you do is exclude as many of them as you can with one
or more negative address wildcards in the seedfile. What I do is
exclude the DHCP address ranges, if they are at all tidy. That
level of effort seems to be a good balance with the benefit
gained; I don't go after every little one.
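For reference, an exclusion entry in the seedfile looks something like
this. This is only a sketch - the addresses are invented, and you should
check the exact wildcard and range forms against the NetView
Administrator's Guide for your release:

```
# netmon.seed fragment (invented addresses; verify syntax locally)
@1.3.6.1.4.1.9.*        # discover by sysObjectID (the @OID form)
!10.10.20.50-250        # exclude this DHCP client range
!10.10.30.*             # exclude this whole DHCP subnet
```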
When sizing the database, especially for ovwdb cache, keep the
highest number in mind.
Cordially,
Leslie A. Clark
IBM Global Services - Systems Mgmt & Networking
Detroit
"Westphal,
Raymond" To: "NV List (E-mail)"
<RWestphal@era <nv-l@lists.tivoli.com>
c.com> cc:
Subject: [nv-l] ovobjprint -S
and netmon ?
03/25/02 12:09
PM
Hello Everyone.
NetView 7.1.1 on AIX 4.3.3 ML9
Last year a message was posted to the list server that contained two
scripts, pingstatus.sh and snmpstatus.sh. The scripts provide an easy
way to tell how well netmon is performing: pingstatus shows how netmon
is doing on status polls, and snmpstatus shows how netmon is doing on
SNMP polls.
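(The rough idea of those scripts, as I understand it, is to dump
netmon's poll queue and count how many entries are already overdue.
This is only a sketch - the sample_dump function below fakes the dump
output with an invented format, since the real trace format should be
taken from the scripts themselves.)

```shell
#!/bin/sh
# Sketch of a pingstatus-style check. In real life the data would come
# from netmon's queue dump in /usr/OV/log/netmon.trace; here sample_dump
# fakes it with an invented "next-poll=<seconds>" field.
sample_dump() {
cat <<'EOF'
router1.example.com   next-poll=-120
switch2.example.com   next-poll=30
host3.example.com     next-poll=-5
EOF
}

# A negative next-poll means the poll is overdue; count those entries.
behind=`sample_dump | awk -F'next-poll=' '$2 < 0 {n++} END {print n+0}'`
echo "objects behind in polling: $behind"
```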
My dilemma is that the number of objects in the database grows quickly
after the database cleanup runs on the weekend. At 6 a.m. the number of
objects defined in the database was 17,890. Right now, at 11 a.m., it
is up to 38,149.
When I check netmon, it is usually too busy to even create the trace
file, or is 8,000+ objects behind in polling. I suspect this is because
netmon has to continuously recalculate the polling cycle. Of course,
NetView then isn't really proactively warning us of node and interface
failures.
Is this normal behavior? Or is there something I can do to keep netmon
polling effectively?
Here are the netmon.lrf contents:
netmon:/usr/OV/bin/netmon:OVs_YES_START:nvsecd,ovtopmd,trapd,ovwdb:-P,-S,-s/usr/OV/conf/netmon.seed,-A,-u,-l,-h,-K 0,-q 32,-Q 32:OVs_WELL_BEHAVED:15:
Thanks in advance.
Ray Westphal
Enterprise Rent-A-Car
---------------------------------------------------------------------
To unsubscribe, e-mail: nv-l-unsubscribe@lists.tivoli.com
For additional commands, e-mail: nv-l-help@lists.tivoli.com
*NOTE*
This is not an Official Tivoli Support forum. If you need immediate
assistance from Tivoli please call the IBM Tivoli Software Group
help line at 1-800-TIVOLI8(848-6548)