Also, are you using smartsets in your SNMP data collection? I had to remove
the smartsets I was using in SNMP configuration and data collection because
it appeared the smartset queries were driving up CPU.
-----Original Message-----
From: Luc BARNOUIN [mailto:luc.barnouin@thalesatm.com]
Sent: Tuesday, July 17, 2001 3:45 AM
To: IBM NetView Discussion
Subject: Re: [NV-L] snmpCollect
reamd@nationwide.com wrote:
Hi all,
I am running AIX 4.3.3 with NetView 6.02. My snmpCollect is constantly
using 60% of my CPU. I run snmpstatus.sh and it shows 0 behind in
polling.
I currently have 106,000 objects in the database maintained by ovwdb...
Dave,
We had a similar problem with our configuration, where snmpCollect was using
from 30% to 60% of the CPU. When we activated the traces, we found it was
looping while querying the sysUpTime OID for one particular node. I don't know
why, but each time snmpCollect queries an OID on a node, it first gets the
sysUpTime and then the requested OID. On that node, we were experimenting with
a new SNMP agent which had its own implementation of sysUpTime (i.e. a
different value from the standard snmpd). Depending on which agent was running
at snmpCollect start (data collection initialisation) and then during the data
collection cycle, we got into this situation (I'm sure that the Tivoli software
team could explain this behavior...). A stop/restart of snmpCollect fixed
the problem until the next time...
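As a side note, the symptom of two agents answering with different sysUpTime values can be spotted by checking that successive samples stay consistent with wall-clock time. A minimal sketch (not NetView code; the sample format is my assumption):

```python
# Sketch: detect when successive sysUpTime samples are inconsistent, which
# can indicate that a different SNMP agent answered between polls.
# sysUpTime is in hundredths of a second (ticks); each sample is a tuple
# (poll_time_in_seconds, sysuptime_in_ticks).

def agent_changed(samples, slack=500):
    """Return True if some sysUpTime sample is inconsistent with the
    previous one, i.e. it went backwards or jumped by far more than the
    wall-clock interval allows (slack is in ticks, 1 tick = 1/100 s)."""
    for (t0, u0), (t1, u1) in zip(samples, samples[1:]):
        expected = u0 + int((t1 - t0) * 100)  # ticks expected if same agent
        if abs(u1 - expected) > slack:
            return True
    return False
```

Applied to successive polls of the suspect node, a True result would point at the two-agents situation described above.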
So check the snmpCollect traces to see whether you are in the same
situation...
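When going through the traces, something like the following sketch can help find the node being polled in a tight loop (the trace line format here is an assumption; adapt the parsing to whatever your snmpCollect trace actually emits):

```python
# Count sysUpTime queries per node in an snmpCollect trace to spot a node
# being queried in a tight loop. The line format below ("<time> query
# <node> <oid>") is an assumption; adjust the field parsing to your trace.
from collections import Counter

def suspect_nodes(trace_lines, threshold=100):
    """Return nodes whose sysUpTime query count exceeds `threshold`."""
    counts = Counter()
    for line in trace_lines:
        if "sysUpTime" in line:
            parts = line.split()
            if len(parts) >= 3:
                counts[parts[2]] += 1
    return [node for node, n in counts.items() if n > threshold]

# Example with synthetic trace lines:
sample = ["12:00:01 query routerA sysUpTime.0"] * 500 + \
         ["12:00:02 query routerB ifInOctets.1"] * 10
print(suspect_nodes(sample))  # -> ['routerA']
```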
Regards
Luc