If the rulesets use smartset queries and there is any problem with NVCOLD, that could affect trap processing. The same effect can occur if OVWDB is running at very high CPU. By any chance, are you running discovery with OID wildcards? If so, OVWDB has numerous memory leak / CPU performance issues (fixes are only just becoming available: APAR IY32422).
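A quick way to check is to look at the daemon state and CPU use on the box; a rough sketch, assuming a standard install under /usr/OV (adjust the paths for your environment):

    # Confirm both daemons report a normal running state
    /usr/OV/bin/ovstatus ovwdb
    /usr/OV/bin/ovstatus nvcold

    # See whether either daemon is burning an unusual amount of CPU
    ps aux | egrep "ovwdb|nvcold" | grep -v grep

If OVWDB is pegged, the OID-wildcard discovery issue above is a likely suspect.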
-----Original Message-----
From: Scott Hammons [mailto:s.hammons@ais-nms.com]
Sent: Tuesday, July 30, 2002 12:08 PM
To: IBM NetView Discussion
Subject: RE: [nv-l] trapd processing very slow
Greg,
We had a similar problem at a customer site. Although the processing was not as slow as yours, it was still slow (5 to 10 minutes). We tried checking everything within NetView, but after about a week it came down to a DNS issue affecting the way NetView was resolving names. You might want to check that your DNS tables are correct. Hope this helps.
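For example, a quick sanity check from the NetView server could look like this (just a sketch; router1 and 10.1.1.1 are placeholders, substitute one of your managed nodes and its address):

    # Time a forward and a reverse lookup; either one hanging or timing
    # out points at the resolver rather than at NetView itself
    time nslookup router1
    time nslookup 10.1.1.1

    # On AIX, also confirm which name servers and lookup order are in use
    # (netsvc.conf may not exist if the default order is used)
    cat /etc/resolv.conf /etc/netsvc.conf

Slow lookups or missing/incorrect reverse records will show up here long before you find them inside NetView.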
Scott
Scott Hammons, Senior Consultant
Tivoli Certified Consultant
Advanced Integrated Solutions, Inc.
www.ais-nms.com
Email s.hammons@ais-nms.com
Cellular (210) 378-8229
-----Original Message-----
From: Gregory Adams [mailto:gadams@us.ibm.com]
Sent: Tuesday, July 30, 2002 11:37 AM
To: nv-l@lists.tivoli.com
Subject: [nv-l] trapd processing very slow
Hello,
My environment is:
NetView 7.1.2
AIX 4.3.3 maintenance level 9
I am seeing a major delay (hours) before traps are processed by trapd. When I run netstat -a | grep trapd, I see a consistently high byte count sitting in the receive queue:
tcp4       0      0  *.nvtrapd-          *.*                 LISTEN
70610400 stream      0      0  166c2e80         0         0         0  /usr/OV/sockets/trapd.socket
7066e400 stream    276      0         0  70614880         0         0  /usr/OV/sockets/trapd.socket
706a8400 stream      0      0         0  70614640         0         0  /usr/OV/sockets/trapd.socket
7067ca00 stream  65466      0         0  7007f600         0         0  /usr/OV/sockets/trapd.socket
706b1a00 stream      0      0         0  7043cbc0         0         0  /usr/OV/sockets/trapd.socket
706b5800 stream      0      0         0  70765a40         0         0  /usr/OV/sockets/trapd.socket
7058b600 stream    245      0         0         0         0         0  /usr/OV/sockets/trapd.socket
In trying to debug this problem, I have removed all of my ESE.automation rulesets in the hope that something in those rulesets was causing it. An iptrace on port 162 shows the traps are arriving: for example, I generated a test trap and watched the packet appear in the iptrace; close to 3 hours later the trap appeared in trapd.log, and roughly two and a half hours after that it appeared in the nvevents application.
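In case it helps, this is roughly how I am timing the delay (a sketch; TESTTRAP is just a placeholder for whatever distinctive string the test trap carries, and I am assuming the default log location /usr/OV/log/trapd.log):

    # Note the time the test trap is sent
    date

    # Then check trapd.log periodically and note when the trap finally arrives
    while true
    do
        date
        grep -c "TESTTRAP" /usr/OV/log/trapd.log
        sleep 60
    done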
I also have trapd.trace running, and I never see the application queues exceed what I have them set to; I had increased them from 2000 to 10000 just in case that was the problem.
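I am also taking timestamped snapshots of the trapd sockets to see whether the backlog grows or drains, roughly like this (the output file name is arbitrary):

    # Re-run periodically (or from cron) and compare the Recv-Q byte counts
    ( date; netstat -a | grep trapd ) >> /tmp/trapd_backlog.out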
Has anyone seen this behaviour, or does anyone have any ideas?
Thank you
Greg Adams
---------------------------------------------------------------------
To unsubscribe, e-mail: nv-l-unsubscribe@lists.tivoli.com
For additional commands, e-mail: nv-l-help@lists.tivoli.com
*NOTE*
This is not an Official Tivoli Support forum. If you need immediate
assistance from Tivoli please call the IBM Tivoli Software Group
help line at 1-800-TIVOLI8(848-6548)
---------------------------------------------------------------------