all of those things; thank you for suggesting something. I figured it would be
the change to ovwdb's cache that created this, but apparently not so. I agree
TFNC should be blind to such minor changes. This is some seriously obscure
stuff, and it is making me look real hard and fast at the new integration introduced
in 7.1.2. Unfortunately, the supplied netview.rls file for TEC does not compile,
so I need to open a PMR.
The trapd queue
size change affects only trapd internally. All you did was enlarge the
queue he uses to store outbound events for connected applications. TFNC
is blind to that. But the fact that you would have had to recycle trapd
in order to change these settings might be a problem. I presume the TFNC
daemon was up and working when you recycled trapd. Is he supposed to
reconnect after a sudden loss of connection to trapd? Perhaps that's the
source of the issue -- he's trying to re-establish a connection over a dead
socket? That could eat up your CPU real fast. Or did the TFNC
daemon also go down when you took down trapd? Perhaps you might
consider taking everything down and restarting it or even rebooting the box.
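If it helps to picture it, here is a minimal sketch in Python of the failure mode I am guessing at. This is my own illustration, not TFNC or trapd code, and the host and port are made up. A client that retries a refused connect with no pause spins flat out, while the same loop with a backoff costs almost nothing:

    # Sketch only -- not TFNC/trapd source. HOST and PORT are placeholders.
    import socket
    import time

    HOST, PORT = "localhost", 1662   # hypothetical listener address

    def reconnect_no_backoff():
        # If the peer refuses the connect right away, this loop spins
        # flat out and can pin a CPU while accomplishing nothing.
        while True:
            try:
                return socket.create_connection((HOST, PORT), timeout=1)
            except OSError:
                continue   # immediate retry, no pause

    def reconnect_with_backoff(max_delay=60):
        # Same loop with exponential backoff; CPU use stays near zero.
        delay = 1
        while True:
            try:
                return socket.create_connection((HOST, PORT), timeout=1)
            except OSError:
                time.sleep(delay)
                delay = min(delay * 2, max_delay)

If sm_ipfm is stuck in something like the first loop, that would match the CPU numbers you are seeing.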
Just a guess.
Level 3 Support for Tivoli NetView for UNIX and
Tivoli Software / IBM Software Group
||"Van Order, Drew \(US - Hermitage\)"
Sent by: firstname.lastname@example.org
01/13/2004 11:14 AM
Please respond to nv-l
Subject: RE: [nv-l] A plea for
anyone who used the old Tivoli Manager for Network Connectivity
Thanks for your reply Paul--the dev server only has 3000 objects, so it
wasn't necessary to change it from the default. Production servers have
around 6,000 objects currently, so we do want that setting to stay. What
we are trying to determine is why the TFNC sm_ipfm process
spiked after changing trapd/ovwdb settings, and stays spiked even after
reverting to previous settings. We expected it to go back to normal. All other
processes are taking very little CPU.
On Behalf Of Paul
January 13, 2004 2:54 PM
Subject: Re: [nv-l] A plea for anyone who used the old Tivoli Manager for Network Connectivity
Why did you change your ovwdb cache to a smaller size? Do you
have more than 5,000 objects in your database? If you do, I
highly suggest changing the cache size to the number of objects + 10%. I
think you will find that helps. If the cache size is less than the number
of objects in the database, it causes NetView to thrash.
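To put numbers on that rule of thumb, using only the figures from this thread (a quick arithmetic sketch, not an official sizing formula):

    # ovwdb cache sizing per the rule above: cache >= objects + 10%.
    objects = 6000                       # Drew's production object count
    recommended = int(objects * 1.10)    # 6600
    print(recommended)

By that rule the old cache of 5,000 was too small for 6,000 objects, and 10,000 leaves plenty of headroom.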
Van Order, Drew (US - Hermitage) wrote:
> We are still using TFNC with NV 7.1.4. The product has been
> bulletproof for us; we've never needed to touch anything other than
> config files. Our NV environment has grown, so we decided to tweak 2
> settings--number of trap applications from 2000 to 4096, and changed
> the ovwdb cache to 10,000 from 5000. The sm_ipfm process has suddenly
> started consuming considerable CPU. I changed the NV settings back,
> but sm_ipfm did not change. We have a failover server and a dev
> box--same behavior. Documentation, the list archives, and IBM's
> support DB have very little about TFNC. While things are running, we
> would like to know why sm_ipfm suddenly changed and what we can do to
> better understand what's going on. Our 6-CPU, 4 GB RAM AIX box went
> from a load average of .2 to consistently over 1.1, periodically pegging
> individual CPUs to where they alert us. The smaller boxes are taking
> a bigger beating.
> We know we need to move off TFNC, but I have not seen how we can get
> the same results with the newer versions of NV and TEC. Any advice on
> that is appreciated as well.
> Drew Van Order
> ESM
> (615) 882-7836 Office
> (888) 530-1012
This message (including any attachments) contains confidential information intended for a specific individual and purpose, and is protected by law. If you are not the intended recipient, you should delete this message. Any disclosure, copying, or distribution of this message, or the taking of any action based on it, is strictly prohibited.