
RE: [nv-l] A plea for anyone who used the old Tivoli Manager for Network Connectivity product

To: <nv-l@lists.us.ibm.com>
Subject: RE: [nv-l] A plea for anyone who used the old Tivoli Manager for Network Connectivity product
From: "Van Order, Drew \(US - Hermitage\)" <dvanorder@deloitte.com>
Date: Tue, 13 Jan 2004 13:35:30 -0600
Delivery-date: Tue, 13 Jan 2004 19:40:43 +0000
Envelope-to: nv-l-archive@lists.skills-1st.co.uk
Importance: normal
Reply-to: nv-l@lists.us.ibm.com
Sender: owner-nv-l@lists.us.ibm.com
Thread-index: AcPaCaMAWyoIHmGuQ92wi3GEJxubDQAAdiJA
Thread-topic: [nv-l] A plea for anyone who used the old Tivoli Manager for Network Connectivity product
Tried all of those things; thank you for suggesting them. I figured it was the change to ovwdb's cache that created this, but apparently not. I agree TFNC should be blind to such minor changes. This is some seriously obscure stuff, and it's making me look real hard at the new integration introduced in 7.1.2. Unfortunately, the supplied netview.rls file for TEC does not compile, so I need to open a PMR.
 
Thanks James--Drew
-----Original Message-----
From: owner-nv-l@lists.us.ibm.com [mailto:owner-nv-l@lists.us.ibm.com] On Behalf Of James Shanks
Sent: Tuesday, January 13, 2004 1:14 PM
To: nv-l@lists.us.ibm.com
Subject: RE: [nv-l] A plea for anyone who used the old Tivoli Manager for Network Connectivity product


The trapd queue size change affects only trapd internally. All you did was enlarge the queue he uses to store outbound events for connected applications. TFNC is blind to that. But the fact that you would have had to recycle trapd in order to change these settings might be a problem. I presume the TFNC daemon was up and working when you recycled trapd. Is he supposed to reconnect after a sudden loss of connection to trapd? Perhaps that's the source of the issue -- he's trying to re-establish a connection over a dead socket? That could eat up your CPU real fast. Or did the TFNC daemon also go down when you took down trapd? Perhaps you might consider taking everything down and restarting it, or even rebooting the box.
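To make the dead-socket theory concrete, here is a minimal sketch of the two retry patterns (Python, purely illustrative -- the address and both loops are hypothetical stand-ins, not TFNC's actual code):

    import socket
    import time

    TRAPD_ADDR = ("localhost", 1661)   # hypothetical address standing in for trapd's socket

    def reconnect_no_delay(addr):
        # Pathological pattern: against a dead listener connect() fails
        # immediately, so this loop spins flat-out and pegs a CPU.
        while True:
            try:
                return socket.create_connection(addr, timeout=5)
            except OSError:
                pass   # no sleep -- retry instantly

    def reconnect_with_backoff(addr, base=1.0, cap=60.0):
        # Same retry, but sleeping between attempts keeps CPU use near zero.
        delay = base
        while True:
            try:
                return socket.create_connection(addr, timeout=5)
            except OSError:
                time.sleep(delay)
                delay = min(delay * 2, cap)

A daemon stuck in the first pattern will not settle down on its own; only restarting it (or everything, as above) breaks it out of the loop.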

Just a guess

James Shanks
Level 3 Support  for Tivoli NetView for UNIX and Windows
Tivoli Software / IBM Software Group



"Van Order, Drew \(US - Hermitage\)" <dvanorder@deloitte.com>
Sent by: owner-nv-l@lists.us.ibm.com

01/13/2004 11:14 AM
Please respond to nv-l

       
To: <nv-l@lists.us.ibm.com>
cc:
Subject: RE: [nv-l] A plea for anyone who used the old Tivoli Manager for Network Connectivity product



Thanks for your reply, Paul--the dev server only has 3,000 objects, so it
wasn't necessary to change the cache from the default. Production servers
have around 6,000 objects currently, so we do want that setting to stay.
What we are trying to determine is why the TFNC sm_ipfm process suddenly
spiked after changing the trapd/ovwdb settings, and why it stays spiked
even after reverting to the previous settings. We expected it to go back
to normal. The NV processes themselves are taking very little CPU.
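For reference, one quick way to confirm which daemon is actually burning the CPU is to sample per-process usage; below is a small sketch using the third-party Python module psutil (illustrative only -- not how we measured it):

    import time
    import psutil   # third-party: pip install psutil

    # Prime the per-process CPU counters (the first cpu_percent() call
    # always returns 0.0), then sample over a one-second window.
    procs = list(psutil.process_iter(["name"]))
    for p in procs:
        try:
            p.cpu_percent(None)
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass
    time.sleep(1.0)

    usage = []
    for p in procs:
        try:
            usage.append((p.cpu_percent(None), p.info["name"], p.pid))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass

    # Print the top ten CPU consumers; a spiked daemon like sm_ipfm
    # would show up at the head of this list.
    for cpu, name, pid in sorted(usage, key=lambda t: t[0], reverse=True)[:10]:
        print(f"{cpu:6.1f}%  {name}  (pid {pid})")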

-----Original Message-----
From: owner-nv-l@lists.us.ibm.com [mailto:owner-nv-l@lists.us.ibm.com]
On Behalf Of Paul
Sent: Tuesday, January 13, 2004 2:54 PM
To: nv-l@lists.us.ibm.com
Subject: Re: [nv-l] A plea for anyone who used the old Tivoli Manager
for Network Connectivity product


Why did you change your ovwdb cache to a smaller size? Do you
have more than 5,000 objects in your database? If you do, I would
highly suggest changing the cache size to the number of objects + 10%.
I think you will find that helps. If the cache size is less than the
number of objects in the database, it causes NetView to thrash
about badly.
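You can see the thrashing effect in a toy model. Below is a sketch in
Python, with a plain LRU cache standing in for ovwdb's object cache
(the numbers are illustrative):

    from collections import OrderedDict

    def lru_misses(capacity, objects, passes=5):
        # Sweep `objects` distinct keys through an LRU cache of `capacity`
        # slots, `passes` times over, and count the misses.
        cache = OrderedDict()
        misses = 0
        for _ in range(passes):
            for key in range(objects):
                if key in cache:
                    cache.move_to_end(key)        # refresh recency on a hit
                else:
                    misses += 1
                    cache[key] = True
                    if len(cache) > capacity:
                        cache.popitem(last=False)  # evict least recently used
        return misses

    objects = 6000
    print(lru_misses(5000, objects))                # undersized: all 30,000 accesses miss
    print(lru_misses(int(objects * 1.1), objects))  # objects + 10%: only the first 6,000 miss

With the cache even slightly smaller than the object count, a sequential
sweep evicts each entry just before it is needed again, so every lookup
misses; at objects + 10%, everything after the first pass is a hit.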

Paul




Van Order, Drew (US - Hermitage) wrote:

> We are still using TFNC with NV 7.1.4. The product has been
> bulletproof for us; we've never needed to touch anything other than
> config files. Our NV environment has grown, so we decided to tweak two
> settings--the trapd queue size for applications from 2000 to 4096, and
> the ovwdb cache from 5,000 to 10,000. The sm_ipfm process has suddenly
> started consuming considerable CPU. I changed the NV settings back,
> but sm_ipfm did not change. We have a failover server and a dev
> box--same behavior. Documentation, the list archives, and IBM's
> support DB have very little about TFNC. While things are running, we
> would like to know why sm_ipfm suddenly changed and what we can do to
> better understand what's going on. Our 6-CPU, 4 GB RAM AIX box went
> from a load average of .2 to consistently over 1.1, periodically pegging
> individual CPUs to where they alert us. The smaller boxes are taking
> a bigger beating.
>
> We know we need to move off TFNC, but I have not seen how we can get
> the same results with the newer versions of NV and TEC. Any advice on
> that is appreciated as well. Many thanks--Drew
>
> Drew Van Order
> ESM Architect
> (615) 882-7836 Office
> (888) 530-1012 Pager





