
RE: [nv-l] NV tuning for Data collection

To: "'nv-l@lists.us.ibm.com'" <nv-l@lists.us.ibm.com>
Subject: RE: [nv-l] NV tuning for Data collection
From: "Liu, David" <david.liu@eds.com>
Date: Thu, 13 Jan 2005 14:55:57 -0000
Delivery-date: Thu, 13 Jan 2005 14:58:35 +0000
Envelope-to: nv-l-archive@lists.skills-1st.co.uk
Reply-to: nv-l@lists.us.ibm.com
Sender: owner-nv-l@lists.us.ibm.com
James,
 
Thanks for your advice. I'll check the priority of the Cisco devices.
 
The two MIBs for which I got error messages are SNMPv2 MIBs, which I cannot browse with the MIB browser.
 
Regards,
David
-----Original Message-----
From: owner-nv-l@lists.us.ibm.com [mailto:owner-nv-l@lists.us.ibm.com] On Behalf Of James Shanks
Sent: Thursday, January 13, 2005 3:47 PM
To: nv-l@lists.us.ibm.com
Subject: RE: [nv-l] NV tuning for Data collection

You need to check the MIB you have loaded for this. snmpCollect "expects" to get back what's in the SNMPv1 MIB database. Start the MIB browser xnmbrowser, follow the tree down to the appropriate entries, and click the Describe button to see what data type each one is.
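If you want to double-check outside the browser, you can also query the device directly and look at the type it returns on the wire. A quick sketch with the net-snmp command-line tools, assuming you have them installed; the community string, hostname, and instance index here are placeholders:

    snmpget -v1 -c public yourrouter 1.3.6.1.4.1.9.9.109.1.1.1.1.5.1
    # typical reply:
    #   SNMPv2-SMI::enterprises.9.9.109.1.1.1.1.5.1 = Gauge32: 27

Whatever appears after the "=" is the type snmpCollect is actually handed, regardless of what the loaded MIB file says.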

It is not likely that this has anything to do with the delays. It is far more likely that your Cisco devices are configured to give a low priority to SNMP requests when they are busy.


James Shanks
Level 3 Support for Tivoli NetView for UNIX and Windows
Tivoli Software / IBM Software Group


          "Liu, David" <david.liu@eds.com>
          Sent by: owner-nv-l@lists.us.ibm.com

          01/13/2005 09:17 AM
          Please respond to
          nv-l


To

"'nv-l@lists.us.ibm.com'" <nv-l@lists.us.ibm.com>

cc


Subject

RE: [nv-l] NV tuning for Data collection

Joe and Leslie,

Thanks for your advice. I'll do some further investigation and testing, especially to avoid collecting from those "non-reply" and "non-response" objects.

One other "problem" appears in the trace file:

"MIB cpmCPUTotal5min, (and ciscoMemoryPoolFree) on 'hostname' gave type Gauge, expect TIMESTICKS (due to mib.coerce file?)"

1) I did not specify anything in the coerce file, and I checked on the Cisco site that these CPU and memory objects should be of type Gauge32. Why does snmpCollect expect TIMESTICKS?

2) Does this error have any impact on the quality of the data collection (e.g. delays)?

Regards,
David

      -----Original Message-----
      From: owner-nv-l@lists.us.ibm.com [mailto:owner-nv-l@lists.us.ibm.com] On Behalf Of Leslie Clark
      Sent: Wednesday, January 12, 2005 11:32 PM
      To: nv-l@lists.us.ibm.com
      Subject: Re: [nv-l] NV tuning for Data collection


      Startup of snmpCollect may be slow. Turn on the tracing and watch what it is doing at startup, so you know whether to worry or not.

      I've found snmpCollect to be pretty efficient at collecting, and at minimizing its impact on the devices by grouping requests together. Where you may have trouble is with those per-interface values on devices with lots of interfaces. There is a maxpdu parameter for the snmpCollect daemon that controls how much data it will request at once. It defaults to 100 somethings. I've sometimes changed it to 50 somethings, so snmpCollect breaks the request into smaller packages. This avoids loss of data caused by the device refusing to deliver too-large responses.
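      For a sense of scale, and assuming the "somethings" are individual variable bindings (my guess): on a hypothetical 48-port device, the 13 per-interface counters in your list come to 13 x 48 = 624 values per cycle. At a max PDU of 100 that is about 7 requests per device; at 50 it is about 13 smaller ones, which a busy device is more likely to answer completely.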

      There are also, under Options...SNMP Config, the timeout and retries settings. I know these apply to SNMP requests from other parts of NetView, but I have never been sure whether they apply to snmpCollect or not.

      You will also have trouble with some of those interface counters if the interfaces are very high speed. NetView currently collects only 32-bit counter values (Counter32), and for gig interfaces the values wrap much too quickly. So take a look at which instances you really need, and what the rate of flow really is. There is no point collecting bad data. Look for sub-interface instances with lower rates of flow and see if that will give you what you need.
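      To put a number on "much too quickly": a Counter32 rolls over at 2^32 = 4,294,967,296. At a sustained 1 Gbit/s, ifInOctets grows by 125,000,000 per second, so the counter wraps roughly every 34 seconds; even a 100 Mbit/s link running flat out wraps in under 6 minutes, well inside a 15-minute collection interval.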

      When you update the snmpCollect configuration via the GUI, it updates /usr/OV/conf/snmpCol.conf and stops and restarts snmpCollect. You can edit that file manually. If you find that you want to collect a variety of different interface instances on each device, you could generate that file programmatically. I'm suggesting that with large numbers of devices, entering *.*.*.* or Routers just because it is easy is not always the best choice. Try making fancy entries via the GUI and then check the results in snmpCol.conf. Then write a little script to generate the repetitive parts, as in the sketch below.
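      Here is a minimal sketch of what I mean, in plain sh. The file names (template.txt, devices.txt) and the @DEVICE@ token are invented for this example; the safe approach is to let the GUI write one real entry first and use it as the template.

          #!/bin/sh
          # template.txt: one entry copied from a GUI-built snmpCol.conf,
          #               with the hostname replaced by the token @DEVICE@
          # devices.txt:  one hostname per line
          : > snmpCol.conf.new                # start with an empty output file
          while read host
          do
              # stamp out one copy of the template for this device
              sed "s/@DEVICE@/$host/g" template.txt >> snmpCol.conf.new
          done < devices.txt
          # Inspect snmpCol.conf.new, merge it into /usr/OV/conf/snmpCol.conf,
          # and then stop and restart snmpCollect just as the GUI would.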

      Cordially,

      Leslie A. Clark
      IBM Global Services - Systems Mgmt & Networking


      "Liu, David" <david.liu@eds.com>
      Sent by: owner-nv-l@lists.us.ibm.com

      01/11/2005 03:53 AM

      Please respond to
      nv-l

      To
      "'nv-l@lists.tivoli.com'" <nv-l@lists.tivoli.com>
      cc
      Subject
      [nv-l] NV tuning for Data collection




      Hi list,

      I've been reading the nv-l archives for quite some time and have
      benefited from them. Now I am posting my first question to get your
      advice and help.

      We are collecting (or at least are supposed to be collecting) quite a
      lot of data every 15 minutes, but so far only part of the collection
      actually happens: every day fewer than half, i.e. about 40 collections
      per definition per device, sometimes even none. My basic question is:
      can NetView handle that many collections? When I suspended more than
      half of the collections (the interface data), it seemed to work fine.
      If NetView can handle it, how do I tune it, and where can I find
      documentation for the tuning (snmpCollect daemon settings)?

      Here's some basic info.

      NV 7.1.2 on Solaris 2.8.

      Data collection covers about 600 devices, with:

      1) SysUptime for all of them

      2) cpmCPUTotal5min for 400 devices (Cisco)

      3) ciscoMemoryPoolUsed for 400 devices

      4) ciscoMemoryPoolFree for 400 devices

      5) ifInUcastPkts, ifOutUcastPkts, ifInErrors, ifOutErrors, ifAdminStatus,
      ifOperStatus, ifLastChange, ifInOctets, ifOutOctets, ifInDiscards,
      ifOutDiscards, ifInNUcastPkts, ifOutNUcastPkts for about 200 devices with
      a lot of interfaces

      6) Some latency data for 10 routers
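
      For a rough sense of the load: items 1) through 4) come to only about
      1,800 values per cycle (600 + 3 x 400), but if the 200 interface-heavy
      devices average, say, 20 interfaces each (a guess on my part), item 5)
      alone is 13 x 200 x 20 = 52,000 values every 15 minutes. That would
      match what we see: suspending the interface collections made things
      work.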

      Our current snmpCollect (daemon) settings:

      Defer time:                     60
      Max PDU:                        50
      Config check interval:          1440
      Max concurrent SNMP sessions:   50
      Verbose trace mode:             Yes
      Polling interval for nvcold:    60

      Thank you in advance.

      Regards,
      David


