To: | nv-l@lists.us.ibm.com |
---|---|
Subject: | RE: [nv-l] NV tuning for Data collection |
From: | James Shanks <jshanks@us.ibm.com> |
Date: | Thu, 13 Jan 2005 09:20:54 -0700 |
Delivery-date: | Thu, 13 Jan 2005 16:21:28 +0000 |
Envelope-to: | nv-l-archive@lists.skills-1st.co.uk |
In-reply-to: | <C353F42ACF29E240B9050B86F1852A4F0C267E@nlspm204.emea.corp.eds.com> |
Reply-to: | nv-l@lists.us.ibm.com |
Sender: | owner-nv-l@lists.us.ibm.com |
So that would mean that you forced snmpCollect to gather that data by making a manual update to the snmpCol.conf file.
James,

Thanks for your advice. I'll check the priority of the Cisco devices. The two MIBs I got error messages for are SNMPv2 MIBs, which I cannot browse through the MIB browser.

Regards,
David
From: owner-nv-l@lists.us.ibm.com [mailto:owner-nv-l@lists.us.ibm.com] On Behalf Of James Shanks
Sent: Thursday, January 13, 2005 3:47 PM
To: nv-l@lists.us.ibm.com
Subject: RE: [nv-l] NV tuning for Data collection

You need to check the MIB you have loaded for this. snmpCollect "expects" to get back what's in the SNMPv1 MIB database. Start the MIB browser xnmbrowser, follow down to the appropriate entries, and click the Describe button to see what data type that is.
Joe and Leslie,

Thanks for your advice. I'll do some further investigation and testing, especially to avoid collecting those "non-reply" and "non-response" objects.

One other "problem" appearing in the trace file: "MIB cpmCPUTotal5min, (and ciscoMemoryPoolFree) on 'hostname' gave type Gauge, expect TIMESTICKS (due to mib.coerce file?)"

1) I did not specify anything in the coerce file, and I checked at the Cisco site that CPU and memory should be type Gauge (32). Why is TIMESTICKS expected?
2) Does this error have an impact on the quality of data collection (e.g. delay etc.)?

Regards,
David
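When snmpCollect's expectation disagrees with the type a device actually returns, NetView's /usr/OV/conf/mib.coerce file (the one the trace message hints at) is the usual place to override the expected type. Below is a hedged sketch of what such an entry might look like; the OID shown is the standard Cisco one for cpmCPUTotal5min, but the field layout and the accepted type keywords here are assumptions — confirm them against the comments at the top of the mib.coerce file shipped with your NetView version before using this.

```
# /usr/OV/conf/mib.coerce (entry syntax assumed -- check the shipped
# file's own comments for the exact format and type keywords)
# <object id>                      <coerced type>
.1.3.6.1.4.1.9.9.109.1.1.1.1.5    Gauge
```

After editing, snmpCollect generally has to be restarted (ovstop/ovstart) before it rereads its configuration.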
From: owner-nv-l@lists.us.ibm.com [mailto:owner-nv-l@lists.us.ibm.com] On Behalf Of Leslie Clark
Sent: Wednesday, January 12, 2005 11:32 PM
To: nv-l@lists.us.ibm.com
Subject: Re: [nv-l] NV tuning for Data collection

Startup of snmpCollect may be slow. Turn on the tracing and watch what it is doing at startup, so you know whether to worry or not. I've found snmpCollect to be pretty efficient at collecting, and at minimizing its impact on the devices by grouping stuff together.

Where you may have trouble is with those per-interface values on devices with lots of interfaces. There is a parameter for the snmpCollect daemon, maxpdu, that controls how much data it will request at once. It defaults to 100 somethings. I've sometimes changed it to 50 somethings, so snmpCollect breaks the request into smaller packages. This avoids loss of data caused by the device refusing to deliver too-large responses. There are also, under Options...SNMP Config, the timeout and retries settings. I know these apply to SNMP requests from other parts of NetView, but I have never been sure whether they apply to snmpCollect or not.

You will also have trouble with some of those interface counters if the interfaces are very high speed. NetView currently will only collect 32-bit values (Counter32), and for gig interfaces, the values wrap much too quickly. So take a look at which instances you really need, and what the rate of flow really is. There is no point collecting it if it is bad data. Look for sub-interface instances with lower rates of flow and see if that will give you what you need.

When you update the snmpCollect configuration via the GUI, it updates /usr/OV/conf/snmpCol.conf and stops/starts snmpCollect. You can edit that file manually. If you find that you want to collect a variety of different interface instances on each device, you could generate that file programmatically.
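Leslie's point about Counter32 wrapping on gigabit links is easy to quantify. A minimal shell sketch (the sustained 1 Gbit/s line rate is an illustrative figure, not something from the thread):

```shell
#!/bin/sh
# A 32-bit octet counter such as ifInOctets holds 2^32 bytes before wrapping.
counter_max=4294967296     # 2^32 octets
rate_bytes=125000000       # 1 Gbit/s expressed as bytes per second
echo "wrap after $(( counter_max / rate_bytes )) seconds"
# -> wrap after 34 seconds
```

At a 15-minute (900-second) polling interval that is roughly 26 full wraps between samples, so the deltas snmpCollect computes from Counter32 on a saturated gig link are meaningless — which is exactly why Leslie suggests checking the real rate of flow before collecting.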
I'm suggesting that with large numbers of devices, entering *.*.*.* or Routers just because it is easy is not always the best choice. Try making fancy entries via the GUI and then check the results in snmpCol.conf. Then write a little script to generate the repetitive parts.

Cordially,
Leslie A. Clark
IBM Global Services - Systems Mgmt & Networking
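Leslie's "little script to generate the repetitive parts" could be as simple as stamping a template line out once per device. The sketch below is hypothetical: the snmpCol.conf line format is version-specific and not shown in the thread, so the TEMPLATE placeholder must be replaced with a real line that the NetView GUI generated, with the host field swapped for %HOST%; the host names are invented for illustration.

```shell
#!/bin/sh
# Hypothetical generator for repetitive snmpCol.conf entries.
# TEMPLATE is NOT real snmpCol.conf syntax: paste a GUI-generated line
# here and replace its host field with the %HOST% marker first.
TEMPLATE='%HOST% ...fields copied from a GUI-generated snmpCol.conf line...'
for host in rtr-core-1 rtr-core-2 rtr-edge-1; do    # your device list
    echo "$TEMPLATE" | sed "s/%HOST%/$host/"
done > /tmp/snmpCol.conf.generated
# Review the output, merge it into /usr/OV/conf/snmpCol.conf, then
# restart snmpCollect so it rereads its configuration.
```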
Hi list,

I've been reading the nv-l archives for quite some time and have benefited from them. Now I'm posting my first question to get your advice/help.

We are collecting quite a lot of data every 15 minutes (supposedly), but so far only part of the collection has happened (every day fewer than half, i.e. about 40 collections per definition per device, sometimes even none). My basic question is: can NV handle that many collections? When I suspended more than half of the collections (the interface data), it seemed to work fine. If yes, how can I tune NV? Where can I find the documentation for the tuning (snmpCollect daemon settings)?

Here's some basic info. NV 7.1.2 on Solaris 2.8. Data collection on about 600 devices with:
1) sysUpTime for all of them
2) cpmCPUTotal5min for 400 devices (Cisco)
3) ciscoMemoryPoolUsed for 400 devices
4) ciscoMemoryPoolFree for 400 devices
5) ifInUcastPkts, ifOutUcastPkts, ifInErrors, ifOutErrors, ifAdminStatus, ifOperStatus, ifLastChange, ifInOctets, ifOutOctets, ifInDiscards, ifOutDiscards, ifInNUcastPkts, ifOutNUcastPkts for about 200 devices with lots of interfaces
6) Some latency data for 10 routers

Our current snmpCollect (daemon) settings:
Defer time: 60
Max PDU: 50
Config check interval: 1440
Max concurrent SNMP sessions: 50
Verbose trace mode: Yes
Polling interval for nvcold: 60

Thank you in advance.

Regards,
David
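For a sense of scale, the per-interface collection in item 5 dominates this workload. A rough back-of-the-envelope sketch in shell arithmetic (the average of 48 interfaces per device is an assumption for illustration; the device and variable counts come from the post):

```shell
#!/bin/sh
# Rough load estimate for the per-interface collection (item 5 above).
devices=200        # devices with many interfaces (from the post)
variables=13       # ifInUcastPkts ... ifOutNUcastPkts (from the post)
interfaces=48      # ASSUMED average interface count per device
interval=900       # 15-minute collection interval, in seconds
values=$(( devices * variables * interfaces ))
echo "$values values per cycle, ~$(( values / interval )) values/second"
```

Under these assumptions that is well over 100,000 values every cycle for item 5 alone, which helps explain why suspending the interface collections made everything else run smoothly.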
Archive operated by Skills 1st Ltd