
Re: nvDColData -T ddmmyyyhhmm (how do I get $today - 14 days???)

To: nv-l@lists.tivoli.com
Subject: Re: nvDColData -T ddmmyyyhhmm (how do I get $today - 14 days???)
From: "Joel A. Gerber" <joel.gerber@USAA.COM>
Date: Sat, 17 Oct 1998 21:25:27 -0500
Reply-to: Discussion of IBM NetView and POLYCENTER Manager on NetView <NV-L@UCSBVM.UCSB.EDU>
Sender: Discussion of IBM NetView and POLYCENTER Manager on NetView <NV-L@UCSBVM.UCSB.EDU>
You should take a look at SAS Institute's IT Service Vision product.  You
can run it on the platform of your choice: MVS, AIX, NT, etc.  It knows how
to extract data from the snmpCollect files (it makes a call to snmpColDump).
It is almost effortless to maintain the databases.  The reporting can be
easy or quite challenging depending on your reporting requirements.  If you
have no one with SAS expertise, I would recommend contracting with a SAS
consultant to get you started (SAS Institute provides the service, but there
are LOTS of other private companies that can work with ITSV).  SAS is not an
inexpensive solution, but you get the benefit of producing any reports you
want in text, graphic, or web-based formats.

We are running ITSV on an AIX platform (J40 with 1GB RAM), collecting data
from about 250 devices, and 1700 interfaces.  The raw SNMP data is imported
into the SAS database at midnight every night (takes about 4 1/2 hours) and
then several hundred reports are generated (takes another 3-4 hours).  We
are implementing a report-on-demand function through the web which takes
about 10 seconds to generate a report.  Once this is in place, the several
hundred reports will be cut down to about 30, saving processing time
and disk space.  Our SAS consultants have built a very flexible exception
reporting engine, baseline reports, trending reports, "Top N" reports, etc.

        -----Original Message-----
        From:   Dave Luiz [SMTP:daveluiz@jps.net]
        Sent:   Friday, October 16, 1998 22:07
        To:     NV-L@UCSBVM.UCSB.EDU
        Subject:        Re: nvDColData -T ddmmyyyhhmm (how do I get $today - 14 days???)

        Yes, we are. At first, we too wanted to keep x days' worth of data
        in the collection binary files. Our manuals have an example of how
        to do this, but it contained many syntax errors. We had to play
        with it a bit to get it to work. However, once we got it working,
        we discovered two things. The first is that the binary reload
        process was far too slow for the amount of data we were pushing
        through it. The second is that keeping any more than one day's
        worth of data in the binary files was too much data to view
        effectively in the graphs anyway (we have 58,000+ objects in our
        database).

        So, now we dump the binary collection files every day in the early
        morning hours, then delete them. This effectively assures we keep
        no more than 24 hours' worth of collection data in the binary
        files. After dumping the binary files, we reformat the ASCII
        output to our liking, then FTP it to MVS/DB2, which does an
        automatic DB load as soon as the FTP transfer is done.
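
        The dump-and-reformat step described above can be sketched roughly
        as follows. This is only an illustration, not the poster's actual
        script: the output path and the tab-separated record layout are
        assumptions, and the FTP-to-MVS step is omitted. snmpColDump
        writes a binary snmpCollect file to stdout as ASCII, one record
        per line.

        ```python
        import subprocess
        from pathlib import Path

        def reformat_record(line: str) -> str:
            """Reformat one ASCII record from snmpColDump.  Here we simply
            collapse whitespace into tabs; a real script would reorder or
            relabel fields to match the target DB2 table."""
            return "\t".join(line.split())

        def dump_collection(binary_file: Path, out_file: Path) -> None:
            """Dump one binary collection file to ASCII and reformat it.
            NetView keeps these files under /usr/OV/databases/snmpCollect."""
            result = subprocess.run(
                ["snmpColDump", str(binary_file)],
                capture_output=True, text=True, check=True,
            )
            with out_file.open("w") as out:
                for line in result.stdout.splitlines():
                    out.write(reformat_record(line) + "\n")
        ```

        The reformatted file would then be FTPed to the mainframe (e.g.
        with Python's ftplib) and the binary file deleted, as the poster
        describes.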

        We collect Cisco's avgBusy5 CPU load MIB and input/output line
        utilization via MIB expressions every 15 minutes on nearly 800
        routers (some of which have over 140 interfaces). The entire
        process takes anywhere from 3-5 hours to run, depending on network
        traffic (we make additional SNMP calls while snmpColDump is
        running). Every 24 hours, we dump, reformat, FTP, and load into
        MVS/DB2 over 140 MB of collection data! The database purges data
        older than 60 days. The process finishes several hours before our
        help desk staff arrive and is remarkably reliable.

        Now if we could only find a graphing/trending tool like Concord's
        Network Health that didn't cost an arm and a leg, and that could
        use our collected data as input, we would be set. Our customers
        are demanding, rather loudly, performance/trending stats on how
        their networks are performing.

        I can email you privately if you want to talk specifics.

        Brook, Bryan S wrote:

        >         Anyone here sending their NetView collections to an
        > RDBMS?  We are, and we want to store only the last 2 weeks'
        > worth of data.  So we want to script up
        > "delete collection data > 2 weeks old".  How do you get a
        > "- 2 weeks" with the date format as shown?  I can't believe this
        > isn't a common request, but the command won't support it without
        > some prior date math.  Anyone have it down already?
        >
        >         Thanks,
        >         Bryan
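
        The "$today - 14 days" arithmetic Bryan asks about is a small
        piece of date math in any language with a calendar library. A
        sketch in Python, assuming the timestamp layout really is
        day-month-year-hour-minute as the subject line suggests (the
        subject appears to have dropped a "y" from ddmmyyyyhhmm):

        ```python
        from datetime import datetime, timedelta
        from typing import Optional

        def cutoff_stamp(days_back: int, now: Optional[datetime] = None) -> str:
            """Return (now - days_back days) formatted as ddmmyyyyhhmm,
            e.g. for use as a cutoff timestamp argument."""
            if now is None:
                now = datetime.now()
            return (now - timedelta(days=days_back)).strftime("%d%m%Y%H%M")

        print(cutoff_stamp(14))
        ```

        The resulting string could then be substituted into the -T
        argument of the dump command by whatever shell or Perl wrapper
        drives the nightly job; timedelta handles month and year
        boundaries, which is where hand-rolled date subtraction usually
        goes wrong.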


Archive operated by Skills 1st Ltd

See also: The NetView Web