Hi Jason,
Start with Leslie's recommendations - spot on!
The parameter to netmon that lets you discover local network, backbone
or All is relatively new. The default is local network and I'd probably
leave it that way. If your scenario is ONLY to find routers and
switches throughout your environment, then maybe the backbone option is
appropriate, but most NetViews end up managing a few important servers
too. If you set this parameter to All, then NetView will try to
discover the world - any holes you have out to the internet where your
ISP still uses a community name of public on their routers (and I've
seen that!!) mean you may get a much bigger database than you want....
Re seed files - follow Leslie's approach. I generally put in a few more
routers as explicit discovery nodes to speed up the discovery process.
(Note that the comment saying entries in a seed file must support SNMP
applies only to explicit discovery nodes.)
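As a sketch, mine tend to look something like the following. The
explicit-node form is standard; I'm writing the range and exclusion
syntax from memory, so double-check it against the Administrator's
Guide before relying on it (names and addresses here are made up):

   # /usr/OV/conf/netmon.seed
   # Explicit discovery nodes - these must answer SNMP
   corerouter1.mycompany.com
   corerouter2.mycompany.com
   # Fence discovery into our own address space...
   10.1.*.*
   # ...and keep netmon out of the lab subnet
   !10.1.99.*

Restart netmon (ovstop netmon; ovstart) after editing so it re-reads
the file.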
that says "Force the discovery of these core switches by the interfaces
that resolve to these names" - the implication behind this is that you
have a good DNS available - if not, build one on your NetView server.
Put the loopback / management address in your forward DNS files and make
sure you have all interfaces configured in the reverse DNS files.
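For a switch with a management address of 10.1.0.1 and interfaces on
10.1.1.0/24, that means zone entries roughly like these (hypothetical
names and addresses, ordinary BIND zone-file syntax):

   ; forward zone - the name resolves to the management address only
   coreswitch1.mycompany.com.        IN  A    10.1.0.1

   ; reverse zone 1.1.10.in-addr.arpa - a PTR for every interface
   1     IN  PTR  coreswitch1-vlan1.mycompany.com.
   2     IN  PTR  coreswitch1-vlan2.mycompany.com.

That way the node is labelled and polled via its management address,
but netmon can still put a name to any interface address it stumbles
across.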
How much memory does your Linux system have? There is good, new
planning info at the back of the NV 7.1.4 Release Notes. You don't say
what the output of "ovobjprint -S" was (but it shouldn't take 10
minutes!) - from your ovtopodump output, it looks like you are in the
"Medium Network" bracket of 10,000 - 25,000 interfaces. The suggestion
is a 2-4 processor system with 1-2 GB of memory. They also offer some
tuning tips, including Leslie's comment to check the ovwdb cache size -
this can be a real "golden screwdriver" magic fix for performance
problems. Given the spec you have, I would say you definitely shouldn't
be seeing such poor performance (provided you are not using the box for
something else as well?). The other big memory consumer is lots of
concurrent X interfaces - see page 67 of the Release Notes for a formula
to help calculate this.
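On the ovwdb cache, the recipe I use is roughly the following (the -n
option and the lrf path are as I remember them on NetView for Unix -
verify against the tuning section of the Release Notes):

   # See how many objects are in the database
   ovobjprint -S
   # Make the cache bigger than that count by editing the ovwdb line
   # in /usr/OV/lrf/ovwdb.lrf to include something like -n 60000, then:
   ovaddobj /usr/OV/lrf/ovwdb.lrf
   ovstop ovwdb
   ovstart

If the cache is smaller than the object count, ovwdb thrashes, and
everything that talks to it - maps, smartsets, ovobjprint - slows down
with it.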
Cheers,
Jane
Duppong, Jason wrote:
Hello,
A couple of months ago I volunteered to take ownership of our NetView
installation and start supporting it (I was a complete newbie to NetView,
btw). We've had an installation in house for many years, and the group
responsible for it was given other tasks, while my group is the one
responsible for monitoring our data center environment. For the last
couple of months I have been searching and reading these forum
discussions, reading the documentation provided by IBM, and receiving
training from the group that previously supported it. I was hoping I
could learn from some experienced NetView admins how they do some things,
so please bear with me on some of these questions... :-)
One of the benefits of the swap of ownership was the rebuilding of our
NetView environment. Our current environment is running 7.1.1 on AIX, and
the new environment is being rebuilt on Red Hat Linux 7.2 running NetView
7.1.4. Based on what I've been reading, I don't think the current
environment is utilizing the netmon.seed file the way it was intended,
which leads me to my first question: what is the best way to make use of
the netmon.seed file? As I understand it, the recommended use is to add
all SNMP-aware devices that you want NetView to know about and keep those
IPs listed in the file (currently the file is overwritten each time a new
device is added, listing just the new device's IP, then doing a netmon -Y).
The documentation clearly states that you should only add SNMP addresses
to this file and not IPs that don't respond to SNMP. If you happen to have
a list of addresses that you want NetView to keep track of, which could
include IPs for servers that may or may not respond to SNMP, I'm guessing
that discovery would be set up so that each of the non-SNMP IPs behind
switches would then be found.
I have been playing a lot with discovery, and the documentation is a bit
confusing on the subject. There are two different places where discovery
behavior can be changed: the range of automatic discovery and the
"discover new nodes" option. What are the effects of these options on
NetView? I've played with both trying to figure it out and have not had
much luck. With the range set to backbone, I seem to eventually get the
same devices I would if I were to set it to all networks. Today I cleared
the database and set the range to local subnet, and again NetView
discovered roughly the same number of devices. I used the same complete
netmon.seed file in each case. I haven't found the documentation to be
really clear on explaining what these settings do, so I was hoping someone
would be kind enough to help out here.
Furthering the discussion of discovery, what choices are the most common
or appropriate? Is there a best-practices document that one should follow?
Originally I wrote a script that duplicated the current NetView
environment on the new systems, properly setting SNMP information, loading
the hosts, setting up the rules for smart set membership, etc. When I
started reading about discovery, I thought that would be the best way to
go. The old group responsible for NetView has told me a number of times to
go with manually maintaining nodes and bypass discovery. Apparently we
have a number of devices on our networks that would be continually picked
up by NetView and would just need to be deleted or unmanaged over and over
again. I found a fantastic table in one of the NetView documents
explaining the "Discovery Process Using a Seed File". It lists what the
minimum and the maximum discovered nodes would be based on the discovery
settings and the entries in the netmon.seed file. If I were to manually
maintain the list of nodes but wanted to follow the recommended
netmon.seed use, would I use the loadhosts utility to insert servers
directly into the database?
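(From the man page I'm guessing the usage would be something like

   loadhosts -m 255.255.255.0 < serverlist

with serverlist in /etc/hosts format, e.g. "10.1.5.20 appserver1" -
please correct me if I've misread that.)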
Last but not least, I'm curious about the size of the databases and the
performance on Linux that you are seeing. I have 2 systems that I'm
playing with, and I thought it would be fun to set a discovery range of
all networks with a fully populated netmon.seed file and just let it run.
After clearing the databases and 5 days of discovery, NetView returned the
following:
# ovtopodump -l
TOPO OBJECT ID: 261
TIME CREATED: Fri 09 Jan 2004 03:36:39 PM CST
TIME MODIFIED: Fri 16 Jan 2004 11:09:19 AM CST
GLOBAL FLAGS:
NUMBER OF NETWORKS: 2493
NUMBER OF SEGMENTS: 2493
NUMBER OF NODES: 15277
NUMBER OF INTERFACES: 18571
NUMBER OF GATEWAYS: 516
Doing an ovobjprint -S takes more than 10 minutes to complete, while
looking up a single node seems relatively quick. The X interface is
extremely sluggish, and snmpCollect and ovwdb will consistently share the
pegging of one of the 2 processors in this server with a 5-minute status
polling interval. The servers contain dual 1266 MHz Pentium 3s with 2 GB
of RAM and 2 GB of swap space. I've read the diagnosing-performance-issues
document, and I am beginning to think that what I'm seeing here is normal.
How do the discovery options affect performance? Based on my testing today
with the local subnets, I'm guessing that with a fully loaded netmon.seed
file I should have just local subnets turned on.
I apologize for the size of this message. I wasn't sure if breaking it
into a bunch of messages would be more appropriate, but I figured we all
probably get enough mail as it is, so I wrote just this one. Next time
I'll be much more precise with my questions :)
If you got this far, thanks for reading, and thanks in advance for any
help/advice you can give!
Jason Duppong
--
Tivoli Certified Consultant & Instructor
Skills 1st Limited, 2 Cedar Chase, Taplow, Bucks, SL6 0EU, UK
Tel: +44 (0)1628 782565
Copyright (c) 2004 Jane Curry <jane.curry@skills-1st.co.uk>. All rights
reserved.