Hello,
A couple of months ago I volunteered to take ownership of our NetView
installation and start supporting it (I was a complete newbie to NetView,
btw). We've had an installation in house for many years, but the group
responsible for it was given other tasks, and my group is the one
responsible for monitoring our data center environment. For the last couple
of months I have been searching and reading these forum discussions, reading
the documentation provided by IBM, and receiving training from the group that
had supported it. I was hoping I could benefit from the experience of some
seasoned NetView admins and learn how they do some of these things, so please
bear with me on some of these questions... :-)
One of the benefits of the swap of ownership was the chance to rebuild our
NetView environment. Our current environment is running 7.1.1 on AIX, and
the new environment is being built on Red Hat Linux 7.2 running NetView
7.1.4. Based on what I've been reading, I don't think the current
environment is using the netmon.seed file the way it was intended, which
leads me to my first question: what is the best way to make use of the
netmon.seed file? As I understand it, the recommended use is to add all
SNMP-aware devices that you want NetView to know about and keep those IPs
listed in the file (currently the file is overwritten each time a new device
is added, so it lists just the new device's IP, and then a netmon -Y is
done). The documentation clearly states that you should only add SNMP
addresses to this file and not IPs that don't respond to SNMP. If you happen
to have a list of addresses that you want NetView to keep track of, which
could include IPs for servers that may or may not respond to SNMP, I'm
guessing that discovery would be set up so that each of the non-SNMP IPs
behind switches would then be found.
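Just to make sure I'm picturing the recommended approach correctly, this is
the kind of cumulative seed file I have in mind (the addresses and hostnames
below are made up, and I'm going from memory on the comment syntax):

# netmon.seed -- one SNMP-capable device per line, hostnames or IPs
# core routers
10.1.1.1
10.1.1.2
# data center switches
dcswitch01.example.com
dcswitch02.example.com

In other words, new devices would be appended with the old entries left in
place, and netmon told to re-read the file (the netmon -Y step mentioned
above) after each change, rather than the file being replaced each time.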
I have been playing a lot with discovery, and the documentation is a bit
confusing on the subject. There are two different places where discovery
behavior can be changed: the range of automatic discovery and the "discover
new nodes" option. What are the effects of these options on NetView? I've
played with both trying to figure it out and have not had much luck. With
the range set to backbone, I seem to eventually get the same devices I would
if I were to set it to all networks. Today I cleared the database and set
the range to local subnet, and again NetView discovered roughly the same
number of devices. I used the same complete netmon.seed file in each case.
I haven't found the documentation to be very clear in explaining what these
settings do, so I was hoping someone would be kind enough to help out here.
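In case it helps explain how I've been comparing these runs, I just snapshot
the topology counts after each discovery settles, along these lines (file
names are only examples):

ovtopodump -l > /tmp/counts-localsubnet.txt
ovtopodump -l > /tmp/counts-backbone.txt
diff /tmp/counts-localsubnet.txt /tmp/counts-backbone.txt

which is how I noticed the node counts coming out roughly the same regardless
of the range setting.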
To continue the discussion of discovery: which choices are the most common or
appropriate? Is there a best-practices document that one should follow?
Originally I wrote a script that duplicated the current NetView environment
on the new systems, properly setting SNMP information, loading the hosts,
setting up the rules for smart set membership, etc. When I started reading
about discovery I thought that would be the better way to go. The old group
responsible for NetView has told me a number of times to go with manually
maintaining nodes and bypass discovery. Apparently we have a number of
devices on our networks that would be continually picked up by NetView and
would just need to be deleted or unmanaged over and over again. I did find a
fantastic table in one of the NetView documents explaining the "Discovery
Process Using a Seed File". It lists what the minimum and maximum
discovered nodes would be based on the discovery settings and the entries in
the netmon.seed file. If I were to manually maintain the list of nodes but
wanted to follow the recommended netmon.seed use, would I use the loadhosts
utility to insert servers directly into the database?
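If loadhosts is the right tool for that, is this roughly the usage? I'm going
from memory on the flags, and the netmask and entries below are just
illustrative:

# hosts.load -- /etc/hosts-style entries, one address and name per line
10.1.5.21   appserver01.example.com
10.1.5.22   appserver02.example.com

loadhosts -m 255.255.255.0 < hosts.load

i.e. feed it /etc/hosts-format input with an appropriate subnet mask so the
nodes land in the topology database without waiting for discovery to find
them.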
Last but not least, I'm curious about the database sizes and the performance
others are seeing on Linux. I have two systems that I'm playing with, and I
thought it would be fun to set a discovery range of all networks with a fully
populated netmon.seed file and just let it run. After clearing the databases
and five days of discovery, NetView returned the following:
# ovtopodump -l
TOPO OBJECT ID: 261
TIME CREATED: Fri 09 Jan 2004 03:36:39 PM CST
TIME MODIFIED: Fri 16 Jan 2004 11:09:19 AM CST
GLOBAL FLAGS:
NUMBER OF NETWORKS: 2493
NUMBER OF SEGMENTS: 2493
NUMBER OF NODES: 15277
NUMBER OF INTERFACES: 18571
NUMBER OF GATEWAYS: 516
An ovobjprint -s takes more than 10 minutes to complete, while looking up a
single node seems relatively quick. The X interface is extremely sluggish,
and snmpCollect and ovwdb will consistently peg one of the two processors in
this server between them, with a 5-minute status-polling interval. The
servers have dual 1266 MHz Pentium IIIs with 2 GB of RAM and 2 GB of swap
space. I've read the diagnosing-performance-issues document, and I am
beginning to think that what I'm seeing here is normal. How do the discovery
options affect performance? Based on my testing today with local subnets,
I'm guessing that with a fully loaded netmon.seed file I should have just
local subnets turned on.
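For what it's worth, here is roughly how I've been watching the load, in case
I'm measuring the wrong thing (the sort/head pipeline is just generic Linux,
nothing NetView-specific):

ps aux | sort -rnk 3 | head     # which processes are eating the CPUs
ovstatus                        # check the state of the NetView daemons

and it's consistently snmpCollect and ovwdb sitting at the top of that list.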
I apologize for the size of this message. I wasn't sure whether breaking it
into a bunch of messages would be more appropriate, but I figured we all
probably get enough mail as it is, so I wrote just this one. Next time I'll
be much more precise with my questions :)
If you've gotten this far, thanks for reading, and thanks in advance for any
help or advice!
Jason Duppong