Thanks James, and to everyone else who replied. Ultimately there will be
central site control. We're consolidating from over 100 data centers to 10, and
there will be two NOCs. Currently there is no NOC.
A consultant previously came in and set up two redundant boxes that managed the
whole network. They spent a great deal of effort creating very intricate seed
files and location files. The problem is that the network changes frequently
and no one kept those files up to date. Within a very short time, no one was
using NetView at all.
The network is distributed all over the US, with a lot of slow links in a
cascading hub-and-spoke design. Some locations have as few as 10 devices.
Bandwidth is a big problem, so the goal has been to distribute the polling and
send only significant events up to TEC. So far, only one of the 10 regions has
been partially set up. NetView is allowed to discover everything, but a ruleset
is then used to unmanage anything that is not in the Cisco Devices, Tivoli
Objects, or Server SmartSets. The Server SmartSet is built by hostname (all
servers start with "sv"). In the end there will probably be about 4000 devices
being monitored enterprise wide.
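For reference, that hostname-based SmartSet boils down to a single collection
rule. The exact syntax depends on the SmartSet editor and the NetView release,
but it is roughly:

    'IP Hostname' ~ 'sv*'

i.e., match any node whose hostname begins with "sv".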
The intent is to automate as much as possible and minimize network traffic. I
wasn't recommending a master map but was given marching orders to investigate
its feasibility. I think either a central NetView or having multiple web
clients in the NOC to view the regional NetViews might make more sense.
Thanks again
Bob
________________________________
From: owner-nv-l@lists.us.ibm.com on behalf of James Shanks
Sent: Mon 11/1/2004 8:54 AM
To: nv-l@lists.us.ibm.com
Subject: Re: [nv-l] Master Map
There have been some good replies to this issue, but I haven't yet seen anyone
else ask what I take to be the pertinent questions.
(1) What's the point? Are you planning on instituting central site control or
does your home office just want to know what's going on in the regions, without
having to ask? Is this supposed to be for backup or just information?
(2) Have you considered a low-overhead alternative, like having multiple web
clients, each connected to a remote region? Each of those regions could even
make a separate map for you with only the pertinent devices managed and
everything else unmanaged. Call it the "headquarters map". That's a lot
easier to implement, I think, than the programming you are proposing, unless
you are not using web clients at all.
(3) How big a box can you get for this? That's a key issue here I think,
because that may well determine what we can do. I'm presuming that you were
planning to have this master NetView on a separate machine.
(4) Of the 4000 nodes at each of the ten locations, how many actually fall into
the class of those you want to monitor -- servers, switches, and routers?
Knowing that will allow you to figure out the minimum size box you'll need,
memory-wise. There are sizing rules in the books so you can match the hardware
you have to what has been found in the past to be minimally sufficient.
Consider this. 40,000 nodes is not out of the question for NetView to manage
from one machine, given that he has good connectivity and a big enough box,
with lots of memory and at least a four-way processor. So your central
location could just start with a location.conf file to partition out the ten
regions, and go from there. If your regions have their own location.conf
files, you could just import those into the new one, and turn netmon loose.
Just ten good seeds, a router from each region, and he ought to discover most
of the whole thing in just a couple of days or so.
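As a sketch (the router names here are hypothetical), such a seed file is just
one hostname or address per line, and netmon treats each entry as a starting
point for discovery:

    rtr-region01.example.com
    rtr-region02.example.com
    ...
    rtr-region10.example.com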
My view is basically that you'd be better off with a real central NetView rather
than one which is just a shell. Even if that turns out to be infeasible from a
performance standpoint, you could populate the database initially by letting
netmon do it, rather than loadhosts. It's easier to unmanage or even delete
what you don't want than to load it. Then you can try a sample ruleset and update
script and see how it works. The idea of having a shell master NetView is not
one which has been studied, so far as I'm aware, so it's not clear to me that
you can get much help determining in advance how feasible it is, unless, by
chance, someone else has already done it.
James Shanks
Level 3 Support for Tivoli NetView for UNIX and Windows
Tivoli Software / IBM Software Group
"Quinn, Bob" <Bob_Quinn@sra.com>
Sent by: owner-nv-l@lists.us.ibm.com
10/29/2004 02:11 PM
Please respond to
nv-l
To
<nv-l@lists.us.ibm.com>
cc
Subject
[nv-l] Master Map
Excuse the newbie question but ...
I have a co-worker who is not a NetView expert who would like me to make
NetView do something it is not designed to do. I'd like to tell him he's nuts.
We will have several NetView installations (7.1.4 FP2 AIX 5.1) in different
regions across the US each discovering and monitoring devices only in its own
region (about 4000 nodes per region - 10 regions total). He believes there
must be a way to create a master map that does not do its own discovery or
polling (disabled in Options Topology/Status Polling) but is fed from the
regional NetViews. If a regional NetView discovers a device and it is a
router, switch or server (controlled by SmartSets) he proposes it send a trap
to the master console that will then execute a script that runs loadhosts and
adds the device to the master map. He also proposes that status changes
detected by the regional NetViews initiate traps to the master and change the
status on the master map. I've read James info that was posted a while back on
changing the status of an icon. While each individual piece of what my
coworker is proposing seems technically feasible on the surface, the solution as
a whole doesn't seem practical to me.
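For what it's worth, the loadhosts half of his idea is mechanically simple.
Here is a minimal sketch of the trap-action script, written in Python for
brevity (a shell script would be more typical on AIX); the argument positions
and the netmask are assumptions, not NetView defaults:

    #!/usr/bin/env python3
    # Hypothetical trap-action handler: takes the new device's IP address
    # and hostname from the arguments the automatic action passes in, then
    # feeds an /etc/hosts-format line to loadhosts, which adds the node to
    # the NetView databases without running discovery.
    import subprocess
    import sys

    ip_addr, hostname = sys.argv[1], sys.argv[2]

    subprocess.run(
        ["/usr/OV/bin/loadhosts", "-m", "255.255.255.0"],
        input=f"{ip_addr} {hostname}\n",
        text=True,
        check=True,
    )

The hard part is everything around that script: keeping adds, deletes, and
status changes on the master in step with ten regional databases.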
So which one of us is nuts?
Thanks
Bob