
Netview 6.0.2 & 8274 Problem

To: nv-l@lists.tivoli.com
Subject: Netview 6.0.2 & 8274 Problem
From: ΦΟΥΝΤΟΥΛΑΚΗΣ ΑΓΓΕΛΟΣ (Angelos Fountoulakis) <agf@gnto.gr>
Date: Fri, 23 Mar 2001 00:48:18 +0200
Hi from Greece,

This is my first post to this group.

I am having a "small" problem with a NetView installation at a customer's
site.

The network consists of Token Ring (80%) and Ethernet (20%) devices.
We have:
2 IBM 8274 GRS switches
6 IBM 8250 multiport Token Ring hubs (with Advanced Management Modules)
1 IBM 8272 Nways Token Ring switch
1 IBM 8273 Ethernet switch
Various Motorola routers (6560, 6520, 320, etc.; about 20 of them)

The NetView server (6.0.2) is running on NT 4.0 Enterprise Server SP6a
and is connected to the Ethernet side of the network (an 8274 Ethernet card).

The 8274 connects (via a Token Ring card) to an IBM 8250 hub, which is
connected to the network's central switch, the 8272. (All 8250 hubs connect
to the 8272 TR switch.) We could NOT connect the 8274 TR side directly to
the 8272, so we had to go through an IBM 8250; IBM Greece tried this and
had no luck either.

The central router (Motorola 6560) has 12 WAN and 2 Token Ring interfaces.
The Token Ring side connects to the 8250 hub mentioned above (the one with
the 8274 and 8272).

The 8274 GRS serves as a BRIDGE for our network: it is the device that
connects the 802.3 part of our LAN with the 802.5 part.

After 3 days of operation, the NetView server has discovered only 150 nodes
(out of 400), and the discovery process is VERY slow. (I had to ping some
nodes manually to get them onto the map.)

We are running the latest microcode on ALL IBM machines, we have set the
correct community names and trap settings on our network devices, and we
have edited the oid_to_type file to avoid bad OIDs (e.g. 8272, 8274, 8273).

The 8250 hubs show up as 3Com devices (no problem there).
The 8272 Token Ring switch gets drawn on the Ethernet side of the map
(along with ALL of the PCs that are not running SNMP agents) in the default
bus map. The 8272 is running SNMP with the correct configuration, and we
have loaded the appropriate MIB files from IBM (tr_a523.mib & dtrca523.mib).
We had to add the full objectId for the 8272 to the oid_to_type file for
NetView to recognize it as a hub.
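For reference, this is a sketch of the kind of oid_to_type entry involved. The enterprise OID tail and the exact field layout are placeholders here, not the entry we actually used; check the sysObjectID your 8272 reports and the field format documented for your NetView version:

```
# sysObjectID : Vendor : Agent-description : flag  (H = hub)
# 1.3.6.1.4.1.2 is IBM's enterprise prefix; .x.y below is a placeholder.
1.3.6.1.4.1.2.x.y:IBM:IBM 8272 Nways Token Ring Switch:H
```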


The REAL problem is that after tracing netmon (netmon -M -1), I see a lot
of ping timeouts. Some of these errors (from netmon.trace):


----------------------------netmon.trace---------------------------
03/22/01 04:59:14:sending ping to 172.17.0.17 seqnum = 128 ident = 122 timeout = 4
03/22/01 04:59:14:-- received stuff 0x1 **
03/22/01 04:59:14:handle_icmp_error()-> received a port unreach with sport!=snmp_port from 172.16.100.12
03/22/01 04:59:14:recv_pings finished within time slice, used 0 of 60000 ms.
03/22/01 04:59:14:-- received stuff 0x1 **
03/22/01 04:59:14:handle_echo_reply rtt=5759
----------------------------netmon.trace---------------------------

Note:  IP 172.16.100.12 is an IBM 8274 GRS.
Note 2: IP 172.17.0.17 is a PC that is alive but not running SNMP.
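To see how widespread these failures are, the trace can be tallied mechanically instead of read line by line. A small Python sketch, using the excerpt above as sample input; the regular expressions are guesses based only on the visible log format:

```python
import re
from collections import Counter

# Sample lines copied from the netmon.trace excerpt above.
TRACE = """\
03/22/01 04:59:14:sending ping to 172.17.0.17 seqnum = 128 ident = 122 timeout = 4
03/22/01 04:59:14:-- received stuff 0x1 **
03/22/01 04:59:14:handle_icmp_error()-> received a port unreach with sport!=snmp_port from 172.16.100.12
03/22/01 04:59:14:recv_pings finished within time slice, used 0 of 60000 ms.
03/22/01 04:59:14:-- received stuff 0x1 **
03/22/01 04:59:14:handle_echo_reply rtt=5759
"""

def summarize(trace: str) -> dict:
    """Count pings sent, echo replies received, and ICMP errors per source."""
    pings = len(re.findall(r"sending ping to (\S+)", trace))
    replies = len(re.findall(r"handle_echo_reply", trace))
    errors = Counter(re.findall(r"handle_icmp_error\(\).* from (\S+)", trace))
    return {"pings_sent": pings, "echo_replies": replies,
            "icmp_errors": dict(errors)}

summary = summarize(TRACE)
print(summary)
```

Running it over the full netmon.trace would show at a glance whether the port-unreachable errors all come from the 8274's address.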

I also used a sniffer program to capture the TCP/IP traffic on the NetView
server's Ethernet NIC and saw that none of its ARP/RARP requests get
answered. NetView seems to use broadcast (a lot) to find various nodes, and
something appears to be denying broadcast traffic, making discovery a very
slow process.

If I ping a device from the command line, NetView immediately recognizes it
and maps it.
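Since a single manual ping is enough to get a node mapped, one stopgap while the broadcast problem is investigated is to sweep the address ranges with unicast pings so the replies reach netmon. A rough Python sketch; the subnet is a placeholder for the real ranges, and the ping flags are NT-style:

```python
import ipaddress
import subprocess

def sweep(network: str, dry_run: bool = False):
    """Ping every host address in `network` once, returning the commands run.

    With dry_run=True the commands are only built, not executed, which is
    handy for checking the address range first.
    """
    cmds = []
    for host in ipaddress.ip_network(network).hosts():
        # NT ping syntax: -n 1 = one echo request, -w 1000 = 1000 ms timeout.
        cmd = ["ping", "-n", "1", "-w", "1000", str(host)]
        cmds.append(cmd)
        if not dry_run:
            subprocess.run(cmd, stdout=subprocess.DEVNULL)
    return cmds

# Placeholder range: just enumerate the commands without executing them.
cmds = sweep("172.17.0.16/30", dry_run=True)
```

This does not fix the underlying broadcast/ICMP filtering, but it would at least get the remaining nodes onto the map the same way the manual pings did.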

I think the 8274 has something to do with this situation (denying
broadcasts and, sometimes, ICMP traffic).

Could it be that the 8274 treats NetView traffic as "garbage"?
What settings / threshold limits are available on the 8274 for ICMP and
broadcast traffic?
Any experience with NetView and the 8274 (without Nways Manager)?
Does it appear OK on your network maps?

Thanks in advance

Angelos Fountoulakis
Systems Engineer
Sysware S.A.
+301-687 2200

PS: I apologize for the length of my post.


Archive operated by Skills 1st Ltd
