Issue of November 2002 

Focus: Network Monitoring
Network Management: How much do you need?

There is no doubt you need a good network management system, but how do you decide how much is enough? by Graeme K. Le Roux

Most desktop PCs these days have built-in Ethernet NICs, and many of them have management agents supporting SNMP and/or RMON. But before you enable them, consider what you need to know from users' NICs. Note that we are not talking about the PC's operating system or the applications it is running, but about its connection to the company network. You'll also need to consider how much of that information you can use remotely.

The two most common problems with a NIC are that it has been misconfigured, or that it is dead. If you are using DHCP and everyone else's PC is working, a misconfigured NIC means the user has been fiddling, and the solution does not involve network management. If the NIC is dead you have to replace it, and again it's not a management problem.

So what about the hub or switch that the desktop PC is attached to? What useful management data can you pull from that, and what configuration options can you make use of?

The most obvious piece of information a hub can give you is whether a port is working, but this is not always useful. In most cases hub ports either work or they don't, and most are auto-sensing, so configuration is automatic. If the port is dead you'll have to go to the hub, physically move the cable from the dead port to a working one, and then get the dead port fixed. You may wish to disable a hub port if it is not in use, or configure it to accept traffic only from a particular MAC layer address, but most companies won't need, or want, that level of control.
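Per-port MAC filtering of the kind just described amounts to an allow-list check on each frame's source address. The sketch below illustrates the idea; the port names and addresses are invented for illustration, and a real managed hub would do this in hardware.

```python
# Sketch of per-port MAC address filtering, the level of control a
# managed hub can offer. Ports and MAC addresses here are hypothetical.

ALLOWED = {
    "port-1": {"00:1a:2b:3c:4d:5e"},   # one registered workstation per port
    "port-2": {"00:1a:2b:3c:4d:5f"},
}

def accept_frame(port: str, src_mac: str) -> bool:
    """Accept a frame only if its source MAC is registered for that port."""
    return src_mac.lower() in ALLOWED.get(port, set())

print(accept_frame("port-1", "00:1A:2B:3C:4D:5E"))  # registered -> True
print(accept_frame("port-1", "00:de:ad:be:ef:00"))  # unknown -> False
```

As the article notes, maintaining such a table for every desktop port is more administration than most companies want; doing the filtering once, at a central choke point, is usually cheaper.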

Being able to remotely disable or enable a hub port is of dubious value when users can always add a desktop concentrator to their existing desktop port (in 100Base-TX environments this can be done if you pick the right box). For filtering based on MAC layer addresses, consider doing it at a central point between your local hubs and your server farm, using anything from a "secure" switch to a firewall. Other problems between the desktop and the hub are mostly due to cabling faults, and the fastest and most cost-effective way to find and fix these is with a network analyzer.

One argument for having manageable hubs and switches to connect your users' PCs is that it allows you to implement VLANs, via 802.1p and 802.1Q. However, this capability can also be provided by using cheap, unmanaged switches and multiple backbone cables to a central and large switch farm in your IT glass house. Which option you take depends on how you intend to use VLANs, how many users you need each hub to support, and the cost of equipment.

Assuming you use zone cabling for horizontal runs on each of your building's floors and fiber in your risers, the cost difference between using VLANs based on peripheral hubs as against central ones is largely the cost of terminating the extra fibers and the cost of cable.
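That cost trade-off can be put in rough numbers. The sketch below compares the two approaches using entirely invented unit prices (these are not vendor figures); the point is only that the central option's premium is the extra fiber terminations and cable, as stated above.

```python
# Back-of-the-envelope cost comparison of the two VLAN placements:
# managed hubs at the edge, versus cheap unmanaged switches plus extra
# backbone fibers to a central switch farm. All prices are hypothetical.

def edge_vlan_cost(floors: int, managed_hub_price: float) -> float:
    """One managed hub per floor carries the VLAN intelligence."""
    return floors * managed_hub_price

def central_vlan_cost(floors: int, unmanaged_switch_price: float,
                      extra_fibers_per_floor: int, termination_cost: float,
                      cable_cost_per_run: float) -> float:
    """Cheap edge switches; VLANs live in the glass house, so each floor
    needs extra fiber runs (two terminations per run) to the core."""
    per_floor = (unmanaged_switch_price
                 + extra_fibers_per_floor
                 * (2 * termination_cost + cable_cost_per_run))
    return floors * per_floor

# Illustrative only: 10 floors, invented prices.
print(edge_vlan_cost(10, 4000))
print(central_vlan_cost(10, 800, 3, 150, 400))
```

With numbers like these the central option comes out cheaper, but the balance shifts with port counts and fiber pricing, which is the article's point: run your own figures before buying manageable edge gear.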

The simple fact is that, given reasonable design and reliable equipment, the average network will benefit more from trained support personnel plus a good network testing and analysis tool, than sophisticated network management capabilities in every device outside the data centre. Even if you do have sophisticated management capabilities in your network, you are going to need people who know how to use them. Arguably, the difference between the two approaches is whether network diagnostic tools cost less than the premium you will pay for manageable devices; in the average large network they cost much less.

What about the data centre? All segments of your network will arrive there as either physical cables or VLANs on physical cables. It might seem to follow that you should attach these cables and VLANs to a sophisticated, and therefore manageable, switch or hub, but do you really need to? The primary reason for segmenting a network is that if a hub dies, or a user decides to "back up" their 80 GB hard disk to a file server, it doesn't clobber the whole network. The reasons you use a VLAN are mainly to separate specific types of traffic and to reserve bandwidth for specific applications, such as VoIP. None of these applications requires sophisticated management.

It is worth remembering that SNMP was originally intended as a way to manage the configuration of devices from a central location; its reporting features were supposed to be an aid in diagnosis and testing, not something you would leave running all the time. Beyond a limited trap mechanism, SNMP has no way for a device to tell the management console when something happens; instead, the management console periodically polls each device. This polling generates significant network overhead, which is why it was never intended to run continuously.
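The scale of that polling overhead is easy to estimate: each poll cycle costs one request and one response per device. The packet sizes below are assumed round figures, not measurements.

```python
# Rough estimate of the background load created by periodic SNMP polling:
# one GET request and one response per device per polling interval.
# Packet sizes are assumed round figures for illustration.

def polling_load_bps(devices: int, interval_s: float,
                     request_bytes: int = 90,
                     response_bytes: int = 120) -> float:
    """Average sustained load, in bits per second, from polling alone."""
    bytes_per_cycle = devices * (request_bytes + response_bytes)
    return bytes_per_cycle * 8 / interval_s

# 1,000 desktops polled every 30 seconds:
print(round(polling_load_bps(1000, 30)))
```

The per-second figure looks modest on a modern LAN, but it never stops, it multiplies with the number of polled variables per device, and on a WAN link it competes directly with production traffic, which is why the article argues against leaving it running all the time.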

If you have a WAN with several hundred routers spread all over the country and you want to manage it from a central operations centre, you'll need a way for remote boxes to tell you when something odd happens—and this is why RMON was invented. In general, network administrators in such situations set up triggers for no more than half a dozen critical events—link state changes, traffic levels, power outages, thermal problems (room temperature, dead fans, etc.), administrative access and excessive dropped packets are some common ones.
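A trigger set like that is essentially a small table of thresholds checked against incoming readings. The sketch below is a minimal illustration of the "half a dozen critical triggers" idea; the event names and threshold values are invented.

```python
# Minimal sketch of RMON-style event triggers: alert only when one of a
# handful of critical, pre-configured thresholds is crossed.
# Event names and threshold values are hypothetical.

TRIGGERS = {
    "link_down":        lambda v: v is True,   # link state change
    "temperature_c":    lambda v: v > 45,      # thermal problem
    "dropped_pkts_pct": lambda v: v > 2.0,     # excessive dropped packets
}

def should_alert(event: str, value) -> bool:
    """True only for configured events whose threshold test passes."""
    rule = TRIGGERS.get(event)
    return bool(rule and rule(value))

print(should_alert("temperature_c", 50))       # over threshold -> True
print(should_alert("dropped_pkts_pct", 0.5))   # within limits -> False
print(should_alert("cpu_load", 99))            # not a configured trigger -> False
```

The design point is the small, fixed trigger table: anything not on the list is deliberately ignored, so the operations centre hears about the handful of events that matter rather than a constant stream of routine readings.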

Marketing hype aside, the simple fact is that most networks need very little by way of sophisticated management. Good, solid configuration utilities are much more worthwhile, and splashing out on a full-blown SNMP/RMON-based management system is justified in far fewer cases than vendors would have you think. In most cases it is better to spend some money on good network design, and the bulk of your management budget on systems for the higher layers: applications management, access control, authentication and auditing.

It is in these higher layers that most of the things you really need to keep a close eye on actually happen. After all, which is more important: the one alert from a firewall that someone is running a hacking tool, or the 30,000 results of a few seconds' worth of SNMP probes to users' desktop PCs, all of which are working perfectly well?

Management by Design

Keeping it simple is the first and most important rule for designing reliable, easy-to-manage networks. Bearing this in mind, choose a single drop-zone topology with Cat 5e (or better) cabling for horizontal runs, and use fiber for vertical runs within a building and for runs between buildings. Segment your network so that roughly 40 users or devices (e.g. printers) are connected to any one commodity hub or card in a chassis-based unit. Connect each hub to a single fiber link leading to your central glass house. Where possible, use switches to connect your users to the network. A switch prevents a device attached to one port from seeing traffic sent to or from other ports, which provides a useful level of traffic privacy. Switches also allow you to provision your network so that no single user can chew up so much bandwidth that other users are adversely affected.
Typically, you would match 10 Mbps desktop links to 100 Mbps backbone links, or 100 Mbps on the desktop to Gigabit backbone links.
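Both pairings work out to the same oversubscription ratio, which is why the rule scales cleanly from 10/100 to 100/1000. A quick check, using the segment size suggested above:

```python
# Oversubscription check for the desktop-to-backbone matching rule:
# how many times over-committed is the uplink if every user transmits
# at full rate simultaneously? Figures are the article's examples.

def oversubscription(users: int, desktop_mbps: float,
                     uplink_mbps: float) -> float:
    """Ratio of total possible desktop traffic to uplink capacity."""
    return users * desktop_mbps / uplink_mbps

print(oversubscription(40, 10, 100))    # 40 users, 10 Mbps desktops, 100 Mbps uplink
print(oversubscription(40, 100, 1000))  # 40 users, 100 Mbps desktops, Gigabit uplink
```

A ratio of about 4:1 is conventionally acceptable for bursty office traffic; sustained loads such as backups or streaming would argue for smaller segments or faster uplinks.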
In your data centre, look at a Gigabit or Gigabit-capable switch between the switches on each of your building's floors and your server farm. Pick something with redundant power supplies and all the other fault-tolerant features you can afford. If your data centre switch has the ability to group ports, connect a network traffic analysis tool to one port. You can then use the switch's port-grouping ability to connect this tool to any segment of the network you need to monitor, as and when required.
Do not ignore commodity hubs and switches from good-quality vendors. In most cases these devices use the same basic chipset as the more expensive devices from the same vendor. What is generally missing from commodity hubs and switches is sophisticated management, high port density, and an upgrade path. In practice this is usually not a problem: you often don't need the management, there is a limit to the number of ports you want to put in one device, and in any case you will often find that buying several commodity devices gives you the same number of ports at about the same price as a single sophisticated device. As for the upgrade path, commodity devices in a well-designed network generally recover their cost well within their service life, so if you end up replacing them rather than upgrading them, there is no financial penalty.
Switches are more expensive on a per-port basis than simple concentrators, but they are also more flexible in that they can support multiple interface speeds and modes at once. If you are contemplating deploying VoIP, they also make the job easier and the quality of your voice connections better, as network latency is more stable than it would be with concentrators.

Graeme K. Le Roux is the director of Moresdawn (Australia), a company which specializes in network design and consultancy. Got some management best practices? E-mail at


Copyright 2001: Indian Express Group (Mumbai, India). All rights reserved throughout the world. This entire site is compiled in Mumbai by The Business Publications Division of the Indian Express Group of Newspapers. Site managed by BPD