Issue of July 2002 

Focus: Traffic Management
Juggling data

A look at caching, load balancing and traffic management. By Graeme K. Le Roux

Feeding a fire hose from a drinking straw isn't very efficient, but it can be done. Feeding a drinking straw from a fire hose is just as inefficient, and a lot messier. Unfortunately, network designers frequently face the networking equivalent of both. To mitigate the inefficiencies, they can use caching to tackle the former case and load balancing to tackle the latter.

Most of the network adaptors sold today are 10/100 Mbps full-duplex capable types, and most networks today run their servers at 100 Mbps full duplex. In the average corporate backbone, the fiber versions of 100 Mbps full-duplex Ethernet are common, and in larger campuses 1 and/or 10 Gbps Ethernet systems are rapidly becoming the norm.

In a properly configured and provisioned gigabit backbone, caching and load balancing are seldom necessary. But in reality, it is common to find WAN links with less than 2 Mbps of bandwidth mixing it up with 100 Mbps full-duplex LAN backbones, or Wintel servers running at 100 Mbps full duplex while users sit on simple 10Base-T connections. As any plumber or engineer will tell you, losses always occur where there are impedance mismatches. Here we have the fire hose and straw situation.

Matching impedance
Let's work from the source. In practice, a PCI bus is good for a little over 200 Mbps, so a dual-bus server can, in theory, run a 100Base-Tx full-duplex NIC at full speed. That NIC will service a maximum of twenty 10 Mbps connections at best.

That means 20 simple Ethernet segments, which, at the normal optimum of 20 users per segment, give a theoretical maximum of 400 concurrent users. In practice, it is not uncommon to find large buildings with more than 800 users.

Furthermore, if we apply a practical factor of safety (or losses) to these theoretical calculations, we end up with about 300 users as a maximum acceptable load. And once you start giving users full-duplex connections, as you would if you deployed a VoIP system, the maximum number of users falls to under 200.
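A back-of-the-envelope Python sketch of that arithmetic follows. The figures are the rules of thumb given above; the 25 percent derating is an assumption chosen to reproduce the 300-user figure.

PCI_BUS_MBPS = 200        # usable PCI throughput, as assumed above
SEGMENT_MBPS = 10         # one simple 10Base-T segment
USERS_PER_SEGMENT = 20    # conventional optimum per shared segment

segments = PCI_BUS_MBPS // SEGMENT_MBPS            # 20 segments
theoretical_users = segments * USERS_PER_SEGMENT   # 400 users
practical_users = int(theoretical_users * 0.75)    # ~300 after derating

# Full-duplex users (VoIP, say) consume roughly twice the bandwidth,
# which is what pushes the ceiling down to under 200.
full_duplex_users = practical_users // 2           # ~150
print(segments, theoretical_users, practical_users, full_duplex_users)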

You can address this problem by applying the most rudimentary form of load balancing: install multiple full-duplex 100 Mbps backbone links, put a dumb 100 Mbps switch in the data centre, and split services across multiple servers.
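At its simplest, spreading client load across those servers can be nothing more than a static round-robin rotation. Here is a minimal Python sketch of the idea; the hostnames are hypothetical.

import itertools

SERVERS = ["web1.corp.internal", "web2.corp.internal", "web3.corp.internal"]
_rotation = itertools.cycle(SERVERS)

def pick_server() -> str:
    """Hand each new client session to the next server in turn."""
    return next(_rotation)

for _ in range(4):
    print(pick_server())   # web1, web2, web3, web1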

This scheme works up to a point, but it fails if you have an internal Web server hosting a file set you can't easily split across multiple servers. You then have the reverse situation: a fire hose (the network of users) feeding a drinking straw (the Web server).

One surefire way around this problem is to build a SAN behind the Web server, since users then can't overload either the network or the server.

Unfortunately, you haven't solved the problem yet; you've just moved it. The impedance mismatch remains: on one side, you have a non-blocking 100Base-Tx full-duplex backbone serving your SAN-based Web server, provisioned to cope with any number of local users; on the other, you have an arbitrary number of users at the far end of several sub-2 Mbps WAN links.

Capacity planning: Some empirical rules
Planning a network is a little more complex than sitting down with a piece of paper and drawing lines between the cheapest boxes you can find, yet this is basically the way a lot of networks are "planned". Even if, as a network manager, you get a professional to design your network for you, there is still a need to check the proposed design for errors.

The most common errors made in planning the average corporate network relate to capacity planning, or "provisioning" in telecommunications terms. There are all sorts of tools available for various network and telecommunications environments, but you still need some idea of the capacity you require; otherwise, how will you tell whether the results those tools produce are likely to be valid? So here are some rules of thumb for network capacity planning.

First, look at your cable system. Any system installed today should be category 5e or better, and any UTP system installed in the last five years should consist of a zone cable plant saturating each floor of your building. Vertical links between floors should be multimode fiber. Ideally there should be one or more fiber links per floor, each of which runs back to your data centre.

Next, look at the sort of workstations your users will be using and the types of data they will be working with. Workstations will generally be PCs (including laptops) or thin clients (Windows terminals, X-terminals, etc.), with or without a VoIP telephone handset, working with small files (legacy terminal emulation and the like), average files (tens to hundreds of kilobytes) or large files (one megabyte and up).

For PCs working with small or average files, or thin clients working with any size file and no VoIP phone system, consider simple 10Base-T Ethernet to the desktop with no more than 20 users per segment. If you have a VoIP phone system, use switched, and wherever possible full-duplex, 10Base-T to the desktop. For PCs using large files, consider 100Base-Tx to the desktop, and if VoIP is deployed use switched and/or full-duplex 100Base-Tx.

All fiber links should be 100 Mbps full-duplex or, depending upon the proportion of PC workstations handling large files, full-duplex Gigabit Ethernet. In general it is cheaper and more effective to have several full-duplex 100 Mbps links per floor rather than one Gigabit Ethernet link per floor. Try to avoid putting more than 200 users on a single backbone link under any circumstances: if that link breaks, 200 users is more than enough people to annoy at any one time.

If you use switches on each building floor (necessary for full-duplex operation), choose units which support 802.1p and 802.1Q so you can create VLANs where appropriate. This can be useful for such simple things as isolating printer traffic, effectively capping the bandwidth it can consume and preventing it from dragging down network performance. VLANs may also be necessary to ensure a VoIP system works properly.

Ensure all your servers and hosts have at least 100Base-Tx full-duplex NICs, and run them that way. Place all servers in your data centre.
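The desktop rules above are mechanical enough to codify. Here is a small Python sketch that does so; it simply restates the guidance given here and is not a standard of any kind.

def desktop_link(workstation: str, files: str, voip: bool) -> str:
    """Suggest a desktop connection per the rules of thumb above.

    workstation: "pc" or "thin-client"; files: "small", "average" or "large".
    """
    if workstation == "pc" and files == "large":
        return "switched and/or full-duplex 100Base-Tx" if voip else "100Base-Tx"
    # PCs on small or average files, and thin clients on anything:
    if voip:
        return "switched (full-duplex where possible) 10Base-T"
    return "shared 10Base-T, no more than 20 users per segment"

print(desktop_link("pc", "large", voip=True))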

Mirror, mirror ...
In this situation you have two choices (aside from investing in a really high-speed WAN link such as 155 Mbps ATM). You can either mirror the Web server at each remote site or provide a proxy cache at each remote site.

A mirror server is a full-blown Web server which is scaled in exactly the same way as the Web server it is mirroring. The mirror provides access to an exact copy of the primary server's file set. When the file set on the primary server changes, those changes are copied (or "replicated" in a Windows environment) to each mirror server. This is generally done at a time when the WAN links carry the least traffic, say overnight.
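As an illustration only, an overnight mirror job might look something like the following Python sketch, using rsync in place of whatever replication service your platform provides; the paths and hostnames are hypothetical.

import subprocess

DOCROOT = "/var/www/htdocs/"
MIRRORS = ["mirror-syd.corp.internal", "mirror-mel.corp.internal"]

def replicate() -> None:
    for host in MIRRORS:
        # --delete keeps each mirror an *exact* copy of the primary.
        subprocess.run(
            ["rsync", "-az", "--delete", DOCROOT, f"{host}:{DOCROOT}"],
            check=True,
        )

if __name__ == "__main__":
    replicate()   # run from a scheduler in the small hours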

Mirroring works well when there are relatively few changes to the server's data set, such as with simple static HTML pages. Unfortunately, most Web pages are not static.

For example, if you use a Web server to front-end a database engine, the response to a query will be different each time it is run; the server doesn't have a fixed file set, and in such situations it is rarely practical to replicate whole databases.

Of course, some things don't change in this situation: the Web page containing the form which the user fills in to run the query, the script which actually constitutes the query, the page elements used in formatting the response, and so on. These elements you can easily store at each remote site using a proxy-based cache such as Sun's CacheRaQ appliances. By taking this cache-based approach you can, in theory, strip everything off your WAN link except the actual query strings and the data contained in the responses from the server. Everything else is delivered from the cache, which typically serves it from a local hard disk.
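The core of the idea fits in a few lines. The Python sketch below keeps an in-memory cache keyed by URL; a real proxy cache would also spill to disk and honour HTTP expiry headers, which this toy ignores.

import urllib.request

_cache: dict[str, bytes] = {}

def fetch(url: str) -> bytes:
    if url in _cache:
        return _cache[url]        # hit: served locally, no WAN traffic
    with urllib.request.urlopen(url) as resp:
        body = resp.read()        # miss: the only time we cross the WAN
    _cache[url] = body
    return body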

When should you mirror, cache or upgrade your WAN links? As always, it depends on cost. If you have two sites within clear line of sight, you might opt to retire your leased-line WAN link in favor of a microwave link, which is likely to cost about the same as a large enterprise server.

Proxy-based caching is often used by ISPs to save money. Consider this: a proxy cache sitting on a typical ISP's backbone link to the Internet will service about 40 percent of object requests. If a retail ISP pays its wholesale ISP 15 cents per megabyte and its users download 100 GB per month, then the cache (by servicing about 40 GB per month) directly saves the ISP around US$6,000 a month, or about US$72,000 a year: the equivalent of a good-sized server every six to eight months.
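Made explicit, the arithmetic runs as follows (a decimal gigabyte-to-megabyte conversion is used for simplicity):

downloads_gb = 100       # user downloads per month
hit_rate = 0.40          # fraction of requests served from the cache
price_per_mb = 0.15      # US$ paid to the wholesale ISP per megabyte

saved_gb = downloads_gb * hit_rate          # 40 GB/month off the WAN link
monthly = saved_gb * 1000 * price_per_mb    # US$6,000 per month
print(monthly, monthly * 12)                # US$72,000 per year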

For smaller sites, cache appliances which cost about the same as a high-end PC can be deployed to similar effect. As a general rule, the more users who share a single cache server, the more "hits" there will be on the cache, simply because the probability of more than one user requesting the same object rises with the number of users making requests.
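A toy simulation makes the point. The Python sketch below assumes, unrealistically, that every object is equally popular; real traffic skews heavily toward a few popular objects, which only strengthens the effect.

import random

def simulated_hit_rate(users: int, requests_each: int = 50,
                       objects: int = 10_000) -> float:
    """Fraction of requests served from an (unbounded) shared cache."""
    cache: set[int] = set()
    hits = total = 0
    for _ in range(users * requests_each):
        obj = random.randrange(objects)   # uniform popularity, for simplicity
        total += 1
        if obj in cache:
            hits += 1
        else:
            cache.add(obj)
    return hits / total

# Hit rate climbs with the user population:
for n in (10, 100, 1000):
    print(n, round(simulated_hit_rate(n), 2))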

Caching works particularly well in environments where the user population is likely to want access to the same websites.

For example, a school whose pupils are all studying the same thing at the same time and accessing a preferred list of websites specified in their course notes, or a company intranet where all the users are making requests of a single remote Web server.

Graeme K. Le Roux is the director of Morsedawn (Australia), a company which specialises in network design and consultancy. He writes for Network Computing-Asian Edition.

 
     