Issue of January 2003 

Techscope 2003: Multi-Layer Switching
Multi-layer switching in the enterprise

Gigabit Ethernet and multi-layer switching routers eliminate choke points in the network and deliver more than 100 times the performance of traditional routers, at a fraction of the cost. by Uday Birje

Networks are designed to support applications that make businesses more effective and efficient. But a combination of factors such as server consolidation, rich media types, and bandwidth-hungry applications can create situations in which the demand for applications outstrips the available bandwidth. When this occurs the network acts like a 'funnel': applications compete for bandwidth or are kept off the network altogether.

The choke points in these 'funnel' networks are at the aggregation spots, in wiring closets and backbones, where performance and services intersect. Legacy software-based routers, which have traditionally occupied these aggregation points, were never designed for the enormous traffic loads and the anywhere-to-anywhere traffic that is now the norm. Gigabit Ethernet and multi-layer switching routers eliminate these choke points, in a sense flipping the 'funnel' over. By delivering more than 100 times the performance of traditional routers at a fraction of the cost, these devices offer true scalability, providing the bandwidth required for current and future applications alike.

Adding bandwidth, however, is only one piece of the solution. As the mix of applications in a network becomes more complicated, IS professionals need the ability to manage the traffic flowing through their network. To manage this traffic, they must first measure and track the traffic flows. Once traffic patterns are understood, advanced services such as security and prioritization can be used to optimize the network.

It is the ability to satisfy the dual requirements of performance and control that has created the excitement around the new breed of products called switching routers.

Packet Switching Performance
The shortcomings of software-based routing are well known. When network traffic remained predominantly in the workgroup, software-based routers were adequate. Since the majority of the traffic did not cross a router boundary, a router's slow performance was not a crippling detriment—the router's role was predominantly to control the modest amount of traffic that came its way. This became known as the 80/20 rule: 80 percent of the traffic remained in the workgroup and 20 percent crossed workgroups.

But the environment has changed: the use of Web technologies has exploded, traffic patterns have become unpredictable, and the number of users has increased exponentially. While controlling traffic remains a crucial network requirement, the performance penalty that software-based routers impose is no longer acceptable. We are now hearing that the rule has reversed to 20/80.

In the industry buzz surrounding switching routers, performance has taken center stage, and indeed the performance of switching routers is impressive. Where software-based routers forward packets at rates of several hundred thousand packets per second, switching routers forward packets at rates of tens of millions of packets per second, an increase of two orders of magnitude.

This 100-fold improvement in performance occurs because of an architectural change: Legacy routers use software running on microprocessors to forward packets. Switching routers, on the other hand, use hardware, namely, Application Specific Integrated Circuits (ASICs).

Network Functionality and Control
A single client/server conversation generates a stream of packets between the client and the server. This stream, called a flow, can be identified at Layer 2, Layer 3 or Layer 4. Each layer provides more detailed information about the flow. The fundamental task in managing a network is controlling these flows of traffic.

At Layer 2, each packet in the flow is identified by the MAC addresses of the source and destination end-stations. The ability to control the flow is thus limited to the broadcast domain. Traditionally, products that switch traffic at Layer 2 deliver performance but little functionality, since the source and destination MAC addresses are a crude summary of the information in the packet.

At Layer 3, flows are identified by source and destination network addresses, and the ability to control the flow is limited to source/destination pairs. Some of the switching routers, often marketed as Layer 3 switches, operate at this level of granularity. If a client is using several applications from the same server, Layer 3 information does not provide visibility into each application flow, so individual rules cannot be applied to each flow.

Legacy routers always had the ability to read into the Layer 4 header. For example, in software-based routers, Layer 4 information is used to set security filters, an important component in controlling network traffic. But for software-based routers, reading deeply into the packet was extremely costly in terms of performance. Indeed, in many software-based routers, performance dropped by as much as 70 percent when security filters were enabled.
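The difference in granularity between the three layers can be illustrated with a short sketch. The packet fields, flow keys, and the two sample flows below are hypothetical, invented for illustration rather than taken from any particular product:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    src_mac: str
    dst_mac: str
    src_ip: str
    dst_ip: str
    protocol: str   # e.g. "tcp"
    src_port: int
    dst_port: int

def layer2_key(p: Packet):
    # Layer 2: MAC pair only -- no visibility beyond the broadcast domain
    return (p.src_mac, p.dst_mac)

def layer3_key(p: Packet):
    # Layer 3: network address pair -- one key per host pair
    return (p.src_ip, p.dst_ip)

def layer4_key(p: Packet):
    # Layer 4: adds protocol and ports -- one key per application conversation
    return (p.src_ip, p.dst_ip, p.protocol, p.src_port, p.dst_port)

# Two different applications between the same client and server:
web  = Packet("aa:bb", "cc:dd", "10.0.0.1", "10.0.0.2", "tcp", 40001, 80)
mail = Packet("aa:bb", "cc:dd", "10.0.0.1", "10.0.0.2", "tcp", 40002, 25)

assert layer3_key(web) == layer3_key(mail)   # indistinguishable at Layer 3
assert layer4_key(web) != layer4_key(mail)   # distinct flows at Layer 4
```

The two assertions capture the point of the preceding paragraphs: a Layer 3 device sees one flow between the host pair, while a Layer 4 device can tell the two application conversations apart and apply separate rules to each.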

Benefits of Application-Level Control
Application-Level QoS: The demand for QoS is undeniable. Rich data types, mixed media, video conferencing, real-time audio and video multicasting, Internet telephony and interactive transaction processing combine with mission-critical applications to create the need for tight control over latency and throughput.

A true QoS strategy strives to meet the needs of all traffic flows in the network by providing wire-speed bandwidth and low latency to all applications. However, when a switch's output wires are overloaded and its internal buffers fill, QoS is needed to prioritize traffic by creating rules, or 'policies,' that stipulate priority. Policy-based QoS gives network managers control over latency and throughput.

Layer 4 switching allows QoS policies to be set on application-level flows, thereby giving network managers complete control over bandwidth usage in the network backbone. With Layer 2 or Layer 3 switching, QoS policies can only set priorities for traffic based on source or destination addresses. Applying QoS policies on Layer 4 application flows means priorities can be set on individual host-to-host application conversations.
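As a rough illustration of policy-based prioritization on Layer 4 flows, the sketch below maps destination ports to priorities and drains a congested output queue in priority order. The policy table and port numbers are invented for the example:

```python
import heapq

# Hypothetical policy table: destination port -> priority (lower = more urgent).
POLICY = {5060: 0, 1521: 1, 80: 2}   # e.g. telephony, database, web
DEFAULT_PRIORITY = 3                 # everything else: best effort

def classify(dst_port: int) -> int:
    """Assign a priority to a flow from its Layer 4 destination port."""
    return POLICY.get(dst_port, DEFAULT_PRIORITY)

# When the output wire is congested, queued packets drain in priority order;
# the sequence number keeps ordering stable within a priority class.
queue = []
for seq, dst_port in enumerate([80, 5060, 9999, 1521]):
    heapq.heappush(queue, (classify(dst_port), seq, dst_port))

drained = [heapq.heappop(queue)[2] for _ in range(len(queue))]
assert drained == [5060, 1521, 80, 9999]
```

A real switching router evaluates the same kind of policy in ASICs per packet; the heap here only stands in for the hardware's priority queues.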

Application-Level Security: Traditional routers have used security filters and access control lists for secure access to the corporate networks and databases. Historically, access control consisted of software-based processing of Layer 2, Layer 3 and Layer 4 information in every packet, and comparing the data with a list of permissible addresses and applications. A natural consequence of software-based processing was that router performance severely degraded whenever security filters were enabled. This was due to the increased number of instructions that the central processing unit (CPU) was required to execute on every packet. For example, setting a DNS filter in some routers may result in up to a 70 percent drop in performance.

Layer 4 switching eliminates the performance loss associated with security features. A true Layer 4 switch should deliver wire-speed performance even when all the advanced features, including security, are activated. For example, access to corporate information can be controlled per user and per application, instead of blocking all users of a particular application. This gives the network administrator finer flexibility and control over the corporate network.
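A minimal sketch of such a per-application access-control list follows. Because each rule matches on both a source network and a destination application (port), one group of users can be denied one application without blocking that application for everyone. The networks, ports, and rules are invented for illustration:

```python
from ipaddress import ip_address, ip_network

# Ordered rule list: (source network, destination port or None = any, action).
# First matching rule wins, as in a traditional router ACL.
ACL = [
    (ip_network("10.1.0.0/16"), 1521, "deny"),    # guest subnet: no database
    (ip_network("10.0.0.0/8"),  None, "permit"),  # rest of the enterprise: permit
]

def check(src_ip: str, dst_port: int) -> str:
    src = ip_address(src_ip)
    for net, port, action in ACL:
        if src in net and (port is None or port == dst_port):
            return action
    return "deny"   # implicit deny at the end of the list

assert check("10.1.5.5", 1521) == "deny"     # guest blocked from the database
assert check("10.1.5.5", 80)   == "permit"   # same guest may still browse
assert check("10.2.5.5", 1521) == "permit"   # staff may use the database
```

In a software-based router this per-packet lookup is what costs CPU cycles; a Layer 4 switch performs the equivalent match in hardware at wire speed.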

Support for the Full Range of Routing Protocols
While switching routers gain their performance/functionality boost through hardware implementations, route processing remains a software-based activity. Route processing is the process through which the route table is dynamically updated. This activity, often described as the 'control plane,' is separate from the 'forwarding path' described above.

Switching routers vary in their support for dynamic routing protocols. Rudimentary switching routers (often fixed-configuration as opposed to chassis-based) support only the Routing Information Protocol (RIP), a distance-vector protocol. For a simple network, RIP is often adequate: it provides periodic routing-table updates, convergence around failed links, and so on.

More complicated networks require a more sophisticated routing protocol. Switching routers designed for implementation in large networks require the Open Shortest Path First (OSPF) routing protocol. While it is significantly more complicated than RIP, OSPF has some very desirable properties, including rapid convergence around failed links and few route updates in stable topologies.
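The link-state idea behind OSPF, in which each router holds a map of the whole topology and computes shortest paths over it, can be sketched with Dijkstra's algorithm. The topology and link costs below are made up for the example:

```python
import heapq

# Hypothetical topology: router -> {neighbor: link cost}.
TOPOLOGY = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}

def shortest_costs(source: str) -> dict:
    """Dijkstra's algorithm: cheapest path cost from source to every router."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue    # stale heap entry, already found a cheaper path
        for neigh, cost in TOPOLOGY[node].items():
            nd = d + cost
            if nd < dist.get(neigh, float("inf")):
                dist[neigh] = nd
                heapq.heappush(heap, (nd, neigh))
    return dist

# A reaches D via B and C (1 + 2 + 1), cheaper than the direct-looking paths.
assert shortest_costs("A") == {"A": 0, "B": 1, "C": 3, "D": 4}
```

This full-topology computation is what lets OSPF converge quickly after a link failure: each router recomputes locally instead of waiting for hop-by-hop updates to propagate, as a distance-vector protocol like RIP must.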

Conclusion
Switching routers that do not support all these routing protocols will be relegated to providing partial solutions. Conversely, switching routers that can deliver performance, functionality, and the rich mix of protocols will be the future building blocks of durable networks.

The writer is Country Manager, Enterasys Networks

Copyright 2001: Indian Express Newspapers (Bombay) Limited (Mumbai, India). All rights reserved throughout the world.