
Content Distribution Networks and Internet Caching
By Bhavish Sood

Caching technologies accelerate the distribution and delivery of Internet content. A sneak preview of the strategies and technologies necessary to create, distribute, and store content.


Digital Content Distribution: New Media New Challenges

Jammed lines, obsolete backbones, rich media content, and a growing user base are all contributing to what can be called this generation's Great Internet Traffic Jam. Not only has the number of users accessing the Internet increased, but demand from applications has also amplified manifold. The convergence of telecommunications, information technology, and broadcasting is probably the single most important reason. The rapid growth in traffic resulting from increased activity in the residential and business sectors means that bandwidth transmission will become an increasingly important component of the overall Internet access market. Until optical networking hits the big time and we see the deployment of 40 Gbps equipment, optimizing the existing bandwidth is perhaps the only alternative.

The complexity of digital media sometimes makes us all wonder why we go through all the pain; why not simply print? Apart from better visual relief and its promise of wider distribution, digital media provides lifetime preservation.

So what exactly is the digital, or new media, content distribution agenda? An Aberdeen research report establishes the definition of Digital Content Distribution (DCD) as the interaction of technologies, tools, and events involved in the circulation of text, sound, video, and data combinations over IP networks, between the points of content creation and the points of content consumption.

Operating the Content

Fresh-o-Meter: Doing it the right way

The challenge before content generators is to figure out the optimum way to accelerate their content, provide faster page downloads, and deliver higher-quality video and audio output. Content delivery networks resolve performance problems related to Web server processing and Internet delays.

Content delivery networks create and maintain up-to-date copies of frequently accessed or high-bandwidth content in cache servers at multiple locations at the edges of the Internet. There are two distinct ways of laying out a content delivery network (CDN). The first is load balancing, wherein a particular site is hosted at multiple data centers (or several sites are hosted at each center). The other technique is to deploy caching servers across local ISPs' networks. Akamai is one service that provides for this: it has almost 4,200 caches across the globe, located in some 50 countries.
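With caches at so many locations, the network must decide which copy serves a given client, typically steering each request toward the nearest edge. A minimal sketch of that selection step follows; the hostnames, regions, and latency figures are all invented for illustration, not taken from any real CDN:

```python
# Hypothetical edge-cache selection for a CDN. All hostnames,
# regions, and latency numbers below are invented for illustration.

CACHE_SERVERS = {
    "mumbai":   "cache-in.example-cdn.net",
    "london":   "cache-uk.example-cdn.net",
    "new-york": "cache-us.example-cdn.net",
}

# Invented measured latencies (ms) from client regions to cache locations.
LATENCY_MS = {
    "india":  {"mumbai": 12,  "london": 140, "new-york": 220},
    "europe": {"mumbai": 130, "london": 10,  "new-york": 90},
}

def resolve(client_region: str) -> str:
    """Return the hostname of the lowest-latency cache for this client,
    mimicking the request-routing decision a CDN makes."""
    latencies = LATENCY_MS[client_region]
    nearest = min(latencies, key=latencies.get)
    return CACHE_SERVERS[nearest]

print(resolve("india"))  # the Mumbai cache wins for Indian surfers
```

Real CDNs make this decision with far richer signals (server load, link health, topology maps), but the principle is the same: answer each lookup with the best edge for that particular client.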

Network-based caching can be done in one of two ways: through transparent cache servers or through proxy cache servers. Although proxy cache servers are the usual implementation choice, they are highly risky, as a proxy failure can result in total access failure.

The essential difference between the two techniques is that while caching waits for surfers to request information, content delivery lets delivery organizations or products proactively push the information into caches close to the user, where Web surfers are directed to the nearest cache server through DNS.
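That pull-versus-push distinction can be sketched in a few lines. The origin content and URLs below are invented placeholders; the point is only the difference between filling a cache on a miss and pre-positioning content before anyone asks:

```python
# Sketch of pull (classic caching) versus push (content delivery).
# ORIGIN stands in for the origin Web server; URLs are invented.

ORIGIN = {"/news": "<html>news</html>", "/video": "<video bytes>"}

class EdgeCache:
    def __init__(self):
        self.store = {}

    def get(self, url):
        """Classic caching: wait for a surfer's request, pull on a miss."""
        if url not in self.store:
            self.store[url] = ORIGIN[url]   # fetch from the origin server
        return self.store[url]

    def push(self, url):
        """Content delivery: proactively place content near the user."""
        self.store[url] = ORIGIN[url]

cache = EdgeCache()
cache.push("/video")            # pre-positioned by the delivery network
print("/video" in cache.store)  # True: cached before any request arrived
```

The first surfer to ask for `/news` pays the round trip to the origin; everyone asking for `/video` is served from the edge from the very first request.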

Another emerging trend in the client-server arena is application server caching. Some application servers, such as Vignette, use template-level caching. Normally, when a Web server gets a request for a page, it checks the doc root and, if the page is not found, sends a 404 error. The content management server manages the caching in such a scenario. In Vignette, the request for a page is intercepted by the Vignette Web server plugin, which checks whether the page is managed by an application server template, using a metafile that keeps a record of all templates. If the particular template is cached, the Web server serves the page directly from the doc root. Page caching in these kinds of application servers is done by URL.

In Vignette, each page comprises many objects, each with an Object ID and its own unique URL, so the cache manager logs changes to every object. Each time a page is requested, the Cache Manager Daemon checks whether the record has been modified; if it has, a fresh version of the record is fetched by the page generator, or else the page is served from the doc root. Each set of data entered is a record for the application server and is identified by a record ID. The system daemon detects any change in a record and the related template, so at any point of time the server knows which templates are affected by the changed data.
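The record-driven invalidation described above can be reduced to a small sketch. This is loosely modelled on the flow, not on Vignette's actual API; the record IDs, version numbers, and page markup are invented for illustration:

```python
# Toy sketch of template-level caching with record-based invalidation,
# in the spirit of the flow described above (not a real Vignette API).

records = {"story-42": {"body": "old text", "modified": 1}}

class TemplateCache:
    def __init__(self):
        self.doc_root = {}  # url -> (rendered page, record version seen)

    def render(self, record_id):
        """Stand-in for the page generator running a template."""
        return "<html>" + records[record_id]["body"] + "</html>"

    def serve(self, url, record_id):
        version = records[record_id]["modified"]
        cached = self.doc_root.get(url)
        if cached and cached[1] == version:
            return cached[0]                  # serve straight from doc root
        page = self.render(record_id)         # record changed: regenerate
        self.doc_root[url] = (page, version)
        return page

cache = TemplateCache()
page1 = cache.serve("/story/42", "story-42")
records["story-42"] = {"body": "new text", "modified": 2}  # editor updates record
page2 = cache.serve("/story/42", "story-42")
print(page1 != page2)  # True: the daemon-style version check caught the change
```

The essential trick is that the cache stores the record version alongside the rendered page, so a changed record automatically forces regeneration on the next request.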

Future Talk: Delivering as Fresh as MTV Fresh!

The common presumption in the bandwidth market is that since network operators and carriers spent billions building 2.5 Gbps networks, and then spent more to upgrade those networks to 10 Gbps, they would rush out and buy 40 Gbps equipment as well. Sadly, that may not be the scenario, owing to reduced capital availability. In this situation, optimizing current capacity looks like a safe bet, and markets will continue to exist for service providers like Akamai, whose business models hinge on a slow Internet.

Another major development in the field of rich media transmission has been the multicast backbone (MBone), which can be thought of as Internet radio and television. Unlike video on demand, where the emphasis is on viewing pre-compressed movies stored on a server, the multicast backbone is used for transmitting live audio or video in digital form all over the Internet. The MBone is actually a virtual overlay network on top of the Internet: it consists of multicast-capable islands connected by tunnels, which propagate packets between the islands.
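A back-of-envelope count shows why multicast matters for bandwidth. With unicast, the source sends a separate copy down the full path to every receiver; with multicast, each link in the distribution tree carries one copy, and packets are duplicated only where the tree branches. The island topology below is invented purely for illustration:

```python
# Invented topology: a source island tunnelled to two regional islands,
# each serving two receivers. Each dict entry lists a node's children.
TREE = {
    "source":   ["island-a", "island-b"],
    "island-a": ["recv-1", "recv-2"],
    "island-b": ["recv-3", "recv-4"],
}

def multicast_copies(tree):
    """One copy per tunnel/link: every edge of the distribution tree."""
    return sum(len(children) for children in tree.values())

def unicast_copies(tree, root="source"):
    """One full root-to-receiver path per receiver: copies stack up
    on the shared links near the source."""
    def depth_sum(node, depth):
        children = tree.get(node, [])
        if not children:
            return depth
        return sum(depth_sum(child, depth + 1) for child in children)
    return depth_sum(root, 0)

print(multicast_copies(TREE))  # 6 link transmissions
print(unicast_copies(TREE))    # 8 link transmissions
```

Even on this tiny tree multicast saves a quarter of the transmissions; with thousands of receivers behind each island, the shared links near the source are spared thousands of redundant copies.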

As the debate on caching versus multicasting continues, rich media applications will have to fight for basic bandwidth in a network environment with increasing traffic volumes and unpredictable, unstable loads. Against this backdrop, the thing to watch out for in the future will be products like PacketShaper and FloodGate that prioritize academic and business Internet traffic over that for entertainment and games.
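The prioritization such products perform can be sketched as a simple priority queue; this is only a toy illustration in the spirit of traffic shaping, not the actual mechanism of either product, and the traffic classes and packet names are invented:

```python
import heapq

# Toy traffic-prioritization sketch: lower number = higher priority.
# Classes and packet names are invented for illustration.
PRIORITY = {"business": 0, "academic": 1, "entertainment": 2, "games": 3}

class Shaper:
    def __init__(self):
        self._queue = []
        self._seq = 0  # FIFO tie-break within a traffic class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._queue, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self):
        """Send out the highest-priority waiting packet."""
        return heapq.heappop(self._queue)[2]

shaper = Shaper()
shaper.enqueue("games", "quake-update")
shaper.enqueue("business", "erp-sync")
shaper.enqueue("academic", "journal-download")
print(shaper.dequeue())  # "erp-sync": business traffic goes out first
```

Real shapers work on bandwidth allocations and flow classification rather than a single queue, but the net effect is the same: when the pipe is congested, entertainment and games wait while business and academic traffic moves.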

Although it might seem premature to comment on how optics will impact these technologies, one thing is fairly certain: at no point will there be a bandwidth glut, since only about 2 percent of the world's current population is online. The challenge before the networking community will lie in mounting a massive mobilization drive toward ubiquitously accessible bandwidth, and in bringing out better standards and routing algorithms that result in lower bandwidth loss at peering points.

The author is a Strategist with Plexus Technologies. Write to him at bhavishsood@netscape.net


Copyright 2001: Indian Express Group (Mumbai, India). All rights reserved throughout the world. This entire site is compiled in Mumbai by The Business Publications Division of the Indian Express Group of Newspapers. Site managed by BPD