
Unix: Ideal for e-services?
Kamal Dutta


Kamal Dutta is Country Business Manager, Unix Servers and Solutions, HP India. He is responsible for HP India's Unix servers and solutions within the Business Customer Sales Organization (BCSO) division, and presently looks after HP9000 Unix server sales across India.

TCO, cost reduction, consolidation and high performance are the current focus points for businesses offering e-services. The Unix environment was designed to support these factors.

The Internet is an important way of life for us and is also critical to businesses. Today a whole host of services is offered on the Net, ranging from paying electricity bills to buying movie tickets online; you can even check the status of your bank account from your office desk or from home. The benefits of Web-based services (also called e-services) are simple: speed, automation, cost-efficiency, and accessibility. Yet to offer e-services fully and cost-effectively on the Internet, both service providers and enterprises need functionality that is dependable and that can grow and adapt affordably with company requirements. They should be able to deliver highly available, highly reliable services that are always on, 24 hours a day, 7 days a week, 365 days a year, to meet the needs of their target customers.

Servers running Unix can address this requirement, since Unix is the most reliable, flexible and scalable operating system for any open platform. What's more, Unix is proven: it has been around for the last three decades. Unix systems have always been a key component of the computing industry, but the growth of e-commerce Internet sites and application service providers, which rent access to these powerful machines, has increased their value.

The Unix server market has seen explosive growth over the past 10 years, and the top three vendors in the market have all enjoyed above-average growth. The concept of Total Cost of Ownership (TCO) has received a tremendous amount of attention in recent years, as customers become more savvy and sophisticated in their analysis of systems. Ever-increasing budgetary pressures, combined with actual hands-on experience, are causing more IT decision-makers to look beyond a system's initial price tag in their purchase evaluations. One area where this is particularly true is midrange servers, which often serve as the heart of a company's network.

The TCO factor
Although several TCO reports already exist for these servers, they suffer from four limitations. First, most of these reports are outdated, being more than a year old. Second, few, if any, take into account performance differences between servers. Third, we are not aware of any study that compares the TCO of Unix servers with that of servers running proprietary architectures. Lastly, none measure the TCO of newly introduced systems. TCO is an important factor when purchasing RISC servers, since acquisition costs are generally less than one-third of a server's TCO; this cost advantage is primarily due to low ongoing operational expenses. Today, high-end Unix servers can easily match, and at times outperform, even mainframe-class servers. So customers now have more choice when it comes to replacing or buying systems to lower their TCO.
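The one-third rule of thumb above can be made concrete with a small sketch. All figures here are hypothetical, chosen only to illustrate why operational expenses, not purchase price, dominate a server's lifetime cost:

```python
def total_cost_of_ownership(acquisition, annual_ops, years):
    """Acquisition price plus ongoing operational expense over the server's life."""
    return acquisition + annual_ops * years

# Hypothetical midrange server: $100k purchase, $50k/year to run, 4-year life.
tco = total_cost_of_ownership(100_000, 50_000, 4)
acquisition_share = 100_000 / tco

print(f"TCO over 4 years: ${tco:,}")                  # $300,000
print(f"Acquisition share: {acquisition_share:.0%}")  # 33% -- purchase price
# is one-third of lifetime cost; the other two-thirds is operations
```

With numbers like these, halving operational expenses saves far more over the machine's life than any plausible discount on the purchase price, which is why TCO-driven buyers look past the initial price tag.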

To maximize return on investment, managers factor both TCO and performance into their server purchase decisions. Other factors, such as software features, application availability, and the quality of service and support offered, also contribute to the overall value of a server and should not be ignored. In the past, TCO and performance were typically addressed separately.

Why consolidate?
Many leading businesses today are deploying a plethora of information technologies to improve competitiveness and streamline operational costs. Concurrently, there is a growing trend toward replacing yesterday's distributed computing model with a more consolidated approach, in pursuit of greater control, manageability, security and cost savings.

By consolidating applications, you can increase system utilization, reduce the resources needed to manage and maintain the systems, and lower software costs and system footprint. This type of environment is also well suited to the partitioning technology available on Unix servers.

Consolidation reduces the cost of ownership and the administration effort, brings higher service levels for users, and boosts business flexibility in the face of rising demand or changes in the environment. Consolidating both single- and multiple-application environments reduces total cost of ownership and boosts operational efficiency.
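The utilization gain behind consolidation can be sketched with hypothetical figures (not drawn from the article): many lightly loaded servers folded onto one larger, partitionable machine:

```python
def consolidated_utilization(loads, capacity):
    """Combined utilization when workloads share one server of the given capacity."""
    return sum(loads) / capacity

# Hypothetical estate: six servers each running at 15% of a 1.0-capacity box...
distributed = [0.15] * 6
# ...consolidated onto one machine with 1.5x the capacity of an original server.
combined = consolidated_utilization(distributed, capacity=1.5)

print(f"Utilization after consolidation: {combined:.0%}")  # 60%, up from 15%
# per box -- and one machine to administer, license and house instead of six
```

The same total workload now runs on far less idle hardware, which is where the savings in administration, software licensing and floor space come from.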

The issue of how to increase the efficiency of existing servers has rarely been addressed; the server bottleneck caused by legacy network adapters, or network interface cards (NICs), has not been fully recognized. So what is the most reasonable, cost-effective, space-efficient way to meet the need for network speed?

The real catch
When network performance and server response are unsatisfactory, most people deploy more powerful servers with extra processors. In doing so, vast amounts of resources are consumed: personnel, space, real estate, and capital. Yet this does not necessarily speed up the network as expected. The real catch is to design a network in which other loads are distributed to dedicated networks built with specialized tasks in mind, such as a network for storage (SAN), for backups, or for clustering.

For most server applications, pure MHz horsepower does not yield a benefit commensurate with its cost: a costly, faster CPU does not guarantee that application transmission speed will keep up with the claimed CPU clock speed or wire speed. Other factors, such as system bandwidth, memory access speed, latency and cache technology, determine the performance of the server. The evidence is clear: organizations that offload network tasks in this way can save money, improve server performance, and reduce latency.

Cost reduction
IT managers are justifiably concerned about reliability, performance, scalability, ease of deployment and administration, affordability, and lower TCO. Today's high-end Unix systems address all these concerns.

Server consolidation does not only bring a drastic reduction in TCO, with savings on hardware, software, personnel and real estate. It also reduces the burden of administering multiple servers and helps increase productivity across the enterprise. Server consolidation offers IT professionals an appropriate business solution using existing server technology, with the emphasis on business savings.

So, getting back to Unix: from a simple beginning as a personal research project, Unix has grown to play an important role in operating systems across a wide range of computers, from desktop micros to the largest mainframe-class systems, and its impact will continue. The strength of Unix lies in its portability across multiple vendors' hardware platforms, its vendor-independent networking, and the strength of its application-programming interface, along with scalability and reliability.

These benefits are so strong that the relatively weak end-user interface has not slowed the adoption of Unix. End-users are not the direct beneficiaries of portability or the application-programming interface, but they have already seen the dramatic drop in the cost of computing that comes when multiple vendors can provide the same operating system and software solutions.


Copyright 2001: Indian Express Group (Mumbai, India). All rights reserved throughout the world. This entire site is compiled in Mumbai by The Business Publications Division of the Indian Express Group of Newspapers. Site managed by BPD