Issue of January 2006

Virtual x86 hardware

Software-based virtualisation works reasonably well, but any software-based approach has limitations in terms of performance and scalability; a hardware implementation is always superior. That is why Intel's and AMD's efforts to bring hardware virtualisation to the x86 platform are so important. Pentium 4 desktop CPUs featuring this technology are already out, and Intel's Xeon and Itanium processors supporting a variant of it will soon follow. AMD has promised to roll out hardware virtualisation across all its platforms: desktop, mobile and server. This will lead to a surge in enterprises adopting virtualisation on x86 platforms for non-mission-critical applications.
by Anil Patrick R

x86 platforms have moved up from the desktop to become full-fledged contenders in the SMB and non-mission-critical server space. Performance improvements, better price/performance and dependability have all contributed to this growth. As the use of x86 server hardware has grown, underutilisation has become an issue and the demand for server virtualisation has risen.


Server virtualisation is the technology that permits multiple operating systems to run simultaneously on a single x86 server. The beauty of this approach is that each operating system acts as a distinct server in its own right, with its own virtually distinct set of resources for processing, memory, networking and storage. This is achieved by using a virtualisation layer, generally known as a hypervisor, on top of the server hardware.
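
To make the idea concrete, here is a purely conceptual sketch of that arrangement: a hypervisor carving one physical server's processing, memory and storage into isolated guests. It is a minimal illustration written in Python; the class names and figures are hypothetical and do not correspond to any vendor's product or API.

```python
# Conceptual sketch only: names and figures are hypothetical, not any vendor's API.

class VirtualMachine:
    """A guest that behaves like a distinct server with its own resources."""
    def __init__(self, name, vcpus, memory_mb, disk_gb):
        self.name = name
        self.vcpus = vcpus
        self.memory_mb = memory_mb
        self.disk_gb = disk_gb

class Hypervisor:
    """The virtualisation layer that sits between the hardware and the guests."""
    def __init__(self, total_vcpus, total_memory_mb, total_disk_gb):
        self.free = {"vcpus": total_vcpus,
                     "memory_mb": total_memory_mb,
                     "disk_gb": total_disk_gb}
        self.guests = []

    def create_guest(self, name, vcpus, memory_mb, disk_gb):
        # This simplified model refuses to hand out more than the host physically has.
        wanted = {"vcpus": vcpus, "memory_mb": memory_mb, "disk_gb": disk_gb}
        if any(wanted[k] > self.free[k] for k in wanted):
            raise RuntimeError("not enough physical resources left for " + name)
        for k in wanted:
            self.free[k] -= wanted[k]
        vm = VirtualMachine(name, vcpus, memory_mb, disk_gb)
        self.guests.append(vm)
        return vm

# One underutilised physical server hosting three formerly separate servers:
host = Hypervisor(total_vcpus=4, total_memory_mb=8192, total_disk_gb=300)
host.create_guest("mail",  vcpus=1, memory_mb=2048, disk_gb=80)
host.create_guest("web",   vcpus=1, memory_mb=2048, disk_gb=60)
host.create_guest("build", vcpus=2, memory_mb=4096, disk_gb=120)
```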

Virtualisation provides multiple benefits for server consolidation, better resource management, and running applications across multiple OSs. “Although server virtualisation is yet to mature, it will have a good future with its ability to run multiple OSs. Developers will find this technology useful,” says Manoj Chandiramani, Vice President, Man Financial India.

This helps enterprises maximise the benefits that can be derived from a single server while avoiding underutilisation. x86 virtualisation software has also started supporting x86-64 architectures, which makes the technology even more attractive. “Virtualisation on x86 has been picking up,” says Naveen Mishra, Senior Analyst, Enterprise Systems Research, Gartner India.

Bull Market Predicted For Virtualisation

IDC’s predictions for 2006 highlight the growth of server virtualisation. According to the report, more than 2.1 million virtual servers will be deployed globally during 2006. This will exceed 20 percent of all physical and virtual server deployments for the first time.

The growing trend of virtualisation on the x86 platform is also substantiated by Gartner’s top ten trends and predictions for 2006. According to the report, virtualisation will drive the need for “Real Time Infrastructure” (RTI) in the APAC region. To cope with the increasing volume and velocity of information, organisations will need to adopt RTI which relies extensively on virtualisation. The technology can improve IT resource utilisation and increase flexibility in adapting to changing requirements and workload. With the addition of service-level, policy-based automation, virtualisation leads to RTI, according to Gartner.

However, despite the excitement surrounding server virtualisation, the idea of dividing a server into virtually separate machines, each with dedicated resources, has been toyed with for many years; server virtualisation is not a new concept as such. The box ‘Old wine, new bottle’ details how server virtualisation has evolved since its inception in the late 1960s.

Getting To The Root

So what does virtualisation on the x86 platform entail? There are two approaches at present: the ‘bare metal’ approach, in which the virtualisation layer runs directly on the hardware, and hosted virtualisation, in which the software is loaded on top of an existing operating system. The more promising CPU-assisted server virtualisation is still under development and should be released soon.

The best-known examples of bare-metal virtualisation are VMware ESX Server and Xen. In this approach, the virtualisation software creates a virtualisation layer or environment directly on top of the hardware (the ‘bare metal’). This is one of the best and, in terms of resource overhead, lightest virtualisation methods available for the x86 platform at present.
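
As an illustration of how such a bare-metal hypervisor can be driven from a script, the sketch below lists the guest domains running on a Xen host. It assumes the libvirt Python bindings are installed; libvirt is not mentioned in the article and appears here only as one plausible management interface.

```python
# Sketch only: assumes a Xen host with the libvirt Python bindings available.
import libvirt

conn = libvirt.open("xen:///")         # connect to the local Xen hypervisor
for dom_id in conn.listDomainsID():    # IDs of the running guest domains
    dom = conn.lookupByID(dom_id)
    state, max_kb, mem_kb, vcpus, cpu_ns = dom.info()
    print("%s: %d vCPU(s), %d MB of %d MB allocated"
          % (dom.name(), vcpus, mem_kb // 1024, max_kb // 1024))
conn.close()
```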

Next come hosted virtualisation solutions, which run on top of an existing OS. Running on top of an OS demands more resources, so this approach, as used by VMware GSX Server, is deployed more for test and development environments than for production server applications. VMware GSX Server can run on top of Windows or Linux, while Microsoft’s Virtual Server 2005 runs only on top of Windows Server 2003. “This approach can create problems, especially in the case of Microsoft Virtual Server 2005, since existing OSs like Windows Server 2000 will have to be upgraded to Windows Server 2003. This can be a sizeable investment,” says Chandiramani.

CPU-assisted virtualisation, spearheaded by AMD (Pacifica) and Intel (VT), is still under development. This approach will enable full-fledged use of the x86 architecture for virtualisation. “Moving forward we will see virtualisation engines in the processor itself on AMD and Intel platforms. 2006 will be the year when virtualisation on x86 takes off. Despite that, VMware usage is not likely to go down because of the benefits that it provides,” says Doss.
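
Both technologies announce themselves through CPU feature flags: Intel VT shows up as ‘vmx’ and AMD's Pacifica (SVM) as ‘svm’ in /proc/cpuinfo on Linux. The snippet below is a minimal sketch of such a check, assuming a Linux host; note that a BIOS setting can still lock the feature out even when the flag is listed.

```python
# Minimal sketch for a Linux host: Intel VT appears as the "vmx" CPU flag,
# AMD Pacifica (SVM) as the "svm" flag, in /proc/cpuinfo.

def hardware_virtualisation_support(cpuinfo_path="/proc/cpuinfo"):
    flags = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return {"intel_vt": "vmx" in flags, "amd_pacifica": "svm" in flags}

if __name__ == "__main__":
    # A BIOS setting may still disable the feature even if the flag is shown.
    print(hardware_virtualisation_support())
```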

Old Wine, New Bottle
Developed in the early 1970s to improve utilisation on mainframe platforms, virtualisation took a beating during the 1980s when personal computers made their presence felt. The limited capabilities of personal computers made it pointless at the time to bring virtualisation down to that level. A long stretch of inaction followed, during which virtualisation was limited to the RISC platform (notably IBM, HP, Sun and SGI boxes). While IBM used logical-level virtualisation methods, HP and Sun used hard-wired virtualisation methods for their RISC platforms.

However, it was not until the late 1990s that virtualisation was developed for the x86 platform by VMware. “The virtualisation space is a big driver for the x86 platform. On this front, VMware is leading the market at present,” says T Mohan Doss, Director, Volume Business, ASEAN & India, Sun Microsystems.

Today there are other major competitors in the field, including Microsoft’s Virtual Server 2005, Sun’s Solaris Containers on the Solaris 10 OS and XenSource’s open source Xen virtualisation software. These solutions can run multiple (and different) OSs on the same server, with the capability to provision virtually distinct resources for processing, memory and storage.

However, all these methods still rely on software-based virtualisation mechanisms rather than hardware-based methods at the processor level. At best, they can be described as partial virtualisation, since the x86 platform does not inherently allow virtualisation software to use the processor’s capabilities entirely. Things are changing on this front, though, with AMD’s Pacifica and Intel’s Virtualisation Technology (VT) [née Vanderpool] initiatives to enable virtualisation products to make full use of the x86 architecture. Vanderpool is already out on the desktop, and when you consider that most ‘servers’ sold in this country are nothing more than dressed-up Pentium 4 PCs, that is quite significant. Once Intel moves this technology onto the Xeon, and AMD releases its own technology, things will heat up on the x86 hardware virtualisation front.

Possible Obstacles

Although server virtualisation is catching on, its use is limited to non-mission-critical applications at present. The biggest concerns limiting mission-critical deployments are the single-point-of-failure risk and the current high prices of server virtualisation software.

When multiple applications are deployed on a single server, that server becomes a single point of failure for all of them, multiplying the impact of an outage. This can be deadly for verticals like BFSI and telecom. “This can be a big issue in terms of criticality, especially on the financial front,” says Chandiramani.

The next issue is that there is no common standard for hypervisors. However, major hardware and software vendors are working together to reach a consensus, and this should be sorted out soon.

Despite these issues, the outlook is still largely optimistic for x86 server virtualisation. When application deployment is carefully evaluated and executed properly, virtualisation is still a boon for SMBs and large enterprises for consolidation of mission-critical applications. “Deploying virtualisation is not a problem but application characteristics have to be evaluated carefully for an optimal implementation,” says Doss.

anilpatrick@networkmagazineindia.com

 
     