Server Update 2006
The emergence of virtual server ecosystems
Virtualisation is a concept that has spread across IT infrastructure, into areas such as servers, storage and networks. Server virtualisation has
matured to the point where its benefits outweigh the costs. By Anil Patrick
The concept of virtualisation holds much promise if executed right. This is especially
so for server virtualisation, since the costs involved are marginal when
compared to the benefits that can be derived.
Virtualisation on the x86 (and x86-64) server platforms
is a trend that is being discussed and evaluated today in many organisations.
"This is one trend which is going to remain for quite a while in the industry,"
opines Naveen Mishra, Senior Analyst, Server Markets, Gartner India. So it would
be more accurate to say that 2006 will be the year when Indian organisations
start experimenting with and adopting x86-based server virtualisation.
The growing trend of virtualisation on the x86 platform is
also substantiated by Gartner's Top Ten Trends and Predictions for 2006.
According to the report, virtualisation will drive the need for Real Time Infrastructure
(RTI) in the APAC region. To cope with the increasing volume and velocity of
information, organisations will need to adopt RTI, which relies extensively on
virtualisation. The technology can improve IT resource utilisation and increase
flexibility in adapting to changing requirements and workloads. With the addition
of service-level, policy-based automation, virtualisation leads to RTI, according to the report.
Starting a long journey
The concept of server virtualisation basically means that it is possible to
run multiple instances of the same operating system, or several different
operating systems, simultaneously on a single server.
Considering that RISC boxes have had virtualisation capabilities for many years
now, virtualisation is not a ground-breaking proposition as such. However, it
is virtualisation on the x86 platform that has given the concept more
widespread appeal than its erstwhile RISC associations. "Virtualisation
is the abstracting of the software from the underlying implementation. Server
virtualisation has been around for decades now, and it is a mature technology,"
comments Prakash Advani, Linux Practice Head, Novell India.
2006 will be the year when server virtualisation on x86 and x86-64-based servers
will start to be accepted by the Indian enterprise. By "will start to be
accepted" we mean the start of a trend that will last for a long time.
x86 servers get virtualised
This approach of optimally using underlying hardware has
several advantages, as we shall soon see. First of all, the price-performance
ratio has improved tremendously on the x86 platform over the years. This is
one of the reasons why server virtualisation has become a very viable proposition
today for enterprises.
Some of the other benefits of virtualisation include improved
server utilisation, efficiency and manageability. Better server utilisation
reduces hardware and management costs, thereby reducing the overall
TCO. The ability to migrate virtual servers from one virtual environment to
another without considerable difficulty provides easier management and DR capabilities
as well. Advantages such as having multiple operating systems on a single
server, easier management, higher optimisation and better DR provide a strong
value proposition for medium-to-large organisations. Virtualisation's adoption
will mainly be in large organisations, data centres and other large implementations.
On top of this, 2006 will see many vendors offering new as well as considerably
enhanced server virtualisation solutions (both hardware and software). Club
the value propositions of x86 virtualisation with greater maturity in the virtualisation
space, and it is clearly evident that 2006 will see considerable adoption in the Indian enterprise.
"What has changed now is that there is no need for proprietary
or expensive technology to take advantage of it. Virtualisation,
which was earlier available only on high-end systems,
is now available on commodity hardware, which means
everyone can now take advantage of it," Advani
points out. Adds Mishra, "Today, virtualisation
can be mainly classified into hardware-based and software-based
virtualisation. Each approach has its pros and cons, so
selection has to be done based on the specific requirements."
One way or another
There are basically two types of server virtualisation approaches
in use today. The first requires a host OS, with virtual machines (VMs) running
on top of it. The second, known as the hypervisor or "bare
metal" approach, makes use of an abstraction layer that runs between the OS instances
and the hardware.
In the hosted OS approach, the virtualised server OS instances are run on top
of the host OS. The virtualisation solution is installed as an application on
top of the host OS. This approach relies on the host OS for all interfacing
between the virtual machines and the server hardware. Microsoft Virtual Server
2005 R2 is an example of the hosted OS virtualisation approach.
On the other hand, solutions using the hypervisor (also known as the virtual
machine monitor) approach are installed directly on top of the server
hardware, without the need for a host OS. In this type of server virtualisation,
the hypervisor creates the interface through which the VMs interact with the
hardware. Server OS instances run inside VMs or containers created
on top of the hypervisor.
The hypervisor is basically a software layer that includes
a virtual machine scheduler to coordinate VM functioning. Hypervisors primarily
assist in memory management and I/O virtualisation. VMware ESX server and the
open source Xen Enterprise are examples of solutions using this approach.
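The scheduling role described above can be pictured with a deliberately simplified sketch. This is a toy model, not real hypervisor code; all class and VM names here are hypothetical, and a real VM scheduler weighs priorities, I/O waits and credits rather than a plain rotation.

```python
from collections import deque

class ToyHypervisor:
    """Toy model of a hypervisor's VM scheduler (illustrative only):
    VMs take turns receiving CPU time slices in round-robin order."""

    def __init__(self):
        self.run_queue = deque()

    def create_vm(self, name):
        # Each "VM" here is just a name plus a count of slices received.
        self.run_queue.append({"name": name, "slices": 0})

    def schedule(self, total_slices):
        """Hand out CPU time slices one at a time, cycling through the VMs."""
        order = []
        for _ in range(total_slices):
            vm = self.run_queue.popleft()
            vm["slices"] += 1
            order.append(vm["name"])
            self.run_queue.append(vm)  # back of the queue for the next round
        return order

hv = ToyHypervisor()
for name in ("web-vm", "db-vm", "mail-vm"):
    hv.create_vm(name)
print(hv.schedule(6))  # each VM gets two slices, in strict rotation
```

The point of the sketch is only that the hypervisor, not any guest OS, decides which VM runs next on the physical CPU.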
While virtualisation can provide many advantages,
there are still many issues to be sorted out. The first is the lack of standards,
though vendors are working on it.
The next is the issue of performance for high-I/O applications. Under
software virtualisation, applications that need fast I/O access (such as databases
and graphical applications) can face issues such as memory, virtualised
processor and video card limitations, as well as scalability. These performance
and scalability issues are often bounded by the maximum limits imposed
by the container size. Vendors continually work on
this by bringing out solutions with bigger container sizes; the most recent
example is VMware's latest ESX Server version, which ships with double
the container size.
Performance overhead introduced by the hypervisor is another area that
still needs work. "Stability of the hypervisor in terms of performance
overheads is one of the main concerns in server virtualisation. The challenge
is to bring it down to 1.5 or 2 percent from the usual overheads of 8 to
10 percent," states Arnab Roy, National Sales Manager, Datacentre Practice,
Sun Microsystems India.
Security issues come next. While vendors cry themselves hoarse about the
security that server virtualisation brings in, the fact remains that virtualised
environments are as insecure as normal set-ups. What's more, these may be
even more insecure at times due to API glitches and the like. This means
that all the security measures taken for a normal server environment have
to be applied to the virtualised environment as well. Special measures,
such as training staff to deal with the new set-up's intricacies, will also
have to be undertaken.
Hypervisor approaches also differ in the way VMs interface with the hardware.
Two common approaches on this front are full virtualisation and paravirtualisation.
Proponents of full virtualisation include VMware, which depends on this approach
for most of its solutions. Paravirtualisation is endorsed by vendors such as
XenSource for its Xen solutions.
The difference between the two approaches lies in paravirtualisation's
use of APIs between the hypervisor, the virtual OSs and the hosted applications. (Figure
1: Full virtualisation vs paravirtualisation highlights the differences between
the two approaches.)
Many of the OS and CPU options available today are not optimised
for virtualisation. Paravirtualisation attempts to rectify this shortcoming
by using procedure calls (hypercalls) for CPU instructions that are difficult to virtualise.
The virtual OS is modified accordingly, which helps paravirtualisation achieve
better hardware performance.
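The contrast between the two approaches can be sketched as a toy model. Everything here is illustrative: the class, method names and the "page table" example are hypothetical stand-ins for how a full-virtualisation hypervisor traps and emulates privileged instructions, while a paravirtualised guest calls the hypervisor's API directly.

```python
class ToyHypervisorInterface:
    """Toy contrast between trap-and-emulate and hypercalls (illustrative only)."""

    def __init__(self):
        self.trapped = 0      # privileged instructions caught and emulated
        self.hypercalls = 0   # explicit requests from a paravirtualised guest

    def trap_and_emulate(self, instruction):
        # Full virtualisation: an unmodified guest issues a privileged
        # instruction; the hypervisor intercepts, decodes and emulates it.
        self.trapped += 1
        return "emulated: " + instruction

    def hypercall(self, service):
        # Paravirtualisation: a modified guest knows it is virtualised and
        # asks the hypervisor directly, skipping the trap/decode round trip.
        self.hypercalls += 1
        return "performed: " + service

hv = ToyHypervisorInterface()
hv.trap_and_emulate("write page table")  # unmodified guest's path
hv.hypercall("update page table")        # paravirtualised guest's path
print(hv.trapped, hv.hypercalls)  # → 1 1
```

The modified guest's direct call is what the article means by paravirtualisation "tweaking" the virtual OS: the OS source is changed so difficult-to-virtualise operations become explicit API calls.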
On to the hardware front
While software virtualisation can work wonders, maximum efficiency
can be achieved only if the hardware is in sync with the software. This is one
area where AMD and Intel step in with their CPU virtualisation processor technologies.
AMD's Virtualization Technology (née Pacifica) and Intel's
Virtualization Technology attempt to bridge the software-hardware divide by
making virtualisation-aware CPUs. Intel has added virtualisation support to
its latest dual-core Itanium (Montecito) as well, which is expected to be launched
in mid-July (at the time of writing).
Both the CPU majors are clear that their attempt is not to
replace third-party virtualisation software. "Virtualisation technology
for the processor is not going to do away with the hypervisor. Instead, the
role of virtualisation-enabled CPUs will be to accelerate the entire process,"
explains Mukund Ramaratnam, Director, Marketing & Business Development.
"The objective is to optimise the entire CPU platform
for virtualisation, with support from both a hardware and a virtual machine perspective.
Third-party VM monitors are able to understand the hardware better and use it
optimally. This is done using methods such as better instructions and efficient
I/O for hardware-assisted virtualisation platforms," says Narendra Bhandari,
Regional Manager, APAC, Strategic Relations & Internet Solutions Group,
Intel Asia Electronics.
Both vendors have started bringing out virtualisation-optimised
CPUs. As the movement gathers force, server virtualisation is also getting
many of its kinks ironed out.
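On Linux, whether a CPU advertises these extensions can be checked from the feature flags in /proc/cpuinfo: "vmx" indicates Intel VT, "svm" indicates AMD-V. The helper below is a small sketch of that check, written over a text sample so it does not depend on the machine it runs on.

```python
def has_virtualisation_support(cpuinfo_text):
    """Return which hardware virtualisation extension, if any, the CPU
    advertises: 'vmx' (Intel VT), 'svm' (AMD-V), or None.
    On Linux, cpuinfo_text would typically be read from /proc/cpuinfo."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "vmx" in flags:
                return "vmx"
            if "svm" in flags:
                return "svm"
    return None

# Sample input standing in for the real file's contents
sample = "model name : Example CPU\nflags : fpu pae vmx sse2\n"
print(has_virtualisation_support(sample))  # → vmx
```

In practice you would pass `open("/proc/cpuinfo").read()` instead of the sample string; note that the flag only shows CPU capability, and the feature may still be disabled in the BIOS.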
The bigger virtuality
Virtualisation has made inroads into many other aspects of the infrastructure,
such as storage and networking, so the question that arises is where
server virtualisation fits in.
This calls for a closer look at the virtual I/O concept, which
aims to create virtual server, storage and networking clouds for easier
consolidation, re-allocation and management of server-server, LAN/WAN and storage
resources. There are several approaches in vogue today to create this, using
consolidated channels like InfiniBand.
However, irrespective of the hardware or software used, three basic capabilities
are required for a proper virtual I/O set-up: the ability to create virtual
servers, virtual networking, and virtual SCSI. Virtual networking
is made possible using virtual LANs (VLANs), whereas virtual SCSI can be achieved using
storage virtualisation technology.
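A toy model can tie these three capabilities together. This is purely illustrative; the manager class, its methods and the idea of handing out VLAN IDs and SCSI LUNs from shared pools are hypothetical simplifications of what real virtual I/O managers do.

```python
class ToyVirtualIOManager:
    """Toy sketch of the three virtual I/O basics (illustrative only):
    virtual servers, each given a network identity (a VLAN ID) and
    storage (virtual SCSI LUNs) carved from a shared pool."""

    def __init__(self):
        self.servers = {}
        self.next_lun = 0  # shared pool of virtual SCSI LUN numbers

    def create_virtual_server(self, name, vlan_id):
        # Virtual networking: the server's traffic is tagged with a VLAN ID.
        self.servers[name] = {"vlan": vlan_id, "luns": []}

    def attach_virtual_disk(self, name):
        # Virtual SCSI: hand the server the next free LUN from the pool.
        lun = self.next_lun
        self.next_lun += 1
        self.servers[name]["luns"].append(lun)
        return lun

mgr = ToyVirtualIOManager()
mgr.create_virtual_server("app1", vlan_id=100)
mgr.attach_virtual_disk("app1")
print(mgr.servers["app1"])  # → {'vlan': 100, 'luns': [0]}
```

Because the network and storage identities live in the manager rather than in the physical box, re-allocating them to another virtual server is a bookkeeping change, which is exactly the consolidation and re-allocation benefit the concept promises.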
At this point in time, virtualisation manager software (like the IBM Virtual
I/O Server) is available to orchestrate the functioning of such environments.
However, virtual I/O is still in its initial stages, and has a long way to go
before it can really make a difference to the virtualised world. Till then, it
is better to go one virtualisation step at a time. You could consider starting
with server virtualisation.