Results from SPECvirt, the independent, industry-standard virtualisation benchmark conducted by the Standard Performance Evaluation Corporation (SPEC), show that performance and scalability advantages no longer reside with a single company. The basic functionality that most organisations consistently use can be found in most offerings on the market today.
George DeBono, Red Hat's General Manager for the Middle East & Africa, shares his views on what will happen with virtualisation in the year ahead.
"Expect the potential for a multi-hypervisor datacentre to become a reality in 2013," he says.
"Just as some companies have found that single-vendor strategies for operating systems or hardware do not make sense for them in an agile world, the era of a single vendor for virtualisation is over."
Linux and open source increasingly drive next wave of cloud and virtualisation innovation and adoption
The first wave of virtualisation in most organisations was driven by Windows server consolidation. Windows servers historically were dedicated to one mission-critical application—companies would not run Exchange™ on the same server as their SQL Server database, for example. As such, servers were sized for peak workloads and many were underutilised.
The resulting server sprawl made management difficult, increased administrative and facility costs, and left wasted resources unavailable to the organisation without risking critical workloads. Consolidating these Windows servers made a lot of sense.
Linux workloads, on the other hand, have been more amenable to being mixed on the same server, and Linux servers have typically run at higher utilisation than their Windows equivalents. Analyst research indicates that while Windows workloads are currently 60% virtualised, Linux workloads trail behind at 30% on average; in some organisations they are still mostly on bare metal, likely because they have not needed virtualisation to achieve higher resource utilisation.
"In 2013, we believe Linux workloads will begin representing a greater percentage of the workloads being virtualised," says DeBono. "What do we see driving this? Organisations are now comfortable with virtualisation and are beginning to institute 'virtualisation first' initiatives."
"Also, as hardware is refreshed, even two-socket 1U servers have more CPU and memory than a highly utilised bare metal Linux machine can consume. And, with the continued drive to the cloud, Linux workloads will likely be increasingly deployed both inside the virtualised datacentre as well as outside in the public cloud," he continues.
Coexistence of virtualisation, bare metal and containerisation
When enterprise applications were monolithic, stateful and resource-heavy, it made sense to build a virtualisation strategy around the needs of large, stateful and resource-heavy virtual machines. But as cloud deployment models become more common, applications are being architected as lightweight, stateless, auto-configuring consumers of CPU and memory, with the bulk of storage and network management outsourced to dedicated network and storage virtualisation solutions.
And while virtual machines are a good first step in increasing resource utilisation and manageability, technologies that build virtualisation capability into the operating system itself, known as "containerisation", allow virtual machines or bare-metal servers to be further subdivided, providing application protection and more granular division of resources.
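As one illustration of this more granular division, operating-system containers can be capped at a slice of a host's CPU and memory through the kernel's cgroup controllers. The fragment below is a hypothetical sketch in the style of a 2013-era LXC container configuration; the container name, bridge device and limit values are invented for illustration, not taken from any real deployment.

```
# Hypothetical LXC-style container configuration (sketch, 2013-era key names)
lxc.utsname = web01                      # container hostname (invented)
lxc.network.type = veth                  # virtual Ethernet pair shared with the host
lxc.network.link = br0                   # host bridge the interface attaches to (assumed)

# cgroup limits: carve out a slice of the host rather than a whole machine
lxc.cgroup.memory.limit_in_bytes = 512M  # cap the container's memory at 512 MB
lxc.cgroup.cpu.shares = 512              # half the default CPU weight of 1024
lxc.cgroup.cpuset.cpus = 0-1             # confine the container to two host cores
```

Because the limits are ordinary cgroup settings, the same approach works whether the host is bare metal or itself a virtual machine, which is what lets the two models coexist.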
In 2013, DeBono also expects Platform-as-a-Service (PaaS) offerings to become more prevalent, taking advantage of bare-metal, virtual-machine and containerised operating systems to allow an organisation to make the best use of its resources to deliver enterprise applications.