Intel is incorporating hardware-assisted virtualization technology that goes beyond the server consolidation phase typified by the groundbreaking software VMware released in 1999 and by later entrants such as OnApp Xen and Linux KVM.
Intel calls this phase Virtualization 2.0, representing fundamental advances in how cloud data centers handle load balancing, high availability and disaster recovery.
Some of Intel's specific inventions include Virtual Machine Device Queues (VMDQs), which offer roughly twice the network throughput of usage models that rely solely on software for virtualization.
According to Intel, this translates into 9 Gbps of throughput versus the 4 Gbps found in software-only solutions. VMDQs reduce virtual machine overhead on a given server host and boost performance.
Intel then works closely with vendors such as VMware and Red Hat to make sure their latest software can leverage these hardware innovations.
Managed cloud host providers such as Virtual Internet make use of both Intel's hardware and vendor software from VMware to deploy highly scalable Infrastructure-as-a-Service (IaaS) to enterprise customers in the UK and US.
Intel closely studies the data center strategies of web hosts such as Virtual Internet to build usage models that tap into the power of Virtualization 2.0.
For Intel, research is focused on three particular usage models, which go well beyond the initial server consolidation phase that cloud computing has employed so effectively.
Dynamic load balancing
A virtualization manager migrates virtual machines from a busy host to an idle host on the network. The trigger may be heavy traffic as, for example, enterprise staff or customers log in to applications in the morning, versus gentler traffic by lunchtime. These spikes require careful management of virtual machine resources across the various nodes in the network.
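The rebalancing idea above can be sketched in a few lines of Python. This is a minimal illustration, not Intel or VMware code: hosts are modelled as dicts mapping VM names to CPU load, and the 0.8 load threshold is an assumed example value.

```python
def rebalance(hosts, capacity, threshold=0.8):
    """Live-migration sketch: while any host is loaded above the
    threshold, move its smallest VM to the least-loaded host.

    hosts:    {host_name: {vm_name: cpu_units}}
    capacity: {host_name: total cpu_units}
    Returns the list of migrations performed as (vm, src, dst).
    """
    def load(h):
        return sum(hosts[h].values()) / capacity[h]

    migrations = []
    for _ in range(100):  # bounded to avoid oscillation in edge cases
        busy = max(hosts, key=load)
        idle = min(hosts, key=load)
        if load(busy) <= threshold or busy == idle or not hosts[busy]:
            break
        vm = min(hosts[busy], key=hosts[busy].get)  # smallest VM first
        hosts[idle][vm] = hosts[busy].pop(vm)       # "live migrate" it
        migrations.append((vm, busy, idle))
    return migrations
```

A real virtualization manager would of course weigh memory, I/O and affinity rules as well, but the busy-to-idle migration loop is the core of the model.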
High availability
If a particular host server deploying multiple virtual machines goes down, the technology needs to assist in automatically starting the VMs on another node in the network with minimal downtime. This is often referred to as the 'self-healing' power of the cloud typified in a web hosting data center.
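The failover step can be sketched as follows. Again this is an illustrative Python model under the same assumed representation (hosts as dicts of VM CPU loads), not a vendor API: each VM on the failed node is restarted on the surviving host with the most free capacity.

```python
def failover(hosts, capacity, failed):
    """Self-healing sketch: restart every VM from a failed host on the
    surviving node with the most free capacity, largest VMs first,
    then drop the failed host from the pool.

    Returns the placements made as (vm, new_host).
    """
    survivors = [h for h in hosts if h != failed]

    def free(h):
        return capacity[h] - sum(hosts[h].values())

    placements = []
    for vm, cpu in sorted(hosts[failed].items(), key=lambda kv: -kv[1]):
        target = max(survivors, key=free)  # most headroom wins
        hosts[target][vm] = cpu            # "restart" the VM there
        placements.append((vm, target))
    del hosts[failed]
    return placements
```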
Site-to-site recovery
Site-to-site recovery is a very specific usage model and is considered an advanced model enabled by the tenets of virtualization. In this scenario, a cluster of servers powering virtual machines on a network is replicated at another site, which is dynamically updated on a regular basis. If an outage occurs in the primary data center, it generally takes a little longer to get up and running on the secondary site because of the distance involved.
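The periodic replication at the heart of this model can be shown in miniature. This Python sketch treats a site's state as a plain nested dict (an assumption for illustration); because copies happen only at each cycle, anything changed on the primary after the last cycle would be lost in a failover, which is why recovery on the secondary lags.

```python
import copy

def replicate(primary, secondary):
    """One site-to-site replication cycle: copy the primary site's VM
    state to the secondary site. Run periodically; the secondary is
    always as stale as the time since the last cycle."""
    secondary.clear()
    secondary.update(copy.deepcopy(primary))  # deep copy: no shared state
```

Running `replicate` on a schedule and then mutating the primary demonstrates the replication lag: the secondary keeps the last synced version until the next cycle.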
Power optimization
One of the interesting breakthroughs in Intel hardware-assisted virtualization is something referred to as power optimization. It is the inverse of the load balancing described above. Picture two physical servers, each running a collection of virtual machines. If Server B is underutilized, perhaps running only two virtual machines, it is possible to live migrate those two VMs to Server A and power down Server B. The result is a saving in energy, power and cost from shutting down the now-idle physical Server B.
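The consolidation decision reads naturally as a first-fit check, sketched below in the same assumed dict representation (again an illustration, not Intel's implementation): only if every VM on the least-loaded host fits elsewhere is the migration performed and the host emptied for power-down.

```python
def consolidate(hosts, capacity):
    """Power-optimization sketch: if all VMs on the least-loaded host
    fit on the remaining hosts (first-fit on CPU), migrate them and
    retire that host. Returns (powered_down_host, migration_plan),
    or None if consolidation is not possible."""
    def used(h):
        return sum(hosts[h].values())

    src = min(hosts, key=used)                  # emptiest server
    trial = {h: used(h) for h in hosts if h != src}
    plan = {}
    for vm, cpu in hosts[src].items():
        target = next((h for h in trial if trial[h] + cpu <= capacity[h]), None)
        if target is None:
            return None                         # doesn't fit; keep src running
        trial[target] += cpu
        plan[vm] = target
    for vm, target in plan.items():             # commit the migrations
        hosts[target][vm] = hosts[src][vm]
    del hosts[src]                              # src can now be powered off
    return src, plan
```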
Thus, while load balancing is concerned with an over-utilized server potentially delivering slower service, power optimization focuses on minimizing operating costs and making sure all resources in the network are used efficiently.
In conclusion, as more and more mission-critical applications are pushed into the virtualized arena of the cloud-computing model, it becomes critical to optimize the underlying hardware that IaaS represents.
This Virtualization 2.0 phase requires that both the hardware and the software work together to deliver the full power of Infrastructure-as-a-Service (IaaS) to enterprise customers seeking to move their data centers into the cloud offered by providers such as Virtual Internet.
This article was brought to you by VI.net. For dedicated server hosting, cloud servers and 24/7 support, visit www.vi.net