Virtually anything is impossible
A lot of people seem to have a subconscious opinion about the word “virtual”. It is used in many different contexts and often carries a slightly negative connotation, probably because the “real thing” is supposed to be so much better. We have virtual memory, virtual reality, virtual private networks, virtual tours, virtual particles, virtual worlds, virtual sex, virtual machines, virtual artifacts, virtual circuits and so on. The list could be much longer, but you get the idea. Most of these virtual entities are not considered as good as their physical counterparts.
People who are not very familiar with the current state of virtualization and CPU technology may have heard that virtual machines are not as good as physical ones and should only be used for “low hanging fruit”, or perhaps just for testing and development. As a consultant I meet people from time to time who have systems that are so special that they will not consider running them inside virtual machines. They are concerned about performance. Some are concerned about security. Or availability. Latency. Stability. Yes, there is an endless list of things people worry about. One ISV even told me that their application suite wouldn’t work at all in a virtualized environment.
Let’s take a closer look at the current state of these issues with the latest technology:
Performance & Latency
Project VRC revealed in their newly released report that by virtualizing a 32-bit XenApp farm on vSphere 4.0 with a vMMU (RVI/EPT) capable CPU, you can get almost twice as many users on the system as they did last year (with the software and hardware available back then).
VMware has also shown that with an extra powerful storage system (3x CX4-960 arrays with solid state drives), a single ESX host running only three virtual machines can achieve over 350,000 random IOPS, with latency below 2 ms. Even though such a storage setup is highly unusual in today’s data centers, it shows that the hypervisor is not the limiting factor.
This still doesn’t mean you should skip tuning your application load where needed.
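As a sanity check on those numbers, Little’s law relates throughput, latency and the number of outstanding I/Os. A quick back-of-the-envelope calculation, using only the figures quoted above (the even split across three VMs is an illustrative assumption):

```python
# Back-of-the-envelope check of the benchmark figures using Little's law:
# outstanding I/Os = throughput (IOPS) * average latency (seconds).
# The IOPS and latency values are the ones quoted in the text;
# the even per-VM split is an assumption for illustration only.

iops = 350_000          # aggregate random I/Os per second from one ESX host
latency_s = 0.002       # reported latency: below 2 ms

outstanding_ios = iops * latency_s
print(f"Implied outstanding I/Os on the host: {outstanding_ios:.0f}")  # 700

# With three VMs sharing the load evenly, each keeps roughly this many
# I/Os in flight:
print(f"Per VM (3 VMs): {outstanding_ios / 3:.0f}")
```

In other words, sustaining that throughput at that latency implies on the order of 700 I/Os in flight at once, which is why a solid-state-backed array was needed; the hypervisor itself kept up fine.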
Security
It’s no secret that a large part of the technology VMware uses is derived from research done at Stanford University by VMware founder Mendel Rosenblum and his associates. A few years ago a research paper was released titled “A Virtual Machine Introspection Based Architecture for Intrusion Detection”. This technology is available today as an API for third-party vendors to integrate with. It allows monitoring of CPU, memory, disk and network; yes, you can monitor all four core resources with this API, which is known as VMsafe.
Several established security vendors, such as RSA (EMC), Check Point, ISS (IBM), Trend Micro and others, have partnered with VMware and deliver security solutions that support part or all of this stack. This means you can have all your VMs firewalled even if they live on the same subnet in the same VLAN. It also means you can detect malware infecting a VM even if it has no antimalware agent installed. Protecting VMs on the same subnet with a switching firewall is possible in the physical world too, if you use Crossbeam or similar equipment, but those devices are not known to be cheap.
A diagram from one of these products shows how such a setup can help secure an environment:
These kinds of security protections are better than the real thing. In the physical world there is no way to look into the memory or CPU of a computer without installing an agent inside the operating system.
Even without a third-party product you can get better protection than in the physical world. If you have a DMZ network with several hosts, these can normally reach each other over the network (they are not protected from each other), so the common solution is many VLANs separating the services, which makes the setup more complex to manage. With VMware’s distributed switches you can instead establish private VLANs in “isolated” mode. VMs on the same isolated PVLAN can then only communicate with hosts outside the local segment, through the promiscuous port (typically the router or firewall); the neighboring DMZ server in the same subnet is invisible and inaccessible. This makes the virtualized networking better than the real thing.
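The isolated-PVLAN forwarding rule described above boils down to a very small decision table. The following is a toy model, not VMware’s implementation, using just the two port types relevant to this DMZ scenario:

```python
# Toy model (NOT VMware's implementation) of the private-VLAN forwarding
# rule for a DMZ. Port types:
#   "promiscuous" - the uplink toward the router/firewall
#   "isolated"    - DMZ servers that must not see each other

def pvlan_allows(src_type: str, dst_type: str) -> bool:
    """Return True if a frame may be forwarded between these port types."""
    # Traffic to or from the promiscuous port is always allowed,
    # so every VM can still reach its gateway.
    if src_type == "promiscuous" or dst_type == "promiscuous":
        return True
    # Isolated ports never talk to each other directly.
    return False

# Two DMZ servers on the same isolated PVLAN cannot reach each other,
# but both can still reach the gateway:
assert not pvlan_allows("isolated", "isolated")
assert pvlan_allows("isolated", "promiscuous")
```

The point of the model is that you get per-host isolation on a single subnet from one rule, instead of managing a separate VLAN per service.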
Stability & Availability
VMware’s hypervisor has been around for almost 10 years now. ESX runs a relatively small kernel (the vmkernel) that is known to be “dead stable”. The only times I have seen it have problems is when there is a faulty hardware component.
In addition to this, VMware will load balance your workload across all of your hosts with DRS. With FT, VMware runs a single workload on two separate hosts to survive a hardware fault. With VMware HA, the VMs that were running on a dead host are restarted on other hosts in your cluster. With VMware SRM, your whole infrastructure is taken care of in case of a total datacenter crash. All this without configuring anything special inside any of the VMs. Traditional clustering typically has quite a large administrative overhead, is complex to set up and (almost) impossible to test. With SRM you can even test your DR plan while the rest of your environment is up and running.
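The HA behavior described above is easy to picture as a small simulation. This is a conceptual sketch, not VMware code, and the simple round-robin placement is my own stand-in for HA’s actual placement logic (host names and VM names are made up):

```python
# Conceptual sketch (NOT VMware code) of what HA does after a host failure:
# the failed host's VMs are restarted on the surviving hosts in the cluster.
# Round-robin placement here is an illustrative simplification.

def ha_failover(cluster: dict, failed_host: str) -> dict:
    """Redistribute the failed host's VMs across the surviving hosts."""
    survivors = [h for h in cluster if h != failed_host]
    if not survivors:
        raise RuntimeError("no surviving hosts to restart VMs on")
    new_cluster = {h: list(vms) for h, vms in cluster.items()
                   if h != failed_host}
    for i, vm in enumerate(cluster[failed_host]):
        target = survivors[i % len(survivors)]
        new_cluster[target].append(vm)   # HA powers the VM back on here
    return new_cluster

# Hypothetical three-host cluster; esx1 dies with two VMs on it.
cluster = {"esx1": ["web1", "db1"], "esx2": ["web2"], "esx3": []}
after = ha_failover(cluster, "esx1")
print(after)   # web1 and db1 now live on the surviving hosts
```

The key point the sketch illustrates is the one in the text: the guests themselves need no cluster configuration at all, because the restart decision is made entirely at the infrastructure layer.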
Conclusion
All of this makes a virtualized environment much better than a physical one. Does this mean you should virtualize 100% of your workload? No, there are still a few exceptions, but typically 97% of systems can be virtualized without any issues. Existing customers are not running their critical systems on VMware in spite of virtualization; they are running their critical systems on VMware because of the extra benefits virtualization gives their infrastructure in terms of HA, DR, security and management.