An Introduction to Virtualization
by Liz van Dijk on October 28, 2008 2:00 AM EST - Posted in IT Computing
Baby Steps Leading to World-Class Innovations
Many "little" problems have called for companies like VMware and Microsoft to develop software throughout the years. As technology progresses, several hardware types become defunct and are no longer manufactured or supported. This is true for all hardware classes, from server systems to those old yet glorious video game systems that are collecting dust in the attic. Even though a certain architecture is abandoned by its manufacturers, existing software may still be of great (or perhaps sentimental) value to its owners. For that reason alone, virtualization software is used to emulate the abandoned architecture on a completely different type of machine.
A fairly recent example of this (besides the obvious video game system emulators) is found integrated into Apple's OS X: Rosetta. Using a form of real-time binary translation, it rewrites applications written for the PowerPC architecture so that they behave like native x86 applications. This allows a large amount of software that would otherwise have to be recompiled to survive an otherwise impossible change in hardware platform, at the cost of some performance.
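To give a rough idea of what happens under the hood, below is a minimal Python sketch of dynamic binary translation: guest instructions are translated once into equivalent host operations, the result is cached, and later runs simply execute the cached host code. This is emphatically not how Rosetta is implemented; the "guest" instruction format, register names, and caching scheme are invented solely to illustrate the concept.

# Toy sketch of dynamic binary translation -- NOT Rosetta's actual design.
# The "guest" instruction format and register names are invented for illustration.
guest_block = [
    ("addi", "r3", "r3", 5),   # r3 = r3 + 5
    ("addi", "r4", "r3", 1),   # r4 = r3 + 1
]

translation_cache = {}         # block id -> list of translated host operations

def translate(block_id, block):
    """Translate a guest block into host operations once, then reuse the result."""
    if block_id not in translation_cache:
        host_ops = []
        for op, dst, src, imm in block:
            if op == "addi":
                # Emit an equivalent host operation (here, a plain Python closure).
                host_ops.append(lambda regs, d=dst, s=src, i=imm: regs.update({d: regs[s] + i}))
        translation_cache[block_id] = host_ops
    return translation_cache[block_id]

def run(block_id, block, regs):
    """Execute the (cached) translation of a guest block against a register file."""
    for host_op in translate(block_id, block):
        host_op(regs)
    return regs

print(run(0, guest_block, {"r3": 10, "r4": 0}))   # {'r3': 15, 'r4': 16}

A real translator of course operates on actual machine code and has to handle details like endianness, condition codes, and self-modifying code, which is where much of the performance cost mentioned above comes from.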
Hardware platforms have not been the only things to change, however; changes in both desktop and server operating systems might force a company to run older versions of an OS (or even a completely different one) to keep using software that struggles with compatibility issues. Likewise, developers need securely isolated environments in which to test their software without compromising their own systems.
The market met these demands with products like Microsoft's Virtual PC and VMware Workstation. Generally, these solutions offer no emulation of a defunct platform, but rather an isolated environment of the same architecture as the host system. However, exceptions do exist (Virtual PC for the Mac OS emulated the x86 architecture on a PowerPC CPU, allowing virtual machines to run Windows).
Putting the results of these methods together has led to a solution for a problem quietly growing in many a company's server room. While the development of faster and more reliable hardware was kicked up a notch, a lot of the actual server software lagged behind, unable to make proper use of the enormous quantity of resources suddenly available to it. Companies were left with irreplaceable but badly aging hardware, or brand new servers suffering from very inefficient resource usage.
A new question emerged: Would it be possible to consolidate multiple servers onto a single powerful hardware system? The industry's collective answer: "Yes it is, and ours is the best way to do it."
14 Comments
Ralphik - Wednesday, October 29, 2008 - link
Hello everybody, I have installed a virtual Win98 on my computer, which is running WinXP. The problem I have is that there are no GeForce7 and higher drivers available for such old Windows platforms - has anyone got a tip or a cracked driver that I could use? It now has a completely useless S3 Virge driver installed...
Jovec - Friday, October 31, 2008 - link
Unless I'm missing something (new), your Win98 running in your VM will not see your GeForce video card, or indeed any of the actual hardware in your computer. It just sees the virtual hardware provided by your VM software - typically an emulated basic VGA video adapter and AC'97 sound. VM software emulates an entire virtual computer on your host PC, but does not use the physical hardware natively. In short, you are not going to get GeForce-level graphics power in your Win98 VM.
stmok - Wednesday, October 29, 2008 - link
"Could it be that these two pieces of software are using related techniques for their 3D acceleration? Stay tuned, as we will definitely be looking into this in further research!"=> Parallels took Wine's 3D acceleration component. More specifically, they took the translator that allowed one to translate OpenGL calls to DirectX and vice versa.
There was a minor issue about this because Parallels was not initially compliant with Wine's open source license, but that was settled when Parallels complied with the LGPL two weeks later.
=> http://parallelsvirtualization.blogspot.com/2007/0...
=> http://en.wikipedia.org/wiki/Parallels_Desktop_for...
What annoys me is that they never bothered adding 3D acceleration support to the Linux version of Parallels. The only option there is the most recent release of VMware Workstation (version 6.5 incorporates technology from their VMware Fusion product).
duploxxx - Tuesday, October 28, 2008 - link
btw is this a teaser for the long announced virtualization performance review?
Vidmo - Tuesday, October 28, 2008 - link
I was hoping this article would get into some of the latest hardware technologies designed for better virtualization. It's still quite confusing trying to determine which hardware platforms and CPUs support VT-d, for example. The article is a nice software overview, but seems incomplete without getting into the hardware side of the issues.
solusstultus - Tuesday, October 28, 2008 - link
Hardware support for VT is not used by most (any?) commercial hypervisors (VMware doesn't use it) and has been shown to actually have lower performance in many cases than binary translation: http://www.vmware.com/pdf/asplos235_adams.pdf
duploxxx - Tuesday, October 28, 2008 - link
unfortunately your link is 2 years old. The current recommendation for VMware ESX is to use the hardware virtualization layer whenever you run a 64-bit OS, and always when second-level hardware virtualization (aka NPT from AMD; EPT when Intel launches Nehalem next year) is available.
solusstultus - Wednesday, October 29, 2008 - link
While I don't claim to be an expert, that's the most recent study I have seen that actually lists performance results from both techniques. If you have seen more recent results, do you have a link? I would be interested in reading it.
From what I have seen, NPT addresses the overhead associated with switching from the guest to the VMM during page table updates (which can occur frequently when using small pages). However, the other main source of overhead cited in the paper I referenced was traps into the VMM on system calls, which can be replaced by less expensive direct links to VMM routines in translated code. So unless the newer hardware-assisted virtualization implementations address this (they might, I haven't looked at the documentation), it seems translation could still be potentially faster for some apps, and that an ideal implementation would make use of both in different situations.
Vidmo - Tuesday, October 28, 2008 - link
Ahh, I somehow missed the link to your hardware article: http://it.anandtech.com/IT/showdoc.aspx?i=3263&...
Very well done. Would it be possible to update that article to reflect VT-d and possibly VT-i technologies as well?
LizVD - Tuesday, October 28, 2008 - link
Thanks for the input! The real purpose of this article was to provide a "beginner-safe" intro to the things we have been discussing on AnandTech IT for the past couple of months, so in-depth discussion of each of the technologies is something we avoided on purpose, to keep the focus on the basic differences without getting carried away.
Your question is an interesting one, however, and of the sort we'd like to properly address in our blogs, so keep an eye on them, as we'll be looking into it.