This chapter wasn’t quite my cup of tea. I am much more interested in software architecture in the form of design patterns than in the kind that straddles the boundary between software and hardware. Hopefully there are still more interesting chapters ahead.
I did find it interesting how the concept of mutual distrust between the guest VMs and the hypervisor host helped improve the reliability of the software, as opposed to the mutual trust relationship necessitated by grid computing.
Allowing guest operating systems to perform privileged operations when running on top of a hypervisor is a challenge because the operating system is no longer at the most privileged level, so the guest OSes cannot successfully execute many crucial low-level instructions. Before Xen, when a guest VM executed a privileged operation, some instructions would fail in a way the hypervisor could trap: it would intercept the instruction, carry out the request correctly, and return control to the VM. However, many privileged instructions failed silently, never giving the hypervisor a chance to trap and complete the operation, which caused the guest VM to fail. This meant hypervisors had to scan guest code at run time and rewrite the silently-failing privileged calls so that they went directly to the hypervisor. For the operations that trapped cleanly, hypervisors presented adapters to the guest operating systems that looked exactly like the physical hardware, then translated each operation and issued it to the real hardware in the proper way. Xen handles the problem differently: the guest operating systems are made aware that they are running in a virtual machine, so they communicate directly with Xen whenever they need a privileged operation performed. This meant that out-of-the-box OSes couldn’t run on Xen; their source had to be modified to be compatible.
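To make the contrast concrete, here is a minimal sketch of what paravirtualization looks like at the source level. The function names and the hypercall number are hypothetical, not Xen’s actual interface; the point is only that the guest’s privileged instruction gets replaced with an explicit call into the hypervisor (historically a software interrupt, int 0x82, on 32-bit x86 Xen):

```c
/* Illustrative sketch only: names and numbers are hypothetical,
 * not Xen's real API. A classic x86 OS loads a new page-table base
 * by writing CR3 directly, which is a privileged instruction. */
static inline void native_set_pagetable(unsigned long pfn)
{
    /* Ring-0-only instruction; in a deprivileged guest this would
     * fault, or, pre-hardware-assist, sometimes fail silently. */
    asm volatile("mov %0, %%cr3" :: "r"(pfn << 12) : "memory");
}

#define HCALL_SET_PAGETABLE 14   /* hypothetical hypercall number */

/* Stand-in for the trap into the hypervisor. */
extern long hypercall2(int nr, unsigned long a1, unsigned long a2);

/* A paravirtualized guest is modified at the source level to ask the
 * hypervisor instead; the hypervisor validates the request and
 * performs the real CR3 write on the guest's behalf. */
static inline void pv_set_pagetable(unsigned long pfn)
{
    hypercall2(HCALL_SET_PAGETABLE, pfn, 0);
}
```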
One of the primary concerns when designing Xen was separating policy from mechanism. To achieve this, a special “domain 0” is started alongside the hypervisor. It runs on top of the hypervisor like the guest operating systems but is allowed to perform privileged operations not offered to them. The policy lives in domain 0, which operates at a higher level than the hypervisor and handles many calls from the guest operating systems; the mechanism lives in the thin, simple hypervisor. The example the author provides is the initialization of a new virtual machine: domain 0 does most of the heavy lifting involved in setup and configuration, while the hypervisor simply receives commands from domain 0 to set up a new domain and allocate some memory to the new VM.
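The split is easier to see in code. Below is a hedged sketch, with entirely hypothetical function names rather than Xen’s real domain-control interface, of how a domain-0 toolstack might drive domain creation. All the policy (how much memory, which kernel image) sits in dom0; the hypervisor exposes only a few blunt primitives:

```c
#include <stdint.h>

/* Hypervisor side: small, generic primitives (the mechanism).
 * These hypercall wrappers are hypothetical stand-ins; only
 * domain 0 would be allowed to invoke them. */
extern int hyp_create_domain(uint32_t *domid_out);
extern int hyp_set_max_memory(uint32_t domid, uint64_t max_kb);
extern int hyp_populate_memory(uint32_t domid, uint64_t nr_pages);

/* Hypothetical dom0 helper; kernel parsing happens in dom0, not Xen. */
extern void load_kernel_into_domain(uint32_t domid, const char *path);

/* Domain 0 side: the toolstack holds the policy and drives the
 * hypervisor through a few simple commands. */
int build_guest(uint64_t mem_kb, const char *kernel_path)
{
    uint32_t domid;

    if (hyp_create_domain(&domid) != 0)
        return -1;
    if (hyp_set_max_memory(domid, mem_kb) != 0)
        return -1;
    if (hyp_populate_memory(domid, mem_kb / 4) != 0)  /* 4 KiB pages */
        return -1;

    /* Loading the kernel image, wiring up virtual devices, and all
     * other configuration is dom0's job, keeping the hypervisor thin. */
    load_kernel_into_domain(domid, kernel_path);
    return 0;
}
```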
Eventually chip manufacturers added hardware support for virtualization, so that when a guest executes a privileged operation, even from the highest privilege level inside the VM, the hypervisor is able to trap and intercept it rather than letting it fail silently. The fact that Xen is open source helped during this transition in a few ways. For one, Intel and AMD were able to contribute low-level patches to Xen so that it would work with their new hardware. Also, Xen was able to make use of other open source applications to emulate BIOSes and create virtualized hardware interfaces.
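These hardware extensions are advertised through CPUID on x86, so a quick way to check whether a machine has them is something like the following (using GCC/Clang’s <cpuid.h>; note that firmware can still disable the feature even when the bit is set):

```c
#include <cpuid.h>
#include <stdio.h>

/* Detect the hardware-virtualization extensions described above.
 * CPUID.1:ECX bit 5 advertises Intel VT-x (VMX);
 * CPUID.0x80000001:ECX bit 2 advertises AMD-V (SVM). */
int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
        printf("Intel VT-x (VMX) supported\n");

    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) &&
        (ecx & (1u << 2)))
        printf("AMD-V (SVM) supported\n");

    return 0;
}
```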
The IOMMU is one of the most recent forms of hardware support for virtualization. It allows shared access to hardware to be multiplexed at the hardware level, without the need for processing in Xen à la shadow page tables. The IOMMU ensures that VMs can only see and access the hardware addresses that belong to them.
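As a rough mental model, the IOMMU behaves like a page table applied to device DMA. The sketch below is a toy software model, not a real driver: it just shows the translate-and-check step the hardware performs on each device access, which is what keeps one VM’s device from reaching another VM’s memory.

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define IOMMU_TABLE_SIZE 1024   /* toy size; real tables are multi-level */

/* Toy per-device translation table: one entry per device-visible page,
 * mapping a guest/device address to a host-physical frame. */
struct iommu_entry {
    uint64_t host_pfn;   /* host physical frame the device may touch */
    bool     present;    /* is this device page mapped at all? */
    bool     writable;
};

static struct iommu_entry table[IOMMU_TABLE_SIZE];

/* Translate a device DMA address; return -1 to model the IOMMU
 * blocking an access outside the VM's assigned memory. */
int64_t iommu_translate(uint64_t dev_addr, bool write)
{
    uint64_t pfn = dev_addr >> PAGE_SHIFT;

    if (pfn >= IOMMU_TABLE_SIZE || !table[pfn].present)
        return -1;                       /* unmapped: fault the DMA */
    if (write && !table[pfn].writable)
        return -1;                       /* permission violation */

    return (int64_t)((table[pfn].host_pfn << PAGE_SHIFT)
                     | (dev_addr & ((1u << PAGE_SHIFT) - 1)));
}
```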