System-level virtualization has received renewed interest due in large part to its potential for improved system utilization, enhanced system management capabilities, and use with proactive fault tolerance. The virtual machine monitor (VMM), or hypervisor, manages the execution of the virtual machines (VMs). Therefore, an interruption to the hypervisor affects all active VMs and requires all VM state to be saved (or migrated). However, in some instances it would be beneficial to enable (and disable) hypervisor features at runtime. For example, debugging or performance-monitoring features of the hypervisor could be enabled on demand without having them active at all times. More generally, the ability to briefly modify functionality at runtime avoids a full system shutdown, recompile, and reboot cycle.
A common method for providing such dynamic extensibility is through loadable modules. For example, the Linux operating system provides this functionality via its modules subsystem, known as Loadable Kernel Modules (LKMs). This facility allows a privileged user to build and load/unload shared object files (modules) at runtime.
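As a rough userspace analogue of this load/unload cycle, the sketch below uses Python's `importlib` to construct a module from source at runtime, use it, and discard it. The module name `trace_hook` and its contents are purely illustrative; this is not the LKM mechanism itself, only an illustration of runtime loadability.

```python
import importlib.util
import sys
import types

def load_module(name: str, source: str) -> types.ModuleType:
    """Compile `source` into a fresh module and register it at runtime."""
    spec = importlib.util.spec_from_loader(name, loader=None)
    mod = importlib.util.module_from_spec(spec)
    exec(compile(source, f"<{name}>", "exec"), mod.__dict__)
    sys.modules[name] = mod
    return mod

def unload_module(name: str) -> None:
    """Remove the module from the registry so it can be reclaimed."""
    sys.modules.pop(name, None)

# Hypothetical "module" that adds a debug-tracing hook at runtime.
mod = load_module("trace_hook", "def trace(msg):\n    return 'TRACE: ' + msg\n")
print(mod.trace("vm started"))   # -> TRACE: vm started
unload_module("trace_hook")
```

As with an LKM, the key property is that no restart of the host program (here, the Python interpreter) is needed to add or remove functionality.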
Our work provides a new hypervisor mechanism for loading dynamic shared objects (modules) at runtime. These Loadable Hypervisor Modules (LHMs) are modeled after the loadable modules used in Linux. We anticipate LHMs will be useful for further research into system-level virtualization, e.g., to support dynamic debug tracing or performance analysis.
The current implementation is based on the Xen hypervisor, and we therefore use some Xen terminology when discussing the current approach, e.g., the administrative domain (dom0), which runs a modified Linux kernel as the HostOS. Also, we use the term "hypercall" when referring to calls from domains into the hypervisor.
The basic LHM design is modeled after Linux loadable kernel modules (LKMs). The relocatable ELF objects (modules) carry an additional segment marking them as LHM files. The modules subsystem of the HostOS (Xen dom0) detects an LHM and performs the relocations using VMM (Xen) addresses (see Figure). The LHM is then mapped into the hypervisor via a new Xen hypercall. The LHM's memory resources remain in the HostOS (dom0) area, but the region is write-protected to avoid any accidental access from Linux. The hypervisor maintains a list of loaded LHMs; upon unload, the resources are returned to the Linux HostOS and freed for reuse.
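The first step above, recognizing a candidate module object file in the HostOS loader, amounts to inspecting the ELF header. The sketch below checks the ELF magic and the ET_REL (relocatable object) type, which is the form modules are built in. The function name and the in-memory fake header are illustrative assumptions; a real LHM loader would additionally scan the section/segment table for the extra LHM marker segment before adjusting relocations with hypervisor addresses.

```python
import struct

# ELF constants, per the ELF specification.
ELF_MAGIC = b"\x7fELF"
ET_REL = 1  # relocatable object file, the form used for modules

def is_relocatable_elf(data: bytes) -> bool:
    """Check that `data` begins with a relocatable ELF header.

    e_ident occupies the first 16 bytes (magic, class, data encoding,
    version, padding); the 16-bit e_type field follows at offset 16.
    """
    if len(data) < 18 or data[:4] != ELF_MAGIC:
        return False
    (e_type,) = struct.unpack_from("<H", data, 16)  # little-endian on x86
    return e_type == ET_REL

# Minimal fake header for illustration: magic, 12 bytes of e_ident
# remainder, then e_type = ET_REL.
fake_header = ELF_MAGIC + bytes(12) + struct.pack("<H", ET_REL)
print(is_relocatable_elf(fake_header))   # -> True
print(is_relocatable_elf(b"\x00" * 20))  # -> False
```

Only after such a check would the loader proceed to the LHM-specific steps: resolving relocations against hypervisor addresses and issuing the mapping hypercall.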
This project's public releases are available via the download page.
For more information please send email to: srt-contact@ornl.gov.