Systems Group at Georgia Tech

The Virtualization Lab is undertaking a concerted effort to explore virtualization and security technologies for future multicore and distributed platforms. The applications addressed range from high-end codes on single servers to distributed or information-flow systems spanning many trusted or untrusted machines.

We are working with the Xen hypervisor to explore (1) scalability issues for future multicore platforms, (2) new functionality enabled by virtualization technologies, and (3) heterogeneous, many-core machines used for combined computational and communication tasks.

Projects

Self-Virtualized I/O: High Performance, Scalable I/O Virtualization in Multi-core Systems

Virtualizing I/O subsystems and peripheral devices is an integral part of system virtualization. This project proposes a hypervisor-level abstraction, self-virtualized I/O (S-VIO), that permits guest virtual machines to efficiently exploit the multi-core nature of future machines when interacting with virtualized I/O. The concrete instance of S-VIO developed in this project (1) provides virtual interfaces to an underlying physical device, the network interface, and (2) manages the way in which the device's physical resources are used by guest operating systems. The performance of this instance differs markedly depending on design choices that include (a) how the S-VIO abstraction is mapped to host- vs. device-resident resources, (b) the manner and extent to which it interacts with the hypervisor, and (c) its ability to flexibly leverage the multi-core nature of modern computing platforms.
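
The virtual-interface idea in (1) can be sketched as per-guest descriptor rings in memory shared with the device-resident cores. The structure and function names below (`svio_vif`, `svio_send`) are illustrative assumptions, not the project's actual API:

```c
/* Minimal sketch of an S-VIO virtual interface (VIF): one send ring
 * and one receive ring per guest VM, polled by a device-resident core
 * so the data fast path needs no hypervisor call. Illustrative only. */
#include <assert.h>
#include <stdint.h>

#define SVIO_RING_SLOTS 8u  /* power of two so masking wraps indices */

/* One descriptor: guest-physical address and length of a buffer. */
struct svio_desc {
    uint64_t gpa;
    uint32_t len;
};

/* Single-producer/single-consumer ring shared between a guest and
 * the core that processes its traffic. */
struct svio_ring {
    struct svio_desc slots[SVIO_RING_SLOTS];
    unsigned head;   /* next slot the producer fills  */
    unsigned tail;   /* next slot the consumer drains */
};

/* A virtual interface owned by one guest domain. */
struct svio_vif {
    int domid;
    struct svio_ring tx;   /* guest -> device */
    struct svio_ring rx;   /* device -> guest */
};

/* Guest side: post a transmit buffer; returns -1 when the ring is
 * full, providing back-pressure to the guest. */
int svio_send(struct svio_vif *vif, uint64_t gpa, uint32_t len)
{
    struct svio_ring *r = &vif->tx;
    if (r->head - r->tail == SVIO_RING_SLOTS)
        return -1;
    r->slots[r->head % SVIO_RING_SLOTS] = (struct svio_desc){ gpa, len };
    r->head++;                /* publish after the descriptor write */
    return 0;
}

/* Device side: drain one descriptor; returns 0 if the ring is empty. */
int svio_poll(struct svio_vif *vif, struct svio_desc *out)
{
    struct svio_ring *r = &vif->tx;
    if (r->tail == r->head)
        return 0;
    *out = r->slots[r->tail % SVIO_RING_SLOTS];
    r->tail++;
    return 1;
}
```

Because each guest gets its own rings, the device can dedicate different cores to different VIFs, which is what makes design choice (c) above possible.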

Publications:

  • Himanshu Raj, Karsten Schwan (2007) High Performance and Scalable I/O Virtualization via Self-Virtualized Devices, in HPDC 2007.
  • Himanshu Raj, Ivan Ganev, Karsten Schwan, Jimi Xenidis (2006) Self-Virtualized I/O: High Performance, Scalable I/O Virtualization in Multi-core Systems. CERCS tech report GIT-CERCS-06-02.
  • Himanshu Raj, Karsten Schwan, Ada Gavrilovska, Sanjay Kumar, Radhika Niranjan (2006) Design of a self virtualizing network interface with the IXP2400 network processor. Intel Embedded and Communications Education Summit, 2006.
  • Himanshu Raj, Karsten Schwan (2005) Implementing a Scalable Self-Virtualizing Network Interface on an Embedded Multicore Platform. Workshop on Interaction between Operating System and Computer Architecture (in conjunction with IISWC), 2005, Austin, TX.

Cellule: Lightweight Execution Environments for Virtualized Accelerators

The increasing prevalence of accelerators is giving rise to heterogeneous multi-core platforms consisting of both general purpose and specialized cores. The successful use of such platforms depends on programming and execution models that exploit their hardware to satisfy the performance requirements of applications. This project describes an accelerator execution environment built by evaluating and extending a virtualization environment designed for using the Cell processor as an accelerator. Cellule demonstrates virtualization technologies that can be used to customize the system environment for applications that use accelerators. These include: (1) a hypervisor that creates a lightweight environment in which applications run, (2) a flat address space model optimized for the memory management capabilities of the accelerator, (3) an execution model that simplifies the interrupt delivery structure to reduce the latency of asynchronous events delivered from the accelerator to the host, and (4) guaranteed security and isolation of these applications from the rest of the system. Experimental evaluations of Cellule on compute-intensive workloads such as matrix multiplication and Black-Scholes demonstrate the performance benefits gained over the Linux-based execution environment typically used by Cell applications.
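
Point (2) above can be illustrated with a base+bound translation in place of demand paging; the scheme below is an assumption for illustration, not necessarily the actual Cellule mechanism:

```c
/* Sketch of a flat address space: an accelerator-side address is
 * translated with a single base+bound check rather than a page-table
 * walk, which suits accelerators with simple memory management.
 * Illustrative only; Cellule's real scheme may differ. */
#include <assert.h>
#include <stdint.h>

/* Translate an offset within a flat region of `size` bytes starting
 * at real address `base`. Sets *fault and returns 0 when the access
 * falls outside the region. */
uint64_t flat_translate(uint64_t base, uint64_t size,
                        uint64_t addr, int *fault)
{
    if (addr >= size) {
        *fault = 1;
        return 0;
    }
    *fault = 0;
    return base + addr;
}
```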

Publications:

  • Submitted to VEE 08

Executing with Virtualized Accelerators: Providing for Heterogeneous Virtual Machines

The relentless progress of Moore's Law has periodically inspired major innovations -- both in hardware and software -- at specific points in time to keep performance growth on pace with transistor density. Industry has reached another such point as it encounters intellectual and engineering challenges in the form of power dissipation, the processor-memory performance gap, limits to instruction-level parallelism, slower frequency growth, and rising non-recurring engineering costs. As a consequence, when we consider how the large number of transistors supplied at future technology nodes will be used to sustain performance growth, some trends are inevitable: i) replication of cores, ii) the use of high-volume custom accelerators, since these devices have a small footprint and dramatically lower power consumption, and iii) innovations in memory hierarchies. These trends collectively inspire the development of heterogeneous many-core platforms (HVM): large-scale systems in which homogeneous general purpose cores are intermingled with customized heterogeneous cores -- accelerators -- and diverse memory and cache hierarchies. Such heterogeneity will appear both on chip and in rack-scale and multi-rack-scale systems.

Performance Analysis of Interaction Between CPU and I/O Scheduling in Xen (2006-2007)

Xen is a widely used paravirtualization solution for running multiple operating systems on the same hardware. Although the current domain scheduler in Xen does a good job of allocating physical CPUs to guest domains, the system still suffers a performance hit when the domains perform extensive I/O. In this project, we studied the scheduler's performance by running different benchmarks and concluded with suggestions for improving the overall interaction between I/O and domain scheduling, enabling Xen to provide better QoS guarantees to guest domains.
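
A back-of-the-envelope model shows why extensive I/O hurts under plain time-slicing: an event that arrives just after the driver domain loses the CPU waits for every other runnable domain's slice before it can be serviced. The formula and numbers below are illustrative, not measurements from this study:

```c
/* Worst-case added I/O latency under a time-slicing domain scheduler
 * with no priority boost for I/O events: the driver domain must wait
 * for every other runnable domain to finish its slice. Illustrative
 * model only. */
#include <assert.h>

int worst_case_io_delay_ms(int runnable_domains, int slice_ms)
{
    return (runnable_domains - 1) * slice_ms;
}
```

With a 30 ms slice (the default in Xen's credit scheduler) and four runnable domains, an I/O event can thus wait on the order of 90 ms in the worst case -- the kind of jitter that undermines QoS guarantees.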

Publications:

  • "High Performance Hypervisor Architectures: Virtualization in HPC Systems", A. Gavrilovska, S. Kumar, H. Raj, K. Schwan, V. Gupta et al., 1st Workshop on System-level Virtualization for High Performance Computing (HPCVirt), in conjunction with EuroSys 2007, Lisbon, Portugal, Mar. 2007

Sidecore

This project uses specialized, dedicated cores to improve the performance and scalability of VMMs and applications. The sidecore abstraction is also used to share hardware accelerators among VMs, improving their application performance.
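
The sidecore idea can be sketched as follows: a guest core hands what would otherwise be a synchronous trap into the VMM to a dedicated core via a shared-memory request slot. The request codes and function names below are illustrative assumptions, not the project's actual interface:

```c
/* Minimal sidecore sketch: the guest submits a request in shared
 * memory instead of trapping into the VMM; a dedicated sidecore polls
 * and services it. Illustrative only. */
#include <assert.h>
#include <stdint.h>

enum side_op { SIDE_NOP = 0, SIDE_PTE_UPDATE = 1 };

struct side_req {
    int pending;       /* set by the guest core, cleared by the sidecore */
    enum side_op op;
    uint64_t arg;
    uint64_t result;
};

/* Sidecore side: poll the shared slot and service one pending
 * request. The PTE update is mocked as setting a "present" bit. */
void side_service(struct side_req *q)
{
    if (!q->pending)
        return;
    q->result = (q->op == SIDE_PTE_UPDATE) ? (q->arg | 1) : 0;
    q->pending = 0;    /* completion becomes visible to the guest */
}

/* Guest side: submit a request instead of executing a hypercall. In
 * this single-threaded sketch the guest drives the sidecore's poll
 * loop itself; on real hardware the two loops run on separate cores
 * and the guest could do other work while waiting. */
uint64_t side_call(struct side_req *q, enum side_op op, uint64_t arg)
{
    q->op = op;
    q->arg = arg;
    q->pending = 1;
    while (q->pending)
        side_service(q);
    return q->result;
}
```

Because the sidecore stays in its service loop, VMM state never migrates between cores and guest cores avoid the cost of trapping -- the scalability argument behind the approach.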

Publications:

  • Sanjay Kumar, Himanshu Raj et al. "Re-architecting VMMs for Multicore Systems: The Sidecore Approach". Published in WIOSCA’07.
  • Sanjay Kumar, Ada Gavrilovska, Karsten Schwan, Srikanth Sundaragopalan. "C-CORE: Using Communication Cores for High Performance Network Services". The 4th IEEE International Symposium on Network Computing and Applications (NCA), July 2005.

Net-Channel

The aim is to provide seamless access to I/O devices (both virtualized and directly accessible from VMs) during VM migration, along with mechanisms for device hot-swapping.

Publications:

  • Sanjay Kumar, Sandip Agarwala, Karsten Schwan. "Netbus: A Transparent Mechanism for Remote Device Access in Virtualized Systems". CERCS tech report GIT-CERCS-07-08. Also presented as a poster at USENIX ATC 2007.
  • Sanjay Kumar, Karsten Schwan. "Netchannel: A VMM-level Mechanism for Continuous, Transparent Device Access During VM Migration". Submitted to Usenix VEE 2008.

Management architecture in virtualized environments

This project creates abstractions and mechanisms that provide better management at lower cost in virtualized enterprise systems.

Publications:

  • Sanjay Kumar et al. "M-Channels and M-Brokers: New Abstractions for Co-ordinated Management in Virtualized Systems". Submitted to ASPLOS 2008.

Faculty

  • Mustaque Ahamad (CoC)
  • Henry Owen (ECE)
  • Calton Pu (CoC)
  • Karsten Schwan (CoC)

Students

  • Sanjay Kumar (CoC)

Location

KACB room 3208

Equipment

  • 2 dual core machines in architecture lab
    One with Pentium D (2 logical processors) and one with Pentium EE (extreme ed.) HT enabled (4 logical processors)
  • 2 VT-enabled pre-release machines from Intel which are part of netlab cluster
    One with added TPM module.
  • All machines run Xen hypervisor/Linux 2.6 with RHEL 4.0 distribution.
© 2005-2007 The College of Computing at Georgia Tech :: Atlanta, Georgia 30332

Last Modified: Tuesday, 09-Oct-2007 09:59:39 EDT by Jay Lofstead
