Prof. Karsten Schwan directs
the Center for Experimental Research
in Computer Systems (CERCS) at Georgia Tech. He is also a professor
in the Systems Research Group of the College of Computing at the Georgia
Institute of Technology. His group conducts experimental research targeting
high performance, real-time, and ubiquitous applications. Research topics
include dynamic program adaptation; online program monitoring, tuning,
and steering; task and message scheduling; basic mechanisms and policies
for quality management in operating and communication systems; middleware;
and software tools. This research is conducted on parallel, distributed,
and embedded system platforms, in laboratories shared with end users.
List of current publications.
Publications prior to 2000.
Projects and Laboratories:
The M-Ware project addresses future distributed applications
subject to performance constraints when moving and operating on large data
volumes. Our vision is to develop middleware with which it is easy to
create self-managed -- autonomic -- distributed applications. Such
applications are characterized by their:
1 - agility, their ability to both adapt application components using
novel runtime specialization or composition techniques, and to dynamically
deploy new components and change component structures;
2 - resource- and needs-awareness, dynamic knowledge about current
resource availability and application needs through online monitoring;
3 - runtime management, the use of online quality management policies,
driven by resource monitoring and by assessing current user needs; and
4 - open infrastructure, the ability to inject general and application-specific
adaptation functionality `into' middleware, system, and network levels, to
continually match execution platforms to application behaviors.
The general class of applications addressed by the M-Ware project is
shared with those explored in the Infosphere project at Georgia Tech: we
are focused on information flow rather than computing. Specific examples
studied by our group include sensor-data-driven applications in the
mobile domain with soft real-time or energy constraints (see the MORPH
project), large-data applications for remote science and online
collaboration (see the SmartPointer application), and the event-driven
operational information systems used in large enterprises (in
collaboration with companies like Delta Air Lines, HP, IBM, and Worldspan).
Our prior work focused on the publish/subscribe paradigm for
high performance distributed interactions, creating the PBIO, ECho, JECho,
and IQ-ECho artifacts. Our current work is creating and studying
overlay-level mechanisms to enable the construction of such efficient
large-scale middleware for the more general class of high end information
flow applications. Specific ongoing research includes adaptive methods for
network- or platform-aware middleware operation, online methods for
bottleneck detection or performance isolation, efficient `in-flight' data
manipulation, flexible methods for high reliability and trust in
information flows, and the creation and integration of network- and
system-level support for application-level data movement and
manipulation. Artifacts resulting from current work include the
EVPath data movement overlays, dynamic binary code generation
and data morphing techniques, and the IFLOW methods and tools for
deployment and online adaptation of information flows.
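The publish/subscribe interactions with `in-flight' data manipulation described above can be sketched as follows. This is a hypothetical illustration of the idea only; the class and method names are invented and do not reflect the actual ECho or EVPath APIs.

```python
# Minimal publish/subscribe sketch with 'in-flight' filtering.
# Hypothetical illustration -- names do not reflect the ECho/EVPath APIs.

class Channel:
    def __init__(self):
        self.subscribers = []  # list of (filter_fn, handler_fn) pairs

    def subscribe(self, handler, data_filter=None):
        # A subscriber may supply a filter applied 'in flight',
        # before an event reaches its handler.
        self.subscribers.append((data_filter, handler))

    def publish(self, event):
        for data_filter, handler in self.subscribers:
            if data_filter is None or data_filter(event):
                handler(event)

# Usage: a subscriber interested only in high-temperature readings.
received = []
chan = Channel()
chan.subscribe(received.append, data_filter=lambda e: e["temp"] > 50)
chan.publish({"sensor": "a", "temp": 20})   # filtered out in flight
chan.publish({"sensor": "b", "temp": 75})   # delivered
```

Applying such filters inside the communication substrate, rather than at receivers, is what lets an overlay shed unwanted data before it consumes network or host resources.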
DEOS and ViP Projects:
The DEOS project is developing kernel-level abstractions for soft real-time,
multimedia, and embedded systems.
Past efforts concerned the
development of efficient real-time task and packet schedulers, using the
DWCS scheduling algorithm (jointly with Richard West at Boston Univ.) and
new methods for user/kernel interaction, termed E-calls (joint with
Christian Poellabauer at Notre Dame). Ongoing efforts include
the creation of a kernel-level quality management infrastructure for
support of end-to-end monitoring and adaptation of enterprise
applications, and kernel-level support for multimedia and sensor systems.
The ViP project addresses the performance, scalability, and manageability
challenges posed by system virtualization in embedded, datacenter, and high
performance settings, with specific focus on multicore platforms and their
I/O subsystems.
High Performance I/O: --
A challenge in attaining high performance I/O for data-intensive MPP applications is the low level of abstraction presented by current I/O systems. Our group is undertaking a multi-institutional effort to create new I/O abstractions and implementations for peta-scale machines. Termed `Structured Data Streams: Peta-scale I/O and Storage', and jointly undertaken by collaborators at Georgia Tech, Oak Ridge National Lab, the University of New Mexico, and Sandia National Lab, this project is developing novel, higher level I/O abstractions, including APIs and mechanisms for asynchronous, extensible I/O, flexible, parallel, and network-aware data transport, and lightweight, scalable storage capabilities.
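One of the higher-level abstractions named above, asynchronous I/O, can be sketched in a few lines: the application hands buffers to a background writer and continues computing while actual I/O happens off the critical path. This is a hypothetical illustration of the general technique, not the project's actual API.

```python
# Sketch of asynchronous I/O: the application thread enqueues buffers
# and keeps computing; a background thread performs the actual writes.
# Hypothetical illustration -- not the project's actual interfaces.
import io
import queue
import threading

class AsyncWriter:
    def __init__(self, sink):
        self.q = queue.Queue()
        self.sink = sink  # any object with a .write() method
        self.t = threading.Thread(target=self._drain, daemon=True)
        self.t.start()

    def write(self, buf):
        self.q.put(buf)  # returns immediately; no blocking on I/O

    def _drain(self):
        while True:
            buf = self.q.get()
            if buf is None:          # sentinel: shut down
                break
            self.sink.write(buf)     # actual I/O, off the critical path

    def close(self):
        self.q.put(None)
        self.t.join()

# Usage: writes complete in order, without stalling the caller.
sink = io.BytesIO()
w = AsyncWriter(sink)
w.write(b"step-1 ")
w.write(b"step-2")
w.close()
```

The FIFO queue preserves write ordering, while the compute thread never waits on the storage system, which is the essence of decoupling application progress from I/O latency on peta-scale machines.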
Trusted Passages: -- The inherent complexity of applications, technologies, and platforms in
today's large scale distributed systems makes it extremely challenging for
open systems to provide trustworthy services to end users. The Trusted
Passages research project is exploring an approach that uses modern
virtualization techniques and the computational power of multicore
platforms to continuously monitor, supervise, and control the exchange and
manipulation of data across the multiple platforms currently used by
a distributed application. The goal is to create what we term `trusted
passages' across distributed and potentially untrusted execution platforms.
The C-CORE project --
is part of the joint effort of the GT Network Processors Group focused on developing hardware and software technologies
for dynamically extending communication infrastructures with
application-specific functionality. The techniques developed are
particularly suitable for
heterogeneous multi-core systems, which include general-purpose, as well
as specialized communication cores. For communication cores, the idea is to
manipulate, filter, and transform selected application-level information
`close' to the network, thereby (i) reducing loads on host-internal resources
like I/O busses, CPUs, and memory and (ii) reducing the latencies of frequently
used inter-host operations like synchronization. The goal is to develop
integrated host/NP systems (e.g., to represent heterogeneous multi-core
platforms) that can deliver (i) improved levels of cost/performance to end
users and (ii) support for innovative communication services. In earlier work,
performance advantages derived from this approach were demonstrated with
I2O boards, on local- and metro-area networks. Ongoing work uses Intel
network processors, FPGAs, GPUs, and the Cell processor as evaluation
platforms.
The goal of the MORPH project
is to create methods and techniques
for deploying self-modifying (morphable) application services onto
cooperating devices, so as to continuously meet the Quality of Service
(QoS) requirements of end users. The ideas that support such
self-modifying applications span the domains of `systems', `compilers',
and hardware. In the systems domain, we use component-based middleware to
dynamically deploy and re-deploy services onto distributed computing
platforms. Kernel modules associate dynamic quality management methods
with middleware-based end user applications, with a strong focus on
managing current energy usage. Using standard Linux-based platforms, the
idea of kernel-level support, termed E-to-E Prof, is to span all machines
and devices that currently cooperate, when such cooperation is established
and while it is ongoing. The compiler methods address the manner in which
application code implements the services needed by end users. When power
is low, for instance, code can use power efficient sets of instructions
rather than computing all application-level values with high precision.
When a cooperating platform is not trusted, compiler methods can protect
and distribute critical application state to reduce its exposure to
intrusions. Hardware research serves to further extend the `optimization
space' available to compiler- and system-based methods for service
morphing, by providing novel ways of configuring hardware for improved
power efficiency and security.
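The precision-for-power trade described above, where low-power operation substitutes a cheap approximation for a full-precision computation, can be illustrated in miniature. This sketch is a hypothetical illustration of the idea only, not the project's actual compiler transformation; all names are invented.

```python
# Sketch of trading precision for power: when energy is scarce, an
# application-level value is computed with a cheaper, lower-precision
# approximation. Hypothetical illustration of the service-morphing idea.
import math

def sin_full(x):
    # Full-precision library routine.
    return math.sin(x)

def sin_cheap(x):
    # 3-term Taylor approximation: far fewer operations, lower precision.
    return x - x**3 / 6 + x**5 / 120

def app_value(x, battery_low):
    # Morphable service: the implementation chosen depends on the
    # platform's current energy state.
    return sin_cheap(x) if battery_low else sin_full(x)

# Usage: for small arguments, the cheap variant stays close to the
# full-precision result while doing a fraction of the work.
x = 0.5
err = abs(app_value(x, battery_low=True) - math.sin(x))
```

The same selection pattern generalizes: a compiler or middleware layer can swap implementations of a service at deployment or run time, guided by the kernel-level energy monitoring described above.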
The IHPCL/CSV -- the Interactive High Performance Computing Laboratory and the Computational Science Venues project
is a university-wide effort to which Intel Corporation has granted multiple
high performance cluster computers, each of substantial size. These
clusters provide a low-cost solution to high performance computing for
parallel and distributed scientific applications. Current research may be
categorized into three areas: (1) grand challenge applications, (2)
interactive high performance computing, and (3) underlying network
support. Grand challenge applications include, but are not limited to,
large-scale optimization problems solved by members of Georgia Tech's
School of Industrial and Systems Engineering, molecular dynamics modeling
conducted by researchers in Georgia Tech's School of Physics, and
turbulent combustion modeling investigated in the School of Aerospace
Engineering. The `I' in IHPCL reflects its key goal of supporting
collaboration and interaction among end users via those applications that
are most meaningful to them. Our research includes dynamic program
steering and monitoring, the efficient transport of large data flows
across heterogeneous machines, the real-time transformation and filtering
of the data needed for remote scientific visualizations, the dynamic
control of such data flows via active user interfaces, and the remote
manipulation of computational tools by multiple end users. An earlier
project addressing related issues was the Distributed Laboratories project.
Graduated Students:
Srihari Govindharaj - now at Intercontinental Exchange.
Jancic - now at EMC.
Radhika Niranjan (jointly with Ada Gavrilovska) - now at UC San Diego.
Van Oleson - now at Cisco Systems.
Hailemelekot Seifu - now at Radisys.
Bhumik Sanghavi (jointly with Matt Wolf).
Leo Singleton - now at Citrix Systems.
Srikanth Sundaragopalan - now at Microsoft.
College of Computing, Georgia Tech
266 Ferst Dr.,
Atlanta GA 30332-0765
Georgia Tech, KACB - Rm. 3338