Graduate Research Assistantship positions for Ph.D. students
are available in the areas of parallel and multicore algorithms,
high-performance computing, computational science and engineering,
large-scale optimization problems, and in the application areas of
computational biology and genomics. Several current projects are
described below. Additional new projects are anticipated by the next
Fall semester. Each research assistant will receive a competitive
stipend plus paid tuition.
Applicants should complete an official application for graduate
studies in either the "Computational Science and Engineering (College of Computing)" or the "Computer Science" graduate programs at Georgia Tech,
and select the Computational Science and Engineering track. APPLICATION DEADLINE: December 15.
The Georgia Tech graduate application is available
online at http://www.gradadmiss.gatech.edu/
On Page 1, question 14 (Program of Study), of the online application, click on "Search for Degree and Major", click on "Ph.D", and select "Computational Science and Engineering (College of Computing)" for Graduate Major, and select "GT-Atlanta" for the planned campus.
On Page 4 (Georgia Tech computer science application), question 1, select "High Performance Computing" as your first choice area of interest.
In your statement on Page 4, please include this sentence:
"I wish to be considered for a Graduate Research Assistantship
under the direction of Professor David A. Bader."
Please email Prof. David A. Bader with your first and last name
once you have submitted your online application and received an
Order ID.
PROJECT GRATEFUL: Graph Analysis Tackling power-Efficiency, Uncertainty and Locality
(Funded by DARPA Power Efficiency Revolution for Embedded Computing Technologies (PERFECT))
Georgia Tech has received $561,130 for the first phase of a negotiated three-phase $2.9 million cooperative agreement contract from the U.S. Defense Advanced Research Projects Agency (DARPA) to create the algorithmic framework for supercomputing systems that require much less energy than traditional high-speed machines, enabling devices in the field to perform calculations that currently require room-sized supercomputers.
Awarded under DARPA’s Power Efficiency Revolution for Embedded Computing Technologies (PERFECT) program, the GRATEFUL project is one piece of a national effort to increase the computational power efficiency of "embedded systems" by 75-fold over the best current computing performance in areas extending beyond traditional scientific computing.
PROJECT STING: Graph Analytics for Streaming Data on Emerging Platforms
The growth of graph-structured data sets is rapidly outpacing
analysis tools. Social networks like Facebook are growing quickly, adding an
average of 17 million users per month over the past year to a present
total of 300 million users with 45 million messages posted per day.
Communication systems like Twitter add 25 million messages per day
with rich context linking messages, users, and topics. Even such
“sedate” topics as protein analysis generate millions of updates per
year. Each of these graphs already stresses analysis tools for static,
unchanging graphs; simply repeating static analysis is insufficient
for current graph data. We are developing tools to analyze streaming,
dynamic graph data. These tools require adapting static analysis
algorithms and developing new dynamic algorithms. To implement these
algorithms efficiently, we are evaluating data structures and
programming techniques in emerging development platforms like X10 and
on new multithreaded hardware.
PROJECT BURTON: Research Infrastructure for Multithreaded Computing Platforms
(Funded by NSF)
Computer scientists have long debated the merits of message-passing
versus shared-memory architectures for parallel systems. Message
passing with MPI on commodity (e.g. Linux) clusters dominates
high-performance computing today and has a strong infrastructure to
support development and research. The trend towards multicore
processors changes the situation. The major processor developers all
envision placing tens to hundreds of cores on a single die, each
running multiple threads. To take advantage of this, the CS community
must focus on how to develop efficient multithreaded programs in a
globally addressable memory space. Multithreaded computing needs to
grow a support infrastructure comparable to MPI quickly. As part of a
community of diverse groups of researchers with extensive experience
with shared-memory multithreading, we are developing the shared
infrastructure needed for multicore, multithreaded research and
development.
Future and ongoing interests
High-performance computing on manycore and multicore architectures
Rendering currently intractable problems feasible for researchers
in bioinformatics, genomics, and other scientific areas through
parallelism and advanced algorithms
Exploring trade-offs in performance, energy efficiency, and
productivity in heterogeneous system architectures
Processing massive volumes of streaming data to provide low-latency
analysis