Making Effective Use of (Partial) Data Dependencies for Parallelization

Date:
January 10, 2013 2:00 pm - 3:00 pm
Location:
KACB 1116W

Data dependencies have strong connections with parallelism. The fundamental observation, dating back at least 30 years, is that two code blocks with no (transitive) data dependencies can be executed in parallel, yielding the same final state as executing them sequentially. This observation has been the basis and precondition for sophisticated research on parallelizing compilers for many years. Unfortunately, only in rare cases is the precondition met: The candidate code blocks are often dependent, and even when they are not, the compiler's (static) dependence analysis is typically too conservative to prove independence, reporting spurious dependencies instead.
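The observation above can be illustrated with a minimal sketch (not from the talk; the block names and data are illustrative): two blocks that touch disjoint data can be run on separate threads and still produce the sequential result.

```python
import threading

# Two code blocks with no shared data dependencies:
# block_a writes only to `a`, block_b writes only to `b`.
a = [0] * 4
b = [0] * 4

def block_a():
    for i in range(4):
        a[i] = i * i      # touches only `a`

def block_b():
    for i in range(4):
        b[i] = i + 1      # touches only `b`

# Sequential execution.
block_a()
block_b()
seq_state = (list(a), list(b))

# Reset and run the same blocks in parallel; because the blocks are
# independent, the final state must match the sequential one.
a[:] = [0] * 4
b[:] = [0] * 4
t1 = threading.Thread(target=block_a)
t2 = threading.Thread(target=block_b)
t1.start(); t2.start()
t1.join(); t2.join()
assert (a, b) == (list(seq_state[0]), list(seq_state[1]))
```

Had the two blocks written to the same list, the parallel schedule could interleave the writes and break this equivalence, which is exactly the case a static dependence analysis must conservatively assume.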

Tripp will propose a new view of program dependencies, utilizing accurate -- yet potentially partial -- dependence information to tune/specialize a baseline synchronization algorithm while preserving its correctness (i.e., its serializability guarantees). This can be done in more than one way, including (i) building specialized, client-specific conflict-detection oracles, (ii) synthesizing concurrency monitors that predict the available parallelism per input and/or computation phase, and (iii) finding true, semantic dependencies that limit parallelism. He will survey several techniques for leveraging dependence information along these lines. These techniques make safe use of dynamic (rather than static) dependencies and rely on user-provided data abstractions for precise dependence analysis.
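To make point (i) concrete, here is a hypothetical sketch of a client-specific conflict-detection oracle for a shared Set abstraction (the function and operation names are assumptions, not from the talk). At the memory level every `add` mutates the same underlying structure, so a conservative analysis reports a dependence; at the abstraction level, operations on distinct elements commute, exposing more parallelism. The sketch deliberately simplifies by ignoring operations' return values.

```python
# Hypothetical conflict-detection oracle for a shared Set abstraction.
# An operation is a (name, element) pair, e.g. ("add", 3).
MUTATING_OPS = {"add", "remove"}

def sets_conflict(op1, op2):
    """Return True if the two Set operations may not commute.

    Simplified semantic rule: operations on distinct elements always
    commute; on the same element, they conflict only if at least one
    of them mutates the set.
    """
    name1, elem1 = op1
    name2, elem2 = op2
    if elem1 != elem2:
        return False   # distinct elements: order is unobservable
    return name1 in MUTATING_OPS or name2 in MUTATING_OPS
```

A runtime armed with such an oracle can let `add(1)` and `add(2)` proceed concurrently, where a purely memory-level dependence check would serialize them.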