New research from Georgia Tech’s School of Computational Science and Engineering (CSE) presents a pioneering method of overlapping communication operations with other communication operations to make more effective use of available network bandwidth.
The approach allows the actual data transfer of one operation to be overlapped with the overheads of another, that is, with the memory, bandwidth, and other resources the operation consumes apart from the transfer itself. The method and its two proposed techniques were applied to a key kernel in electronic structure calculations, yielding speedups of up to 91.2 percent.
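The scheduling idea (letting one operation's data transfer proceed while another operation is still paying its overhead) can be sketched with a toy model. This is not the paper's MPI implementation; the timing constants and helper names below are invented purely to illustrate why the overlap saves time:

```python
import threading
import time

# Toy timing constants (invented for illustration, not from the paper).
OVERHEAD = 0.05  # seconds of per-operation overhead (setup, synchronization)
TRANSFER = 0.05  # seconds of actual data transfer on the network link

link = threading.Lock()  # the shared link: only one transfer at a time

def communicate():
    """One toy 'communication operation': an overhead phase, then a transfer."""
    time.sleep(OVERHEAD)  # overhead phase: the link itself sits idle
    with link:            # transfer phase: exclusive use of the link
        time.sleep(TRANSFER)

def run_serial(n_ops):
    """Run operations back to back, so every overhead leaves the link idle."""
    start = time.perf_counter()
    for _ in range(n_ops):
        communicate()
    return time.perf_counter() - start

def run_overlapped(n_ops):
    """Start operations concurrently: one operation's transfer can use the
    link while another operation is still in its overhead phase."""
    start = time.perf_counter()
    threads = [threading.Thread(target=communicate) for _ in range(n_ops)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"serial:     {run_serial(2):.3f} s")      # ~2 * (OVERHEAD + TRANSFER)
    print(f"overlapped: {run_overlapped(2):.3f} s")  # overheads hidden behind transfers
```

In real MPI codes this kind of overlap is generally obtained with nonblocking operations; the paper's specific techniques for collectives such as broadcast and reduction may differ from this sketch.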
“Communication-intensive applications that use collective operations like broadcast and reduction are likely to benefit from this method. Our techniques are simple to implement and may give better communication performance to such existing applications by modifying just a few lines of code,” said CSE Ph.D. student Hua Huang, the primary investigator of the research.
“We are planning to study the impact of these techniques on more applications. From a computational science perspective, our results may raise the awareness of MPI library developers and inspire them to design better implementations for some communication operations.”
Huang and CSE Associate Professor Edmond Chow are presenting a paper detailing their work this week at the 33rd IEEE International Parallel and Distributed Processing Symposium (IPDPS 2019).