DOE Funds Asynchronous Supercomputing Research at Georgia Tech

Edmond Chow, Georgia Tech

The DOE is funding a $2.4 million project at Georgia Tech to develop new computer algorithms for solving linear and nonlinear equations that will ultimately help pave the way for the next generation of supercomputers.

“More than just building bigger and faster computers, high-performance computing is about how to build the algorithms and applications that run on these computers,” said School of Computational Science and Engineering (CSE) Associate Professor Edmond Chow. “We’ve brought together the top people in the U.S. with expertise in asynchronous techniques, as well as the experience needed to develop, test, and deploy this research in scientific and engineering applications.”

The research targets a critical need for more advanced algorithms in the transition from petascale to exascale computing, which could unlock a thousand-fold increase in computer performance. Exascale computing refers to computing systems capable of at least a billion billion (quintillion) – or one exaflop – floating-point operations per second.

Chow and his team plan to replace the current generation of solvers – mathematical tools used to determine solutions to particular problems – that are being impeded by “synchronous operations.” Because these operations require processors to perform their calculations in lockstep, they create a bottleneck: every processor must wait for the slowest one before proceeding. The proposed “asynchronous” techniques instead allow each processor to operate independently, proceeding with the most recently available data rather than waiting to sync with the remaining processors.

The three-year project is part of the U.S. government’s initiative to build an exascale supercomputer by 2023. This research could have an impact on a large variety of applications, including large-scale materials science, climate research, and combustion simulations. Additionally, the research could fundamentally shift current understanding of what can be achieved through parallel computing.
