PNNL announced yesterday that it is leading a multi-institutional team that has been awarded $4M to develop software for the Cray XMT for data-intensive computing. (For some additional background on data-intensive computing, check out this article I wrote at HPCwire.)
The difference between the new breed and traditional supercomputers is how they access memory: rather than relying on caches, machines like the Cray XMT use massive hardware multithreading to tolerate the latency of irregular data accesses, which dramatically improves performance on data-intensive workloads. But old software won’t run on the new hardware any more than a PC program will run on a Mac. So, the Department of Defense provided the funding this month to seed the Center for Adaptive Supercomputing Software, a joint project between the Department of Energy’s Pacific Northwest National Laboratory and Cray Inc. in Seattle.
…Other researchers in the software collaboration hail from Sandia National Laboratories, Georgia Institute of Technology, Washington State University and the University of Delaware.
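The “how they access data” point is easiest to see with graph analytics, a canonical data-intensive workload: traversals chase pointers from vertex to vertex, so there is little of the spatial locality that cache-based processors depend on. Here is a minimal, hypothetical sketch (toy graph, plain Python) of that access pattern; the graph and function names are illustrative, not from any CASS code:

```python
from collections import deque

# Toy adjacency list. Each hop lands at an unpredictable vertex,
# so the memory accesses have little spatial locality -- the
# pattern caches handle poorly and that hardware multithreading
# (as on the Cray XMT) is designed to tolerate.
graph = {
    0: [3, 5],
    3: [1],
    5: [2, 4],
    1: [], 2: [], 4: [],
}

def bfs_order(start):
    """Breadth-first traversal; every neighbor lookup is a
    dependent, irregular memory reference."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

print(bfs_order(0))  # → [0, 3, 5, 1, 2, 4]
```

On a conventional processor this kind of loop stalls on nearly every lookup; the XMT instead keeps many hardware threads in flight so that while one waits on memory, others make progress.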
PNNL’s article is interesting, with some additional results and background on the applications:
In previously published work, PNNL computational scientist Jarek Nieplocha used a predecessor of the Cray XMT to run typical software programs that help operators keep the power grid running smoothly. Adapted to the advanced hardware, these programs ran 10 times faster on the multithreaded machine. “That was the best speed ever reported. We’re getting closer to being able to track the grid in real time,” said Nieplocha.