In this special feature, Mike Bernhardt looks at our current progress toward Exascale computing, a future supercomputing milestone of 10^18 floating point operations per second.
The Blue Waters petascale computing project at NCSA and the Extreme Science and Engineering Discovery Environment (XSEDE) have signed a collaborative agreement, bringing together the National Science Foundation’s two largest cyberinfrastructure projects.
There is a strong and growing wave of interest (and not just from vendors) in cloud-based engineering simulation. I met with numerous end users, especially from smaller and mid-market companies, who desperately need a more cost-effective way to perform engineering work, in terms of both computation and software licensing.
Hurricane Electric will host its next Carrier Networking Event on Wednesday, October 30. The event will feature Rich Brueckner from insideBIGDATA. “We are thrilled to welcome Rich to our increasingly popular Carrier Networking Event. Big Data stands out in the industry as a topic continually growing in importance and we look forward to Rich sharing his extensive knowledge and thoughts on future trends.”
Clive Longbottom contrasts the architectures of high-performance and high-availability clusters. And while users may request the best of both worlds, the end game often comes down to money.
A public database dubbed MERIL (Mapping of the European Research Infrastructure Landscape) has been launched with the aim of providing a comprehensive inventory of high-quality research infrastructures in Europe across all scientific domains, accessible through an interactive online portal.
John Barr asks: What standard approach can the industry agree on to make next-generation HPC systems easier to program?
When the architecture of high-performance computing (HPC) systems changes, the tools and programming paradigms used to develop applications may also have to change. We have seen several such evolutions in recent decades, including vector processors, multiprocessors, cluster computing, and the use of heterogeneous processors to accelerate applications.
These changes, while offering the potential to deliver higher performance at lower price points, give the HPC industry a big headache. That headache stems from the need for application portability, which in turn requires standard development tools that support a range of platforms. Without software standards supporting emerging platforms, independent software vendors (ISVs) are slow to target these new systems, and without a broad base of software the industry pauses while the software catches up.
An example of this was the transition from complex, expensive shared memory systems to proprietary distributed memory systems, and then to clusters of low-cost commodity servers. As long as each new system had its own message passing library, ISVs were reluctant to port major applications. It was only when MPI was widely accepted as the standard message passing library that the body of applications available for clusters started to grow, and cluster computing became the norm in mainstream HPC.
So the burning question is: What standard approach can the industry agree on to make next-generation HPC systems easier to program, and therefore more attractive for ISVs to support?
In this podcast from The Exascale Report, Pete Beckman from Argonne National Laboratory describes the Argo project, a prototype operating system for future Exascale supercomputers. Download the MP3. For a full transcript, subscribe to The Exascale Report.