Today GENCI announced a collaboration with IBM aimed at speeding up the path to exascale computing. “The collaboration, planned to run for at least 18 months, focuses on readying complex scientific applications for systems under development expected to achieve more than 100 petaflops, a solid step forward on the path to exascale. Working closely with supercomputing experts from IBM, GENCI will have access to some of the most advanced high performance computing technologies stemming from the rapidly expanding OpenPOWER ecosystem.”
Geert Wenes writes in the Cray Blog that the next generation of Grand Challenges will focus on critical workflows for Exascale. “For every historical HPC grand challenge application, there is now a critical dependency on a series of other processing and analysis steps, data movement and communications that goes well beyond the pre- and post-processing of yore. It is iterative, sometimes synchronous (in situ) and generally more on an equal footing with the “main” application.”
In this special guest feature, Robert Roe from Scientific Computing World explores the efforts made by top HPC centers to scale software codes to the extreme levels necessary for exascale computing. “The speed with which supercomputers process useful applications is more important than rankings on the TOP500, experts told the ISC High Performance Conference in Frankfurt last month.”
KAUST in Saudi Arabia has been named as the latest Intel Parallel Computing Center. “The new PCC aims to provide scalable software kernels common to scientific simulation codes that will adapt well to future architectures, including a scheduled upgrade of KAUST’s Intel-based Cray XC40 system, currently ranked in the global Top10. In the spirit of co-design, the Intel PCC at KAUST will also provide feedback that could influence architectural design trade-offs.”
As reported here, President Obama established the National Strategic Computing Initiative (NSCI) in July to ensure the United States continues to lead in high performance computing over the coming decades. Today, IDC announced what promises to be the first NSCI discussion involving the lead agencies, to take place at its next HPC User Forum.
“With this delivery, the DEEP consortium can leverage a supercomputer with a peak performance of 505 TFlop/s and an efficiency of over 3 GFlop/s per Watt. The Eurotech hot water cooling solution allows for additional permanent gains in energy efficiency at data centre level as it guarantees year-round free cooling in all climate zones. The system includes a matching innovative software stack, and six carefully selected grand challenge simulation applications have been optimized to show the full performance potential of the system.”
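The peak-performance and energy-efficiency figures quoted above imply a rough power envelope for the DEEP system. The sketch below derives it from the two press-release numbers; the implied power draw is a back-of-the-envelope estimate, not an official specification.

```python
# Figures from the press release:
peak_tflops = 505            # peak performance, TFlop/s
efficiency_gflops_per_w = 3  # energy efficiency, GFlop/s per watt (quoted as "over 3")

# Convert peak to GFlop/s, then divide by efficiency to get watts.
peak_gflops = peak_tflops * 1000
implied_power_w = peak_gflops / efficiency_gflops_per_w

# Implied power draw at peak: roughly 168 kW (an upper bound, since
# the efficiency is stated as "over 3 GFlop/s per watt").
print(f"Implied power draw at peak: {implied_power_w / 1000:.0f} kW")
```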
IDC has published the agenda for their next HPC User Forum. The event will take place Sept. 8-10 in Broomfield, CO.
“HPC has reached an inflection point with the convergence of traditional high performance computing and the emerging world of Big Data analytics. Intel’s HPC Scalable System Framework enables an unprecedented level of system balance, performance, and scalability necessary to meet the demands of both compute- and data-intensive workloads, today and well into the future.”
“Exascale computers are going to deliver only one or two per cent of their theoretical peak performance when they run real applications; and both the people paying for, and the people using, such machines need to have realistic expectations about just how low a percentage of the peak performance they will obtain.”
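To put the quoted one-to-two-percent figure in concrete terms, the sketch below shows what it would mean for a hypothetical 1 EFlop/s machine. The percentages come from the quote; the machine size is an assumption for illustration.

```python
# Hypothetical exascale system: 1 EFlop/s theoretical peak.
peak_eflops = 1.0

# Sustained application performance at the quoted 1% and 2% of peak,
# expressed in PFlop/s (1 EFlop/s = 1000 PFlop/s).
for pct in (1, 2):
    sustained_pflops = peak_eflops * 1000 * pct / 100
    print(f"{pct}% of peak = {sustained_pflops:.0f} PFlop/s sustained")
```

Even at the low end, that sustained rate would still exceed today’s fastest systems on real applications, which is why the quote stresses managing expectations rather than dismissing such machines.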
Today President Obama issued an Executive Order establishing the National Strategic Computing Initiative (NSCI) to ensure the United States continues to lead in high performance computing over the coming decades.