In this special guest feature, Earl Joseph from IDC describes his SC15 panel where four HPC luminaries discussed, disputed, and divined the path to exascale computing. “As the panel wound to a close, participants agreed on one thing: the path to exascale contains significant obstacles, but they’re not insurmountable. Tremendous progress is being made in preparing codes for the next generations of systems, and sheer determination and innovation are running at an all-time high.”
Today the ISC 2016 conference announced that its Tuesday keynote session will highlight contributions from female researchers and scientists in advancing the field of computational science. “This year, Dr. Jacqueline H. Chen, a distinguished member of technical staff at Sandia National Laboratories, has been invited to keynote on Tuesday, June 21, on the topic of advancing the science of turbulent combustion using petascale and exascale simulations.”
The HPC Advisory Council has posted the speaker agenda for the HPCAC Swiss Conference. The event takes place March 21-23 in Lugano, Switzerland. The conference will focus on High-Performance Computing essentials, new developments and emerging technologies, best practices, and hands-on training.
“We expect NSCI to run for the next two decades. It’s a bit audacious to start a 20-year project in the last 18 months of an administration, but one of the things that gives us momentum is that we are not starting from a clean sheet of paper. Many government agencies are already involved, and what we’re really doing is increasing their coordination and collaboration. We will also be working very hard over the next 18 months to build momentum and establish new working relationships with academia and industry.”
In this special guest feature, Robert Roe from Scientific Computing World reports that a new Exascale computing architecture using ARM processors is being developed by a European consortium of hardware and software providers, research centers, and industry partners. Funded by the European Union’s Horizon 2020 research program, a full prototype of the new system is expected to be ready by 2018.
Today a European consortium announced a step toward Exascale computing with the ExaNeSt project. Funded by the Horizon 2020 initiative, ExaNeSt plans to build its first straw-man prototype in 2016. The consortium consists of twelve partners, each with expertise in a core technology needed for the innovations required to reach Exascale. ExaNeSt takes the sensible, integrated approach of co-designing the hardware and software, enabling the prototype to run real-life evaluations and paving the way toward a scalable, mature platform in the decade ahead.
The fastest supercomputers are built with the fastest microprocessor chips, which in turn are built upon the fastest switching technology. But even the best semiconductors are reaching their limits as more is demanded of them. In the closing months of this year came news of several developments that could break through silicon’s performance barrier and herald an age of smaller, faster, lower-power chips. It is possible that they could be commercially viable in the next few years.
“The path to Exascale computing is clearly paved with Co-Design architecture. By using a Co-Design approach, the network infrastructure becomes more intelligent, which reduces the overhead on the CPU and streamlines the process of passing data throughout the network. A smart network is the only way that HPC data centers can deal with the massive demands to scale, to deliver constant performance improvements, and to handle exponential data growth.”
In this video from SC15, Peter Hopton from Iceotope describes the company’s innovative liquid cooling technology for the European ExaNeSt project. “ExaNeSt will develop, evaluate, and prototype the physical platform and architectural solution for a unified Communication and Storage Interconnect and the physical rack and environmental structures required to deliver European Exascale Systems.”
In this Intel Chip Chat podcast, Alan Gara describes how Intel’s Scalable System Framework (SSF) is meeting the extreme challenges and opportunities that researchers and scientists face in high performance computing today. He explains that SSF incorporates many different Intel technologies, including Intel Xeon and Intel Xeon Phi processors, Intel Omni-Path Fabric, silicon photonics, and innovative memory technologies, and efficiently integrates these elements into a broad spectrum of system solutions optimized for both compute-intensive and data-intensive workloads. Mr. Gara emphasizes that the framework can scale from very small HPC systems all the way up to exascale computing systems, meeting the needs of users in a flexible way.