Today the European Consortium announced a step toward Exascale computing with the ExaNeSt project. Funded by the Horizon 2020 initiative, ExaNeSt plans to build its first straw-man prototype in 2016. The Consortium consists of twelve partners, each with expertise in a core technology needed to reach Exascale. ExaNeSt takes the sensible, integrated approach of co-designing the hardware and software, enabling the prototype to run real-life evaluations and positioning the design to mature and scale through this decade and beyond.
The fastest supercomputers are built with the fastest microprocessor chips, which in turn are built upon the fastest switching technology. But even the best semiconductors are reaching their limits as more is demanded of them. In the closing months of this year came news of several developments that could break through silicon’s performance barrier and herald an age of smaller, faster, lower-power chips. They could become commercially viable within the next few years.
“The path to Exascale computing is clearly paved with Co-Design architecture. By using a Co-Design approach, the network infrastructure becomes more intelligent, which reduces the overhead on the CPU and streamlines the process of passing data throughout the network. A smart network is the only way that HPC data centers can deal with the massive demands to scale, to deliver constant performance improvements, and to handle exponential data growth.”
In this video from SC15, Peter Hopton from Iceotope describes the company’s innovative liquid cooling technology for the European ExaNeSt project. “ExaNeSt will develop, evaluate, and prototype the physical platform and architectural solution for a unified Communication and Storage Interconnect and the physical rack and environmental structures required to deliver European Exascale Systems.”
In this Intel Chip Chat podcast, Alan Gara describes how Intel’s Scalable System Framework (SSF) is meeting the extreme challenges and opportunities that researchers and scientists face in high performance computing today. He explains that SSF incorporates many Intel technologies, including Intel Xeon and Intel Xeon Phi processors, Intel Omni-Path Fabric, silicon photonics, and innovative memory technologies, and that it efficiently integrates these elements into a broad spectrum of system solutions optimized for both compute- and data-intensive workloads. Mr. Gara emphasizes that the framework can scale from very small HPC systems all the way up to exascale systems, meeting users’ needs in a flexible way.
In this video from SC15, Scot Schultz from Mellanox describes the company’s new Switch-IB 2, the new generation of its InfiniBand switch optimized for High-Performance Computing, Web 2.0, database, and cloud data centers, capable of 100Gb/s per-port speeds. “Switch-IB 2 is the world’s first smart network switch that offloads MPI operations from the CPU to the network to deliver 10X performance improvements. Switch-IB 2 will enable a performance breakthrough in building the next generation of scalable and data-intensive data centers, enabling users to gain a competitive advantage.”
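For background (not from the video): the MPI operations a switch like Switch-IB 2 offloads are collectives such as allreduce, in which every rank ends up holding the reduction of all ranks’ values. The sketch below, in plain Python with no MPI runtime, shows only what an allreduce computes; the rank count, buffers, and sum reduction are illustrative assumptions, and in a real system the reduction would run in the network rather than on host CPUs.

```python
# Sketch of allreduce semantics: every rank contributes a vector, and every
# rank receives the element-wise reduction of all contributions. A smart
# switch performs this reduction in the network instead of on the CPUs.

def allreduce(rank_buffers, op=sum):
    """Element-wise reduce all ranks' buffers; each rank gets the full result."""
    reduced = [op(values) for values in zip(*rank_buffers)]
    return [list(reduced) for _ in rank_buffers]  # one result copy per rank

# Four illustrative "ranks", each holding a small vector
buffers = [[1, 2], [3, 4], [5, 6], [7, 8]]
results = allreduce(buffers)
print(results[0])  # every rank sees [16, 20]
```

Offloading this reduction matters because the CPU never touches the intermediate sums, freeing it for computation while the collective completes in the fabric.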
Intel in Oregon is seeking an HPC Software Intern in our Job of the Week. “If you are interested in being on the team that builds the world’s fastest supercomputer, read on. Our team is designing how we integrate new HW and SW, validate extreme-scale systems, and debug challenges that arise. The team consists of engineers who love to learn, love a good challenge, and aren’t afraid of a changing environment. We need someone who can help us create and execute codes that will be used to validate and debug our system from first Si bring-up through at-scale deployment. The successful candidate will have experience creating code in the Linux environment in C or Python. If you have the right skills, you will help build systems utilized by the best minds on the planet to solve grand-challenge science problems such as climate research, bio-medical research, genome analysis, renewable energy, and other areas that require the world’s fastest supercomputers to tackle. Be part of the first to get to Exascale!”
In this video, Torsten Hoefler from ETH Zurich and John West from TACC preview the upcoming PASC16 and SC16 conferences. With a focus on Exascale computing and user applications, the events will set the stage for the next decade in High Performance Computing.
“We’ve had a great time here in Austin talking about data-centric computing – the ability to use IBM Spectrum Scale and Platform LSF to do Cognitive Computing. Customers, partners, and the world have been talking about how we can bring together file, object, and even business analytics workloads in amazing ways. It’s been fun.”
In this special guest feature from Scientific Computing World, Robert Roe writes that software scalability and portability may be even more important than energy efficiency to the future of HPC. “As the HPC market searches for the optimal strategy to reach exascale, it is clear that the major roadblock to improving the performance of applications will be the scalability of software, rather than the hardware configuration – or even the energy costs associated with running the system.”
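Roe’s point about software scalability can be made concrete with Amdahl’s law, a standard result not drawn from the article: if a fraction p of a program parallelizes perfectly, the speedup on n processors is capped at 1 / ((1 − p) + p / n). The parallel fractions and core counts below are illustrative:

```python
# Amdahl's law: the speedup attainable on n processors when a fraction p
# of the work is parallelizable and the remaining (1 - p) stays serial.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even code that is 95% parallel can never exceed 20x speedup,
# no matter how many cores the hardware provides:
for n in (16, 1024, 1_000_000):
    print(f"{n:>9} cores -> {amdahl_speedup(0.95, n):.2f}x")
```

The serial fraction, not the core count, dominates at scale, which is why scaling the software can matter more than scaling the machine.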