In this slidecast, Gilad Shainer from Mellanox describes the advantages of InfiniBand and the company’s off-loading network architecture for HPC. “The path to Exascale computing is clearly paved with Co-Design architecture. By using a Co-Design approach, the network infrastructure becomes more intelligent, which reduces the overhead on the CPU and streamlines the process of passing data throughout the network. A smart network is the only way that HPC data centers can deal with the massive demands to scale, to deliver constant performance improvements, and to handle exponential data growth.”
Today ISC 2016 announced that five renowned experts in computational science will participate in its new Distinguished Speaker series. Topics will include exascale computing efforts in the US, the next supercomputers in development in Japan and China, cognitive computing advancements at IBM, and quantum computing research at NASA.
In this video from the HPC User Forum in Tucson, Saul Gonzalez Martirena from NSF provides an update on the NSCI initiative. “As a coordinated research, development, and deployment strategy, NSCI will draw on the strengths of departments and agencies to move the Federal government into a position that sharpens, develops, and streamlines a wide range of new 21st century applications. It is designed to advance core technologies to solve difficult computational problems and foster increased use of the new capabilities in the public and private sectors.”
“Atos is one of only three or four players worldwide with the expertise and know-how to build supercomputers today – and the only one in Europe. It is a source of pride for our company and provides a unique competitive advantage for our clients. With the astounding compute performance of Atos’ Bull sequana, businesses can now more efficiently maximize the value of data on a daily basis. By 2020, Bull sequana will reach the exaflop level, able to process a billion billion operations per second,” says Atos Chairman and CEO Thierry Breton.
“Traditionally, storage has relied on brute force rather than intelligent design to deliver the required throughput, but the current trend is to design balanced systems that fully utilize the back-end storage and other related components. These new systems need fine-grained power control all the way down to individual disk drives, as well as tools for their continuous monitoring and management. In addition, the storage solutions of tomorrow need to support multiple tiers, including back-end archiving systems supported by HSM, as well as multiple file systems if required. This presentation is intended to provide a short update on where Seagate HPC storage is today.”
In this video from the 2016 HPC Advisory Council Switzerland Conference, Addison Snell from Intersect360 Research moderates a panel discussion on Exascale computing. “Exascale computing will uniquely provide knowledge leading to transformative advances for our economy, security and society in general. A failure to proceed with appropriate speed risks losing competitiveness in information technology, in our industrial base writ large, and in leading-edge science.”
Funded by the European Commission in 2011, the DEEP project was the brainchild of scientists and researchers at the Jülich Supercomputing Centre (JSC) in Germany. The basic idea is to overcome the limitations of standard HPC systems by building a new type of heterogeneous architecture: one that dynamically divides the less parallel and highly parallel parts of a workload between a general-purpose Cluster and a Booster, an autonomous cluster with Intel® Xeon Phi™ processors designed to dramatically improve the performance of highly parallel code.
“High performance computing has begun scaling beyond Petaflop performance towards the Exaflop mark. One of the major concerns throughout the development toward such performance capability is scalability – at the component, system, middleware, and application levels. A Co-Design approach between the development of the software libraries and the underlying hardware can help to overcome those scalability issues and to enable a more efficient design approach towards the Exascale goal.”
DK Panda from Ohio State University presented this talk at the Switzerland HPC Conference. “This talk will focus on challenges in designing runtime environments for Exascale systems with millions of processors and accelerators to support various programming models. We will focus on MPI, PGAS (OpenSHMEM, CAF, UPC and UPC++) and Hybrid MPI+PGAS programming models by taking into account support for multi-core, high-performance networks, accelerators (GPUs and Intel MIC) and energy-awareness. Features and sample performance numbers from the MVAPICH2 libraries will be presented.”
Rich Graham presented this talk at the Stanford HPC Conference. “Exascale levels of computing pose many system- and application-level computational challenges. Mellanox Technologies, as a provider of end-to-end communication services, is advancing the foundation of the InfiniBand architecture to meet the exascale challenges. This presentation will focus on recent technology improvements that significantly improve InfiniBand’s scalability, performance, and ease of use.”