NSCI Update from the HPC User Forum

In this video from the HPC User Forum in Tucson, Saul Gonzalez Martirena from NSF provides an update on the NSCI initiative. “As a coordinated research, development, and deployment strategy, NSCI will draw on the strengths of departments and agencies to move the Federal government into a position that sharpens, develops, and streamlines a wide range of new 21st century applications. It is designed to advance core technologies to solve difficult computational problems and foster increased use of the new capabilities in the public and private sectors.”

Atos Rolls Out Bull sequana, “The World’s Most Efficient Supercomputer”

“Atos is one of only three or four worldwide players having the expertise and know-how to build supercomputers today – and the only one in Europe. It is a source of pride for our company and provides a unique competitive advantage for our clients. With Bull sequana’s astounding compute performance, businesses can now more efficiently maximize the value of data on a daily basis. By 2020, Bull sequana will reach the exaflop level and will be able to process a billion billion operations per second,” says Atos Chairman and CEO Thierry Breton.
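
For scale, the “billion billion operations per second” quoted above is simply the arithmetic behind the exaflop mark:

    \[ 10^{9} \times 10^{9} = 10^{18} \ \text{floating-point operations per second} = 1\ \text{EFLOP/s}, \]

about a thousand times the throughput of a petaflop-class system.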

Video: Seagate Exascale HPC Storage

“Traditionally, storage has relied on brute force rather than intelligent design to deliver the required throughput, but the current trend is to design balanced systems with full utilization of the back-end storage and other related components. These new systems need to use fine-grained power control all the way down to individual disk drives, as well as tools for continuous monitoring and management. In addition, the storage solutions of tomorrow need to support multiple tiers, including back-end archiving systems supported by HSM, as well as multiple file systems if required. This presentation is intended to provide a short update on where Seagate HPC storage is today.”

Panel Discussion on Exascale Computing

In this video from the 2016 HPC Advisory Council Switzerland Conference, Addison Snell from Intersect360 Research moderates a panel discussion on Exascale computing. “Exascale computing will uniquely provide knowledge leading to transformative advances for our economy, security and society in general. A failure to proceed with appropriate speed risks losing competitiveness in information technology, in our industrial base writ large, and in leading-edge science.”

How Intel Worked with the DEEP Consortium to Challenge Amdahl’s Law

Funded by the European Commission in 2011, the DEEP project was the brainchild of scientists and researchers at the Jülich Supercomputing Centre (JSC) in Germany. The basic idea is to overcome the limitations of standard HPC systems by building a new type of heterogeneous architecture: one that dynamically divides the less parallel and highly parallel parts of a workload between a general-purpose Cluster and a Booster, an autonomous cluster of Intel® Xeon Phi™ processors designed to dramatically improve the performance of highly parallel code.
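
The Amdahl’s Law angle can be made concrete. If a fraction p of a workload parallelizes well and the remaining 1 - p does not, the speedup on N processors is bounded by

    \[ S(N) = \frac{1}{(1 - p) + \dfrac{p}{N}} \;\le\; \frac{1}{1 - p}. \]

Loosely speaking, the Cluster-Booster split attacks the (1 - p) term: the less scalable parts of the code stay on the general-purpose Cluster, and only the highly parallel fraction is shipped to the many-core Booster, so neither side of the machine is sized for work it cannot execute efficiently. (This is a reading of the project’s stated goal, not a formal claim from the DEEP consortium.)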

Co-Design Architecture: Emergence of New Co-Processors

“High performance computing has begun scaling beyond Petaflop performance towards the Exaflop mark. One of the major concerns throughout the development toward such performance capability is scalability – at the component level, system level, middleware and the application level. A Co-Design approach between the development of the software libraries and the underlying hardware can help to overcome those scalability issues and to enable a more efficient design approach towards the Exascale goal.”

High-Performance and Scalable Designs of Programming Models for Exascale Systems

DK Panda from Ohio State University presented this talk at the Switzerland HPC Conference. “This talk will focus on challenges in designing runtime environments for Exascale systems with millions of processors and accelerators to support various programming models. We will focus on MPI, PGAS (OpenSHMEM, CAF, UPC and UPC++) and Hybrid MPI+PGAS programming models by taking into account support for multi-core, high-performance networks, accelerators (GPUs and Intel MIC) and energy-awareness. Features and sample performance numbers from the MVAPICH2 libraries will be presented.”
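
As a minimal illustration of the PGAS style discussed in the talk (an illustrative sketch, not code from the MVAPICH2 libraries), the OpenSHMEM program below has every processing element deposit its rank directly into a symmetric variable on its neighbor with a one-sided put; in MPI the same exchange would need matched sends and receives.

    /* Illustrative OpenSHMEM (PGAS) sketch: one-sided neighbor exchange.
       Assumes an OpenSHMEM 1.2+ implementation (e.g. the one bundled with
       MVAPICH2-X); build with the implementation's oshcc wrapper. */
    #include <stdio.h>
    #include <shmem.h>

    static int value = 0;              /* symmetric: same address on every PE */

    int main(void) {
        shmem_init();
        int me   = shmem_my_pe();
        int npes = shmem_n_pes();

        /* One-sided put: write my rank into 'value' on the next PE;
           the target does not post a matching receive. */
        shmem_int_p(&value, me, (me + 1) % npes);

        shmem_barrier_all();           /* complete outstanding puts, then sync */
        printf("PE %d got %d from PE %d\n", me, value, (me - 1 + npes) % npes);

        shmem_finalize();
        return 0;
    }

Launching is implementation-specific, typically via an oshrun- or mpirun-style wrapper.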

Rich Graham Presents: The Exascale Architecture

Rich Graham presented this talk at the Stanford HPC Conference. “Exascale levels of computing pose many system- and application-level computational challenges. Mellanox Technologies, Inc., as a provider of end-to-end communication services, is advancing the foundation of the InfiniBand architecture to meet the exascale challenges. This presentation will focus on recent technology improvements which significantly improve InfiniBand’s scalability, performance, and ease of use.”

Video: Panel Discussion on Exascale Computing

In this video from the 2016 Stanford HPC Conference, Gilad Shainer from the HPC Advisory Council moderates a panel discussion on Exascale Computing. “Exascale computing will uniquely provide knowledge leading to transformative advances for our economy, security and society in general. A failure to proceed with appropriate speed risks losing competitiveness in information technology, in our industrial base writ large, and in leading-edge science.”

EXTOLL Deploys Immersion Cooled Compute Booster at Jülich

Today Extoll, the German HPC innovation company, announced that it has successfully implemented its new GreenICE immersion cooling system at the Jülich Supercomputing Centre. As part of the DEEP (Dynamical Exascale Entry Platform) project, GreenICE was developed to meet the need for increased compute power, density, and energy efficiency.