Long Live Posix – HPC Storage and the HPC Datacenter

Robert Triendl from DDN gave this talk at the Swiss HPC Conference. “The Portable Operating System Interface (POSIX) is a family of standards specified by the IEEE Computer Society for maintaining compatibility between operating systems. Since it was developed over 30 years ago, storage has changed dramatically. To improve the IO performance of applications, many users have called for a relaxation of POSIX IO semantics, which could lead to the development of new storage mechanisms that improve not only application performance, but also management, reliability, portability, and scalability.”

Podcast: Doug Kothe Looks back at the Exascale Computing Project Annual Meeting

In this podcast, Doug Kothe from the Exascale Computing Project recaps the 2019 ECP Annual Meeting. “Key topics to be covered at the meeting are discussions of future systems, software stack plans, and interactions with facilities. Several parallel sessions are also planned throughout the meeting.”

Accelerate Your Business with HPC

This best-practice guide will help you evaluate the best approach to adopting HPC for your business needs, as well as the solution components to consider in its implementation. Download the new report from Intel that explores how to accelerate your business with HPC.

Growing HPC Adoption Among Manufacturers

The global manufacturing industry is moving down the path to a fourth industrial revolution — Industry 4.0 — empowered by the opportunity to collect and analyze massive amounts of data. This guest post from Intel explores how the global manufacturing industry is moving toward HPC adoption, and how it is approaching an inflection point that Intel refers to as “HPC for Everyone.”

Designing HPC, Deep Learning, and Cloud Middleware for Exascale Systems

DK Panda from Ohio State University gave this talk at the Stanford HPC Conference. “This talk will focus on challenges in designing HPC, Deep Learning, and HPC Cloud middleware for Exascale systems with millions of processors and accelerators. For the HPC domain, we will discuss the challenges in designing runtime environments for MPI+X (PGAS-OpenSHMEM/UPC/CAF/UPC++, OpenMP, and CUDA) programming models by taking into account support for multi-core systems (KNL and OpenPOWER), high-performance networks, GPGPUs (including GPUDirect RDMA), and energy awareness.”

Video: Project Cyclops comes to SC17 in a Quest to Build the World’s Fastest Node

In this video from SC17, Rich Brueckner from insideHPC describes Project Cyclops, a benchmarking quest to build the world’s fastest single node. The single-node Cyclops supercomputer demonstrates the computational power that individual scientists, engineers, artificial intelligence practitioners, and data scientists can deploy in their offices. Cyclops looks to rank well on the HPCG benchmark.

Video: System Interconnects for HPC

In this video from the 2017 Argonne Training Program on Extreme-Scale Computing, Pavan Balaji from Argonne presents an overview of system interconnects for HPC. “The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides two intensive weeks of training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future.”

Video: Silicon Photonics for Extreme Computing

Keren Bergman from Columbia University gave this talk at the 2017 Argonne Training Program on Extreme-Scale Computing. “Exaflop machines would represent a thousand-fold improvement over the current standard, the petaflop machines that first came online in 2008. But while exaflop computers already appear on funders’ technology roadmaps, making the exaflop leap on the short timescales of those roadmaps constitutes a formidable challenge.”

Video: The AI Initiative at NIST

Michael Garris from NIST gave this talk at the HPC User Forum. “AI must be developed in a trustworthy manner to ensure reliability and safety. NIST cultivates trust in AI technology by developing and deploying standards, tests and metrics that make technology more secure, usable, interoperable and reliable, and by strengthening measurement science. This work is critically relevant to building the public trust of rapidly evolving AI technologies.”

Video: Revolution in Computer and Data-enabled Science and Engineering

Ed Seidel from the University of Illinois gave this talk at the 2017 Argonne Training Program on Extreme-Scale Computing. The theme of his talk centers around the need for interdisciplinary research. “Interdisciplinary research (IDR) is a mode of research by teams or individuals that integrates information, data, techniques, tools, perspectives, concepts, and/or theories from two or more disciplines or bodies of specialized knowledge to advance fundamental understanding or to solve problems whose solutions are beyond the scope of a single discipline or area of research practice.”