Toronto Startup Launches HPCBOX – An Elastic HPC Cloud Platform

Today Toronto startup Drizti announced the availability of HPCBOX, a desktop-centric, intelligent workflow cloud HPC platform for automating and executing application pipelines. “HPCBOX introduces an innovative method of plugging cloud infrastructure into application pipelines and provides a rich desktop experience, including hardware-accelerated remote graphics technology for authoring workflows with its workflow editor. Users from multiple industry verticals, like Manufacturing, Packaging, Automotive, Renewable Energy, and Aerospace, who use Simulation Engineering, AI, or Machine Learning technologies and require HPC can cut costs and benefit immensely from this turn-key HPC cloud platform.”

Dr. Eng Lim Goh presents: Prediction – Use Science or History?

Dr. Eng Lim Goh from HPE gave this keynote talk at PASC18. “Traditionally, scientific laws have been applied deductively – from predicting the performance of a pacemaker before implant, downforce of a Formula 1 car, pricing of derivatives in finance or the motion of planets for a trip to Mars. With Artificial Intelligence, we are starting to also use the data-intensive inductive approach, enabled by the re-emergence of Machine Learning which has been fueled by decades of accumulated data.”

Video: New Cascade Lake Xeons to Speed AI with Intel Deep Learning Boost

This week at the Data-Centric Innovation Summit, Intel laid out its near-term Xeon roadmap and plans to augment the AVX-512 instruction set to boost machine learning performance. “This dramatic performance improvement and efficiency – up to twice as fast as the current generation – is delivered by using a single instruction to handle INT8 convolutions for deep learning inference workloads, which required three separate AVX-512 instructions in previous-generation processors.”
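To make the single-instruction claim concrete: the new capability is the AVX-512 VNNI extension, whose VPDPBUSD instruction multiplies four unsigned 8-bit values by four signed 8-bit values and accumulates the products into each 32-bit lane in one step. The numpy sketch below is a reference model of the semantics only (an illustration, not Intel’s code), contrasting the fused operation with the three-instruction sequence it replaces:

```python
import numpy as np

# Reference model (illustrative only, not Intel code) of one 32-bit lane:
# the fused VNNI instruction VPDPBUSD versus the legacy AVX-512 sequence
# VPMADDUBSW -> VPMADDWD (with a vector of ones) -> VPADDD.

def vnni_dot(acc, a_u8, b_s8):
    """Fused step: acc += dot(4 unsigned bytes, 4 signed bytes), in int32."""
    return acc + int(np.dot(a_u8.astype(np.int64), b_s8.astype(np.int64)))

def legacy_dot(acc, a_u8, b_s8):
    """Three-step path: pairwise u8*s8 products summed with int16
    saturation, then widened and summed to int32, then accumulated."""
    p = a_u8.astype(np.int64) * b_s8.astype(np.int64)
    s16 = np.clip(p[0::2] + p[1::2], -2**15, 2**15 - 1)  # saturating int16
    return acc + int(s16.sum())

a = np.array([200, 17, 3, 255], dtype=np.uint8)  # e.g. quantized activations
b = np.array([-5, 9, -1, 4], dtype=np.int8)      # e.g. quantized weights
print(vnni_dot(0, a, b), legacy_dot(0, a, b))    # agree unless int16 saturates
```

Besides saving two instructions per accumulation, the fused form sidesteps the int16 saturation hazard of the legacy sequence, which is one reason quantized inference kernels target VNNI directly rather than relying on recompilation.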

SkyScale: GPU Cloud Computing with a Difference

In this guest post, Tim Miller, president of SkyScale, covers how GPU cloud computing is on the fast track to crossing the chasm to widespread adoption for HPC applications. “Two good examples of very different markets adopting GPU computing and where cloud usage makes sense are artificial intelligence and high quality rendering.”

NSF STAQ Project to Devise First Practical Quantum Computer

To accelerate the development of a practical quantum computer that will one day answer currently unsolvable research questions, the National Science Foundation (NSF) has awarded $15 million over five years to the multi-institution Software-Tailored Architecture for Quantum co-design (STAQ) project. “Developing the first practical quantum computer would be a major milestone. By bringing together experts who have outlined a path to a practical quantum computer and supporting its development, NSF is working to take the quantum revolution from theory to reality.”

Machine Learning with Python: Distributed Training and Data Resources on Blue Waters

Aaron Saxton from NCSA gave this talk at the Blue Waters Symposium. “Blue Waters currently supports TensorFlow 1.3, PyTorch 0.3.0 and we hope to support CNTK and Horovod in the near future. This tutorial will go over the minimum ingredients needed to do distributed training on Blue Waters with these packages. What’s more, we also maintain an ImageNet data set to help researchers get started training CNN models. I will review the process by which a user can get access to this data set.”
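For a feel of what such distributed training looks like in PyTorch, here is a minimal data-parallel sketch built on torch.distributed with synchronous gradient averaging. It is a generic, modern-API illustration rather than the Blue Waters-specific recipe from the tutorial; the launcher and the rendezvous variables (MASTER_ADDR, MASTER_PORT, RANK, WORLD_SIZE) are assumed to be provided by the batch system.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn

def main():
    # Assumes the job launcher exports MASTER_ADDR, MASTER_PORT,
    # RANK, and WORLD_SIZE for every worker process.
    dist.init_process_group(backend="gloo",
                            rank=int(os.environ["RANK"]),
                            world_size=int(os.environ["WORLD_SIZE"]))
    model = nn.Linear(784, 10)  # stand-in for a real CNN
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(100):
        x = torch.randn(32, 784)        # stand-in for a local data shard
        y = torch.randint(0, 10, (32,))
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        # Synchronous data parallelism: average gradients across workers
        # so every rank applies the same update.
        for p in model.parameters():
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= dist.get_world_size()
        opt.step()

if __name__ == "__main__":
    main()
```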

Radio Free HPC Discusses the IO500 Benchmark Suite with John Bent

In this podcast, the Radio Free HPC team talks to John Bent from the IO500 committee about why he and a team of I/O professionals created the IO500 benchmark suite. The second IO500 list was revealed at ISC 2018 in Frankfurt, Germany. “The current list includes results from BeeGFS, DataWarp, IME, Lustre, and Spectrum Scale. We hope that the next list has even more.”

Characterizing Faults, Errors and Failures in Extreme-Scale Computing Systems

Christian Engelmann from ORNL gave this talk at PASC18. “Building a reliable supercomputer that achieves the expected performance within a given cost budget and providing efficiency and correctness during operation in the presence of faults, errors, and failures requires a full understanding of the resilience problem. The Catalog project develops a fault taxonomy, catalog and models that capture the observed and inferred conditions in current supercomputers and extrapolates this knowledge to future-generation systems. To date, the Catalog project has analyzed billions of node hours of system logs from supercomputers at Oak Ridge National Laboratory and Argonne National Laboratory. This talk provides an overview of our findings and lessons learned.”

The Search for Gravitational Waves

In this video from PASC18, Alexander Nitz from the Max Planck Institute for Gravitational Physics in Germany presents: The Search for Gravitational Waves. “The LIGO and Virgo detectors have completed a prolific observation run. We are now observing gravitational waves from both the mergers of binary black holes and neutron stars. We’ll discuss how these discoveries were made and look into what the near future of searching for gravitational waves from compact binary mergers will look like.”
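The searches described are built on matched filtering: correlating detector strain data against a bank of modeled waveform templates and looking for peaks in signal-to-noise ratio. As a flavor of the technique, here is a rough sketch using the open-source PyCBC toolkit against simulated Gaussian noise; the masses, cutoff frequency, and approximant are illustrative assumptions, not the settings of the real analyses.

```python
from pycbc.waveform import get_td_waveform
from pycbc.psd import aLIGOZeroDetHighPower
from pycbc.noise import noise_from_psd
from pycbc.filter import matched_filter

delta_t = 1.0 / 4096              # sample spacing in seconds
duration = 16                     # seconds of simulated data
tlen = int(duration / delta_t)
psd = aLIGOZeroDetHighPower(tlen // 2 + 1, 1.0 / duration, 20.0)
data = noise_from_psd(tlen, delta_t, psd, seed=0)  # colored Gaussian noise

# Template: a 30+30 solar-mass binary black hole merger.
hp, _ = get_td_waveform(approximant="SEOBNRv4",
                        mass1=30, mass2=30,
                        delta_t=delta_t, f_lower=20.0)
hp.resize(tlen)  # template and data must share a length

snr = matched_filter(hp, data, psd=psd, low_frequency_cutoff=20.0)
print("peak |SNR|:", abs(snr).max())  # pure noise, so expect a modest peak
```

Real searches repeat this over hundreds of thousands of templates and then apply signal-consistency and multi-detector coincidence tests.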

How Red Hat Powers the #1 Summit Supercomputer

In this video from ISC 2018, Yan Fisher from Red Hat and Buddy Bland from ORNL discuss Summit, the world’s fastest supercomputer. Red Hat teamed with IBM, Mellanox, and NVIDIA to provide users with a new level of performance for HPC and AI workloads. “But the rapid innovation showcased by Summit must be consumable, and that’s where Red Hat Enterprise Linux comes in. Despite the scale, processing capability, and ‘intelligence’ of Summit’s composition, end users interact with something they understand: Linux, in the form of the world’s leading enterprise Linux platform. Red Hat Enterprise Linux provides a common, stable basis that ties together all of this innovation.”