John Shalf from LBNL on Computing Challenges Beyond Moore’s Law

In this special guest feature from Scientific Computing World, Robert Roe interviews John Shalf from LBNL on the development of digital computing in the post-Moore's law era. "In his keynote speech at the ISC conference in Frankfurt, Shalf described the lab-wide project at Berkeley and the DOE's efforts to overcome these challenges through the development and acceleration of the design of new computing technologies."

Podcast: Tackling Massive Scientific Challenges with AI/HPC Convergence

In this Chip Chat podcast, Brandon Draeger from Cray describes the unique needs of HPC customers and how new Intel technologies in Cray systems are helping to deliver improved performance and scalability. “More and more, we are seeing the convergence of AI and HPC – users investigating how they can use AI to complement what they are already doing with their HPC workloads. This includes using machine and deep learning to analyze results from a simulation, or using AI techniques to steer where to take a simulation on the fly.”

Time to Value: Storage Performance in the Epoch of AI

Sven Oehme gave this talk at the DDN User Group meeting at ISC 2019. "New AI and ML frameworks, advances in computational power (primarily driven by GPUs), and sophisticated, maturing use cases are demanding more from the storage platform. Sven will share some of DDN's recent innovations around performance and talk about how they translate into real-world customer value."

Podcast: ExaScale is a 4-way Competition

In this podcast, the RadioFree team discusses the 4-way competition for exascale computing between the US, China, Japan, and Europe. "The European effort is targeting two pre-exascale installations in the coming months, and two actual exascale installations in the 2022-2023 timeframe, at least one of which will be based on European technology."

Video: Verne Global joins NVIDIA DGX-Ready Program as HPC & AI Colocation Partner

In this video, Bob Fletcher from Verne Global describes the advantages the HPC cloud provider offers through the NVIDIA DGX-Ready Data Center program. "Enterprises and research organizations seeking to leverage the NVIDIA DGX-2 System – the world's most powerful AI system – now have the option to deploy their AI infrastructure using a cost-effective Op-Ex solution in Verne Global's HPC-optimized campus in Iceland, which utilizes 100 percent renewable energy and relies on one of the world's most reliable and affordable power grids."

Rigetti Computing acquires QxBranch for Quantum-powered Analytics

Today Rigetti Computing announced it has acquired QxBranch, a quantum computing and data analytics software startup. “Our mission is to deliver the power of quantum computing to our customers and help them solve difficult and valuable problems,” said Chad Rigetti, founder and C.E.O. of Rigetti Computing. “We believe we have the leading hardware platform, and QxBranch is the leader at the application layer. Together we can shorten the timeline to quantum advantage and open up new opportunities for our customers.”

Google Cloud and NVIDIA Set New Training Records on MLPerf v0.6 Benchmark

Today the MLPerf effort released results for MLPerf Training v0.6, the second round of results from their machine learning training performance benchmark suite. MLPerf is a consortium of over 40 companies and researchers from leading universities, and the MLPerf benchmark suites are rapidly becoming the industry standard for measuring machine learning performance. “We are creating a common yardstick for training and inference performance,” said Peter Mattson, MLPerf General Chair.

The Challenges of Updating Scientific Codes for New HPC Architectures

In this video from PASC19 in Zurich, Benedikt Riedel from the University of Wisconsin describes the challenges researchers face when it comes to updating their scientific codes for new HPC architectures. He also describes his work on the IceCube Neutrino Observatory.

Supercomputing Potential Impacts of a Major Quake by Building Location and Size

National lab researchers from Lawrence Livermore and Berkeley Lab are using supercomputers to quantify earthquake hazard and risk across the Bay Area. Their work focuses on the impact of high-frequency ground motion on thousands of representative buildings of different sizes spread across the region. "While working closely with the NERSC operations team in a simulation last week, we used essentially the entire Cori machine – 8,192 nodes, and 524,288 cores – to execute an unprecedented 5-hertz run of the entire San Francisco Bay Area region for a magnitude 7 Hayward Fault earthquake."

NEC Embraces Open Source Frameworks for SX-Aurora Vector Computing

In this video from ISC 2019, Dr. Erich Focht from NEC Deutschland GmbH describes how the company is embracing open source frameworks for the SX-Aurora TSUBASA Vector Supercomputer. “Until now, with the existing server processing capabilities, developing complex models on graphical information for AI has consumed significant time and host processor cycles. NEC Laboratories has developed the open-source Frovedis framework over the last 10 years, initially for parallel processing in Supercomputers. Now, its efficiencies have been brought to the scalable SX-Aurora vector processor.”
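For readers curious what "bringing Frovedis to the SX-Aurora" looks like from the user side, the framework exposes a scikit-learn-style Python interface that offloads the numerical work to a server process running on the vector engine. The following is a minimal sketch of that usage pattern, based on the project's public examples; the exact module paths, parameters, and the FROVEDIS_SERVER environment variable are assumptions that may vary by Frovedis version, and this is not an official NEC example.

```python
# Minimal sketch: calling Frovedis through its scikit-learn-like Python API.
# Assumes the Frovedis Python wrapper is installed and FROVEDIS_SERVER points
# at the frovedis_server binary (module paths may differ by version).
import os
import numpy as np
from frovedis.exrpc.server import FrovedisServer
from frovedis.mllib.linear_model import LogisticRegression

# Launch the Frovedis server ranks via MPI; on SX-Aurora TSUBASA these run on
# the vector engine while this Python script stays on the x86 host.
FrovedisServer.initialize("mpirun -np 4 " + os.environ["FROVEDIS_SERVER"])

# Toy data, just to show the call pattern.
X = np.random.rand(1000, 8)
y = (X[:, 0] + X[:, 1] > 1.0).astype(np.int32)

# Same fit/predict interface as scikit-learn; the heavy lifting is offloaded.
clf = LogisticRegression(max_iter=500)
clf.fit(X, y)
print(clf.predict(X[:5]))

FrovedisServer.shut_down()
```

The appeal of this design is that an existing scikit-learn script needs little more than an import swap and a server launch to route its training kernels onto the vector processor.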