LANL’s Herbert Van de Sompel to Receive Paul Evan Peters Award

“For the last two decades Herbert, working with a range of collaborators, has made a sustained series of key contributions that have helped shape the current networked infrastructure to support scholarship,” noted CNI executive director Clifford Lynch. “While many people accomplish one really important thing in their careers, I am struck by the breadth and scope of his contributions.” Lynch added, “I’ve had the privilege of working with Herbert on several of these initiatives over the years, and I was honored in 2000 to be invited to serve as a special external member of the PhD committee at the University of Ghent, where he received his doctorate.”

DOE Helps Tackle Biology’s Big Data

Six proposals have been selected to participate in a new partnership between two U.S. Department of Energy (DOE) user facilities through the “Facilities Integrating Collaborations for User Science” (FICUS) initiative. The expertise and capabilities available at the DOE Joint Genome Institute (JGI) and the National Energy Research Scientific Computing Center (NERSC) – both at the Lawrence Berkeley National Laboratory (Berkeley Lab) – will help researchers explore the wealth of genomic and metagenomic data generated worldwide through access to supercomputing resources and computational science experts to accelerate discoveries.

Supercomputing by API: Connecting Modern Web Apps to HPC

In this video from OpenStack Australia, David Perry from the University of Melbourne presents: Supercomputing by API – Connecting Modern Web Apps to HPC. “OpenStack is a free and open-source set of software tools for building and managing cloud computing platforms for public and private clouds. OpenStack Australia Day is the region’s largest, and Australia’s best, conference focusing on Open Source cloud technology. Gathering users, vendors and solution providers, OpenStack Australia Day is an industry event to showcase the latest technologies and share real-world experiences of the next wave of IT virtualization.”
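The idea behind "supercomputing by API" is that a web application submits and monitors HPC jobs through an HTTP gateway rather than shelling into a login node. As a minimal sketch of that pattern, the snippet below builds the JSON body a web app might POST to a job-submission endpoint; the endpoint URL and field names here are illustrative assumptions, not the specific API from the talk.

```python
import json


def build_job_request(script_path, cores, walltime_minutes):
    """Build the JSON body for a hypothetical job-submission REST call.

    Field names ("script", "resources", "cores", "walltime") are
    illustrative; a real HPC gateway defines its own schema.
    """
    return json.dumps({
        "script": script_path,
        "resources": {
            "cores": cores,
            "walltime": walltime_minutes,
        },
    })


# A web app would POST this body to a gateway endpoint, e.g.:
#   POST https://hpc-gateway.example.edu/api/v1/jobs  (hypothetical URL)
payload = build_job_request("/home/user/run_simulation.sh", 64, 120)
```

The payload-building step is deliberately separated from the HTTP call so it can be validated independently of the cluster.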

ORNL Taps D-Wave for Exascale Computing Project

Today Oak Ridge National Laboratory (ORNL) announced that it is bringing on D-Wave to explore quantum computing as an accelerator for the Exascale Computing Project. “Advancing the problem-solving capabilities of quantum computing takes dedicated collaboration with leading scientists and industry experts,” said Robert “Bo” Ewald, president of D-Wave International. “Our work with ORNL’s exceptional community of researchers and scientists will help us understand the potential of new hybrid computing architectures, and hopefully lead to faster and better solutions for critical and complex problems.”

Podcast: A Retrospective on Great Science and the Stampede Supercomputer

TACC will soon deploy Phase 2 of the Stampede II supercomputer. In this podcast, they celebrate by looking back on some of the great science computed on the original Stampede machine. “In 2017, the Stampede supercomputer, funded by the NSF, completed its five-year mission to provide world-class computational resources and support staff to more than 11,000 U.S. users on over 3,000 projects in the open science community. But what made it special? Stampede was like a bridge that moved thousands of researchers off of soon-to-be decommissioned supercomputers, while at the same time building a framework that anticipated the imminent trends that came to dominate advanced computing.”

Intel’s Xeon Scalable Processors Provide Cooling Challenges for HPC

Unless node and rack density are reduced, the wattages of today’s high-powered CPUs and GPUs are simply no longer manageable with air cooling alone. Asetek explores how new processors, such as Intel’s Xeon Scalable processors, often call for more than just air cooling. “The largest Xeon Phi direct-to-chip cooled system today is the Oakforest-PACS system in Japan. The system is made up of 8,208 computational nodes using Asetek Direct-to-Chip liquid cooled Intel Xeon Phi high performance processors with Knights Landing architecture. It is the highest performing system in Japan and #7 on the Top500.”

Surprising Stories from 17 National Labs in 17 Minutes

In this video, the U.S. Department of Energy gives a quick tour of all 17 National Labs. Each one comes with a surprising story on what these labs do for us as a Nation. “And they all do really different stuff. Think of a big scientific question or challenge, and one or more of the labs is probably working on it.”

Radio Free HPC Looks at AI Ethics and a Tale of Henry’s Super Heroism

In this podcast, the Radio Free HPC team learns about Henry’s first exploit as an Ethical Superhero. “After witnessing a hit-and-run fender bender, Henry confronted the culprit and ensured that the miscreant left a note on the victim’s windshield. And while we applaud Henry for his heroism, we are also very grateful that he was not shot in the process. This tale leads us into a discussion of AI ethics and how we won’t have this problem in the coming era of self-driving cars.”

LANL Adds Capacity to Trinity Supercomputer for Stockpile Stewardship

Los Alamos National Laboratory has boosted the computational capacity of their Trinity supercomputer with a merger of two system partitions. “With this merge completed, we have now successfully released one of the most capable supercomputers in the world to the Stockpile Stewardship Program,” said Bill Archer, Los Alamos ASC program director. “Trinity will enable unprecedented calculations that will directly support the mission of the national nuclear security laboratories, and we are extremely excited to be able to deliver this capability to the complex.”

A Scalable Object Store for Meteorological and Climate Data

Simon D. Smart gave this talk at the PASC17 conference. “Numerical Weather Prediction (NWP) and Climate simulations sit in the intersection between classically understood HPC and the Big Data communities. Driven by ever more ambitious scientific goals, both the size and number of output data elements generated as part of NWP operations has grown by several orders of magnitude, and will continue to grow into the future. This poses significant scalability challenges for the data processing pipeline, and the movement of data through and between stages is one of the most significant factors in this.”
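One way such a pipeline decouples data movement from storage layout is to index each output field by its scientific metadata (parameter, level, forecast step) rather than by file path, so consumers never need to know how fields were laid out on disk. The sketch below illustrates that metadata-keyed object-store idea in miniature; the class and method names are assumptions for illustration, not the interface presented in the talk.

```python
class FieldStore:
    """Minimal sketch of a metadata-keyed object store for NWP output.

    Each field is archived and retrieved by a metadata key
    (parameter, level, step) instead of a file path, so the
    post-processing side of the pipeline is insulated from
    on-disk layout. Names here are illustrative only.
    """

    def __init__(self):
        # In a real system this would be a distributed backend;
        # a dict stands in for it here.
        self._objects = {}

    def archive(self, parameter, level, step, data):
        """Store one field's bytes under its metadata key."""
        self._objects[(parameter, level, step)] = data

    def retrieve(self, parameter, level, step):
        """Fetch a field's bytes by metadata, raising KeyError if absent."""
        return self._objects[(parameter, level, step)]


store = FieldStore()
store.archive("temperature", 850, 6, b"...field bytes...")
field = store.retrieve("temperature", 850, 6)
```

Because lookups are by metadata rather than position in a file, the store can scale out or reorganize its backend without changing the retrieval code.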