Video: Managing large-scale cosmology simulations with Parsl and Singularity

Rick Wagner from Globus gave this talk at the Singularity User Group. “We package the imSim software inside a Singularity container so that it can be developed independently, packaged to include all dependencies, trivially scaled across thousands of computing nodes, and seamlessly moved between computing systems. To date, the simulation workflow has consumed more than 30M core hours using 4K nodes (256K cores) on Argonne’s Theta supercomputer and 2K nodes (128K cores) on NERSC’s Cori supercomputer.”
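
Parsl expresses this pattern as Python “apps” whose invocations return futures, so the same containerized command fans out across whatever executor the loaded configuration provides. The sketch below is a minimal illustration of that structure; the image name imsim.sif, the imsim command-line flags, and the local-threads configuration are assumptions for demonstration, not the project’s actual workflow.

```python
import parsl
from parsl import bash_app
from parsl.configs.local_threads import config  # simple local executor for the sketch

parsl.load(config)

@bash_app
def run_imsim(instance_catalog, image="imsim.sif",
              stdout="imsim.out", stderr="imsim.err"):
    # The container image and imSim invocation here are hypothetical
    # placeholders; the article does not show the real command line.
    return f"singularity exec {image} imsim --file {instance_catalog}"

# Fan the same containerized task out over many inputs; each call
# returns a future, and Parsl schedules the runs on the executor.
futures = [run_imsim(cat) for cat in ["catalog_0.txt", "catalog_1.txt"]]
_ = [f.result() for f in futures]  # block until all runs finish
```

In production the local-threads config would be swapped for an HPC executor that launches workers across many nodes, which is how the same script scales from a laptop to machines like Theta or Cori.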

Video: The Human Side of AI

In this video from the GPU Technology Conference, Dan Olds from OrionX discusses the human impact of AI with Greg Schmidt from HPE. The industry buzz about artificial intelligence and deep learning typically focuses on hardware, software, frameworks, performance, and the lofty business plans that will be enabled by this new technology. What we don’t […]

Adaptive Deep Reuse Technique cuts AI Training Time by more than 60 Percent

North Carolina State University researchers have developed a technique that reduces training time for deep learning networks by more than 60 percent without sacrificing accuracy, accelerating the development of new artificial intelligence applications. “One of the biggest challenges facing the development of new AI tools is the amount of time and computing power it takes to train deep learning networks to identify and respond to the data patterns that are relevant to their applications. We’ve come up with a way to expedite that process, which we call Adaptive Deep Reuse. We have demonstrated that it can reduce training times by up to 69 percent without accuracy loss.”
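
The announcement doesn’t spell out the mechanics, but the general computation-reuse idea can be pictured as follows: if many input or activation vectors in a batch are nearly identical, a hash-based grouping lets you compute one matrix product per group and reuse it for all members. The NumPy toy below is only a sketch of that intuition under simplified assumptions (random-hyperplane hashing, reuse of a group mean); it is not the published Adaptive Deep Reuse algorithm, which adjusts its reuse granularity as training progresses.

```python
import numpy as np

def reuse_matmul(X, W, n_bits=8, seed=0):
    """Approximate X @ W by bucketing similar rows of X with a random
    hyperplane hash and computing each bucket's product only once."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((X.shape[1], n_bits))
    # Each row gets an n_bits-wide signature; similar rows tend to collide.
    codes = ((X @ planes > 0).astype(int)) @ (1 << np.arange(n_bits))
    out = np.empty((X.shape[0], W.shape[1]))
    for c in np.unique(codes):
        members = codes == c
        out[members] = X[members].mean(axis=0) @ W  # one product, reused
    return out

X = np.random.default_rng(1).standard_normal((512, 64))
W = np.random.default_rng(2).standard_normal((64, 32))
approx = reuse_matmul(X, W)  # cheaper stand-in for X @ W when rows cluster
```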

Spectra Logic and Arcitecta team up for Genomics Data Management

Spectra Logic is teaming with Arcitecta to tackle the massive datasets used in life sciences. The two companies will showcase their joint solutions at the Bio-IT World conference this week in Boston. “Addressing the needs of the life sciences market with reliable data storage lies at the heart of the Spectra and Arcitecta relationship,” said Spectra CTO Matt Starr. “This joint solution enables customers to better manage their data and metadata by optimizing multiple storage targets, retrieving data efficiently and tracking content and resources.”

DUG Installs Immersion Cooling for Bubba Supercomputer in Houston

Today DownUnder GeoSolutions (DUG) announced that tanks are arriving at Skybox Houston for “Bubba,” its huge geophysically-configured supercomputer. “DUG will cool the massive Houston supercomputer using their innovative immersion cooling system that has computer nodes fully submerged in specially-designed tanks filled with polyalphaolefin dielectric fluid. This month, the first of these 722 tanks have been arriving in shipping containers at the facility in Houston.”

Jack Dongarra Named a Foreign Fellow of the Royal Society

Jack Dongarra from the University of Tennessee has been named a Foreign Fellow of the Royal Society, joining previously inducted icons of science such as Isaac Newton, Charles Darwin, Albert Einstein, and Stephen Hawking. “This honor is both humbling because of others who have been so recognized and gratifying for the acknowledgement of the research and work I have done,” Dongarra said. “I’m deeply grateful for this recognition.”

Vintage Video: The Paragon Supercomputer – A Product of Partnership

In this vintage video, Intel launches the Paragon line of supercomputers, a series of massively parallel systems produced in the 1990s. In 1993, Sandia National Laboratories installed an Intel XP/S 140 Paragon supercomputer, which claimed the No. 1 position on the June 1994 TOP500 list. “With 3,680 processors, the system ran the Linpack benchmark at 143.40 Gflop/s. It was the first massively parallel processor supercomputer to be indisputably the fastest system in the world.”

The Computing4Change Program takes on STEM and Workforce Issues

Kelly Gaither from TACC gave this talk at the HPC User Forum. “Computing4Change is a competition empowering people to create change through computing. You may have seen articles on the anticipated shortfall of engineers, computer scientists, and technology designers to fill open jobs. Numbers from the Report to the President in 2012 (President Obama’s Council of Advisors on Science and Technology) show a shortfall of one million available workers to fill STEM-related jobs by 2020.”

Quobyte Distributed File System adds TensorFlow Plug-In for Machine Learning

Today Quobyte announced that the company’s Data Center File System is the first distributed file system to offer a TensorFlow plug-in, providing increased throughput and linear scalability so that ML-powered applications can train faster across larger data sets while achieving higher-accuracy results. “By providing the first distributed file system with a TensorFlow plug-in, we are ensuring as much as 30 percent faster throughput for ML training workflows, helping companies better meet their business objectives through improved operational efficiency,” said Bjorn Kolbeck, Quobyte CEO.
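
TensorFlow routes file I/O through filesystem implementations registered under URI schemes, which is how a storage plug-in can slot beneath an otherwise unchanged input pipeline. The sketch below assumes a hypothetical quobyte:// scheme and file layout purely for illustration; Quobyte’s actual plug-in interface is not described in the announcement.

```python
import tensorflow as tf

# "quobyte://" is a hypothetical URI scheme used for illustration; a
# registered filesystem plug-in is what would make paths like this
# resolvable by TensorFlow's standard I/O machinery.
PATTERN = "quobyte://volume/training/shard-*.tfrecord"

files = tf.data.Dataset.list_files(PATTERN)
dataset = (
    tf.data.TFRecordDataset(files, num_parallel_reads=tf.data.AUTOTUNE)
    .batch(256)
    .prefetch(tf.data.AUTOTUNE)  # overlap storage reads with training
)

for batch in dataset.take(1):
    print(batch.shape)
```

The design point is that throughput gains from the storage layer arrive transparently: the tf.data pipeline is identical to one reading from local disk, only the scheme changes.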

Video: Advancing U.S. Weather Prediction Capabilities with Exascale HPC

Mark Govett from NOAA gave this talk at the GPU Technology Conference. “We’ll discuss the revolution in computing, modeling, data handling and software development that’s needed to advance U.S. weather-prediction capabilities in the exascale computing era. Advancing prediction models to cloud-resolving 1 km resolution will require an estimated 1,000-10,000 times more computing power, but existing models can’t exploit exascale systems with millions of processors. We’ll examine how weather-prediction models must be rewritten to incorporate new scientific algorithms, improved software design, and new technologies such as deep learning to speed model execution, data processing, and information processing.”
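
The 1,000-10,000x estimate is consistent with a standard back-of-envelope argument: refining a 3-D grid by a factor r multiplies the grid-point count by roughly r^3, and the CFL stability limit multiplies the number of time steps by another factor of r, so total cost grows as about r^4. A quick sketch, where the starting resolutions are illustrative assumptions rather than figures from the talk:

```python
# Rough cost scaling for grid refinement in an explicit 3-D model:
# r^3 more grid points, ~r more time steps (CFL), so ~r^4 total.
def cost_factor(current_km: float, target_km: float) -> float:
    r = current_km / target_km
    return r ** 4

print(cost_factor(10, 1))  # 10 km -> 1 km: ~10,000x
print(cost_factor(6, 1))   # 6 km -> 1 km: ~1,300x
```

By this estimate, moving from a global grid in the roughly 6-10 km range down to 1 km lands squarely in the quoted 1,000-10,000x range.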