Video: Simulations of Antarctic Meltdown Should Send Chills on Earth Day

In this video, researchers investigate the millennial-scale vulnerability of the Antarctic Ice Sheet (AIS) due solely to the loss of its ice shelves. Starting from the present day, the AIS evolves over 1,000 years as the floating ice shelves are exposed to an extreme thinning rate, resulting in their complete collapse. The visualizations show the first 500 […]

40 Powers of 10 – Simulating the Universe with the DiRAC HPC Facility

Mark Wilkinson from DiRAC gave this talk at the Swiss HPC Conference. “DiRAC is the integrated supercomputing facility for theoretical modeling and HPC-based research in particle physics, astrophysics, cosmology, and nuclear physics, all areas in which the UK is world-leading. DiRAC provides a variety of compute resources, matching machine architecture to the algorithm design and requirements of the research problems to be solved.”

Video: High Performance Computing on the Google Cloud Platform

“High performance computing is all about scale and speed. And when you’re backed by Google Cloud’s powerful and flexible infrastructure, you can solve problems faster, reduce queue times for large batch workloads, and relieve compute resource limitations. In this session, we’ll discuss why GCP is a great platform to run high-performance computing workloads. We’ll present best practices, architectural patterns, and how PSO can help your journey. We’ll conclude by demoing the deployment of an autoscaling batch system in GCP.”

Video: Why Not All NAS Architectures Can Keep Up with HPC

In this video, Curtis Anderson from Panasas describes how different NAS architectures optimize data flow to bring competitive advantage to your business. “You have a vision: to use high performance computing applications to help people, revolutionize your industry, or change the world. You don’t want to worry if your storage system is up to the task. As the only plug-and-play parallel storage file system on the market, Panasas helps you move beyond storage so you can focus on your big ideas and supercharge innovation.”

Evolving NASA’s Data and Information Systems for Earth Science

Rahul Ramachandran from NASA gave this talk at the HPC User Forum. “NASA’s Earth Science Division (ESD) missions help us to understand our planet’s interconnected systems, from a global scale down to minute processes. ESD delivers the technology, expertise and global observations that help us to map the myriad connections between our planet’s vital processes and the effects of ongoing natural and human-caused changes.”

Video: Managing Large-Scale Cosmology Simulations with Parsl and Singularity

Rick Wagner from Globus gave this talk at the Singularity User Group. “We package the imSim software inside a Singularity container so that it can be developed independently, packaged to include all dependencies, trivially scaled across thousands of computing nodes, and seamlessly moved between computing systems. To date, the simulation workflow has consumed more than 30M core hours using 4K nodes (256K cores) on Argonne’s Theta supercomputer and 2K nodes (128K cores) on NERSC’s Cori supercomputer.”
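
The pattern the talk describes pairs Parsl, a Python parallel scripting library, with Singularity containers: Parsl fans independent simulation tasks out across compute nodes, while the container image carries all of the application's dependencies. As a rough sketch only (the image name imsim.sif, the imsim command line, and the tile parameter are hypothetical stand-ins, not the actual imSim invocation), a Parsl bash_app launching containerized tasks might look like this:

```python
# Hedged sketch: Parsl + Singularity for independent simulation tasks.
# "imsim.sif" and the "imsim --tile" command line are hypothetical
# placeholders, not the real imSim workflow.
import parsl
from parsl import bash_app
from parsl.config import Config
from parsl.executors import HighThroughputExecutor

# A local pilot-job executor for illustration; on a system like Theta
# or Cori this Config would add a scheduler provider so the same script
# scales across thousands of nodes.
parsl.load(Config(executors=[HighThroughputExecutor(label="htex")]))

@bash_app
def simulate(tile, stdout=None, stderr=None):
    # Parsl executes the returned shell command; Singularity supplies
    # the application and every dependency from inside the image.
    return f"singularity exec imsim.sif imsim --tile {tile}"

# Launch independent tasks as futures, then block until all complete.
futures = [simulate(t, stdout=f"tile_{t}.out", stderr=f"tile_{t}.err")
           for t in range(8)]
for f in futures:
    f.result()
```

Because each task is just a container invocation, moving the workflow between systems means swapping the executor configuration rather than touching the application code.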

Video: The Human Side of AI

In this video from the GPU Technology Conference, Dan Olds from OrionX discusses the human impact of AI with Greg Schmidt from HPE. The industry buzz about artificial intelligence and deep learning typically focuses on hardware, software, frameworks, performance, and the lofty business plans that will be enabled by this new technology. What we don’t […]

Spectra Logic and Arcitecta Team Up for Genomics Data Management

Spectra Logic is teaming with Arcitecta to tackle the massive datasets used in life sciences. The two companies will showcase their joint solutions at the BioIT World conference this week in Boston. “Addressing the needs of the life sciences market with reliable data storage lies at the heart of the Spectra and Arcitecta relationship,” said Spectra CTO Matt Starr. “This joint solution enables customers to better manage their data and metadata by optimizing multiple storage targets, retrieving data efficiently and tracking content and resources.”

DUG Installs Immersive Cooling for Bubba Supercomputer in Houston

Today DownUnder GeoSolutions (DUG) announced that tanks are arriving at Skybox Houston for “Bubba,” its huge geophysically-configured supercomputer. “DUG will cool the massive Houston supercomputer using its innovative immersion cooling system, which fully submerges compute nodes in specially-designed tanks filled with polyalphaolefin dielectric fluid. This month, the first of these 722 tanks have been arriving in shipping containers at the facility in Houston.”

Vintage Video: The Paragon Supercomputer – A Product of Partnership

In this vintage video, Intel launches the Paragon line of supercomputers, a series of massively parallel systems produced in the 1990s. In 1993, Sandia National Laboratories installed an Intel XP/S 140 Paragon supercomputer, which claimed the No. 1 position on the June 1994 TOP500 list. “With 3,680 processors, the system ran the Linpack benchmark at 143.40 Gflop/s. It was the first massively parallel processor supercomputer to be indisputably the fastest system in the world.”