Addison Snell presents: The New HPC

Addison Snell from Intersect360 Research gave this talk at the Swiss HPC Conference. “Intersect360 Research returns with an annual deep dive into the trends, technologies and usage models that will be propelling the HPC community through 2017 and beyond. Emerging areas of focus and opportunities to expand will be explored along with insightful observations needed to support measurably positive decision making within your operations.”

Velocity Compute: PeerCache for HPC Cloud Bursting

In this podcast, Eric Thune from Velocity Compute describes how the company’s PeerCache software optimizes data flow for HPC cloud bursting. “By using PeerCache to deliver hybrid cloud bursting, development teams can quickly extend their existing on-premises compute to burst into the cloud for elastic compute power. Your on-premises workflows will run identically in the cloud, without the need for retooling, and the workflow is then moved back to your on-premises servers until the next time you have a peak load.”
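PeerCache itself is proprietary, so the sketch below is only a generic illustration of the overflow decision at the heart of any cloud-bursting setup: fill local capacity first, then route the excess to cloud resources. The slot count, job sizes, and names are invented for the example; this is not PeerCache’s implementation.

    # Toy cloud-bursting dispatcher: fill local slots first, overflow to cloud.
    LOCAL_CORES = 8  # illustrative on-premises capacity

    def dispatch(jobs, local_cores=LOCAL_CORES):
        local, cloud = [], []
        free = local_cores
        for job in jobs:
            if job["cores"] <= free:
                free -= job["cores"]
                local.append(job["name"])
            else:
                cloud.append(job["name"])  # burst: no local room left
        return local, cloud

    jobs = [{"name": f"job{i}", "cores": 4} for i in range(5)]
    local, cloud = dispatch(jobs)
    print("run on-premises:", local)  # job0, job1 fill the 8 local cores
    print("burst to cloud:", cloud)   # job2..job4 overflow to the cloud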

Video: Simulations of Antarctic Meltdown should send chills on Earth Day

In this video, researchers investigate the millennial-scale vulnerability of the Antarctic Ice Sheet (AIS) due solely to the loss of its ice shelves. Starting at the present-day, the AIS evolves for 1000 years, exposing the floating ice shelves to an extreme thinning rate, which results in their complete collapse. The visualizations show the first 500 […]

40 Powers of 10 – Simulating the Universe with the DiRAC HPC Facility

Mark Wilkinson from DiRAC gave this talk at the Swiss HPC Conference. “DiRAC is the integrated supercomputing facility for theoretical modeling and HPC-based research in particle physics, astrophysics, cosmology, and nuclear physics, all areas in which the UK is world-leading. DiRAC provides a variety of compute resources, matching machine architecture to the algorithm design and requirements of the research problems to be solved.”

Video: High Performance Computing on the Google Cloud Platform

“High performance computing is all about scale and speed. And when you’re backed by Google Cloud’s powerful and flexible infrastructure, you can solve problems faster, reduce queue times for large batch workloads, and relieve compute resource limitations. In this session, we’ll discuss why GCP is a great platform to run high-performance computing workloads. We’ll present best practices, architectural patterns, and how PSO can help your journey. We’ll conclude by demoing the deployment of an autoscaling batch system in GCP.”
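The autoscaling pattern the session describes can be sketched against the public Compute Engine v1 API. The example below is a minimal illustration, not the session’s demo code: it assumes application-default credentials, an existing managed instance group named "batch-workers", and placeholder project and zone values.

    # Attach an autoscaler to an existing managed instance group so the
    # batch worker pool grows under CPU load and shrinks when queues drain.
    from googleapiclient import discovery

    PROJECT, ZONE, MIG = "my-hpc-project", "us-central1-a", "batch-workers"  # placeholders

    compute = discovery.build("compute", "v1")

    autoscaler = {
        "name": "batch-autoscaler",
        "target": (
            f"https://www.googleapis.com/compute/v1/projects/{PROJECT}"
            f"/zones/{ZONE}/instanceGroupManagers/{MIG}"
        ),
        "autoscalingPolicy": {
            "minNumReplicas": 1,
            "maxNumReplicas": 256,
            "cpuUtilization": {"utilizationTarget": 0.8},
        },
    }

    operation = compute.autoscalers().insert(
        project=PROJECT, zone=ZONE, body=autoscaler
    ).execute()
    print(operation["status"])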

Video: Why Not All NAS Architectures Can Keep Up with HPC

In this video, Curtis Anderson from Panasas describes how different NAS architectures optimize data flow to bring competitive advantage to your business. “You have a vision: to use high performance computing applications to help people, revolutionize your industry, or change the world. You don’t want to worry if your storage system is up to the task. As the only plug-and-play parallel storage file system in the market, Panasas helps you move beyond storage so you can focus on your big ideas and supercharge innovation.”

Job of the Week: HPC System Administrator at D.E. Shaw Research

D.E. Shaw Research is seeking an HPC System Administrator in our Job of the Week. “Exceptional sysadmins sought to manage systems, storage, and network infrastructure for a New York–based interdisciplinary research group. Ideal candidates should have strong fundamental knowledge of Linux concepts such as file systems, networking, and processes in addition to practical experience administering Linux systems. Relevant areas of expertise might include large-installation systems administration experience and strong programming and scripting ability, but specific knowledge of and level of experience in any of these areas is less critical than exceptional intellectual ability.”

Dr. Lin Gan Reflects on the SC19 Theme: HPC is Now

In this special guest feature from the SC19 Blog, Charity Plata from Brookhaven National Lab catches up with Dr. Lin Gan from Tsinghua University, whose outstanding work in HPC has been recognized with a number of awards, including the Gordon Bell Prize. As a highly awarded young researcher who already has been acknowledged for “outstanding, influential, and potentially long-lasting contributions” in HPC, Gan shares his thoughts on future supercomputers and what it means to say, “HPC Is Now.”

Evolving NASA’s Data and Information Systems for Earth Science

Rahul Ramachandran from NASA gave this talk at the HPC User Forum. “NASA’s Earth Science Division (ESD) missions help us to understand our planet’s interconnected systems, from a global scale down to minute processes. ESD delivers the technology, expertise and global observations that help us to map the myriad connections between our planet’s vital processes and the effects of ongoing natural and human-caused changes.”

Video: Managing large-scale cosmology simulations with Parsl and Singularity

Rick Wagner from Globus gave this talk at the Singularity User Group meeting. “We package the imSim software inside a Singularity container so that it can be developed independently, packaged to include all dependencies, trivially scaled across thousands of computing nodes, and seamlessly moved between computing systems. To date, the simulation workflow has consumed more than 30M core hours using 4K nodes (256K cores) on Argonne’s Theta supercomputer and 2K nodes (128K cores) on NERSC’s Cori supercomputer.”
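For a sense of what this kind of orchestration looks like, here is a minimal Parsl pattern for fanning containerized tasks out across workers. It uses Parsl’s local-threads quickstart config for illustration; the image name "imsim.sif" and the CLI flags are hypothetical stand-ins, and production runs on Theta and Cori would use scheduler-backed executors instead.

    import parsl
    from parsl import bash_app
    from parsl.configs.local_threads import config

    parsl.load(config)  # local threads for illustration only

    @bash_app
    def run_imsim(catalog, stdout="imsim.out", stderr="imsim.err"):
        # Image name and flags are illustrative, not the project's actual invocation.
        return f"singularity exec imsim.sif imsim --catalog {catalog}"

    # One container invocation per instance catalog; wait for all to finish.
    futures = [run_imsim(c) for c in ["visit_001.txt", "visit_002.txt"]]
    for f in futures:
        f.result()

Because each task is a self-contained container invocation, the same app definition scales from a laptop to thousands of nodes by swapping the Parsl config, which is the portability argument the talk makes.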