Podcast: Seeing the Black Hole with Big Data

In this podcast, the Radio Free HPC team discusses how the news of the cool visualization of an actual black hole leads to interesting issues in HPC land. “The real point: the daunting 1.75 PB of raw data from each telescope meant a lot of physical drives that had to be flown to the data center. Henry leads a discussion about the race between bandwidth and data size.”
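The "bandwidth vs. data size" race lends itself to a quick back-of-the-envelope calculation. The sketch below uses the 1.75 PB-per-telescope figure from the episode; the link speeds are assumptions chosen purely for illustration:

```python
# Back-of-the-envelope: moving 1.75 PB over a network vs. shipping drives.
# The 1.75 PB per telescope comes from the episode; link speeds are assumed.

PETABYTE = 1e15  # bytes (decimal petabyte)
data_bits = 1.75 * PETABYTE * 8

for label, gbps in [("1 Gbps", 1), ("10 Gbps", 10), ("100 Gbps", 100)]:
    seconds = data_bits / (gbps * 1e9)
    print(f"{label}: {seconds / 86400:.1f} days to move 1.75 PB")

# Roughly 162 days at 1 Gbps, 16 days at 10 Gbps, 1.6 days at 100 Gbps --
# and that assumes a sustained, dedicated link. A crate of drives on an
# airplane still wins, which is why the drives were flown to the data center.
```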

Personalized Healthcare with High Performance Computing in the Cloud

Wolfgang Gentzsch from the UberCloud gave this talk at the HPC User Forum. “The concept of personalized medicine has its roots deep in genomic research. Indeed, the successful completion of the Human Genome Project in 2003 marked a critical milestone for the field. That project took $3 billion over 13 years. Today, thanks to technological progress, a similar sequencing task would cost only about $4,000 and take a few weeks. Such computational power is possible thanks to cloud technology, which eliminates the barriers to high-performance computing by removing software and hardware constraints.”

Video: LANL Creates first Billion-atom Biomolecular Simulation

Researchers at Los Alamos National Laboratory have created the largest simulation to date of an entire gene of DNA, a feat that required one billion atoms to model and will help researchers to better understand and develop cures for diseases like cancer. “It is important to understand DNA at this level of detail because we want to understand precisely how genes turn on and off,” said Karissa Sanbonmatsu, a structural biologist at Los Alamos. “Knowing how this happens could unlock the secrets to how many diseases occur.”

Video: Simulations of Antarctic Meltdown should send chills on Earth Day

In this video, researchers investigate the millennial-scale vulnerability of the Antarctic Ice Sheet (AIS) due solely to the loss of its ice shelves. Starting at the present day, the AIS evolves for 1,000 years, exposing the floating ice shelves to an extreme thinning rate that results in their complete collapse. The visualizations show the first 500 […]

Supercomputing Bioelectric Fields in the Fight Against Cancer

Researchers from the University of California, Santa Barbara are using TACC supercomputers to study the bioelectric effects of cells to develop new anti-cancer strategies. “For us, this research would not have been possible without XSEDE because such simulations require over 2,000 cores for 24 hours and terabytes of data to reach time scales and length scales where the collective interactions between cells manifest themselves as a pattern,” Gibou said. “It helped us observe a surprising structure for the behavior of the aggregate out of the inherent randomness.”

40 Powers of 10 – Simulating the Universe with the DiRAC HPC Facility

Mark Wilkinson from DiRAC gave this talk at the Swiss HPC Conference. “DiRAC is the integrated supercomputing facility for theoretical modeling and HPC-based research in particle physics, astrophysics, cosmology, and nuclear physics, all areas in which the UK is world-leading. DiRAC provides a variety of compute resources, matching machine architecture to the algorithm design and requirements of the research problems to be solved.”

Evolving NASA’s Data and Information Systems for Earth Science

Rahul Ramachandran from NASA gave this talk at the HPC User Forum. “NASA’s Earth Science Division (ESD) missions help us to understand our planet’s interconnected systems, from a global scale down to minute processes. ESD delivers the technology, expertise and global observations that help us to map the myriad connections between our planet’s vital processes and the effects of ongoing natural and human-caused changes.”

Video: Managing large-scale cosmology simulations with Parsl and Singularity

Rick Wagner from Globus gave this talk at the Singularity User Group. “We package the imSim software inside a Singularity container so that it can be developed independently, packaged to include all dependencies, trivially scaled across thousands of computing nodes, and seamlessly moved between computing systems. To date, the simulation workflow has consumed more than 30M core hours using 4K nodes (256K cores) on Argonne’s Theta supercomputer and 2K nodes (128K cores) on NERSC’s Cori supercomputer.”
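The pattern described, wrapping a containerized application in Parsl apps so individual simulation tasks can fan out across many nodes, might look roughly like the minimal sketch below. The container image name, the imSim command line, and the executor configuration are placeholders assumed for illustration, not details taken from the talk:

```python
import parsl
from parsl import bash_app
from parsl.config import Config
from parsl.executors import HighThroughputExecutor

# Minimal local configuration; a production run would point the executor
# at a batch-scheduler provider on a system such as Theta or Cori instead.
parsl.load(Config(executors=[HighThroughputExecutor(label="htex")]))

@bash_app
def run_imsim(instance_catalog):
    # Each task runs the containerized simulator; the image name and
    # command-line flags here are illustrative placeholders.
    return f"singularity exec imsim.sif imsim --instcat {instance_catalog}"

# Fan out one task per instance catalog and wait for all of them to finish.
futures = [run_imsim(cat) for cat in ["catalog_000.txt", "catalog_001.txt"]]
for f in futures:
    f.result()
```

Because the Singularity image carries all of imSim's dependencies, the same Parsl script can be pointed at a different machine by swapping only the executor configuration.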

Video: Advancing U.S. Weather Prediction Capabilities with Exascale HPC

Mark Govett from NOAA gave this talk at the GPU Technology Conference. “We’ll discuss the revolution in computing, modeling, data handling and software development that’s needed to advance U.S. weather-prediction capabilities in the exascale computing era. Pushing prediction models to cloud-resolving 1 km resolution will require an estimated 1,000-10,000 times more computing power, but existing models can’t exploit exascale systems with millions of processors. We’ll examine how weather-prediction models must be rewritten to incorporate new scientific algorithms, improved software design, and new technologies such as deep learning to speed model execution, data processing, and information processing.”
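A rough scaling sketch shows where an estimate in the 1,000-10,000x range can come from. Assuming a current global model at roughly 10-13 km horizontal grid spacing (an assumption for illustration, not a figure from the talk), refining the horizontal grid by a factor r multiplies the number of grid columns by about r^2, and the shorter stable time step adds roughly another factor of r:

```python
# Rough cost scaling for refining a weather model's horizontal grid.
# Assumption (not from the talk): current global resolution is ~10-13 km,
# vertical levels stay fixed, and the time step shrinks with grid spacing.
def cost_factor(current_km, target_km):
    r = current_km / target_km      # horizontal refinement factor
    return r ** 2 * r               # ~r^2 more grid columns, ~r more time steps

for current_km in (10, 13):
    print(f"{current_km} km -> 1 km: ~{cost_factor(current_km, 1):,.0f}x more compute")

# Prints ~1,000x and ~2,197x; adding vertical levels, more expensive physics,
# and ensemble members pushes the estimate toward the upper end of the
# 1,000-10,000x range quoted above.
```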

Job of the Week: HPC Technology Researcher at Chevron

Chevron is seeking an HPC Technology Researcher in our Job of the Week. “This position will be accountable for strategic research, technology development and business engagement to deliver High Performance Computing solutions that differentiate Chevron’s performance. The successful candidate is expected to manage projects and small programs and personally apply and grow technical skills in the Advanced Computing space.”