Agenda Posted: Forum Teratec in France

The Forum Teratec in France has posted its speaker agenda. Drawing over 1,300 attendees, the event takes place June 11-12 in Palaiseau. “The Forum Teratec is the premier international meeting for all players in HPC, Simulation, Big Data and Machine Learning (AI). It is a unique place of exchange and sharing for professionals in the sector. Come and discover the innovations that will revolutionize practices in industry and in many other fields of activity.”

Video: LANL Creates First Billion-atom Biomolecular Simulation

Researchers at Los Alamos National Laboratory have created the largest simulation to date of an entire gene of DNA, a feat that required one billion atoms to model and will help researchers to better understand and develop cures for diseases like cancer. “It is important to understand DNA at this level of detail because we want to understand precisely how genes turn on and off,” said Karissa Sanbonmatsu, a structural biologist at Los Alamos. “Knowing how this happens could unlock the secrets to how many diseases occur.”

Supercomputing Bioelectric Fields in the Fight Against Cancer

Researchers from the University of California, Santa Barbara are using TACC supercomputers to study the bioelectric effects of cells to develop new anti-cancer strategies. “For us, this research would not have been possible without XSEDE because such simulations require over 2,000 cores for 24 hours and terabytes of data to reach time scales and length scales where the collective interactions between cells manifest themselves as a pattern,” Gibou said. “It helped us observe a surprising structure for the behavior of the aggregate out of the inherent randomness.”

40 Powers of 10 – Simulating the Universe with the DiRAC HPC Facility

Mark Wilkinson from DiRAC gave this talk at the Swiss HPC Conference. “DiRAC is the integrated supercomputing facility for theoretical modeling and HPC-based research in particle physics, astrophysics, cosmology, and nuclear physics, all areas in which the UK is world-leading. DiRAC provides a variety of compute resources, matching machine architecture to the algorithm design and requirements of the research problems to be solved.”

Taurus Europe Acquires ClusterVision

The Taurus Group in the Netherlands has acquired European HPC specialist ClusterVision. “The ability to bring ClusterVision into the portfolio is very important for our HPC strategy and future growth,” the company said. “We have a long history in the distribution of storage, networks, and compute. We believe that the integration of closely linked corporate verticals will ultimately bring significant scale, synergy, and a thriving circular economy to the entire group.”

Video: High Performance Computing on the Google Cloud Platform

“High performance computing is all about scale and speed. And when you’re backed by Google Cloud’s powerful and flexible infrastructure, you can solve problems faster, reduce queue times for large batch workloads, and relieve compute resource limitations. In this session, we’ll discuss why GCP is a great platform to run high-performance computing workloads. We’ll present best practices, architectural patterns, and how PSO can help your journey. We’ll conclude by demo’ing the deployment of an autoscaling batch system in GCP.”

Job of the Week: HPC System Administrator at D.E. Shaw Research

D.E. Shaw Research is seeking an HPC System Administrator in our Job of the Week. “Exceptional sysadmins sought to manage systems, storage, and network infrastructure for a New York–based interdisciplinary research group. Ideal candidates should have strong fundamental knowledge of Linux concepts such as file systems, networking, and processes in addition to practical experience administering Linux systems. Relevant areas of expertise might include large-installation systems administration experience and strong programming and scripting ability, but specific knowledge of and level of experience in any of these areas is less critical than exceptional intellectual ability.”

Dr. Lin Gan Reflects on the SC19 Theme: HPC is Now

In this special guest feature from the SC19 Blog, Charity Plata from Brookhaven National Lab catches up with Dr. Lin Gan from Tsinghua University, whose outstanding work in HPC has been recognized with a number of awards, including the Gordon Bell Prize. As a highly awarded young researcher who already has been acknowledged for “outstanding, influential, and potentially long-lasting contributions” in HPC, Gan shares his thoughts on future supercomputers and what it means to say, “HPC Is Now.”

Evolving NASA’s Data and Information Systems for Earth Science

Rahul Ramachandran from NASA gave this talk at the HPC User Forum. “NASA’s Earth Science Division (ESD) missions help us to understand our planet’s interconnected systems, from a global scale down to minute processes. ESD delivers the technology, expertise and global observations that help us to map the myriad connections between our planet’s vital processes and the effects of ongoing natural and human-caused changes.”

Video: Managing large-scale cosmology simulations with Parsl and Singularity

Rick Wagner from Globus gave this talk at the Singularity User Group. “We package the imSim software inside a Singularity container so that it can be developed independently, packaged to include all dependencies, trivially scaled across thousands of computing nodes, and seamlessly moved between computing systems. To date, the simulation workflow has consumed more than 30M core hours using 4K nodes (256K cores) on Argonne’s Theta supercomputer and 2K nodes (128K cores) on NERSC’s Cori supercomputer.”
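
To illustrate the general pattern described in the talk, here is a minimal sketch of a Parsl bash app that launches a task inside a Singularity container. This is not the authors’ actual pipeline: the container image name (imsim.sif), the imsim command line, and the catalog file are hypothetical placeholders, and the executor is a small local configuration rather than the thousand-node Theta/Cori setups mentioned above.

```python
# Sketch: running a containerized simulation step through Parsl.
# Assumes Parsl and Singularity are installed; image and command are placeholders.
import parsl
from parsl import bash_app
from parsl.config import Config
from parsl.executors import HighThroughputExecutor

# Small local executor for illustration; production runs would swap in a
# provider/launcher targeting an HPC scheduler and many nodes.
parsl.load(Config(executors=[HighThroughputExecutor(label="local")]))

@bash_app
def run_simulation(image, instance_catalog, stdout="sim.out", stderr="sim.err"):
    # The container encapsulates all dependencies, so the same command line
    # can move between systems unchanged. "imsim" and its flag are assumptions.
    return f"singularity exec {image} imsim --catalog {instance_catalog}"

# Each call returns a future; launching many of these is how the workflow
# fans out across nodes.
future = run_simulation("imsim.sif", "instance_catalog.txt")
future.result()  # blocks until the containerized task finishes
```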