
Mellanox HDR 200G InfiniBand is powering next-gen supercomputers

Today Mellanox announced that HDR 200G InfiniBand is powering the next generation of supercomputers worldwide, enabling higher levels of research and scientific discovery. HDR 200G InfiniBand solutions include ConnectX-6 adapters, Mellanox Quantum switches, LinkX cables and transceivers, and software packages. With the industry's highest data throughput, extremely low latency, and smart In-Network Computing acceleration engines, HDR InfiniBand provides world-leading performance and scalability for the most demanding compute and data applications.

Podcast: Radio Free HPC Runs Down the TOP500

In this podcast, the Radio Free HPC team reviews the new TOP500 list of the world's most powerful supercomputers. "Not much changed in the TOP10, but a lot is changing further down the list. The overwhelming majority of the systems, 478 of them, are based on Intel CPUs. 13 are IBM, and there is 1 system based on Arm provided by Cavium, now part of Marvell."

For the first time, all TOP500 Systems are Petaflop Machines

The latest TOP500 list of the world's fastest supercomputers is out today, marking a major milestone in the 26-year history of the list. For the first time, all 500 systems deliver a petaflop or more on the Linpack benchmark. "Frontera at TACC is the only new supercomputer in the top 10, which attained its number five ranking by delivering 23.5 petaflops on HPL. The Dell C6420 system is powered by Intel Xeon Platinum 8280 processors."

Video: Supercomputing Dynamic Earthquake Ruptures

Researchers are using XSEDE supercomputers to model multi-fault earthquakes in the Brawley fault zone, which links the San Andreas and Imperial faults in Southern California. Their work could predict the behavior of earthquakes that could potentially affect millions of people’s lives and property. “Basically, we generate a virtual world where we create different types of earthquakes. That helps us understand how earthquakes in the real world are happening.”

TACC Powers Climate Studies with GRACE Project

Researchers are using powerful supercomputers at TACC to process data from Gravity Recovery and Climate Experiment (GRACE). “Intended to last just five years in orbit for a limited, experimental mission to measure small changes in the Earth’s gravitational fields, GRACE operated for more than 15 years and provided unprecedented insight into our global water resources, from more accurate measurements of polar ice loss to a better view of the ocean currents, and the rise in global sea levels.”

Video: The Exascale Computing Project and the Future of HPC

Doug Kothe from ORNL presented this ACM Tech Talk in April 2019. "The ECP is designing the software infrastructure to enable the next generation of supercomputers—systems capable of more than 10^18 operations per second—to effectively and efficiently run applications that address currently intractable problems of strategic importance. The ECP is creating and deploying an expanded and vertically integrated software stack on DOE HPC exascale and pre-exascale systems, thereby defining the enduring US exascale ecosystem."

Call for Presentations: MVAPICH User Group in August

The 7th annual MVAPICH User Group (MUG) meeting has issued its Call for Presentations. MUG will take place August 19-21, 2019 in Columbus, Ohio. "MUG aims to bring together MVAPICH2 users, researchers, developers, and system administrators to share their experience and knowledge and learn from each other. The event includes keynote talks, invited tutorials, invited talks, contributed presentations, an open mic session, and hands-on sessions with MVAPICH developers."

TACC Podcast Looks at the Challenges of Computational Reproducibility

In this TACC Podcast, Dan Stanzione and Doug James from the Texas Advanced Computing Center discuss the thorny issue of reproducibility in HPC. “Computational reproducibility is a subset of the broader and even harder topic of scientific reproducibility,” said Dan Stanzione, TACC’s executive director. “If we can’t get the exact same answer bit-for-bit, then what’s close enough? What’s a scientifically valid way to represent that? That’s what we’re after.”

The Computing4Change Program takes on STEM and Workforce Issues

Kelly Gaither from TACC gave this talk at the HPC User Forum. "Computing4Change is a competition empowering people to create change through computing. You may have seen articles on the anticipated shortfall of engineers, computer scientists, and technology designers to fill open jobs. Numbers from the 2012 Report to the President, from President Obama's Council of Advisors on Science and Technology, show a shortfall of one million available workers to fill STEM-related jobs by 2020."

Podcast: Supercomputing Synthetic Biomolecules

Researchers are using HPC to design potentially life-saving proteins. In this TACC podcast, host Jorge Salazar discusses this groundbreaking work with the science team. “The scientists say their methods could be applied to useful technologies such as pharmaceutical targeting, artificial energy harvesting, ‘smart’ sensing and building materials, and more.”