Simulating the Earth’s mysterious mantle

Scientists are taking advantage of a $2.5 million NSF grant to develop a new framework for integrated geodynamic models that simulate the Earth’s mantle. “Most physical phenomena can be described by partial differential equations that explain energy balances or loss,” said Heister, an associate professor of mathematical sciences who will receive $393,000 of the overall funding. “My geoscience colleagues will develop the equations to describe the phenomena and I’ll write the algorithms that solve their equations quickly and accurately.”
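To give a flavor of the kind of computation Heister describes, here is a minimal, purely illustrative sketch: the 1D heat equation (a simple energy-balance PDE, u_t = α·u_xx) solved with an explicit finite-difference scheme. The parameters and setup are assumptions for illustration only, not taken from the project itself.

```python
import numpy as np

# Illustrative only: a toy energy-balance PDE, u_t = alpha * u_xx,
# solved with an explicit finite-difference time-stepping scheme.
alpha = 1.0                 # thermal diffusivity (arbitrary units)
nx, nt = 50, 200            # grid points in space, number of time steps
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha    # step size within the explicit-scheme stability limit

u = np.zeros(nx)
u[nx // 2] = 1.0            # initial heat spike in the middle of the domain

for _ in range(nt):
    # update interior points; boundaries held at zero temperature (Dirichlet)
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

# the spike diffuses outward and smooths over time
```

Production frameworks of the kind the grant funds solve far more complex, coupled 3D equations on adaptive meshes across thousands of cores, but the core idea of discretizing a PDE and stepping it forward is the same.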

The Role of Cyberinfrastructure in Science: Challenges and Opportunities

Ewa Deelman from the University of Southern California gave this Invited Talk at SC19. “This presentation examines selected modern scientific discoveries, such as the detection of gravitational waves, from the CI perspective. It aims to answer the following key questions: first, what were the key CI solutions that enabled a particular scientific result? And second, what were the challenges that needed to be overcome?”

SDSC Conducts 50,000+ GPU Cloudburst Experiment with Wisconsin IceCube Particle Astrophysics Center

In all, some 51,500 GPU processors were used during the approximately two-hour experiment conducted on November 16 and funded under a National Science Foundation EAGER grant. The experiment used simulations from the IceCube Neutrino Observatory, an array of some 5,160 optical sensors deep within a cubic kilometer of ice at the South Pole. In 2017, researchers at the NSF-funded observatory found the first evidence of a source of high-energy cosmic neutrinos – subatomic particles that can emerge from their sources and pass through the universe unscathed, traveling for billions of light years to Earth from some of the most extreme environments in the universe.

NSF Invests in Multi-Messenger Astrophysics

“The promise of multi-messenger astrophysics, however, can be realized only if sufficient cyberinfrastructure is available to rapidly handle, combine and analyze the very large-scale distributed data from all types of astronomical measurements. The conceptualization phase of SCIMMA will balance rapid prototyping, novel algorithm development and software sustainability to accelerate scientific discovery over the next decade and more.”

HPC Framework Blocks to Ease Programming of Exascale Supercomputers

Researchers are beginning a three-year cross-institute project that aims to lower the barrier to entry for software engineers developing new high-performance applications on large-scale parallel systems. “The team of researchers plans to combine user insights, new compiler optimizations, and advanced runtime support to create the PAbB framework, which will ultimately create building blocks of parallel code for heterogeneous environments to use across a number of applications from computational science and data science.”

NSF Grant to Help Develop Cyberinfrastructure Across Midwest

The National Science Foundation has awarded a $1.4 million grant to a team of experts led by Timothy Middelkoop, assistant teaching professor of industrial and manufacturing systems engineering in the University of Missouri’s College of Engineering. The researchers said the grant will fill an emerging need by providing training and resources in high-performance computer systems. “There is a critical need for building cyberinfrastructure across the nation, including the Midwest region,” said Middelkoop, who also serves as the director of Research Computing Support Services in the Division of Information Technology at MU. “It is our job as cyberinfrastructure professionals to facilitate research and work with researchers as a team to identify the best practices.”

Podcast: Inside TACC’s Frontera Supercomputer

In this Intel Chip Chat podcast, Dan Stanzione from TACC discusses the architecture and capabilities of Frontera, TACC’s newest HPC cluster. “Frontera’s architecture includes 8,000 servers, each powered by 2nd Generation Intel Xeon Scalable processors. The cluster includes hundreds of thousands of processing cores and a liquid-cooled infrastructure enabling a higher clock rate for even more performance.”

TACC Unveils Frontera – Fastest Supercomputer in Academia

Today TACC unveiled Frontera, the 5th most powerful supercomputer in the world. “Frontera has been supporting science applications since June and has already enabled more than three dozen teams to conduct research on a range of topics from black hole physics to climate modeling to drug design, employing simulation, data analysis, and artificial intelligence at a scale not previously possible.”

Frontera: The Next Generation NSF HPC Resource, and Why HPC Still isn’t the Cloud

Dan Stanzione from TACC gave this talk at the MVAPICH User Group. “In this talk, I will describe the main components of the award: the Phase 1 system, “Frontera”, the plans for facility operations and scientific support for the next five years, and the plans to design a Phase 2 system in the mid-2020s to be the NSF Leadership system for the latter half of the decade, with capabilities 10x beyond Frontera. The talk will also discuss the key role MVAPICH and Infiniband play in the project, and why the workload for HPC still can’t fit effectively on the cloud without advanced networking support.”

SDSC Awarded NSF Grant for Triton Stratus

The National Science Foundation has awarded SDSC a two-year grant worth almost $400,000 to deploy a new system called Triton Stratus. “Triton Stratus will provide researchers with improved facilities for utilizing emerging computing paradigms and tools, namely interactive and portal-based computing, and scaling them to commercial cloud computing resources. Researchers, especially data scientists, are increasingly using tools such as Jupyter notebooks and RStudio to implement computational and data analysis functions and workflows.”