GRC Builds GPU-Based Immersion Cluster for TACC

Green Revolution Cooling has announced plans to deliver a custom GPU-based cluster to the Texas Advanced Computing Center (TACC). “Our goal is to make HPC more affordable. Offering lower-cost, energy-dense servers that take full advantage of our highly efficient CarnotJet cooling system is a huge benefit to our customers,” said Larry Stone, VP of Engineering at GRC. “We are seeing a growing number of people opt for servers designed for immersion over traditional big-brand OEM hardware.”

Podcast: Optimizing Cosmos Code on Intel Xeon Phi

In this TACC podcast, Cosmos code developer Chris Fragile joins host Jorge Salazar for a discussion on how researchers are using supercomputers to simulate the inner workings of black holes. “For this simulation, the manycore architecture of KNL presents new challenges for researchers trying to get the best compute performance. This is a computer chip that has lots of cores compared to some of the other chips one might have interacted with on other systems,” McDougall explained. “More attention needs to be paid to the design of software to run effectively on those types of chips.”
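The optimization point generalizes beyond Cosmos: on a manycore chip like KNL, single-threaded code leaves most of the silicon idle, so loops need to be both threaded across cores and vectorized within each core. The sketch below is illustrative only and is not the Cosmos code; the array names and sizes are hypothetical, and it simply shows the kind of OpenMP thread-plus-SIMD loop structure such chips reward.

```c
/* Illustrative only: a generic OpenMP loop structured for a manycore
 * chip such as Knights Landing. Not the Cosmos code; arrays and sizes
 * here are hypothetical. Compile with e.g. cc -O2 -fopenmp demo.c */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N 10000000L

int main(void) {
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));

    /* Spread independent iterations across the chip's many cores and
     * let the compiler vectorize the arithmetic with SIMD lanes. */
    #pragma omp parallel for simd schedule(static)
    for (long i = 0; i < N; i++) {
        a[i] = 0.5 * (double)(i % 100);
        b[i] = a[i] * a[i] + 1.0;
    }

    printf("threads available: %d, b[42] = %f\n",
           omp_get_max_threads(), b[42]);
    free(a);
    free(b);
    return 0;
}
```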

Firing up a Continent with HPC

In this special guest feature from Scientific Computing World, Nox Moyake describes the process of entrenching and developing HPC in South Africa. “The CHPC currently has about 1,000 users; most are in academia, with others in industry. The centre supports research across a number of domains and participates in a number of grand international projects such as the CERN and SKA projects.”

Fighting the West Nile Virus with HPC & Analytical Ultracentrifugation

Researchers are using new techniques with HPC to learn more about how the West Nile virus replicates inside the brain. “Over several years, Demeler has developed analysis software for experiments performed with analytical ultracentrifuges. The goal is to facilitate the extraction of all of the information possible from the available data. To do this, we developed very high-resolution analysis methods that require high performance computing to access this information,” he said. “We rely on HPC. It’s absolutely critical.”

Researchers Use TACC, SDSC and NASA Supercomputers to Forecast Corona of the Sun

Predictive Sciences ran a large-scale simulation of the Sun’s surface in preparation for a prediction of what the solar corona will look like during the eclipse. “The solar eclipse allows us to see levels of the solar corona not possible even with the most powerful telescopes and spacecraft,” said Niall Gaffney, a former Hubble scientist and director of Data Intensive Computing at the Texas Advanced Computing Center. “It also gives high performance computing researchers who model high energy plasmas the unique ability to test our understanding of magnetohydrodynamics at a scale and environment not possible anywhere else.”

Podcast: 18 Petaflop Stampede2 Supercomputer Powers Research at TACC

In this Texas Standard podcast, Dan Stanzione from TACC describes Stampede2, the most powerful university supercomputer in the United States. “Phase 1 of the Stampede2 rollout, now complete, features 4,200 Knights Landing (KNL) nodes, the second generation of processors based on Intel’s Many Integrated Core (MIC) architecture. Later this summer Phase 2 will add 1,736 Intel Xeon Skylake nodes.”

Radio Free HPC Looks at Posit Computing

In this podcast, the Radio Free HPC team looks at the problems with IEEE Floating Point. “As described in a recent presentation by John Gustafson, the flaws and idiosyncrasies of floating-point arithmetic ‘constitute a sizable portion of any curriculum on Numerical Analysis.’ The whole thing has Dan pretty worked up, so we hope that the news of Posit Computing coming to the new processors from Rex Computing will help.”
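For readers who have not run into these quirks directly, here is a small self-contained C snippet (assuming nothing beyond standard IEEE 754 double precision) showing two of the idiosyncrasies Gustafson’s critique points to: decimal fractions such as 0.1 that cannot be represented exactly, and addition that is not associative.

```c
/* A minimal demonstration of two well-known IEEE 754 idiosyncrasies:
 * inexact decimal fractions and non-associative addition. */
#include <stdio.h>

int main(void) {
    /* 0.1 and 0.2 are not exactly representable in binary, so the sum
     * is not exactly 0.3. */
    double x = 0.1 + 0.2;
    printf("0.1 + 0.2 == 0.3 ? %s (x = %.17g)\n",
           x == 0.3 ? "yes" : "no", x);

    /* Near 1e16 the spacing between doubles is 2.0, so adding 1.0
     * twice is lost, while adding the pre-summed 2.0 survives. */
    double big   = 1e16;
    double left  = (big + 1.0) + 1.0;   /* each 1.0 rounded away      */
    double right = big + (1.0 + 1.0);   /* grouped first, so it sticks */
    printf("associativity holds? %s (right - left = %g)\n",
           left == right ? "yes" : "no", right - left);
    return 0;
}
```

Printed with 17 significant digits, the first sum shows up as 0.30000000000000004, and the two groupings of the second sum differ by 2, which is exactly the kind of behavior posit arithmetic aims to make less surprising.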

18 Petaflop Stampede2 Supercomputer Dedicated at TACC

Stampede2 is the newest strategic supercomputing resource for the nation’s research and education community, enabling scientists and engineers across the U.S., from multiple disciplines, to answer questions at the forefront of science and engineering. “Building on the success of the initial Stampede system, the Stampede team has partnered with other institutions as well as industry to bring the latest in forward-looking computing technologies combined with deep computational and data science expertise to take on some of the most challenging science and engineering frontiers,” said Irene Qualters, director of NSF’s Office of Advanced Cyberinfrastructure.

Podcast: A Retrospective on Great Science and the Stampede Supercomputer

TACC will soon deploy Phase 2 of the Stampede2 supercomputer. In this podcast, they celebrate by looking back on some of the great science computed on the original Stampede machine. “In 2017, the Stampede supercomputer, funded by the NSF, completed its five-year mission to provide world-class computational resources and support staff to more than 11,000 U.S. users on over 3,000 projects in the open science community. But what made it special? Stampede was like a bridge that moved thousands of researchers off of soon-to-be-decommissioned supercomputers, while at the same time building a framework that anticipated the imminent trends that came to dominate advanced computing.”

Supercomputing DNA Packing in Nuclei at TACC

Aaron Dubrow writes that researchers at the University of Texas Medical Branch are exploring DNA folding and cellular packing with supercomputing power from TACC. “In the field of molecular biology, there’s a wonderful interplay between theory, experiment and simulation,” Pettitt said. “We take parameters of experiments and see if they agree with the simulations and theories. This becomes the scientific method for how we now advance our hypotheses.”