Podcast: China Tianhe-3 Exascale machine is coming

In this podcast, the Radio Free HPC team takes a second look at China’s plans for the Tianhe-3 exascale supercomputer. “According to news articles, Tianhe-3 will be 200 times faster than Tianhe-1, with 100x more storage. What we don’t know is whether these comparisons are relative to Tianhe-1 or Tianhe-1A. The latter machine weighs in at 2.256 PFlop/s, which means that Tianhe-3 might be as fast as 450 PFlop/s when complete. We also made a reference to a past episode, which we know you remember vividly, where we discussed China’s three-pronged strategy for exascale.”
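To make the scaling explicit, here is the back-of-the-envelope arithmetic, assuming the 200x figure is measured against the Tianhe-1A number cited in the quote:

\[ 2.256\ \text{PFlop/s} \times 200 \approx 451\ \text{PFlop/s} \]

which lines up with the roughly 450 PFlop/s estimate above.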

Podcast: Weather Forecasting Goes Crowdsourcing, Q means Quantum

In this episode of Radio Free HPC, Dan, Henry, and Shahin start with a spirited discussion about IBM’s recent announcement of a “crowdsourced weather prediction application.” Henry was dubious as to whether Big Blue can get access to the data it needs to put out a truly valuable product. Dan had questions about the value of the crowdsourced data and how it could be scrubbed to be useful. Shahin was more favorable toward IBM’s plans and believes the company will solve the problems that Henry and Dan raised.

Podcast: How AI and HPC Are Converging with Support from Intel Technology

In this Intel Chip Chat podcast, Dr. Pradeep Dubey, Intel Fellow and director of its Parallel Computing Lab, explains why it makes sense for HPC and AI to come together and how Intel is supporting this convergence. “AI developers tend to be data scientists, focused on deriving intelligence and insights from massive amounts of digital data, rather than typical HPC programmers with deep system programming skills. Because Intel architecture serves as the foundation for both AI and HPC workloads, Intel is uniquely positioned to drive their convergence. Its technologies and products span processing, memory, and networking at ever-increasing levels of power and scalability.”

Video: Ramping up for Exascale at the National Labs

In this video from the Exascale Computing Project, Dave Montoya from LANL describes the continuous software integration effort at DOE facilities where exascale computers will be located sometime in the next 3-4 years. “A key aspect of the Exascale Computing Project’s continuous integration activities is ensuring that the software in development for exascale can efficiently be deployed at the facilities and that it properly blends with the facilities’ many software components. As is commonly understood in the realm of high-performance computing, integration is very challenging: both the hardware and software are complex, with a huge number of dependencies, and creating the associated essential healthy software ecosystem requires abundant testing.”

Podcast: Improving Parallel Applications with the TAU tool

In this podcast, Mike Bernhardt from ECP catches up with Sameer Shende to learn how the Performance Research Lab at the University of Oregon is helping to pave the way to exascale. “Developers of parallel computing applications can well appreciate the Tuning and Analysis Utilities (TAU) performance evaluation tool—it helps them optimize their efforts. Sameer has worked with the TAU software for nearly two and a half decades and has released more than 200 versions of it. Whatever your application looks like, there’s a good chance that TAU can support it and help you improve your performance.”

Radio Free HPC Looks at Santa’s Big Data Challenges

In this podcast video, the Radio Free HPC team looks at the monumental IT challenges that Santa faces each Holiday Season. “With nearly 2 billion children to serve, Santa’s operations are an IT challenge on the grandest scale. If the world’s population keeps growing by 83 million people per year, Santa may need to build a hybrid cloud just to keep up. With billions of simultaneous queries, the Big Data analytics required will certainly call for an 8-socket NUMA machine with 4 terabytes of central memory.”

Podcast Looks at Exascale Computing for Forefront Scientific Problems

In this edition of Let’s Talk Exascale, Fred Streitz of Lawrence Livermore National Laboratory describes his team’s efforts to develop supercomputer applications that address forefront scientific problems by pushing the limits of leadership-class computing. “At SC18, Fred Streitz gave a talk in the US Department of Energy booth on the topic ‘Machine Learning and Predictive Simulation: HPC and the US Cancer Moonshot on Sierra.’ As a guest on the ECP podcast, he provides an overview and some insights from his booth talk.”

Radio Free HPC Looks at TOP500 Trends on the Road to Exascale

In this podcast, the Radio Free HPC team looks at the semi-annual TOP500 BoF presentation by Jack Dongarra.

The TOP500 list of supercomputers serves as a “Who’s Who” in the field of High Performance Computing. “This BoF will present detailed analyses of the TOP500 and discuss the changes in the HPC marketplace during the past years. The BoF is meant as an open forum for discussion and feedback between the TOP500 authors and the user community.”

Radio Free HPC Runs Down the TOP500 Fastest Supercomputers

In this podcast, the Radio Free HPC team looks back on the highlights of SC18 and the newest TOP500 list of the world’s fastest supercomputers.

Buddy Bland shows off Summit, the world’s fastest supercomputer, at ORNL. “The latest TOP500 list of the world’s fastest supercomputers is out, a remarkable ranking that shows five Department of Energy supercomputers in the top 10, with the first two spots captured by Summit at Oak Ridge and Sierra at Livermore. With the number one and number two systems on the planet, the ‘Rebel Alliance’ vendors of IBM, Mellanox, and NVIDIA stand far and tall above the others.”

Radio Free HPC Gets an Update on the Spaceborne Supercomputer

In this podcast, the Radio Free HPC team sits down with Mark Fernandez from HPE to discuss the Spaceborne Supercomputer that is currently orbiting the planet aboard the International Space Station. “Last week, HPE announced it is opening high-performance computing capabilities to astronauts on the International Space Station (ISS) as part of its continued experiments on the Spaceborne Computer project.”