

Podcast: Modernizing the Electric Grid with HPC

In this podcast, the Radio Free HPC team looks at how Lawrence Livermore National Lab is working to simulate and help modernize the electric grid. They discuss how the ‘new grid’ will need to be two-way, both delivering and accepting electricity. The new grid will also have to communicate with smart homes and other buildings in order to predict demand and adjust real time pricing.

Podcast: What is an AI Supercomputer?

In this podcast, the Radio Free HPC team asks what makes a supercomputer an “AI Supercomputer.” The question came up after HPE announced a new AI system called Jean Zay that will double the capacity of French supercomputing. “So what are the differences between a traditional super and an AI super? According to Dan, it mostly comes down to how many GPUs the system is configured with, while Shahin and Henry think it has something to do with the datasets.”

Video: TensorFlow for HPC?

In this video, Peter Braam looks at how the TensorFlow framework could be used to accelerate high performance computing. “Google has developed TensorFlow, a truly complete platform for ML. The performance of the platform is amazing, and it begs the question if it will be useful for HPC in a similar manner that GPUs heralded a revolution.”

Podcast: Doug Kothe Looks back at the Exascale Computing Project Annual Meeting

In this podcast, Doug Kothe from the Exascale Computing Project describes the 2019 ECP Annual Meeting. “Key topics to be covered at the meeting are discussions of future systems, software stack plans, and interactions with facilities. Several parallel sessions are also planned throughout the meeting.”

Podcast: China Tianhe-3 Exascale machine is coming

In this podcast, the Radio Free HPC team takes a second look at China’s plans for the Tianhe-3 exascale supercomputer. “According to news articles, Tianhe-3 will be 200 times faster than Tianhe-1, with 100x more storage. What we don’t know is if these comparisons are relative to Tianhe-1 or Tianhe-1A. The latter machine weighs in at 2.256 PFlop/s, which means that Tianhe-3 might be as fast as 450 PFlop/s when complete. We also made a reference to a past episode, which we know you remember vividly, where we discussed China’s three-pronged strategy for exascale.”
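The episode’s back-of-the-envelope estimate is easy to check. A minimal sketch, using only the figures quoted in the summary above (the 2.256 PFlop/s number for Tianhe-1A and the claimed 200x speedup):

```python
# Sanity check of the Tianhe-3 performance estimate, using the
# figures quoted in the episode summary (not independently verified).
TIANHE_1A_PFLOPS = 2.256   # PFlop/s for Tianhe-1A, as cited above
CLAIMED_SPEEDUP = 200      # reported Tianhe-3 speedup over Tianhe-1(A)

tianhe_3_estimate = TIANHE_1A_PFLOPS * CLAIMED_SPEEDUP
print(f"Estimated Tianhe-3 performance: {tianhe_3_estimate:.1f} PFlop/s")
```

With these inputs the estimate comes out just above 450 PFlop/s, matching the figure discussed in the episode; if the 200x comparison is instead relative to the original Tianhe-1, the result would be considerably lower.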

Podcast: Weather Forecasting Goes Crowdsourcing, Q means Quantum

In this episode of Radio Free HPC, Dan, Henry, and Shahin start with a spirited discussion about IBM’s recent announcement of a crowdsourced weather prediction application. Henry was dubious as to whether Big Blue could get access to the data it needs in order to put out a truly valuable product. Dan had questions about the value of the crowdsourced data and how it could be scrubbed in order to be useful. Shahin was pretty favorable towards IBM’s plans and believes the company will solve the problems that Henry and Dan raised.

Podcast: How AI and HPC Are Converging with Support from Intel Technology

In this Intel Chip Chat podcast, Dr. Pradeep Dubey, Intel Fellow and director of its Parallel Computing Lab, explains why it makes sense for HPC and AI to come together and how Intel is supporting this convergence. “AI developers tend to be data scientists, focused on deriving intelligence and insights from massive amounts of digital data, rather than typical HPC programmers with deep system programming skills. Because Intel architecture serves as the foundation for both AI and HPC workloads, Intel is uniquely positioned to drive their convergence. Its technologies and products span processing, memory, and networking at ever-increasing levels of power and scalability.”

Video: Ramping up for Exascale at the National Labs

In this video from the Exascale Computing Project, Dave Montoya from LANL describes the continuous software integration effort at DOE facilities where exascale computers will be located sometime in the next 3-4 years. “A key aspect of the Exascale Computing Project’s continuous integration activities is ensuring that the software in development for exascale can efficiently be deployed at the facilities and that it properly blends with the facilities’ many software components. As is commonly understood in the realm of high-performance computing, integration is very challenging: both the hardware and software are complex, with a huge amount of dependencies, and creating the associated essential healthy software ecosystem requires abundant testing.”

Podcast: Improving Parallel Applications with the TAU tool

In this podcast, Mike Bernhardt from ECP catches up with Sameer Shende to learn how the Performance Research Lab at the University of Oregon is helping to pave the way to exascale. “Developers of parallel computing applications can well appreciate the Tuning and Analysis Utilities (TAU) performance evaluation tool—it helps them optimize their efforts. Sameer has worked with the TAU software for nearly two and a half decades and has released more than 200 versions of it. Whatever your application looks like, there’s a good chance that TAU can support it and help you improve your performance.”

Radio Free HPC Looks at Santa’s Big Data Challenges

In this video podcast, the Radio Free HPC team looks at the monumental IT challenges that Santa faces each holiday season. “With nearly 2 billion children to serve, Santa’s operations are an IT challenge on the grandest scale. If the world’s population keeps growing by 83 million people per year, Santa may need to build a hybrid cloud just to keep up. With billions of simultaneous queries, the Big Data analytics required will certainly call for 8-socket NUMA machines with 4 terabytes of central memory.”