Interview: Under Secretary Paul Dabbar on the COVID-19 HPC Consortium

The DOE laboratory complex has many core capabilities that can be applied to addressing the threats posed by COVID-19. “This public-private partnership includes the biggest players in advanced computing from government, industry, and academia. At launch, the consortium includes five DOE laboratories, industry leaders like IBM, Microsoft, Google, and Amazon, and preeminent U.S. universities like MIT, RPI, and UC San Diego. And within a week, we’ve already received more than a dozen requests from other organizations to join the consortium.”

Podcast: A Look inside the El Capitan Supercomputer coming to LLNL

In this podcast, the Radio Free HPC team looks at some of the more interesting configuration aspects of the pending El Capitan exascale supercomputer coming to LLNL in 2023. “Dan talks about the briefing he received on the new Lawrence Livermore El Capitan system to be built by HPE/Cray. This new $600 million system will be fueled by the AMD Genoa processor coupled with AMD’s Instinct GPUs. Performance should come in at TWO 64-bit exaflops peak, which is very, very sporty.”

Video: Quantum Computing and Supercomputing, AI, Blockchain

Shahin Khan from OrionX.net gave this talk at the Washington Quantum Computing Meetup. “A whole new approach to computing (as in, not binary any more), quantum computing is as promising as it is unproven. Quantum computing goes beyond Moore’s law since every quantum bit (qubit) doubles the computational power, similar to the famous wheat and chessboard problem. So the payoff is huge, even though it is, for now, expensive, unproven, and difficult to use. But new players will become more visible, early use cases and gaps will become better defined, new use cases will be identified, and a short stack will emerge to ease programming.”
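
The "doubling" Khan refers to is the exponential growth of the quantum state space: describing n qubits on a classical machine takes 2^n complex amplitudes, the same doubling as the wheat-and-chessboard story. Here is a minimal Python sketch of that arithmetic (our own illustration, not material from the talk; it equates "computational power" with the size of the state space a classical simulator would have to track):

# Illustration of the exponential growth behind the qubit "doubling" claim.
# Assumption (ours, not the speaker's): "computational power" is proxied by the
# number of complex amplitudes a classical simulator must track.

def classical_state_count(n_qubits: int) -> int:
    """Number of complex amplitudes needed to describe n qubits classically."""
    return 2 ** n_qubits

def chessboard_grains(squares: int = 64) -> int:
    """Total grains of wheat when each square doubles the previous one."""
    return 2 ** squares - 1

if __name__ == "__main__":
    for n in (1, 10, 50, 300):
        print(f"{n:>3} qubits -> {classical_state_count(n):.3e} amplitudes")
    print(f"64 chessboard squares -> {chessboard_grains():,} grains of wheat")

Even 50 qubits already correspond to roughly 10^15 amplitudes, which is why exact classical simulation becomes infeasible so quickly.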

Manufacturing Engineers can turn to Cloud HPC for Work from Home

In this special guest feature, Wolfgang Gentzsch from the UberCloud describes how engineers can work from home the same way they do at the office, while maintaining or even increasing their productivity. "In this short post, we look at how product development engineers – e.g. in manufacturing – can perform the same work they are used to doing at the office, maintaining (or even increasing) their productivity while working from home."

New AI Solutions from Dell Technologies

In this special guest feature, Dave Frattura from Dell Technologies writes that the company is helping customers simplify and drive data science and AI initiatives that deliver valuable insights, automation, and intelligence to fuel innovation across their IT landscape, from edge locations to core data centers and public clouds. "Dell has developed new solutions to help data scientists and developers get their AI applications and projects up and running without delay."

Fast Track your AI Workflows

In this special guest feature, our friends over at Inspur write that accelerators are often required for new, highly compute-intensive workloads. Accelerators speed up computation and allow AI and ML algorithms to run in real time. Inspur is a leading supplier of solutions for HPC and AI/ML workloads.

The Role of Middleware in Optimizing Vector Processing

A new whitepaper from NEC X delves into the world of unstructured data and explores how vector processors and their optimization software can help solve the challenges of wrangling the ever-growing volumes of data generated globally. “In short, vector processing with SX-Aurora TSUBASA will play a key role in changing the way big data is handled while stripping away the barriers to achieving even higher performance in the future.”
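
To make the general idea concrete: the gains from vector processing come from applying one operation to whole arrays of data instead of looping element by element. The NumPy sketch below shows the same pattern in miniature on an ordinary CPU; it is our own illustration and does not use any NEC or SX-Aurora TSUBASA software.

# Rough illustration of scalar vs. vectorized processing (not NEC-specific).
import time
import numpy as np

N = 2_000_000
data = np.random.rand(N)

# Scalar-style loop: one element at a time.
t0 = time.perf_counter()
total_loop = 0.0
for x in data:
    total_loop += x * 2.0
t1 = time.perf_counter()

# Vectorized: one operation over the entire array.
t2 = time.perf_counter()
total_vec = (data * 2.0).sum()
t3 = time.perf_counter()

print(f"loop:       {t1 - t0:.3f} s (sum = {total_loop:.1f})")
print(f"vectorized: {t3 - t2:.3f} s (sum = {total_vec:.1f})")

A dedicated vector processor performs this kind of whole-array operation in hardware on very wide vector registers, which is the hardware analogue of the same idea.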

Podcast: Supercomputing the Coronavirus on Frontera

Scientists are preparing a massive computer model of the coronavirus that they expect will give insight into how it infects the body. They have taken the first steps, testing the initial parts of the model and optimizing code on the Frontera supercomputer at the Texas Advanced Computing Center (TACC) at UT Austin. The knowledge gained from the full model can help researchers design new drugs and vaccines to combat the coronavirus.

Is Your Storage Infrastructure Ready for the Coming AI Wave?

In this new whitepaper from our friends over at Panasas, we take a look at whether your storage infrastructure is ready for the demanding requirements of AI workloads. AI promises not only to create entirely new industries but also to fundamentally change the way organizations large and small conduct business. IT planners need to start revising their storage infrastructure now to prepare their organizations for the coming AI wave.

Efficient AI Computing for the Planet

In this keynote talk from the 2020 HiPEAC conference, Alessandro Cremonesi from STMicroelectronics describes how artificial intelligence (AI) is becoming the central nervous system of an increasingly connected world. He sets out both the benefits and potential pitfalls of AI before arguing that AI now has to move beyond raw performance to efficiency in order to be sustainable. "So far, AI development has focused on performance regardless of the computational power needed, in some applications reaching better-than-human performance. Now it is time to focus on efficient computation."