Special Report: Modernizing and Future-Proofing Your Storage Infrastructure – Part 2

Data—the gold that today’s organizations spend significant resources to acquire—is ever-growing and underpins significant innovation in technologies for storing and accessing it. In this technology guide, the insideHPC Special Research Report: Modernizing and Future-Proofing Your Storage Infrastructure, we’ll see how different applications and workflows will always have different data storage and access requirements, making it critical for planners to understand that a heterogeneous storage infrastructure is needed for a fully functioning organization.

Podcast: WarpX exascale application to accelerate plasma accelerator research

In this Let’s Talk Exascale podcast, researchers from LBNL discuss how the WarpX team is developing an exascale application for plasma accelerator research. “The new breeds of virtual experiments that the WarpX team is developing are not possible with current technologies and will bring huge savings in research costs, according to the project’s summary information available on ECP’s website. The summary also states that more affordable research will lead to the design of a plasma-based collider, and even bigger savings by enabling the characterization of the accelerator before it is built.”

Intelligent Video Analytics Pushes Demand for High Performance Computing at the Edge

In this special guest feature, Tim Miller, VP of Product Marketing at One Stop Systems (OSS), writes that his company is addressing the common requirements of video analytics applications with its AI on the Fly® building blocks. AI on the Fly is defined as moving datacenter levels of HPC and AI compute capability to the edge.

Podcast: ZFP Project Looks to Reduce Memory Footprint and Data Movement on Exascale Systems

In this Let’s Talk Exascale podcast, Peter Lindstrom from Lawrence Livermore National Laboratory describes how the ZFP project will help reduce the memory footprint and data movement on exascale systems. “To perform those computations, we oftentimes need random access to individual array elements,” Lindstrom said. “Doing that, coupled with data compression, is extremely challenging.”
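
That element-level random access comes from ZFP’s compressed-array classes (a C++ API); purely as an illustration of the underlying idea, here is a minimal sketch using zfpy, the Python bindings to LLNL’s ZFP library, to compress a whole array under a fixed error tolerance. The array shape, tolerance, and printed metrics are illustrative choices, not details from the podcast.

```python
import numpy as np
import zfpy  # Python bindings for LLNL's ZFP compressor

# A 3D field of the kind a simulation might keep resident in memory.
field = np.random.rand(64, 64, 64)

# Lossy compression under a fixed absolute error tolerance; a tighter
# tolerance trades compression ratio for accuracy.
compressed = zfpy.compress_numpy(field, tolerance=1e-4)
restored = zfpy.decompress_numpy(compressed)

print(f"compression ratio: {field.nbytes / len(compressed):.1f}x")
print(f"max absolute error: {np.max(np.abs(field - restored)):.2e}")
```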

insideHPC Special Research Report: Modernizing and Future-Proofing Your Storage Infrastructure

Data—the gold that today’s organizations spend significant resources to acquire—is ever-growing and underpins significant innovation in technologies for storing and accessing it. In this technology guide, the insideHPC Special Research Report: Modernizing and Future-Proofing Your Storage Infrastructure, we’ll see how different applications and workflows will always have different data storage and access requirements, making it critical for planners to understand that a heterogeneous storage infrastructure is needed for a fully functioning organization.

Video: Fighting Wildfires with AI and IBM Systems

In this video, Michela Taufer from the University of Tennessee describes how AI enabled by HPC allows researchers to study wildfire propagation, enhancing predictions and mitigation. One of the projects her team is working on looks at how to integrate aspects of soil moisture with wildfire simulations. “HPC today and machine learning/AI enable us to identify those patterns and extract the knowledge. The data are generated and analyzed at the same time, and the knowledge extracted from the data is re-injected into the simulation. Our POWER9 system allows us to do exactly that.”
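
Purely as an illustration of that simulate-analyze-feedback pattern (not the team’s actual code; the step function, analysis, and feedback rule below are invented stand-ins), an in-situ loop of this kind can be sketched as:

```python
import numpy as np

def simulate_step(state, spread_rate):
    # Stand-in for one step of a wildfire-propagation simulation.
    return state + spread_rate * np.random.rand(*state.shape)

def extract_knowledge(state):
    # Stand-in for the ML/AI analysis that finds patterns in the output.
    return float(state.mean())

state = np.zeros((32, 32))
spread_rate = 0.1
for step in range(10):
    state = simulate_step(state, spread_rate)
    pattern = extract_knowledge(state)     # analysis runs alongside generation
    spread_rate = 0.1 / (1.0 + pattern)    # re-inject the extracted knowledge
```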

EPEEC Project Fosters Heterogeneous HPC Programming in Europe

EPEEC, the European joint Effort toward a Highly Productive Programming Environment for Heterogeneous Exascale Computing, is a project that aims to combine European-made programming-model and performance tools to relieve the burden of targeting highly heterogeneous supercomputers. The hope is that the project will make researchers’ jobs easier by letting them use large-scale HPC systems more effectively.

Podcast: Supercomputers Battle Coronavirus

In this podcast, the Radio Free HPC team looks at how supercomputers are being used to battle the coronavirus. “We discuss how the supercomputing community has joined the fight and the impact on the battle against the virus. We do our best to keep the conversation light, knowing that everyone out there is suffering from the virus – it’s the one thing we all have in common these days.”

AI for Any Environment, All the Time

In this special guest feature, our friends over at Advantech take a look at the shift to edge computing environments, a trend in stark contrast to the other end of the spectrum, where large cloud providers and on-premise data centers offer a wide range of computing, networking, and storage options within carefully controlled environments. Ultimately, we need both for their respective value. But if we look deeper into what lies between the two extremes, we find the hybrid: rugged systems that bring high-performance, data center-class computing power and functionality to the edge. Advantech is a global leader in the fields of IoT intelligent systems and embedded platforms.

The true cost of AI innovation

“As the world’s attention has shifted to climate change, the field of AI is beginning to take note of its carbon cost. Research done at the Allen Institute for AI by Roy Schwartz et al. raises the question of whether efficiency, alongside accuracy, should become an important factor in AI research, and suggests that AI scientists ought to consider whether the massive computational power needed for expensive processing of models, colossal amounts of training data, or huge numbers of experiments is justified by the degree of improvement in accuracy.”
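
As a back-of-the-envelope illustration of the accuracy-versus-compute tradeoff the quote describes (the model names, accuracies, and compute costs below are entirely made up, not figures from the Schwartz et al. paper), one might weigh candidate models like this:

```python
# Hypothetical comparison of accuracy gained per unit of extra compute.
# All numbers are invented for illustration.
results = [
    # (model, test accuracy, training cost in petaflop/s-days)
    ("baseline", 0.900, 1.0),
    ("bigger",   0.920, 10.0),
    ("biggest",  0.925, 100.0),
]

base_acc, base_cost = results[0][1], results[0][2]
for name, acc, cost in results[1:]:
    gain = acc - base_acc          # accuracy improvement over the baseline
    extra = cost - base_cost       # additional compute spent to get it
    print(f"{name}: +{gain:.3f} accuracy for {extra:.0f} extra pflop/s-days "
          f"({gain / extra:.5f} accuracy per pflop/s-day)")
```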