The true cost of AI innovation

“As the world’s attention has shifted to climate change, the field of AI is beginning to take note of its carbon cost. Research done at the Allen Institute for AI by Roy Schwartz et al. raises the question of whether efficiency, alongside accuracy, should become an important factor in AI research, and suggests that AI scientists ought to deliberate if the massive computational power needed for expensive processing of models, colossal amounts of training data, or huge numbers of experiments is justified by the degree of improvement in accuracy.”

AI for Any Environment, All the Time

In this special guest feature, our friends over at Advantech take a look at the shift toward edge computing environments rather than large, secure data centers, a trend in stark contrast to the other end of the spectrum, where large cloud providers and on-premise data centers offer a wide range of computing, networking, and storage options within carefully controlled environments. Ultimately, we need both, each for its respective value. But if we look deeper into what lies between the two extremes, we find the hybrid: rugged systems that bring high-performance data center computing power and functionality to the edge. Advantech is a global leader in the fields of IoT intelligent systems and embedded platforms.

Video: High-Performance Memory For AI And HPC

In this video, Frank Ferro from Rambus examines the current performance bottlenecks in HPC, drilling down into power and performance for different memory options. “HBM2E offers the capability to achieve tremendous memory bandwidth. Four HBM2E stacks connected to a processor will deliver over 1.6 TB/s of bandwidth. And with 3D stacking of memory, high bandwidth and high capacity can be achieved in an exceptionally small footprint. Further, by keeping data rates relatively low, and the memory close to the processor, overall system power is kept low.”
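For a sense of where that 1.6 TB/s figure comes from, the quick sanity check below multiplies an assumed 1024-bit HBM2E interface by an assumed 3.2 Gb/s per-pin data rate. Neither number comes from the video itself, and faster HBM2E speed grades would push the total higher still.

```python
# Back-of-the-envelope check of the HBM2E bandwidth figure quoted above.
# Assumptions (not from the article): a 1024-bit interface per HBM2E stack
# and a 3.2 Gb/s per-pin data rate.

def hbm2e_stack_bandwidth_gb_s(pins: int = 1024, pin_rate_gbit_s: float = 3.2) -> float:
    """Peak bandwidth of one HBM2E stack in GB/s."""
    return pins * pin_rate_gbit_s / 8  # convert bits to bytes

stacks = 4
per_stack = hbm2e_stack_bandwidth_gb_s()
total_tb_s = stacks * per_stack / 1000
print(f"{per_stack:.1f} GB/s per stack, {total_tb_s:.2f} TB/s across {stacks} stacks")
# -> 409.6 GB/s per stack, 1.64 TB/s across 4 stacks, consistent with "over 1.6 TB/s"
```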

HPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD

William Beaudin from DDN gave this talk at GTC Digital. “Enabling high performance computing through the use of GPUs requires an incredible amount of IO to sustain application performance. We’ll cover architectures that enable extremely scalable applications through the use of NVIDIA’s SuperPOD and DDN’s A3I systems. The groundbreaking performance delivered by the DGX SuperPOD enables the rapid training of deep learning models at great scale.”

Interview: Under Secretary Paul Dabbar on the COVID-19 HPC Consortium

The DOE laboratory complex has many core capabilities that can be applied to addressing the threats posed by COVID-19. “This public-private partnership includes the biggest players in advanced computing from government, industry, and academia. At launch, the consortium includes five DOE laboratories, industry leaders like IBM, Microsoft, Google, and Amazon, and preeminent U.S. universities like MIT, RPI, and UC San Diego. And within a week, we’ve already received more than a dozen requests from other organizations to join the consortium.”

Video: NVIDIA to Accelerate the HPC-AI Convergence

Gunter Roeth from NVIDIA gave this talk at ML4HPC 2020. “The growing adoption of NVIDIA Volta GPUs among the TOP500 supercomputers highlights the need for computing acceleration in this HPC and AI convergence. Many projects today demonstrate the benefit of AI for HPC, in terms of accuracy and time to solution, in many domains such as computational mechanics, earth sciences, life sciences, computational chemistry, and computational physics. NVIDIA today, for instance, uses Physics-Informed Neural Networks for heat sink design in our DGX systems.”
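To illustrate what a physics-informed loss looks like, the sketch below trains a tiny network on a toy 1D steady-state heat equation using PyTorch. It is only a minimal sketch of the general PINN idea, not NVIDIA's actual heat sink workflow, and every name and constant in it is assumed for illustration.

```python
# Minimal PINN sketch (illustrative only): solve u''(x) + q = 0 on [0, 1]
# with u(0) = u(1) = 0 by penalizing the PDE residual and boundary values.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
q = 1.0  # assumed constant heat source term

for step in range(2000):
    x = torch.rand(128, 1, requires_grad=True)            # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]    # u'(x)
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]  # u''(x)
    pde_residual = d2u + q                                 # enforce u'' + q = 0
    xb = torch.tensor([[0.0], [1.0]])                      # boundary points
    loss = (pde_residual ** 2).mean() + (net(xb) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key design choice is that the PDE residual and the boundary conditions enter the loss directly, so the network is penalized for violating the physics rather than for deviating from labeled data.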

WekaIO Receives Artificial Intelligence Excellence Award

Today WekaIO announced that the Business Intelligence Group has named Weka a winner in its Artificial Intelligence Excellence Awards program. “The WekaFS file system can deliver 80 GB/sec of bandwidth to a single GPU server, scale to exabytes in a single namespace, and support an entire pipeline for edge-to-core-to-cloud workflows. The system also delivers operational agility with versioning, explainability, and reproducibility, along with governance and compliance with in-line encryption and data protection.”

New AI Solutions from Dell Technologies

In this special guest feature, Dave Frattura from Dell Technologies writes that the company is helping customers simplify and drive data science and AI initiatives that can deliver valuable insights, automation, and intelligence to fuel innovation across their IT landscape, from edge locations to core data centers and public clouds. “Dell has developed new solutions to help data scientists and developers get their AI applications and projects up and running without delay.”

Fast Track your AI Workflows

In this special guest feature, our friends over at Inspur write that accelerators are often required for new workloads that are highly compute-intensive. Accelerators speed up computation and allow AI and ML algorithms to run in real time. Inspur is a leading supplier of solutions for HPC and AI/ML workloads.

Student Teams Encouraged to Join the 3rd APAC HPC-AI Competition

Student teams are encouraged to apply for the 2020 APAC HPC-AI Competition. Building on the success of previous competitions, teams will square off against international rivals to produce solutions and applications in the high-performance computing and artificial intelligence domains. “We hope that the HPC-AI training established among our young aspiring programmers can help us tackle global threats such as COVID-19 and accelerate an improved response to future pandemics.”