Dell Technologies Introduces New Solutions to Advance HPC and AI Innovation

At SC19 this week, Dell Technologies is introducing several new solutions, reference architectures and portfolio advancements all designed to simplify and accelerate customers’ HPC and AI efforts. “There’s a lot of value in the data that organizations collect, and HPC and AI are helping organizations get the most out of this data,” said Thierry Pellegrino, vice president of HPC at Dell Technologies. “We’re committed to building solutions that simplify the use and deployment of these technologies for organizations of all sizes and at all stages of deployment.”

Intel Unveils New GPU Architecture and oneAPI Software Stack for HPC and AI

Today at SC19, Intel unveiled its new GPU architecture optimized for HPC and AI as well as an ambitious new software initiative called oneAPI that represents a paradigm shift from today’s single-architecture, single-vendor programming models. “HPC and AI workloads demand diverse architectures, ranging from CPUs, general-purpose GPUs and FPGAs, to more specialized deep learning NNPs which Intel demonstrated earlier this month,” said Raja Koduri, senior vice president, chief architect, and general manager of architecture, graphics and software at Intel. “Simplifying our customers’ ability to harness the power of diverse computing environments is paramount, and Intel is committed to taking a software-first approach that delivers unified and scalable abstraction for heterogeneous architectures.”
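
oneAPI's Data Parallel C++ (DPC++) is built on C++ and the Khronos SYCL standard, so the flavor of its single-source model can be conveyed with a minimal SYCL-style vector add. This is our own hedged sketch, not Intel sample code; it assumes a SYCL 1.2.1 compiler, and the runtime's default selector picks whichever device (CPU, GPU, or other accelerator) is available:

    // Minimal sketch of the single-source heterogeneous model (SYCL 1.2.1 style).
    // Illustrative only; not taken from Intel's oneAPI samples.
    #include <CL/sycl.hpp>
    #include <iostream>
    #include <vector>

    int main() {
      namespace sycl = cl::sycl;
      const size_t n = 1024;
      std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

      // The same kernel source runs on whatever device the selector finds.
      sycl::queue q{sycl::default_selector{}};
      std::cout << "Running on: "
                << q.get_device().get_info<sycl::info::device::name>() << "\n";

      {
        sycl::buffer<float> A(a.data(), sycl::range<1>(n));
        sycl::buffer<float> B(b.data(), sycl::range<1>(n));
        sycl::buffer<float> C(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
          auto ra = A.get_access<sycl::access::mode::read>(h);
          auto rb = B.get_access<sycl::access::mode::read>(h);
          auto wc = C.get_access<sycl::access::mode::write>(h);
          h.parallel_for<class vadd>(sycl::range<1>(n), [=](sycl::id<1> i) {
            wc[i] = ra[i] + rb[i];
          });
        });
      }  // buffers copy results back to the host vectors on destruction

      std::cout << "c[0] = " << c[0] << "\n";  // expect 3
      return 0;
    }

The point of the model is that the kernel body never changes; only the device selection does, which is the "unified and scalable abstraction" Koduri describes.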

Slidecast: Dell EMC Using Neural Networks to “Read Minds”

In this slidecast, Luke Wilson from Dell EMC describes a case study with McGill University using neural networks to read minds. “If you want to build a better neural network, there is no better model than the human brain. In this project, McGill University was running into bottlenecks using neural networks to reverse-map fMRI images. The team from the Dell EMC HPC & AI Innovation Lab was able to tune the code to run solely on Intel Xeon Scalable processors, rather than porting it to the university’s scarce GPU accelerators.”

Podcast: SC19 Student Cluster Competition Preview

In this podcast, the Radio Free HPC team catches up with Jessi Lanum, a veteran of the SC19 Student Cluster Competition, for an insider’s peek at what it’s like to compete for cluster competition glory. “For the few of you who are not already fans of these events, here’s the lowdown: 16 student teams representing universities from around the world have been working their brains out designing, building, and tuning clusters provided by their sponsors. They can use as much hardware as they want; the only limitation is that their systems can’t use more than 3,000 watts during the competition.”

Tackling Turbulence on the Summit Supercomputer

Researchers at the Georgia Institute of Technology have achieved world record performance on the Summit supercomputer using a new algorithm for turbulence simulation. “The team identified the most time-intensive parts of a base CPU code and set out to design a new algorithm that would reduce the cost of these operations, push the limits of the largest problem size possible, and take advantage of the unique data-centric characteristics of Summit, the world’s most powerful and smartest supercomputer for open science.”

GPU-Powered Turbocharger Coming to JUWELS Supercomputer at Jülich

The Jülich Supercomputing Centre is adding a high-powered booster module to its JUWELS supercomputer. Designed in cooperation with Atos, ParTec, Mellanox, and NVIDIA, the booster module is equipped with several thousand GPUs designed for extreme computing power and artificial intelligence tasks. “With the launch of the booster in 2020, the computing power of JUWELS will be increased from the current 12 petaflops to over 70 petaflops.”

Keys to Success for AI in Modeling and Simulation

In this special guest feature from Scientific Computing World, Robert Roe interviews Loren Dean from MathWorks on the use of AI in modeling and simulation. “If you just focus on AI algorithms, you generally don’t succeed. It is more than just developing your intelligent algorithms, and it’s more than just adding AI – you really need to look at it in the context of the broader system being built and how to intelligently improve it.”

Optimizing in a Heterogeneous World Is (Algorithms x Devices)

In this guest article, our friends at Intel discuss why CPUs prove better for some important deep learning workloads, and why you should keep your GPUs handy all the same. Heterogeneous computing ushers in a world where we must consider permutations of algorithms and devices to find the best platform solution. No single device will win all the time, so we need to constantly reassess our choices and assumptions.
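
As a toy illustration of that assessment loop, here is a hedged C++ sketch that times every (algorithm, device) pairing and keeps the fastest; the algorithm and device names are hypothetical placeholders, not anything from the article:

    // Hedged sketch: benchmark each (algorithm, device) permutation, pick the winner.
    // The candidate kernels here are empty stubs standing in for real dispatches.
    #include <chrono>
    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    struct Candidate {
      std::string algorithm;
      std::string device;
      std::function<void()> run;  // the kernel to time
    };

    int main() {
      // In practice each lambda would launch a real kernel on a real device.
      std::vector<Candidate> candidates = {
        {"direct-conv",   "cpu", [] { /* ... */ }},
        {"winograd-conv", "cpu", [] { /* ... */ }},
        {"direct-conv",   "gpu", [] { /* ... */ }},
        {"winograd-conv", "gpu", [] { /* ... */ }},
      };

      const Candidate* best = nullptr;
      double best_ms = 1e300;
      for (const auto& c : candidates) {
        auto t0 = std::chrono::steady_clock::now();
        c.run();
        auto t1 = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
        std::cout << c.algorithm << " on " << c.device << ": " << ms << " ms\n";
        if (ms < best_ms) { best_ms = ms; best = &c; }
      }
      if (best)
        std::cout << "Best: " << best->algorithm << " on " << best->device << "\n";
      return 0;
    }

The design point is simply that the winner is an empirical property of the (algorithm, device) pair, which is why no single device wins all the time.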

Deep Learning on Summit Supercomputer Powers Insights for Nuclear Waste Remediation

A research collaboration between LBNL, PNNL, Brown University, and NVIDIA has achieved exaflop (half-precision) performance on the Summit supercomputer with a deep learning application used to model subsurface flow in the study of nuclear waste remediation. Their achievement, which will be presented during the “Deep Learning on Supercomputers” workshop at SC19, demonstrates the promise of physics-informed generative adversarial networks (GANs) for analyzing complex, large-scale science problems.

Intersect360 Research Examines Spending Trends in Machine Learning Market

Intersect360 Research has released a pair of new reports examining major technology trends in AI and machine learning, including the worldwide market for dedicated machine learning systems, which the firm sizes at $10 billion, along with spending trends and the impact on HPC. “Machine learning has been in a very high growth stage,” says Intersect360 Research CEO Addison Snell. “In addition to that $10 billion, many systems not one hundred percent dedicated to machine learning are serving training needs as part of their total workloads, increasing the influence that machine learning has on spending and configuration.”