HPE Tackles AI Ops R&D for Energy Efficiency, Sustainability and Resiliency in Data Centers

Today HPE announced an AI Ops R&D collaboration with NREL to develop AI and Machine Learning technologies to automate and improve operational efficiency, including resiliency and energy usage, in data centers for the exascale era. The effort is part of NREL’s ongoing mission as a world leader in advancing energy efficiency and renewable energy technologies to create and implement new approaches that reduce energy consumption and lower operating costs.

Slidecast: Dell EMC Using Neural Networks to “Read Minds”

In this slidecast, Luke Wilson from Dell EMC describes a case study with McGill University using neural networks to read minds. “If you want to build a better neural network, there is no better model than the human brain. In this project, McGill University was running into bottlenecks using neural networks to reverse-map fMRI images. The team from the Dell EMC HPC & AI Innovation Lab was able to tune the code to run solely on Intel Xeon Scalable processors, rather than porting to the university’s scarce GPU accelerators.”

Intel Showcases New Class of AI Hardware from Cloud to Edge

Today Intel unveiled new products designed to accelerate AI system development and deployment from cloud to edge. “In its key announcement, Intel demonstrated its Intel Nervana Neural Network Processors (NNP) for training (NNP-T1000) and inference (NNP-I1000) — Intel’s first purpose-built ASICs for complex deep learning with incredible scale and efficiency for cloud and data center customers. Intel also revealed its next-generation Intel Movidius Myriad Vision Processing Unit (VPU) for edge media, computer vision and inference applications.”

Department of Energy to Showcase World-Leading Science at SC19

The DOE’s national laboratories will be showcased at SC19 next week in Denver, CO. “Computational scientists from DOE laboratories have been involved in the conference since it began in 1988 and this year’s event is no different. Experts from the 17 national laboratories will be sharing a booth featuring speakers, presentations, demonstrations, discussions, and simulations. DOE booth #925 will also feature a display of high performance computing artifacts from past, present and future systems. Lab experts will also contribute to the SC19 conference program by leading tutorials, presenting technical papers, speaking at workshops, leading birds-of-a-feather discussions, and sharing ideas in panel discussions.”

Keys to Success for AI in Modeling and Simulation

In this special guest feature from Scientific Computing World, Robert Roe interviews Loren Dean from MathWorks on the use of AI in modeling and simulation. “If you just focus on AI algorithms, you generally don’t succeed. It is more than just developing your intelligent algorithms, and it’s more than just adding AI – you really need to look at it in the context of the broader system being built and how to intelligently improve it.”

Optimizing in a Heterogeneous World is (Algorithms x Devices)

In this guest article, our friends at Intel discuss why CPUs prove a better fit for some important deep learning workloads (and why you should still keep your GPUs handy). Heterogeneous computing ushers in a world where we must consider permutations of algorithms and devices to find the best platform solution. No single device wins all the time, so we need to constantly reassess our choices and assumptions.
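The "algorithms × devices" idea above can be sketched as a simple exhaustive search: time every pairing of candidate algorithm and candidate device, then pick the fastest. This is a minimal illustration, not code from the article; the workload and device names are hypothetical stand-ins.

```python
import time
from itertools import product

def run_trial(algorithm, device, reps=3):
    """Time one (algorithm, device) pairing; return the best wall-clock run."""
    best = float("inf")
    for _ in range(reps):
        start = time.perf_counter()
        algorithm(device)
        best = min(best, time.perf_counter() - start)
    return best

def pick_best(algorithms, devices):
    """Exhaustively score every algorithm x device permutation."""
    results = {
        (name, dev): run_trial(fn, dev)
        for (name, fn), dev in product(algorithms.items(), devices)
    }
    return min(results, key=results.get), results

# Hypothetical stand-in workloads: each "algorithm" is a callable whose cost
# varies with the (simulated) device it runs on.
def dense(dev):
    sum(i * i for i in range(20_000 if dev == "cpu" else 40_000))

def sparse(dev):
    sum(i for i in range(40_000 if dev == "cpu" else 20_000))

best, table = pick_best({"dense": dense, "sparse": sparse}, ["cpu", "gpu"])
```

In practice the "devices" would be real accelerators reached through their own runtimes, and the scoring metric might weigh throughput against energy or cost rather than raw wall-clock time; the point is simply that the winner depends on the pairing, not the device alone.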

Video: Deep Learning for Resource-Constrained Systems

Amos Storkey from the University of Edinburgh gave this talk at HiPEAC CSW Edinburgh. “Storkey explores the demands of getting deep learning software to work on embedded devices, with challenges including real-time requirements, memory availability and the energy budget. He discusses work undertaken within the context of the European Union-funded Bonseyes project.”

NVIDIA Tops MLPerf AI Inference Benchmarks

Today NVIDIA posted the fastest results on new benchmarks measuring the performance of AI inference workloads in data centers and at the edge — building on the company’s equally strong position in recent benchmarks measuring AI training. “NVIDIA topped all five benchmarks for both data center-focused scenarios (server and offline), with Turing GPUs providing the highest performance per processor among commercially available entries.”

NVIDIA Launches $399 Jetson Xavier NX for AI at the Edge

Today NVIDIA introduced Jetson Xavier NX, “the world’s smallest, most powerful AI supercomputer for robotic and embedded computing devices at the edge.” “With a compact form factor smaller than the size of a credit card, the energy-efficient Jetson Xavier NX module delivers server-class performance up to 21 TOPS for running modern AI workloads, and consumes as little as 10 watts of power.”

MLPerf Releases Over 500 Inference Benchmarks

Today the MLPerf consortium released over 500 inference benchmark results from 14 organizations. “Having independent benchmarks helps customers understand and evaluate hardware products in a comparable light. MLPerf is helping drive transparency and oversight into machine learning performance that will enable vendors to mature and build out the AI ecosystem. Intel is excited to be part of the MLPerf effort to realize the vision of AI Everywhere,” said Dr. Naveen Rao, Intel corporate vice president and general manager of AI Products.