New Paper: Nanophotonic Neural Networks Coming Closer to Reality

Over at the Intel AI blog, Casimir Wierzynski writes that Optical Neural Networks have exciting potential for power-efficiency in AI computation. “At last week’s CLEO conference, we and our collaborators at UC Berkeley presented new findings around ONNs, including a proposal for how that original work could be extended in the face of real-world manufacturing constraints to bring nanophotonic neural network circuits one step closer to a practical reality.”

The Pending Age of Exascale

In this special guest feature from Scientific Computing World, Robert Roe looks at advances in exascale computing and the impact of AI on HPC development. “There is a lot of co-development; AI and HPC are not mutually exclusive. They both need high-speed interconnects and very fast storage. It just so happens that AI functions better on GPUs. HPC has GPUs in abundance, so they mix very well.”

Preliminary Agenda Posted for HP-CAST at ISC 2019

Hewlett Packard Enterprise has posted its preliminary agenda for HP-CAST at ISC 2019. The event takes place June 14-15 in Frankfurt, Germany. “HP-CAST provides guidance to Hewlett Packard Enterprise on the essential development and support issues for HPC systems.”

DUG Opens the Doors for 250 PF Bubba Supercomputer in Houston

Today DownUnder GeoSolutions (DUG) opened its giant new data center at Skybox Houston. The facility, home to one of the most powerful supercomputers on Earth, hosts the company’s geophysical cloud service, DUG McCloud. “DUG is offering a unique cloud product including compute, storage, geophysical software, and services, initially with a massive 250 PF of geophysically-configured compute ready to go,” said DUG’s Managing Director, Dr Matthew Lamont.

Intel Xeon Scalable Processors Set Deep Learning Performance Record on ResNet-50

Today Intel announced a deep learning performance record on image classification workloads. “Today, we have achieved leadership performance of 7878 images per second on ResNet-50 with our latest generation of Intel Xeon Scalable processors, outperforming 7844 images per second on Nvidia Tesla V100, the best GPU performance as published by Nvidia on its website including T4.”

GCS in Germany Appoints Prof. Dr. Dieter Kranzlmüller as Chairman of the Board

Today the Gauss Centre for Supercomputing in Germany announced the appointment of Prof. Dr. Dieter Kranzlmüller as its new Chair of the Board of Directors. “As we advance towards the exascale threshold of computing and an era of unprecedented discovery and insights driven by the integration of modeling and simulation, data analytics and artificial intelligence, GCS stands ready to provide the basis and the catalyst of innovation–the hardware, software ecosystem, experience and expertise–needed to boost scientific and industrial breakthroughs.”

Speed Machine Learning with the Model Zoo for Intel Architecture

Intel has launched a Model Zoo for Intel Architecture, an open source collection of optimized machine learning inference applications that demonstrates how to get the best performance on Intel platforms. The project contains more than 20 pre-trained models, benchmarking scripts, best practice documents, and step-by-step tutorials for running deep learning (DL) models optimized for Intel Xeon Scalable processors.

CoolIT Systems Launches Liquid Cooling Solution for Intel Server System S9200WK

Today CoolIT Systems announced an integrated liquid cooling solution to support the Intel Server System S9200WK. “The Intel Server System S9200WK uses CoolIT’s innovative Rack DLC coldplate solution, featuring patented Split-Flow design. The liquid cooling solution for this 2U, four-node server manages heat from the recently announced dual Intel Xeon Platinum 9200 processors, voltage regulators, and memory.”

Video: Intel HPC Platform and Memory Technologies

Dr. Jean-Laurent Philippe from Intel gave this talk at the Swiss HPC Conference. “Intel continues to deliver performance leadership with the introduction of the 56-core, 12 memory channel Intel Xeon Platinum 9200. This processor is designed to deliver leadership socket-level performance and unprecedented DDR memory bandwidth in a wide variety of HPC workloads, AI applications, and high-density infrastructure.”

Podcast: Accelerating AI Inference with Intel Deep Learning Boost

In this Chip Chat podcast, Jason Kennedy from Intel describes how Intel Deep Learning Boost works as an embedded AI accelerator in the CPU designed to speed deep learning inference workloads. “The key to Intel DL Boost – and its performance kick – is augmentation of the existing Intel Advanced Vector Extensions 512 (Intel AVX-512) instruction set. This innovation significantly accelerates inference performance for deep learning workloads optimized to use vector neural network instructions (VNNI). Image classification, language translation, object detection, and speech recognition are just a few examples of workloads that can benefit.”
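To make the VNNI idea concrete, here is a minimal sketch (not Intel’s implementation; the function name is invented for illustration) of the arithmetic the AVX-512 VNNI instruction VPDPBUSD performs per 32-bit lane: four unsigned 8-bit values are multiplied by four signed 8-bit values, and the widened products are accumulated into a 32-bit result in a single instruction.

```python
def vpdpbusd_lane(acc, activations, weights):
    """Simulate one 32-bit lane of VPDPBUSD.

    acc: int32 accumulator
    activations: four unsigned 8-bit values (e.g. quantized activations)
    weights: four signed 8-bit values (e.g. quantized weights)
    """
    assert len(activations) == len(weights) == 4
    for a, w in zip(activations, weights):
        assert 0 <= a <= 255 and -128 <= w <= 127
        acc += a * w  # products are widened to 32 bits before accumulation
    return acc

# 1*10 + 2*20 + 3*30 + 4*40 = 300
print(vpdpbusd_lane(0, [1, 2, 3, 4], [10, 20, 30, 40]))  # 300
```

Before VNNI, this fused multiply-widen-accumulate took a sequence of separate AVX-512 instructions; collapsing it into one is the source of the inference speedup described above for int8-quantized workloads.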