Podcast: Intel to Ship Neural Network Processor by End of Year

Intel’s Naveen Rao writes that Intel will soon be shipping the world’s first family of processors designed from the ground up for artificial intelligence. As announced today, the new chip will be the company’s first step toward its goal of achieving 100 times greater AI performance by 2020. “The goal of this new architecture is to provide the needed flexibility to support all deep learning primitives while making core hardware components as efficient as possible.”

Video: The AI Initiative at NIST

Michael Garris from NIST gave this talk at the HPC User Forum. “AI must be developed in a trustworthy manner to ensure reliability and safety. NIST cultivates trust in AI technology by developing and deploying standards, tests and metrics that make technology more secure, usable, interoperable and reliable, and by strengthening measurement science. This work is critically relevant to building the public trust of rapidly evolving AI technologies.”

OSS Showcases New HDCA Platforms with Volta GPUs at GTC Europe

At GTC Europe this week, One Stop Systems (OSS) will exhibit two of the most powerful GPU accelerators for data scientists and deep learning researchers, the CA16010 and SCA8000. “NVIDIA GPU computing is helping researchers and engineers take on some of the world’s hardest challenges,” said Paresh Kharya, group product marketing manager of Accelerated Computing at NVIDIA. “One Stop Systems’ customers can now tap into the power of our Volta architecture to accelerate their deep learning and high performance computing workloads.”

Accelerating Quantum Chemistry for Drug Discovery

In the pharmaceutical industry, drug discovery is a long and expensive process. This sponsored post from NVIDIA explores how the University of Florida and the University of North Carolina developed the ANAKIN-ME neural network engine to produce computationally fast quantum mechanical simulations with high accuracy at very low cost, speeding drug discovery and exploration.
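For readers wondering how a neural network can stand in for quantum mechanical calculations, here is a minimal, purely illustrative Python sketch of the general idea behind neural-network potentials such as ANAKIN-ME: the molecule’s energy is predicted as a sum of per-atom contributions, each computed by a small network from simple features of that atom’s local environment. The feature function, network sizes, and random weights below are hypothetical stand-ins, not the published model.

    # Illustrative neural-network-potential sketch (not the actual ANAKIN-ME code).
    import numpy as np

    rng = np.random.default_rng(0)

    def atomic_env_features(positions, i, cutoff=5.0):
        """Toy radial features for atom i: Gaussian bins of neighbor distances."""
        d = np.linalg.norm(positions - positions[i], axis=1)
        d = d[(d > 0) & (d < cutoff)]
        centers = np.linspace(0.5, cutoff, 8)
        return np.exp(-(d[:, None] - centers[None, :]) ** 2).sum(axis=0)

    # Hypothetical network weights; in practice these are trained against
    # reference quantum-mechanical energies.
    W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
    W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

    def atomic_energy(features):
        h = np.tanh(features @ W1 + b1)
        return (h @ W2 + b2).item()

    def molecular_energy(positions):
        # Total energy is the sum of per-atom contributions.
        return sum(atomic_energy(atomic_env_features(positions, i))
                   for i in range(len(positions)))

    # Example: a toy 3-atom "molecule" with random coordinates.
    positions = rng.normal(scale=1.5, size=(3, 3))
    print("predicted energy (arbitrary units):", molecular_energy(positions))

Because evaluating a small network is orders of magnitude cheaper than solving the underlying quantum mechanics, a trained model of this shape can screen many candidate molecules quickly, which is the speedup the post describes.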

Fujitsu to Build 37 Petaflop AI Supercomputer for AIST in Japan

Nikkei in Japan reports that Fujitsu is building a 37 Petaflop supercomputer for the National Institute of Advanced Industrial Science and Technology (AIST). “Targeted at Deep Learning workloads, the machine will power the AI research center at the University of Tokyo’s Chiba Prefecture campus. The new Fujitsu system will comprise 1,088 servers, 2,176 Intel Xeon processors, and 4,352 NVIDIA GPUs.”

How Can We Bring Apps to Racks?

In this special guest feature, Dr. Rosemary Francis from Ellexus describes why the customized nature of HPC is not a sustainable path forward for the next generation. “The downside is that many of our systems and tools are inaccessible to non-expert users. For example, deep learning is bringing more and more scientists closer towards HPC, but while they bring their knowledge, they also bring their high expectations for what they believe IT can do and not necessarily an understanding of how it works.”

A Perspective on HPC-enabled AI

Tim Barr from Cray gave this talk at the HPC User Forum in Milwaukee. “Cray’s unique history in supercomputing and analytics has given us front-line experience in pushing the limits of CPU and GPU integration, network scale, tuning for analytics, and optimizing for both model and data parallelization. Particularly important to machine learning is our holistic approach to parallelism and performance, which includes extremely scalable compute, storage and analytics.”

Supermicro Steps Up with Optimized Systems for NVIDIA Tesla V100 GPUs

Today Supermicro announced support for NVIDIA Tesla V100 PCI-E and V100 SXM2 GPUs on its industry-leading portfolio of GPU server platforms. “With our latest innovations incorporating the new NVIDIA V100 PCI-E and V100 SXM2 GPUs in performance-optimized 1U and 4U systems with next-generation NVLink, our customers can accelerate their applications and innovations to help solve the world’s most complex and challenging problems.”

Penguin Computing Launches NVIDIA Tesla V100-based Servers

Today Penguin Computing announced strategic support for the field of artificial intelligence through availability of its servers based on the highly-advanced NVIDIA Tesla V100 GPU accelerator, powered by the NVIDIA Volta GPU architecture. “Deep learning, machine learning and artificial intelligence are vital tools for addressing the world’s most complex challenges and improving many aspects of our lives,” said William Wu, Director of Product Management, Penguin Computing. “Our breadth of products covers configurations that accelerate various demanding workloads – maximizing performance, minimizing P2P latency of multiple GPUs and providing minimal power consumption through creative cooling solutions.”

SC17 Session Preview: Dr. Pradeep Dubey on AI & The Virtuous Cycle of Compute

“Deep Learning was recently scaled to obtain 15PF performance on the Cori supercomputer at NERSC. Cori Phase II features over 9,600 KNL processors. It can significantly impact how we do computing and what computing can do for us. In this talk, I will discuss some of the application-level opportunities and system-level challenges that lie at the heart of this intersection of traditional high performance computing with emerging data-intensive computing.”
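As one hedged illustration of what scaling deep learning across thousands of nodes involves, below is a generic synchronous data-parallel sketch in Python using mpi4py (an assumption for illustration, not the actual NERSC/Cori implementation): each rank computes gradients on its own data shard, and the gradients are averaged with an allreduce before every weight update.

    # Generic synchronous data-parallel training sketch using mpi4py.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    rng = np.random.default_rng(rank)        # each rank draws a different data shard
    w = np.zeros(10)                          # model weights, identical on all ranks
    X = rng.normal(size=(256, 10))            # toy local data shard
    y = rng.normal(size=256)

    for step in range(100):
        # Local gradient of a least-squares loss on this rank's shard.
        grad_local = X.T @ (X @ w - y) / len(y)

        # Average gradients across ranks; this allreduce is the communication
        # step whose cost dominates at large node counts.
        grad_global = np.empty_like(grad_local)
        comm.Allreduce(grad_local, grad_global, op=MPI.SUM)
        grad_global /= size

        w -= 0.01 * grad_global               # identical update on every rank

    if rank == 0:
        print("final weight norm:", np.linalg.norm(w))

Launched with, for example, “mpirun -n 4 python sketch.py”, every rank ends each step with the same weights; the cost of that per-step allreduce is why network scale and communication tuning matter so much at the node counts described in the talk.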