SC17 Preview: Artificial Intelligence and The Virtuous Cycle of Compute

In this video, Pradeep Dubey from Intel Labs describes his upcoming SC17 Invited Talk on Artificial Intelligence. “Dubey will discuss how the convergence of AI, Big Data, HPC systems, and algorithmic advances is transforming the relationship between computers and humans, disrupting past notions of a partnership where humans made all the “intelligent” decisions.”

Revolutionizing Healthcare With Artificial Intelligence

Artificial intelligence has already had a profound effect on many industries. But for the healthcare sector, this collection of technologies is proving to be nothing short of transformative. Download the new report from HPE that explores how tools like GPUs and deep learning platforms are changing and advancing healthcare.

NVIDIA Expands Deep Learning Institute

Today NVIDIA announced a broad expansion of its Deep Learning Institute (DLI), which is training tens of thousands of students, developers and data scientists in the critical skills needed to apply artificial intelligence. “The world faces an acute shortage of data scientists and developers who are proficient in deep learning, and we’re focused on addressing that need,” said Greg Estes, vice president of Developer Programs at NVIDIA. “As part of the company’s effort to democratize AI, the Deep Learning Institute is enabling more developers, researchers and data scientists to apply this powerful technology to solve difficult problems.”

Designing HPC, Big Data, & Deep Learning Middleware for Exascale

DK Panda from Ohio State University presented this talk at the HPC Advisory Council Spain Conference. “This talk will focus on challenges in designing HPC, Big Data, and Deep Learning middleware for Exascale systems with millions of processors and accelerators. For the HPC domain, we will discuss the challenges in designing runtime environments for MPI+X programming models (with X covering PGAS models such as OpenSHMEM/UPC/CAF/UPC++, as well as OpenMP and CUDA). Features and sample performance numbers from MVAPICH2 libraries will be presented.” A minimal sketch of the MPI+X hybrid model is shown below.
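
As a rough illustration of the MPI+X hybrid model the talk covers (here with X = OpenMP), the sketch below shows each MPI rank spawning a team of OpenMP threads. It is a generic hello-world written for this summary under standard MPI and OpenMP APIs, not code from MVAPICH2 or from the talk itself.

/* Minimal MPI+OpenMP hybrid sketch; compile with, e.g.:
 *   mpicc -fopenmp hybrid_hello.c -o hybrid_hello */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank, nranks;

    /* Request thread support so OpenMP threads can coexist with MPI */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Each MPI rank runs a team of OpenMP threads on its node */
    #pragma omp parallel
    {
        printf("Rank %d of %d, thread %d of %d\n",
               rank, nranks, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}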

The Inflection Point of Wattage in HPC, Deep Learning and AI

Machine learning and AI have magnified the HPC community's concern in 2017 over wattage trends in the CPUs, GPUs and emerging neural chips required to meet accelerating computational demands in HPC clusters. In this sponsored post from Asetek, the company examines how high wattage trends in HPC, deep learning and AI might be reaching an inflection point.

HPE Unveils a Set of Artificial Intelligence Platforms and Services

Today Hewlett Packard Enterprise announced new purpose-built platforms and services capabilities to help companies simplify the adoption of Artificial Intelligence, with an initial focus on a key subset of AI known as deep learning. “HPE’s infrastructure and software solutions are designed for ease-of-use and promise to play an important role in driving AI adoption into enterprises and other organizations in the next few years.”

Accelerating Quantum Chemistry for Drug Discovery

In the pharmaceutical industry, drug discovery is a long and expensive process. This sponsored post from Nvidia explores how the University of Florida and University of North Carolina developed the ANAKIN-ME neural network engine to produce computationally fast quantum mechanical simulations with high accuracy at very low cost, speeding drug discovery and exploration.

A Vision for Exascale: Simulation, Data and Learning

Rick Stevens gave this talk at the recent ATPESC training program. “The ATPESC program provides two intensive weeks of training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future. As a bridge to that future, this two-week program fills the gap that exists in the training computational scientists typically receive through formal education or other shorter courses.”

A Perspective on HPC-enabled AI

Tim Barr from Cray gave this talk at the HPC User Forum in Milwaukee. “Cray’s unique history in supercomputing and analytics has given us front-line experience in pushing the limits of CPU and GPU integration, network scale, tuning for analytics, and optimizing for both model and data parallelization. Particularly important to machine learning is our holistic approach to parallelism and performance, which includes extremely scalable compute, storage and analytics.”

Penguin Computing Launches NVIDIA Tesla V100-based Servers

Today Penguin Computing announced strategic support for the field of artificial intelligence through availability of its servers based on the highly advanced NVIDIA Tesla V100 GPU accelerator, powered by the NVIDIA Volta GPU architecture. “Deep learning, machine learning and artificial intelligence are vital tools for addressing the world’s most complex challenges and improving many aspects of our lives,” said William Wu, Director of Product Management, Penguin Computing. “Our breadth of products covers configurations that accelerate various demanding workloads – maximizing performance, minimizing P2P latency of multiple GPUs and providing minimal power consumption through creative cooling solutions.”