Accelerating Machine Learning on VMware vSphere with NVIDIA GPUs

Mohan Potheri from VMware gave this talk at the Stanford HPC Conference. “This session introduces machine learning on vSphere to the attendee and explains when and why GPUs are important for them. Basic machine learning with Apache Spark is demonstrated. GPUs can be effectively shared in vSphere environments and the various methods of sharing are addressed here.”
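The Spark portion of the session is easy to reproduce on a laptop before moving it onto vSphere. Below is a minimal sketch of a basic Spark ML workflow of the kind the talk demonstrates; the dataset, column names, and model choice are hypothetical and not taken from the session itself.

```python
# A minimal Spark ML sketch; data and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("vsphere-ml-demo").getOrCreate()

# Toy training data: two feature columns and a label.
df = spark.createDataFrame(
    [(1.0, 2.0, 5.0), (2.0, 1.0, 4.0), (3.0, 4.0, 11.0), (4.0, 3.0, 10.0)],
    ["x1", "x2", "label"],
)

# Spark ML expects features packed into a single vector column.
assembler = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")
train = assembler.transform(df)

# Fit a simple linear regression model and inspect what it learned.
model = LinearRegression(featuresCol="features", labelCol="label").fit(train)
print(model.coefficients, model.intercept)

spark.stop()
```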

Agenda Posted for Dell EMC Community Event in Austin

The Dell EMC Community Meeting has published its preliminary speaker agenda. The event takes place March 25-27 in Austin, Texas. The Dell HPC Community is a worldwide technical forum that facilitates the exchange of ideas among researchers, computer scientists, executives, developers, and engineers and promotes the advancement of innovative, powerful HPC solutions. The vision of the […]

Pioneering and Democratizing Scalable HPC+AI at the Pittsburgh Supercomputing Center

Nick Nystrom from the Pittsburgh Supercomputing Center gave this talk at the Stanford HPC Conference. “To address the demand for scalable AI, PSC recently introduced Bridges-AI, which adds transformative new AI capability. In this presentation, we share our vision in designing HPC+AI systems at PSC and highlight some of the exciting research breakthroughs they are enabling.”

Video: Sierra – Science Unleashed

Rob Neely from LLNL gave this talk at the Stanford HPC Conference. “This talk will give an overview of the Sierra supercomputer and some of the early science results it has enabled. Sierra is an IBM system harnessing the power of over 17,000 NVIDIA Volta GPUs recently deployed at Lawrence Livermore National Laboratory and is currently ranked as the #2 system on the Top500. Before being turned over for use in the classified mission, Sierra spent months in an ‘open science campaign’ where we got an early glimpse at some of the truly game-changing science this system will unleash – selected results of which will be presented.”

Big Compute Podcast Looks at New Architectures for HPC

In this Big Compute podcast, host Gabriel Broner from Rescale interviews HPE Fellow Mike Woodacre to discuss the shift from CPUs to an emerging diversity of architectures. They discuss the evolution of CPUs, the advent of GPUs with increasing data parallelism, memory-driven computing, and the potential benefits of a cloud environment with access to multiple architectures.

NERSC Hosts GPU Hackathon in Preparation for Perlmutter Supercomputer

NERSC recently hosted a successful GPU Hackathon in preparation for its next-generation Perlmutter supercomputer. Perlmutter, a pre-exascale Cray Shasta system slated for delivery in 2020, will feature a number of new hardware and software innovations and is the first supercomputing system designed with both data analysis and simulation in mind. Unlike previous NERSC systems, Perlmutter will combine CPU-only nodes with nodes featuring both CPUs and GPUs.
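Events like this typically center on porting CPU array code to run on GPUs. The sketch below shows the flavor of that exercise using NumPy with a GPU path via CuPy; CuPy is an assumption chosen for illustration, not a tool the article names.

```python
# A hackathon-style porting sketch; CuPy is an illustrative assumption.
import numpy as np

try:
    import cupy as xp   # GPU path (CUDA) when a GPU and CuPy are available
    on_gpu = True
except ImportError:
    xp = np             # CPU fallback keeps the sketch runnable anywhere
    on_gpu = False

# A toy 1D Jacobi-style smoothing sweep, the kind of kernel teams port.
n = 1_000_000
u = xp.linspace(0.0, 1.0, n)

# Written as whole-array operations, the same line runs unchanged on
# NumPy (CPU) or CuPy (GPU).
u[1:-1] = 0.5 * (u[:-2] + u[2:])

print("ran on GPU" if on_gpu else "ran on CPU", float(u.sum()))
```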

Podcast: How Intel Powers Dell EMC Ready Solutions for AI

In this Intel podcast, Adnan Khaleel describes how Dell EMC is making enterprise AI implementation fast and easy with its Dell EMC Ready Solutions for AI. Adnan also illustrates how Dell EMC Ready Solutions are well suited for many different industries including financial services, insurance, pharmaceuticals and many more.

Architecting the Right System for Your AI Application—without the Vendor Fluff

Brett Newman from Microway gave this talk at the Stanford HPC Conference. “Figuring out how to map your dataset or algorithm to the optimal hardware design is one of the hardest tasks in HPC. We’ll review what helps steer the selection of one system architecture from another for AI applications. Plus the right questions to ask of your collaborators—and a hardware vendor. Honest technical advice, no fluff.”

The New HPC

Addison Snell gave this talk at the Stanford HPC Conference. “Intersect360 Research returns with an annual deep dive into the trends, technologies and usage models that will be propelling the HPC community through 2019 and beyond. Emerging areas of focus and opportunities to expand will be explored along with insightful observations needed to support measurably positive decision making within your operations.”

HPC + AI: Machine Learning Models in Scientific Computing

Steve Oberlin from NVIDIA gave this talk at the Stanford HPC Conference. “Clearly, AI has benefited greatly from HPC. Now, AI methods and tools are starting to be applied to HPC applications to great effect. This talk will describe an emerging workflow that uses traditional numeric simulation codes to generate synthetic data sets to train machine learning algorithms, then employs the resulting AI models to predict the computed results, often with dramatic gains in efficiency, performance, and even accuracy.”
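The workflow Oberlin describes is easy to sketch end to end: run a simulator to generate training data, fit a model to it, then use the model as a fast surrogate for the simulator. In the minimal example below the "simulation" is a cheap analytic stand-in and every name is hypothetical; it illustrates the shape of the pipeline, not any specific NVIDIA tooling.

```python
# A surrogate-model sketch: simulate -> train -> predict.
# The toy "simulation" and all names here are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def expensive_simulation(x):
    # Stand-in for a traditional numeric simulation code.
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1] ** 2)

# Step 1: run the simulator to build a synthetic training set.
X_train = rng.uniform(-1.0, 1.0, size=(2000, 2))
y_train = expensive_simulation(X_train)

# Step 2: train a small neural network on the simulated data.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X_train, y_train)

# Step 3: use the trained model to predict results far faster than
# re-running the simulation for each new input.
X_new = rng.uniform(-1.0, 1.0, size=(5, 2))
print("surrogate :", surrogate.predict(X_new))
print("simulation:", expensive_simulation(X_new))
```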