The Simulation of the Behavior of the Human Brain using CUDA

Pedro Valero-Lara from BSC gave this talk at the GPU Technology Conference. “Attendees can learn how the behavior of the human brain is simulated using current computers, and about the different challenges the implementation has to deal with. We cover the main steps of the simulation and the methodologies behind it. In particular, we highlight and focus on the transformations and optimizations carried out to achieve good performance on NVIDIA GPUs.”
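
For readers curious about the kind of GPU mapping such talks describe, here is a minimal sketch, not the BSC simulator itself: a large population of neurons is advanced in parallel, one thread per neuron, with a structure-of-arrays layout so memory accesses coalesce. The kernel name, the simple leaky-integrate update, and all parameter values below are hypothetical.

    #include <cuda_runtime.h>
    #include <cstdio>

    // Illustrative only: one thread advances one neuron's membrane voltage by a
    // single explicit-Euler step. A structure-of-arrays layout keeps loads and
    // stores coalesced, one of the typical GPU optimizations such talks discuss.
    __global__ void update_neurons(float* v, const float* i_in, int n,
                                   float dt, float tau, float v_rest)
    {
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        if (idx < n) {
            // dv/dt = (v_rest - v + I) / tau  (input current pre-scaled)
            v[idx] += dt * ((v_rest - v[idx]) + i_in[idx]) / tau;
        }
    }

    int main()
    {
        const int n = 1 << 20;               // one million model neurons
        float *v, *i_in;
        cudaMallocManaged(&v, n * sizeof(float));
        cudaMallocManaged(&i_in, n * sizeof(float));
        for (int i = 0; i < n; ++i) { v[i] = -65.0f; i_in[i] = 0.5f; }

        int block = 256;
        int grid  = (n + block - 1) / block;
        for (int step = 0; step < 1000; ++step)
            update_neurons<<<grid, block>>>(v, i_in, n, 0.025f, 10.0f, -65.0f);
        cudaDeviceSynchronize();

        printf("v[0] after 1000 steps: %f\n", v[0]);
        cudaFree(v);
        cudaFree(i_in);
        return 0;
    }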

Brain Research: A Pathfinder for Future HPC

Dirk Pleiter from the Jülich Supercomputing Centre gave this talk at the NVIDIA GPU Technology Conference. “One of the biggest and most exciting scientific challenges requiring HPC is to decode the human brain. Many of the research topics in this field require scalable compute resources or the use of advanced data analytics methods (including deep learning) for processing extreme-scale data volumes. GPUs are a key enabling technology, and we will thus focus on the opportunities for using them for computing, data analytics and visualization. GPU-accelerated servers based on POWER processors are of particular interest here due to the tight integration of CPU and GPU using NVLink and the enhanced data transport capabilities.”

Pre-exascale Architectures: OpenPOWER Performance and Usability Assessment for French Scientific Community

Gabriel Hautreux from GENCI gave this talk at the NVIDIA GPU Technology Conference. “The talk will present the OpenPOWER platform acquired by GENCI and provided to the scientific community. It will then present the first results obtained on the platform for a set of about 15 applications, using all the solutions provided to the users (CUDA, OpenACC, OpenMP, …). Finally, one specific application will be presented in detail, covering its porting effort and the techniques used for GPUs with both OpenACC and OpenMP.”
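
To give a flavor of the three programming models the abstract lists, the sketch below shows a simple SAXPY loop written as an explicit CUDA kernel, with the equivalent OpenACC and OpenMP offload directives shown in comments for comparison. It is not taken from the GENCI benchmark set; all names and sizes are illustrative.

    #include <cuda_runtime.h>
    #include <cstdio>

    // Explicit CUDA version: the programmer writes the kernel and chooses the
    // launch configuration by hand.
    __global__ void saxpy(int n, float a, const float* x, float* y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main()
    {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();

        // Directive-based ports keep the original loop and annotate it instead,
        // e.g. with a compiler supporting OpenACC or OpenMP target offload:
        //   #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        //   #pragma omp target teams distribute parallel for map(to:x[0:n]) map(tofrom:y[0:n])
        //   for (int i = 0; i < n; ++i) y[i] = a * x[i] + y[i];

        printf("y[0] = %f\n", y[0]);
        cudaFree(x);
        cudaFree(y);
        return 0;
    }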

Video: Liqid Teams with Inspur at GTC for Composable Infrastructure

In this video from GTC 2018, Dolly Wu from Inspur and Marius Tudor from Liqid describe how the two companies are collaborating on Composable Infrastructure for AI and Deep Learning workloads. “AI and deep learning applications will determine the direction of next-generation infrastructure design, and we believe dynamically composing GPUs will be central to these emerging platforms,” said Dolly Wu, GM and VP of Inspur Systems.

Why the World’s Largest Telescope Relies on GPUs

Over at the NVIDIA blog, Jamie Beckett writes that the new European Extremely Large Telescope, or E-ELT, will capture images 15 times sharper than the dazzling shots the Hubble telescope has beamed to Earth for the past three decades. Researchers “are running GPU-powered simulations to predict how different configurations of E-ELT will affect image quality. Changes to the angle of the telescope’s mirrors, different numbers of cameras and other factors could improve image quality.”

DDN feeds NVIDIA DGX Servers 33 GB/s for Machine Learning

Today DDN announced that the accelerated client in its EXAScaler DGX solution has been fully integrated with the NVIDIA DGX Architecture. “By supplying this groundbreaking level of performance, DDN enables customers to greatly accelerate their Machine Learning initiatives, reducing load wait times of large datasets to mere seconds for faster training turnaround.”

Video: VMware powers HPC Virtualization at NVIDIA GPU Technology Conference

In this video from the 2018 GPU Technology Conference, Ziv Kalmanovich from VMware and Fred Devoir from NVIDIA describe how they are working together to bring the benefits of virtualization to GPU workloads. “For cloud environments based on vSphere, you can deploy a machine learning workload yourself using GPUs via the VMware DirectPath I/O or vGPU technology.”

Liqid and Inspur team up for Composable GPU-Centric Rack-Scale Solutions

Today Liqid and Inspur announced that the two companies will offer a joint solution designed specifically for advanced, GPU-intensive applications and workflows. “Our goal is to work with the industry’s most innovative companies to build an adaptive data center infrastructure for the advancement of AI, scientific discovery, and next-generation GPU-centric workloads,” said Sumit Puri, CEO of Liqid. “Liqid is honored to be partnering with data center leaders Inspur Systems and NVIDIA to deliver the most advanced composable GPU platform on the market with Liqid’s fabric technology.”

Liqid Showcases Composable Infrastructure for GPUs at GTC 2017

“The Liqid Composable Infrastructure (CI) Platform is the first solution to support GPUs as a dynamic, assignable, bare-metal resource. With the addition of graphics processing, the Liqid CI Platform delivers the industry’s most fully realized approach to composable infrastructure architecture. With this technology, disaggregated pools of compute, networking, data storage and graphics processing elements can be deployed on demand as bare-metal resources and instantly repurposed when infrastructure needs change.”

Podcast: Marc Hamilton on how Volta GPUs will Power Next-Generation HPC and AI

In this podcast, Marc Hamilton from NVIDIA describes how the new Volta GPUs will power the next generation of systems for HPC and AI. According to NVIDIA, the Tesla V100 accelerator is the world’s highest-performing parallel processor, designed to power the most computationally intensive HPC, AI, and graphics workloads.