PBS Works will Power New Supercomputer at BASF

Over at the Altair Blog, Jochen Krebs writes that the new HPC cluster at BASF will run PBS Works workload management software. “What does it take to go from months to mere days in gaining results when conducting research? Supercomputing now plays a vital role in the advancement of systems efficiency across industries. On March 17th, BASF and HPE announced in a press release that BASF has chosen HPE to build a new supercomputer for chemical research projects. HPE’s Apollo System supercomputer will help BASF to reduce computer simulation and modeling times from months to days and will drive the digitalization of BASF’s worldwide research activities.”

OpenStack for Research Computing

“This talk will present the motivating factors for considering OpenStack for the management of research computing infrastructure. Stig Telfer will give an overview of the differences in design criteria between cloud, HPC and data analytics, and how these differences can be mitigated through architectural and configuration choices of an OpenStack private cloud. Some real-world examples will be given that demonstrate the potential for using OpenStack for managing HPC infrastructure. This talk will present ways that the HPC community can gain the benefits of using software-defined infrastructure without paying the performance overhead.”

The Computer That Could Be Smarter than Us – Cognitive Computing

Ingolf Wittmann from IBM presented this talk at the Switzerland HPC Conference. “This presentation will point out, based on real examples, how HPC environments can benefit from such solutions and technologies to drive cognitive solutions and machine/deep learning, where we can ask ourselves, ‘What will be possible in the near future – can future computers be smarter than humans?’”

Cedar Supercomputer Comes to Canada

Simon Fraser University (SFU), Compute Canada, and WestGrid have announced the launch of Cedar, the most powerful academic supercomputer in Canada and a major update to the country’s HPC resources. Housed in the new data centre at SFU’s Burnaby Campus, Cedar will serve Canadian researchers across the country and across all scientific disciplines by providing expanded compute, storage, and cloud resources.

Dr. Eng Lim Goh presents: HPC & AI Technology Trends

Dr. Eng Lim Goh from Hewlett Packard Enterprise gave this talk at the HPC User Forum. “SGI’s highly complementary portfolio, including its in-memory high-performance data analytics technology and leading high-performance computing solutions will extend and strengthen HPE’s current leadership position in the growing mission critical and high-performance computing segments of the server market.”

Update on the Exascale Computing Project (ECP)

Paul Messina from Argonne presented this talk at the HPC User Forum in Santa Fe. “The Exascale Computing Project (ECP) was established with the goals of maximizing the benefits of HPC for the United States and accelerating the development of a capable exascale computing ecosystem. The ECP is a collaborative effort of two U.S. Department of Energy organizations – the Office of Science (DOE-SC) and the National Nuclear Security Administration (NNSA).”

Rock Stars of HPC: John Stone

This Rock Stars of HPC series profiles the men and women who are changing the way the HPC community develops, deploys, and operates supercomputers, and whose discoveries carry broad social and economic impact. “As the lead developer of the VMD molecular visualization and analysis tool, John Stone’s code is used by more than 100,000 researchers around the world. He’s also a CUDA Fellow, helping to bring HPC to the masses with accelerated computing. In this way and many others, John Stone is certainly one of the Rock Stars of HPC.”

Spack: A Package Manager for Supercomputers, Linux, and macOS

“HPC software is becoming increasingly complex. The space of possible build configurations is combinatorial, and existing package management tools do not handle these complexities well. Because of this, most HPC software is built by hand. This talk introduces ‘Spack’, an open-source tool for scientific package management which helps developers and cluster administrators avoid wasting countless hours porting and rebuilding software.” A tutorial video on using Spack is also included.
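
For readers unfamiliar with the tool, here is a minimal sketch of what a Spack package recipe looks like; the package name, versions, and build options below are purely illustrative and are not taken from the talk. Each variant and dependency adds a dimension to the combinatorial configuration space that the talk describes, and Spack’s concretizer resolves a single concrete configuration to build.

```python
# Hypothetical Spack package recipe (illustrative only; "mylib", its URL,
# versions, and checksum are placeholders, not a real package).
from spack.package import *


class Mylib(CMakePackage):
    """Illustrative scientific library packaged for Spack."""

    homepage = "https://example.com/mylib"
    url = "https://example.com/mylib-1.0.0.tar.gz"

    # Placeholder checksum; a real recipe records the tarball's sha256.
    version("1.0.0", sha256="0" * 64)

    # Each variant adds one more dimension to the build-configuration space.
    variant("mpi", default=True, description="Build with MPI support")

    depends_on("cmake@3.18:", type="build")
    depends_on("mpi", when="+mpi")

    def cmake_args(self):
        # Translate the chosen variant into the corresponding CMake flag.
        return [self.define_from_variant("ENABLE_MPI", "mpi")]
```

A user then requests one point in that configuration space with a spec on the command line, for example `spack install mylib +mpi %gcc@12` to build the (hypothetical) package with MPI enabled using a GCC 12 compiler.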

Baidu Deep Learning Service adds Latest NVIDIA Pascal GPUs

“Baidu and NVIDIA are long-time partners in advancing the state of the art in AI,” said Ian Buck, general manager of Accelerated Computing at NVIDIA. “Baidu understands that enterprises need GPU computing to process the massive volumes of data needed for deep learning. Through Baidu Cloud, companies can quickly convert data into insights that lead to breakthrough products and services.”

Podcast: How AI Can Improve the Diagnosis and Treatment of Diseases

In this AI Podcast, Mark Michalski from the Massachusetts General Hospital Center for Clinical Data Science discusses how AI is being used to advance medicine. “Medicine — particularly radiology and pathology — has become more data-driven. The Massachusetts General Hospital Center for Clinical Data Science — led by Mark Michalski — promises to accelerate that, using AI technologies to spot patterns that can improve the detection, diagnosis and treatment of diseases.”