The Computer That Could Be Smarter than Us – Cognitive Computing

Ingolf Wittmann from IBM presented this talk at the Switzerland HPC Conference. “This presentation will point out, based on real examples, how HPC environments can benefit from such solutions and technologies to drive cognitive solutions and machine/deep learning, where we can ask ourselves: ‘What will be possible in the near future – can future computers be smarter than humans?’”

HPC Workflows Using Containers

“In this talk we will discuss a workflow for building and testing Docker containers and their deployment on an HPC system using Shifter. Docker is widely used by developers as a powerful tool for standardizing the packaging of applications across multiple environments, which greatly eases porting efforts. On the other hand, Shifter provides a container runtime that has been built specifically to fit the needs of HPC. We will briefly introduce these tools while discussing the advantages of using these technologies to fulfill the needs of specific workflows for HPC, e.g., security, high performance, portability, and parallel scalability.”
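
As a rough sketch of such a workflow (my illustration, not material from the talk), the steps below build and test an image locally with Docker, push it to a registry, and then pull and run it on the HPC system through Shifter. The registry, image name, test script, and application command are all placeholders; the docker, shifterimg, and shifter invocations follow those tools’ documented command-line interfaces.

```python
import subprocess

def run(cmd):
    """Run one workflow step and stop if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Placeholder registry/image name; substitute your own application image.
image = "registry.example.com/myapp:latest"

# 1. Build and test the image on a workstation or CI machine with Docker.
run(["docker", "build", "-t", image, "."])
run(["docker", "run", "--rm", image, "./run_tests.sh"])  # hypothetical test entry point

# 2. Push the image to a registry the HPC center's Shifter image gateway can reach.
run(["docker", "push", image])

# 3. On the HPC system: import the image into Shifter and run it,
#    typically from inside a batch job.
run(["shifterimg", "pull", "docker:" + image])
run(["shifter", "--image=docker:" + image, "./my_app"])  # placeholder application command
```

In practice the last two commands run on the cluster (often through the batch scheduler) rather than from the same script, but the sequence of steps is the same.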

Dr. Eng Lim Goh presents: HPC & AI Technology Trends

Dr. Eng Lim Goh from Hewlett Packard Enterprise gave this talk at the HPC User Forum. “SGI’s highly complementary portfolio, including its in-memory high-performance data analytics technology and leading high-performance computing solutions will extend and strengthen HPE’s current leadership position in the growing mission critical and high-performance computing segments of the server market.”

Update on the Exascale Computing Project (ECP)

Paul Messina from Argonne presented this talk at the HPC User Forum in Santa Fe. “The Exascale Computing Project (ECP) was established with the goals of maximizing the benefits of HPC for the United States and accelerating the development of a capable exascale computing ecosystem. The ECP is a collaborative effort of two U.S. Department of Energy organizations – the Office of Science (DOE-SC) and the National Nuclear Security Administration (NNSA).”

High Performance Interconnects – Assessments, Rankings and Landscape

Dan Olds from OrionX.net presented this talk at the Switzerland HPC Conference. “Dan Olds will present recent research into the history of High Performance Interconnects (HPI), the current state of the HPI market, where HPIs are going in the future, and how customers should evaluate HPI options today. This will be a highly informative and interactive session.”

OpenCAPI: A New Standard for High Performance Attachment of Memory, Acceleration, and Networks

In this video from the Switzerland HPC Conference, Jeffrey Stuecheli from IBM presents: OpenCAPI, A New Standard for High Performance Attachment of Memory, Acceleration, and Networks. “OpenCAPI sets a new standard for the industry, providing a high bandwidth, low latency open interface design specification. This session will introduce the new standard and its goals. This includes details on how the interface protocol provides unprecedented latency and bandwidth to attached devices.”

Spack: A Package Manager for Supercomputers, Linux, and macOS

“HPC software is becoming increasingly complex. The space of possible build configurations is combinatorial, and existing package management tools do not handle these complexities well. Because of this, most HPC software is built by hand. This talk introduces “Spack”, an open-source tool for scientific package management which helps developers and cluster administrators avoid wasting countless hours porting and rebuilding software.” A tutorial video on using Spack is also included.
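
To make the idea concrete, here is a minimal, hypothetical Spack package recipe (my sketch, not taken from the talk). Spack recipes are Python classes; declaring versions, variants, and dependencies in one place is what lets Spack build many configurations of the same package from a single description. The package name, URL, and checksum below are placeholders.

```python
# Hypothetical recipe: would live at var/spack/repos/builtin/packages/mylib/package.py
from spack import *


class Mylib(AutotoolsPackage):
    """Placeholder scientific library used to illustrate a Spack recipe."""

    homepage = "https://example.org/mylib"
    url      = "https://example.org/mylib-1.0.0.tar.gz"

    # Placeholder version and checksum; real recipes list every supported release.
    version('1.0.0', '00000000000000000000000000000000')

    # Optional features become variants, so one recipe covers many build configurations.
    variant('mpi', default=True, description='Build with MPI support')

    depends_on('mpi', when='+mpi')

    def configure_args(self):
        # Translate the requested spec into configure flags.
        if '+mpi' in self.spec:
            return ['--enable-mpi']
        return ['--disable-mpi']
```

A user can then request a particular configuration on the command line, for example `spack install mylib+mpi ^openmpi %gcc`, and Spack resolves and builds the matching dependency tree.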

OpenPOWER Developer Congress Event to Focus on Machine Learning

Today IBM announced that the first annual OpenPOWER Foundation Developer Congress will take place May 22-25 in San Francisco. With a focus on Machine Learning, the conference will continue to foster collaboration within the foundation to satisfy the performance demands of today’s computing market.

Lenovo HPC Strategy Update

Luigi Brochard from Lenovo gave this talk at the Switzerland HPC Conference. “High performance computing is converging more and more with big data and its related infrastructure requirements. Lenovo is investing in developing systems designed to solve today’s and future problems more efficiently and to respond to the demands of the industrial and research application landscape.”

Deep Learning on the SaturnV Cluster

“The basic idea of deep learning is to automatically learn to represent data in multiple layers of increasing abstraction, thus helping to discover intricate structure in large datasets. NVIDIA has invested in SaturnV, a large GPU-accelerated cluster (#28 on the November 2016 Top500 list), to support internal machine learning projects. After an introduction to deep learning on GPUs, we will address a selection of open questions programmers and users may face when using deep learning for their work on these clusters.”
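
As a minimal illustration of “multiple layers of increasing abstraction” (my sketch, not code from the talk), the PyTorch fragment below stacks a few fully connected layers and moves the model to a GPU when one is available; the layer sizes and batch size are arbitrary placeholders.

```python
import torch
import torch.nn as nn

# A small multi-layer network: each layer re-represents its input at a higher
# level of abstraction. Models trained on a cluster like SaturnV are far larger
# and are spread across many GPUs, but the layered structure is the same.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),  # raw inputs -> low-level features
    nn.Linear(256, 64),  nn.ReLU(),  # low-level -> higher-level features
    nn.Linear(64, 10),               # higher-level features -> class scores
)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

x = torch.randn(32, 784, device=device)  # a dummy batch of 32 inputs
scores = model(x)                         # forward pass; shape (32, 10)
print(scores.shape)
```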