

Panel Discussion on Disruptive Technologies for HPC

In this video from the HPC User Forum, Bob Sorensen from Hyperion Research moderates a panel discussion on Disruptive Technologies for HPC. “A disruptive innovation is an innovation that creates a new market and value network and eventually disrupts an existing market and value network, displacing established market leading firms, products and alliances. The term was defined, and the phenomenon analyzed, by Clayton M. Christensen beginning in 1995.”

Radio Free HPC Catches Up with the Exascale Computing Project

In this podcast, the Radio Free HPC team looks at a recent update on the Exascale Computing Project by Paul Messina. “The Exascale Computing Project (ECP) was established with the goals of maximizing the benefits of HPC for the United States and accelerating the development of a capable exascale computing ecosystem.”

Video: 2016 HPC Market Results, Growth Projections, and Trends

In this video from the HPC User Forum in Santa Fe, Earl Joseph from Hyperion Research provides an HPC Market Update and results from their Exascale Tracking Study. “Formerly the IDC HPC Research Group, Hyperion Research tracks the high performance computing market.”

PBS Works Will Power New Supercomputer at BASF

Over at the Altair Blog, Jochen Krebs writes that the new HPC cluster at BASF will run PBS Works workload management software. “What does it take to go from months to mere days in gaining results when conducting research? Supercomputing now plays a vital role in the advancement of systems efficiency across industries. On March 17th, BASF and HPE announced in a press release that BASF has chosen HPE to build a new supercomputer for chemical research projects. HPE’s Apollo System supercomputer will help BASF to reduce computer simulation and modeling times from months to days and will drive the digitalization of BASF’s worldwide research activities.”
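As a rough illustration of the kind of workload management PBS Works provides, a batch job on a PBS Professional-based cluster is typically described by a short script and submitted with qsub. The sketch below is hypothetical; the job name, queue, module, and executable are placeholders, not details from the BASF deployment:

    #!/bin/bash
    #PBS -N chem_sim                        # job name (hypothetical)
    #PBS -l select=4:ncpus=36:mpiprocs=36   # request 4 nodes, 36 MPI ranks each
    #PBS -l walltime=02:00:00               # 2-hour wall-clock limit
    #PBS -q workq                           # target queue (hypothetical)
    #PBS -j oe                              # merge stdout and stderr

    cd "$PBS_O_WORKDIR"                     # run from the submission directory
    module load chem-sim/1.0                # hypothetical application module
    mpirun ./simulate input.dat             # launch the MPI simulation

    # Submit with:  qsub chem_sim.pbs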

OpenStack for Research Computing

“This talk will present the motivating factors for considering OpenStack for the management of research computing infrastructure. Stig Telfer will give an overview of the differences in design criteria between cloud, HPC and data analytics, and how these differences can be mitigated through architectural and configuration choices of an OpenStack private cloud. Some real-world examples will be given that demonstrate the potential for using OpenStack for managing HPC infrastructure. This talk will present ways that the HPC community can gain the benefits of using software-defined infrastructure without paying the performance overhead.”
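One concrete example of the configuration choices the abstract alludes to is tuning OpenStack Nova flavors for HPC-style guests. The hedged sketch below uses the standard OpenStack CLI; the flavor name and sizes are illustrative, and the two extra specs (dedicated CPU pinning and huge-page-backed memory) are commonly cited ways to reduce virtualization overhead:

    # Create a compute flavor sized for HPC workloads (name and sizes are illustrative)
    openstack flavor create hpc.c16m64 --vcpus 16 --ram 65536 --disk 40

    # Pin vCPUs to dedicated host cores and back guest memory with huge pages
    openstack flavor set hpc.c16m64 \
        --property hw:cpu_policy=dedicated \
        --property hw:mem_page_size=large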

The Computer That Could Be Smarter than Us – Cognitive Computing

Ingolf Wittmann from IBM presented this talk at the Switzerland HPC Conference. “This presentation will point out, based on real examples, how HPC environments can benefit from such solutions and technologies to drive cognitive solutions and machine/deep learning, where we can ask ourselves, ‘What will be possible in the near future – can future computers be smarter than humans?’”

HPC Workflows Using Containers

“In this talk we will discuss a workflow for building and testing Docker containers and their deployment on an HPC system using Shifter. Docker is widely used by developers as a powerful tool for standardizing the packaging of applications across multiple environments, which greatly eases porting efforts. On the other hand, Shifter provides a container runtime that has been specifically built to fit the needs of HPC. We will briefly introduce these tools while discussing the advantages of using these technologies to fulfill the needs of specific workflows for HPC, e.g., security, high performance, portability, and parallel scalability.”
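A hedged sketch of the workflow the abstract describes: a Docker image is built and pushed from a developer machine or CI system, then pulled through Shifter's image gateway and run under Slurm on the HPC system. The image name and run script below are hypothetical placeholders:

    # On a laptop or CI system: build and publish the image (name is hypothetical)
    docker build -t myrepo/myapp:latest .
    docker push myrepo/myapp:latest

    # On the HPC system: pull the image into Shifter's local image gateway
    shifterimg pull docker:myrepo/myapp:latest

    # Run the containerized application inside a Slurm allocation
    srun -N 2 shifter --image=docker:myrepo/myapp:latest ./run_app.sh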

Dr. Eng Lim Goh presents: HPC & AI Technology Trends

Dr. Eng Lim Goh from Hewlett Packard Enterprise gave this talk at the HPC User Forum. “SGI’s highly complementary portfolio, including its in-memory high-performance data analytics technology and leading high-performance computing solutions, will extend and strengthen HPE’s current leadership position in the growing mission critical and high-performance computing segments of the server market.”

Update on the Exascale Computing Project (ECP)

Paul Messina from Argonne presented this talk at the HPC User Forum in Santa Fe. “The Exascale Computing Project (ECP) was established with the goals of maximizing the benefits of HPC for the United States and accelerating the development of a capable exascale computing ecosystem. The ECP is a collaborative effort of two U.S. Department of Energy organizations – the Office of Science (DOE-SC) and the National Nuclear Security Administration (NNSA).”

High Performance Interconnects – Assessments, Rankings and Landscape

Dan Olds from OrionX.net presented this talk at the Switzerland HPC Conference. “Dan Olds will present recent research into the history of High Performance Interconnects (HPI), the current state of the HPI market, where HPIs are going in the future, and how customers should evaluate HPI options today. This will be a highly informative and interactive session.”