Jülich to Build 5 Petaflop Supercomputing Booster with Dell

Today Intel and the Jülich Supercomputing Centre, together with ParTec and Dell, announced plans to develop and deploy a next-generation modular supercomputing system. Leveraging the experience and results gained in the EU-funded DEEP and DEEP-ER projects, in which three of the partners have been strongly engaged, the group will develop the mechanisms required to augment JSC's JURECA cluster with a highly scalable component named the "Booster," based on Intel's Scalable System Framework (Intel SSF).
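In the modular (cluster-booster) architecture explored by the DEEP projects, general-purpose cluster nodes run the less scalable parts of an application while highly scalable kernels are offloaded to the Booster. As a rough illustration only, the MPI sketch below shows one way such an offload could look; the booster host specification, the `booster_kernel` helper binary, and the process count are hypothetical and not taken from the announcement.

```c
/* Hedged sketch: one way a cluster application might offload a
 * highly scalable kernel to booster nodes, in the spirit of the
 * DEEP cluster-booster architecture. The hostname pattern, the
 * "booster_kernel" binary, and the process count are made up. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Ask the runtime to place the spawned processes on booster
     * nodes; "booster[0-255]" is a hypothetical host specification. */
    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "host", "booster[0-255]");

    /* Launch the scalable kernel as a separate executable and get
     * back an intercommunicator bridging cluster and booster. */
    MPI_Comm booster;
    MPI_Comm_spawn("./booster_kernel", MPI_ARGV_NULL, 256, info,
                   0, MPI_COMM_WORLD, &booster, MPI_ERRCODES_IGNORE);

    /* Hand the kernel its input and collect the result; a real
     * application would exchange simulation data here. */
    double in = 1.0, out = 0.0;
    if (rank == 0) {
        MPI_Send(&in, 1, MPI_DOUBLE, 0, 0, booster);
        MPI_Recv(&out, 1, MPI_DOUBLE, 0, 0, booster,
                 MPI_STATUS_IGNORE);
        printf("result from booster: %f\n", out);
    }

    MPI_Info_free(&info);
    MPI_Comm_disconnect(&booster);
    MPI_Finalize();
    return 0;
}
```

In this picture the ParaStation software layer developed by ParTec would provide the glue that lets MPI span the cluster and booster modules transparently.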

10 Things You’re Wrong About in HPC

Rich Brueckner from insideHPC presented this talk at the Switzerland HPC Conference. "While High Performance Computing has gone through dramatic changes since Seymour Cray created the supercomputer industry in the 1970s, misnomers, myths, and Alternative Facts have established themselves in the hive mind of the HPC community. In this session, Rich will turn the industry on its ear and reveal the whole truth in the service of outright parody."

LANL Prepares Next Generation of HPC Professionals at New Mexico High School Supercomputing Challenge

More than 200 New Mexico students and teachers from 55 different teams came together in Albuquerque this week to showcase their computing research projects at the 27th annual New Mexico Supercomputing Challenge expo and awards ceremony. "It is encouraging to see the excitement generated by the participants and the great support provided by all the volunteers involved in the Supercomputing Challenge," said David Kratzer of the Laboratory's High Performance Computing Division, the Los Alamos coordinator of the Supercomputing Challenge.

Rock Stars of HPC: James Phillips

Recipient of a Gordon Bell Award in 2002, James Phillips has been a full-time research programmer for almost 20 years. Since 1998, he has been the lead developer of NAMD, a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems that scales beyond 200,000 cores. He is undoubtedly a Rock Star of HPC.

Mark III Systems Becomes Cray Solutions Provider

Today Cray announced it has signed a solutions provider agreement with Mark III Systems to develop, market and sell solutions that leverage Cray’s portfolio of supercomputing and big data analytics systems. “We’re very excited to be partnering with Cray to deliver unique platforms and data-driven solutions to our joint clients, especially around the key opportunities of data analytics, artificial intelligence, cognitive compute, and deep learning,” said Chris Bogan, Mark III’s director of business development and alliances. “Combined with Mark III’s full stack approach of helping clients capitalize on the big data and digital transformation opportunities, we think that this partnership offers enterprises and organizations the ability to differentiate and win in the marketplace in the digital era.”

Video: HPC Trends for 2017

In this video from the Switzerland HPC Conference, Michael Feldman from TOP500.org presents an annual deep dive into the trends, technologies, and usage models that will be propelling the HPC community through 2017 and beyond. "Emerging areas of focus and opportunities to expand will be explored along with insightful observations needed to support measurably positive decision making within your operations."

Paula Stephan and Paul Morin to Keynote PEARC17 in New Orleans

Today the PEARC17 conference announced its lineup of keynote speakers. The conference takes place July 9–13 in New Orleans.

Panel Discussion on Disruptive Technologies for HPC

In this video from the HPC User Forum, Bob Sorensen from Hyperion Research moderates a panel discussion on Disruptive Technologies for HPC. "A disruptive innovation is an innovation that creates a new market and value network and eventually disrupts an existing market and value network, displacing established market-leading firms, products, and alliances. The term was defined, and the phenomenon analyzed, by Clayton M. Christensen beginning in 1995."

Argonne Seeking Proposals to Advance Big Data in Science

The Argonne Leadership Computing Facility Data Science Program (ADSP) is now accepting proposals for projects hoping to gain insight into very large datasets produced by experimental, simulation, or observational methods. The larger the data, in fact, the better. Applications are due by June 15, 2017.

Radio Free HPC Catches Up with the Exascale Computing Project

In this podcast, the Radio Free HPC team looks at a recent update on the Exascale Computing Project by Paul Messina. “The Exascale Computing Project (ECP) was established with the goals of maximizing the benefits of HPC for the United States and accelerating the development of a capable exascale computing ecosystem.”