

Video: State of Linux Containers

“Linux containers continue to gain momentum across IT ecosystems. This talk provides an overview of what has happened in the container landscape (in particular Docker) over the past year and how it impacts datacenter operations, HPC, and high-performance Big Data. Furthermore, Christian will update and extend the ‘things to explore’ list he presented at the last Lugano workshop, applying what he learned and encountered during 2016.”

Intel Rolls Out BigDL Deep Learning Library for Apache Spark

Today Intel announced BigDL, an open-source distributed deep learning library for the Apache Spark* open-source cluster-computing framework. “BigDL is an open-source project, and we encourage all developers to connect with us on the BigDL Github, sample the code and contribute to the project,” said Doug Fisher, senior vice president and general manager of the Software and Services Group at Intel.

Job of the Week: Research Systems and Application Administrator at University of Oregon

“The University of Oregon (UO) High Performance Computing Research Core Facility (HPCRCF) seeks experienced applicants for the position of Research Systems and Application Administrator. The HPCRCF is a new facility located on the campus of the UO in Eugene, Oregon. The mission of the HPCRCF is to support computational research at the UO and collaborating institutions, and the facility is home to a new flagship research cluster.”

Global HPC Centers Form Accelerated Computing Institute

Leaders in hybrid accelerated HPC in the United States, Japan, and Switzerland have signed a memorandum of understanding establishing an international institute dedicated to common goals, the sharing of HPC expertise, and forward-thinking evaluation of computing architecture. “Forecasting the future of leadership-class computing and managing the risk of architectural change is a shared interest among ORNL, Tokyo Tech, and ETH Zurich,” said Jeff Nichols, associate laboratory director of computing and computational sciences at ORNL. “What unites our three organizations is a willingness to embrace change, actively partner with HPC vendors, and devise solutions that advance the work of our scientific users. ADAC provides a framework for member organizations to pursue mutual interests such as accelerated node architectures as computing moves toward the exascale era and beyond.”

Coposky and Russell Tapped to Lead iRODS Consortium

Industry veterans Jason Coposky and Terrell Russell have taken lead roles at the membership-based foundation that leads development and support of the integrated Rule-Oriented Data System (iRODS). “With data becoming the currency of the knowledge economy, now is an exciting time to be involved with developing and sustaining a world-class data management platform like iRODS,” said Coposky. “Our consortium membership is growing, and our increasing ability to integrate with commonly used hardware and software is translating into new users and an even more robust product.”

Best Practices – Large Scale Multiphysics

Frank Ham from Cascade Technologies presented this talk at the Stanford HPC Conference. “A spin-off of the Center for Turbulence Research at Stanford University, Cascade Technologies grew out of the need to bridge the gap between fundamental research at institutions like Stanford University and its application in industry. In a continual push to improve the operability and performance of combustion devices, high-fidelity simulation methods for turbulent combustion are emerging as critical elements in the design process. Multiphysics-based methodologies can accurately predict mixing, study flame structure and stability, and even predict product and pollutant concentrations at design and off-design conditions.”

Tutorial: Towards Exascale Computing with Fortran 2015

“This tutorial will present several features that the draft Fortran 2015 standard introduces to meet challenges that are expected to dominate massively parallel programming in the coming exascale era. The expected exascale challenges include higher hardware- and software-failure rates, increasing hardware heterogeneity, a proliferation of execution units, and deeper memory hierarchies.”

Podcast: IDC’s Steve Conway on China’s New Plan for Exascale

“China and the United States are racing to develop the world’s most capable supercomputer. China has announced that its exascale computer could be released sooner than originally planned. Steve Conway, VP for high performance computing at IDC, joins Federal Drive with Tom Temin for analysis.”

Video: Trish Damkroger on her New Mission at Intel

In this video from KAUST Live, Patricia Damkroger discusses her new role as Vice President, Data Center Group and General Manager, Technical Computing Initiative, Enterprise and Government at Intel. “As the former Associate Director for Computation at Lawrence Livermore National Laboratory (LLNL), Trish Damkroger led the 1,000-employee workforce behind the Laboratory’s high performance computing efforts. She is a longtime committee member and one-time general chair of the SC conference. Most recently, Damkroger was the SC16 Diverse HPC Workforce Chair.”

Panel Discussion: The Exascale Endeavor

Gilad Shainer moderated this panel discussion on exascale computing at the Stanford HPC Conference. “The creation of a capable exascale ecosystem will have profound effects on the lives of Americans, improving our nation’s national security, economic competitiveness, and scientific capabilities. The exponential increase in computing power enabled by exascale will fuel a vast range of breakthroughs and accelerate discoveries in national security, medicine, earth sciences and many other fields.”