

Big Lab Problems Solved with Spectrum Scale: Innovations for the CORAL Program

Sven Oehme, Chief Research Strategist at IBM, presented this talk at the DDN User Group. “Since 2007, DDN has sustained a highly strategic partnership with IBM to drive our mutual HPC technology vision to the next level. By leveraging a close working relationship with IBM, DDN provides the performance and capacity systems that help deliver IBM’s Spectrum Scale (formerly known as GPFS) into the most demanding environments.”

Interview: Bill Mannel and Dr. Eng Lim Goh on What’s Next for HPE & SGI

In this video, Bill Mannel, VP & GM, High-Performance Computing and Big Data, HPE, and Dr. Eng Lim Goh, SVP & CTO of SGI, join Dave Vellante & Paul Gillin at HPE Discover 2016. “The combined HPE and SGI portfolio, including a comprehensive services capability, will support private and public sector customers seeking larger high-performance computing installations, including U.S. federal agencies as well as enterprises looking to leverage high-performance computing for business insights and a competitive edge.”

New AMD Radeon Instinct Rolls Out to Accelerate Machine Intelligence

“New Radeon Instinct accelerators will offer organizations powerful GPU-based solutions for deep learning inference and training. Along with the new hardware offerings, AMD announced MIOpen, a free, open-source library for GPU accelerators intended to enable high-performance machine intelligence implementations, and new, optimized deep learning frameworks on AMD’s ROCm software to build the foundation of the next evolution of machine intelligence workloads.”
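To make “optimized deep learning frameworks on AMD’s ROCm software” concrete, here is a minimal, hypothetical sketch assuming a ROCm-enabled PyTorch build, which exposes AMD GPUs through the familiar torch.cuda interface; the model and batch sizes are placeholders, not anything AMD announced.

```python
import torch
import torch.nn as nn

# A ROCm-enabled PyTorch build (assumption) exposes AMD GPUs via the CUDA-style API.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tiny stand-in classifier; a real inference workload would load a trained model.
model = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
model.eval()

batch = torch.randn(32, 1024, device=device)  # hypothetical input batch
with torch.no_grad():
    scores = model(batch)                     # forward pass runs on the accelerator
print(scores.shape)                           # torch.Size([32, 10])
```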

Penguin Computing Lands 9 CTS-1 Open Compute Project Supercomputers on the TOP500

In this video from SC16, Dan Dowling from Penguin Computing describes the company’s momentum with nine CTS-1 supercomputers on the TOP500. The systems were procured under NNSA’s Tri-Laboratory Commodity Technology Systems program, or CTS-1, to bolster computing for national security at Los Alamos, Sandia and Lawrence Livermore national laboratories. The resulting deployment of these supercomputing clusters is among the world’s largest Open Compute-based installations, a major validation of Penguin Computing’s leadership in Open Compute high-performance computing architecture.

Building a Platform for Collaborative Scientific Research on AWS

“The pharmaceutical industry trend toward joint ventures and collaborations has created a need for new platforms in which to work together. We’ll dive into architectural decisions for building collaborative systems. Examples include how such a platform allowed Human Longevity, Inc. to accelerate software deployment to production in a fast-paced research environment, and how Celgene uses AWS for research collaboration with outside universities and foundations.”
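One common building block of such collaboration platforms is cross-account data sharing. The sketch below is a hypothetical example using boto3 (the bucket name and partner account ID are made up) that grants a partner account read-only access to a shared S3 bucket; it illustrates the general pattern rather than the specific architectures discussed in the talk.

```python
import json
import boto3

# Hypothetical identifiers for illustration only.
BUCKET = "shared-research-data"
PARTNER_ACCOUNT_ID = "123456789012"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # let the partner account list the bucket
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{PARTNER_ACCOUNT_ID}:root"},
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {   # and read the objects inside it
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{PARTNER_ACCOUNT_ID}:root"},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```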

NCAR’s Evolving Infrastructure for Weather and Climate Research

Pamela Hill from NCAR/UCAR presented this talk at the DDN User Group at SC16. “With the game-changing SFA14K, NCAR now has the storage capacity and sustained compute performance to perform sophisticated modeling while substantially reducing workflow bottlenecks. As a result, the organization will be able to quickly process mixed I/O workloads while sharing up to 40 PB of vital research data with a growing scientific community around the world.”

New Plan: ECP Project to Deploy First Exascale System by 2021

Today the DOE Exascale Computing Project (ECP) announced changes to its strategic plan. ECP now plans to deploy the first Exascale system in the U.S. in 2021, a full one to two years earlier than previously planned. This system will be built from a “novel architecture” that will be put out for bid in the near future. According to Argonne’s Paul Messina, Director of the Exascale Computing Project, “It won’t be something out there like quantum computing, but we are looking for new ideas in terms of processing and networking technologies for the machine.”

Nvidia Powers Deep Learning for Healthcare at SC16

In this video from SC16, Abdul Hamid Al Halabi from Nvidia describes how the company is accelerating Deep Learning for Healthcare. “From Electronic Health Records (EHR) to wearables, every year the flood of heterogeneous healthcare data increases exponentially. Deep learning has the power to unlock the potential within this data. Harnessing the power of GPUs, healthcare and medical researchers are able to design and train more sophisticated neural networks—networks that can accelerate high-throughput screening for drug discovery, guide pre-operative strategies, or work in conjunction with traditional techniques and apparatus to detect invasive cancer cells in real-time during surgery.”

Supermicro Showcases Versatile HPC Solutions at SC16

In this video from SC16, Don Clegg from Supermicro describes the company’s broad range of HPC solutions. “Innovation is at the core of Supermicro product development and benefits the HPC community with first-to-market integration of advanced technology such as our 1U with four and 4U with eight Pascal P100 SXM2 GPUs or 4U with ten PCI-e GPU systems, hot-swap U.2 NVMe, upcoming fabric technologies like Red Rock Canyon and PCI-E switches, as well as new architecture designs like our new high-density BigTwin system design.”

Oakforest-PACS: Overview of the Fastest Supercomputer in Japan

Prof. Taisuke Boku from the University of Tsukuba & JCAHPC presented this talk at the DDN User Group at SC16. “Thanks to DDN’s IME Burst Buffer, researchers using Oakforest-PACS at the Joint Center for Advanced High Performance Computing (JCAHPC) are able to improve modeling of fundamental physical systems and advance understanding of requirements for Exascale-level systems architectures. With DDN’s advanced technology, JCAHPC has achieved effective I/O performance exceeding 1 TB/s with tens of thousands of processes writing to the same file.”
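The workload described here, many processes writing to one shared file, is typically expressed with MPI-IO collective writes. Below is a minimal, hypothetical sketch using mpi4py (the file name and per-rank buffer size are made up), in which each rank writes its own contiguous block of a single shared file; it illustrates the access pattern, not JCAHPC’s actual codes.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank prepares its own block of data (size is a placeholder).
n = 1024 * 1024
data = np.full(n, rank, dtype="f8")

# All ranks open the same file and write collectively at disjoint offsets.
fh = MPI.File.Open(comm, "shared_output.dat", MPI.MODE_WRONLY | MPI.MODE_CREATE)
fh.Write_at_all(rank * data.nbytes, data)
fh.Close()
```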