Sandia’s Bill Camp to Receive Seymour Cray Award

This week, IEEE announced that Dr. William Camp, Director Emeritus at Sandia National Laboratories, has been named the recipient of the 2016 IEEE Computer Society Seymour Cray Computer Engineering Award “for visionary leadership of the Red Storm project, and for decades of leadership of the HPC community.” Dr. Camp spent most of his career at NNSA’s Sandia Labs, at Cray Research and at Intel.

Is Free Lunch Back? Douglas Eadline Looks at the Epiphany-V Processor

Over at Cluster Monkey, Douglas Eadline writes that the “free lunch” performance boost of Moore’s Law may indeed be back with the 1024-core Epiphany-V chip that will hit the market in the next few months.

Brookhaven Lab to Develop ECP Exascale Software

Scientists at Brookhaven National Laboratory will play major roles in two of the 15 fully funded application development proposals recently selected by the DOE’s Exascale Computing Project (ECP) in its first-round funding of $39.8 million. “The team at Brookhaven will develop algorithms, language environments, and application codes that will enable scientists to perform lattice quantum chromodynamics (QCD) calculations on next-generation supercomputers.”

Video: Sustainable High-Performance Computing through Data Science

Ozalp Babaoglu from the University of Bologna presented this Google Talk. “At exascale, failures and errors will be frequent, with many instances occurring daily. This fact places resilience squarely as another major roadblock to sustainability. In this talk, I will argue that large computer systems, including exascale HPC systems, will ultimately be operated based on predictive computational models obtained through data-science tools, and at that point, the intervention of humans will be limited to setting high-level goals and policies rather than performing ‘nuts-and-bolts’ operations.”

Exascale – A Race to the Future of HPC

From megaflops to gigaflops to teraflops to petaflops, and soon to exaflops, the march of HPC performance continues. This whitepaper details some of the technical challenges that will need to be addressed in the coming years in order to reach exascale computing.

Supercomputing Plant Polymers for Biofuels

A huge barrier in converting cellulose polymers to biofuel lies in removing other biomass polymers that subvert this chemical process. To overcome this hurdle, large-scale computational simulations are picking apart lignin, one of those inhibiting polymers, and its interactions with cellulose and other plant components. The results point toward ways to optimize biofuel production and […]

IDC to Launch New Exascale Tracking Study

In this video from the 2016 HPC User Forum in Austin, Earl Joseph describes IDC’s new Exascale Tracking Study, a project that will monitor the many exascale initiatives around the world.

NREL to Lead Wind Power Research for Exascale Computing Project

“This project will make a substantial contribution to advancing wind energy,” said Steve Hammond, NREL’s Director of Computational Science and the principal investigator on the project. “It will advance our fundamental understanding of the complex flow physics of whole wind plants, which will help further reduce the cost of electricity derived from wind energy.”

Co-design for Data Analytics And Machine Learning

The big data analytics market has seen rapid growth in recent years. Part of this trend includes the increased use of machine learning (deep learning) technologies. Indeed, machine learning performance has been drastically improved through the use of GPU accelerators. The issues facing the HPC market are similar to those facing the analytics market — efficient use of the underlying hardware. A position paper from the third annual Big Data and Extreme Computing conference (2015) illustrates the power of co-design in the analytics market.

DOE Funds Asynchronous Supercomputing Research at Georgia Tech

“More than just building bigger and faster computers, high-performance computing is about how to build the algorithms and applications that run on these computers,” said School of Computational Science and Engineering (CSE) Associate Professor Edmond Chow. “We’ve brought together the top people in the U.S. with expertise in asynchronous techniques as well as experience needed to develop, test, and deploy this research in scientific and engineering applications.”