Ozalp Babaoglu from the University of Bologna presented this Google Talk. “At exascale, failures and errors will be frequent, with many instances occurring daily. This fact places resilience squarely as another major roadblock to sustainability. In this talk, I will argue that large computer systems, including exascale HPC systems, will ultimately be operated based on predictive computational models obtained through data-science tools, and at that point, the intervention of humans will be limited to setting high-level goals and policies rather than performing ‘nuts-and-bolts’ operations.”
From Megaflops to Gigaflops to Teraflops to Petaflops, and soon to Exaflops, the march of HPC performance continues. This whitepaper details some of the technical challenges that must be addressed in the coming years to reach exascale computing.
A huge barrier in converting cellulose polymers to biofuel lies in removing other biomass polymers that subvert this chemical process. To overcome this hurdle, large-scale computational simulations are picking apart lignin, one of those inhibiting polymers, and its interactions with cellulose and other plant components. The results point toward ways to optimize biofuel production and […]
In this video from the 2016 HPC User Forum in Austin, Earl Joseph describes IDC’s new Exascale Tracking Study. The project will monitor the many Exascale projects around the world.
“This project will make a substantial contribution to advancing wind energy,” said Steve Hammond, NREL’s Director of Computational Science and the principal investigator on the project. “It will advance our fundamental understanding of the complex flow physics of whole wind plants, which will help further reduce the cost of electricity derived from wind energy.”
The big data analytics market has seen rapid growth in recent years. Part of this trend is the increased use of machine learning (deep learning) technologies. Indeed, machine learning speed has been drastically increased through the use of GPU accelerators. The issues facing the HPC market are similar to those facing the analytics market: efficient use of the underlying hardware. A position paper from the third annual Big Data and Extreme Computing conference (2015) illustrates the power of co-design in the analytics market.
“More than just building bigger and faster computers, high-performance computing is about how to build the algorithms and applications that run on these computers,” said School of Computational Science and Engineering (CSE) Associate Professor Edmond Chow. “We’ve brought together the top people in the U.S. with expertise in asynchronous techniques as well as experience needed to develop, test, and deploy this research in scientific and engineering applications.”
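As a rough illustration of what “asynchronous techniques” refers to here (this is not code from the project, and all names, thread counts, and the test system below are assumptions made for the example), the sketch shows a Jacobi-style linear solver in which worker threads update their own unknowns using whatever possibly stale values of the other unknowns they currently see, with no barrier between sweeps:

```python
# Minimal sketch of an asynchronous Jacobi iteration for Ax = b.
# Illustrative only: names and parameters are assumptions, not the project's code.
import threading
import numpy as np

def async_jacobi(A, b, num_workers=4, sweeps=200):
    n = len(b)
    x = np.zeros(n)                      # shared iterate, updated in place
    D = np.diag(A)                       # diagonal entries of A
    blocks = np.array_split(np.arange(n), num_workers)

    def worker(rows):
        for _ in range(sweeps):
            for i in rows:
                # Jacobi-style update using the *current* (possibly stale)
                # values of the other unknowns; no synchronization between sweeps.
                x[i] = (b[i] - A[i, :] @ x + A[i, i] * x[i]) / D[i]

    threads = [threading.Thread(target=worker, args=(rows,)) for rows in blocks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x

if __name__ == "__main__":
    # Diagonally dominant test system, for which this iteration converges.
    rng = np.random.default_rng(0)
    n = 100
    A = rng.standard_normal((n, n)) + n * np.eye(n)
    b = rng.standard_normal(n)
    x = async_jacobi(A, b)
    print("residual norm:", np.linalg.norm(A @ x - b))
```

The point of the asynchronous formulation is that no worker ever waits at a barrier for the others, which is attractive at extreme scale where synchronization and stragglers dominate cost; the trade-off is that updates may use stale data, so convergence analysis is harder than for the synchronous method.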
“Our collaborative role in these exascale applications projects stems from our laboratory’s long-term strategy in co-design and our appreciation of the vital role of high-performance computing to address national security challenges,” said John Sarrao, associate director for Theory, Simulation and Computation at Los Alamos National Laboratory. “The opportunity to take on these scientific explorations will be especially rewarding because of the strategic partnerships with our sister laboratories.”
In this podcast, the Radio Free HPC team discusses the recent news that Intel has sold its controlling stake in McAfee and that NSF has funded the next generation of XSEDE.
Yutaka Ishikawa from Riken AICS presented this talk at the HPC User Forum. “Slated for delivery sometime around 2022, the ARM-based Post-K Computer has a performance target of being 100 times faster than the original K computer within a power envelope that will only be 3-4 times that of its predecessor. RIKEN AICS has been appointed as the main organization for leading the development of the Post-K.”