At insideHPC, we are very pleased to publish the Print ‘n Fly Guide to SC16 in Salt Lake City. We designed this Guide to be an in-flight magazine custom-tailored for your journey to SC16 — the world’s largest gathering of high performance computing professionals. “Inside this guide you will find technical features on supercomputing, HPC interconnects, and the latest developments on the road to exascale. It also has great recommendations on food, entertainment, and transportation in SLC.”
Next month at SC16, Dr. Thomas Schulthess from CSCS in Switzerland will present a talk entitled “Reflecting on the Goal and Baseline for Exascale Computing.” The presentation will take place on Wednesday, Nov. 15 at 11:15 am in Salt Palace Ballroom-EFGHIJ.
Today’s operating systems were not developed with the immense complexity of Exascale in mind. Now, researchers at Argonne National Lab are preparing for HPC’s next wave, where the operating system will have to assume new roles in synchronizing and coordinating tasks. “The Argo team is making several of its experimental OS modifications available. Beckman expects to test them on large machines at Argonne and elsewhere in the next year.”
Jack Dongarra presented this talk at the Argonne Training Program on Extreme-Scale Computing. “ATPESC provides two weeks of intensive training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future.”
This week, IEEE announced that Dr. William Camp, Director Emeritus at Sandia National Laboratories, has been named the recipient of the 2016 IEEE Computer Society Seymour Cray Computer Engineering Award “for visionary leadership of the Red Storm project, and for decades of leadership of the HPC community.” Dr. Camp spent most of his career at NNSA’s Sandia Labs, at Cray Research, and at Intel.
Over at Cluster Monkey, Douglas Eadline writes that the “free lunch” performance boost of Moore’s Law may indeed be back with the 1024-core Epiphany-V chip that will hit the market in the next few months.
Scientists at Brookhaven National Laboratory will play major roles in two of the 15 fully funded application development proposals recently selected by the DOE’s Exascale Computing Project (ECP) in its first-round funding of $39.8 million. “The team at Brookhaven will develop algorithms, language environments, and application codes that will enable scientists to perform lattice quantum chromodynamics (QCD) calculations on next-generation supercomputers.”
Ozalp Babaoglu from the University of Bologna presented this Google Talk. “At exascale, failures and errors will be frequent, with many instances occurring daily. This fact places resilience squarely as another major roadblock to sustainability. In this talk, I will argue that large computer systems, including exascale HPC systems, will ultimately be operated based on predictive computational models obtained through data-science tools, and at that point, the intervention of humans will be limited to setting high-level goals and policies rather than performing ‘nuts-and-bolts’ operations.”
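The claim that failures will occur many times a day follows from simple scaling arithmetic. A minimal sketch, using illustrative assumptions (100,000 nodes, a per-node MTBF of five years — these figures are not from the talk):

```python
# Back-of-envelope estimate of system-level failure frequency at exascale.
# The node count and per-node MTBF are illustrative assumptions, not
# numbers from Babaoglu's talk.

HOURS_PER_YEAR = 24 * 365

def system_mtbf_hours(num_nodes: int, node_mtbf_years: float) -> float:
    """Assuming independent, exponentially distributed node failures,
    system MTBF is the per-node MTBF divided by the node count."""
    return node_mtbf_years * HOURS_PER_YEAR / num_nodes

mtbf = system_mtbf_hours(num_nodes=100_000, node_mtbf_years=5.0)
failures_per_day = 24 / mtbf
print(f"System MTBF: {mtbf:.2f} hours")             # ~0.44 hours
print(f"Failures per day: {failures_per_day:.0f}")  # ~55
```

Even with generously reliable nodes, a machine at this scale sees a failure roughly every half hour — which is why the talk argues for model-driven, largely automated operation.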
From Megaflops to Gigaflops to Teraflops to Petaflops, and soon to Exaflops, HPC performance marches steadily onward. This whitepaper details some of the technical challenges that must be addressed in the coming years to reach exascale computing.
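Each name in that progression is a factor of 1,000 beyond the last. A minimal sketch of the scale, using the standard SI prefixes (not figures from the whitepaper):

```python
# SI prefixes behind the FLOPS milestones named above; each step is a
# factor of 1,000 in floating-point operations per second.
SCALES = {
    "megaflops": 1e6,
    "gigaflops": 1e9,
    "teraflops": 1e12,
    "petaflops": 1e15,
    "exaflops":  1e18,
}

names = list(SCALES)
for prev, nxt in zip(names, names[1:]):
    ratio = SCALES[nxt] / SCALES[prev]
    print(f"1 {nxt} = {ratio:,.0f} x {prev}")  # each ratio is 1,000
```

An exaflop machine is a billionfold faster than the megaflop-class supercomputers of the 1970s, which is what makes the engineering challenges in the whitepaper (power, resilience, memory bandwidth) so acute.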
A huge barrier in converting cellulose polymers to biofuel lies in removing other biomass polymers that subvert this chemical process. To overcome this hurdle, large-scale computational simulations are picking apart lignin, one of those inhibiting polymers, and its interactions with cellulose and other plant components. The results point toward ways to optimize biofuel production and […]