Today SC16 announced that the conference will feature 38 high-quality workshops to complement the overall Technical Program events, expand the knowledge base of its subject area, and extend its impact by providing greater depth of focus.
Today the U.S. Department of Energy announced that it will invest $16 million over the next four years to accelerate the design of new materials through the use of supercomputers. “Our simulations will rely on current petascale and future exascale capabilities at DOE supercomputing centers. To validate the predictions about material behavior, we’ll conduct experiments and use the facilities of the Advanced Photon Source, Spallation Neutron Source and the Nanoscale Science Research Centers.”
Today Cycle Computing announced its continued involvement in optimizing research spearheaded by NASA’s Center for Climate Simulation (NCCS) and the University of Minnesota. Currently, a biomass measurement effort is underway in a coast-to-coast band of Sub-Saharan Africa. An over 10 million square kilometer region of Africa’s trees, a swath of acreage bigger than the entirety […]
Peter Ungaro presented this talk at the 2016 Blue Waters Symposium. “Built by Cray, Blue Waters is one of the most powerful supercomputers in the world, and is the fastest supercomputer on a university campus. Scientists and engineers across the country use the computing and data power of Blue Waters to tackle a wide range of challenging problems, from predicting the behavior of complex biological systems to simulating the evolution of the cosmos.”
“I am honored to have been asked to drive NCSA’s continuing mission as a world-class, integrative center for transdisciplinary convergent research, education, and innovation,” said Gropp. “Embracing advanced computing and domain collaborations across the University of Illinois at Urbana-Champaign campus and ensuring scientific communities have access to advanced digital resources will be at the heart of these efforts.”
Researchers at the University of Oxford have achieved a quantum logic gate with record-breaking 99.9% precision, reaching the theoretical benchmark required to build a quantum computer. “An analogy from conventional computing hardware would be that we have finally worked out how to build a transistor with good enough performance to make logic circuits, but the technology for wiring thousands of those transistors together to build an electronic computer is still in its infancy.”
“High performance computing has transformed how science and engineering research is conducted. Answering a question in 30 minutes that used to take 6 months can quickly change the way one asks questions. Large computing facilities provide access to some of the world’s largest computing, data, and network resources. Indeed, the DOE complex has the highest concentration of supercomputing capability in the world. However, by their very nature, making use of the largest computers in the world can be a challenging and unique task. This talk will discuss how supercomputers are unique and explain how that impacts their use.”
In this podcast, the Radio Free HPC team looks at HPE’s pending acquisition of SGI. “Will the acquisition be good for SGI and HP customers? Our RFHPC team is in unprecedented agreement that indeed it will. The key, however, to HPE’s success will be keeping the SGI people. Rich thinks this acquisition will potentially give HPE the engineering talent it needs to compete with Cray at the high end of the market.”
Nikkei in Japan writes that the Post K supercomputer is facing a 1-2 year deployment delay as part of the Flagship2020 project. Originally targeted for completion in 2020, the ARM-based Post K supercomputer has a performance target of being 100 times faster than the original K computer within a power envelope only 3-4 times that of its predecessor. Nikkei cites semiconductor development issues as the reason for the project delay.
“Between 2011 and 2016, eight projects, with a total budget of more than €50 million, were selected for this first push in the direction of the next-generation supercomputer: CRESTA, DEEP and DEEP-ER, EPiGRAM, EXA2CT, Mont-Blanc (I + II) and Numexas. The challenges they addressed in their projects were manifold: innovative approaches to algorithm and application development, system software, energy efficiency, tools and hardware design took centre stage.”