Norbert Eicker from the Jülich Supercomputing Centre presented this talk at the SAI Computing Conference in London. “The ultimate goal is to reduce the burden on the application developers. To this end DEEP/-ER provides a well-accustomed programming environment that saves application developers from some of the tedious and often costly code modernization work. Confining this work to code-annotation as proposed by DEEP/-ER is a major advancement.”
In this video from the 2016 Intel Developer Forum, Diane Bryant describes the company’s efforts to advance Machine Learning and Artificial Intelligence. Along the way, she offers a sneak peek at the Knights Mill processor, the next generation of Intel Xeon Phi slated for release sometime in 2017. “Now you can scale your machine learning and deep learning applications quickly – and gain insights more efficiently – with your existing hardware infrastructure. Popular open frameworks newly optimized for Intel, together with our advanced math libraries, make Intel Architecture-based platforms a smart choice for these projects.”
In this video, D-Wave Systems Founder Eric Ladizinsky presents: The Coming Quantum Computing Revolution. “Despite the incredible power of today’s supercomputers, there are many complex computing problems that can’t be addressed by conventional systems. Our need to better understand everything, from the universe to our own DNA, leads us to seek new approaches to answer the most difficult questions. While we are only at the beginning of this journey, quantum computing has the potential to help solve some of the most complex technical, commercial, scientific, and national defense problems that organizations face.”
In this video from the 2016 Blue Waters Symposium, Andriy Kot from NCSA presents: Parallel I/O Best Practices.
Peter Ungaro presented this talk at the 2016 Blue Waters Symposium. “Built by Cray, Blue Waters is one of the most powerful supercomputers in the world, and is the fastest supercomputer on a university campus. Scientists and engineers across the country use the computing and data power of Blue Waters to tackle a wide range of challenging problems, from predicting the behavior of complex biological systems to simulating the evolution of the cosmos.”
“High performance computing has transformed how science and engineering research is conducted. Answering a question in 30 minutes that used to take 6 months can quickly change the way one asks questions. Large computing facilities provide access to some of the largest computing, data, and network resources in the world. Indeed, the DOE complex has the highest concentration of supercomputing capability in the world. However, by nature of their existence, making use of the largest computers in the world can be a challenging and unique task. This talk will discuss how supercomputers are unique and explain how that impacts their use.”
Nikkei in Japan writes that the Post K supercomputer is facing a 1-2 year delay in deployment as part of the Flagship2020 project. Originally targeted for completion in 2020, the ARM-based Post K supercomputer has a performance target of being 100 times faster than the original K computer within a power envelope only 3-4 times that of its predecessor. Nikkei cites semiconductor development issues as the reason for the delay.
Ed Seidel from NCSA presented this talk at The Digital Future conference in Berlin. “The National Center for Supercomputing Applications (NCSA) is a hub of transdisciplinary research and digital scholarship where University of Illinois faculty, staff, and students, and collaborators from around the globe, unite to address research grand challenges for the benefit of science and society. NCSA is also an engine of economic impact for the state and the nation, helping companies address computing and data challenges and providing hands-on training for undergraduate and graduate students and post-docs.”
In this video, Dan Stanzione from TACC describes how the Stampede II supercomputer will drive computational science. “Announced in June, a $30 million NSF award to the Texas Advanced Computing Center will be used to acquire and deploy a new large-scale supercomputing system, Stampede II, as a strategic national resource to provide high-performance computing capabilities for thousands of researchers across the U.S. This award builds on technology and expertise from the Stampede system first funded by NSF in 2011 and will deliver a peak performance of up to 18 Petaflops, over twice the overall system performance of the current Stampede system.”
In this video from The Digital Future conference in Berlin, Leslie Greengard from the Simons Center for Data Analysis presents: Modeling Physical Systems in Complex Geometry. Greengard is an American mathematician, physician, and computer scientist. He is co-inventor of the fast multipole method, recognized as one of the top ten algorithms of the 20th century.