In this video from the 2016 MSST Conference, Yoonho Park from IBM presents: Storage Performance Modeling for Future Systems. “The burst buffer is an intermediate, high-speed layer of storage that is positioned between the application and the parallel file system (PFS), absorbing the bulk data produced by the application at a rate a hundred times higher than the PFS, while seamlessly draining the data to the PFS in the background.”
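The quote above describes an architectural pattern that is easy to see in miniature. Below is a minimal Python sketch of the absorb-fast, drain-slow idea: the application writes into a fast intermediate tier and moves on, while a background thread trickles the data out to the much slower parallel file system. The delays, block names, and queue-based design are illustrative assumptions for the sketch, not IBM's actual architecture.

```python
# Sketch of the absorb-fast, drain-slow pattern a burst buffer implements.
# The rates and helper names here are assumptions, not IBM's design.
import queue
import threading
import time

burst_buffer = queue.Queue()   # the fast intermediate storage tier
BB_WRITE_DELAY = 0.001         # fast absorb rate (simulated)
PFS_WRITE_DELAY = 0.1          # ~100x slower parallel file system (simulated)

def application(num_blocks):
    """The application dumps bulk data at burst-buffer speed and returns."""
    for i in range(num_blocks):
        time.sleep(BB_WRITE_DELAY)        # absorbed quickly by the fast tier
        burst_buffer.put(f"block-{i}")
    burst_buffer.put(None)                # sentinel: no more data coming

def drainer():
    """A background thread seamlessly drains the buffer to the PFS."""
    while True:
        block = burst_buffer.get()
        if block is None:
            break
        time.sleep(PFS_WRITE_DELAY)       # slow PFS write
        print(f"drained {block} to the PFS")

t = threading.Thread(target=drainer)
t.start()
application(5)    # the application finishes long before the drain does
t.join()
```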
In this podcast, the Radio Free HPC team recaps the ASC16 Student Cluster Competition in China and the 2016 MSST Conference in Santa Clara. Dan spent a week in Wuxi interviewing ASC16 student teams, and he came back impressed with the Linpack benchmark tricks from the team at Zhejiang University, which set a new student Linpack record of 12.03 TFlop/s. Meanwhile, Rich was in Santa Clara for the MSST conference, where he captured two days of talks on Mass Storage Technologies.
In its latest move to build a practical quantum computer, IBM Research is for the first time making quantum computing available in the cloud to anyone interested in hands-on access to the company’s advanced experimental quantum system. “The cloud-enabled quantum computing platform, called IBM Quantum Experience, will allow users to run algorithms and experiments on IBM’s quantum processor, work with the individual quantum bits (qubits), and explore tutorials and simulations around what might be possible with quantum computing.”
Today the Information Technology and Innovation Foundation (ITIF) published a new report that urges U.S. policymakers to take decisive steps to ensure the United States continues to be a world leader in high-performance computing. “While America is still the world leader, other nations are gaining on us, so the U.S. cannot afford to rest on its laurels. It is important for policymakers to build on efforts the Obama administration has undertaken to ensure the U.S. is not outpaced.”
Today Cambridge University spin-out Optalysys announced that it has been awarded a $350k grant for a 13-month project from the US Defense Advanced Research Projects Agency (DARPA). The project will see the company advance its research into developing and applying its optical co-processing technology to solving complex mathematical equations. These equations are relevant to large-scale scientific and engineering simulations such as weather prediction and aerodynamics.
The good folks at the European Network on High Performance and Embedded Architecture and Compilation (HiPEAC) have launched a call for contributions to the 2017 edition of the HiPEAC Vision, which will set out the way forward for computing systems over the next ten years. “Published every two years, HiPEAC’s definitive roadmap provides guidance for policy makers and technologists on key issues in the area of computing systems, such as security, reliability and energy efficiency.”
Today ISC 2016 announced that five renowned experts in computational science will participate in their new Distinguished Speaker series. Topics will include exascale computing efforts in the US, the next supercomputers in development in Japan and China, cognitive computing advancements at IBM, and quantum computing research at NASA.
The Human Brain Project (HBP) is developing a shared European research infrastructure to examine the organization of the brain through detailed analyses and simulations, with the aim of combating neurological and psychiatric disorders. For this purpose, the HBP is creating new information technologies, such as neurosynaptic processors, which are based on the principles governing how the human brain works.
Bo Ewald from D-Wave Systems presented this talk at the HPC Advisory Council Switzerland Conference. “This talk will provide an introduction to quantum computing and briefly review different approaches to implementing a quantum computer. D-Wave’s approach to implementing a quantum annealing architecture and the software and programming environment will be discussed. Finally, some potential applications of quantum computing will also be addressed.”
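As background for the talk description above (standard quantum annealing notation, not drawn from the talk itself): a quantum annealer such as D-Wave's searches for low-energy configurations of an Ising model,

H(s) = \sum_i h_i s_i + \sum_{i<j} J_{ij} s_i s_j, \qquad s_i \in \{-1, +1\},

where the programmer encodes a problem into the biases h_i and couplings J_{ij}, and the machine returns spin assignments s that approximately minimize H.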
“The Exascale computing challenge is the current Holy Grail for high performance computing. It envisages building HPC systems capable of 10^18 floating point operations per second within a power budget in the range of 20-40 MW. To achieve this feat, several barriers need to be overcome. These barriers or “walls” are not completely independent of each other, but they present a lens through which HPC system design can be viewed as a whole, and its composing sub-systems optimized to overcome the persistent bottlenecks.”
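A quick back-of-the-envelope calculation (our arithmetic, not part of the quoted text) shows why the power wall dominates exascale design: delivering 10^18 FLOP/s within a 20 MW budget requires an efficiency of

\frac{10^{18}\ \text{FLOP/s}}{20 \times 10^{6}\ \text{W}} = 5 \times 10^{10}\ \text{FLOP/s per watt} = 50\ \text{GFLOPS/W}

(or 25 GFLOPS/W at the 40 MW end of the range), well beyond the most power-efficient systems on the Green500 list at the time.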