Today IBM announced the opening of the first OpenPOWER Research Facility (OPRF) at the Indian Institute of Technology Bombay. The OPRF will help drive the country’s National Knowledge Network initiative to interconnect all institutions of higher learning and research with a high-speed data communication network, facilitating knowledge sharing and collaborative research and innovation. “Open collaboration is driving the next wave of innovation across the entire system stack, allowing clients and organizations to develop customized solutions to capitalize on today’s emerging workloads,” said Monica Aggarwal, Vice President, India Systems Development Lab (ISDL), IBM Systems. “The OPRF will enable Indian companies, universities and government organizations to build technologies indigenously using the high-performance POWER processor, helping to drive the national IT agenda of India,” she added.
In this podcast, the Radio Free HPC team reviews the recent 2016 Intel Developer Forum. “How will Intel return to growth in the face of a declining PC market? At IDF, they put the spotlight on IoT and Machine Learning. With new threats rising from the likes of AMD and Nvidia, will Chipzilla make the right moves? Tune in to find out.”
This week Nvidia CEO Jen-Hsun Huang hand-delivered one of the company’s new DGX-1 Machine Learning supercomputers to the OpenAI non-profit in San Francisco. “The DGX-1 is a huge advance,” OpenAI Research Scientist Ilya Sutskever said. “It will allow us to explore problems that were completely unexplored before, and it will allow us to achieve levels of performance that weren’t achievable.”
Norbert Eicker from the Jülich Supercomputing Centre presented this talk at the SAI Computing Conference in London. “The ultimate goal is to reduce the burden on the application developers. To this end DEEP/-ER provides a well-accustomed programming environment that saves application developers from some of the tedious and often costly code modernization work. Confining this work to code-annotation as proposed by DEEP/-ER is a major advancement.”
Altair’s new Data Center GPU Management Tool is now available to Nvidia HPC customers. With the wide adoption of Graphics Processing Units, customers addressing vital work in fields including artificial intelligence, deep learning, self-driving cars, and virtual reality now have the ability to improve the speed and reliability of their computations through a new technology collaboration with Altair to integrate PBS Professional.
In this video from the 2016 Intel Developer Forum, Diane Bryant describes the company’s efforts to advance Machine Learning and Artificial Intelligence. Along the way, she offers a sneak peek at the Knights Mill processor, the next generation of Intel Xeon Phi slated for release sometime in 2017. “Now you can scale your machine learning and deep learning applications quickly – and gain insights more efficiently – with your existing hardware infrastructure. Popular open frameworks newly optimized for Intel, together with our advanced math libraries, make Intel Architecture-based platforms a smart choice for these projects.”
There is still time to register for the 2016 Hot Interconnects Conference, which takes place August 24-26 at Huawei in Santa Clara, California. The keynote speaker this year is Kiran Makhijan, Senior Research Scientist, Network Technology Labs at the Huawei America Research Center. Her talk is entitled: Cloudcasting – Perspectives on Virtual Routing for Cloud Centric Network Architectures.
In this video, D-Wave Systems Founder Eric Ladizinsky presents: The Coming Quantum Computing Revolution. “Despite the incredible power of today’s supercomputers, there are many complex computing problems that can’t be addressed by conventional systems. Our need to better understand everything, from the universe to our own DNA, leads us to seek new approaches to answer the most difficult questions. While we are only at the beginning of this journey, quantum computing has the potential to help solve some of the most complex technical, commercial, scientific, and national defense problems that organizations face.”
“At the heart of the Intel Xeon Phi coprocessor is a chip that does the heavy computation. The current version utilizes up to 16 channels of GDDR5 memory. An interesting note is that up to 32 memory devices can be used, by using both sides of the motherboard to hold the memory. This doubles the effective memory availability as compared to more conventional designs.”
A single issue has always defined the history of HPC systems: performance. While offloading and co-design may seem like new approaches to computing, they actually have been used, to a lesser degree, in the past as a way to enhance performance. Current co-design methods are now going deeper into cluster components than was previously possible. These new capabilities extend from the local cluster nodes into the “computing network.”