“In Deep Learning what we do is try to minimize the amount of hand-engineering and get the neural nets to learn, more or less, everything. Instead of programming computers to do particular tasks, you program the computer to know how to learn. And then you can give it any old task, and the more data and the more computation you provide, the better it will get.”
In this slidecast, Pavel Shamis from ORNL and Gilad Shainer from Mellanox announce UCX, the Unified Communication X framework. “UCX is a collaboration between industry, laboratories, and academia to create an open-source, production-grade communication framework for data-centric and HPC applications.”
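To give a concrete sense of what the framework looks like to application code, here is a minimal sketch of initializing UCX's UCP layer. The calls shown (ucp_config_read, ucp_init, ucp_cleanup) come from the public UCP API, but the program itself is our illustration rather than anything from the slidecast, and it assumes a UCX installation providing ucp/api/ucp.h.

```c
/* Minimal UCX (UCP layer) initialization sketch -- an illustrative
 * example, not from the announcement. Compile with something like:
 *   cc ucx_init.c -lucp -lucs
 */
#include <ucp/api/ucp.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    ucp_config_t *config;
    ucp_params_t  params = {0};
    ucp_context_h context;

    /* Read UCX configuration from the environment (UCX_* variables). */
    if (ucp_config_read(NULL, NULL, &config) != UCS_OK) {
        fprintf(stderr, "failed to read UCX config\n");
        return EXIT_FAILURE;
    }

    /* Request tag-matching messaging, the feature MPI-style codes use. */
    params.field_mask = UCP_PARAM_FIELD_FEATURES;
    params.features   = UCP_FEATURE_TAG;

    /* Create the communication context shared by all workers. */
    if (ucp_init(&params, config, &context) != UCS_OK) {
        fprintf(stderr, "ucp_init failed\n");
        ucp_config_release(config);
        return EXIT_FAILURE;
    }
    ucp_config_release(config);

    printf("UCX context initialized\n");

    ucp_cleanup(context);
    return EXIT_SUCCESS;
}
```

A real application would go on to create workers and endpoints (ucp_worker_create and friends) and exchange messages over them; the sketch stops at context setup, which is the common starting point for any UCP program.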
“Our computing systems continue to evolve, providing significant challenges to the programming teams managing large, long-lived projects. Issues include rapidly increasing on-node parallelism, varying forms of heterogeneity, deepening memory hierarchies, growing concerns around resiliency and silent data corruption, and worsening storage bottlenecks.”
“Rapid growth in the use cases and demands for extreme computing and huge data processing is leading to the convergence of the two infrastructures. The trend toward convergence is not merely strategic but inevitable: as Moore’s law ends, sustained growth in data capabilities, not compute, is what will expand overall capacity, accelerating research and ultimately industry.”
In this podcast from the 2015 NCSA Blue Waters Symposium, Arden L. Bement discusses the Blue Waters supercomputer and the future of HPC. Bement, a former Director of the NSF, delivered the symposium keynote; he is currently the Davis A. Ross Distinguished Professor Emeritus and an Adjunct Professor in the College of Technology at Purdue University.