OpenPOWER Gains Momentum in Europe

Growing momentum was the watchword at the inaugural OpenPOWER European Summit this week, where the OpenPOWER Foundation made a series of announcements detailing the rapid growth, adoption, and support of OpenPOWER across the continent. “With today’s announcements by our European members, the OpenPOWER Foundation expands its reach, bringing open source, high-performing, flexible, and scalable solutions to organizations worldwide.”

Microsoft Cognitive Toolkit Updates for Deep Learning Advances

Today Microsoft released an updated version of Microsoft Cognitive Toolkit, a system for deep learning that is used to speed advances in areas such as speech and image recognition and search relevance on CPUs and Nvidia GPUs. “We’ve taken it from a research tool to something that works in a production setting,” said Frank Seide, a principal researcher at Microsoft Artificial Intelligence and Research and a key architect of Microsoft Cognitive Toolkit.

Fujitsu Develops New Architecture for Combinatorial Optimization

Today Fujitsu Laboratories announced a collaboration with the University of Toronto to develop a new computing architecture to tackle a range of real-world issues by solving combinatorial optimization problems, which involve finding the best combination of elements from an enormous set of possibilities. “This architecture employs conventional semiconductor technology with flexible circuit configurations to allow it to handle a broader range of problems than current quantum computing can manage. In addition, multiple computation circuits can be run in parallel to perform the optimization computations, enabling scalability in terms of problem size and processing speed.”
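For readers unfamiliar with this problem class, the kind of search such annealing-style hardware targets can be illustrated with a classical simulated-annealing sketch for max-cut, a standard combinatorial optimization problem. This is a generic software analogue, not Fujitsu's architecture; the function name and parameters are hypothetical:

```python
import math
import random

def max_cut_anneal(edges, n_nodes, steps=20000, t_start=2.0, t_end=0.01, seed=0):
    """Approximate max-cut by simulated annealing (illustrative sketch only)."""
    rng = random.Random(seed)
    # Assign each node to partition 0 or 1; the cut counts crossing edges.
    spins = [rng.choice((0, 1)) for _ in range(n_nodes)]

    def cut_size(s):
        return sum(1 for u, v in edges if s[u] != s[v])

    best, best_cut = list(spins), cut_size(spins)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i = rng.randrange(n_nodes)
        # Change in cut size if node i switches partition: incident edges that
        # currently cross stop crossing (-1), the rest start crossing (+1).
        delta = 0
        for u, v in edges:
            if i == u or i == v:
                other = v if i == u else u
                delta += 1 if spins[other] == spins[i] else -1
        # Metropolis rule for maximization: always take improving flips,
        # take worsening flips with probability exp(delta / t).
        if delta >= 0 or rng.random() < math.exp(delta / t):
            spins[i] = 1 - spins[i]
            cur = cut_size(spins)
            if cur > best_cut:
                best, best_cut = list(spins), cur
    return best, best_cut

# Example: a 4-cycle is bipartite, so every edge can be cut (optimum = 4).
partition, cut = max_cut_anneal([(0, 1), (1, 2), (2, 3), (3, 0)], n_nodes=4)
```

The temperature schedule is what distinguishes annealing from greedy local search: early on, worsening moves are accepted often enough to escape local optima, and acceptance tightens as the temperature falls.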

The Intelligent Industrial Revolution

“Over the past six weeks, we took NVIDIA’s developer conference on a world tour. The GPU Technology Conference (GTC) was started in 2009 to foster a new approach to high performance computing using massively parallel processing GPUs. GTC has become the epicenter of GPU deep learning — the new computing model that sparked the big bang of modern AI. It’s no secret that AI is spreading like wildfire. The number of GPU deep learning developers has leapt 25 times in just two years.”

Supercomputing the Cancer Moonshot and Beyond

In this video, Dr. Dimitri Kusnezov from the U.S. Department of Energy National Nuclear Security Administration presents: Supercomputing the Cancer Moonshot and Beyond. “How can the next generation of supercomputers unlock biomedical mysteries that will shape the future practice of medicine? Scientists behind the National Strategic Computing Initiative, a federal strategy for investing in high-performance computing, are exploring this question.”

ORNL Testing Compact “Emu Chick” Memory Server for Big Data

Today Emu Technology announced that it has delivered an Emu Chick Memory Server to Oak Ridge National Laboratory. “ORNL intends to study the system for streaming graph analysis applications, sparse multilinear computations, and other memory-intensive problems, as we continue to test the potential of emerging computing technologies to further our mission within the DOE,” said Jeffrey S. Vetter, Director of the Future Technologies Group at ORNL’s Computer Science and Mathematics Division.

Video: HPC Opportunities in Deep Learning

“This talk will provide empirical evidence from our Deep Speech work that application level performance (e.g. recognition accuracy) scales with data and compute, transforming some hard AI problems into problems of computational scale. It will describe the performance characteristics of Baidu’s deep learning workloads in detail, focusing on the recurrent neural networks used in Deep Speech as a case study. It will cover challenges to further improving performance, describe techniques that have allowed us to sustain 250 TFLOP/s when training a single model on a cluster of 128 GPUs, and discuss straightforward improvements that are likely to deliver even better performance.”
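As a quick sanity check on the figures quoted above, the sustained per-GPU throughput follows directly from the cluster number. The per-GPU peak used in the efficiency estimate below is an assumed placeholder for illustration, not a value from the talk:

```python
# Back-of-envelope arithmetic for the quoted training throughput.
def per_gpu_throughput(cluster_tflops, n_gpus):
    return cluster_tflops / n_gpus

# 250 TFLOP/s sustained across 128 GPUs, per the talk.
sustained = per_gpu_throughput(250.0, 128)
# Efficiency relative to an ASSUMED per-GPU peak (placeholder, not from the talk):
assumed_peak = 6.0
efficiency = sustained / assumed_peak
print(f"{sustained:.2f} TFLOP/s per GPU, {efficiency:.0%} of assumed peak")
```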

EPA Joins National Consortium for Data Science

“The work we do involves capturing and analyzing huge environmental data sets so that the government can make informed policy decisions that protect humans and the environment,” said Ron Hines, Associate Director for Health at the EPA’s National Health and Environmental Effects Research Laboratory in Research Triangle Park, N.C. “We have collaborated with the NCDS on some of its initiatives in the past, and having a seat at its leadership table will help us connect with leading data researchers, access data resources and infrastructure, and contribute to the development of future NCDS strategies.”

White House Releases Report on the Future of Artificial Intelligence

Today, to ready the United States for a future in which Artificial Intelligence (AI) plays a growing role, the White House is releasing a report on future directions and considerations for AI called Preparing for the Future of Artificial Intelligence. This report surveys the current state of AI, its existing and potential applications, and the questions that progress in AI raises for society and public policy.

NYU Advances Robotics with Nvidia DGX-1 Deep Learning Supercomputer

In this video, NYU researchers describe their plans to advance deep learning with their new Nvidia DGX-1 AI supercomputer. “The DGX-1 is going to be used in just about every research project we have here,” said Yann LeCun, founding director of the NYU Center for Data Science and a pioneer in the field of AI. “The students here can’t wait to get their hands on it.”