MIT helps move Neural Nets back to Analog

MIT researchers have developed a special-purpose chip that increases the speed of neural-network computations by three to seven times over its predecessors, while reducing power consumption by 94 to 95 percent. “The computation these algorithms do can be simplified to one specific operation, called the dot product. Our approach was, can we implement this dot-product functionality inside the memory so that you don’t need to transfer this data back and forth?”
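
The dot product referred to here is the multiply-accumulate step at the heart of every neural-network layer. As a rough illustration only (a minimal NumPy sketch of the math, not the chip's in-memory analog circuitry):

```python
import numpy as np

# Each output neuron of a layer is one dot product: the input
# activations multiplied elementwise by that neuron's weights,
# then summed. The MIT chip computes this sum inside the memory
# array itself, so weights need not shuttle to a separate ALU.
def layer_forward(x, W, b):
    return W @ x + b  # each row of W dotted with x is one neuron

x = np.random.rand(128)       # input activations
W = np.random.rand(64, 128)   # 64 neurons, 128 weights each
b = np.zeros(64)
y = layer_forward(x, W, b)
```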

Agenda Posted for April HPC User Forum in Tucson

The HPC User Forum has posted the speaker agenda for its upcoming meeting in Tucson. Hosted by Hyperion Research, the event takes place April 16-18 at Loews Ventana Canyon. “The April meeting will explore the status and prospects for quantum computing and the use of HPC for environmental research, especially natural disasters such as earthquakes and the recent California wildfires. As always, the meeting will also look at new developments in HPDA-AI, cloud computing and other areas of continuing interest to the HPC community. A special session will look at the growing field of processors and accelerators supporting HPC systems.”

MIT Paper Sheds Light on How Neural Networks Think

MIT researchers have developed a new general-purpose technique that sheds light on the inner workings of neural networks trained to process language. “During training, a neural net continually readjusts thousands of internal parameters until it can reliably perform some task, such as identifying objects in digital images or translating text from one language to another. But on their own, the final values of those parameters say very little about how the neural net does what it does.”
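
To make the quoted point concrete, here is a minimal sketch of parameters being readjusted during training (ordinary gradient descent on a toy linear model, not the paper's technique), and of why the final values alone reveal so little:

```python
import numpy as np

# A minimal sketch of "readjusting internal parameters":
# plain gradient descent nudges the weights of a toy linear
# model until it reliably maps inputs to outputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])   # hypothetical ground truth
y = X @ true_w

w = np.zeros(3)                        # the model's internal parameters
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(X)  # gradient of mean squared error
    w -= 0.1 * grad                    # a small adjustment each step

# w now fits the task, but the three numbers by themselves say
# nothing about HOW the model computes its answers -- the gap
# the MIT interpretability work targets.
print(w)
```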

Announcing the New MIT–IBM Watson AI Lab

Today IBM announced a 10-year, $240 million investment to create the MIT–IBM Watson AI Lab. “The combined MIT and IBM talent dedicated to this new effort will bring formidable power to a field with staggering potential to advance knowledge and help solve important challenges.”

MIT Professor Runs Record Google Compute Engine job with 220K Cores

Over at the Google Blog, Alex Barrett writes that an MIT math professor recently broke the record for the largest-ever Compute Engine cluster, with 220,000 cores on Preemptible VMs. According to Google, this is the largest known HPC cluster to ever run in the public cloud.

Video: A Look at the Lincoln Laboratory Supercomputing Center

“Guided by the principles of interactive supercomputing, Lincoln Laboratory was responsible for a lot of the early work on machine learning and neural networks. We now have a world-class group investigating speech and video processing as well as machine language topics including theoretical foundations, algorithms and applications. In the process, we are changing the way we go about computing. Over the years we have tended to assign a specific system to service a discrete market, audience or project. But today those once highly specialized systems are becoming increasingly heterogeneous. Users are interacting with computational resources that exhibit a high degree of autonomy. The system, not the user, decides on the computer hardware and software that will be used for the job.”

MIT Lincoln Laboratory Takes the Mystery Out of Supercomputing

“Many supercomputer users, like the big DOE labs, are implementing these next generation systems. They are now engaged in significant code modernization efforts to adapt their key present and future applications to the new processing paradigm, and to bring their internal and external users up to speed. For some in the HPC community, this creates unanticipated challenges along with great opportunities.”

Video: JuMP – A Modeling Language for Mathematical Optimization

Miles Lubin from MIT presented this talk at the CSGF Annual Program Review. “JuMP is an open-source software package in Julia for modeling optimization problems. In less than three years since its release, JuMP has received more than 50 citations and has been used in at least 10 universities for teaching. We tell the story of how JuMP was developed, explain the role of the DOE CSGF and high-performance computing, and discuss ongoing extensions to JuMP developed in collaboration with DOE labs.”
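
JuMP itself is a Julia package, so as a stand-in, here is a hypothetical toy problem of the kind such modeling tools express, solved in Python with SciPy's linprog:

```python
from scipy.optimize import linprog

# Toy linear program of the class JuMP models:
# maximize x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0.
# linprog minimizes, so we negate the objective.
c = [-1.0, -2.0]
A_ub = [[1.0, 1.0],   # x + y  <= 4
        [1.0, 3.0]]   # x + 3y <= 6
b_ub = [4.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal point (3, 1) and objective value 5
```

A modeling layer like JuMP lets you state the objective and constraints in algebraic form and swap solvers freely, rather than hand-building the coefficient matrices as done here.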

Video: HPC in Earth & Planetary Science using MITgcm

Christopher Hill from MIT presented this talk at the HPC User Forum. “The MITgcm (MIT General Circulation Model) is a numerical model designed for study of the atmosphere, ocean, and climate. Its non-hydrostatic formulation enables it to simulate fluid phenomena over a wide range of scales; its adjoint capability enables it to be applied to parameter and state estimation problems. By employing fluid isomorphisms, one hydrodynamical kernel can be used to simulate flow in both the atmosphere and ocean.”

XSEDE Powers Polymer Research at MIT

Researchers at MIT are using XSEDE resources to study polymers, the long-chain molecules that make up plastics, rubber, and more.