New Memristors at MIT: Networks of Artificial Brain Synapses for Neuromorphic Devices

A possible glimpse of a future form of high-performance edge computing: networks of artificial brain synapses developed by engineers at the Massachusetts Institute of Technology are showing promise as a new memristor design for neuromorphic devices, which mimic the neural architecture of the human brain. Published today in Nature Nanotechnology, the results of […]

Video: Heterogeneous Computing at the Large Hadron Collider

In this video, Philip Harris from MIT presents: Heterogeneous Computing at the Large Hadron Collider. “Only a small fraction of the 40 million collisions per second at the Large Hadron Collider are stored and analyzed due to the huge volumes of data and the compute power required to process it. This project proposes a redesign of the algorithms using modern machine learning techniques that can be incorporated into heterogeneous computing systems, allowing more data to be processed and thus larger physics output and potentially foundational discoveries in the field.”
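As a rough sketch of the idea (not the project’s actual algorithms), the Python below scores each incoming collision event with a trained classifier and keeps only the small fraction that clears a threshold. The logistic model, feature layout, and threshold are hypothetical stand-ins.

```python
# Minimal trigger sketch: score events with a trained model and keep
# only the high scorers; everything else is discarded.
import numpy as np

rng = np.random.default_rng(0)

def score_event(features, weights, bias):
    """Hypothetical trained classifier: logistic score in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

# Stand-ins for a model trained offline on simulated collisions.
weights = rng.normal(size=4)   # e.g. energy, momentum, multiplicity, angle
bias = -2.0                    # biased low, since most events are rejected

def trigger(event_stream, threshold=0.9):
    """Yield only the events the model scores above the threshold."""
    for event in event_stream:
        if score_event(event, weights, bias) >= threshold:
            yield event

events = rng.normal(size=(40_000, 4))   # toy stand-in for collision features
kept = list(trigger(events))
print(f"kept {len(kept)} of {len(events)} events")
```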

Visualizing an Entire Brain at Nanoscale Resolution

In this video from SC19, Berkeley researchers visualize an entire brain at nanoscale resolution. The work was published in the journal Science. “At the core of the work is the combination of expansion microscopy and lattice light-sheet microscopy (ExLLSM) to capture large super-resolution image volumes of neural circuits using high-speed, nano-scale molecular microscopy.”

Deep Learning State of the Art in 2020

Lex Fridman gave this talk as part of the MIT Deep Learning series. “This lecture is on the most recent research and developments in deep learning, and hopes for 2020. This is not intended to be a list of SOTA benchmark results, but rather a set of highlights of machine learning and AI innovations and progress in academia, industry, and society in general.”

FPGAs and the Road to Reprogrammable HPC

In this special guest feature from Scientific Computing World, Robert Roe writes that FPGAs provide an early insight into possible architectural specialization options for HPC and machine learning. “Architectural specialization is one option to continue to improve performance beyond the limits imposed by the slowdown in Moore’s Law. Using application-specific hardware to accelerate an application, or part of one, allows the use of hardware that can be much more efficient, both in terms of power usage and performance.”

Video: MIT Makes Billion-Dollar Bet on AI and Machine Learning

Today MIT announced a new $1 billion commitment to address the global opportunities and challenges presented by the prevalence of computing and the rise of artificial intelligence. The initiative marks the single largest investment in computing and AI by an American academic institution, and will help position the United States to lead the world in preparing for the rapid evolution of computing and AI. “As we look to the future, we must utilize these important technologies to shape our world for the better and harness their power as a force for social good.”

Erik Brynjolfsson from MIT to Keynote SC18

Today the SC18 conference announced that Erik Brynjolfsson from MIT will deliver the keynote address on Tuesday, Nov. 13 in Dallas. “We set out to inspire our attendees with the most compelling thought leaders,” said SC18 General Chair Ralph McEldowney. “Erik’s insights and perspectives will make a weighty centerpiece for the SC18 experience. We are thrilled to welcome such an agile thinker and astute articulator of complex landscapes to our keynote stage,” McEldowney said.

MIT helps move Neural Nets back to Analog

MIT researchers have developed a special-purpose chip that increases the speed of neural-network computations by three to seven times over its predecessors, while reducing power consumption by 94 to 95 percent. “The computation these algorithms do can be simplified to one specific operation, called the dot product. Our approach was, can we implement this dot-product functionality inside the memory so that you don’t need to transfer this data back and forth?”
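To see why the dot product is so central, here is a minimal sketch (illustrative sizes and values, not a description of the MIT chip) showing that each neuron in a fully connected layer computes exactly one dot product between its weight vector and the input:

```python
# Each neuron's output in a fully connected layer is one dot product.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=256)            # input activations
W = rng.normal(size=(128, 256))     # one weight vector per neuron

# out[i] = W[i] . x, repeated for every neuron in the layer
out = np.array([np.dot(W[i], x) for i in range(W.shape[0])])

# The same step as a single matrix-vector product; in-memory analog
# hardware aims to perform this without moving W back and forth.
assert np.allclose(out, W @ x)
```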

Agenda Posted for April HPC User Forum in Tucson

The HPC User Forum has posted the speaker agenda for its upcoming meeting in Tucson. Hosted by Hyperion Research, the event takes place April 16-18 at Loews Ventana Canyon. “The April meeting will explore the status and prospects for quantum computing and the use of HPC for environmental research, especially natural disasters such as earthquakes and the recent California wildfires. As always, the meeting will also look at new developments in HPDA-AI, cloud computing and other areas of continuing interest to the HPC community. A special session will look at the growing field of processors and accelerators supporting HPC systems.”

MIT Paper Sheds Light on How Neural Networks Think

MIT researchers have developed a new general-purpose technique that sheds light on the inner workings of neural nets trained to process language. “During training, a neural net continually readjusts thousands of internal parameters until it can reliably perform some task, such as identifying objects in digital images or translating text from one language to another. But on their own, the final values of those parameters say very little about how the neural net does what it does.”
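As a toy illustration of that point (not the MIT technique itself), the sketch below trains a tiny two-layer network by gradient descent; it learns the task, yet the final parameter values printed at the end are opaque on their own:

```python
# Toy network: training readjusts parameters until the task is learned,
# but the trained values alone don't explain how it works.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)        # hidden XOR-like rule

W1, b1 = rng.normal(size=(2, 8)) * 0.5, np.zeros(8)   # 2 -> 8 -> 1 network
W2, b2 = rng.normal(size=(8, 1)) * 0.5, np.zeros(1)

for _ in range(3000):                            # continually readjust
    h = np.tanh(X @ W1 + b1)                     # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))         # predictions
    g = (p - y[:, None]) / len(y)                # output-layer gradient
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (1 - h**2)                 # backprop through tanh
    gW1, gb1 = X.T @ gh, gh.sum(0)
    for P, G in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        P -= 1.0 * G                             # gradient-descent step

print("accuracy:", np.mean((p[:, 0] > 0.5) == y))
print(W1.round(2))   # the trained parameters, opaque on their own
```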