Can IBM Succeed with Neurosynaptic Computing?


In this Industry Perspective from Scientific Computing World, Robert Roe considers the human challenges that must be overcome if IBM’s new neurosynaptic computing paradigm is to be successful.

Robert Roe

Alongside its recent announcement that it had created a new neurosynaptic computing chip, IBM has also set up SyNAPSE University, with a curriculum of lectures, hands-on exercises, and coaching, to help interested parties build complex neurosynaptic systems.

IBM’s move reflects a growing understanding among hardware manufacturers that software is critical to the uptake of new chips and new architectures. A new architecture, drastically different from those that preceded it, inevitably raises challenges and increases the complexity of writing code that will work on the new system.

A similar task faced Nvidia with the launch of its GPUs. Nvidia has spent considerable time and resources encouraging potential users to adopt its GPU programming language, CUDA, to the point of training computer scientists and programmers to develop code for its GPUs.

But the industry is already struggling to plug a skills gap in experienced, talented programmers. Earlier this year, James Osborne, Training and Outreach Mentor at HPC Wales, discussed the skills gap in the traditional HPC industry in Scientific Computing World.

“High-performance computing is playing a key role in helping us to unravel the science that sits just beyond the horizon of our understanding,” said Osborne. “To carry out that science requires experts in a particular domain – be that biology, chemistry, physics or engineering – not only to develop specialities within their respective fields, but also to acquire the skills required to maximize the potential of the computing systems available to them.”

The TrueNorth chip redefines what is possible in the field of brain-inspired computers, in terms of size, architecture, efficiency, scalability, and design techniques.

However, IBM faces a more difficult task, because despite any superficial similarities, classical expertise in programming for CPUs or GPUs is still grounded in the von Neumann architecture, whereas IBM’s neurosynaptic chip uses a different architecture, called TrueNorth.

Nvidia implemented a series of training programmes to encourage researchers, teachers and academic institutions to adopt, teach and help develop the CUDA platform to where it is today.

Nvidia has created CUDA centers of excellence, designed to foster collaboration with institutions at the forefront of massively parallel computing research. It has also created or funded CUDA forums, teaching centres, research centres, and teaching resources and curricula built around the CUDA programming language, aimed at the next wave of software engineers.

This has created an ecosystem which drives the programming language forward, something that IBM will have to do if it wants its brand of neurosynaptic computing to become widespread.

The payoff could be vast, according to Dr Dharmendra Modha, manager of IBM’s cognitive computing initiative. He said: “The architecture can solve a wide class of problems from vision, audition, and multi-sensory fusion, and has the potential to revolutionize the computer industry by integrating brain-like capability into devices where computation is constrained by power and speed.”

Although TrueNorth differs from the von Neumann architecture that has been used almost universally since its creation in the 1940s, it comes with a new programming language, developed for the TrueNorth architecture by IBM Research and intended as a ‘Fortran’ for synaptic computing chips. IBM Research – Almaden released a paper on the language entitled ‘Cognitive Computing Programming Paradigm: A Corelet Language for Composing Networks of Neurosynaptic Cores’.

The paper states: “The sequential programming paradigm of the von-Neumann architecture is wholly unsuited for TrueNorth. Therefore, as our main contribution, we develop a new programming paradigm that permits construction of complex cognitive algorithms and applications while being efficient for TrueNorth and effective for programmer productivity.”

It continues: “A TrueNorth program is a complete specification of a network of neurosynaptic cores, and all external inputs and outputs to the network, including the specification of the physiological properties (neuron parameters, synaptic weights) and the anatomy (inter- and intra-core connectivity). The job of a TrueNorth programmer is to translate a desired computation into a specification that efficiently executes on TrueNorth, namely, a completely specified network of neurosynaptic cores, its inputs, and its outputs. In this context, the linear programming paradigm of the von-Neumann architecture is not ideal for TrueNorth programs.”
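To make that description more concrete, here is a minimal sketch, written in Python rather than IBM’s actual Corelet environment, of what such a specification might look like in spirit: each core pairs a binary synaptic crossbar (the ‘anatomy’) with per-neuron parameters such as threshold and leak (the ‘physiology’), and a program is simply a set of cores plus the wiring between them. The Core class and its parameter names below are hypothetical illustrations, not IBM’s API.

```python
# Toy stand-in for the kind of specification described above; the Core class
# and its parameters (threshold, leak, crossbar) are hypothetical, not IBM's
# actual Corelet API.
import random

class Core:
    """A toy neurosynaptic core: input axons feed a binary synaptic crossbar
    that drives simple integrate-and-fire neurons."""

    def __init__(self, n_axons=4, n_neurons=4, threshold=2, leak=0):
        # "Anatomy": which axon connects to which neuron (binary crossbar).
        self.crossbar = [[random.randint(0, 1) for _ in range(n_neurons)]
                         for _ in range(n_axons)]
        # "Physiology": per-neuron parameters and state.
        self.threshold = threshold
        self.leak = leak
        self.potential = [0] * n_neurons

    def tick(self, axon_spikes):
        """Advance one time step: integrate incoming spikes, apply the leak,
        and emit a spike wherever the potential crosses the threshold."""
        out = []
        for j in range(len(self.potential)):
            self.potential[j] += sum(spike * self.crossbar[i][j]
                                     for i, spike in enumerate(axon_spikes))
            self.potential[j] -= self.leak
            if self.potential[j] >= self.threshold:
                out.append(1)
                self.potential[j] = 0  # reset after firing
            else:
                out.append(0)
        return out

# A "program" in this toy model is nothing more than the cores and their
# wiring: here core_a's output spikes become core_b's input axons.
core_a, core_b = Core(), Core()
spikes_a = core_a.tick([1, 0, 1, 1])
print("core_a ->", spikes_a, "| core_b ->", core_b.tick(spikes_a))
```

The point of the sketch is not the neuron model, which is heavily simplified, but the shape of the program: a declarative specification of physiology, anatomy, and inter-core wiring rather than a sequence of instructions.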

Modha said: “TrueNorth has a parallel, distributed, modular, scalable, fault-tolerant, flexible architecture that integrates computation, communication, and memory and has no clock. It is fair to say that TrueNorth completely redefines what is now possible in the field of brain-inspired computers, in terms of size, architecture, efficiency, scalability, and chip design techniques.”

Similarly, IBM’s new programming model breaks the mould of sequential operation underlying today’s von Neumann architecture. It is instead tailored for this new class of distributed, highly interconnected, asynchronous, parallel, large-scale cognitive computing architectures.

“Architectures and programs are closely intertwined and a new architecture necessitates a new programming paradigm,” said Modha. “We are working to create a Fortran for synaptic computing chips. While complementing today’s computers, this will bring forth a fundamentally new technological capability in terms of programming and applying emerging learning systems.”

The announcement, reported in Scientific Computing World at the beginning of August, focused on IBM’s success in creating the first neurosynaptic computer chip to achieve one million programmable neurons, 256 million programmable synapses, and 46 billion synaptic operations per second per watt.
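As a rough sanity check on those headline figures, the widely reported layout of the chip, 4,096 cores with 256 neurons and up to 256 incoming synapses per neuron (a detail from IBM’s announcement rather than the paragraph above), reproduces the neuron and synapse counts:

```python
# Back-of-the-envelope check of the headline figures; the 4,096-core,
# 256-neurons-per-core breakdown comes from IBM's TrueNorth announcement.
cores = 4096
neurons_per_core = 256
synapses_per_neuron = 256       # each neuron can take input from 256 axons

neurons = cores * neurons_per_core          # 1,048,576 -> "one million"
synapses = neurons * synapses_per_neuron    # 268,435,456 -> "256 million"
print(f"{neurons:,} programmable neurons")
print(f"{synapses:,} programmable synapses")
```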

IBM developed the chip as a result of SyNAPSE, a programme funded by the US Defense Advanced Research Projects Agency (DARPA) to develop electronic neuromorphic machine technology that can eventually scale to biological levels.

Having completed Phase 0, Phase 1, and Phase 2, IBM and its collaborators (Cornell University and iniLabs, Ltd) have recently been awarded approximately $12 million in new funding from DARPA for Phase 3 of the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project, bringing the cumulative funding to approximately $53 million.

Fifty-three million dollars represents a considerable investment by any standard, and a clear sign that this is seen, by DARPA at least, as an important area of research for the future of processing big data effectively.

This story appears here as part of a cross-publishing agreement with Scientific Computing World.