MIT Paper Sheds Light on How Neural Networks Think

MIT researchers have developed a new general-purpose technique that sheds light on the inner workings of neural nets trained to process language. “During training, a neural net continually readjusts thousands of internal parameters until it can reliably perform some task, such as identifying objects in digital images or translating text from one language to another. But on their own, the final values of those parameters say very little about how the neural net does what it does.”
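The quoted point is easy to see in a toy example. The sketch below (generic PyTorch, not the MIT technique itself; the model, task, and hyperparameters are illustrative assumptions) trains a tiny network until it fits a task, then prints the learned parameters, which are just opaque numbers:

```python
import torch
import torch.nn as nn

# Generic PyTorch sketch, NOT the MIT interpretability technique:
# training readjusts every parameter until the task is learned, but the
# final raw values say little about how the net computes.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

X = torch.randn(256, 2)
y = X[:, :1] * X[:, 1:]            # toy task: predict the product of the inputs

for step in range(500):            # each step nudges all parameters slightly
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()

print(loss.item())                 # the loss should be small: task learned
for p in model.parameters():       # ...yet the learned values are inscrutable
    print(p.data)
```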

Video: What is Wrong with Convolutional Neural Nets?

Geoffrey Hinton from the University of Toronto gave this talk at the Vector Institute. “What is Wrong with ‘standard’ Convolutional Neural Nets? They have too few levels of structure: Neurons, Layers, and Whole Nets. We need to group neurons in each layer in ‘capsules’ that do a lot of internal computation and then output a compact result.”
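Hinton's grouping idea can be sketched in a few lines. The code below is a minimal illustration only, not Hinton's full capsule network with routing: it groups a layer's output neurons into fixed-size capsules and applies the squash nonlinearity from Sabour et al. (2017) so each capsule emits a compact vector; the class name, sizes, and layer choices are assumptions for the example.

```python
import torch
import torch.nn as nn

class CapsuleGroup(nn.Module):
    """Sketch of the capsule idea: group neurons into capsules, each doing
    internal computation and emitting a compact vector output. This omits
    the routing-by-agreement of a full capsule network."""
    def __init__(self, in_features, num_capsules, capsule_dim):
        super().__init__()
        # One shared linear map provides each capsule's "internal computation".
        self.fc = nn.Linear(in_features, num_capsules * capsule_dim)
        self.num_capsules = num_capsules
        self.capsule_dim = capsule_dim

    def squash(self, s, eps=1e-8):
        # Squash nonlinearity (Sabour et al., 2017): the output vector's
        # length (in (0, 1)) encodes confidence, its direction the "pose".
        norm_sq = (s ** 2).sum(dim=-1, keepdim=True)
        return (norm_sq / (1.0 + norm_sq)) * s / (norm_sq.sqrt() + eps)

    def forward(self, x):
        # Reshape flat activations into (batch, num_capsules, capsule_dim).
        s = self.fc(x).view(-1, self.num_capsules, self.capsule_dim)
        return self.squash(s)

caps = CapsuleGroup(in_features=128, num_capsules=10, capsule_dim=8)
out = caps(torch.randn(4, 128))    # -> (4, 10, 8): ten compact vector outputs
```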