
EPCC Selects Cerebras Systems AI Supercomputer

Los Altos, Calif. & Edinburgh, UK — Cerebras Systems, the high-performance artificial intelligence (AI) compute company, and EPCC, the supercomputing centre at the University of Edinburgh, today announced the selection of what Cerebras said is the world’s fastest AI computer, the Cerebras CS-1, for EPCC’s new international data facility for the Edinburgh and southeastern […]

Cambridge Quantum Reports Progress toward ‘Meaning Aware’ NLP

UK-based Cambridge Quantum Computing (CQC), a quantum software and algorithm specialist, today released research papers on its use of quantum computing to develop intuitive, “meaning-aware” natural language processing (QNLP).

A focal point of artificial intelligence inquiry, contextual NLP, which comprehends emotion, nuance, and even humor, is NLP’s most advanced and challenging form.

Demonstrates Natural Language Understanding Inspired by Neuroscience

In this video, CEO Francisco Webber demonstrates how the company’s software running on Xilinx FPGAs breaks new ground in the field of natural language understanding (NLU). According to the company, it “delivers AI-based Natural Language Understanding solutions which are quicker and easier to implement and more capable than current approaches. The company’s patented approach enables enterprises to more effectively search, extract, annotate and analyze key information from any kind of unstructured text.”

Deep Learning for Natural Language Processing – Choosing the Right GPU for the Job

In this new whitepaper from our friends over at Exxact Corporation, we take a look at the important topic of deep learning for Natural Language Processing (NLP) and choosing the right GPU for the job. Focus is given to the latest developments in neural networks and deep learning systems, in particular a neural network architecture called transformers. Researchers have shown that transformer networks are particularly well suited for parallelization on GPU-based systems.
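To see why transformers parallelize so well, note that the core self-attention operation reduces to a handful of dense matrix multiplications, exactly the workload GPUs accelerate. Below is a minimal sketch of one scaled dot-product attention head in NumPy (not taken from the whitepaper; the dimensions and names are illustrative assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """One attention head: every token attends to every other token
    at once -- three matmuls plus a softmax, with no sequential loop
    over positions. This is the GPU-friendly structure."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (seq_len, seq_len)
    return softmax(scores) @ V               # (seq_len, d_model)

rng = np.random.default_rng(0)
seq_len, d_model = 8, 16
X = rng.standard_normal((seq_len, d_model))          # token embeddings
Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (8, 16)
```

Unlike a recurrent network, nothing here depends on processing tokens one at a time, which is the property the whitepaper's GPU discussion turns on.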

MIT Paper Sheds Light on How Neural Networks Think

MIT researchers have developed a new general-purpose technique that sheds light on the inner workings of neural nets trained to process language. “During training, a neural net continually readjusts thousands of internal parameters until it can reliably perform some task, such as identifying objects in digital images or translating text from one language to another. But on their own, the final values of those parameters say very little about how the neural net does what it does.”
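The training loop the quote describes can be sketched in a few lines. This toy gradient-descent example (not the MIT technique; the data and learning rate are illustrative assumptions) shows parameters being continually readjusted until the task is solved, while the final parameter values themselves remain just a vector of numbers:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.standard_normal((100, 3))        # toy input data
true_w = np.array([2.0, -1.0, 0.5])      # ground-truth mapping
y = X @ true_w                           # targets for the task

w = np.zeros(3)                          # internal parameters, untrained
for _ in range(500):                     # training: readjust repeatedly
    grad = X.T @ (X @ w - y) / len(X)    # gradient of mean squared error
    w -= 0.1 * grad                      # small parameter update

print(w)  # the task is now solved, but these raw values don't
          # by themselves explain how the model computes its answer
```

For a real network with thousands of parameters the opacity is far worse, which is the interpretability gap the MIT technique targets.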