Quantinuum Updates Quantum Natural Language Processing Toolkit lambeq

OXFORD, UK — MARCH 29, 2022 — The quantum natural language processing team at Quantinuum, an integrated quantum computing company, has released an update to its open-source Python library and toolkit, lambeq (pronounced “lambek”). lambeq converts natural language sentences into quantum circuits, ready to be realised on a quantum computer. The new release has […]
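For readers curious what that conversion looks like in practice, here is a minimal sketch using lambeq's documented pipeline; the parser and ansatz names follow the lambeq docs, while the sentence and the one-qubit-per-wire mapping are arbitrary illustrative choices:

```python
from lambeq import AtomicType, BobcatParser, IQPAnsatz

# Parse a sentence into a DisCoCat string diagram.
parser = BobcatParser()
diagram = parser.sentence2diagram("Alice runs")

# Map the diagram to a parameterised quantum circuit,
# assigning one qubit per noun wire and one per sentence wire.
ansatz = IQPAnsatz({AtomicType.NOUN: 1, AtomicType.SENTENCE: 1},
                   n_layers=1)
circuit = ansatz(diagram)
print(circuit)  # inspect before running on hardware or a simulator
```

The circuit's free parameters can then be trained against a classification task, which is the workflow the lambeq toolkit is built around.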

Cerebras and nference Launch NLP Collaboration

SUNNYVALE, Calif. – High performance AI compute company Cerebras Systems and nference, an AI-driven health technology company, today announced a collaboration to accelerate natural language processing (NLP) for biomedical research and development by orders of magnitude with a Cerebras CS-2 system installed at the nference headquarters in Cambridge, Mass. The vast amounts of health data that […]

@HPCpodcast: Argonne’s Rick Stevens on AI for Science (Part 2) – Coming Breakthroughs, Ethics and the Replacement of Scientists by Robots

In part 2 of our not-to-be-missed @HPCpodcast with Argonne National Laboratory Associate Director Rick Stevens, he discusses some of the important advances that had, by 2015, likely ended the cycle of AI for science winters. He also delves into the major challenges in AI for science, such as building models that are transparent and unbiased while also robust and secure. And Stevens looks at important upcoming AI for science breakthrough use cases, including the welcome news – for researchers beset by mountains of scientific papers – that large natural language models can ingest and collate existing knowledge of a scientific problem, enabling analysis of the literature that, Stevens said, goes well beyond a Google search…

AI Demand Pushes against Skills Shortage, Lack of IT Infrastructure – IBM

As with other areas of the economy (chips, workforce), demand isn’t the problem in the AI market, it’s supply: supply of knowledge and skills, and supply of technology. IBM this morning released the results of its Global AI Adoption Index 2021 study, which the company said shows that business adoption of AI slowed over the […]

EPCC Selects Cerebras Systems AI Supercomputer

Los Altos, Calif. & Edinburgh, UK — Cerebras Systems, the high performance artificial intelligence (AI) compute company, and EPCC, the supercomputing centre at the University of Edinburgh, today announced the selection of what Cerebras said is the world’s fastest AI computer, the Cerebras CS-1, for EPCC’s new international data facility for the Edinburgh and southeastern […]

Cambridge Quantum Reports Progress toward ‘Meaning Aware’ NLP

UK-based Cambridge Quantum Computing (CQC), a quantum software and algorithm specialist, today released research papers on its use of quantum computing to develop intuitive, “meaning-aware” quantum natural language processing (QNLP).

A focal point of artificial intelligence inquiry, contextual NLP, which comprehends emotion, nuance, and even humor, is the field's most advanced and challenging form.

Cortical.io Demonstrates Natural Language Understanding Inspired by Neuroscience

In this video, Cortical.io CEO Francisco Webber demonstrates how the company’s software running on Xilinx FPGAs breaks new ground in the field of natural language understanding (NLU). “Cortical.io delivers AI-based Natural Language Understanding solutions which are quicker and easier to implement and more capable than current approaches. The company’s patented approach enables enterprises to more effectively search, extract, annotate and analyze key information from any kind of unstructured text.”

Deep Learning for Natural Language Processing – Choosing the Right GPU for the Job

In this new whitepaper from our friends over at Exxact Corporation, we take a look at the important topic of deep learning for Natural Language Processing (NLP) and choosing the right GPU for the job. The focus is on the latest developments in neural networks and deep learning systems, in particular the neural network architecture known as the transformer. Researchers have shown that transformer networks are particularly well suited to parallelization on GPU-based systems.
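The reason transformers map so well to GPUs is that self-attention reduces to large batched matrix multiplications. A minimal PyTorch sketch, where the layer sizes and batch shape are illustrative choices rather than recommendations from the whitepaper:

```python
import torch
import torch.nn as nn

# A small transformer encoder stack; its attention and feed-forward
# layers execute as large batched matrix multiplies, which is what
# makes the architecture parallelize well on GPUs.
encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
model = nn.TransformerEncoder(encoder_layer, num_layers=6)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# Batch of 32 sequences, 128 tokens each, embedding dim 512.
x = torch.randn(128, 32, 512, device=device)  # (seq, batch, d_model)
out = model(x)
```

Every token position in the batch is processed in the same matrix operation, so throughput scales with the GPU's ability to keep those multiplies fed, which is the selection criterion the whitepaper digs into.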

MIT Paper Sheds Light on How Neural Networks Think

MIT researchers have developed a new general-purpose technique that sheds light on the inner workings of neural nets trained to process language. “During training, a neural net continually readjusts thousands of internal parameters until it can reliably perform some task, such as identifying objects in digital images or translating text from one language to another. But on their own, the final values of those parameters say very little about how the neural net does what it does.”
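One common black-box way to get at that question, offered here as a generic illustration rather than the specific MIT technique, is input perturbation: mask pieces of the input one at a time and measure how much the output shifts. A minimal sketch, where model, tokens, and mask_id are hypothetical placeholders:

```python
import torch

def occlusion_scores(model, tokens, mask_id):
    """Score each token by how much masking it changes the model output.
    model: callable mapping a (1, seq_len) LongTensor to a tensor output.
    tokens: (1, seq_len) LongTensor of input token ids.
    mask_id: token id used to occlude a position (e.g. a pad/mask id)."""
    with torch.no_grad():
        base = model(tokens)
        scores = []
        for i in range(tokens.size(1)):
            occluded = tokens.clone()
            occluded[0, i] = mask_id       # hide token i
            shift = (base - model(occluded)).norm().item()
            scores.append(shift)           # large shift => influential token
    return scores
```

Probes like this treat the trained network as a black box, recovering which inputs drive its behavior even though, as the researchers note, the parameter values themselves reveal very little.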