2023 Trends in Artificial Intelligence and Machine Learning: Generative AI Unfolds  

In this contributed article, editorial consultant Jelani Harper offers his perspectives on 2023 trends for the boundless potential of generative Artificial Intelligence: the variety of predominantly advanced machine learning that analyzes existing content to produce strikingly similar new content.

Cortical.io Semantic Folding Approach Demonstrates a 2,800x Acceleration and 4,300x Increase in Energy Efficiency over BERT

Cortical.io announced its breakthrough prototype for classifying high volumes of unstructured text. Classifying documents or messages constitutes one of the most fundamental Natural Language Understanding (NLU) functions for business artificial intelligence (AI). The benchmark was carried out on two similar system setups using the same off-the-shelf dual AMD EPYC server hardware. The "BERT" system, a transformer-based machine learning technique for natural language processing, was augmented by an NVIDIA GPU. The "Semantic Folding" approach utilized a cost-comparable number of Xilinx Alveo FPGA accelerator cards.
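The core idea behind Semantic Folding is that text is encoded into sparse binary "semantic fingerprints," and similarity reduces to counting overlapping set bits, an operation that maps naturally onto FPGAs. The sketch below is a deliberately toy illustration of that overlap principle; the hash-based encoding here is entirely hypothetical and bears no relation to Cortical.io's actual Retina encoder.

```python
# Toy illustration of the overlap idea behind Semantic Folding:
# texts become sparse binary fingerprints (sets of active bit
# positions), and similarity is the fraction of shared bits.
# The word-hashing encoder below is a made-up stand-in.

def fingerprint(text, size=1024, bits_per_word=16):
    """Hash each word to a few positions in a sparse binary vector."""
    active = set()
    for word in text.lower().split():
        for i in range(bits_per_word):
            active.add(hash((word, i)) % size)
    return active

def overlap_similarity(fp_a, fp_b):
    """Jaccard-style overlap: shared active bits over total active bits."""
    if not fp_a or not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

doc = fingerprint("invoice payment overdue balance")
near = fingerprint("overdue invoice balance")
far = fingerprint("quarterly earnings call transcript")
print(overlap_similarity(doc, near) > overlap_similarity(doc, far))
```

Because comparing two fingerprints is just bitwise AND plus a popcount, the workload parallelizes cheaply across FPGA fabric, which is one intuition for the large speed and energy gains reported in the benchmark.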

Research Highlights: ExBERT

In the insideAI News Research Highlights column we take a look at new and upcoming results from the research community for data science, machine learning, AI and deep learning. Our readers want a glimpse of technology coming down the pipeline that will make their efforts more strategic and competitive. In this installment we review a new paper: exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformer Models by researchers from the MIT-IBM Watson AI Lab and Harvard.

Interview: Beerud Sheth, CEO of Gupshup

I recently caught up with Beerud Sheth, CEO of Gupshup, to discuss the state of the art in conversational AI and chatbot technology. He also gives us an idea of future areas of evolution for AI chatbots.

Optimizing in a Heterogeneous World is (Algorithms x Devices)

In this guest article, our friends at Intel discuss how CPUs prove better for some important deep learning workloads. Here's why (but keep your GPUs handy!). Heterogeneous computing ushers in a world where we must consider permutations of algorithms and devices to find the best platform solution. No single device will win all the time, so we need to constantly assess our choices and assumptions.
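The "no single device wins all the time" argument implies an empirical selection step: measure each (algorithm, device) pairing and pick the winner per workload rather than committing to one device globally. The sketch below illustrates that selection loop with purely hypothetical latency numbers; a real harness would time actual kernels on actual hardware.

```python
# Hypothetical sketch of per-workload device selection. The latency
# numbers are invented to illustrate the point that a GPU can win one
# algorithm while a branchy, cache-friendly algorithm favors the CPU.
latency_ms = {
    ("conv-net", "cpu"): 42.0,
    ("conv-net", "gpu"): 7.5,
    ("sparse-tree", "cpu"): 3.1,
    ("sparse-tree", "gpu"): 11.8,
}

algorithms = ["conv-net", "sparse-tree"]
devices = ["cpu", "gpu"]

# Choose the best device per algorithm, not one device overall.
best_device = {
    algo: min(devices, key=lambda d: latency_ms[(algo, d)])
    for algo in algorithms
}
for algo, dev in best_device.items():
    print(f"{algo}: {dev} ({latency_ms[(algo, dev)]} ms)")
```

The takeaway matches the article's thesis: the optimization target is the pairing (algorithm × device), and the winner should be re-measured as algorithms, libraries, and hardware evolve.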

How NLP and BERT Will Change the Language Game

In this contributed article, Rob Dalgety, Industry Specialist at Peltarion, discusses how the model open-sourced by Google in October 2018, BERT (Bidirectional Encoder Representations from Transformers), is now reshaping the NLP landscape. BERT is significantly more evolved in its understanding of word semantics in context, and has an ability to process large amounts of text and language.

NVIDIA TensorRT 6 Breaks 10-Millisecond Barrier for BERT-Large

Today, NVIDIA released TensorRT 6, which includes new capabilities that dramatically accelerate conversational AI applications, speech recognition, 3D image segmentation for medical applications, as well as image-based applications in industrial automation. TensorRT is a high-performance deep learning inference optimizer and runtime that delivers low-latency, high-throughput inference for AI applications. “With today’s release, TensorRT continues to expand its set of optimized layers, provides highly requested capabilities for conversational AI applications, and delivers tighter integrations with frameworks to provide an easy path to deploy your applications on NVIDIA GPUs. In TensorRT 6, we’re also releasing new optimizations that deliver inference for BERT-Large in only 5.8 ms on T4 GPUs, making it practical for enterprises to deploy this model in production for the first time.”
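A figure like "5.8 ms for BERT-Large" comes from timing many forward passes and reporting a summary statistic. The sketch below shows the generic measurement pattern (warmup runs, then timed runs, then mean and tail latency); it does not use the TensorRT API, and the dummy workload is a stand-in for a real model's forward pass.

```python
# Generic inference-latency measurement sketch (not the TensorRT API):
# warm up, time many runs with a high-resolution clock, then report
# mean and p99 latency in milliseconds.
import statistics
import time

def dummy_inference(x):
    """Stand-in for a real model's forward pass."""
    return sum(i * x for i in range(10_000))

def measure_latency_ms(fn, arg, warmup=10, runs=200):
    for _ in range(warmup):  # warm caches/JITs so timings are steady-state
        fn(arg)
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(arg)
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return statistics.mean(samples), samples[int(runs * 0.99) - 1]

mean_ms, p99_ms = measure_latency_ms(dummy_inference, 3)
print(f"mean={mean_ms:.3f} ms  p99={p99_ms:.3f} ms")
```

For user-facing conversational AI, tail latency (p99) matters as much as the mean, which is why sub-10 ms per-inference budgets are the headline number for production deployment.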