NeuReality Launches Developer Portal for NR1 AI Inference Platform 

SAN JOSE — April 16, 2024 — NeuReality, an AI infrastructure technology company, announced today the release of a software developer portal and demo for installation of its software stack and APIs. The company said the announcement marks a milestone since delivery of its 7nm AI inference server-on-a-chip, the NR1 NAPU, and the bring-up of […]

HPC News Bytes 20240226: Intel Foundry Bash, Nvidia Earnings and AI Inference, HPC in Space, ISC 2024

A happy Monday of Leap Year Week to you! We offer a rapid run-through of the latest in HPC-AI, including: the Intel Foundry bash, Gelsinger talks up the “Systems Foundry Era,” Wall Street hangs on Nvidia earnings, AI training vs. inference, digital in-memory computing for inference efficiency, HPC in space, ISC 2024.

In-Memory Computing Could Be an AI Inference Breakthrough

[CONTRIBUTED THOUGHT PIECE] In-memory computing promises to revolutionize AI inference. Given the rapid adoption of generative AI, it makes sense to pursue a new approach that reduces cost and power consumption by bringing compute into memory while improving performance.

Accelerating AI Inference for High-throughput Experiments

An upgrade to the ALCF AI Testbed will help accelerate data-intensive experimental research. The Argonne Leadership Computing Facility’s (ALCF) AI Testbed—which aims to help evaluate the usability and performance of machine learning-based high-performance computing (HPC) applications on next-generation accelerators—has been upgraded to include Groq’s inference-driven AI systems, designed to accelerate the time-to-solution for complex science problems. […]

Kickstart Your Business to the Next Level with AI Inferencing

[SPONSORED GUEST ARTICLE] Check out this article from HPE (with NVIDIA). The need to accelerate AI initiatives is real and widespread across all industries. The ability to integrate and deploy AI inferencing with pre-trained models can reduce development time with scalable, secure solutions….

Mythic Raises $13M for Edge AI Inference

Austin – March 9, 2023 – AI processing company Mythic has raised $13 million in a new round of funding. Mythic’s existing investors Atreides Management, DCVC, and Lux Capital contributed to the round, along with new investors Catapult Ventures and Hermann Hauser Investment (which is led by Hermann Hauser, one of the founders of Acorn Computers and […]

AI Inference Company d-Matrix Announces Collaboration with Microsoft

SANTA CLARA – Today, d-Matrix, an AI-compute and inference company, announced a collaboration with Microsoft using its low-code reinforcement learning (RL) platform, Project Bonsai, to enable an AI-trained compiler for d-Matrix’s digital in-memory compute (DIMC) products. The Project Bonsai platform accelerates time-to-value, with a product-ready solution designed to cut down on development efforts using an […]

MLCommons: Latest MLPerf AI Benchmark Results Show Machine Learning Inference Advances

SAN FRANCISCO – September 8, 2022 – Today, the open engineering consortium MLCommons announced results from MLPerf Inference v2.1, which analyzes the performance of inference — the application of a trained machine learning model to new data. Inference allows for the intelligent enhancement of a vast array of applications and systems. Here are the results and […]

Improving AI Inference Performance with GPU Acceleration in Aerospace and Defense

The aerospace/defense industry often must solve mission-critical problems as they arise while also planning and designing for the rigors of future workloads. Technology advancements let aerospace/defense agencies gain the benefits of AI, but it’s essential to understand these advancements and the infrastructure requirements for AI training and inference.

Untether AI Unveils At-Memory Compute Architecture at Hot Chips

PALO ALTO — Untether AI, an at-memory computation company for artificial intelligence (AI) workloads, today announced at the HOT CHIPS 2022 conference its next-generation architecture for accelerating AI inference workloads, speedAI devices, internally codenamed “Boqueria.” At 30 TeraFlops per watt (TFlops/W) and 2 PetaFlops of performance, the speedAI architecture sets a new […]