IBM Announces New AI Hardware Research, Red Hat Collaborations


At the IEEE CAS/EDS AI Compute Symposium, IBM Research introduced new technology and partnerships designed to dynamically run massive AI workloads in hybrid clouds:

The company said it is developing analog AI, which combines compute and memory in a single device to alleviate “the von Neumann bottleneck,” a limitation of traditional hardware architectures in which compute and memory sit in separate locations and data must move back and forth between them every time an operation is performed.
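
The contrast can be sketched with a toy model in plain Python. This is illustrative only: the fetch counter stands in for traffic across the memory/compute boundary, not a real memory hierarchy, and the “in-memory” version simply models the idea that analog devices compute where the weights are stored.

```python
import random

def von_neumann_mvm(weights, x):
    """Conventional matrix-vector multiply: every weight is fetched
    from memory into the compute unit before it is used."""
    fetches = 0
    y = []
    for row in weights:
        acc = 0.0
        for w, xi in zip(row, x):
            fetches += 1  # weight crosses the memory/compute boundary
            acc += w * xi
        y.append(acc)
    return y, fetches

def in_memory_mvm(weights, x):
    """Analog-style in-memory multiply (toy model): the computation
    happens at the location of the stored weights, so no per-weight
    fetch is counted."""
    y = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    return y, 0  # weights never move

W = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
x = [1.0, 2.0, 3.0, 4.0]
y_classic, fetches_classic = von_neumann_mvm(W, x)
y_analog, fetches_analog = in_memory_mvm(W, x)
# Both versions produce the same result, but the conventional one moved
# all 3 x 4 = 12 weights across the bus; the in-memory one moved none.
```

For large neural networks, where the same weights are reused for every input, eliminating that per-weight traffic is the core of the efficiency argument.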

Mukesh Khare, vice president of IBM Systems Research, said in a blog post that IBM Research is exploring ways to create more powerful yet more efficient AI systems with digital and analog cores. He said the company is releasing the Analog Hardware Acceleration Kit, an open source Python toolkit that lets developer communities test the possibilities of using in-memory computing devices for AI.

Khare said the kit has two components: PyTorch integration and an analog device simulator. PyTorch is an open source machine learning library based on the Torch library, a scientific computing framework with wide support for machine learning algorithms; it is used to develop AI applications such as computer vision and natural language processing.

“The analog kit allows AI practitioners to evaluate analog AI technologies while allowing for customizing a wide range of analog device configurations, and the ability to modulate device material parameters,” wrote Khare. “The goal is to enlist the AI community to assess the capabilities of analog AI hardware, to be part of a community that tailors models to extract the full potential of the hardware, and to invent new AI applications that can exploit the potential of this breakthrough technology.”
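A minimal stdlib sketch of what “modulating device material parameters” in a simulator might look like — this is not the kit’s actual API, just a toy analog crossbar in which a hypothetical `write_noise` parameter stands in for a tunable device property:

```python
import random

def analog_mvm(weights, x, write_noise=0.02, seed=0):
    """Toy analog matrix-vector multiply: weights are 'programmed' into
    devices with Gaussian write noise (a stand-in for a configurable
    device material parameter), then the multiply happens in place."""
    rng = random.Random(seed)
    programmed = [[w + rng.gauss(0.0, write_noise) for w in row]
                  for row in weights]
    return [sum(w * xi for w, xi in zip(row, x)) for row in programmed]

W = [[0.5, -0.25], [0.1, 0.8]]
x = [1.0, 2.0]
ideal = [sum(w * xi for w, xi in zip(row, x)) for row in W]
noisy = analog_mvm(W, x, write_noise=0.05)
# The noisy result tracks the ideal one; raising write_noise widens the
# gap, which is the kind of trade-off practitioners would evaluate.
```

Sweeping such a parameter is how one would assess whether a given model architecture tolerates a given device technology.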

Khare also said that Red Hat has joined the IBM AI Hardware Center in Albany, NY, to help bring IBM Digital AI Cores to Red Hat OpenShift, an enterprise Kubernetes platform. “Digital AI Cores serve as accelerators, using custom architecture, software and algorithms to transform existing semiconductor technologies to apply reduced precision formats to speed computation and decrease power consumption while maintaining model accuracy,” he said.
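The reduced-precision idea can be illustrated with a simple symmetric int8 quantization round trip in plain Python — a generic sketch of the technique, not IBM’s specific formats:

```python
def quantize_int8(values):
    """Symmetric int8 quantization: map floats to integers in
    [-127, 127] using a single per-tensor scale factor."""
    scale = max(abs(v) for v in values) / 127 or 1.0
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the int8 codes."""
    return [qi * scale for qi in q]

weights = [0.31, -0.92, 0.05, 0.77]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# int8 storage needs a quarter of float32's bits, and the round-trip
# error per value is bounded by half a quantization step (scale / 2) --
# small enough that many models keep their accuracy.
```

Smaller integer operands mean cheaper arithmetic units and less data movement, which is where the speed and power gains come from.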

Red Hat is collaborating with IBM’s AI hardware development stream and working to enable AI hardware accelerator deployment across hybrid cloud infrastructure: multi-cloud, private cloud, on-premises and edge. These accelerators can be used to build neural network models for applications that perform AI tasks, including speech recognition, natural language processing and computer vision. The integration of accelerators based on IBM Digital AI Cores with Red Hat OpenShift enables the accelerators to be deployed and managed as part of a hybrid infrastructure, according to Khare.

“As IBM is developing AI hardware, we can work on the software integration in parallel with Red Hat,” he said. “This has two large benefits: The software and hardware can be ready at the same time (shortening the overall development cycle by months if not years), and the opportunity exists for a better overall solution.”

Khare said the IBM Research AI Hardware Center, which aims to be an ecosystem of research and commercial partners collaborating with IBM, has expanded to 14 members. They include Synopsys, an electronic design automation, emulation and prototyping software company that also develops IP blocks for high-performance silicon chips and secure software applications advancing AI.

Synopsys will serve as lead electronic design automation (EDA) partner for IBM’s AI Hardware Center, supporting IBM’s goal of 1,000X improvement in AI compute performance by 2030.

To address the need for advanced interconnect bandwidth, IBM said the company and NY Creates, an organization aimed at attracting high tech entrepreneurs to New York State, are investing in a new cleanroom facility on the campus of AI Hardware Center member SUNY Poly in Albany. The cleanroom will focus on advanced packaging, also called “heterogeneous integration,” to improve memory proximity and interconnect capabilities, according to Khare. “This work will also help ensure that, as our new compute cores are developed, the memory bandwidth is increased in tandem,” he said. “Otherwise, the compute units can remain idle waiting for data, leading to unbalanced system performance.”
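
The balance Khare describes can be captured by a standard roofline-style back-of-envelope calculation. The accelerator figures below are hypothetical, chosen only to show how memory bandwidth caps achievable compute:

```python
def attainable_flops(peak_flops, mem_bw_bytes, arithmetic_intensity):
    """Simple roofline model: achievable throughput is capped either by
    the compute peak or by how fast memory can feed the cores.
    arithmetic_intensity is FLOPs performed per byte moved."""
    return min(peak_flops, mem_bw_bytes * arithmetic_intensity)

# Hypothetical accelerator: 100 TFLOP/s peak, 1 TB/s memory bandwidth.
peak, bw = 100e12, 1e12

low_ai = attainable_flops(peak, bw, 10)    # 10 FLOPs/byte: memory-bound,
                                           # cores idle waiting for data
high_ai = attainable_flops(peak, bw, 200)  # 200 FLOPs/byte: compute-bound,
                                           # the peak is actually reachable
```

At 10 FLOPs per byte this hypothetical machine delivers only a tenth of its peak, which is exactly why packaging work that raises memory bandwidth must track gains in the compute cores.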

“AI’s unprecedented demand for data, power and system resources poses the greatest challenge to realizing this optimistic vision of the future,” wrote Khare. “To meet that demand, we’re developing a new class of inherently energy-efficient AI hardware accelerators that will increase compute power by orders of magnitude, in hybrid cloud environments, without the demand for increased energy.”