UPMEM Puts CPUs Inside Memory to Allow Apps to Run 20 Times Faster


Today UPMEM announced a Processing-in-Memory (PIM) acceleration solution that allows big data and AI applications to run 20 times faster while using 10 times less energy. Instead of moving massive amounts of data to CPUs, the silicon-based technology from UPMEM puts CPUs right in the middle of the data, saving time and improving efficiency. By allowing compute to take place directly in the memory chips where data already resides, data-intensive applications can be substantially accelerated. UPMEM reduces data movement while leveraging existing server architecture and memory technologies.

UPMEM CTO and Co-Founder Fabrice Devaux will discuss this new approach along with user case studies in a session titled “True Processing in Memory with DRAM Accelerator” at the HOT CHIPS Conference in Silicon Valley today.

“Today, applications in the data center and at the edge are becoming increasingly data-intensive, and processing them becomes constrained by the energy cost of the data movement between the memory and the processing cores, as well as the limited bandwidth between them,” said Devaux. “In my session, I will explain how PIM technology can address those challenges and bring unprecedented benefits to organizations of all sizes. Here at UPMEM, we think that making in-situ processing a practical reality is a major advance in computing.”

“Offloading most of the processing to the memory chips while leveraging existing computing technologies directly benefits our target customers running critical software applications in data centers,” said Gilles Hamou, CEO and co-founder of UPMEM. “The level of interest we have been experiencing clearly demonstrates the market need, and we are looking forward to sharing more details about customer adoption in the upcoming months.”

The PIM chip, which embeds UPMEM’s proprietary processors (DRAM Processing Units, or DPUs) alongside main memory (DRAM) on a single memory chip, is the low-cost, ultra-efficient building block of this technology. Delivered on standard DIMM modules together with a Software Development Kit (SDK), the UPMEM PIM solution accelerates data-intensive applications and integrates seamlessly into standard servers.
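The essence of the offload model described above is that each DPU computes only on the DRAM slice it sits next to, so the host receives small results instead of streaming raw data across the memory bus. The following is a minimal conceptual sketch in Python of that host-side flow; the `InMemoryProcessor` class and `pim_offload` helper are illustrative assumptions, not the actual UPMEM SDK (which is a C API):

```python
# Conceptual model of the PIM offload pattern: work is scattered to
# processors co-located with each data partition, and only small
# partial results travel back to the host.
# Illustrative sketch only -- NOT the real UPMEM SDK.

class InMemoryProcessor:
    """Models one DPU: a small core next to its own slice of DRAM."""
    def __init__(self, local_data):
        self.local_data = local_data  # data already resident in this chip

    def launch(self, kernel):
        # Compute happens where the data lives; only the result moves.
        return kernel(self.local_data)

def pim_offload(data, nr_dpus, kernel, reduce_fn):
    # Partition the data across DPUs. In a real system the data is
    # loaded into PIM DIMMs once and reused across many kernel launches.
    chunks = [data[i::nr_dpus] for i in range(nr_dpus)]
    dpus = [InMemoryProcessor(chunk) for chunk in chunks]
    partial_results = [dpu.launch(kernel) for dpu in dpus]
    # Only nr_dpus small values cross the "bus", not the full dataset.
    return reduce_fn(partial_results)

# Example: sum one million values with 16 simulated DPUs; only 16
# partial sums return to the host instead of one million raw values.
total = pim_offload(list(range(1_000_000)), 16, sum, sum)
```

In the real UPMEM flow the host similarly allocates a set of DPUs, loads a DPU binary, copies inputs into each DPU's memory, launches the kernels, and reads back the (much smaller) results.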

Today’s AI- and ML-driven applications are rapidly increasing the volume, velocity and variety of data, while simultaneously increasing the need to process data in real-time,” said Steffen Hellmold, vice president of corporate business development at Western Digital, an investor in UPMEM through the company’s strategic investment fund, Western Digital Capital. “UPMEM’s innovative PIM acceleration solution intelligently integrates processing with DRAM memory, providing the flexibility to create the purpose-built, data-centric compute architectures that will be essential to meet the demands of the zettabyte age.”

Current use cases include genomics, where mapping or comparing DNA fragments against a reference genome involves tens of gigabytes of data. UPMEM PIM modules install in existing servers in place of the regular DRAM memory modules, and the PIM accelerator then cuts these operations from hours to minutes, delivering an unprecedented level of efficiency and performance.
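The genomics workload above maps naturally onto PIM because the reference genome can be sliced across many memory chips, each scanned by its local processor. A toy sketch of that partitioning pattern, assuming exact substring matching (real read mappers use approximate alignment, so this only illustrates the data layout, not a production algorithm):

```python
# Toy model of read counting over a reference genome whose slices
# reside in different PIM-enabled memory chips. Exact matching only;
# illustrative sketch, not a real mapper.

READ_LEN = 4  # toy read length

def count_in_slice(ref_slice, read):
    # Runs "inside" one memory chip: scans only the local slice.
    return sum(1 for i in range(len(ref_slice) - READ_LEN + 1)
               if ref_slice[i:i + READ_LEN] == read)

def pim_count(reference, read, nr_dpus):
    step = max(1, len(reference) // nr_dpus)
    # Overlap adjacent slices by READ_LEN - 1 characters so matches
    # spanning a boundary between two chips are not lost; each match
    # start position still belongs to exactly one slice.
    slices = [reference[i:i + step + READ_LEN - 1]
              for i in range(0, len(reference), step)]
    # Each slice is scanned locally; only per-slice counts (a handful
    # of integers) travel back to the host, not gigabytes of sequence.
    return sum(count_in_slice(s, read) for s in slices)

reference = "ACGTACGTTACGTAACGT"
matches = pim_count(reference, "ACGT", 4)
```

Because each chip returns only an integer count, the bandwidth and energy cost of the scan stays inside the memory modules regardless of how large the reference grows.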
