
@HPCpodcast: CXL News, the CHIPS Act, Chips and Nm and Chip ‘Sprawl’

We’ve heard so much about the CXL interconnect – including the recent announcement of CXL v3.0 – and about components that are CXL-ready, that it may come as a surprise that CXL v1.1 “hosts” are only just now shipping. It’s a technology that could play a central role in the ever-more heterogeneous, more memory-intensive systems of the future. And now, after several years of experimentation and various interconnect consortia, CXL is emerging as the standard for advanced fabric functionality. Along with CXL, we also discuss some of the details of the CHIPS and Science Act….

Intel Releases oneAPI 2022 Toolkits

Jan. 4, 2022 — Intel today released oneAPI 2022 toolkits, which the company said have expanded cross-architecture features to provide greater utility and architectural choice to accelerate computing. oneAPI is a cross-industry, open, standards-based unified programming model designed to improve the productivity of code development when building cross-architecture applications. New capabilities in the 2022 toolkits […]

Intel’s Infrastructure Processing Unit Targets Hyperscaler Data Centers

Intel has unveiled a programmable networking device for hyperscalers and their massive data center infrastructures. Called the Infrastructure Processing Unit and announced at the Six Five Summit, the networking device is intended to help cloud and communication service providers cut overhead and free up performance for CPUs, to better utilize […]

Video: GigaIO on Optimizing Compute Resources for ML, HPDA and other Advanced Workloads

In this interview, GigaIO CEO Alan Benjamin talks about system performance problems and wasted compute resources when implementing ML, HPDA and other high-demand workloads that involve large data volumes. At issue, Benjamin explains, is today’s rack architecture, which is decades old and unsuited for the combinations of CPUs, GPUs and other accelerators needed for advanced computing strategies. The answer: “composable disaggregated infrastructure.”

Video: Heterogeneous Computing at the Large Hadron Collider

In this video, Philip Harris from MIT presents: Heterogeneous Computing at the Large Hadron Collider. “Only a small fraction of the 40 million collisions per second at the Large Hadron Collider are stored and analyzed due to the huge volumes of data and the compute power required to process them. This project proposes a redesign of the algorithms using modern machine learning techniques that can be incorporated into heterogeneous computing systems, allowing more data to be processed, and thus larger physics output and potentially foundational discoveries in the field.”