
@HPCpodcast: The GTC Cornucopia

Last week’s edition of NVIDIA’s twice-yearly GTC extravaganza unveiled a raft of new HPC/AI announcements, the latest public performance of a company in its prime, led by a leather-clad CEO generally regarded as a master marketer. OK, roll your eyes at that gushing statement if you like, but it reflects the sentiment of Wall Street, which pushed NVIDIA stock up 10 percent the day CEO Jensen Huang delivered his GTC keynote, and of most (if not all) in the HPC industry analyst community. In this episode of the @HPCpodcast….

Supermicro Announces Universal GPU System – Supports CPU, GPU and Fabric Architectures

San Jose – Super Micro Computer, Inc. (SMCI), a provider of enterprise computing, storage, networking solutions and green computing technology, has announced a revolutionary technology that simplifies large-scale GPU deployments with a future-proof design that supports yet-to-be-announced technologies. The Universal GPU server provides the ultimate flexibility in a resource-saving server. […]

NVIDIA Announces DGX H100 ‘AI Infrastructure’ Systems

San Jose, March 22, 2022 — NVIDIA today announced the fourth-generation NVIDIA DGX system, which the company said is the first AI platform to be built with its new H100 Tensor Core GPUs. DGX H100 systems deliver the scale demanded to meet the massive compute requirements of large language models, recommender systems, healthcare research and climate science. […]

DDN at GTC Says Storage Appliance Doubles NVIDIA DGX Performance

CHATSWORTH, Calif. – March 22, 2022 – AI and multi-cloud data management company DDN today announced flash and hybrid data platforms for NVIDIA DGX POD and DGX SuperPOD AI, analytics and deep learning computing infrastructure. Powering thousands of NVIDIA DGX systems, including NVIDIA’s Selene and Cambridge-1 DGX SuperPOD systems, DDN offers AI data storage solutions for applications such as […]

@HPCpodcast: Argonne’s Rick Stevens on AI for Science (Part 2) – Coming Breakthroughs, Ethics and the Replacement of Scientists by Robots

In part 2 of our not-to-be-missed @HPCpodcast with Argonne National Laboratory Associate Director Rick Stevens, he discusses some of the important advances that had, by 2015, likely ended the cycle of AI for science winters. He also delves into the major challenges in AI for science, such as building models that are transparent and unbiased while also robust and secure. And Stevens looks at important upcoming AI for science breakthrough use cases, including the welcome news – for researchers beset by mountains of scientific papers – of using large natural language models to ingest and collate existing knowledge of a scientific problem, enabling analysis of the literature that, Stevens said, goes well beyond a Google search….

DDN and Aspen Systems in HPC – AI Partnership to Help PNNL Study Environmental Impact on Coastal Regions

CHATSWORTH, Calif. – Mar. 10, 2022 – DDN, maker of artificial intelligence (AI) and multicloud data management solutions, and Aspen Systems, manufacturer of HPC products, have partnered to deliver custom AI and HPC solutions that enable data-intensive organizations to generate more value and reduce data analysis times, on premises and in the cloud. Pacific Northwest […]

PNNL and Micron Partner to Push Memory Boundaries for HPC and AI

Researchers at Pacific Northwest National Laboratory (PNNL) and Micron are developing an advanced memory system to support AI for scientific computing. The work is designed to address AI’s insatiable demand for live data — to push the boundaries of memory-bound AI applications — by connecting memory across processors in a technology strategy utilizing the […]

HPE and Ayar Labs Partner to Bring Optical I/O to Slingshot Fabric for HPC and AI

HPC systems leader Hewlett Packard Enterprise and startup Ayar Labs, maker of chip-to-chip optical I/O connectivity, today announced a strategic collaboration to integrate silicon photonics within HPE’s high-performance Slingshot fabric. Longer term, HPE envisions future generations of HPC system interconnects significantly enhanced by optical I/O, a silicon photonics-based technology that uses light instead of electricity to transmit data. The technology addresses both the need for higher data rates and improved energy efficiency (see “Composable HPC-AI at Scale: The Emergence of Optical I/O Chiplets”).

Dell Technologies Interview: Dell’s Jay Boisseau on Data Growth Outpacing Moore’s Law, the HPC-AI Divide-Convergence and Beating Zoom Fatigue

[SPONSORED CONTENT] Jay Boisseau, HPC & AI Technology Strategist at Dell Technologies and organizer of the Dell Technologies HPC Community meetings, is one of the big personalities of HPC, someone who brings energy and insight to any gathering he’s a part of. In this insideHPC interview conducted on behalf of Dell, we talk with Boisseau about his career path, including serving as director of the Texas Advanced Computing Center, about big shifts and future trends in HPC and about the phenomenon, where HPC and AI intersect, of data growth rates outpacing Moore’s Law. He also discusses his strategy for infusing life into the HPC Community meetings in an era of “Zoom fatigue,” including drawing inspiration from The Art of Gathering by Priya Parker, who, as Boisseau points out, probably never dreamed of the Zoom era.

Meta Announces AI Research SuperCluster: 16,000 GPUs, 1 Exabyte of Storage, 5 ExaFLOPS of Compute

Meta, formerly Facebook, announced it has designed and built what it calls the AI Research SuperCluster (RSC), which it says is “[among] the fastest AI supercomputers running today and will be the fastest AI supercomputer in the world when it’s fully built out in mid-2022,” delivering nearly 5 exaFLOPS, the company said in a blog post announcing the system. “Once we […]