Featured Stories

  • ASCR: Exascale to Burst Bubbles that Block Carbon Capture

    Bubbles could block a promising technology that would separate carbon dioxide from industrial emissions, capturing the greenhouse gas before it contributes to climate change. A team of researchers with backing from the Department of Energy’s Exascale Computing Project (ECP) is out to burst the barrier, using a code that captures the floating blisters and provides insights to deter them. Chemical looping reactors (CLRs) combine fuels such as methane with oxygen [READ MORE…]

  • At ISC 2023: Hyperion Reports HPC Industry Grew 4% in 2022; AI to Drive Stronger Growth in Next and Following Years

    The HPC industry managed to achieve modest overall growth in 2022, but growth of any kind was in doubt right up to the end of the year, according to HPC-AI industry analyst firm Hyperion Research, which hosted its bi-annual HPC market update at a breakfast event this week during the ISC 2023 conference in Hamburg. According to the firm’s findings, HPC grew by 4 percent last year, but “we were [READ MORE…]

  • AMD-Powered LUMI Supercomputer: In the Vanguard of HPC Performance and Energy Efficiency

    [SPONSORED GUEST ARTICLE] LUMI is a model of both supercomputing and sustainability. It also embodies Europe’s rise on the global HPC scene in recent years. The AMD-powered, HPE-built system, ranked no. 3 on the new TOP500 list of the world’s most powerful supercomputers, also ranks no. 7 on the GREEN500 list of the most energy-efficient HPC systems. LUMI (Large Unified Modern Infrastructure) is in the vanguard of leadership-class supercomputers [READ MORE…]

  • @HPCpodcast: A Breakdown of the ‘Treasure Trove’ TOP500 List

    We release this special edition of the @HPCpodcast, sponsored by Lenovo, from the ISC conference in Hamburg with a detailed discussion of the new TOP500 list, issued this morning. Shahin, a list advocate, notes that it contains 30-plus years of system data showing the convolutions and evolution of the highest performing HPC architectures – “a treasure trove,” as he puts it. We offer top-line insights from the new list, not [READ MORE…]

Featured Resource

HPC Buyers Guide

This ebook from our friends over at Rescale shows that by adopting a platform solution and retiring the fixed-capacity, on-premises infrastructure model, companies can significantly reduce capital expenditures, dramatically increase productivity, and develop next-generation innovative products at a pace that surpasses their competition.

HPC Newsline

Industry Perspectives

  • …today’s situation is clear: HPC is struggling with reliability at scale. Well over 10 years ago, Google proved that commodity hardware was both cheaper and more effective for hyperscale processing when controlled by software-defined systems, yet the HPC market persists with its old-school, hardware-based paradigm. Perhaps this is due to prevailing industry momentum or working within the collective comfort zone of established practices. Either way, hardware-centric approaches to storage resiliency need to go.

  • New, Open DPC++ Extensions Complement SYCL and C++

    In this guest article, our friends at Intel discuss how accelerated computing has diversified over the past several years given advances in CPU, GPU, FPGA, and AI technologies. This innovation drives the need for an open and cross-platform language that allows developers to realize the potential of new hardware, minimizes development cost and complexity, and maximizes reuse of their software investments.

Featured from insideBIGDATA

  • Why FinOps Needs DataOps Observability
    In this special guest feature, Chris Santiago, Vice President of Solutions Engineering at Unravel Data, talks about controlling cloud spend through three phases of the FinOps lifecycle.

Editor’s Choice

  • @HPCpodcast: Silicon Photonics – Columbia Prof. Keren Bergman on the Why, How and When of a Technology that Could Transform HPC

    Silicon photonics has the potential to transform HPC: it’s a dual-threat interconnect technology that could – if and when it is wrestled into commercial, cost-effective form – move data within chips and systems much faster than conventional, copper-based interconnects while also delivering far greater energy efficiency. Venture-backed start-ups and established tech companies (HPE, NVIDIA, AMD and Intel, to name four) have mounted significant R&D efforts. In this episode of the @HPCpodcast, Shahin and Doug spoke with a leading silicon photonics expert, Keren Bergman, Columbia University’s Charles Batchelor Professor of Electrical Engineering, Faculty Director of the Columbia Nano Initiative, and Principal [READ MORE…]

  • Azure, AMD and the Power of Cloud-based HPC for Sustainability R&D Projects

    [SPONSORED GUEST ARTICLE]  Sustainability – both in the way it operates and in its support for the development of sustainable technologies and products – is a theme that permeates the Microsoft Azure public cloud platform and its end-user community. Azure, in combination with advanced and ultra-efficient CPUs from AMD and other HPC-class technologies, is a hothouse for sustainability R&D projects ranging from electric vehicles to wind turbine design. Before we look in detail at an example of those projects, let’s start with Azure’s operational efficiencies….

  • Frontier Pushes Boundaries: 86% of Nodes Engaged on Reactor Simulation Runs

    Details have trickled out of the Oak Ridge Leadership Computing Facility (OLCF) indicating progress in preparing Frontier, the exascale-class supercomputer ranked the world’s most powerful system, for full user operations. Earlier this week, the Exascale Computing Project released an article on its website entitled “Predicting the Future of Fission Power” discussing the ExaSMR (Exa for exascale; SMR for small modular reactors) toolkit for running nuclear reactor design simulations on Frontier. Toward the end of the article, we learn that ExaSMR performed simulations on 8,192 of Frontier’s 9,472 nodes, involving more than 250 billion neutron histories per iteration, according to [READ MORE…]

  • Conventional Wisdom Watch: Matsuoka & Co. Take on 12 Myths of HPC

    A group of HPC thinkers, including the estimable Satoshi Matsuoka of the RIKEN Center for Computational Science in Japan, have come together to challenge common lines of thought they say have become, to varying degrees, accepted wisdom in HPC. In a paper entitled “Myths and Legends of High-Performance Computing” appearing this week on the arXiv site, Matsuoka and four colleagues (three from the RIKEN Center – see author list below) offer opinions and analysis on such issues as quantum replacing classical HPC, the zettascale timeline, disaggregated computing, domain-specific languages (DSLs) vs. Fortran and cloud subsuming HPC, among other topics. “We [READ MORE…]

  • SC22: CXL3.0, the Future of HPC Interconnects and Frontier vs. Fugaku

    HPC luminary Jack Dongarra’s fascinating comments at SC22 on the low efficiency of leadership-class supercomputers highlighted by the latest High Performance Conjugate Gradients (HPCG) benchmark results will, I believe, influence the next generation of supercomputer architectures to optimize for sparse matrix computations. The upcoming technology that will help address this problem is CXL. Next generation architectures will use CXL3.0 switches to connect processing nodes, pooled memory and I/O resources into very large, coherent fabrics within a rack, and use Ethernet between racks. I call this a “Petalith” architecture (explanation below), and I think CXL will play a significant and growing [READ MORE…]
