• Lenovo HPC Powers SPEChpc™ 2021 with AMD 3rd Generation EPYC™ Processors

    As a leader in high performance computing, Lenovo continually supports the Standard Performance Evaluation Corporation (SPEC) benchmarks, which help customers make better-informed decisions for their HPC workloads. SPEChpc™ 2021 is a newly released benchmark suite from SPEC that provides industry-standard benchmarks for the newest generation of computer systems. What separates SPEChpc™ 2021 from SPEC CPU® 2017, SPEC MPI® 2007, and the other SPEC benchmark suites is that SPEChpc™ 2021 is a one-of-a-kind benchmark suite that uses real-world applications supporting “multiple programming models and offloading” to evaluate the performance of state-of-the-art heterogeneous HPC systems.

Featured Stories

  • HPC-AI Chips in the News: NVIDIA, AMD Ensnared in US-China Trade War; Arm Sues Qualcomm

    NVIDIA and AMD, makers of advanced GPUs used in HPC-AI workloads, became embroiled this week in the deteriorating relations and ongoing trade war between the US and the People’s Republic of China. Yesterday, Nvidia said it has been prohibited by the US government from selling to the PRC its A100 Tensor Core GPU, on the market since 2020, as well as its forthcoming H100 Tensor Core GPU, scheduled for availability [READ MORE…]

  • OLCF’s Doug Kothe on Pushing Frontier Across the Exascale Line and the Future of Leadership Supercomputers

    Everyone involved in the Frontier supercomputer project got a taste of what a moonshot is like. Granted, lives were not on the line with Frontier as they were when Armstrong and Aldrin went to the moon in 1969. But in other ways there are parallels between the space mission and standing up Frontier, the world’s first exascale HPC system. Both were decade-plus-long efforts involving thousands of people across the public [READ MORE…]

  • Los Alamos, PNNL, Univ. of New Mexico Researchers to Lead $70M DOE HPC Climate Model Projects

    The U.S. Department of Energy (DOE) today announced $70 million in funding for seven projects intended to improve climate prediction and aid in the fight against climate change. The research will be used to accelerate development of DOE’s Energy Exascale Earth System Model (E3SM), enabling scientific discovery through collaborations between climate scientists, computer scientists and applied mathematicians. The projects will be led by researchers at DOE’s Los Alamos National Laboratory [READ MORE…]

  • Accelerating the Modern Data Center – Gear Up for AI

    Modern applications are transforming every business. From AI for better customer engagement, to data analytics for forecasting, to advanced visualization for product innovation, the need for accelerated computing is rapidly increasing. But enterprises face challenges with using existing infrastructure to power these applications.

Featured Resource

Virtualizing HPC Throughput Computing Environments

This pioneering study focuses primarily on the virtual performance of throughput workloads. Download the new white paper from VMware that explores the possibilities of virtualizing HPC throughput computing environments.

HPC Newsline

Industry Perspectives

  • …today’s situation is clear: HPC is struggling with reliability at scale. Well over 10 years ago, Google proved that commodity hardware was both cheaper and more effective for hyperscale processing when controlled by software-defined systems, yet the HPC market persists with its old-school, hardware-based paradigm. Perhaps this is due to prevailing industry momentum or working within the collective comfort zone of established practices. Either way, hardware-centric approaches to storage resiliency need to go.

  • New, Open DPC++ Extensions Complement SYCL and C++

    In this guest article, our friends at Intel discuss how accelerated computing has diversified over the past several years given advances in CPU, GPU, FPGA, and AI technologies. This innovation drives the need for an open and cross-platform language that allows developers to realize the potential of new hardware, minimizes development cost and complexity, and maximizes reuse of their software investments.

Editor’s Choice

  • Frontier Named No. 1 Supercomputer on TOP500 List and ‘First True Exascale Machine’

    Hamburg — This morning, AMD’s long comeback from trampled HPC also-ran – a comeback that began in 2017 when company executives told skeptical press and industry analysts to expect price/performance chip superiority over Intel – reached a high point (not to say an end point) with the news that the U.S. Department of Energy’s Frontier supercomputer, an HPE-Cray EX system powered by AMD CPUs and GPUs, has not only been named the world’s most powerful supercomputer, it also is the first system to exceed the exascale (10^18 calculations/second) milestone. This may not come as a surprise to many in the [READ MORE…]

  • Chip Geopolitics: If China Invades, Make Taiwan ‘Unwantable’ by Destroying TSMC, Military Paper Suggests

    US military planners are taking notice of a suggestion by two military scholars calling for the destruction of semiconductor foundry company Taiwan Semiconductor Manufacturing Co. (TSMC), whose fabs produce advanced microprocessors used in HPC and AI, in the event China invades the island nation. A news story in today’s edition of Data Center Times cites the Nikkei Asia news service and a paper in the U.S. Army War College’s scholarly journal, Parameters, discussing the possibility of Taiwan adopting “a ‘scorched earth policy’” and wiping out its own semiconductor foundries in the wake of any Chinese invasion as a deterrent, U.S. [READ MORE…]

  • How Machine Learning Is Revolutionizing HPC Simulations

    Physics-based simulations, that staple of traditional HPC, may be evolving toward an emerging, AI-based technique that could radically accelerate simulation runs while cutting costs. Called “surrogate machine learning models,” the topic was a focal point in a keynote on Tuesday at the International Conference on Parallel Processing by Argonne National Lab’s Rick Stevens. Stevens, ANL’s associate laboratory director for computing, environment and life sciences, said early work in “surrogates,” as the technique is called, shows speed-ups of tens of thousands of times (and more) and could “potentially replace simulations.” Surrogates can be looked at as an end-around to two big problems [READ MORE…]

  • Double-precision CPUs vs. Single-precision GPUs; HPL vs. HPL-AI HPC Benchmarks; Traditional vs. AI Supercomputers

    If you’ve wondered why GPUs are faster than CPUs, in part it’s because GPUs are asked to do less – or, to be more precise, to be less precise. Next question: So if GPUs are faster than CPUs, why aren’t GPUs the mainstream, baseline processor used in HPC server clusters? Again, in part it gets back to precision. In many workload types, particularly traditional HPC workloads, GPUs aren’t precise enough. Final question: So if GPUs and AI are inextricably linked, particularly for training machine learning models, and if GPUs are less precise than CPUs, does that mean AI is imprecise? [READ MORE…]

  • 6,000 GPUs: Perlmutter to Deliver 4 Exaflops, Top Spot in AI Supercomputing

    The U.S. National Energy Research Scientific Computing Center today unveiled the Perlmutter HPC system, a beast of a machine powered by 6,159 Nvidia A100 GPUs and delivering 4 exaflops of mixed precision performance. Perlmutter is based on the HPE Cray Shasta platform with Slingshot interconnect, and it is a heterogeneous system with both GPU-accelerated and CPU-only nodes. The system is being installed in two phases – today’s unveiling is Phase 1, which includes the system’s GPU-accelerated nodes and scratch file system. Phase 2 will add CPU-only nodes later in 2021. “That makes Perlmutter the fastest system on the planet on the 16- and 32-bit [READ MORE…]
