• Intel Confirms Damkroger Out as Head of HPC; McVeigh to Lead Newly Formed Super Compute Group

    Intel has confirmed a story run earlier today on insideHPC that Trish Damkroger will leave her position as vice president and general manager of Intel’s High Performance Computing (HPC) Group. An email sent by Intel’s communications office to insideHPC today said “Intel recently made some changes to our Accelerated Computing Systems and Graphics organization (AXG). The changes we made were done to improve organizational workflows and enable us to scale [READ MORE…]

Featured Stories

  • Join insideHPC at Today’s HPE Exascale Day Broadcast

    Exascale Day, today, is a special occasion for HPE, which isn’t surprising considering that the company is the prime contractor for two of the first three exascale systems scheduled to be installed in the U.S. (Frontier at Oak Ridge National Laboratory and El Capitan at Lawrence Livermore National Laboratory) and is heavily involved in a third such system, Aurora, at Argonne National Lab. In celebration of the onset of exascale, HPE [READ MORE…]

  • Comparing Price-performance of HPE GreenLake for HPC vs. the Public Cloud

    [SPONSORED POST] In this article, Max Alt, HPE’s Distinguished Technologist and Director, Hybrid HPC, discusses recent real-world tests measuring the price-performance of HPC applications across a range of workloads – including High Performance Linpack (HPL) and the OpenFOAM CFD solver. The tests compared performance on AWS and Oracle Cloud solutions powered by Intel® processors against HPE GreenLake for HPC solutions powered by AMD EPYC™ processors. (A sketch of the price-performance arithmetic appears after this list.)

  • Eni Upgrades HPE HPC Infrastructure via GreenLake

    Hewlett Packard Enterprise (NYSE: HPE) today announced an upgrade of the supercomputer system of Eni, the Italian multinational supermajor energy company. The upgrade of the company’s supercomputer, HPC4, will be delivered as a service through the HPE GreenLake edge-to-cloud platform and is intended to increase performance and double storage capacity, improving the accuracy of image-intensive modeling and simulations for complex energy research. Eni’s new HPC4 is built with 1,500 customized nodes [READ MORE…]

  • HPC: Stop Scaling the Hard Way

    …today’s situation is clear: HPC is struggling with reliability at scale. Well over 10 years ago, Google proved that commodity hardware was both cheaper and more effective for hyperscale processing when controlled by software-defined systems, yet the HPC market persists with its old-school, hardware-based paradigm. Perhaps this is due to prevailing industry momentum or working within the collective comfort zone of established practices. Either way, hardware-centric approaches to storage resiliency [READ MORE…]
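
The price-performance metric in the HPE GreenLake item above is simply delivered performance per unit of cost. As a purely illustrative sketch of that arithmetic (the function name and all figures below are invented placeholders, not results from HPE’s tests or this article):

```python
# Hypothetical sketch of a price-performance comparison for an HPC benchmark run.
# All names and numbers are placeholders for illustration only.

def price_performance(sustained_gflops: float, hourly_cost_usd: float) -> float:
    """Delivered GFLOPS per dollar per hour -- higher is better."""
    return sustained_gflops / hourly_cost_usd

# Placeholder inputs: sustained HPL throughput and hourly cost of two hypothetical clusters.
cluster_a = price_performance(sustained_gflops=90_000.0, hourly_cost_usd=55.0)
cluster_b = price_performance(sustained_gflops=85_000.0, hourly_cost_usd=70.0)

print(f"cluster A: {cluster_a:,.0f} GFLOPS per dollar-hour")
print(f"cluster B: {cluster_b:,.0f} GFLOPS per dollar-hour")
```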

Featured Resource

UC San Diego Center for Microbiome Innovation Breaks Data Bottlenecks

In this compelling use case provided by our friends over at HPC storage solution provider Panasas, we look at how the UC San Diego Center for Microbiome Innovation (CMI) got around a number of hurdles by deploying a Panasas ActiveStor® high-performance storage solution.

Industry Perspectives

  • New, Open DPC++ Extensions Complement SYCL and C++

    In this guest article, our friends at Intel discuss how accelerated computing has diversified over the past several years given advances in CPU, GPU, FPGA, and AI technologies. This innovation drives the need for an open and cross-platform language that allows developers to realize the potential of new hardware, minimizes development cost and complexity, and maximizes reuse of their software investments.

  • insideHPC Special Report: Citizens Benefit from Public/Private Partnerships – Part 3

    This special report, sponsored by Dell Technologies, takes a look at how, now more than ever, agencies at all levels of government are teaming with private information technology (IT) organizations to leverage AI and HPC, creating and implementing solutions that not only increase safety for all but also provide a more streamlined, modern experience for citizens.

Featured from insideBIGDATA

  • Your Business’s Data Strategy is Hosed, You Just May Not Know It Yet

    In this special guest feature, Nick Bonfiglio, CEO of Syncari, discusses the key takeaway of a recent cross-functional executive panel: data interoperability is the key to effective operational data. Cloud data warehouses are here to stay, so rather than dedicating them to reporting business intelligence insights, businesses should think about their warehouse as a part […]

Editor’s Choice

  • How Machine Learning Is Revolutionizing HPC Simulations

    Physics-based simulations, that staple of traditional HPC, may be evolving toward an emerging, AI-based technique that could radically accelerate simulation runs while cutting costs. The technique, called “surrogate machine learning models,” was a focal point of a keynote on Tuesday at the International Conference on Parallel Processing by Argonne National Lab’s Rick Stevens. Stevens, ANL’s associate laboratory director for computing, environment and life sciences, said early work in “surrogates,” as the technique is called, shows speed-ups of tens of thousands of times or more and could “potentially replace simulations.” (A toy sketch of the surrogate idea appears after this list.) Surrogates can be looked at as an end-around to two big problems associated [READ MORE…]

  • Double-precision CPUs vs. Single-precision GPUs; HPL vs. HPL-AI HPC Benchmarks; Traditional vs. AI Supercomputers

    If you’ve wondered why GPUs are faster than CPUs, in part it’s because GPUs are asked to do less – or, to be more precise, to be less precise. Next question: So if GPUs are faster than CPUs, why aren’t GPUs the mainstream, baseline processor used in HPC server clusters? Again, in part it gets back to precision. In many workload types, particularly traditional HPC workloads, GPUs aren’t precise enough. Final question: So if GPUs and AI are inextricably linked, particularly for training machine learning models, and if GPUs are less precise than CPUs, does that mean AI is imprecise? [READ MORE…] (A sketch of the mixed-precision iterative refinement behind HPL-AI appears after this list.)

  • The US-China Supercomputing Race, Post-Exascale HPC Government Policy and the ‘Partitioning of the Internet’

    All over the world, HPC has burst into geopolitics. HPC – broadly defined here as advanced supercomputing combined with big AI – is at the fault lines of national and regional rivalries, particularly between the U.S. and China, expanding in power, cost, intensity and in potential impact. Which is to say that global players put supercomputing at the heart of their defense, surveillance, healthcare and economic competitiveness strategies. Is it going too far to say that supercomputing now plays a role similar to the nuclear arms and space races in the Cold War era? Just as Sputnik spurred U.S. determination [READ MORE…]

  • 6,000 GPUs: Perlmutter to Deliver 4 Exaflops, Top Spot in AI Supercomputing

    The U.S. National Energy Research Scientific Computing Center (NERSC) today unveiled the Perlmutter HPC system, a beast of a machine powered by 6,159 Nvidia A100 GPUs and delivering 4 exaflops of mixed-precision performance. (A back-of-the-envelope check on that figure appears after this list.) Perlmutter is based on the HPE Cray Shasta platform, including the Slingshot interconnect, and is a heterogeneous system with both GPU-accelerated and CPU-only nodes. The system is being installed in two phases – today’s unveiling is Phase 1, which includes the system’s GPU-accelerated nodes and scratch file system. Phase 2 will add CPU-only nodes later in 2021. “That makes Perlmutter the fastest system on the planet on the 16- and 32-bit [READ MORE…]

  • IBM Doubles Down on 1,000+ Qubit Quantum in 2023

    As expectation-setting goes in the technology industry, this is bold. At IBM’s annual Think conference, a senior systems executive reiterated the company’s intent to deliver a 1,121-qubit IBM Quantum Condor processor by 2023. In a video interview with theCUBE, technology publication SiliconANGLE Media’s livestreaming studio, IBM GM of systems strategy and development for enterprise security, Jamie Thomas, said the company is on track with its quantum roadmap – though she did not sugarcoat the challenges involved. “In terms of the roadmap around hardware, we put ourselves out there,” said Thomas. “We said we were going to get to over a [READ MORE…]
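
On the surrogate technique in the machine learning item above: the idea is to run the expensive simulation a limited number of times, fit a cheap statistical model to those input/output pairs, and then query the model in place of the simulation. A toy sketch, with an invented one-dimensional “simulation” standing in for real physics (production surrogates are typically deep neural networks or Gaussian processes, not polynomials):

```python
import numpy as np

# Toy stand-in for an expensive physics simulation (in practice, hours per run).
def expensive_simulation(x):
    return np.sin(3 * x) * np.exp(-0.3 * x)

# 1) Run the real simulation a modest number of times to collect training data.
x_train = np.linspace(0.0, 5.0, 40)
y_train = expensive_simulation(x_train)

# 2) Fit a cheap surrogate -- here a degree-9 polynomial for simplicity.
surrogate = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)

# 3) Query the surrogate instead of the simulation: microseconds, not hours.
x_new = np.array([0.7, 2.2, 4.1])
print("surrogate :", surrogate(x_new))
print("simulation:", expensive_simulation(x_new))
```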
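
The precision trade-off in the benchmarks item above is also what separates HPL from HPL-AI: HPL-AI does the heavy lifting of a dense solve in low precision, then recovers high accuracy through iterative refinement in double precision. A minimal numpy illustration of that idea (the real benchmark uses tensor-core LU factorization, not the repeated float32 solve shown here):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
b = rng.standard_normal(n)

# Fast, low-precision solve (stands in for an FP16/FP32 tensor-core factorization).
x = np.linalg.solve(A.astype(np.float32), b.astype(np.float32)).astype(np.float64)

# Iterative refinement: measure the residual in double precision, correct in low precision.
for _ in range(3):
    r = b - A @ x  # residual computed in FP64
    x += np.linalg.solve(A.astype(np.float32), r.astype(np.float32)).astype(np.float64)

print("residual norm after refinement:", np.linalg.norm(b - A @ x))
```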
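
And a back-of-the-envelope check on the Perlmutter item’s “4 exaflops of mixed precision” figure, assuming NVIDIA’s published A100 peak of 312 FP16 tensor-core teraflops (624 with structured sparsity); those per-GPU peaks come from NVIDIA’s A100 datasheet, not from the article:

```python
# Back-of-the-envelope: Perlmutter's mixed-precision peak from its GPU count.
gpus = 6_159
peaks_tflops = {"FP16 dense": 312, "FP16 with 2:4 sparsity": 624}  # per A100 GPU

for label, tflops in peaks_tflops.items():
    exaflops = gpus * tflops / 1_000_000  # 1 exaflops = 1,000,000 teraflops
    print(f"{label:22s}: {exaflops:.2f} exaflops")
# ~1.9 exaflops dense, ~3.8 with sparsity -- consistent with the "4 exaflops" claim.
```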

Sign up for our newsletter and get the latest big data news and analysis.