• Quantum in the News: IBM Touts ‘Entanglement Forging’ Simulation; Multiverse Launches Quantum-based Stock Valuations

    Quantum is in the news this week, with IBM today announcing new research on its “entanglement forging” simulation method and Multiverse Computing launching a quantum-based method for financial institutions to calculate fair-price stock valuations. At IBM, “entanglement forging” produced a “remarkably accurate” simulation of a water molecule using half as many qubits on IBM’s 27-qubit Falcon quantum processor, according to the company. IBM said the new simulation method could represent a step toward achieving quantum advantage, the point at which a quantum computer can perform a task much faster than a classical computer. The irony in this case is that entanglement [READ MORE…]

Featured Stories

  • @HPCpodcast: Zettascale Is Coming – But What About Exascale?

    After SC21, Patrick Kennedy at Serve the Home got a scoop when he met with Raja Koduri, SVP/GM of Intel’s Accelerated Computing Systems and Graphics (AXG) Group, to discuss Intel’s zettascale projections and plans, anticipating delivery by 2027. Or maybe 2028. By way of definition, a zettaflop is 1,000 exaflops, or one sextillion (10²¹) floating point operations per second, a thousand times more powerful than an exascale system. But is [READ MORE…]

  • ORNL: Updated Exascale Earth Simulation Model Delivers 2X Speed

    Oak Ridge National Laboratory announced today that a new version of the Energy Exascale Earth System Model, or E3SM, is two times faster than an earlier version released in 2018. Earth system models have weather-scale resolution and use advanced computers to simulate aspects of Earth’s variability and anticipate decadal changes that will critically impact the U.S. energy sector in coming years. Scientists at the Department of Energy’s Oak Ridge [READ MORE…]

  • Dell Technologies Interview: How Cambridge University Pushed the Wilkes3 Supercomputer to No. 4 on the Green500

    [SPONSORED CONTENT] In this interview conducted on behalf of Dell Technologies, insideHPC spoke with Dr. Paul Calleja, director of Research Computing Services at the University of Cambridge, about the Wilkes3 supercomputer, currently ranked No. 4 on the Green500 list of the world’s most energy-efficient supercomputers. Dr. Calleja discusses how he and his team developed a low-power strategy for the 80-node Wilkes3 system through the adoption of GPUs and lower [READ MORE…]

  • HPC Cluster Management Software Company Bright Computing Acquired by NVIDIA

    HPC cluster management software company Bright Computing has been acquired by GPU powerhouse NVIDIA. Used by more than 700 companies, Bright Cluster Manager is now part of NVIDIA’s software stack for accelerated computing. The companies declined to disclose the terms of the deal other than to say that Bright’s employees will join NVIDIA. “We’ve been working with Bright for more than a decade as they integrated their software with our GPUs, [READ MORE…]

Featured Resource

Deep Learning for Natural Language Processing – Choosing the Right GPU for the Job

In this new whitepaper from our friends over at Exxact Corporation, we take a look at the important topic of deep learning for Natural Language Processing (NLP) and choosing the right GPU for the job. Focus is given to the latest developments in neural networks and deep learning systems, in particular the neural network architecture known as the transformer. Researchers have shown that transformer networks are particularly well suited for parallelization on GPU-based systems, as the sketch below illustrates.
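To make the parallelization point concrete, here is a minimal NumPy sketch (ours, not the whitepaper’s) of scaled dot-product attention, the core operation of the transformer. Unlike a recurrent network, which must process tokens one after another, attention computes every position at once in a few large matrix multiplications, exactly the dense arithmetic GPUs are built for. All sizes and names below are illustrative only.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (seq_len, d) arrays for one attention head."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)   # (seq_len, seq_len) in a single matmul
    # Row-wise softmax over the attention scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V              # second matmul: weighted sum of values

# Illustrative sizes: a 128-token sequence with 64-dimensional heads.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((128, 64)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (128, 64)
```

On a GPU those two matrix multiplications map directly onto massively parallel hardware; there is no sequential dependency between positions, which is why transformers scale so well on GPU-based systems.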

Industry Perspectives

  • HPC: Stop Scaling the Hard Way

    …today’s situation is clear: HPC is struggling with reliability at scale. Well over 10 years ago, Google proved that commodity hardware was both cheaper and more effective for hyperscale processing when controlled by software-defined systems, yet the HPC market persists with its old-school, hardware-based paradigm. Perhaps this is due to prevailing industry momentum or working within the collective comfort zone of established practices. Either way, hardware-centric approaches to storage resiliency need to go.

  • New, Open DPC++ Extensions Complement SYCL and C++

    In this guest article, our friends at Intel discuss how accelerated computing has diversified over the past several years given advances in CPU, GPU, FPGA, and AI technologies. This innovation drives the need for an open and cross-platform language that allows developers to realize the potential of new hardware, minimizes development cost and complexity, and maximizes reuse of their software investments.

Editor’s Choice

  • Exascale: Rumors Circulate in HPC Community Regarding Frontier’s Status

    By now you may have expected a triumphant announcement from the U.S. Department of Energy that the Frontier supercomputer, slated to be installed by the end of 2021 as the first U.S. exascale-class system, has been stood up with all systems go. But as of now, DOE (whose Oak Ridge National Laboratory will house Frontier) is forgoing a “mission accomplished” announcement and instead has issued a somewhat formal statement about Frontier’s status. Left unaddressed are rumors circulating through the HPC community of difficulties encountered in the late stages of Frontier system integration and fine-tuning. Here’s the official statement on [READ MORE…]

  • insideHPC and OrionX.net Launch the @HPCpodcast

    insideHPC, in association with the technology analyst firm OrionX.net, today announced the launch of the @HPCpodcast, featuring OrionX.net analyst Shahin Khan and Doug Black, insideHPC’s editor-in-chief. @HPCpodcast is intended to be a lively and informative forum examining key technology trends driving high performance computing and artificial intelligence. Each podcast will feature Khan and Black’s comments on the latest HPC news and also a deeper dive into a focused topic. In our first @HPCpodcast episode, we talk about a recent spate of good news for Intel before taking up one of the hottest areas of the advanced computing arena: new HPC-AI [READ MORE…]

  • How Machine Learning Is Revolutionizing HPC Simulations

    Physics-based simulations, that staple of traditional HPC, may be evolving toward an emerging, AI-based technique that could radically accelerate simulation runs while cutting costs. The technique, called “surrogate machine learning models,” was a focal point of a keynote on Tuesday at the International Conference on Parallel Processing by Argonne National Lab’s Rick Stevens. Stevens, ANL’s associate laboratory director for computing, environment and life sciences, said early work in “surrogates” shows speed-ups of tens of thousands of times and more and could “potentially replace simulations” (a minimal sketch of the idea appears after this list). Surrogates can be looked at as an end-around to two big problems associated [READ MORE…]

  • Double-precision CPUs vs. Single-precision GPUs; HPL vs. HPL-AI HPC Benchmarks; Traditional vs. AI Supercomputers

    If you’ve wondered why GPUs are faster than CPUs, in part it’s because GPUs are asked to do less – or, to be more precise, to be less precise. Next question: So if GPUs are faster than CPUs, why aren’t GPUs the mainstream, baseline processor used in HPC server clusters? Again, in part it gets back to precision. In many workload types, particularly traditional HPC workloads, GPUs aren’t precise enough (see the precision sketch after this list). Final question: So if GPUs and AI are inextricably linked, particularly for training machine learning models, and if GPUs are less precise than CPUs, does that mean AI is imprecise? [READ MORE…]

  • 6,000 GPUs: Perlmutter to Deliver 4 Exaflops, Top Spot in AI Supercomputing

    The U.S. National Energy Research Scientific Computing Center today unveiled the Perlmutter HPC system, a beast of a machine powered by 6,159 Nvidia A100 GPUs and delivering 4 exaflops of mixed-precision performance. Perlmutter, based on the HPE Cray Shasta platform with the Slingshot interconnect, is a heterogeneous system with both GPU-accelerated and CPU-only nodes. The system is being installed in two phases – today’s unveiling is Phase 1, which includes the system’s GPU-accelerated nodes and scratch file system. Phase 2 will add CPU-only nodes later in 2021. “That makes Perlmutter the fastest system on the planet on the 16- and 32-bit [READ MORE…]
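As promised in the surrogate-models item above, here is a minimal, self-contained sketch of the idea: pay for a limited number of expensive simulation runs, fit a cheap ML regressor to the resulting input/output pairs, then answer new queries from the regressor instead of the solver. The “simulation” below is a toy stand-in of our own, not any code from the Argonne work.

```python
import numpy as np

def expensive_simulation(x):
    # Toy stand-in for a costly physics solver: a smooth nonlinear response.
    return np.sin(3 * x) + 0.5 * x**2

# 1. Run the real simulation a limited number of times to build a dataset.
rng = np.random.default_rng(1)
X_train = rng.uniform(-2, 2, size=200)
y_train = expensive_simulation(X_train)

# 2. Fit a cheap surrogate -- here a least-squares polynomial regressor.
surrogate = np.poly1d(np.polyfit(X_train, y_train, deg=9))

# 3. New queries now cost a polynomial evaluation, not a solver run.
X_test = np.linspace(-2, 2, 5)
print(np.c_[expensive_simulation(X_test), surrogate(X_test)])  # near-identical columns
```

Step 3 is where the acceleration comes from: once trained, the surrogate answers queries at a tiny fraction of a solver run’s cost, which is the scale of speed-up the keynote described.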
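And here is the precision sketch promised in the CPU-vs-GPU item: a few lines showing what single precision (float32, roughly 7 decimal digits, long the GPU’s native territory) gives up against double precision (float64, roughly 16 digits, the traditional HPC baseline). The numbers are illustrative only.

```python
import numpy as np

# float32 has a 24-bit significand: near 1e8 its representable values are
# spaced 8 apart, so adding 1 changes nothing. float64 keeps the 1 easily.
big32 = np.float32(100_000_000.0)
print(big32 + np.float32(1.0) == big32)   # True: the +1 vanishes
print(np.float64(100_000_000.0) + 1.0)    # 100000001.0

# Accumulating many small terms drifts in float32 because every addition
# is rounded to ~7 digits; the same sum in float64 is effectively exact.
total32 = np.float32(0.0)
for _ in range(1_000_000):
    total32 += np.float32(0.0001)
print(total32)                            # visibly off from 100.0
print(1_000_000 * 0.0001)                 # 100.0 in float64
```

This trade-off is what separates the HPL benchmark (which demands double precision) from HPL-AI (which permits the mixed, lower-precision arithmetic that AI training tolerates and GPUs excel at).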
