• Can HPE GreenLake for HPC Deliver a Simpler User Experience than Public Cloud?

    [Sponsored Post] When looking for a simple HPC user experience, many of us naturally think of public cloud. But on-premises solutions like HPE GreenLake for HPC, powered by AMD EPYC processors, now offer the advantages of public cloud without compromising performance. HPE GreenLake for HPC is designed to deliver the benefits of HPC without the deployment challenges: a consumption-based solution that is fully managed and operated for you, just like public cloud.

Featured Stories

  • @HPCpodcast: Will the Metaverse Hit Next-Big-Thing Status?

    Is the metaverse really real? Sorry – it’s not real, though ultra-realistic virtual experiences are a big part of its promise. But will it be the next phase of the internet, the next big market opportunity in tech, HPC-AI included (as Nvidia asserts)? Ultimately, will it enable us to escape this world and enter a world of our own, personal choosing? Or is it hype — will there even be [READ MORE…]

  • MLCommons Releases MLPerf Training v1.1 AI Benchmarks

    San Francisco — Dec. 1, 2021 – Today, MLCommons, the open engineering consortium, released new results for MLPerf Training v1.1, the organization’s machine learning training performance benchmark suite. MLPerf Training measures the time it takes to train machine learning models to a standard quality target in a variety of tasks including image classification, object detection, NLP, recommendation, and reinforcement learning. MLPerf Training is a full system benchmark, testing machine learning [READ MORE…]

  • Daimler Selects Green Data Center for HPC

    Stuttgart, Germany – November 30, 2021: Daimler has selected consulting firm Infosys (NSE, BSE, NYSE: INFY) to transfer the car maker’s high performance computing (HPC) workloads for designing vehicles and automated driving technologies to the Norwegian green colocation provider Lefdal Mine Datacenter. The shift to Green Data Center as a Service helps Daimler deliver on its sustainability mission to become CO2 neutral by 2039. Data centers currently account for around 1 [READ MORE…]

  • Innovation Accelerated: Step Into Intel’s SC’21 Virtual Experience

    [Sponsored Post] Get the latest Intel and community technology updates in the virtual HPC + AI Pavilion for Supercomputing 2021. Catch the demos, fireside chats, partner presentations, developer-led talks and the Intel keynote at whatever time works best for you.

Featured Resource

Massively Scalable Cloud Storage for Cloud Native Applications

In this comprehensive technology white paper, written by Evaluator Group, Inc. on behalf of Lenovo, we delve into OpenShift, a key component of Red Hat's portfolio of products designed for cloud native applications. It is built on top of Kubernetes, along with numerous other open source components, to deliver a consistent developer and operator platform that can run across a hybrid environment and scale to meet the demands of enterprises. Red Hat uses the open source Ceph storage technology to provide the data plane for the OpenShift environment.

HPC Newsline

Industry Perspectives

  • HPC: Stop Scaling the Hard Way

    …today’s situation is clear: HPC is struggling with reliability at scale. Well over 10 years ago, Google proved that commodity hardware was both cheaper and more effective for hyperscale processing when controlled by software-defined systems, yet the HPC market persists with its old-school, hardware-based paradigm. Perhaps this is due to prevailing industry momentum, or to working within the collective comfort zone of established practices. Either way, hardware-centric approaches to storage resiliency need to go.

  • New, Open DPC++ Extensions Complement SYCL and C++

    In this guest article, our friends at Intel discuss how accelerated computing has diversified over the past several years given advances in CPU, GPU, FPGA, and AI technologies. This innovation drives the need for an open and cross-platform language that allows developers to realize the potential of new hardware, minimizes development cost and complexity, and maximizes reuse of their software investments.
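
As a rough illustration of the portability the article describes, here is a minimal, generic SYCL 2020 vector-add sketch. It is our example, not code from Intel’s article or its new DPC++ extensions, and it assumes a SYCL 2020 compiler such as Intel’s DPC++; the point is only that one C++ source can target CPUs, GPUs, or FPGAs.

```cpp
// Minimal SYCL 2020 vector add (illustrative sketch, not from the article).
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
  constexpr size_t n = 1024;
  std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

  sycl::queue q;  // picks a default device: a GPU if available, else the CPU

  {
    // Buffers hand the host data over to the SYCL runtime for the device.
    sycl::buffer<float> buf_a(a.data(), sycl::range<1>(n));
    sycl::buffer<float> buf_b(b.data(), sycl::range<1>(n));
    sycl::buffer<float> buf_c(c.data(), sycl::range<1>(n));

    q.submit([&](sycl::handler& h) {
      sycl::accessor A(buf_a, h, sycl::read_only);
      sycl::accessor B(buf_b, h, sycl::read_only);
      sycl::accessor C(buf_c, h, sycl::write_only, sycl::no_init);
      // One work-item per element, scheduled on whatever device the queue chose.
      h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
        C[i] = A[i] + B[i];
      });
    });
  }  // buffers go out of scope here, so results are copied back into c

  std::cout << "c[0] = " << c[0] << " (expected 3)\n";
  return 0;
}
```

The kernel itself does not change when the target hardware does; only the queue’s device selection differs, which is the reuse argument the article makes.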

Editor’s Choice

  • insideHPC and OrionX.net Launch the @HPCpodcast

    insideHPC, in association with the technology analyst firm OrionX.net, today announced the launch of the @HPCpodcast, featuring OrionX.net analyst Shahin Khan and Doug Black, insideHPC’s editor-in-chief. @HPCpodcast is intended to be a lively and informative forum examining key technology trends driving high performance computing and artificial intelligence. Each podcast will feature Khan and Black’s comments on the latest HPC news as well as a deeper dive into a focused topic. In our first @HPCpodcast episode, we talk about a recent spate of good news for Intel before taking up one of the hottest areas of the advanced computing arena: new HPC-AI [READ MORE…]

  • How Machine Learning Is Revolutionizing HPC Simulations

    Physics-based simulations, that staple of traditional HPC, may be giving way to an emerging, AI-based technique that could radically accelerate simulation runs while cutting costs. The technique, called “surrogate machine learning models,” was a focal point of a keynote on Tuesday at the International Conference on Parallel Processing by Argonne National Lab’s Rick Stevens. Stevens, ANL’s associate laboratory director for computing, environment and life sciences, said early work on “surrogates” shows speed-ups of tens of thousands of times or more, and that the technique could “potentially replace simulations.” Surrogates can be looked at as an end-around to two big problems associated [READ MORE…]
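
To make the surrogate idea concrete, here is a deliberately toy sketch of our own. It is not the neural-network approach the Argonne work uses, and the function names are made up for illustration; the shared principle is that an expensive simulation is sampled offline and later queries are answered by a cheap fitted stand-in instead of re-running the simulation.

```cpp
// Toy "surrogate" sketch: tabulate an expensive simulation once, then answer
// queries by cheap interpolation. Illustrative only.
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// Stand-in for an expensive physics-based simulation of one scalar quantity
// (hypothetical; exists only for this sketch).
double expensive_simulation(double x) {
  double acc = 0.0;
  for (int i = 0; i < 200000; ++i)        // artificial cost per call
    acc += std::sin(x + 1e-6 * i);
  return acc / 200000.0;                   // roughly a smoothed sin(x)
}

// "Train" the surrogate offline: sample the simulation on a coarse grid,
// then answer queries with piecewise-linear interpolation.
struct Surrogate {
  double lo, hi;
  std::vector<double> samples;

  Surrogate(double lo_, double hi_, std::size_t n) : lo(lo_), hi(hi_), samples(n) {
    for (std::size_t i = 0; i < n; ++i) {
      double x = lo + (hi - lo) * static_cast<double>(i) / static_cast<double>(n - 1);
      samples[i] = expensive_simulation(x);  // expensive, but paid once, offline
    }
  }

  double operator()(double x) const {
    double t = (x - lo) / (hi - lo) * static_cast<double>(samples.size() - 1);
    std::size_t i = static_cast<std::size_t>(t);
    if (i + 1 >= samples.size()) return samples.back();
    double frac = t - static_cast<double>(i);
    return (1.0 - frac) * samples[i] + frac * samples[i + 1];
  }
};

int main() {
  Surrogate s(0.0, 3.14, 64);  // 64 expensive runs up front
  double x = 1.0;
  std::cout << "full simulation: " << expensive_simulation(x) << "\n"
            << "surrogate:       " << s(x) << "\n";  // close, far cheaper per query
}
```

A real surrogate replaces the lookup table with a trained neural network over many input dimensions, but the economics are the same: pay for a batch of simulation runs once, then answer questions cheaply.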

  • Double-precision CPUs vs. Single-precision GPUs; HPL vs. HPL-AI HPC Benchmarks; Traditional vs. AI Supercomputers

    If you’ve wondered why GPUs are faster than CPUs, in part it’s because GPUs are asked to do less – or, to be more precise, to be less precise. Next question: So if GPUs are faster than CPUs, why aren’t GPUs the mainstream, baseline processor used in HPC server clusters? Again, in part it gets back to precision. In many workload types, particularly traditional HPC workloads, GPUs aren’t precise enough. Final question: So if GPUs and AI are inextricably linked, particularly for training machine learning models, and if GPUs are less precise than CPUs, does that mean AI is imprecise? [READ MORE…]
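
The precision trade-off at the heart of this piece can be seen in a few lines of generic C++ (our illustration, not HPL or HPL-AI benchmark code): accumulating many small values in 32-bit floating point drifts visibly from the exact answer, while 64-bit stays close.

```cpp
// Single precision (~7 decimal digits) vs. double precision (~16 digits).
#include <iomanip>
#include <iostream>

int main() {
  const int n = 10'000'000;
  float  sum_f = 0.0f;
  double sum_d = 0.0;

  // Add 0.1 ten million times; the exact result is 1,000,000.
  for (int i = 0; i < n; ++i) {
    sum_f += 0.1f;  // rounding error accumulates quickly in 32-bit
    sum_d += 0.1;   // 64-bit stays accurate to many more digits
  }

  std::cout << std::setprecision(12)
            << "float  sum: " << sum_f << "\n"
            << "double sum: " << sum_d << "\n"
            << "exact:      " << 1000000.0 << "\n";
}
```

HPL stresses 64-bit arithmetic like the double path above; HPL-AI admits the lower-precision arithmetic that GPUs execute far faster, which is the gap the article explores.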

  • The US-China Supercomputing Race, Post-Exascale HPC Government Policy and the ‘Partitioning of the Internet’

    All over the world, HPC has burst into geopolitics. HPC – broadly defined here as advanced supercomputing combined with big AI – is at the fault lines of national and regional rivalries, particularly between the U.S. and China, and it is expanding in power, cost, intensity and potential impact. Which is to say that global players put supercomputing at the heart of their defense, surveillance, healthcare and economic competitiveness strategies. Is it going too far to say that supercomputing now plays a role similar to that of the nuclear arms and space races of the Cold War era? Just as Sputnik spurred U.S. determination [READ MORE…]

  • 6,000 GPUs: Perlmutter to Deliver 4 Exaflops, Top Spot in AI Supercomputing

    The U.S. National Energy Research Scientific Computing Center (NERSC) today unveiled the Perlmutter HPC system, a beast of a machine powered by 6,159 Nvidia A100 GPUs and delivering 4 exaflops of mixed precision performance. Perlmutter is based on the HPE Cray Shasta platform with the Slingshot interconnect, and it is a heterogeneous system with both GPU-accelerated and CPU-only nodes. The system is being installed in two phases – today’s unveiling is Phase 1, which includes the system’s GPU-accelerated nodes and scratch file system. Phase 2 will add CPU-only nodes later in 2021. “That makes Perlmutter the fastest system on the planet on the 16- and 32-bit [READ MORE…]
