• HPC as a Service to Accelerate Transformational Growth

    [SPONSORED POST] This whitepaper discusses how HPC delivered as a service through HPE GreenLake combines the power and compliance of on-premises systems with cloud-like financial flexibility, ease of management, and consumption-based pricing. HPE managed services and support help accelerate HPC time to value, giving customers a smoother, faster path to better business through HPC without upheaval.

Featured Stories

  • HPE Launches GreenLake Edge-to-Cloud Platform for Analytics and Data Protection Cloud Services

    Hewlett Packard Enterprise (NYSE: HPE) today announced new cloud services for the HPE GreenLake edge-to-cloud platform that the company said signify HPE’s entry into two software markets – unified analytics and data protection. “Together, these innovations further accelerate HPE’s transition to a cloud services company and give customers greater choice and freedom for their business and IT strategy, with an open and modern platform that provides a cloud experience everywhere,” [READ MORE…]

  • European Exascale Chip Designer SiPearl Opens 5th Center in Grenoble

    Maisons-Laffitte (France), Sept. 28, 2021 – SiPearl, whose mission is to design a high-performance, low-power microprocessor for European exascale supercomputers, has opened a design center in Grenoble, France, with the goal of recruiting 50 engineers on site by the end of 2022. Following the SiPearl facilities in Maisons-Laffitte, Duisburg, Barcelona and Sophia Antipolis, the Grenoble site gives the company access to an important recruitment pool for semiconductor technologies and HPC. “It was obvious that SiPearl [READ MORE…]

  • Accelerating Discovery and Innovation at the University of Alabama with the Dell ‘Cheaha’ Supercomputer

    The University of Alabama at Birmingham is an internationally renowned research university and academic medical center known for its innovative and interdisciplinary approach to education. More than 1,200 UAB faculty are engaged in sponsored research fueled by funding from federal, state and local agencies, industry, nonprofits and foundations. For universities around the world, a great deal of this scientific research now depends on the power of high performance computing systems [READ MORE…]

  • Preparing for Exascale: Aurora to Drive Brain Map Construction

    The U.S. Department of Energy’s Argonne National Laboratory will be home to one of the nation’s first exascale supercomputers when Aurora arrives in 2022. To prepare codes for the architecture and scale of the system, 15 research teams are taking part in the Aurora Early Science Program through the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility. With access to pre-production hardware and software, these researchers [READ MORE…]

Featured Resource

10 Questions to Ask When Starting With AI

In this insideHPC Guide, our friends over at WEKA offer 10 important questions to ask when starting with AI, specifically planning for success beyond the initial stages of a project. Reasons given for AI project failures include not having a plan ahead of time, not getting executive or business leadership buy-in, or failing to find the […]

Industry Perspectives

  • insideHPC Special Report: Citizens Benefit from Public/Private Partnerships – Part 3

    This special report, sponsored by Dell Technologies, takes a look at how, now more than ever, agencies from all levels of government are teaming with private information technology (IT) organizations to leverage AI and HPC to create and implement solutions that not only increase safety for all but also provide a more streamlined and modern experience for citizens.

  • insideHPC Special Report: Citizens Benefit from Public/Private Partnerships – Part 2

    This special report, sponsored by Dell Technologies, takes a look at how, now more than ever, agencies from all levels of government are teaming with private information technology (IT) organizations to leverage AI and HPC to create and implement solutions that not only increase safety for all but also provide a more streamlined and modern experience for citizens.

Editor’s Choice

  • How Machine Learning Is Revolutionizing HPC Simulations

    Physics-based simulation, that staple of traditional HPC, may be evolving toward an emerging AI-based technique that could radically accelerate simulation runs while cutting costs. The technique, called “surrogate machine learning models,” was a focal point of a keynote on Tuesday at the International Conference on Parallel Processing by Argonne National Lab’s Rick Stevens. Stevens, ANL’s associate laboratory director for computing, environment and life sciences, said early work on surrogates shows speed-ups of tens of thousands of times or more and that they could “potentially replace simulations.” Surrogates can be looked at as an end-around to two big problems associated [READ MORE…] (A minimal surrogate-model sketch appears after this list.)

  • Double-precision CPUs vs. Single-precision GPUs; HPL vs. HPL-AI HPC Benchmarks; Traditional vs. AI Supercomputers

    If you’ve wondered why GPUs are faster than CPUs, in part it’s because GPUs are asked to do less – or, to be more precise, to be less precise. Next question: if GPUs are faster than CPUs, why aren’t GPUs the mainstream, baseline processor used in HPC server clusters? Again, in part it gets back to precision. In many workload types, particularly traditional HPC workloads, GPUs aren’t precise enough. Final question: if GPUs and AI are inextricably linked, particularly for training machine learning models, and if GPUs are less precise than CPUs, does that mean AI is imprecise? [READ MORE…] (A short single- vs. double-precision example appears after this list.)

  • The US-China Supercomputing Race, Post-Exascale HPC Government Policy and the ‘Partitioning of the Internet’

    All over the world, HPC has burst into geopolitics. HPC – broadly defined here as advanced supercomputing combined with big AI – is at the fault lines of national and regional rivalries, particularly between the U.S. and China, expanding in power, cost, intensity and potential impact. Which is to say that global players put supercomputing at the heart of their defense, surveillance, healthcare and economic competitiveness strategies. Is it going too far to say that supercomputing now plays a role similar to the nuclear arms and space races of the Cold War era? Just as Sputnik spurred U.S. determination [READ MORE…]

  • 6,000 GPUs: Perlmutter to Deliver 4 Exaflops, Top Spot in AI Supercomputing

    The U.S. National Energy Research Scientific Computing Center today unveiled the Perlmutter HPC system, a beast of a machine powered by 6,159 Nvidia A100 GPUs and delivering 4 exaflops of mixed-precision performance. Perlmutter is based on the HPE Cray Shasta platform with the Slingshot interconnect, and it is a heterogeneous system with both GPU-accelerated and CPU-only nodes. The system is being installed in two phases – today’s unveiling is Phase 1, which includes the system’s GPU-accelerated nodes and scratch file system. Phase 2 will add CPU-only nodes later in 2021. “That makes Perlmutter the fastest system on the planet on the 16- and 32-bit [READ MORE…]

  • IBM Doubles Down on 1000+-Qubit Quantum in 2023

    As expectation-setting goes in the technology industry, this is bold. At IBM’s annual Think conference, a senior systems executive reiterated the company’s intent to deliver a 1,121-qubit IBM Quantum Condor processor by 2023. In a video interview with theCUBE, technology publication SiliconANGLE Media’s livestreaming studio, IBM GM of systems strategy and development for enterprise security, Jamie Thomas, said the company is on track with its quantum roadmap – though she did not sugarcoat the challenges involved. “In terms of the roadmap around hardware, we put ourselves out there,” said Thomas. “We said we were going to get to over a [READ MORE…]
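
Following up on the machine learning item above: the surrogate idea can be shown with a small, purely illustrative sketch in which an expensive solver is sampled a limited number of times, a cheap regression model is trained on those input/output pairs, and the trained model is then queried in place of the solver. Everything in the sketch is an assumption for illustration: the toy damped-oscillator “simulation,” the parameter ranges, and the scikit-learn MLPRegressor choice are ours, not anything from the Argonne work described in the article.

    # Illustrative surrogate-model workflow on a hypothetical toy problem.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def expensive_simulation(params):
        """Stand-in for a costly physics solver: peak-to-peak amplitude of a damped oscillator."""
        k, c = params                         # toy parameters: stiffness, damping
        t = np.linspace(0.0, 10.0, 2000)      # fine time grid stands in for the "expensive" part
        y = np.exp(-c * t) * np.cos(np.sqrt(k) * t)
        return y.max() - y.min()

    # 1) Run the real solver a limited number of times to build training data.
    rng = np.random.default_rng(0)
    X_train = rng.uniform([1.0, 0.05], [25.0, 0.5], size=(200, 2))
    y_train = np.array([expensive_simulation(p) for p in X_train])

    # 2) Fit a cheap ML surrogate on (inputs -> solver outputs).
    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
    surrogate.fit(X_train, y_train)

    # 3) Query the surrogate instead of the solver; each prediction is far cheaper
    #    than a solver run, which is where the reported speed-ups come from.
    X_new = rng.uniform([1.0, 0.05], [25.0, 0.5], size=(5, 2))
    print(surrogate.predict(X_new))
    print([expensive_simulation(p) for p in X_new])   # spot-check against the solver

In practice the solver is the costly piece (hours per run), the surrogate is typically a deep network, and the training set comes from prior simulation campaigns; the acceleration comes from replacing solver calls with model evaluations.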
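
Following up on the precision item above, here is a tiny, hedged illustration of the single- vs. double-precision gap (the NumPy calls and numbers below are ours, not the article’s). IEEE single precision (float32), the fast path on most GPUs, carries roughly 7 significant decimal digits; double precision (float64), which classic HPL and most traditional HPC codes assume, carries roughly 16.

    # Machine epsilon: the relative rounding step of each floating-point format.
    import numpy as np

    print(np.finfo(np.float32).eps)   # ~1.19e-07 -> about 7 significant decimal digits
    print(np.finfo(np.float64).eps)   # ~2.22e-16 -> about 16 significant decimal digits

    # Adding 1.0 to 100 million is absorbed (lost) in float32 but exact in float64.
    print(np.float32(1e8) + np.float32(1.0) == np.float32(1e8))   # True
    print(np.float64(1e8) + np.float64(1.0) == np.float64(1e8))   # False

Mixed-precision benchmarks such as HPL-AI recover double-precision-quality answers from fast low-precision arithmetic via iterative refinement, which is why mixed-precision exaflops figures, like the one quoted for Perlmutter above, run far higher than the same machine’s double-precision Linpack numbers.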
