• HPE Wins $35M HPC DR Deal with Australia’s Bureau of Meteorology

    Aug. 11, 2022 — Australia’s Bureau of Meteorology has purchased a high performance computing system from HPE in a three-year deal for US$35 million. The new system, intended to improve the resilience of Australia’s weather forecasting supercomputing capability, will augment a Cray XC40 supercomputer installed at the BOM in 2016, according to a story in the Australian publication ITnews. The publication reported that the bureau would not disclose system details. Back in 2016, the Cray XC40 replaced an Oracle/Sun HPC system. Then two years ago, the bureau said it planned a hardware enhancement to the 4.0 petaFLOPS Cray XC40 [READ MORE…]

Featured Stories

  • You’re Invited! Take the insideHPC Reader Survey

    insideHPC invites you to take our reader survey; your feedback will help us keep publishing relevant content that HPC users and vendors need. In return for your generosity, you’ll be entered into a drawing for a $200 Amazon gift card. The survey only takes about five minutes to complete and will be greatly appreciated by everyone at insideHPC! Please go to this page: https://insidehpc.com/media-survey/

  • Argonne’s Polaris Supercomputer Deployed for Scientific Research

    Argonne National Laboratory announced that the Polaris supercomputer, a 44-petaflops HPE system powered by AMD CPUs and NVIDIA GPUs, is now open to the research community. Researchers can apply for computing time through the ALCF’s Director’s Discretionary allocation program. Details on the system can be found here. The system, housed at the Argonne Leadership Computing Facility (ALCF), provides a platform for researchers to prepare codes and workloads for Argonne’s upcoming Aurora [READ MORE…]

  • @HPCpodcast: Equity, Diversity and Inclusion Strategies – A How-to Guide for HPC Managers

    While it’s safe to assume that most senior managers in the HPC community are in favor of hiring and promoting more women and people of color within their organizations, many don’t know how to make it happen. At last week’s annual meeting of the Dell Technologies HPC Community, Melyssa Fratkin of the Texas Advanced Computing Center offered this piece of advice: using same-old-same-old hiring practices won’t work.

  • Lenovo Brings a Decade of Liquid Cooling Experience to the Faster, Denser, Hotter HPC Systems of the Future

    [SPONSORED CONTENT]  HPC systems customers (and vendors) are in permanent pursuit of more compute power with equal or greater node density. But with that comes more power consumption, greater heat generation and rising cooling costs. Because of this, the IT business – with a boost from the HPC and hyperscale segments – is spiraling up the list of industries ranked by power consumption. According to ITProPortal, data center power use [READ MORE…]

Featured Resource

Supermicro and Preferred Networks (PFN) Collaborate to Develop the World’s Most Efficient Supercomputer

Supermicro and Preferred Networks (PFN) collaborated to develop the most efficient supercomputer on Earth, earning the #1 position on the Green500 list. This supercomputer, the MN-3, comprises Intel® Xeon® CPUs and MN-Core™ boards developed by Preferred Networks. In this white paper, read more about this collaboration and how a record-setting supercomputer was developed.

HPC Newsline

Industry Perspectives

  • …today’s situation is clear: HPC is struggling with reliability at scale. Well over 10 years ago, Google proved that commodity hardware was both cheaper and more effective for hyperscale processing when controlled by software-defined systems, yet the HPC market persists with its old-school, hardware-based paradigm. Perhaps this is due to prevailing industry momentum or working within the collective comfort zone of established practices. Either way, hardware-centric approaches to storage resiliency need to go.

  • New, Open DPC++ Extensions Complement SYCL and C++

    In this guest article, our friends at Intel discuss how accelerated computing has diversified over the past several years given advances in CPU, GPU, FPGA, and AI technologies. This innovation drives the need for an open and cross-platform language that allows developers to realize the potential of new hardware, minimizes development cost and complexity, and maximizes reuse of their software investments.
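
    To make that single-source, cross-platform model concrete, here is a minimal SYCL-style vector add (an illustrative sketch, not code from the Intel piece; it assumes a SYCL 2020 compiler such as DPC++). The same source can target a CPU, GPU, or FPGA, with the device chosen at runtime:

        // Minimal SYCL 2020-style vector add (illustrative sketch).
        #include <sycl/sycl.hpp>
        #include <iostream>
        #include <vector>

        int main() {
            constexpr size_t N = 1024;
            std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

            sycl::queue q;  // default selector: a GPU, other accelerator, or CPU
            {
                sycl::buffer<float, 1> A(a.data(), sycl::range<1>(N));
                sycl::buffer<float, 1> B(b.data(), sycl::range<1>(N));
                sycl::buffer<float, 1> C(c.data(), sycl::range<1>(N));
                q.submit([&](sycl::handler& h) {
                    sycl::accessor ra(A, h, sycl::read_only);
                    sycl::accessor rb(B, h, sycl::read_only);
                    sycl::accessor wc(C, h, sycl::write_only);
                    h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                        wc[i] = ra[i] + rb[i];  // runs on whatever device q selected
                    });
                });
            }  // buffer destructors copy results back into the host vectors

            std::cout << "c[0] = " << c[0] << "\n";  // expected: 3
            return 0;
        }

    The open DPC++ extensions the article describes layer on top of standard SYCL code like this rather than replacing it.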

RSS Featured from insideBIGDATA

  • Interview: Dr. Susan Hura, Chief Design Officer at Kore.ai
    I recently caught up with Dr. Susan Hura, Chief Design Officer at Kore.ai, to discuss the back-end work that goes into developing an intuitive conversational AI-supported chatbot. She also dispels some of the myths behind developing and introducing a CAI-empowered chatbot into a business’s digital platform. Whether used out-of-the-box or customized, a chatbot’s design plays […]

Editor’s Choice

  • Frontier Named No. 1 Supercomputer on TOP500 List and ‘First True Exascale Machine’

    Hamburg — This morning, AMD’s long comeback from trampled HPC also-ran – a comeback that began in 2017 when company executives told skeptical press and industry analysts to expect price/performance chip superiority over Intel – reached a high point (not to say an end point) with the news that the U.S. Department of Energy’s Frontier supercomputer, an HPE-Cray EX system powered by AMD CPUs and GPUs, has not only been named the world’s most powerful supercomputer, it also is the first system to exceed the exascale (10^18 calculations/second) milestone. This may not come as a surprise to many in the [READ MORE…]

  • Chip Geopolitics: If China Invades, Make Taiwan ‘Unwantable’ by Destroying TSMC, Military Paper Suggests

    US military planners are taking notice of a suggestion by two military scholars calling for the destruction of semiconductor foundry company Taiwan Semiconductor Manufacturing Co. (TSMC), whose fabs produce advanced microprocessors used in HPC and AI, in the event China invades the island nation. A news story in today’s edition of Data Center Times cites the Nikkei Asia news service and a paper in the U.S. Army War College’s scholarly journal, Parameters, discussing the possibility of Taiwan adopting “‘a scorched earth policy’ and wipe out its own semiconductor foundries in the wake of any Chinese invasion as a deterrent, U.S. [READ MORE…]

  • How Machine Learning Is Revolutionizing HPC Simulations

    Physics-based simulations, that staple of traditional HPC, may be evolving toward an emerging, AI-based technique that could radically accelerate simulation runs while cutting costs. Called “surrogate machine learning models,” the topic was a focal point in a keynote on Tuesday at the International Conference on Parallel Processing by Argonne National Lab’s Rick Stevens. Stevens, ANL’s associate laboratory director for computing, environment and life sciences, said early work in “surrogates,” as the technique is called, shows tens of thousands of times (and more) speed-ups and could “potentially replace simulations.” Surrogates can be looked at as an end-around to two big problems [READ MORE…]
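
    As a rough, hypothetical illustration of the idea (ours, not Stevens’): a surrogate is a cheap model fitted to a limited number of runs of an expensive simulation, then queried in its place. The C++ sketch below fits a cubic polynomial to a toy stand-in for a simulation and answers new queries from the fit:

        // Toy "surrogate" sketch: run an expensive simulation at a few points,
        // fit a cheap cubic-polynomial model by least squares, then answer
        // new queries from the fit instead of re-running the simulation.
        #include <cmath>
        #include <cstdio>
        #include <vector>

        // Stand-in for an expensive physics simulation of one parameter x.
        double expensive_simulation(double x) {
            return std::sin(x) + 0.5 * x * x;  // pretend each call takes hours
        }

        int main() {
            // 1) Run the real "simulation" at 21 training points on [0, 2].
            std::vector<double> xs, ys;
            for (int n = 0; n <= 20; ++n) {
                double x = 0.1 * n;
                xs.push_back(x);
                ys.push_back(expensive_simulation(x));
            }

            // 2) Fit y ~ c0 + c1*x + c2*x^2 + c3*x^3 via the 4x4 normal equations.
            constexpr int K = 4;
            double A[K][K] = {}, b[K] = {}, c[K] = {};
            for (size_t n = 0; n < xs.size(); ++n) {
                double p[K] = {1.0, xs[n], xs[n] * xs[n], xs[n] * xs[n] * xs[n]};
                for (int i = 0; i < K; ++i) {
                    b[i] += p[i] * ys[n];
                    for (int j = 0; j < K; ++j) A[i][j] += p[i] * p[j];
                }
            }
            // Gaussian elimination (tiny, symmetric positive definite system).
            for (int i = 0; i < K; ++i)
                for (int r = i + 1; r < K; ++r) {
                    double f = A[r][i] / A[i][i];
                    for (int j = i; j < K; ++j) A[r][j] -= f * A[i][j];
                    b[r] -= f * b[i];
                }
            for (int i = K - 1; i >= 0; --i) {
                c[i] = b[i];
                for (int j = i + 1; j < K; ++j) c[i] -= A[i][j] * c[j];
                c[i] /= A[i][i];
            }

            // 3) Query the surrogate at a new point: microseconds, not hours.
            double x = 1.23;
            double fit = c[0] + c[1] * x + c[2] * x * x + c[3] * x * x * x;
            std::printf("simulation: %.4f  surrogate: %.4f\n",
                        expensive_simulation(x), fit);
            return 0;
        }

    In the work Stevens describes, the cheap model is a machine learning model trained on many simulation runs rather than a hand-rolled polynomial, which is where the reported speed-ups come in.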

  • Double-precision CPUs vs. Single-precision GPUs; HPL vs. HPL-AI HPC Benchmarks; Traditional vs. AI Supercomputers

    If you’ve wondered why GPUs are faster than CPUs, in part it’s because GPUs are asked to do less – or, to be more precise, to be less precise. Next question: So if GPUs are faster than CPUs, why aren’t GPUs the mainstream, baseline processor used in HPC server clusters? Again, in part it gets back to precision. In many workload types, particularly traditional HPC workloads, GPUs aren’t precise enough. Final question: So if GPUs and AI are inextricably linked, particularly for training machine learning models, and if GPUs are less precise than CPUs, does that mean AI is imprecise? [READ MORE…]
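
    To see the precision trade-off in a few lines (a toy C++ example of ours, not from the article): the same running sum drifts badly in 32-bit floats while staying essentially exact in 64-bit doubles.

        // Toy comparison of 32-bit vs. 64-bit accumulation (illustrative only).
        #include <cstdio>

        int main() {
            float  s32 = 0.0f;  // single precision, the GPU-friendly default
            double s64 = 0.0;   // double precision, the traditional HPC default
            for (int i = 0; i < 10000000; ++i) {  // add 0.1 ten million times
                s32 += 0.1f;
                s64 += 0.1;
            }
            // Exact answer: 1,000,000. The float sum drifts off by tens of
            // thousands; the double sum is off by less than a thousandth.
            std::printf("float : %.4f\n", s32);
            std::printf("double: %.4f\n", s64);
            return 0;
        }

    HPL ranks machines on 64-bit arithmetic like the second accumulator, while HPL-AI permits mixed, lower-precision arithmetic closer to the first, which is a large part of why the two benchmarks rate the same systems so differently.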

  • 6,000 GPUs: Perlmutter to Deliver 4 Exaflops, Top Spot in AI Supercomputing

    The U.S. National Energy Research Scientific Computing Center today unveiled the Perlmutter HPC system, a beast of a machine powered by 6,159 Nvidia A100 GPUs and delivering 4 exaflops of mixed-precision performance. Perlmutter is based on the HPE Cray Shasta platform, including the Slingshot interconnect, and is a heterogeneous system with both GPU-accelerated and CPU-only nodes. The system is being installed in two phases – today’s unveiling is Phase 1, which includes the system’s GPU-accelerated nodes and scratch file system. Phase 2 will add CPU-only nodes later in 2021. “That makes Perlmutter the fastest system on the planet on the 16- and 32-bit [READ MORE…]
