• The Hyperion-insideHPC Interviews: NERSC’s Jeff Broughton on the End of the Top500 and Exascale Begetting Petaflops in a Rack

    The career of NERSC’s Jeff Broughton extends back to HPC ancient times (1979), when, fresh out of college, he was promoted to a project management role at Lawrence Livermore National Laboratory – a big job for a young man. Broughton has taken on big jobs in the ensuing 40 years. In this interview, he talks about such topics as the end of the Top500 list and the fallout from the U.S. Dept. of Energy’s drive to build exascale supercomputers, including petaflop machines “that sit in the size of one or two conventional racks, that will cost less than [READ MORE…]

Featured Stories

  • AMD’s Su on Xilinx Acquisition: ‘We Can Define the Future of High Performance Computing’

    Marking a new phase in the resurrection of AMD in its eternal struggle with Intel, and as reported here October 9, AMD has agreed to buy FPGA chip maker Xilinx for $35 billion – another in a series of recent technology industry acquisitions with direct bearing on HPC. “The acquisition brings together two industry leaders with complementary product portfolios and customers,” AMD said in its announcement. “AMD will offer the [READ MORE…]

  • Practical Hardware Design Strategies for Modern HPC Workloads – Part 3

    This special research report sponsored by Tyan discusses practical hardware design strategies for modern HPC workloads. As hardware has continued to develop, technologies like multi-core CPUs, GPUs, NVMe storage, and others have made new application areas possible, including accelerator-assisted HPC, GPU-based deep learning, and big data analytics systems. Unfortunately, no single general-purpose, balanced system can serve all of these applications. To achieve the best price-to-performance in each of these application verticals, attention to hardware features and design is essential. [READ MORE…]

  • IBM Announces New AI Hardware Research, Red Hat Collaborations

    At the IEEE CAS/EDS AI Compute Symposium, IBM Research introduced new technology and partnerships designed to dynamically run massive AI workloads in hybrid clouds. The company said it is developing analog AI, which combines compute and memory in a single device to alleviate “the von Neumann bottleneck,” a limitation of traditional hardware architectures in which compute and memory are segregated in different locations, with data moving back and forth [READ MORE…]

  • Transform Your Business with the Next Generation of Accelerated Computing

    In this white paper, you’ll find a compelling discussion of how Supermicro servers optimized for NVIDIA A100 GPUs are solving the world’s greatest HPC and AI challenges. As the expansion of HPC and AI poses mounting challenges to IT environments, Supermicro and NVIDIA are equipping organizations for success with world-class solutions to empower business transformation. The Supermicro team is continually testing and validating advanced hardware featuring optimized software components to support a rising number of use cases. [READ MORE…]

Featured Resource

insideHPC Dell Special Report – HPC and AI for the Era of Genomics

This white paper, sponsored by Dell Technologies, takes a deep dive into HPC and AI for life sciences in the era of genomics. 2020 will be remembered for the outbreak of the novel coronavirus, COVID-19. While infection rates grow exponentially, the race is on to find a treatment, vaccine, or cure. Governments and private organizations are teaming up to understand the basic biology of the virus and its genetic code, and to find what can stop it.

Featured from insideBIGDATA

  • NetApp AI and Run:AI Partner to Speed Up Data Science Initiatives
    NetApp, a leading cloud data services provider, has teamed up with Run:AI, a company virtualizing AI infrastructure, to allow faster AI experimentation with full GPU utilization. The partnership allows teams to speed up AI by running many experiments in parallel, with fast access to data, utilizing limitless compute resources. Run:AI enables full […]

Editor’s Choice

  • Where Have You Gone, IBM?

    The company that built the world’s nos. 2 and 3 most powerful supercomputers is, to all appearances, backing away from the supercomputer systems business. IBM, whose Summit and Sierra CORAL-1 systems set the global standard for pre-exascale supercomputing, failed to win any of the three exascale contracts, and since then it has seemingly withdrawn from the HPC systems field. This has been widely discussed within the HPC community for at least the last 18 months. In fact, an industry analyst told us that at the annual ISC Conference in Frankfurt four years ago, he was shocked when IBM told him the company was no longer interested in the HPC business per se…. [READ MORE…]

  • DOE Under Secretary for Science Dabbar’s Exascale Update: Frontier to Be First, Aurora to Be Monitored

    As Exascale Day (October 18) approaches, U.S. Department of Energy Under Secretary for Science Paul Dabbar has commented on the hottest exascale question of the day: which of the country’s first three systems will be stood up first? In a recent, far-reaching interview with us, Dabbar confirmed what has been expected for more than two months: that the first U.S. exascale system will not, as planned, be the Intel-powered Aurora system at Argonne National Laboratory. It will instead be HPE-Cray’s Frontier, powered by AMD CPUs and GPUs and designated for Oak Ridge National Laboratory. [READ MORE…]

  • Exascale Exasperation: Why DOE Gave Intel a 2nd Chance; Can Nvidia GPUs Ride to Aurora’s Rescue?

    The most talked-about topic in HPC these days – another Intel chip delay and therefore delay of the U.S.’s flagship Aurora exascale system – is something no one directly involved wants to talk about. Not Argonne National Laboratory, where Intel was to install Aurora in 2021; not the Department of Energy’s Exascale Computing Project, guiding development of a “capable exascale ecosystem”; and not DOE itself. As for Intel, a spokesperson earlier this week promised to “circle back shortly,” but hadn’t as of press time. In lieu of information (other than rote answers from public relations Q&A sheets issued three weeks [READ MORE…]

  • ARM-based Fugaku Supercomputer on Summit of New Top500 – Surpasses Exaflops on AI Benchmark

    The new no. 1 system on the updated TOP500 list of the world’s most powerful supercomputers, released this morning, is Fugaku, a machine built at the RIKEN Center for Computational Science in Kobe, Japan. The new top system turned in a High Performance LINPACK (HPL) result of 415.5 petaflops (nearly half an exaflop), outperforming Summit, the former no. 1 system housed at the U.S. Dept. of Energy’s Oak Ridge National Laboratory, by a factor of 2.8. Fugaku, powered by Fujitsu’s 48-core A64FX SoC, is the first ARM-based system to take the TOP500 top spot. [READ MORE…]
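
    As a quick check on that speedup figure (assuming Summit’s June 2020 HPL score of 148.6 petaflops, a number not given in the item above), the ratio works out as:

        415.5 PF / 148.6 PF ≈ 2.8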

  • Hats Over Hearts

    It is with great sadness that we announce the death of Rich Brueckner. His passing is an unexpected and enormous blow to both his family and the HPC Community. In his coverage of the HPC market, he was tireless and thorough. What Rich brought to the table was a deep curiosity about computing and science, and the people that made the two happen. [READ MORE…]
