
Video: The Game-Changing Post-K Supercomputer for HPC, Big Data, and AI

Satoshi Matsuoka from RIKEN gave this talk at the Rice Oil & Gas Conference. “Rather than focusing on double-precision flops, which are of lesser utility, Post-K, especially its A64FX processor and the Tofu-D network, is designed to sustain extreme bandwidth on realistic applications, including those for oil and gas such as seismic wave propagation, CFD, and structural codes, besting its rivals by several factors in measured performance. Post-K is slated to perform 100 times faster on some key applications compared to its predecessor, the K computer, and will also likely be the premier big data and AI/ML infrastructure.”

Interview: HPC User Forum in Santa Fe to look at Supercomputing Technology Trends

Hyperion Research will host the next HPC User Forum April 1-3 in Santa Fe, New Mexico. Now in its 20th year, the HPC User Forum has grown to 150 members from government, industry, and academia. To learn more, insideHPC caught up with Steve Conway from Hyperion Research.

Video: Why InfiniBand is the Way Forward for AI and Exascale

In this video, Gilad Shainer from the InfiniBand Trade Association describes how InfiniBand offers the optimal interconnect technology for AI, HPC, and Exascale. “For AI, you need the biggest pipes in order to move those giant amounts of data to create those AI software algorithms. That’s one thing. Latency is important because you need to drive things faster. RDMA is one of the key technologies that increases the efficiency of moving data while reducing CPU overhead. And by the way, all of the AI frameworks that exist out there now support RDMA as a default element within the framework itself.”

A Look Ahead at Disruptive Technologies for HPC

In this special guest feature from Scientific Computing World, Robert Roe looks at technology that could disrupt the HPC ecosystem. “As the HPC industry reaches the end of technology scaling based on Moore’s Law, system designers and hardware manufacturers must look towards more complex technologies that can replace the gains in performance provided by transistor scaling.”

How to Design Scalable HPC, Deep Learning and Cloud Middleware for Exascale Systems

DK Panda from Ohio State University gave this talk at the Stanford HPC Conference. “This talk will focus on challenges in designing HPC, Deep Learning, and HPC Cloud middleware for Exascale systems with millions of processors and accelerators. For the HPC domain, we will discuss the challenges in designing runtime environments for MPI+X (PGAS – OpenSHMEM/UPC/CAF/UPC++, OpenMP, and CUDA) programming models, taking into account support for multi-core systems (Xeon, OpenPower, and ARM), high-performance networks, GPGPUs (including GPUDirect RDMA), and energy-awareness.”

John Shalf and Thomas Sterling to Keynote ISC 2019 in Frankfurt

Today ISC 2019 announced that its lineup of keynote speakers will include John Shalf from LBNL and Thomas Sterling from Indiana University. The event takes place June 16-20 in Frankfurt, Germany. “On June 18, John Shalf, from Lawrence Berkeley National Laboratory will offer his thoughts on how the slowdown and eventual demise of Moore’s Law will affect the prospects for high performance computing in the next decade. On June 19, Thomas Sterling will present his annual retrospective of the most important developments in HPC over the last 12 months.”

Tachyum Joins Open Euro HPC Project

Today Tachyum announced its participation in and support for the Open Euro High Performance Computing Project (OEUHPC). “Tachyum is supporting the OEUHPC Project due to its ability to satisfy the need for both scalable and converged computing, which, in part, will ultimately enable users to cost-effectively simulate, in real time, human brain-sized neural networks. Tachyum has been working to develop its ultra-low-power Prodigy Universal Processor Chip to allow system integrators to build a 32 Tensor ExaFLOPS AI supercomputer in 2020, well ahead of the scheduled EU goal to achieve 1 ExaFLOPS in 2028.”

EuroHPC Takes First Steps Towards Exascale

The European High Performance Computing Joint Undertaking (EuroHPC JU) has launched its first calls for expressions of interest, to select the sites that will host the Joint Undertaking’s first supercomputers (petascale and precursor to exascale machines) in 2020. “Deciding where Europe will host its most powerful petascale and precursor to exascale machines is only the first step in this great European initiative on high performance computing,” said Mariya Gabriel, Commissioner for Digital Economy and Society. “Regardless of where users are located in Europe, these supercomputers will be used in more than 800 scientific and industrial application fields for the benefit of European citizens.”

Exascale Computing Project updates Extreme-Scale Scientific Software Stack

Exascale computing is only a few years away. Today the Exascale Computing Project (ECP) put out the second release of its Extreme-Scale Scientific Software Stack. The E4S Release 0.2 includes a subset of ECP ST software products and demonstrates the target approach for future delivery of the full ECP ST software stack. Also available are […]

Gordon Bell Prize Highlights the Impact of AI

In this special guest feature from Scientific Computing World, Robert Roe reports on the Gordon Bell Prize finalists for 2018. “The finalists’ research ranges from AI to mixed-precision workloads, with some taking advantage of the Tensor Cores available in the latest generation of Nvidia GPUs. This highlights the impact of AI and GPU technologies, which are opening up not only new applications to HPC users but also the opportunity to accelerate mixed-precision workloads on large-scale HPC systems.”