Pavel Shamis from ARM Research presented this talk at the OpenFabrics Workshop. “With the emerging availability of server platforms based on the ARM CPU architecture, it is important to understand how ARM integrates with the RDMA hardware and software ecosystem. In this talk, we will give an overview of the ARM architecture and system software stack. We will discuss how the ARM CPU interacts with network devices and accelerators. In addition, we will share our experience in enabling the RDMA software stack (OFED/MOFED Verbs) and one-sided communication libraries (Open UCX, OpenSHMEM/SHMEM) on ARM and share preliminary evaluation results.”
ARM has taken a step into the artificial intelligence (AI) market with the announcement of DynamIQ, a new microarchitecture designed specifically for AI workloads. “DynamIQ technology is a monumental shift in multi-core microarchitecture for the industry and the foundation for future ARM Cortex-A processors. The flexibility and versatility of DynamIQ will redefine the multi-core experience across a greater range of devices from edge to cloud across a secure, common platform.”
Over at the SUSE Blog, Jay Kruemcke writes that the High-Performance Computing Module (HPC Module) for SUSE Linux Enterprise (SLES) is now available for 64-bit ARM (AArch64) systems. The HPC Module is delivered as an add-on product to SUSE Linux Enterprise Server. “In summary, the HPC module allows us to keep the content closer to what’s happening in the HPC community upstream, providing more leading-edge tools in a more manageable fashion, leveraging a different lifecycle than the base SUSE Linux Enterprise Server. The new HPC module contains packages to optimize and manage HPC systems, and build HPC applications – building a bridge between the base server and an HPC stack (such as the stack provided by OpenHPC). This journey has started – some packages have already been made public and we have much more in the works and in our release queue.”
In this podcast, the Radio Free HPC team looks at a set of IT and Science stories. Microsoft Azure is making a big move to GPUs and the OCP Platform as part of their Project Olympus. Meanwhile, Huawei is gaining market share in the server market and IBM is bringing storage to the atomic level.
In this video, Ricard Borrell from the Barcelona Supercomputing Center describes how the Mont-Blanc Project’s Industrial End User Group is advancing HPC on ARM-based platforms with the TermoFluids CFD code.
“Developing a supercomputer that is many times faster than any of those currently available is clearly a challenging process and involves leveraging Fujitsu’s top hardware and software talent, as well as the help of partner companies such as ARM,” said Naoki Shinjo, SVP, Head of Next Generation Technical Computing Unit, Fujitsu.
In this video from KAUST, Professor Thomas Sterling, Professor of Intelligent Systems Engineering at Indiana University, shares his thoughts on new approaches to energy efficient supercomputing. “Our technical strategy focuses on the research and development of advanced technologies for extreme-scale computing and future exascale systems, including the following key elements: Execution Models; Runtime Systems; Graph Processing; Programming Interfaces; Compilers, Libraries, and Languages; Systems Architecture (Architecture, Power/Energy, Fault Tolerance, Networking), and Extreme Scale Applications and Visualization.”
Francis Lam from Huawei presented this talk at the Stanford HPC Conference. “High performance computing is rapidly finding new uses in many applications and businesses, enabling the creation of disruptive products and services. Huawei, a global leader in information and communication technologies, brings a broad spectrum of innovative solutions to HPC. This talk examines Huawei’s world class HPC solutions and explores creative new ways to solve HPC problems.”
A technology-leading Fortune 100 company has deployed over 30,000 Supermicro MicroBlade servers at its Silicon Valley data center facility with a Power Usage Effectiveness (PUE) of 1.06 to support the company’s growing compute needs. Compared to a traditional data center running at a PUE of 1.49 or more, the new data center achieves an 88 percent improvement in overall energy efficiency. When the build-out is complete at a 35 megawatt IT load, the company is targeting $13.18M in savings per year in total energy costs across the entire data center.
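The cited figures are internally consistent under one plausible reading: the 88 percent improvement compares the non-IT overhead (PUE minus 1.0) of the two facilities, and the dollar savings assume an electricity rate of roughly $0.10/kWh (the rate is not stated in the article, so it is an assumption here). A minimal sketch of that arithmetic:

```python
# Worked check of the article's efficiency and cost figures.
# Assumptions (not stated in the article): the 88% compares non-IT
# overhead, and electricity costs $0.10/kWh.

IT_LOAD_MW = 35.0   # stated IT load at full build-out
PUE_NEW = 1.06      # new MicroBlade facility
PUE_OLD = 1.49      # traditional data center baseline

# Non-IT overhead (cooling, power conversion, etc.) per unit of IT load
# is PUE - 1.0; compare how much of that overhead is eliminated.
overhead_reduction = ((PUE_OLD - 1.0) - (PUE_NEW - 1.0)) / (PUE_OLD - 1.0)
print(f"overhead reduction: {overhead_reduction:.0%}")  # 88%

# Annual energy saved = IT load x PUE difference x hours per year.
HOURS_PER_YEAR = 8760
RATE_USD_PER_KWH = 0.10  # assumed electricity rate
mwh_saved = IT_LOAD_MW * (PUE_OLD - PUE_NEW) * HOURS_PER_YEAR
savings_usd = mwh_saved * 1000 * RATE_USD_PER_KWH
print(f"annual savings: ${savings_usd / 1e6:.2f}M")  # $13.18M
```

At that assumed rate, the 0.43 PUE gap over a 35 MW IT load works out to about 131,800 MWh per year, matching the $13.18M figure almost exactly.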
“This is an exciting time in high performance computing,” said Prof Simon McIntosh-Smith, leader of the project and Professor of High Performance Computing at the University of Bristol. “Scientists have a growing choice of potential computer architectures to choose from, including new 64-bit ARM CPUs, graphics processors, and many-core CPUs from Intel. Choosing the best architecture for an application can be a difficult task, so the new Isambard GW4 Tier 2 HPC service aims to provide access to a wide range of the most promising emerging architectures, all using the same software stack.”