Interview: The Importance of Message Passing Interface to Supercomputing

In this video, Mike Bernhardt from the Exascale Computing Project catches up with ORNL’s David Bernholdt at SC18. They discuss the conference, his career, the evolution and significance of the Message Passing Interface (MPI) in parallel computing, and how ECP has influenced his team’s efforts.
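
For readers new to MPI, the sketch below (our own illustration, not code from the interview) shows the core of the programming model: every process initializes the library, learns its rank and the size of the communicator, and shares data only through explicit MPI calls.

    /* Minimal MPI sketch (illustrative): each rank identifies itself and
       rank 0 collects a sum with MPI_Reduce. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */

        int one = 1, total = 0;
        /* Sum one value from every rank onto rank 0. */
        MPI_Reduce(&one, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("Hello from %d ranks (reduce total = %d)\n", size, total);

        MPI_Finalize();
        return 0;
    }

Built with an MPI compiler wrapper such as mpicc and launched with, for example, mpirun -n 4 ./hello, every rank runs the same program and coordinates solely through the MPI calls.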

Red Hat Steps Up with HPC Software Solutions at SC18

In this video from SC18 in Dallas, Yan Fisher and Dan McGuan from Red Hat describe the company’s powerful software solutions for HPC and AI workloads. “All supercomputers on the coveted Top500 list run on Linux, a scalable operating system that has matured over the years to run some of the most critical workloads and in many cases has displaced proprietary operating systems in the process. For the past two decades, Red Hat Enterprise Linux has served as the foundation for building software stacks for many supercomputers. We are looking to continue this trend with the next generation of systems that seek to break the exascale threshold.”

XTREME-D IaaS Platform Works to Simplify HPC Cloud Cluster Management

An IaaS platform can help keep HPC cloud cluster users out of the cluster management business. A new white paper from XTREME-D, “Point and Click HPC: The XTREME-Stargate IaaS Platform,” explores how the Stargate platform, which provides a web portal to cluster resources, can increase user efficiency, eliminate cluster administration costs, and support a “pay-as-you-go” cloud model, simplifying HPC cloud clusters and making them more accessible.

Video: Scientific Benchmarking of Parallel Computing Systems

In this video, Torsten Hoefler from ETH Zurich presents: Scientific Benchmarking of Parallel Computing Systems. “Measuring and reporting performance of parallel computers constitutes the basis for scientific advancement of high-performance computing. Most scientific reports show performance improvements of new techniques and are thus obliged to ensure reproducibility or at least interpretability. Our investigation of a stratified sample of 120 papers across three top conferences in the field shows that the state of the practice is not sufficient.”
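
One practice the talk advocates is reporting robust statistics, such as the median of many timed runs, rather than a single measurement or the mean. The sketch below is our own illustration of that idea for an MPI code timed with MPI_Wtime; it is not code from the talk or the paper.

    /* Illustrative timing sketch: repeat a kernel several times and report
       the median wall-clock time. The kernel is a placeholder workload. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_double(const void *a, const void *b) {
        double d = *(const double *)a - *(const double *)b;
        return (d > 0) - (d < 0);
    }

    static void kernel(double *v, int n) {      /* placeholder workload */
        for (int i = 0; i < n; i++) v[i] = v[i] * 1.000001 + 1.0;
    }

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        enum { RUNS = 31, N = 1 << 20 };
        double *v = malloc(N * sizeof *v);
        double t[RUNS];

        for (int i = 0; i < N; i++) v[i] = 1.0;

        for (int r = 0; r < RUNS; r++) {
            MPI_Barrier(MPI_COMM_WORLD);        /* synchronize before timing */
            double t0 = MPI_Wtime();
            kernel(v, N);
            t[r] = MPI_Wtime() - t0;
        }

        qsort(t, RUNS, sizeof t[0], cmp_double);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            printf("median kernel time: %.6f s over %d runs\n", t[RUNS / 2], RUNS);

        free(v);
        MPI_Finalize();
        return 0;
    }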

OCF Deploys Largest IBM POWER9 Machine Learning Cluster in the UK

With a new upgrade, the University of Birmingham is set to benefit from the largest IBM POWER9 machine learning cluster in the UK, delivering unprecedented performance for AI workloads. Working with HPC integrator OCF, the University will integrate a total of 11 IBM POWER9-based IBM Power Systems servers into its existing HPC infrastructure. “With our early deployment of the two IBM POWER9 servers we have seen what is possible. By scaling up, we can keep pace with the escalating demand and offer the computational capacity and capability to attract leading researchers to the University.”

VMware Powers Machine Learning & HPC Workloads

In this video from SC18 in Dallas, Ziv Kalminovich from VMware describes how the company’s powerful virtualization capabilities bring flexibility and performance to HPC workloads. “With VMware, you can capture the benefits of virtualization for HPC workloads while delivering performance that is comparable to bare-metal. Our approach to virtualizing HPC adds a level of flexibility, operational efficiency, agility and security that cannot be achieved in bare-metal environments—enabling faster time to insights and discovery.”

Intel Pushes the Envelope at SC18

Intel has a long history of making important announcements at the annual Supercomputing conference, and this year was no exception. This guest post from Intel covers what new technology was front and center from Intel at SC18, including its Cascade Lake advanced performance processors, Intel Optane Persistent Memory and more. Learn more about these new technologies designed to accelerate the convergence of high-performance computing and AI.

Video: How OpenACC Enables Scientists to port their codes to GPUs and Beyond

In this video from SC18, Jack Wells from ORNL describes how OpenACC enables scientists to port their codes to GPUs and other HPC platforms. “OpenACC, a directive-based high-level parallel programming model, has gained rapid momentum among scientific application users – the key drivers of the specification. The user-friendly programming model has facilitated acceleration of over 130 applications including CAM, ANSYS Fluent, Gaussian, VASP, Synopsys on multiple platforms and is also seen as an entry-level programming model for the top supercomputers (Top500 list) such as Summit, Sunway TaihuLight, and Piz Daint. As in previous years, this BoF invites scientists, programmers, and researchers to discuss their experiences in adopting OpenACC for scientific applications, learn about the roadmaps from implementers and the latest developments in the specification.”
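
As an illustration of the directive-based style (a hypothetical example, not taken from the talk), a serial C loop can be offloaded by adding a single OpenACC pragma; compilers without OpenACC support simply ignore the directive and run the loop serially.

    /* Minimal OpenACC sketch (illustrative): offload a saxpy-style loop
       with one directive; the clauses describe data movement. */
    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static float x[N], y[N];
        float a = 2.0f;

        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* copyin/copy clauses move data to and from the accelerator. */
        #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
        for (int i = 0; i < N; i++)
            y[i] = a * x[i] + y[i];

        printf("y[0] = %f\n", y[0]);  /* expect 4.0 */
        return 0;
    }

With an OpenACC-capable compiler (for example, the PGI/NVIDIA compilers with -acc, or GCC with -fopenacc) the annotated loop is compiled for the accelerator; the same source otherwise builds and runs unchanged on the CPU.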

Equus Rolls out G2660 2U 2xGPU Server for HPC & AI

Today Equus Compute Solutions rolled out its new G2660 2U 2xGPU server, ideal for artificial intelligence and deep learning environments. This GPU platform offers higher performance, reduced rack space requirements, and lower power consumption compared with traditional CPU-centric server platforms. “Our customers have been asking for the flexibility to source GPUs in different ways on high performance servers,” said Lee Abrahamson, CTO of Equus Compute Solutions. “Our GPU servers, such as the G2660 server, are the ideal cost-optimized solutions for a wide range of applications and workloads. At the same time, these innovative platforms provide benefits of scale and volume, component standardization, ease of service logistics, and the means to avoid vendor lock-in.”

Video: Arm + Lustre in HPC

In this video from the DDN booth at SC18, Brent Gorda from Arm presents: Arm + Lustre in HPC. At the show, DDN announced that its Whamcloud division is delivering professional support for Lustre clients on Arm architectures. With this support offering, organizations can confidently use Lustre in production environments, introduce new clients into existing Lustre infrastructures, and deploy Arm-based clusters of any size within test, development or production environments. As the use of Lustre continues to expand across HPC, artificial intelligence and data-intensive, performance-driven applications, the deployment of alternative architectures is on the rise.