The Hyperion-insideHPC Interviews: Rich Brueckner and Doug Ball Talk CFD, Autonomous Mobility and Driving Down HPC Package Sizing

Doug Ball is a leading expert in computational fluid dynamics and aerodynamic engineering, disciplines he became involved with more than 40 years ago. In this interview with the late Rich Brueckner of insideHPC, Ball discusses the increased scale and model complexity that HPC technology has come to handle and, looking to the future, his anticipation […]

SeRC Turns to oneAPI Multi-Chip Programming Model for Accelerated Research

At ISC 2020 Digital, the Swedish e-Science Research Center (SeRC) in Stockholm announced plans for its researchers to use Intel’s oneAPI unified programming model for massive simulations powered by CPUs and GPUs. The center said it chose oneAPI, designed to span CPUs, GPUs, FPGAs and other architectures and silicon, to accelerate compute for research using the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics software, developed by SeRC and first released in 1991.

‘Rocky Year’ – Hyperion’s HPC Market Update: COVID-19 Hits Q1 Revenues, Cloud HPC Boom, Shift in Server Vendor Standings

Instead of its usual mid-year HPC market update presented at the ISC conference in Frankfurt, industry analyst firm Hyperion Research has released its latest findings virtually – including estimates of COVID-19’s impact on the industry, the growth of HPC in public clouds and a significant shift in the competitive standings among the leading HPC server vendors. Taking 2019 in total, Hyperion sized the HPC server market at $13.7 billion, a record revenue figure.

NetApp Deploys Iguazio’s Data Science Platform for Optimized Storage Management

NetApp said the service, previously built on Hadoop, was also being modernized “to reduce the complexities of deploying new AI services and the costs of running large-scale analytics. In addition, the shift was needed to enable real-time predictive AI, and to abstract deployment, allowing the technology to run on multi-cloud or on premises seamlessly.”

Podcast: A Shift to Modern C++ Programming Models

In this Code Together podcast, Alice Chan from Intel and Hal Finkel from Argonne National Lab discuss how the industry is uniting to address the need for programming portability and performance across diverse architectures, particularly important with the rise of data-intensive workloads like artificial intelligence and machine learning. “We discuss the important shift to modern C++ programming models, and how the cross-industry oneAPI initiative, and DPC++, bring much-needed portable performance to today’s developers.”

How to Achieve High-Performance, Scalable and Distributed DNN Training on Modern HPC Systems

DK Panda from Ohio State University gave this talk at the Stanford HPC Conference. “This talk will focus on a range of solutions being carried out in my group to address these challenges. The solutions will include: 1) MPI-driven Deep Learning, 2) Co-designing Deep Learning Stacks with High-Performance MPI, 3) Out-of-core DNN training, and 4) Hybrid (Data and Model) parallelism. Case studies to accelerate DNN training with popular frameworks like TensorFlow, PyTorch, MXNet and Caffe on modern HPC systems will be presented.”
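To illustrate the first of the hybrid schemes the talk names – data parallelism – here is a minimal, self-contained sketch (not from the talk; the toy model, data and worker count are invented for the example). Each batch is split across workers, each worker computes gradients on its shard, and the gradients are averaged – the step that MPI-driven frameworks implement with an allreduce:

```python
# Toy sketch of data-parallel DNN training (hypothetical example).
# In a real MPI-driven setup, the averaging loop below would be an
# MPI_Allreduce across ranks rather than a Python sum.

def gradient(w, shard):
    """Gradient of mean squared error for the model y = w*x on one shard."""
    n = len(shard)
    return sum(2 * (w * x - y) * x for x, y in shard) / n

def data_parallel_step(w, data, n_workers, lr=0.01):
    """Split the batch across workers, average their local gradients
    (the allreduce step), then apply one SGD update."""
    shards = [data[i::n_workers] for i in range(n_workers)]
    grads = [gradient(w, s) for s in shards]   # each "worker" computes locally
    avg_grad = sum(grads) / n_workers          # allreduce: average gradients
    return w - lr * avg_grad

if __name__ == "__main__":
    data = [(x, 3.0 * x) for x in range(1, 9)]  # toy targets: y = 3x
    w = 0.0
    for _ in range(200):
        w = data_parallel_step(w, data, n_workers=4)
    print(round(w, 3))  # converges toward 3.0
```

With equal-size shards, the averaged gradient is identical to the full-batch gradient, which is why data parallelism preserves the sequential result while distributing the compute; model parallelism, by contrast, splits the network itself across workers.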

Appentra raises €1.8M for its Parallelware Analyzer software

Today Appentra announced that it has raised €1.8M in new funding in a round led by Armilar Venture Partners and K Fund. Appentra is a global deep-tech company that delivers products based on its Parallelware technology, a unique approach to static code analysis specialized in parallelism. The company’s stated aim is to make parallel programming easier, enabling everyone to make the best use of parallel computing hardware, from the multi-cores in a laptop to the fastest supercomputers. “During the last months we have been working hand in hand with the new investors, and we are proud to say that they will definitely bring strong expertise as international VCs specialized in B2B software companies, which will help us to fully realize Appentra’s vision.”

Video: Preparing to program Aurora at Exascale – Early experiences and future directions

Hal Finkel from Argonne gave this talk at IWOCL / SYCLcon 2020. “Argonne National Laboratory’s Leadership Computing Facility will be home to Aurora, our first exascale supercomputer. This presentation will summarize the experiences of our team as we prepare for Aurora, exploring how to port applications to Aurora’s architecture and programming models, and distilling the challenges and best practices we’ve developed to date.”

Podcast: Spack Helps Automate Deployment of Supercomputer Software

In this Let’s Talk Exascale podcast, Todd Gamblin from LLNL describes how the Spack flexible package manager helps automate the deployment of software on supercomputer systems. “After many hours building software on Lawrence Livermore’s supercomputers, in 2013 Todd Gamblin created the first prototype of a package manager he named Spack (Supercomputer PACKage manager). The tool caught on, and development became a grassroots effort as colleagues began to use the tool.”

Khronos Group Releases OpenCL 3.0

Today, the Khronos Group consortium released the OpenCL 3.0 Provisional Specifications. OpenCL 3.0 realigns the OpenCL roadmap to enable developer-requested functionality to be broadly deployed by hardware vendors, and it significantly increases deployment flexibility by empowering conformant OpenCL implementations to focus on functionality relevant to their target markets. “Many of our customers want a GPU programming language that runs on all devices, and with growing deployment in edge computing and mobile, this need is increasing,” said Vincent Hindriksen, founder and CEO of Stream HPC. “OpenCL is the only solution for accessing diverse silicon acceleration, and many key software stacks use OpenCL/SPIR-V as a backend. We are very happy that OpenCL 3.0 will drive even wider industry adoption, as it reassures our customers that their past and future investments in OpenCL are justified.”