Job of the Week: Application Engineer for Arm in Austin

Arm in Austin, Texas is seeking an Application Engineer in our Job of the Week. “The Applications Engineer is part of a focused professional services team within the Development Solutions Group that has responsibility for supporting and enabling key HPC customers and partners in their development of HPC software, using the Arm HPC Tools across various Linux/UNIX HPC platforms (Arm and other architectures). In this position, you will sharpen your HPC application expertise working across a wide range of scientific fields and environments. You will gain an excellent knowledge of Arm’s HPC development tools, alongside a deep understanding of the Arm architecture and the Arm IP roadmap. This position is located in the Austin Arm office. This role involves working with sensitive government customers and will involve up to 50% travel, primarily across the US.”

Podcast: Tackling Massive Scientific Challenges with AI/HPC Convergence

In this Chip Chat podcast, Brandon Draeger from Cray describes the unique needs of HPC customers and how new Intel technologies in Cray systems are helping to deliver improved performance and scalability. “More and more, we are seeing the convergence of AI and HPC – users investigating how they can use AI to complement what they are already doing with their HPC workloads. This includes using machine and deep learning to analyze results from a simulation, or using AI techniques to steer where to take a simulation on the fly.”

Time to Value: Storage Performance in the Epoch of AI

Sven Oehme gave this talk at the DDN User Group meeting at ISC 2019. “New AI and ML frameworks, advances in computational power (primarily driven by GPUs), and sophisticated, maturing use cases are demanding more from the storage platform. Sven will share some of DDN’s recent innovations around performance and talk about how they translate into real-world customer value.”

Achieving Parallelism in Intel Distribution for Python with Numba

The rapid growth in popularity of Python as a programming language for mathematics, science, and engineering applications has been remarkable. Not only is it easy to learn, but there is a vast trove of open source libraries targeting just about every computational domain imaginable. This sponsored post from Intel highlights how today’s enterprises can achieve high levels of parallelism in large-scale Python applications using the Intel Distribution for Python with Numba.
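As a rough illustration of the kind of parallelism the post describes, here is a minimal sketch of a Numba-parallelized loop (the function and variable names are our own, not from the article, and it assumes the numba and numpy packages are installed):

```python
import numpy as np
from numba import njit, prange

# @njit(parallel=True) compiles the function to native code, and prange
# tells Numba it may distribute the loop iterations across CPU cores.
@njit(parallel=True)
def sum_of_squares(a):
    total = 0.0
    for i in prange(a.shape[0]):
        total += a[i] * a[i]  # scalar reduction, handled safely by Numba
    return total

x = np.random.rand(10_000_000)
print(sum_of_squares(x))
```

On a multicore machine, Numba compiles the function on first call and spreads the prange iterations across the available cores, so the hot loop runs as parallel native code rather than interpreted Python.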

Google Cloud and NVIDIA Set New Training Records on MLPerf v0.6 Benchmark

Today the MLPerf effort released results for MLPerf Training v0.6, the second round of results from their machine learning training performance benchmark suite. MLPerf is a consortium of over 40 companies and researchers from leading universities, and the MLPerf benchmark suites are rapidly becoming the industry standard for measuring machine learning performance. “We are creating a common yardstick for training and inference performance,” said Peter Mattson, MLPerf General Chair.

NEC Embraces Open Source Frameworks for SX-Aurora Vector Computing

In this video from ISC 2019, Dr. Erich Focht from NEC Deutschland GmbH describes how the company is embracing open source frameworks for the SX-Aurora TSUBASA Vector Supercomputer. “Until now, with the existing server processing capabilities, developing complex models on graphical information for AI has consumed significant time and host processor cycles. NEC Laboratories has developed the open-source Frovedis framework over the last 10 years, initially for parallel processing in Supercomputers. Now, its efficiencies have been brought to the scalable SX-Aurora vector processor.”

Modular Supercomputing Moves Forward in Europe

In this video from ISC 2019, Thomas Lippert from the Jülich Supercomputing Centre describes how modular supercomputing is paving the way forward for HPC in Europe. “The Modular Supercomputer Architecture (MSA) is an innovative approach to build High-Performance Computing (HPC) and High-Performance Data Analytics (HPDA) systems by coupling various compute modules, following a building-block principle. Each module is tailored to the needs of a specific group of applications, and all modules together behave as a single machine.”

3 Ways to Unlock the Power of HPC and AI

A growing number of commercial businesses are implementing HPC solutions to derive actionable business insights, to run higher performance applications and to gain a competitive advantage. Complexities abound as HPC becomes more pervasive across industries and markets, especially as companies adopt, scale, and optimize both HPC and Artificial Intelligence (AI) workloads. Bill Mannel, VP & GM HPC & AI Solutions Segment at Hewlett Packard Enterprise, walks readers through three strategies to ensure HPC and AI success.

ISC19 Student Cluster Competition: LINs Packed & Conjugates Gradient-ed

In this special guest feature, Dan Olds from OrionX shares first-hand coverage of the Student Cluster Competition at the recent ISC 2019 conference. “The benchmark results from the recently concluded ISC19 Student Cluster Competition have been compiled, sliced, diced, and analyzed senseless. As you cluster comp fanatics know, this year the student teams are required to run LINPACK, HPCG, and HPCC as part of the ISC19 competition.”

Flexibly Scalable High Performance Architectures with Embedded Photonics

Keren Bergman from Columbia University gave this talk at PASC19. “Data movement, dominated by energy costs and limited ‘chip-escape’ bandwidth densities, is a key physical layer roadblock to these systems’ scalability. Integrated silicon photonics with deeply embedded optical connectivity is on the cusp of enabling revolutionary data movement and extreme performance capabilities.”