

NGD Systems Steps up with Arm Processors on SSDs at SC19

In this video from the Arm booth at SC19, Scott Shadley from NGD Systems describes the company’s innovative computational storage technology. “In a nutshell, Computational Storage is an IT architecture where data is processed at the storage device level to reduce the amount of data that has to move between the storage and compute planes. As such, the technology provides a faster and more efficient means to address the unique challenges of our data-heavy world: it reduces excess bandwidth consumption and provides very low latency response times by minimizing data movement, allowing analytics to respond as much as 20 to 40 times faster.”
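
To make the data-movement argument concrete, here is a back-of-the-envelope sketch. The dataset size, query selectivity, and link bandwidth below are illustrative assumptions, not NGD figures; the point is simply that when the drive’s on-board Arm cores run a filter in place, only the matching records have to cross the host interface.

```cpp
#include <cstdio>

int main() {
    // Illustrative assumptions only -- not NGD measurements.
    const double dataset_gb  = 1000.0;  // data resident on the SSD
    const double selectivity = 0.02;    // fraction of records a query actually needs
    const double link_gbs    = 4.0;     // assumed host <-> SSD bandwidth, GB/s

    // Conventional path: ship the whole dataset across the bus, filter on the host.
    const double host_filter_gb = dataset_gb;

    // Computational storage path: the drive's embedded Arm cores run the filter,
    // so only the matching records cross the link.
    const double on_drive_gb = dataset_gb * selectivity;

    std::printf("host-side filter : %7.1f GB moved (%6.1f s on the link)\n",
                host_filter_gb, host_filter_gb / link_gbs);
    std::printf("on-drive filter  : %7.1f GB moved (%6.1f s on the link)\n",
                on_drive_gb, on_drive_gb / link_gbs);
    return 0;
}
```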

GigaIO Wins Most Innovative New Product Award for Big Data

Today GigaIO announced that the company’s FabreX technology has been selected as the winner of Connect’s Most Innovative New Product Award for Big Data. The Most Innovative New Product Awards is an annual competition that recognizes San Diego industry leaders for their groundbreaking contributions to the technology and life sciences sectors. “FabreX is a cutting-edge network architecture that drives the performance of data centers and high-performance computing environments. Featuring a unified, software-driven composable infrastructure, the fabric dynamically assigns resources to facilitate streamlined application deployment, meeting today’s growing demands of data-intensive programs such as Artificial Intelligence and Deep Learning. FabreX adheres to industry standard PCI Express (PCIe) technology and integrates computing, storage and input/output (IO) communication into a single-system cluster fabric for flawless server-to-server communication. Optimized with GPU Direct RDMA (GDR) and NVMe-oF, FabreX facilitates direct memory access by a server to the system memories of all other servers in the cluster, enabling native host-to-host communication to create the industry’s first in-memory network.”

NEC SX-Aurora Tops Energy Efficiency on HPCG Benchmark

In this video from SC19, Erich Focht from NEC describes how the company’s SX-Aurora vector architecture achieves extreme energy efficiency on the HPCG benchmark. After that, Shintaro Momose from NEC describes recent enhancements to the SX-Aurora vector computing platform and how extreme energy efficiency helped the company win major contracts with DWD in Germany and the National Institute for Fusion Science (NIFS) in Japan.

Data Scientist Thomas Thurston to speak at HPC User Forum in New Jersey

Venture Capitalist and Data Scientist Thomas Thurston is slated to speak at the upcoming HPC User Forum in Princeton, New Jersey. Thurston will give a talk titled, “Using HPC-enabled AI to Guide Investment Strategies for Finding and Funding Startups.” Thurston will describe how his fund uses technology to gain unique insights into early startups that otherwise disclose little or no public data. His talk highlights counter-intuitive insights about what is, and what isn’t, predictive of new business success, examines the current challenges of analyzing potential startup investments, and considers how companies are grappling with the promises and perils of executive decision making in a world of increasingly advanced computing.

Intel Powers HPE Apollo 20 for HPC Workloads

In this video from SC19, Larry Keller from HPE and Taha Mughi from Intel describe how the two companies collaborated on the innovative new Apollo 20 server for HPC workloads. “Are you searching for greater performance and more memory bandwidth? The HPE Apollo 20 System is built on the 2nd Generation Intel Xeon 9200 family of processors which offer unmatched 2-socket performance leadership across popular workloads. Built to support both liquid-cooled and air-cooled options, the HPE Apollo 20 System takes advantage of the Hewlett Packard Enterprise experience in HPC cooling technologies as workloads continue to push power and density.”

Call for Papers: HPML2020 High Performance Machine Learning Workshop

The third High Performance Machine Learning Workshop has issued its Call for Papers. HPML2020 takes place May 11, 2020 in Melbourne, Australia in conjunction with CCGrid 2020. “This workshop is intended to bring together the Machine Learning (ML), Artificial Intelligence (AI) and High Performance Computing (HPC) communities. In recent years, much progress has been made in Machine Learning and Artificial Intelligence in general. This progress required heavy use of high performance computers and accelerators. Moreover, ML and AI have become a “killer application” for HPC and, consequently, driven much research in this area as well. These facts point to an important cross-fertilization that this workshop intends to nourish.”

GIGABYTE Steps up with a Broad Array of Server Offerings for AI & HPC

In this video from SC19, Peter Hanley from GIGABYTE describes how the company delivers a full range of server solutions for HPC, AI, and the Edge. “GIGABYTE is an industry leader in HPC, delivering systems with the highest GPU density combined with excellent cooling performance, power efficiency and superior networking flexibility. These systems can provide massive parallel computing capabilities to power your next AI breakthrough.”

AMD Readies EPYC for Exascale with ROCm at SC19

In this video from SC19, Derek Bouius from AMD describes how the company’s new EPYC processors and Radeon GPUs can speed HPC and AI applications. With its EPYC processors, Radeon Instinct accelerators, Infinity Fabric technologies, and ROCm open software, AMD is building an Exascale ecosystem for heterogeneous compute. “Community support for the pre-exascale software ecosystem continues to grow. This ecosystem is built on ROCm, the foundational open source components for GPU compute provided by AMD.”
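
ROCm’s programming layer for writing portable GPU kernels is HIP. The short vector-add sketch below is not from the AMD presentation, just a minimal illustration of what targeting a Radeon Instinct accelerator through ROCm looks like; it compiles with hipcc.

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// Element-wise add kernel; runs on an AMD GPU under ROCm.
__global__ void vadd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n, 0.0f);

    float *da, *db, *dc;
    hipMalloc((void**)&da, n * sizeof(float));
    hipMalloc((void**)&db, n * sizeof(float));
    hipMalloc((void**)&dc, n * sizeof(float));
    hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // Launch one thread per element.
    dim3 block(256), grid((n + 255) / 256);
    hipLaunchKernelGGL(vadd, grid, block, 0, 0, da, db, dc, n);

    hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    std::printf("c[0] = %.1f (expected 3.0)\n", hc[0]);

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}
```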

Podcast: SCR Scalable Checkpoint/Restart Paves the Way for Exascale

A software product called the Scalable Checkpoint/Restart (SCR) Framework 2.0 recently won an R&D 100 Award. In this episode, Elsa Gonsiorowski and Kathryn Mohror of LLNL discuss what SCR does, the challenges involved in creating it, and the impact it is expected to have in HPC. “SCR enables HPC simulations to take advantage of hierarchical storage systems, without complex code modifications. With SCR, the input/output (I/O) performance of scientific simulations can be improved by orders of magnitude.”
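
SCR’s user-facing API is small: the application asks SCR where to write each checkpoint file, and SCR transparently places it in the fastest available storage tier (such as RAM disk or a node-local SSD) and drains it to the parallel file system in the background. The MPI sketch below is illustrative rather than code from the podcast; the step count and file names are placeholders, and the exact calls may differ between SCR versions.

```cpp
// Build with an MPI compiler wrapper and link against SCR, e.g.: mpicxx ckpt.cpp -lscr
#include <mpi.h>
#include <cstdio>
#include "scr.h"   // SCR checkpoint/restart API from LLNL

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    SCR_Init();                        // set up SCR's cache and configuration

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int step = 0; step < 100; step++) {
        /* ... application time step ... */

        int need = 0;
        SCR_Need_checkpoint(&need);    // let SCR decide when a defensive checkpoint pays off
        if (need) {
            SCR_Start_checkpoint();

            // Ask SCR where this rank should write its file; SCR may redirect it
            // to node-local storage and flush it to the file system asynchronously.
            char name[256], path[SCR_MAX_FILENAME];
            std::snprintf(name, sizeof(name), "ckpt_%d.dat", rank);
            SCR_Route_file(name, path);

            FILE* f = std::fopen(path, "w");
            int valid = (f != NULL);
            if (f) {
                /* fwrite(application state) */
                std::fclose(f);
            }
            SCR_Complete_checkpoint(valid);  // report whether this rank's file was written
        }
    }

    SCR_Finalize();
    MPI_Finalize();
    return 0;
}
```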

Team RACKLette from ETH Zurich steps up at the SC19 Student Cluster Competition

In this video from SC19, Thor Goebel and Emir Isman from ETH Zurich Team RACKLette describe their system configuration in the Student Cluster Competition. “We are a team of motivated students from ETH Zürich in Switzerland with various fields of interest around HPC. Together we work on optimizing and tuning computations at every level, from the physical hardware up to algorithmic performance optimizations, wherever possible.”