OpenACC Names ORNL’s Jack Wells President, Updates OpenACC API

Jack Wells, director of strategic planning and performance management at Oak Ridge National Laboratory, has been named president of OpenACC, a nonprofit dedicated to advancing scientists’ parallel computing skills. OpenACC also announced Version 3.1 of the OpenACC API for writing parallel programs in C, C++, and Fortran, and it announced the 2021 schedule of […]
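For readers new to the directive-based model, the minimal sketch below is an illustrative example only (not code from the Version 3.1 release): a standard OpenACC pragma asks the compiler to offload a vector-add loop to an accelerator, with data-movement clauses spelled out explicitly. It assumes an OpenACC-capable compiler such as NVIDIA’s nvc++ built with the -acc flag.

```cpp
// Minimal OpenACC sketch (illustrative; generic OpenACC, not from the 3.1 announcement).
// Build example (assumption): nvc++ -acc vector_add.cpp
#include <vector>
#include <cstdio>

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);
    float *pa = a.data(), *pb = b.data(), *pc = c.data();

    // copyin/copyout clauses manage host-to-device and device-to-host transfers.
    #pragma acc parallel loop copyin(pa[0:n], pb[0:n]) copyout(pc[0:n])
    for (int i = 0; i < n; ++i) {
        pc[i] = pa[i] + pb[i];
    }

    std::printf("c[0] = %f\n", pc[0]);  // expect 3.0
    return 0;
}
```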

Using AI to See What Eye Doctors Can’t

This white paper, “Using AI to See What Eye Doctors Can’t,” explains how Voxeleron, a leader in delivering advanced ophthalmic image analysis and machine learning solutions, is extending ophthalmology’s diagnostic horizons with image analysis based on artificial intelligence (AI) models, trained using Dell Precision workstations with NVIDIA GPUs.

HPC-Scale Data Management for the Enterprise That’s Easier and More Cost-Efficient than NAS

In this sponsored post, Shailesh Manjrekar, Head of AI and Strategic Alliances, WekaIO, discusses how WekaFS is designed to enable organizations to maximize the full value of their high-powered IT investments – compute, networking and storage. The latest evolution of Weka technology includes new features designed to deliver ease of management and performance at any scale, with a unified global namespace from flash to object storage to the cloud.

insideHPC Guide to QCT Platform-on-Demand Designed for Converged Workloads

In this insideHPC technology guide, “insideHPC Guide to QCT Platform-on-Demand Designed for Converged Workloads,” we’ll see that by relying on open source software and the latest high-performance, low-cost system architectures, it is possible to build scalable hybrid on-premises solutions that satisfy the needs of converged HPC/AI workloads while remaining robust and easily manageable.

NETL, Cerebras Claim CFD Milestone

A collaboration between DOE’s National Energy Technology Laboratory and Cerebras Systems, maker of the CS-1 deep learning compute system, has demonstrated that the CS-1 could perform a key computational fluid dynamics (CFD) workload more than 200 times faster, and at a fraction of the power consumption, compared with the same workload on an optimized number of cores […]

Winners of Student Cluster Competition, Gordon Bell Prize(s) Named at SC20

It was awards day at Virtual SC20, and among the most coveted and closely watched of them are the annual SC Student Cluster Competition and the ACM Gordon Bell Prize. This year’s cluster competition winner: Tsinghua University, China. The same team also won the award for the highest LINPACK benchmark performance. Now in its 14th year, this […]

TACC’s Frontera HPC System Expansion for ‘Urgent Computing’ – COVID-19, Hurricanes, Earthquakes

Frontera, the supercomputer deployed at the Texas Advanced Computing Center and the ninth fastest HPC system in the world, will receive an expansion to support urgent computing and basic science, according to TACC. The expansion is funded by an award from the National Science Foundation (NSF) and a contribution from Dell Giving, the philanthropic arm of […]

Atos Launches HPC Software Suites

Paris, November 19, 2020 – Atos today announces its new HPC Software Suites, designed to enable users to better manage their supercomputing environments, optimize performance and reduce energy consumption. These software suites can be used on Atos’ BullSequana X supercomputer product line. The HPC Software Suites include: Smart Data Management Suite, Smart Energy Management Suite, Smart Performance Management Suite and Smart Management […]

Making AI Accessible to Any Size Enterprise

In this sponsored post, our friends over at Lenovo and NetApp have teamed up with NVIDIA to discuss how the companies are helping to drive Artificial Intelligence (AI) into smaller organizations and hopefully seed that creative garden. Experience tells us that there is a relationship between organizational size and technology adoption: larger, more resource-rich enterprises generally adopt new technologies first, while smaller, more resource-constrained organizations follow afterward (provided the smaller organization isn’t in the technology business).

At SC20: Intel Provides Aurora Update as Argonne Developers Use Intel Xe-HP GPUs in Lieu of ‘Ponte Vecchio’

In an update to yesterday’s “Bridge to ‘Ponte Vecchio'” story, today we interviewed Jeff McVeigh, Intel VP/GM of data center XPU products and solutions, who updated us on developments at Intel with direct bearing on Aurora, including the projected delivery of Ponte Vecchio (unchanged); on Aurora’s deployment (sooner than forecast yesterday by industry analyst firm Hyperion Research); on Intel’s “XPU” cross-architecture strategy and its impact on Aurora application development work ongoing at Argonne; and on the upcoming release of the first production version of oneAPI (next month), Intel’s cross-architecture programming model for CPUs, GPUs, FPGAs and other accelerators.
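As background on what “cross-architecture programming model” means in practice, the sketch below is a generic SYCL-style vector-add in the oneAPI mold, an illustrative example only and not code from the Aurora work at Argonne. It assumes a SYCL 2020 compiler such as Intel’s DPC++ (icpx -fsycl); the same source can be retargeted to a CPU, GPU, or FPGA by changing which device the queue selects.

```cpp
// Minimal SYCL/oneAPI-style sketch (assumption: SYCL 2020 compiler, e.g. icpx -fsycl).
#include <sycl/sycl.hpp>
#include <vector>
#include <iostream>

int main() {
    const size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    sycl::queue q;  // default selector: picks a GPU if available, otherwise CPU/host
    {
        sycl::buffer<float> ba(a.data(), sycl::range<1>(n));
        sycl::buffer<float> bb(b.data(), sycl::range<1>(n));
        sycl::buffer<float> bc(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
            sycl::accessor xa(ba, h, sycl::read_only);
            sycl::accessor xb(bb, h, sycl::read_only);
            sycl::accessor xc(bc, h, sycl::write_only);
            // One work-item per element; the runtime maps this to the chosen device.
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                xc[i] = xa[i] + xb[i];
            });
        });
    }  // buffers go out of scope here, copying results back to the host vectors

    std::cout << "c[0] = " << c[0] << "\n";  // expect 3
    return 0;
}
```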