40 Powers of 10 – Simulating the Universe with the DiRAC HPC Facility

Mark Wilkinson from DiRAC gave this talk at the Swiss HPC Conference. “DiRAC is the integrated supercomputing facility for theoretical modeling and HPC-based research in particle physics, astrophysics, cosmology, and nuclear physics, all areas in which the UK is world-leading. DiRAC provides a variety of compute resources, matching machine architecture to the algorithm design and requirements of the research problems to be solved.”

GPUs for Oil and Gas Firms: Deriving Insights from Petabytes of Data

Adoption of GPU-accelerated computing can offer oil and gas firms significant ROI today and pave the way to further advantage from future technical developments. To stay competitive, these companies need to derive insights from petabytes of sensor, geolocation, weather, drilling, and seismic data in milliseconds. A new white paper from Penguin Computing explores how GPUs are spurring innovation and changing how hydrocarbon businesses address their data processing needs.

Job of the Week: HPC System Administrator at D.E. Shaw Research

D.E. Shaw Research is seeking an HPC System Administrator in our Job of the Week. “Exceptional sysadmins sought to manage systems, storage, and network infrastructure for a New York–based interdisciplinary research group. Ideal candidates should have strong fundamental knowledge of Linux concepts such as file systems, networking, and processes in addition to practical experience administering Linux systems. Relevant areas of expertise might include large-installation systems administration experience and strong programming and scripting ability, but specific knowledge of and level of experience in any of these areas is less critical than exceptional intellectual ability.”

Making Python Fly: Accelerate Performance Without Recoding

Developers are increasingly besieged by the big data deluge. Intel Distribution for Python uses tried-and-true libraries like the Intel Math Kernel Library (Intel MKL) and the Intel Data Analytics Acceleration Library to make Python code scream right out of the box – no recoding required. Intel highlights some of the benefits dev teams can expect in this sponsored post.
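To illustrate what “no recoding required” means in practice, here is a minimal sketch: the snippet is plain NumPy and runs unchanged on any Python build, but under an MKL-backed build such as Intel Distribution for Python, the matrix multiply dispatches to MKL automatically. The matrix size and timing are illustrative only.

```python
# Plain NumPy: no Intel-specific code or imports required.
# Under Intel Distribution for Python, NumPy is built against MKL,
# so this same script runs faster with no source changes.
import time
import numpy as np

n = 2000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b  # dispatches to the build's BLAS backend (MKL in Intel's distribution)
elapsed = time.perf_counter() - start

print(f"{n}x{n} matmul: {elapsed:.2f} s")
np.__config__.show()  # reports which BLAS/LAPACK this build links against
```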

Video: The Human Side of AI

In this video from the GPU Technology Conference, Dan Olds from OrionX discusses the human impact of AI with Greg Schmidt from HPE. The industry buzz about artificial intelligence and deep learning typically focuses on hardware, software, frameworks, performance, and the lofty business plans that will be enabled by this new technology. What we don’t […]

Vintage Video: The Paragon Supercomputer – A Product of Partnership

In this vintage video, Intel launches the Paragon line of supercomputers, a series of massively parallel systems produced in the 1990s. In 1993, Sandia National Laboratories installed an Intel XP/S 140 Paragon supercomputer, which claimed the No. 1 position on the June 1994 TOP500 list. “With 3,680 processors, the system ran the Linpack benchmark at 143.40 Gflop/s. It was the first massively parallel processor supercomputer to be indisputably the fastest system in the world.”

Best Practices for Building, Deploying & Managing HPC Clusters

In today’s markets, a successful HPC cluster can be a formidable competitive advantage, and many organizations are turning to clusters to stay ahead. That said, these systems are inherently complex and must be built, deployed, and managed properly to realize their full potential. A new report from Bright Computing explores best practices for HPC clusters.

Video: Advancing U.S. Weather Prediction Capabilities with Exascale HPC

Mark Govett from NOAA gave this talk at the GPU Technology Conference. “We’ll discuss the revolution in computing, modeling, data handling and software development that’s needed to advance U.S. weather-prediction capabilities in the exascale computing era. Advancing prediction models to cloud-resolving 1-km scales will require an estimated 1,000-10,000 times more computing power, but existing models can’t exploit exascale systems with millions of processors. We’ll examine how weather-prediction models must be rewritten to incorporate new scientific algorithms, improved software design, and new technologies such as deep learning to speed model execution, data processing, and information processing.”
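For a sense of where an estimate like “1,000-10,000 times” can come from, here is a back-of-the-envelope sketch. The numbers are our illustration, not NOAA’s published derivation: we assume today’s global models run near 13 km grid spacing and that cost grows roughly as r^3 to r^4 when the grid is refined by a factor r (r^2 more grid columns, a ~r-times smaller time step for stability, plus optional vertical refinement).

```python
# Back-of-the-envelope cost scaling for finer weather-model grids.
# Assumptions (ours, for illustration): current global grid spacing
# ~13 km; refining by factor r multiplies horizontal columns by r**2
# and shrinks the stable time step by ~r (CFL condition), giving
# ~r**3 overall; refining the vertical too pushes toward r**4.
current_km = 13.0
target_km = 1.0

r = current_km / target_km
print(f"refinement factor r = {r:.0f}")
print(f"cost growth: r^3 = {r**3:,.0f}x  to  r^4 = {r**4:,.0f}x")
# ~2,200x to ~28,600x -- the same order of magnitude as the talk's
# quoted 1,000-10,000x estimate.
```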

Fujitsu to Productize Post-K Supercomputer Technologies

Today Fujitsu announced that it has completed the design of the Post-K supercomputer for deployment at RIKEN in Japan. While production of the full machine is not scheduled until 2021-2022, Fujitsu disclosed plans to productize the Post-K technologies and begin global sales in the second half of fiscal 2019. “Reaching the production milestone marks a significant achievement for Post-K, and we are excited to see the potential for broader deployment of Arm-based Fujitsu technologies in support of HPC and AI applications.”

AMD Powers Corona Cluster for HPC Analytics at Livermore

Lawrence Livermore National Lab has deployed a 170-node HPC cluster from Penguin Computing. Based on AMD EPYC processors and Radeon Instinct GPUs, the new Corona cluster will be used to support the NNSA Advanced Simulation and Computing (ASC) program in an unclassified site dedicated to partnerships with American industry. “Even as we do more of our computing on GPUs, many of our codes have serial aspects that need really good single core performance. That lines up well with AMD EPYC.”