Hyperion Research: HPC Server Market Beat Forecast in 2018

Hyperion Research has released its latest High-Performance Technical Server QView, a comprehensive report on the state of the HPC market. The QView presents the HPC market from various perspectives, including competitive segment, vendor, cluster versus non-cluster, geography, and operating system. It also contains detailed revenue and shipment information by HPC model.

A Look Inside the White House AI Initiative

In this special guest feature, SC19 General Chair Michela Taufer speaks with Lynne Parker, Assistant Director for Artificial Intelligence at the White House Office of Science and Technology Policy. Parker describes her new role, shares her insights on the state of AI in the US (and beyond), and opines on the future impact of HPC on the evolution of AI.

CPU, GPU, FPGA, or DSP: Heterogeneous Computing Multiplies the Processing Power

Whether your code runs on industry-standard PCs or is embedded in devices built for specific uses, chances are there is more than one processor you can utilize. Graphics processors, DSPs, and other hardware accelerators often sit idle while CPUs crank away at code that would be better served elsewhere. This sponsored post from Intel highlights the potential of the Intel SDK for OpenCL Applications, which can ramp up processing power.
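As a rough illustration of the idea, the sketch below enumerates every OpenCL device visible on a machine and offloads a small vector-add kernel to one of them. It assumes the third-party pyopencl package and at least one installed OpenCL runtime (such as the Intel SDK for OpenCL Applications); it is a minimal example under those assumptions, not code from the article.

```python
# Minimal sketch: list OpenCL devices, then run a tiny vector-add kernel.
# Assumes pyopencl is installed and at least one OpenCL runtime is present.
import numpy as np
import pyopencl as cl

# Every device the installed runtimes expose: CPUs, GPUs, other accelerators.
for platform in cl.get_platforms():
    for dev in platform.get_devices():
        print(platform.name, "->", dev.name)

ctx = cl.create_some_context()      # picks a device (interactively or via env vars)
queue = cl.CommandQueue(ctx)

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

# The same kernel source runs unchanged on whichever device the context chose.
program.add(queue, a.shape, None, a_buf, b_buf, out_buf)
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)
```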

Podcast: How the EZ Project is Providing Exascale with Lossy Compression for Scientific Data

In this podcast, Franck Cappello from Argonne describes EZ, an effort to compress and reduce the enormous scientific data sets that some of the ECP applications are producing. “There are different approaches to solving the problem. One is called lossless compression, a data-reduction technique that doesn’t lose any information or introduce any noise. The drawback with lossless compression, however, is that user-entry floating-point values are very difficult to compress: the best effort reduces data by a factor of two. In contrast, ECP applications seek a data reduction factor of 10, 30, or even more.”
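To make the quoted reduction factors more concrete, here is a small, self-contained sketch (not from the podcast or the actual EZ/SZ code) that compares a plain lossless pass over a synthetic floating-point field with a toy error-bounded quantization step followed by the same lossless pass. The printed factors illustrate the gap between what lossless compression achieves on floating-point data and what an error-bounded lossy scheme can reach.

```python
# Rough illustration only: the quantization below is a toy stand-in for an
# error-bounded lossy compressor, not the real EZ/SZ algorithm.
import zlib
import numpy as np

# A smooth "simulation-like" field with a little noise in the low mantissa bits.
x = np.linspace(0, 20 * np.pi, 1_000_000)
data = (np.sin(x) + 1e-6 * np.random.rand(x.size)).astype(np.float64)
raw = data.tobytes()

# Lossless: every mantissa bit must be preserved, so the ratio stays small.
lossless = zlib.compress(raw, level=9)
print("lossless factor:", len(raw) / len(lossless))

# Toy lossy path: quantize to a fixed absolute error bound, then compress.
error_bound = 1e-4
quantized = np.round(data / error_bound).astype(np.int32)  # |error| <= error_bound / 2
lossy = zlib.compress(quantized.tobytes(), level=9)
print("lossy factor:", len(raw) / len(lossy))
```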

Big Compute Podcast: Accelerating HPC Workflows with AI

In this Big Compute Podcast, Gabriel Broner from Rescale and Dave Turek from IBM discuss how AI enables the acceleration of HPC workflows. “HPC can benefit from AI techniques. One area of opportunity is to augment what people do in preparing simulations, analyzing results and deciding what simulation to run next. Another opportunity exists when we take a step back and analyze whether we can use AI techniques instead of simulations to solve the problem. We should think about AI as increasing the toolbox HPC users have.”

Data Management: The Elephant in the Room for HPC Hybrid Cloud

While there are many benefits to leveraging the cloud for HPC, there are challenges as well. Along with security and cost, data handling is consistently identified as a top barrier. “In this short article, we discuss the challenge of managing data in hybrid clouds, offer some practical tips to make things easier, and explain how automation can play a key role in improving efficiency.”

Faster Fabrics Running Against Limits of the Operating System, the Processor, and the I/O Bus

Christopher Lameter from Jump Trading gave this talk at the OpenFabrics Workshop in Austin. “In 2017 we got 100G fabrics, in 2018 200G fabrics, and in 2019 it looks like 400G technology may be seeing a considerable amount of adoption. These bandwidths compete with, and are sometimes higher than, the internal bus speeds of the servers that are connected using these fabrics. I think we need to consider these developments and work on improving fabrics and the associated APIs so that ways to access these features become possible using vendor-neutral APIs. It needs to be possible to code in a portable way and not to a vendor-specific one.”

Exploring the ROI Potential of GPU Supercomputing

The growing prevalence of artificial intelligence and machine learning is putting heightened focus on the quantities of data that organizations have accumulated in recent years, as well as on the value potential in that data. Companies looking to gain a competitive edge in their market are turning to tools like graphics processing units (GPUs) to ramp up computing power. That’s according to a new white paper from Penguin Computing.

Podcast: Multicore Scaling Slowdown, and Fooling AI

In this podcast, the Radio Free HPC team has an animated discussion about multicore scaling, how easy it seems to be to mislead AI systems, and some good-sized catches of the week. “As CPU performance improvements have slowed down, we’ve seen the semiconductor industry move towards accelerator cards to provide dramatically better results. Nvidia has been a major beneficiary of this shift, but it’s part of the same trend driving research into neural network accelerators, FPGAs, and products like Google’s TPU.”

Video: Cray Announces First Exascale System

In this video, Cray CEO Pete Ungaro announces Aurora – Argonne National Laboratory’s forthcoming supercomputer and the United States’ first exascale system. Ungaro offers some insight on the technology, what makes exascale performance possible, and why we’re going to need it. “It is an exciting testament to Shasta’s flexible design and unique system and software capabilities, along with our Slingshot interconnect, which will be the foundation for Argonne’s extreme-scale science endeavors and data-centric workloads. Shasta is designed for this transformative exascale era and the convergence of artificial intelligence, analytics, and modeling and simulation, all at the same time on the same system, at incredible scale.”