The Computing4Change Program takes on STEM and Workforce Issues

Kelly Gaither from TACC gave this talk at the HPC User Forum. “Computing4Change is a competition empowering people to create change through computing. You may have seen articles on the anticipated shortfall of engineers, computer scientists, and technology designers to fill open jobs. Numbers from the Report to the President in 2012 (President Obama’s Council of Advisors on Science and Technology) show a shortfall of one million available workers to fill STEM-related jobs by 2020.”

Podcast: Rescale powers Innovation in Antenna Design

In this Big Compute podcast, Gabriel Broner hosts Mike Hollenbeck, founder and CTO at Optisys, a startup that is changing the antenna industry. Using HPC in the cloud and 3D printing, Optisys designs customized antennas that are much smaller, lighter, and higher performing than traditional antennas.

Podcast: Enterprises go HPC at GPU Technology Conference

In this podcast, the Radio Free HPC team looks at news from the GPU Technology Conference. “Dan has been attending GTC since well before it became the big and important conference that it is today. We get a quick update on what was covered: the long keynote, automotive and robotics, the Mellanox acquisition, how a growing fraction of enterprise applications will be AI.”

Turbocharge your HPC Hybrid Cloud with Policy-based Automation

While there are many advantages to running in the cloud, the issues can be complex. Users need to figure out how to securely extend on-premises clusters, devise solutions for data handling, and keep a constant eye on costs. Univa’s Robert Lalonde, Vice President and General Manager, Cloud, explores how to turbocharge your HPC hybrid cloud with tools like policy-based automation, and how closing the loop between workload scheduling and cloud automation can drive higher performance and dramatic cost efficiencies.
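
To make the idea concrete, here is a minimal sketch of a policy loop that closes the gap between scheduler state and cloud automation. Everything in it is hypothetical illustration: the helpers (pending_jobs, idle_nodes, hourly_spend, provision_node, terminate_node) are stand-ins for whatever a real scheduler API and cloud SDK expose, not actual Univa calls.

```python
# Hypothetical sketch of policy-based automation for an HPC hybrid cloud.
# The five helpers below are illustrative stubs; a real deployment would
# call the scheduler's API and a cloud provider SDK instead.

def pending_jobs():
    return ["job"] * 12          # stub: jobs waiting in the queue

def idle_nodes(min_idle_seconds):
    return ["cloud-node-7"]      # stub: cloud nodes idle past the threshold

def hourly_spend():
    return 18.0                  # stub: current cloud burn rate, USD/hour

def provision_node():
    print("policy: scaling out, launching one cloud node")

def terminate_node(node):
    print(f"policy: scaling in, terminating {node}")

# The policies themselves: a cost ceiling, a backlog trigger, an idle timeout.
MAX_HOURLY_SPEND = 50.0
SCALE_OUT_BACKLOG = 10
IDLE_GRACE_SECONDS = 600

def apply_policy():
    # Scale out only when there is real backlog *and* budget headroom.
    if len(pending_jobs()) >= SCALE_OUT_BACKLOG and hourly_spend() < MAX_HOURLY_SPEND:
        provision_node()
    # Scale in: reclaim nodes the scheduler reports as idle too long. This
    # feedback from scheduling state into cloud actions is the "closed loop."
    for node in idle_nodes(IDLE_GRACE_SECONDS):
        terminate_node(node)

apply_policy()   # in production this would run on a timer, not once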

Big Compute Podcast: Boom Supersonic looks to HPC Cloud

In this Big Compute Podcast, host Gabriel Broner interviews Josh Krall, co-founder and VP of Technology at Boom Supersonic. Boom is using HPC in the cloud to design a supersonic passenger plane and address the technical and business challenges it poses. “We witnessed technical success with supersonic flight with Concorde, but the economics did not work out. More than forty years later, Boom is embarking on building Overture, a supersonic plane where passengers will pay the price of today’s business class seats.”

Hyperion Research: HPC Server Market Beat Forecast in 2018

Hyperion Research has released its latest High-Performance Technical Server QView, a comprehensive report on the state of the HPC market. The QView presents the HPC market from various perspectives, including competitive segment, vendor, cluster versus non-cluster, geography, and operating system. It also contains detailed revenue and shipment information by HPC model.

A look inside the White House AI Initiative

In this special guest feature, SC19 General Chair Michela Taufer talks with Lynne Parker, Assistant Director for Artificial Intelligence at The White House Office of Science and Technology Policy. Parker describes her new role, shares her insights on the state of AI in the US (and beyond), and opines on the future impact of HPC on the evolution of AI.

CPU, GPU, FPGA, or DSP: Heterogeneous Computing Multiplies the Processing Power

Whether your code will run on industry-standard PCs or is embedded in devices for specific uses, chances are there’s more than one processor you can use. Graphics processors, DSPs, and other hardware accelerators often sit idle while CPUs crank away at code better served elsewhere. This sponsored post from Intel highlights the potential of the Intel SDK for OpenCL Applications, which can ramp up processing power.
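
As a small illustration of the heterogeneous-computing idea, the sketch below uses the pyopencl bindings (an assumption for the example; the post itself discusses the Intel SDK for OpenCL Applications) to enumerate every OpenCL device on a machine and prefer a GPU when one exists:

```python
import pyopencl as cl  # OpenCL bindings; any vendor's runtime will do

# List every processor the OpenCL runtime can see: CPUs, GPUs, accelerators.
for platform in cl.get_platforms():
    for dev in platform.get_devices():
        kind = ("GPU" if dev.type & cl.device_type.GPU
                else "CPU" if dev.type & cl.device_type.CPU
                else "accelerator/other")
        print(f"{platform.name}: {dev.name} [{kind}]")

def pick_device():
    """Prefer a GPU so the CPU stays free for other work; else take any device."""
    devices = [d for p in cl.get_platforms() for d in p.get_devices()]
    gpus = [d for d in devices if d.type & cl.device_type.GPU]
    return (gpus or devices)[0]

# A context and queue on the chosen device is the starting point for
# offloading kernels to whichever processor would otherwise sit idle.
ctx = cl.Context(devices=[pick_device()])
queue = cl.CommandQueue(ctx)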

Podcast: How the EZ Project is Providing Exascale with Lossy Compression for Scientific Data

In this podcast, Franck Cappello from Argonne describes EZ, an effort to compress and reduce the enormous scientific data sets that some of the ECP applications are producing. “There are different approaches to solving the problem. One is called lossless compression, a data-reduction technique that doesn’t lose any information or introduce any noise. The drawback with lossless compression, however, is that floating-point values are very difficult to compress: the best effort reduces data by a factor of two. In contrast, ECP applications seek a data reduction factor of 10, 30, or even more.”
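
The factor-of-two versus factor-of-ten gap is easy to reproduce. The toy below is a simplification of the error-bounded quantization idea behind lossy compressors such as SZ, not the EZ project’s actual code: it compresses a smooth floating-point field losslessly, then lossily within a fixed error bound.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)

# A smooth 1-D field with slight noise, standing in for simulation output.
x = np.linspace(0.0, 20.0 * np.pi, 1_000_000)
data = np.sin(x) + 1e-3 * rng.standard_normal(x.size)
raw = data.tobytes()

# Lossless: deflate over the raw IEEE-754 bytes. Noisy mantissa bits make
# floating-point data hard to compress, as the quote describes.
print("lossless ratio: %.1fx" % (len(raw) / len(zlib.compress(raw, 9))))

# Lossy, error-bounded quantization: snap each value to a grid that
# guarantees |error| <= bound, delta-code the integers, then deflate.
bound = 1e-3
q = np.round(data / (2 * bound)).astype(np.int64)
deltas = np.diff(q, prepend=0)               # smooth field -> tiny deltas
assert np.abs(deltas).max() < 128            # each delta fits in one byte here
lossy = zlib.compress(deltas.astype(np.int8).tobytes(), 9)
print("lossy ratio:    %.1fx" % (len(raw) / len(lossy)))

# Decompression reconstructs every value within the requested error bound.
recon = np.cumsum(deltas) * (2 * bound)
assert np.abs(recon - data).max() <= bound + 1e-12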

Big Compute Podcast: Accelerating HPC Workflows with AI

In this Big Compute Podcast, Gabriel Broner from Rescale and Dave Turek from IBM discuss how AI enables the acceleration of HPC workflows. “HPC can benefit from AI techniques. One area of opportunity is to augment what people do in preparing simulations, analyzing results and deciding what simulation to run next. Another opportunity exists when we take a step back and analyze whether we can use AI techniques instead of simulations to solve the problem. We should think about AI as increasing the toolbox HPC users have.”
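
One way to picture the “AI instead of simulation” idea is a surrogate model: train a regressor on simulations that have already run, then query it cheaply when deciding what to run next. The sketch below is a generic illustration, not IBM’s or Rescale’s method, and the simulate function is a hypothetical stand-in for a real HPC job.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def simulate(params):
    """Hypothetical stand-in for an expensive HPC simulation run."""
    x, y = params
    return np.sin(3.0 * x) * np.cos(2.0 * y) + 0.1 * x * y

rng = np.random.default_rng(42)

# Fifty parameter sets we already "paid" to simulate on the cluster.
X_train = rng.uniform(-1.0, 1.0, size=(50, 2))
y_train = np.array([simulate(p) for p in X_train])

# Fit the surrogate once on the completed runs.
surrogate = GaussianProcessRegressor(normalize_y=True).fit(X_train, y_train)

# Query it in milliseconds instead of hours: the predictive uncertainty
# hints at which parameter regions deserve a real simulation next.
X_query = rng.uniform(-1.0, 1.0, size=(5, 2))
mean, std = surrogate.predict(X_query, return_std=True)
for p, m, s in zip(X_query, mean, std):
    print(f"params={np.round(p, 2)} surrogate={m:+.3f} (±{s:.3f})")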