The Role of Middleware in Optimizing Vector Processing

A new whitepaper from NEC X delves into the world of unstructured data and explores how vector processors and their optimization software can help address the challenges of wrangling the ever-growing volumes of data generated globally. “In short, vector processing with SX-Aurora TSUBASA will play a key role in changing the way big data is handled while stripping away the barriers to achieving even higher performance in the future.”

Podcast: Supercomputing the Coronavirus on Frontera

Scientists are preparing a massive computer model of the coronavirus that they expect will give insight into how it infects the body. They’ve taken the first steps, testing initial portions of the model and optimizing code on the Frontera supercomputer at the Texas Advanced Computing Center of UT Austin. The knowledge gained from the full model can help researchers design new drugs and vaccines to combat the coronavirus.

Is Your Storage Infrastructure Ready for the Coming AI Wave?

In this new whitepaper from our friends over at Panasas, we take a look at whether your storage infrastructure is ready for the demanding requirements of AI workloads. AI promises not only to create entirely new industries but also to fundamentally change the way organizations large and small conduct business. IT planners need to start revising their storage infrastructure now to prepare the organization for the coming AI wave.

Efficient AI Computing for the Planet

In this keynote talk from the 2020 HiPEAC conference, Alessandro Cremonesi from STMicroelectronics describes how artificial intelligence (AI) is the central nervous system of an increasingly connected world. He sets out both the benefits and potential pitfalls of AI, before arguing that AI now has to move beyond performance to efficiency in order to be sustainable. “So far, AI developments have been focused on performances regardless of the computational power needed, reaching in some applications performances better than the human ones. Now it is time to focus on efficient computation.”

Podcast: Delivering Exascale Machine Learning Algorithms at the ExaLearn Project

In this Let’s Talk Exascale podcast, researchers from the ECP describe progress at the ExaLearn project. ExaLearn is focused on machine learning (ML) and related activities to inform the requirements for the coming exascale machines. “ExaLearn’s algorithms and tools will be used by the ECP applications, other ECP co-design centers, and DOE experimental facilities and leadership-class computing facilities.”

Software-defined Microarchitecture: An Arguably Terrible Idea, But Certainly Not The Worst Idea

James Mickens from Harvard University gave this talk at HiPEAC 2020. “In this presentation, I will describe some of the benefits that would emerge from a new kind of processor that aggressively exposes microarchitectural state and allows it to be programmed. Using elaborate hand gestures and cheap pleas for sympathy, I will explain why my proposals are different than prior ‘open microarchitecture’ ideas like transport-triggered designs.”

Precision Medicine Pushes Demand for HPC at the Edge: AI on the Fly® Delivers

In this special guest feature, Tim Miller from One Stop Systems writes that by bringing specialized, high performance computing capabilities to the edge through AI on the Fly, OSS is helping the industry deliver on the enormous potential of precision medicine. “The common elements of these solutions are high data rate acquisition, high speed low latency storage, and efficient high performance compute analytics. With OSS, all of these building block elements are connected seamlessly with memory mapped PCI Express interconnect configured and customized as appropriate, to meet the specific environmental requirements of ‘in the field’ installations.”

Podcast: One Big Debate over OneAPI

In this podcast, the Radio Free HPC team looks at Intel’s oneAPI project. “The oneAPI project is a highly ambitious initiative; trying to design a single API to handle CPUs, GPUs, FPGAs, and other types of processors. In the discussion, we look under the hood and see how this might work. One thing working in Intel’s favor is that they’re using Data Parallel C++, which is highly compatible with CUDA – and which is probably Intel’s target with this new initiative.”

How Supersonic Commercial Flight is Possible with Big Compute

In this video from Big Compute 2020, Blake Scholl from Boom Supersonic describes how high performance computing in the cloud has opened a new era of high-speed flight. “We’ve done about 66 million core hours of computing, mainly through Rescale, since we started the design effort on XB-1. And if you asked yourself what that would look like in wind tunnel testing, it would be financially and timewise just absolutely impractical.”

Latest Release of Intel Parallel Studio XE Delivers New Features to Boost HPC and AI Performance

Intel Parallel Studio XE is a complete software development suite that includes highly optimized compilers and math and data analytics libraries, along with comprehensive tools for performance analysis, application debugging, and parallel processing. It’s available as a download for Windows, Linux, and macOS. “With this release, the focus is on making it easier for HPC and AI developers to deliver fast and reliable parallel code for the most demanding applications.”