

Articles and news on parallel programming and code modernization

Podcast: Improving Parallel Applications with the TAU tool

In the podcast, Mike Bernhardt from ECP catches up with Sameer Shende to learn how the Performance Research Lab at the University of Oregon is helping to pave the way to Exascale. “Developers of parallel computing applications can well appreciate the Tuning and Analysis Utilities (TAU) performance evaluation tool—it helps them optimize their efforts. Sameer has worked with the TAU software for nearly two and a half decades and has released more than 200 versions of it. Whatever your application looks like, there’s a good chance that TAU can support it and help you improve your performance.”

Call for Papers: International Workshop on Performance Portable Programming models for Manycore or Accelerators

The 4th International Workshop on Performance Portable Programming models for Manycore or Accelerators (P^3MA) has issued its Call for Papers. The workshop will provide a forum that brings together researchers and developers to discuss the community's proposals and solutions for performance portability.

Latest Intel Tools Make Code Modernization Possible

Code modernization means ensuring that an application makes full use of the performance potential of the underlying processors. And that means implementing vectorization, threading, memory caching, and fast algorithms wherever possible. But where do you begin? How do you take your complex, industrial-strength application code to the next performance level?
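As a toy illustration (not from the article), the difference between a scalar loop and a vectorized whole-array expression can be sketched in a few lines of Python with NumPy; the function names here are illustrative, and the same idea applies to compiler auto-vectorization in C or Fortran:

```python
import numpy as np

def saxpy_loop(a, x, y):
    # Scalar loop: one multiply-add per element, executed in the
    # Python interpreter with little opportunity for SIMD.
    out = np.empty_like(x)
    for i in range(x.size):
        out[i] = a * x[i] + y[i]
    return out

def saxpy_vectorized(a, x, y):
    # Whole-array expression: NumPy dispatches to compiled,
    # SIMD-friendly inner loops over the full arrays.
    return a * x + y

x = np.arange(8, dtype=np.float64)
y = np.ones(8)
assert np.allclose(saxpy_loop(2.0, x, y), saxpy_vectorized(2.0, x, y))
```

Both forms compute the same result; the vectorized form simply exposes the whole operation to optimized machine code at once, which is the essence of the vectorization step in code modernization.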

Interview: The Importance of the Message Passing Interface to Supercomputing

In this video, Mike Bernhardt from the Exascale Computing Project catches up with ORNL’s David Bernholdt at SC18. They discuss the conference, his career, the evolution and significance of the Message Passing Interface (MPI) in parallel computing, and how ECP has influenced his team’s efforts.

Development Tools are More Important Now Than Ever

In this video, Sanjiv Shah, Vice President of the Core Visual Computing Group and General Manager of Technical, Enterprise, and Cloud Computing Software Tools at Intel, offers his perspective on the evolving nature of the developer’s role and the latest resources for addressing persistent issues in application coding.

Artificial Intelligence and Cloud-to-edge Acceleration

In this video, Wei Li, Vice President and General Manager for Machine Learning and Translation at Intel, discusses the increasing importance of AI, the vision for AI’s future benefits to humanity, and Intel’s efforts in providing an advanced platform to facilitate AI deployment from the Cloud to the edge.

Slidecast: BigDL Open Source Machine Learning Framework for Apache Spark

In this video, Beenish Zia from Intel presents: BigDL Open Source Machine Learning Framework for Apache Spark. “BigDL is a distributed deep learning library for Apache Spark. Using BigDL, you can write deep learning applications as Scala or Python programs and take advantage of the power of scalable Spark clusters. This article introduces BigDL, shows you how to build the library on a variety of platforms, and provides examples of BigDL in action.”

Video: The Separation of Concerns in Code Modernization

In this video, Larry Meadows from Intel describes why modern processors require modern coding techniques. With vectorization and threading for code modernization, you can enjoy the full potential of Intel Scalable Processors. “In many ways, code modernization is inevitable. Even edge devices nowadays have multiple physical cores. And even a single-core machine will have hyperthreads. And keeping those cores busy and fed with data with Intel programming tools is the best way to speed up your applications.”
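Keeping multiple cores busy can be sketched in any language; here is a minimal, illustrative Python version (not tied to Intel's tools) that splits one reduction across worker processes so each core gets a chunk of the work:

```python
from concurrent.futures import ProcessPoolExecutor
import math

def partial_sum(bounds):
    # Work on one chunk of the index range.
    lo, hi = bounds
    return sum(math.sqrt(i) for i in range(lo, hi))

def parallel_sum(n, workers=4):
    # Split [0, n) into chunks so every worker (and core) stays busy.
    chunk = n // workers
    ranges = [(w * chunk, n if w == workers - 1 else (w + 1) * chunk)
              for w in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, ranges))

if __name__ == "__main__":
    n = 100_000
    # Parallel and serial results agree up to floating-point summation order.
    assert math.isclose(parallel_sum(n), partial_sum((0, n)), rel_tol=1e-9)
```

The same decomposition idea underlies OpenMP worksharing loops and TBB parallel reductions in compiled HPC codes; the chunking here is the simplest static schedule.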

Video: The March to Exascale

As the trend toward exascale HPC systems continues, the complexity of optimizing the parallel applications running on them increases too. Potential performance limitations can occur at the application level, which relies on MPI. While small-scale HPC systems are forgiving of tiny MPI latencies, large systems running at scale are much more sensitive: small inefficiencies can snowball into significant lag.
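A back-of-the-envelope model (not real MPI, and not from the article) shows why tiny latencies snowball at scale: in a bulk-synchronous step, every rank waits for the slowest one, so the overhead paid per step grows toward the worst case as the rank count rises:

```python
def expected_step_overhead(n_ranks, jitter):
    # In a bulk-synchronous step, every rank waits for the slowest one.
    # If each rank's delay is Uniform(0, jitter), the expected maximum
    # over n ranks is jitter * n / (n + 1), which approaches the full
    # worst-case jitter as the machine grows.
    return jitter * n_ranks / (n_ranks + 1)

# Per-step overhead for a 10-microsecond jitter bound:
small = expected_step_overhead(16, 10e-6)       # ~9.4 us on 16 ranks
large = expected_step_overhead(100_000, 10e-6)  # ~10.0 us on 100k ranks

# Over a million synchronized steps, the "tiny" latency becomes
# roughly ten seconds of accumulated lag at scale.
total_lag = large * 1_000_000
```

The constants here are arbitrary; the point is only the shape of the curve, which is why per-message MPI overheads that are invisible on a workstation dominate at exascale.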

Sunita Chandrasekaran Receives NSF Grant to Create Powerful Software Framework

Over at the University of Delaware, Julie Stewart writes that assistant professor Sunita Chandrasekaran has received an NSF grant to develop frameworks that adapt code for GPU supercomputers. She is working with complex patterns known as wavefronts, which are commonly found in scientific codes used to analyze the flow of neutrons in a nuclear reactor, extract patterns from biomedical data, or predict atmospheric patterns.
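The article gives no code, but the wavefront dependency pattern it mentions can be sketched serially: each cell of a grid depends on its north and west neighbors, so all cells on the same anti-diagonal are mutually independent, and a GPU framework can compute each anti-diagonal in parallel. This is an illustrative sketch, not Chandrasekaran's framework:

```python
def wavefront_table(n, m):
    # T[i][j] depends on T[i-1][j] (north) and T[i][j-1] (west), so cells
    # on the same anti-diagonal (i + j = const) are independent of one
    # another and can be computed in parallel: the "wavefront" pattern.
    T = [[0] * m for _ in range(n)]
    for d in range(n + m - 1):                     # sweep anti-diagonals
        for i in range(max(0, d - m + 1), min(n, d + 1)):
            j = d - i
            if i == 0 or j == 0:
                T[i][j] = 1                        # boundary condition
            else:
                T[i][j] = T[i - 1][j] + T[i][j - 1]
    return T

# With these boundaries the table holds binomial coefficients C(i+j, i):
assert wavefront_table(3, 3)[2][2] == 6
```

The outer loop over diagonals is inherently sequential; the payoff on a GPU comes from mapping the inner loop, whose iterations are independent, onto thousands of threads.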