

Articles and news on parallel programming and code modernization

Accelerated Python for Data Science

The Intel Distribution for Python takes advantage of the Intel® Advanced Vector Extensions (Intel® AVX) and multiple cores in the latest Intel architectures. By utilizing the highly optimized Intel MKL BLAS and LAPACK routines, key functions run up to 200 times faster on servers and 10 times faster on desktop systems. This means that existing Python applications will perform significantly better merely by switching to the Intel distribution.
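
To make the point concrete, here is a minimal, illustrative sketch (not taken from the article): the NumPy code below is identical under any Python distribution, but when NumPy is linked against Intel MKL, as in the Intel Distribution for Python, the linear-algebra calls dispatch to the optimized, multithreaded BLAS and LAPACK kernels.

```python
# Illustrative sketch: the same NumPy code, faster when NumPy is MKL-backed.
import numpy as np

n = 2048
a = np.random.rand(n, n)
b = np.random.rand(n, n)

c = a @ b                        # matrix multiply -> BLAS GEMM under the hood
w = np.linalg.eigvalsh(c @ c.T)  # symmetric eigensolve -> LAPACK under the hood

np.__config__.show()             # reports which BLAS/LAPACK NumPy was built against
```

Running the same script under a stock NumPy build and under the Intel Distribution for Python is a quick way to see what the MKL-backed routines contribute.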

Apply Now for 2019 Argonne Training Program on Extreme-Scale Computing

Computational scientists are invited to apply for the upcoming Argonne Training Program on Extreme-Scale Computing (ATPESC) this summer. “This program provides intensive hands-on training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current supercomputers and the HPC systems of the future. As a bridge to that future, this two-week program fills many gaps that exist in the training computational scientists typically receive through formal education or other shorter courses.”

Call for Papers: Distributed & Heterogeneous Programming in C/C++ Event in Boston

The DHPCC++19 conference has issued its Call for Papers. Held in conjunction with the IWOCL event, the Distributed & Heterogeneous Programming in C/C++ event takes place May 13, 2019 in Boston. “This will be the 3rd DHPCC++ event in partnership with IWOCL, the international OpenCL workshop with a focus on heterogeneous programming models for C and C++, covering all the programming models that have been designed to support heterogeneous programming in C and C++.”

Podcast: Improving Parallel Applications with the TAU tool

In this podcast, Mike Bernhardt from ECP catches up with Sameer Shende to learn how the Performance Research Lab at the University of Oregon is helping to pave the way to Exascale. “Developers of parallel computing applications can well appreciate the Tuning and Analysis Utilities (TAU) performance evaluation tool—it helps them optimize their efforts. Sameer has worked with the TAU software for nearly two and a half decades and has released more than 200 versions of it. Whatever your application looks like, there’s a good chance that TAU can support it and help you improve your performance.”

Call for Papers: International Workshop on Performance Portable Programming Models for Manycore or Accelerators

The 4th International Workshop on Performance Portable Programming Models for Manycore or Accelerators (P^3MA) has issued its Call for Papers. The workshop will provide a forum that brings together researchers and developers to discuss the community’s proposals and solutions for achieving performance portability.

Latest Intel Tools Make Code Modernization Possible

Code modernization means ensuring that an application makes full use of the performance potential of the underlying processors. And that means implementing vectorization, threading, memory caching, and fast algorithms wherever possible. But where do you begin? How do you take your complex, industrial-strength application code to the next performance level?
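
As a generic sketch of what the first of those steps can look like in practice (an illustration, not an excerpt from the Intel tools), replacing an element-by-element loop with whole-array arithmetic lets the work map onto SIMD vector units such as Intel AVX and onto optimized library code:

```python
# Generic vectorization sketch: same computation, two implementations.
import numpy as np

x = np.linspace(0.0, 1.0, 1_000_000)

# Scalar version: one element at a time, with interpreted-loop overhead.
y_loop = np.empty_like(x)
for i in range(x.size):
    y_loop[i] = 3.0 * x[i] ** 2 + 2.0 * x[i] + 1.0

# Vectorized version: the same polynomial evaluated over the whole array at once.
y_vec = 3.0 * x ** 2 + 2.0 * x + 1.0

assert np.allclose(y_loop, y_vec)
```

Threading, cache-friendly data layout, and better algorithms follow the same pattern: the numerical result is unchanged, but the code is restructured so the hardware can do the work in parallel.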

Interview: The Importance of the Message Passing Interface to Supercomputing

In this video, Mike Bernhardt from the Exascale Computing Project catches up with ORNL’s David Bernholdt at SC18. They discuss the conference, his career, the evolution and significance of the Message Passing Interface (MPI) in parallel computing, and how ECP has influenced his team’s efforts.
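
For readers new to MPI, here is a minimal sketch of the programming model it provides, written with the mpi4py bindings (mpi4py is an assumption here; it is not discussed in the interview).

```python
# Minimal MPI sketch using mpi4py.
# Run with, e.g.:  mpirun -n 4 python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id within the communicator
size = comm.Get_size()   # total number of processes

# Each rank contributes its rank number; the sum is reduced onto rank 0.
total = comm.reduce(rank, op=MPI.SUM, root=0)
if rank == 0:
    print(f"{size} ranks, sum of ranks = {total}")
```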

Development Tools are More Important Now Than Ever

In this video, Sanjiv Shah, Vice President of the Core and Visual Computing Group and General Manager of Technical, Enterprise, and Cloud Computing Software Tools at Intel, offers his perspective on the evolving nature of the developer’s role and the latest resources for addressing persistent issues in application coding.

Artificial Intelligence and Cloud-to-edge Acceleration

In this video, Wei Li, Vice President and General Manager for Machine Learning and Translation at Intel, discusses the increasing importance of AI, the vision for AI’s future benefits to humanity, and Intel’s efforts in providing an advanced platform to facilitate AI deployment from the Cloud to the edge.

Slidecast: BigDL Open Source Machine Learning Framework for Apache Spark

In this video, Beenish Zia from Intel presents: BigDL Open Source Machine Learning Framework for Apache Spark. “BigDL is a distributed deep learning library for Apache Spark*. Using BigDL, you can write deep learning applications as Scala or Python* programs and take advantage of the power of scalable Spark clusters. This article introduces BigDL, shows you how to build the library on a variety of platforms, and provides examples of BigDL in action.”
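
As a rough sketch of what “writing deep learning applications as Python programs” looks like with BigDL (module paths follow the classic BigDL 0.x Python API and may differ in newer releases; treat this as an assumption rather than a quotation from the article):

```python
# Rough BigDL sketch: define a small model from Python on top of Spark.
from pyspark import SparkContext
from bigdl.util.common import create_spark_conf, init_engine
from bigdl.nn.layer import Sequential, Linear, ReLU, LogSoftMax

sc = SparkContext(conf=create_spark_conf())  # Spark context shared with BigDL
init_engine()                                # start BigDL's execution engine

# A small multilayer perceptron built from BigDL layers; training would then
# hand the model and an RDD of Samples to BigDL's Optimizer on the cluster.
model = Sequential()
model.add(Linear(784, 128))
model.add(ReLU())
model.add(Linear(128, 10))
model.add(LogSoftMax())
```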