Latest Release of Intel Parallel Studio XE Delivers New Features to Boost HPC and AI Performance

Intel Parallel Studio XE is a complete software development suite that includes highly optimized compilers, math and data analytics libraries, and comprehensive tools for performance analysis, application debugging, and parallel processing. It’s available as a download for Windows, Linux, and macOS. “With this release, the focus is on making it easier for HPC and AI developers to deliver fast and reliable parallel code for the most demanding applications.”

Multiple Endpoints in the Latest Intel MPI Library Boost Hybrid Performance

The performance of distributed-memory MPI applications on the latest highly parallel multi-core processors often turns out to be lower than expected, which is why hybrid applications that use OpenMP multithreading on each node and MPI across the nodes of a cluster are becoming more common. This sponsored post from Intel, written by Richard Friedman, describes how to boost performance for hybrid applications with multiple endpoints in the Intel MPI Library.
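
The multiple-endpoints capability targets exactly this hybrid pattern: programs initialized with MPI_THREAD_MULTIPLE in which each thread drives its own MPI communication. The sketch below illustrates the pattern with mpi4py and Python threads; it is an assumption-laden stand-in for a production C/Fortran-plus-OpenMP code, and the two-rank layout, thread count, and tags are arbitrary choices. Consult the Intel MPI Library documentation for how to enable and tune the thread-split model.

    # Minimal sketch (not Intel's sample code) of the pattern the multiple-endpoint
    # feature targets: each thread on a rank issues its own MPI calls under
    # MPI_THREAD_MULTIPLE. Assumes mpi4py and a launch with at least two ranks,
    # e.g.: mpirun -n 2 python hybrid_endpoints.py
    import threading

    import mpi4py
    mpi4py.rc.thread_level = "multiple"   # request MPI_THREAD_MULTIPLE before init
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    NUM_THREADS = 4                       # arbitrary; stands in for OpenMP threads

    def worker(tid):
        # Each thread exchanges a message with the partner rank using its own tag,
        # so the library is free to map threads onto independent endpoints.
        partner = 1 - rank
        if rank == 0:
            comm.send({"from": rank, "thread": tid}, dest=partner, tag=tid)
            reply = comm.recv(source=partner, tag=tid)
            assert reply["ack"] == tid
        else:
            msg = comm.recv(source=partner, tag=tid)
            comm.send({"ack": msg["thread"]}, dest=partner, tag=tid)

    if size >= 2 and rank < 2 and MPI.Query_thread() == MPI.THREAD_MULTIPLE:
        threads = [threading.Thread(target=worker, args=(t,)) for t in range(NUM_THREADS)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
    elif rank == 0:
        print("Run with at least 2 ranks and an MPI that provides MPI_THREAD_MULTIPLE")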

Converging Workflows Pushing Converged Software onto HPC Platforms

Are we witnessing the convergence of HPC, big data analytics, and AI? Once, these were separate domains, each with its own system architecture and software stack, but the data deluge is driving their convergence. Traditional big science HPC is looking more like big data analytics and AI, while analytics and AI are taking on the flavor of HPC.

Are Memory Bottlenecks Limiting Your Application’s Performance?

Often, it’s not enough to parallelize and vectorize an application to get the best performance. You also need to take a deep dive into how the application is accessing memory to find and eliminate bottlenecks in the code that could ultimately be limiting performance. Intel Advisor, a component of both Intel Parallel Studio XE and Intel System Studio, can help you identify and diagnose memory performance issues, and suggest strategies to improve the efficiency of your code.
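
As a concrete illustration of the kind of issue Intel Advisor’s memory analyses are designed to surface, the short NumPy sketch below sums the same number of elements twice, once with unit stride and once with a large stride. The array size and the stride of 16 are arbitrary choices; the point is that the strided pass touches far more cache lines and so is limited by memory traffic rather than arithmetic.

    # Same arithmetic, different memory access pattern: the strided pass pulls in
    # one value per cache line, so it is bound by memory bandwidth, not compute.
    # (The array size and stride are arbitrary choices for the demo.)
    import time
    import numpy as np

    x = np.random.rand(1 << 24)            # ~128 MB of float64, well beyond cache
    n = 1 << 20                            # sum one million elements in both cases

    def timed(label, view):
        t0 = time.perf_counter()
        total = view.sum()
        print(f"{label:12s} {(time.perf_counter() - t0) * 1e3:8.2f} ms  (sum={total:.3f})")

    timed("unit stride", x[:n])            # contiguous: one cache line serves 8 values
    timed("stride 16", x[::16][:n])        # strided: one value per cache line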

Software-Defined Visualization with Intel Rendering Framework – No Special Hardware Needed

This sponsored post from Intel explores how the Intel Rendering Framework, which brings together a number of optimized, open source rendering libraries, can deliver better performance at higher fidelity without requiring an investment in extra hardware. By letting the CPU do the work, visualization applications can run anywhere without specialized hardware, and users are seeing better performance than they could get from dedicated graphics hardware with its limited on-board memory.

Intel High-Performance Python Extends to Machine Learning and Data Analytics

One of the big surprises of the past few years has been the spectacular rise in the use of Python* in high-performance computing applications. With the latest releases of Intel® Distribution for Python, included in Intel® Parallel Studio XE 2019, the numerical and scientific computing capabilities of high-performance Python now extend to machine learning and data analytics.
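
Because the distribution accelerates the familiar packages under the hood (MKL-backed NumPy and SciPy, plus Intel DAAL-based kernels such as daal4py in recent releases), ordinary scikit-learn code can benefit without source changes. The sketch below is stock scikit-learn on synthetic data; the dataset size and clustering parameters are arbitrary choices, not Intel’s benchmark.

    # Ordinary scikit-learn code: under the Intel Distribution for Python the same
    # script runs on MKL-backed NumPy and accelerated machine learning kernels
    # without modification. Data and parameters below are illustrative only.
    import numpy as np
    from sklearn.cluster import KMeans

    np.random.seed(0)
    X = np.random.randn(200_000, 16)       # synthetic feature matrix

    model = KMeans(n_clusters=8, n_init=3, random_state=0)
    model.fit(X)
    print("inertia:", model.inertia_)
    print("cluster sizes:", np.bincount(model.labels_))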

Putting Computer Vision to Work with OpenVINO

OpenVINO is a single toolkit, optimized for Intel hardware, that data scientists and AI software developers can use to quickly develop high-performance applications that employ neural network inference and deep learning to emulate human vision across a variety of platforms. “This toolkit supports heterogeneous execution across CPUs and computer vision accelerators including GPUs, Intel® Movidius™ hardware, and FPGAs.”
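
To make the workflow concrete, here is a minimal inference sketch assuming the legacy OpenVINO Inference Engine Python API (IECore) as it appeared in releases around 2021; model.xml and model.bin are placeholder names for an IR produced by the Model Optimizer, and the dummy input simply matches whatever shape the network declares. It is a sketch of the pattern, not Intel’s sample code.

    # Minimal inference sketch with the (legacy) Inference Engine Python API.
    # model.xml / model.bin are placeholders for an IR created by the Model
    # Optimizer; the API names assume an OpenVINO release that provides IECore.
    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model="model.xml", weights="model.bin")

    # Take the network's first input and build a dummy batch matching its shape.
    input_name = next(iter(net.input_info))
    input_shape = net.input_info[input_name].input_data.shape
    dummy_input = np.zeros(input_shape, dtype=np.float32)

    # "CPU" can be swapped for another supported device (e.g. GPU or MYRIAD)
    # to exercise the heterogeneous execution the toolkit provides.
    exec_net = ie.load_network(network=net, device_name="CPU")
    result = exec_net.infer(inputs={input_name: dummy_input})
    print({name: out.shape for name, out in result.items()})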

Are Platform Configuration Problems Degrading Your Application’s Performance?

The Intel VTune™ Amplifier Platform Profiler on Windows* and Linux* systems shows critical data about the running platform, helping you identify common system configuration errors that may be causing performance issues and bottlenecks. Fixing these issues, or modifying the application to work around them, can greatly improve overall performance.

Accelerated Python for Data Science

The Intel Distribution for Python takes advantage of the Intel® Advanced Vector Extensions (Intel® AVX) and the multiple cores in the latest Intel architectures. By using the highly optimized Intel MKL BLAS and LAPACK routines, key functions run up to 200 times faster on servers and 10 times faster on desktop systems. This means that existing Python applications can perform significantly better simply by switching to the Intel distribution.
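
The sketch below shows where that speedup comes from: a large matrix product is handed straight to the BLAS library NumPy was built against, which in the Intel distribution is Intel MKL. The np.show_config() call reports which BLAS/LAPACK is actually linked; the matrix size is an arbitrary choice for the demo.

    # NumPy dispatches large matrix products to its underlying BLAS; in the Intel
    # Distribution for Python that is Intel MKL, so this runs threaded, vectorized
    # GEMM with no change to user code. The matrix size is arbitrary.
    import time
    import numpy as np

    np.show_config()                       # reports the BLAS/LAPACK NumPy links against

    n = 4096
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    t0 = time.perf_counter()
    c = a @ b                              # handed off to the BLAS dgemm routine
    dt = time.perf_counter() - t0
    print(f"{n}x{n} matmul: {dt:.2f} s, {2 * n**3 / dt / 1e9:.1f} GFLOP/s")
    print("checksum:", c[0, 0])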

Latest Intel Tools Make Code Modernization Possible

Code modernization means ensuring that an application makes full use of the performance potential of the underlying processors. That means implementing vectorization and threading, making efficient use of memory and cache, and adopting fast algorithms wherever possible. But where do you begin? How do you take your complex, industrial-strength application code to the next performance level?
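
As a small illustration of the first of those steps, the sketch below contrasts an element-by-element loop with a vectorized equivalent, in the Python terms used elsewhere in this roundup; in C, C++, or Fortran codes the analogous move is enabling compiler vectorization and OpenMP threading, typically guided by tools such as Intel Advisor. The function and data here are illustrative only.

    # Before/after sketch of one modernization step: replace a scalar loop with a
    # vectorized equivalent so the work maps onto SIMD units and optimized library
    # kernels. The function and data are illustrative only.
    import numpy as np

    def scaled_norm_loop(x, scale):
        # "Legacy" style: one element at a time.
        total = 0.0
        for i in range(len(x)):
            total += (scale * x[i]) ** 2
        return total ** 0.5

    def scaled_norm_vectorized(x, scale):
        # Modernized: whole-array operations dispatched to vectorized kernels.
        return float(np.sqrt(np.sum((scale * x) ** 2)))

    x = np.random.rand(1_000_000)
    assert np.isclose(scaled_norm_loop(x, 2.0), scaled_norm_vectorized(x, 2.0))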