Latest Release of Intel Parallel Studio XE Delivers New Features to Boost HPC and AI Performance

Intel Parallel Studio XE is a complete software development suite that includes highly optimized compilers and math and data analytics libraries, along with comprehensive tools for performance analysis, application debugging, and parallel processing. It’s available as a download for Windows, Linux, and macOS. According to Intel, the focus of this release is on making it easier for HPC and AI developers to deliver fast and reliable parallel code for the most demanding applications.

Intel’s Kent Moffat describes the exciting new launch of oneAPI

In this video, Kent Moffat, senior product manager from Intel, describes the oneAPI initiative, an ambitious shift from today’s single-architecture, single-vendor programming models to a unified, simplified programming model for application development across heterogeneous architectures, including CPUs, GPUs, FPGAs and other accelerators.

Exploring the Performance Optimization and Productivity Project

The quest for improved performance never ends if you want to remain competitive in your market. End users will undoubtedly call for more speed, and the models your clients are building are likely bigger and more complex than ever. Enter the Performance Optimisation and Productivity (POP) project.

Leadership Performance with 2nd-Generation Intel Xeon Scalable Processors

According to Intel, its new 2nd Generation Intel Xeon Scalable processor family includes Intel Deep Learning Boost for accelerating deep learning inference, new features, support for Intel Optane DC (data center) persistent memory, and more. Learn more about the offerings in a new issue of Parallel Universe Magazine.

Parallelism in Python: Directing Vectorization with NumExpr

According to a new edition of Parallel Universe Magazine from Intel, Python has several pathways to vectorization. These range from just-in-time (JIT) compilation with Numba to C-like code with Cython. A chapter from a recent edition of the magazine explores parallelism in Python, including directing vectorization with NumExpr.
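As a rough illustration of the NumExpr pathway mentioned above (the array sizes and the expression itself are arbitrary, not taken from the article), the sketch below compares a plain NumPy expression with the same expression handed to NumExpr, which compiles the string and evaluates it in blocked, multi-threaded, vectorized chunks.

```python
import numpy as np
import numexpr as ne

# Illustrative working set: two large arrays to combine element-wise.
a = np.random.rand(10_000_000)
b = np.random.rand(10_000_000)

# Plain NumPy evaluates the expression one operator at a time,
# allocating a temporary array for each intermediate result.
numpy_result = 2.0 * a + 3.0 * b**2

# NumExpr compiles the whole expression and evaluates it in
# vectorized, multi-threaded blocks, avoiding those temporaries.
numexpr_result = ne.evaluate("2.0 * a + 3.0 * b**2")

assert np.allclose(numpy_result, numexpr_result)
```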

7 Ways HPC Software Developers Can Benefit from Intel Software Investments

Intel has long focused on supporting HPC software. But as the years have gone by, much has changed, and the company’s offerings have grown and evolved. A chapter from a recent edition of Parallel Universe Magazine, published this past July, outlines this evolution and offers seven ways HPC software developers can benefit from Intel software investments.

Multiple Endpoints in the Latest Intel MPI Library Boost Hybrid Performance

The performance of distributed-memory MPI applications on the latest highly parallel multi-core processors often turns out to be lower than expected, which is why hybrid applications, using OpenMP multithreading on each node and MPI across nodes in a cluster, are becoming more common. This sponsored post from Intel, written by Richard Friedman, describes how to boost performance for hybrid applications with multiple endpoints in the Intel MPI Library.
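The specifics of Intel MPI’s multiple-endpoint support are in the linked post; as a generic, illustrative sketch of the hybrid pattern it targets, the mpi4py code below gives each thread on a rank its own duplicated communicator so per-thread message streams stay independent. The thread count, buffer size, and ring exchange are assumptions for illustration, and no Intel MPI-specific controls are shown.

```python
from mpi4py import MPI
import numpy as np
import threading

# Per-thread MPI communication requires MPI_THREAD_MULTIPLE;
# mpi4py requests the highest available thread support at import time.
assert MPI.Query_thread() == MPI.THREAD_MULTIPLE

world = MPI.COMM_WORLD
rank, size = world.Get_rank(), world.Get_size()
num_threads = 4  # illustrative; would normally match the OpenMP thread count

def worker(tid, comm):
    # Ring exchange: each thread sends to the same thread id on the next
    # rank and receives from the previous rank, on its own communicator.
    dest = (rank + 1) % size
    source = (rank - 1) % size
    sendbuf = np.full(1_000_000, tid, dtype=np.float64)
    recvbuf = np.empty_like(sendbuf)
    comm.Sendrecv(sendbuf, dest=dest, sendtag=tid,
                  recvbuf=recvbuf, source=source, recvtag=tid)

# One duplicated communicator per thread keeps message streams independent.
comms = [world.Dup() for _ in range(num_threads)]
threads = [threading.Thread(target=worker, args=(t, comms[t]))
           for t in range(num_threads)]
for t in threads:
    t.start()
for t in threads:
    t.join()
for c in comms:
    c.Free()
```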

Achieving Parallelism in Intel Distribution for Python with Numba

The rapid growth in popularity of Python as a programming language for mathematics, science, and engineering applications has been remarkable. Not only is it easy to learn, but there is a vast trove of packaged open source libraries targeting just about every computational domain imaginable. This sponsored post from Intel highlights how today’s enterprises can achieve high levels of parallelism in large-scale Python applications using the Intel Distribution for Python with Numba.
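As a minimal sketch of the kind of loop-level parallelism Numba enables (the function and array sizes here are illustrative, not drawn from the post), the code below JIT-compiles a NumPy-style loop to native code and, with parallel=True, splits the prange iterations across CPU threads.

```python
import numpy as np
from numba import njit, prange

# Numba compiles this function to native code on first call; with
# parallel=True, iterations of the prange loop run across CPU threads.
@njit(parallel=True)
def scaled_sum(x, y, alpha):
    out = np.empty_like(x)
    for i in prange(x.shape[0]):
        out[i] = alpha * x[i] + y[i]
    return out

x = np.random.rand(10_000_000)
y = np.random.rand(10_000_000)
result = scaled_sum(x, y, 2.0)  # first call compiles; later calls run at native speed
```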

Intel Optimized Libraries Accelerate Deep Learning Applications on Intel Platforms

Whatever the platform, getting the best possible performance out of an application always presents big challenges. This is especially true when developing AI and machine learning applications on CPUs. This sponsored post from Intel explores how to effectively train and execute machine learning and deep learning projects on CPUs.

Using Inference Engines to Power AI Apps for Audio, Video, and More

With demand growing for intelligent solutions like autonomous driving, digital assistants, and recommender systems, enterprises of every type are deploying AI-powered applications for surveillance, retail, manufacturing, smart cities and homes, office automation, and more, with new use cases arriving every day. Increasingly, these applications are powered by inference. This sponsored post from Intel explores how inference engines can be used to power AI apps for audio, video, and more, and highlights the capabilities of the Intel Distribution of OpenVINO (Open Visual Inference and Neural Network Optimization) toolkit.
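As a hedged sketch of what driving the OpenVINO inference engine from Python can look like (the file names are placeholders, and the calls assume the inference_engine Python API found in OpenVINO releases circa 2020), the code below loads an Intermediate Representation model onto the CPU and runs a single inference.

```python
from openvino.inference_engine import IECore
import numpy as np

# Placeholder paths: an Intermediate Representation (IR) produced by the
# OpenVINO Model Optimizer from a trained model.
model_xml = "model.xml"
model_bin = "model.bin"

ie = IECore()
net = ie.read_network(model=model_xml, weights=model_bin)
exec_net = ie.load_network(network=net, device_name="CPU")

# Feed a dummy input shaped like the network's input blob and run inference.
input_blob = next(iter(net.input_info))
shape = net.input_info[input_blob].input_data.shape
result = exec_net.infer(inputs={input_blob: np.zeros(shape, dtype=np.float32)})
```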