Achieving High-Performance Math Processing with Intel MKL 2017

Sponsored Post

The Intel Math Kernel Library (Intel MKL) is a library of common numerical methods used in scientific and engineering applications, highly optimized for Intel processors running Windows, macOS, and Linux. The latest version of Intel MKL extends this functionality to include optimized methods for key machine learning algorithms.

The development of standardized numerical libraries started in the 1970s, when it became clear that most scientific, engineering, and financial computations relied on a relatively small number of algorithms. These libraries provided callable routines for solving equations, inverting matrices, finding maxima and minima, smoothing data and fitting curves, evaluating Fourier transforms, solving certain classes of differential equations, and so on. While these advanced methods could be programmed from algorithms found in numerical analysis textbooks, standardized libraries became necessary to ensure consistent accuracy and the highest performance as applications migrated to each new hardware architecture.

In general, these optimized math libraries performed better and with greater accuracy than anything most non-specialist programmers could provide. Even more important, they offered a standardized interface that made portability of application codes possible.

Many of the libraries developed in the 70s and 80s for core linear algebra and scientific math computation, such as BLAS, LAPACK, and FFT, are still in use today with C, C++, Fortran, and even Python programs. With MKL, Intel has engineered a ready-to-use, royalty-free library that implements these numerical algorithms optimized specifically to take advantage of the latest features of Intel chip architectures. Even the best compiler can’t compete with the level of performance possible from a hand-optimized library. Any application that already relies on BLAS or LAPACK functionality will achieve better performance on Intel and compatible architectures just by downloading and re-linking with Intel MKL. And that application will continue to run optimally on future generations of Intel processors with minimal additional effort.
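To make the re-linking point concrete, here is a minimal sketch that solves a small linear system through the standard LAPACKE interface. The 3x3 system and its values are purely illustrative, and nothing in the source is MKL-specific beyond the header name, so building against a reference LAPACK or against Intel MKL is only a link-line decision (for example, adding the Intel compiler’s -mkl option).

    /* Minimal sketch of the "just re-link" scenario: solve A*x = b through the
     * standard LAPACKE interface. Matrix values are illustrative only. */
    #include <stdio.h>
    #include <mkl_lapacke.h>   /* with a reference LAPACK, use <lapacke.h> instead */

    int main(void) {
        double a[9] = { 4.0, 1.0, 2.0,   /* 3x3 coefficient matrix, row-major */
                        1.0, 3.0, 0.0,
                        2.0, 0.0, 5.0 };
        double b[3] = { 7.0, 4.0, 7.0 }; /* right-hand side; overwritten with x */
        lapack_int ipiv[3];

        lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, 3, 1, a, 3, ipiv, b, 1);
        if (info != 0) {
            printf("dgesv failed: info = %d\n", (int)info);
            return 1;
        }
        printf("x = [%f, %f, %f]\n", b[0], b[1], b[2]);   /* expect [1, 1, 1] */
        return 0;
    }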

Intel MKL provides architecture-specific implementations for supported platforms such as IA-32, Intel 64, and the Intel Many Integrated Core Architecture, and is structured to support multiple compilers and interfaces, both serial and multi-threaded modes, different implementations of threading run-time libraries, and a wide range of processors.

Included in Intel MKL are the optimized versions of the following standard numerical libraries:

  • Basic Linear Algebra Subprograms (BLAS) and BLAS-like extension transposition routines
  • Sparse BLAS Levels 1, 2, and 3
  • LAPACK routines for solving systems of linear equations, least-squares problems, eigenvalue and singular value problems, and Sylvester’s equations, along with auxiliary and utility LAPACK routines
  • Parallel Basic Linear Algebra Subprograms (PBLAS)
  • ScaLAPACK
  • Direct sparse solvers, including Intel MKL PARDISO and the Parallel Direct Sparse Solver for Clusters, plus other direct and iterative sparse solver routines
  • Vector Mathematics (VM) and Vector Statistics (VS)
  • Fast Fourier Transforms (FFT) and Cluster FFT
  • Trigonometric Transforms
  • Fast Poisson, Laplace, and Helmholtz Solver (Poisson Library)
  • Optimization (Trust-Region) Solver
  • Data Fitting
  • Deep Neural Network (DNN) functions
  • Extended Eigensolver
  • And various support functions (including memory allocation)
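
To give a flavor of how one of these domains is actually called, here is a minimal sketch that runs a one-dimensional complex-to-complex FFT through Intel MKL’s DFTI interface. The transform length and sample values are arbitrary, illustrative choices.

    /* Sketch: 1-D, double-precision, complex-to-complex forward FFT via DFTI. */
    #include <stdio.h>
    #include <mkl_dfti.h>

    #define N 32

    int main(void) {
        MKL_Complex16 data[N];                /* transformed in place */
        for (int i = 0; i < N; i++) {
            data[i].real = (double)i;         /* arbitrary sample values */
            data[i].imag = 0.0;
        }

        DFTI_DESCRIPTOR_HANDLE handle = NULL;
        MKL_LONG status;

        /* Describe, commit, and compute the transform, then release the descriptor. */
        status = DftiCreateDescriptor(&handle, DFTI_DOUBLE, DFTI_COMPLEX, 1, (MKL_LONG)N);
        if (status == DFTI_NO_ERROR) status = DftiCommitDescriptor(handle);
        if (status == DFTI_NO_ERROR) status = DftiComputeForward(handle, data);
        DftiFreeDescriptor(&handle);

        if (status != DFTI_NO_ERROR) {
            printf("FFT failed: %s\n", DftiErrorMessage(status));
            return 1;
        }
        printf("DC bin: %f + %fi\n", data[0].real, data[0].imag);
        return 0;
    }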

Each group of functions for Fortran, C, C++, and Python is detailed in a set of Intel MKL Developer Reference Guides, as is support for mixed-language programming.

Intel MKL is extensively parallelized and is thread safe. Intel MKL supports both OpenMP and Intel Threading Building Blocks (TBB) threading environments, executing efficiently across multiple threads. Applications can call Intel MKL functions from multiple threads without worrying about the function instances interfering with each other.
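As a small illustration of that threading control, MKL’s internal parallelism can be capped at run time with mkl_set_num_threads() (or, equivalently, with the MKL_NUM_THREADS environment variable) before calling a threaded routine such as dgemm. The thread count and matrix sizes below are arbitrary.

    /* Sketch: limiting Intel MKL's internal threading before a BLAS call. */
    #include <stdio.h>
    #include <mkl.h>

    int main(void) {
        printf("MKL may use up to %d threads by default\n", mkl_get_max_threads());

        mkl_set_num_threads(4);   /* suggest that MKL use at most 4 threads internally */

        /* A 2x2 dgemm is far too small to be threaded in practice; it simply
         * stands in for the larger calls an application would make. */
        double a[4] = { 1, 2, 3, 4 }, b[4] = { 5, 6, 7, 8 }, c[4] = { 0 };
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 2, 1.0, a, 2, b, 2, 0.0, c, 2);
        printf("c[0] = %f\n", c[0]);          /* expect 19.0 */
        return 0;
    }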


While designed for high-performance computing (HPC), the latest Intel MKL release not only provides the foundation for many scientific and engineering solutions, but also serves as a foundation for machine learning. Intel MKL 2017 includes optimized methods that benefit key machine learning algorithms and extensions that address the unique computational needs of machine learning.

Machine learning workloads tend to rely heavily on multidimensional convolutions and matrix-matrix multiplications. They also include several layers that operate on matrices with small dimensions. To minimize the overhead of data transformations, Intel MKL 2017 introduces optimized implementations of these key functions in a new Deep Neural Networks (DNN) domain, including the functions necessary to accelerate the most popular image recognition topologies, such as AlexNet, VGG, GoogLeNet, and ResNet.
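The DNN primitives themselves are beyond the scope of this article, but the class of computation they accelerate is easy to picture. The sketch below, with purely illustrative dimensions and values, performs the kind of single-precision matrix-matrix multiply (sgemm) that deep learning layers ultimately reduce to.

    /* Sketch: a single-precision matrix-matrix multiply (sgemm), representative
     * of the dense linear algebra at the heart of deep learning workloads.
     * Dimensions and values are illustrative only. */
    #include <stdio.h>
    #include <mkl.h>

    int main(void) {
        const MKL_INT m = 64, n = 64, k = 128;

        /* mkl_malloc gives aligned buffers, which helps vectorized kernels. */
        float *a = (float *)mkl_malloc(m * k * sizeof(float), 64);
        float *b = (float *)mkl_malloc(k * n * sizeof(float), 64);
        float *c = (float *)mkl_malloc(m * n * sizeof(float), 64);

        for (MKL_INT i = 0; i < m * k; i++) a[i] = 1.0f;
        for (MKL_INT i = 0; i < k * n; i++) b[i] = 0.5f;

        /* C = 1.0 * A * B + 0.0 * C, all matrices row-major */
        cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    m, n, k, 1.0f, a, k, b, n, 0.0f, c, n);

        printf("c[0] = %f (expect %f)\n", c[0], 0.5f * (float)k);
        mkl_free(a); mkl_free(b); mkl_free(c);
        return 0;
    }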

Along with other optimized tools and compilers, Intel MKL is solidly integrated into Intel Parallel Studio XE 2017. Download and try Intel MKL 2017 for yourself for free.
