Building Fast Data Compression Code with Intel Integrated Performance Primitives (Intel IPP) 2018

Sponsored Post

Data compression is critical to cloud and data-streaming applications: it reduces data transfer times and storage utilization. But compressing and decompressing data consumes processor resources and can have a significant impact on overall system performance. Great care must therefore be taken in how compression and decompression algorithms are implemented to ensure high efficiency.

Data compression methods involve trade-offs between the degree of compression, the amount of distortion or data loss introduced, and the computational resources required to compress and decompress the data. Choosing the right algorithm and optimizations gives the greatest benefit to overall performance. Proper alignment of data and reducing the size of internal data tables and lists to fit in CPU cache can also boost performance. Also, employing the latest CPU architectures, SIMD extensions, and bit manipulation instructions can further improve performance.
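The ratio-versus-CPU-time trade-off is easy to observe even with a stock compressor. The following sketch uses Python's standard `zlib` module purely for illustration (it does not use Intel IPP); higher compression levels shrink the output further but take longer:

```python
import time
import zlib

# A compressible sample payload: repeated text
data = b"HPC data compression trade-off demo. " * 10000

for level in (1, 6, 9):  # fastest, default, best compression
    start = time.perf_counter()
    packed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(packed):6d} bytes "
          f"({len(packed) / len(data):.1%} of original) in {elapsed * 1e3:.2f} ms")

# Lossless compression always recovers the original bytes exactly
assert zlib.decompress(packed) == data
```

Running this on typical data shows level 1 finishing fastest with the largest output, and level 9 slowest with the smallest, which is exactly the trade-off an optimized library aims to shift in the application's favor.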

Intel® Integrated Performance Primitives (Intel IPP) is a highly optimized, production-ready library for image, signal, and data processing, including lossless data compression/decompression, and cryptography applications.

Intel IPP provides optimized implementations of the common data compression algorithms, including the BZIP2*, ZLIB*, LZO*, and a new LZ4* function, as “drop-in” replacements for the original compression code. For example, IPP ZLIB is the basic compression method used by many file archivers such as gzip*, WinZip*, and PKZIP*, along with PNG graphics libraries, network protocols, and some Java* classes. Because Intel IPP provides fully compatible APIs, applications can immediately utilize the optimized ZLIB in Intel IPP by just relinking with the Intel library, or switching to a new dynamic ZLIB library built with Intel IPP.
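The reason relinking is enough is that the ZLIB stream format is standardized: any conforming implementation can decompress data produced by any other. The round-trip contract can be sketched with Python's standard `zlib` module, which wraps the same stream format (shown for illustration; an IPP-built ZLIB honors the identical contract at the C level):

```python
import zlib

original = b"Payload written by one ZLIB implementation."

# Compress with one conforming implementation...
compressed = zlib.compress(original)

# ...and any other conforming ZLIB implementation (including one
# built with Intel IPP) can restore the stream, and vice versa.
restored = zlib.decompress(compressed)
assert restored == original
```

This format compatibility is what makes "drop-in" replacement safe: archives and network peers on the other end never need to know which implementation produced the stream.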

Intel IPP functions are highly optimized for a wide range of Intel architectures, including Intel Quark™, Intel Atom®, Intel Core™, Intel Xeon®, and Intel Xeon Phi™ processors. Each Intel IPP function includes multiple code paths, each optimized for specific Intel and compatible processors. As new processors are released, developers can take advantage of the latest processor architectures simply by linking to the newest version of Intel IPP.


Intel IPP includes more than 2,500 image processing, 1,300 signal processing, 500 computer vision, and 300 cryptography optimized functions for creating digital media, enterprise data, embedded, communications, and scientific, technical, and security applications.

The just-released Intel IPP 2018 introduces these new features:

  • New functions supporting LZ4 data compression and decompression.
  • Standalone cryptography packages that can be used without the main Intel IPP packages.
  • Optimized GraphicsMagick version 1.3.25 APIs: ResizeImage, ScaleImage, GaussianBlurImage, FlipImage, and FlopImage. Performance improvements up to 4x, depending on the functionality, input parameters, and processors.
  • Computer Vision: Added 64-bit data-length support for the Canny edge detection function (ippiCanny_32f8u_C1R_L).
  • Color Conversion: Added the ippiDemosaicVNG functions that support the demosaicing algorithm with VNG interpolation.
  • Cryptography: Added the Elliptic Curves key generation and Elliptic Curves based Diffie-Hellman shared secret functionality. Added the Elliptic Curves sign generation and verification functionalities for the DSA, NR, and SM2 algorithms.
  • Performance: Extended optimization for the Intel Advanced Vector Extensions 512 (Intel AVX-512) and Intel Advanced Vector Extensions 2 (Intel AVX2) instruction sets. Improved performance of LZO data compression functions on Intel AVX2 and Intel Streaming SIMD Extensions 4.2 (Intel SSE4.2).
  • Threading: Added Threading Layer APIs for the platform-aware functions, providing both 64-bit object sizes (for large images and signal data) and internal threading in Intel IPP functions.
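Two of the codec families Intel IPP accelerates, ZLIB and BZIP2, also have standard-format counterparts in Python's standard library, which makes it easy to compare the formats on the same input. This sketch is illustrative only and does not invoke IPP itself:

```python
import bz2
import zlib

# Highly repetitive input, where both codecs compress well
data = b"abcdefgh" * 50000

results = {
    "zlib (DEFLATE)": zlib.compress(data, 9),
    "bz2 (BZIP2)": bz2.compress(data, 9),
}

for name, blob in results.items():
    print(f"{name}: {len(blob)} bytes from {len(data)}")

# Both formats are lossless: each round-trips to the original bytes
assert zlib.decompress(results["zlib (DEFLATE)"]) == data
assert bz2.decompress(results["bz2 (BZIP2)"]) == data
```

Because the on-disk formats are fixed by specification, an application can adopt the IPP-accelerated implementations codec by codec while remaining interoperable with existing archives and tools.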

Intel IPP 2018 is supported on Microsoft* Windows*, Linux*, macOS*, and Android* operating systems. It is available as part of the Intel Parallel Studio XE and Intel System Studio tool suites, and as a free stand-alone version.

Download Intel® Performance Libraries for free.
