Sponsored Post
About ten years ago, a team at Intel launched an open source project to create a C++ template library that would bring parallelism to C++ applications. Intel Threading Building Blocks (Intel TBB) became an immediate success as a programming model. And as the company’s first commercial software product to embrace open source, it caused a small revolution inside Intel. What made it a success was its openness and the great feedback from the user community and contributors.
As a result, Intel TBB has had a revolutionizing effect on communities of developers who demand modularity and composability in their applications. The Intel Math Kernel Library (Intel MKL) offers a version built on top of Intel TBB for exactly this reason. And the much newer (and open source) Intel Data Analytics Acceleration Library (Intel DAAL) always uses Intel TBB and the Intel TBB-powered Intel MKL. Intel TBB is also finding use in some distributions of Python*.
According to James Reinders, one of the originators of Intel TBB, while HPC developers “worry about squeezing out the ultimate performance while running an application on dedicated cores, Intel TBB tackles a problem that HPC users never worry about: How can you make parallelism work well when you share the cores that you run upon?”
This is more of a concern when you are running that application on a many-core laptop or workstation than on a dedicated supercomputer, because there is no telling what else is running on those shared cores. Intel TBB reduces the delays caused by other applications by using a revolutionary task-stealing scheduler. This is the real magic of TBB.
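To see what that looks like from the programmer's side, here is a minimal sketch (the function and data names are illustrative, not from the article) that scales a vector with tbb::parallel_for. The programmer only describes the work over an index range; the library decides how the range is split into tasks and how those tasks are stolen and rebalanced across whatever cores happen to be free.

```cpp
#include <cmath>
#include <vector>
#include "tbb/parallel_for.h"
#include "tbb/blocked_range.h"

// Scale every element of a vector in parallel. The scheduler splits the
// range into tasks and steals work between threads, so busy or shared
// cores simply end up doing less of it.
void scale(std::vector<double>& data, double factor) {
    tbb::parallel_for(
        tbb::blocked_range<size_t>(0, data.size()),
        [&](const tbb::blocked_range<size_t>& r) {
            for (size_t i = r.begin(); i != r.end(); ++i)
                data[i] = std::sqrt(data[i]) * factor;
        });
}
```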
Another revolution in parallelism within Intel TBB is the way it uses data flow to avoid much of the synchronization normally needed at runtime. The extra synchronization in traditional parallelization approaches often prevents applications from scaling beyond a few threads. The better approach used by Intel TBB is to let the programmer express the flow of data, which keeps synchronization to a minimum. TBB's flow graph technology is a critical step in advancing parallel programming models to support future developments.
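As a rough sketch of the idea (the node names and the toy computation are illustrative), a TBB flow graph connects processing nodes with edges and lets messages drive execution. The only synchronization is implied by the edges; the programmer writes no locks or barriers.

```cpp
#include <iostream>
#include "tbb/flow_graph.h"

// Two-stage pipeline expressed as a flow graph: a "square" node feeds a
// "print" node. Data flowing along the edge is the only coordination.
int main() {
    using namespace tbb::flow;
    graph g;

    // Any number of squares may be computed concurrently...
    function_node<int, int> square(g, unlimited,
        [](int x) { return x * x; });

    // ...but printing is kept serial so output lines are not interleaved.
    function_node<int, continue_msg> print(g, serial,
        [](int x) { std::cout << x << '\n'; return continue_msg(); });

    make_edge(square, print);

    for (int i = 0; i < 10; ++i)
        square.try_put(i);
    g.wait_for_all();  // wait for all messages to drain through the graph
    return 0;
}
```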
The real appeal of Intel TBB is that it was designed to let the non-expert C++ programmer fully exploit the parallelism in modern multicore processors. And it does so using standard C++ compilers, in a way that scales to larger and larger systems. Over its ten-year history, what started out as a small template library has grown to handle new platforms, new complexities, and new programming styles. But all along it has managed to stay true to its original design roots as a purely library-based solution.
Intel Threading Building Blocks, Intel Math Kernel Library, and Intel Data Analytics Acceleration Library are all integral parts of Intel Parallel Studio XE 2017. Download and try Intel Parallel Studio XE for yourself from the following link:
Tap into the power of parallel on the fastest Intel® processors & coprocessors. Intel® Parallel Studio XE