Video: How oneAPI Is Revolutionizing Programming

Michael Wong from the Khronos SYCL Working Group is looking forward to working with oneAPI.

In this video, academics and industry experts weigh in on the potential of oneAPI, the new, unified software programming model for CPU, GPU, AI, and FPGA accelerators that delivers high compute performance for emerging specialized workloads across diverse compute architectures.

What is oneAPI?

oneAPI has two components – the industry initiative and the Intel beta product – both of which represent first steps on a long-term journey:

  • The oneAPI initiative cross-architecture development model is based on industry standards and an open specification to enable broad ecosystem adoption of the technologies that will shape the next evolution of application development.
  • The Intel oneAPI beta product is Intel’s implementation of oneAPI that contains the oneAPI specification components with direct programming (Data Parallel C++), API-based programming with a set of performance libraries, advanced analysis and debug tools, and other components. Developers can test their code and workloads in the Intel DevCloud for oneAPI on multiple types of Intel architectures today, including Intel® Xeon® Scalable processors, Intel® Core™ processors with integrated graphics and Intel FPGAs (Intel Arria®, Stratix®). The effort represents millions of Intel engineering hours in software development to offer the global ecosystem of developers a bridge from existing code and skills to the coming XPU era.
[Infographic: Intel oneAPI]

Why is it significant?

oneAPI represents Intel’s commitment to a “software-first” strategy that will define and lead programming for an increasingly AI-infused, heterogeneous and multi-architecture world.

The ability to program across diverse architectures (CPU, GPU, FPGA and other accelerators) is critical in supporting data-centric workloads where multiple architectures are required and will become the future norm. Today, each hardware platform typically requires developers to maintain separate code bases that need to be programmed using different languages, libraries and software tools. This is complex and time consuming for developers, slowing acceleration and innovation.

oneAPI will address this challenge by delivering a unified and open programming experience to developers on the architecture of their choice without compromising performance, while also eliminating the complexity of separate code bases, multiple programming languages and different tools and workflows. It gives developers a compelling, modern alternative to today’s closed, single-vendor programming environments, letting them preserve their existing software investments while providing a bridge to versatile applications for the multi-architecture world of the future.

Why is Intel qualified to tackle this challenge?

Intel has been deeply engaged with the developer ecosystem for more than 20 years. Intel has over 15,000 software engineers and 10,000 high-touch customer deployments in software. The company is also the number one contributor to the Linux kernel, modifies over half a million lines of code each year, optimizes for over 100 operating systems and has a vibrant 20-million-plus developer ecosystem. And that barely scratches the surface.

Intel’s work across infrastructure, network and operating system developers, tools and SDKs, and the number of standards bodies it influences is virtually unmatched in the industry. Intel has taken this expertise – coupled with millions of Intel software development hours – and applied it to the challenges that developers will face in the future by creating a unified programming model that will democratize development. This model takes what is difficult today and makes it easier by providing more portability, productivity and greater performance gains for developers.

Why an open specification?

Intel has a decades-long history of working with standards groups and industry/academia initiatives, such as the ISO C++/Fortran groups, OpenMP* ARB, MPI Forum and The Khronos Group, to create and define specifications in an open and collaborative process to achieve interoperability and interchangeability. Intel’s efforts with the oneAPI project are a continuation of this precedent. oneAPI will support and be interoperable with existing industry standards. The latest oneAPI specification can be viewed at the oneAPI initiative site.

What’s included in the oneAPI open specification?

The open specification includes a cross-architecture language, Data Parallel C++ (DPC++) for direct programming, together with a set of libraries for API-based programming and a low-level interface to hardware, oneAPI Level Zero. Together, those components allow Intel and other companies to build their own implementations of oneAPI to support their own products or create new products based on oneAPI.
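
To make the low-level layer concrete, the sketch below enumerates drivers and devices through the Level Zero C API. It is a minimal, illustrative example rather than text from the specification, and it assumes the Level Zero loader and headers are installed (linking against the loader library).

    // Minimal sketch: enumerating devices through the oneAPI Level Zero C API.
    #include <level_zero/ze_api.h>
    #include <cstdio>
    #include <vector>

    int main() {
        // Initialize the driver layer; 0 selects the default behavior.
        if (zeInit(0) != ZE_RESULT_SUCCESS) {
            std::printf("Level Zero initialization failed\n");
            return 1;
        }

        // Discover drivers, then the devices exposed by each driver.
        uint32_t driverCount = 0;
        zeDriverGet(&driverCount, nullptr);
        std::vector<ze_driver_handle_t> drivers(driverCount);
        zeDriverGet(&driverCount, drivers.data());

        for (auto driver : drivers) {
            uint32_t deviceCount = 0;
            zeDeviceGet(driver, &deviceCount, nullptr);
            std::vector<ze_device_handle_t> devices(deviceCount);
            zeDeviceGet(driver, &deviceCount, devices.data());

            for (auto device : devices) {
                ze_device_properties_t props = {};
                props.stype = ZE_STRUCTURE_TYPE_DEVICE_PROPERTIES;
                zeDeviceGetProperties(device, &props);
                std::printf("Found device: %s\n", props.name);
            }
        }
        return 0;
    }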

What is Data Parallel C++?

Based on familiar C and C++ constructs, DPC++ is the primary language for oneAPI and incorporates SYCL* from The Khronos Group to support data parallelism and heterogeneous programming for performance across CPUs and accelerators. The goal is to simplify programming and enable code reuse across hardware targets, while allowing for tuning to specific accelerators.
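
For a flavor of the language, here is a minimal, illustrative vector-addition sketch built from standard SYCL constructs (a queue, buffers, accessors and a parallel_for kernel). The kernel name vector_add is arbitrary, and the default selector runs the kernel on whichever device is available (CPU, GPU or another accelerator).

    // Minimal DPC++/SYCL sketch (illustrative only): vector addition.
    #include <CL/sycl.hpp>
    #include <iostream>
    #include <vector>
    using namespace cl::sycl;

    int main() {
        constexpr size_t N = 1024;
        std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

        queue q;  // default device selection
        {
            // Buffers hand the host data to the SYCL runtime for the kernel's lifetime.
            buffer<float, 1> bufA(a.data(), range<1>(N));
            buffer<float, 1> bufB(b.data(), range<1>(N));
            buffer<float, 1> bufC(c.data(), range<1>(N));

            q.submit([&](handler& h) {
                auto A = bufA.get_access<access::mode::read>(h);
                auto B = bufB.get_access<access::mode::read>(h);
                auto C = bufC.get_access<access::mode::write>(h);
                // One work-item per vector element.
                h.parallel_for<class vector_add>(range<1>(N), [=](id<1> i) {
                    C[i] = A[i] + B[i];
                });
            });
        }  // buffers go out of scope here, copying results back to the host vectors

        std::cout << "c[0] = " << c[0] << std::endl;  // expected: 3
        return 0;
    }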

DPC++ language enhancements will be driven through an open community project, with extensions that simplify data parallel programming and cooperative development that supports the language’s continued evolution.
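
Unified shared memory (USM) is one example of such an extension: kernels operate directly on pointers returned by malloc_shared instead of going through buffers and accessors. The sketch below is illustrative only and assumes a DPC++ compiler with USM and unnamed-lambda kernel support.

    // Illustrative DPC++ sketch of the unified shared memory (USM) extension:
    // host and device share a single pointer, so no buffers or accessors are needed.
    #include <CL/sycl.hpp>
    #include <iostream>
    using namespace cl::sycl;

    int main() {
        constexpr size_t N = 1024;
        queue q;

        // A shared allocation is accessible from both host and device.
        float* data = malloc_shared<float>(N, q);
        for (size_t i = 0; i < N; ++i) data[i] = static_cast<float>(i);

        // The kernel operates directly on the shared pointer.
        q.parallel_for(range<1>(N), [=](id<1> i) {
            data[i] *= 2.0f;
        }).wait();

        std::cout << "data[1] = " << data[1] << std::endl;  // expected: 2
        free(data, q);
        return 0;
    }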

Will the oneAPI specification elements be open sourced?

Many libraries and components are already open sourced, and others may be soon. Visit oneapi.com to see which elements have open source availability.

What companies are endorsing and participating in the oneAPI initiative?

As of November 17, more than 30 leading companies and research organizations support the oneAPI concept. The list includes leaders in HPC, AI innovators, hardware vendors/OEMs, ISVs, CSPs, universities and more. Many are also actively testing and providing feedback on Intel’s oneAPI beta toolkits.

[Infographic: oneAPI industry support]

The initiative has just started and Intel anticipates broader adoption over a multi-year timeline. Companies that create their own implementation of oneAPI and complete a self-certification process can use the new oneAPI initiative brand and logo.

What are the different types of Intel oneAPI Beta Toolkits?

The Intel oneAPI Base Toolkit (Beta) is a core set of tools and libraries for building and deploying high-performance, data-centric applications across diverse architectures. It contains the oneAPI open specification technologies (DPC++ language, domain-specific libraries) and the Intel® Distribution for Python* to provide drop-in acceleration across relevant architectures. Enhanced profiling, design assistance, debug tools and other components complete the kit.
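
As a rough illustration of API-based programming with these domain-specific libraries, the hypothetical sketch below offloads a matrix multiply to a oneMKL-style BLAS routine through a SYCL queue; the exact header, namespace and argument order may differ across library versions.

    // Hypothetical sketch of API-based programming: offloading C = A * B
    // to a oneMKL-style GEMM routine on the queue's device.
    #include <CL/sycl.hpp>
    #include <oneapi/mkl.hpp>
    #include <algorithm>
    #include <cstdint>
    using namespace cl::sycl;

    int main() {
        constexpr std::int64_t n = 256;  // square matrices for simplicity
        queue q;

        // USM allocations visible to both host and device.
        float* A = malloc_shared<float>(n * n, q);
        float* B = malloc_shared<float>(n * n, q);
        float* C = malloc_shared<float>(n * n, q);
        std::fill(A, A + n * n, 1.0f);
        std::fill(B, B + n * n, 1.0f);
        std::fill(C, C + n * n, 0.0f);

        // C = 1.0 * A * B + 0.0 * C, executed asynchronously on the queue.
        oneapi::mkl::blas::column_major::gemm(
            q,
            oneapi::mkl::transpose::nontrans, oneapi::mkl::transpose::nontrans,
            n, n, n,
            1.0f, A, n, B, n,
            0.0f, C, n).wait();

        free(A, q); free(B, q); free(C, q);
        return 0;
    }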

For specialized HPC, AI and other workloads, additional toolkits are available that complement the Intel oneAPI Base Toolkit. These include:

  • Intel oneAPI HPC Toolkit (Beta) to deliver fast C++, Fortran and OpenMP applications that scale
  • Intel oneAPI DL Framework Developer Toolkit (Beta) to build deep learning frameworks or customize existing ones
  • Intel oneAPI Rendering Toolkit (Beta) to create high-performance, high-fidelity visualization applications (including sci-vis)
  • Intel AI Analytics Toolkit (Beta), powered by oneAPI, to help AI developers and data scientists build applications that leverage machine learning and deep learning models

There are also two other complementary toolkits powered by oneAPI: the Intel System Bring-Up Toolkit for system engineers and the production-level Intel Distribution of OpenVINO™ Toolkit for deep learning inference and computer vision. Visit the Intel oneAPI site for more details.

Which processors and accelerators are supported by oneAPI?

The oneAPI specification is designed to support a broad range of CPUs and accelerators from multiple vendors. Intel’s oneAPI beta reference implementation currently supports Intel CPUs (Intel Xeon, Core and Atom®), Intel Arria FPGAs and Gen9/Intel Processor Graphics as a proxy development platform for future Intel discrete data center GPUs. Additional Intel accelerator architectures will be added over time.

Will oneAPI work on other vendors’ hardware?

The oneAPI language (DPC++) and library specifications are publicly available for other hardware vendors to use, and Intel encourages them to do so. It is up to those vendors, or others in the industry, to create their own oneAPI implementations and optimize them for specific hardware.

Where can developers go to get more information?

More information on the oneAPI initiative can be found at oneAPI.com. Developers can download the Intel oneAPI Beta Toolkits for local use from the Intel Developer Zone at software.intel.com/oneapi. Alternatively, developers can get started quickly through the Intel DevCloud for oneAPI, which provides access to the oneAPI Toolkits for testing code and workloads across a variety of Intel data-centric architectures. This saves time by bypassing installation and setup, and offers the flexibility to try out different hardware without the cost of a development platform.
