

Using Inference Engines to Power AI Apps, Audio, Video and More

With demand growing for intelligent solutions like autonomous driving, digital assistants, and recommender systems, enterprises of every type are building AI-powered applications for surveillance, retail, manufacturing, smart cities and homes, office automation, and more, with new uses emerging every day. Increasingly, these applications are driven by inference on smart inputs. This sponsored post from Intel explores how inference engines can be used to power AI apps, audio, and video, and highlights the capabilities of the Intel Distribution of OpenVINO (Open Visual Inference and Neural Network Optimization) toolkit.

Making Computer Vision Real Today – For Any Application

With the demand for intelligent vision solutions increasing everywhere from edge to cloud, enterprises of every type are demanding visually enabled – and intelligent – applications. Until now, most intelligent computer vision applications have required a wealth of machine learning, deep learning, and data science expertise to enable even simple object recognition, much less facial recognition or collision avoidance. That has changed with the introduction of the Intel Distribution of OpenVINO toolkit.

Putting Computer Vision to Work with OpenVINO

OpenVINO is a single toolkit, optimized for Intel hardware, that data scientists and AI software developers can use to quickly build high-performance applications that employ neural network inference and deep learning to emulate human vision across a range of platforms. “This toolkit supports heterogeneous execution across CPUs and computer vision accelerators including GPUs, Intel® Movidius™ hardware, and FPGAs.”