Intel Joins Open Neural Network Exchange


Jason Knight is the Senior Technology Officer for Intel AI Products.

Jason Knight from Intel writes that the company has joined Microsoft, Facebook, and others in the Open Neural Network Exchange (ONNX) project.

“By joining the project, we plan to further expand the choices developers have on top of frameworks powered by the Intel Nervana Graph library and deployment through our Deep Learning Deployment Toolkit. Developers should have the freedom to choose the best software and hardware to build their artificial intelligence models and not be locked into one solution based on a framework. Deep learning is better when developers can move models from framework to framework and use the best hardware platform for the job.”

Intel Nervana Graph is a hardware-independent, open source library that enables deep learning frameworks to achieve maximum performance across a wide variety of hardware platforms. In a similar vein, the ONNX format, created by Microsoft and Facebook, is designed to give users their choice of framework, so they are free to pick the best tool for model construction, training, and deployment.
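Because an ONNX model is a self-contained file, it can be inspected without any training framework at all. Here is a minimal sketch using the open source onnx Python package; the file name model.onnx is a placeholder:

```python
import onnx

# Load a serialized ONNX model (a protobuf file) from disk.
model = onnx.load("model.onnx")  # placeholder path

# Validate that the model conforms to the ONNX spec.
onnx.checker.check_model(model)

# Print a human-readable summary of the computation graph,
# independent of whichever framework produced it.
print(onnx.helper.printable_graph(model.graph))
```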

“We plan to enable users to convert ONNX models to and from Intel Nervana Graph models, giving users an even broader selection of choice in their deep learning toolkits. These converters will be simple to use (they are a conversion from one protobuf format to another) and bidirectional. In addition to these converters, we are also participating in the open development of ONNX to make sure it continues to evolve into a format that can keep pace with rapid developments in both deep learning algorithms and hardware.”
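The converters themselves had not shipped when this was written, so the sketch below is purely illustrative: the module name ngraph_onnx and the functions onnx_to_ng and ng_to_onnx are hypothetical stand-ins, meant only to show the bidirectional protobuf-to-protobuf round trip the quote describes.

```python
import onnx
# Hypothetical converter module; the package and function names
# below are assumptions, not a released Intel API.
from ngraph_onnx import onnx_to_ng, ng_to_onnx

# One direction: ONNX protobuf -> Intel Nervana Graph model.
onnx_model = onnx.load("language_model.onnx")
ng_model = onnx_to_ng(onnx_model)

# ...train or extend the model with Nervana Graph / neon here...

# And back: Nervana Graph model -> ONNX protobuf, ready for
# deployment in any ONNX-capable framework.
onnx.save(ng_to_onnx(ng_model), "language_model_roundtrip.onnx")
```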

For an example of how ONNX and Intel Nervana Graph compatibility is beneficial for users, imagine this scenario: Your colleague has trained a new language model in CNTK, and you’d like to implement a multi-modal fusion model that builds on top of your colleague’s model while also integrating camera input. You download a trained SqueezeNet* model from the PyTorch* model zoo, import both models into the neon™ framework (through the ONNX and Intel Nervana Graph converters), add some fusion layers, and then train the final layers using your laptop or an Intel Xeon-optimized cloud instance. From there you are free to convert back to ONNX to deploy in Caffe2*, or to quantize and prune using the Intel Deep Learning Deployment Toolkit to prepare for mobile deployment.
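The first step of that scenario, serializing a trained SqueezeNet to ONNX, looks roughly like this with PyTorch's built-in exporter (a recent PyTorch and torchvision's pretrained SqueezeNet are assumed here for concreteness):

```python
import torch
import torchvision

# Fetch a SqueezeNet model with pretrained ImageNet weights
# from the torchvision model zoo.
model = torchvision.models.squeezenet1_1(pretrained=True)
model.eval()

# The exporter traces the model with a dummy input, so this
# shape (batch of one 3x224x224 image) fixes the exported graph.
dummy_input = torch.randn(1, 3, 224, 224)

# Serialize the traced graph to an ONNX protobuf file, ready to
# be imported into another framework through its ONNX converter.
torch.onnx.export(model, dummy_input, "squeezenet.onnx")
```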
