

Developing Next Generation Autonomous Vehicles using ‘AI on the Fly’

In this sponsored post from One Stop Systems (OSS), Tim Miller, Vice President of Strategic Development, explores how autonomous vehicles will change the transportation landscape, and highlights the role AI has in this shift. 


Tim Miller, Vice President Strategic Development, One Stop Systems

The next decade will see a fundamental change in the way we get from point A to point B in our automobiles. The quest to remove humans from behind the wheel with truly autonomous vehicles will drive billions of dollars in investment by car manufacturers and transportation service providers to develop and acquire the required technology. According to the SAE International classifications for autonomous capabilities, we are only at Level 2, meaning that just basic driver-assistance automation is being deployed in commercial vehicles today. However, many of the key players in the industry project that Level 5 vehicles will be on the road by 2028, providing full automation of all dynamic driving tasks under all roadway and environmental conditions. It is further projected that by 2040 virtually all vehicles on the road will be fully automated, saving thousands of lives a year from automobile accidents and bringing the brief 150-year history of human driving to an end.

To reach this milestone, major car manufacturers and rideshare companies are starting to deploy fleets of development and prototype cars. These fleets are being used to gather the data required to develop and test the artificial intelligence algorithms that will eventually be deployed in millions of commercial vehicles. The cars in these fleets need to be outfitted with specialized high-performance edge computing equipment: high-bandwidth data ingest systems tied to the car's myriad video, radar and LIDAR sensors; high-capacity, low-latency storage subsystems; and high-performance compute engines that can perform the AI machine learning and inference tasks needed to enable the vehicle to see, hear, think and make decisions just like human drivers.

In addition to performance requirements, this computer equipment must be specialized in terms of form factor, cooling and ruggedization to survive the harsh environment of cars driving hundreds of thousands of miles in all road and weather conditions. This combination of requirements is ideally addressed with AI on the Fly technologies, where specialized high-performance accelerated computing resources for deep learning training are deployed in the field near the data source; in this case, inside the vehicles themselves. In typical AI solutions, deep learning training has been a centralized datacenter process, and only inferencing occurs in the field. In contrast, AI on the Fly moves this capability to the edge and allows rapid response to new data with continual reinforcement and transfer learning. This is critical to effectively performing fundamental autonomous vehicle tasks such as obstacle detection and collision avoidance.
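The continual-learning idea above — refining a model in the vehicle as new sensor data arrives, rather than waiting for a centralized retraining cycle — can be illustrated with a minimal sketch. This is plain Python on a toy linear model; all names are hypothetical and no OSS API is implied. A real deployment would use a deep learning framework on GPU accelerators.

```python
# Minimal illustration of continual (online) learning at the edge:
# a linear model updated with one SGD step per incoming sample.
# Toy data only; real systems would fine-tune a neural network.

def sgd_step(weights, sample, target, lr=0.1):
    """Return updated weights after learning from one new observation."""
    prediction = sum(w * x for w, x in zip(weights, sample))
    error = prediction - target
    return [w - lr * error * x for w, x in zip(weights, sample)]

def squared_error(weights, sample, target):
    prediction = sum(w * x for w, x in zip(weights, sample))
    return (prediction - target) ** 2

# Simulated stream of sensor-derived samples arriving in the field.
# The underlying relationship is target = 2*x0 + 1*x1, which the
# model learns incrementally, one sample at a time.
stream = [((1.0, 0.0), 2.0), ((0.0, 1.0), 1.0), ((1.0, 1.0), 3.0)] * 50

weights = [0.0, 0.0]
for sample, target in stream:
    weights = sgd_step(weights, sample, target)
```

The key property is that the model improves continuously from data it has never seen before, without a round trip to a datacenter — the same motivation behind performing training, not just inference, at the edge.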

AI on the Fly is made up of three modular sub-systems: data ingest, data storage and compute engines. These sub-systems support high-speed components including data capture hardware, NVMe SSD storage, and GPU and FPGA compute accelerators, all with PCI Express interfaces for flexible scaling while maintaining high bandwidth and low latency. The data ingest system must be capable of absorbing the vast amounts of data continually flowing in from the sensors and processing that data for efficient delivery to both the persistent storage and the compute engines. Features in PCIe allow the data to be simultaneously multicast to multiple sub-systems using RDMA transfers, avoiding system memory bottlenecks without adding network protocol latency. The compute functions include machine learning tasks using traditional data science tools, data analysis, deep learning training tasks using neural network frameworks, and inference engines for prediction using trained models against newly sourced data. Each of these elements may require specialized GPU resources. AI on the Fly provides all of these elements in flexible building-block components that are easily customized to the specific requirements of the autonomous vehicle developer. The figure below illustrates an example of AI on the Fly configurations for autonomous vehicles.
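The multicast fan-out described above — one ingest stream feeding persistent storage and the compute engines at the same time — can be sketched conceptually in plain Python. This is only a software model of the data flow; in the actual hardware the fan-out happens over PCIe with RDMA transfers, not in thread queues, and all names here are illustrative.

```python
import queue
import threading

# Conceptual model of the three sub-systems: an ingest loop
# multicasts each sensor record to a storage queue and a compute
# queue, so neither consumer waits on the other. (In hardware this
# fan-out is done with PCIe multicast and RDMA, bypassing system
# memory entirely.)

def ingest(records, consumers):
    """Fan each incoming record out to every consumer queue."""
    for record in records:
        for q in consumers:
            q.put(record)
    for q in consumers:
        q.put(None)  # end-of-stream sentinel for each consumer

def drain(q, sink):
    """Consumer loop: pull records until the sentinel arrives."""
    while (record := q.get()) is not None:
        sink.append(record)

storage_q, compute_q = queue.Queue(), queue.Queue()
stored, computed = [], []  # stand-ins for NVMe storage and GPU work

workers = [
    threading.Thread(target=drain, args=(storage_q, stored)),
    threading.Thread(target=drain, args=(compute_q, computed)),
]
for w in workers:
    w.start()

sensor_records = [f"frame-{i}" for i in range(100)]
ingest(sensor_records, [storage_q, compute_q])
for w in workers:
    w.join()
```

The point of the model is that every record reaches both sub-systems in order, without one copy blocking the other — the software analogue of multicasting a single PCIe transfer to storage and compute simultaneously.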

AI on the Fly Hardware Configurations (Image: One Stop Systems)

One Stop Systems is working with industry leaders to provide technology for their autonomous vehicle development programs. These companies look to OSS as their trusted development partner because of its technical expertise in specialized high-performance edge computing. They rely on OSS's experience in developing scalable PCI Express based systems that tie together high-bandwidth sensor data ingest sub-systems with low-latency NVMe storage and ultra-high-performance multi-GPUs, all packaged in specialized rugged form factors. OSS recently announced a collaborative engineering design win with a major international network transportation company for deployment of AI on the Fly components in its 150-vehicle autonomous driving development fleet.

AI on the Fly is playing a key role in development of fully autonomous driving vehicles and will help to usher in fundamental changes to human transport over the next decade.

Tim Miller is Vice President of Strategic Development at One Stop Systems

Disclaimer: This article may contain forward-looking statements based on One Stop Systems’ current expectations and assumptions regarding the company’s business and the performance of its products, the economy and other future conditions and forecasts of future events, circumstances and results.
