Overcoming Challenges to Deep Learning Infrastructure

With use cases like computer vision, natural language processing, predictive modeling, and much more, deep learning (DL) provides the kinds of far-reaching applications that change the way technology can impact human existence. The possibilities are limitless, and we’ve just scratched the surface of its potential.

But designing an infrastructure for DL creates a unique set of challenges. Even the training and inference steps of DL have distinct requirements. You typically want to run a proof of concept (POC) for the training phase of the project and a separate one for the inference portion, as the requirements for each are quite different.

Deep Learning Infrastructure Challenges

There are three significant obstacles for you to be aware of when designing a deep learning infrastructure: scalability, customizing for each workload, and optimizing workload performance.

Scalability

The hardware-related steps required to stand up a DL technology cluster each have unique challenges. Moving from POC to production often results in failure, due to additional scale, complexity, user adoption, and other issues. You need to design scalability into the hardware at the start.

Customized Workloads

Specific workloads require specific customizations. You can run ML on a non-GPU-accelerated cluster, but DL typically requires GPU-based systems. And training requires the ability to support ingest, egress, and processing of massive datasets.

Optimize Workload Performance

One of the most crucial factors of your hardware build is optimizing performance for your workload. Your cluster should be a modular design, allowing customization to meet your key concerns, such as networking speed, processing power, etc. This build can grow with you and your workloads and adapt as new technologies or needs arise.

Infrastructure Needs for DL Processes

Training an artificial neural network requires you to curate huge quantities of data into a designated structure, then feed that massive training dataset into a DL framework. Once the model is trained, it can apply what it has learned when exposed to new data and make inferences about that data. But each of these processes has different infrastructure requirements for optimal performance.

Training

Training is the process of learning a new capability from existing data based on exposure to related data, usually in very large quantities. These factors should be considered in your training infrastructure:

  • Get as much raw compute power and as many nodes as you can allocate. You should employ multi-core processors and GPUs, because accurately training your AI model is the most critical issue you’ll face. It may take a long time to get there, but the more nodes and the more mathematical accuracy you can build into your cluster, the faster and more accurate your training will be.
  • Training often requires incremental addition of new datasets that remain clean and well-structured. That means these resources cannot be shared with others in the datacenter. Focus on optimizing for this workload to get better performance and more accurate training. Don’t try to build a general-purpose compute cluster on the assumption that it can take on other jobs in its free time.
  • Huge training datasets require massive networking and storage capabilities to hold and transfer the data, especially if your data is image-based or heterogeneous. Plan for adequate networking and storage capacity, not just for strong compute.
  • The greatest challenge in designing hardware for neural network training is scaling. Doubling the amount of training data doesn’t simply double the resources needed to process it; compute, storage, and networking demands tend to grow much faster than linearly as datasets and models expand.
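The storage and networking guidance above can be made concrete with a back-of-the-envelope calculation. The sketch below uses entirely hypothetical workload numbers (sample size, epoch count, training window are assumptions, not figures from this article) to estimate dataset size and the sustained read bandwidth a training cluster would need:

```python
def training_io_estimate(num_samples, avg_sample_mb, epochs, train_hours):
    """Rough sizing for a training cluster's storage and network.

    Assumes each epoch re-reads the full dataset from storage; all
    workload numbers are hypothetical placeholders.
    """
    dataset_gb = num_samples * avg_sample_mb / 1024          # total dataset size
    total_read_gb = dataset_gb * epochs                      # data read over the run
    # Sustained read rate in Gbit/s needed to finish within the time budget
    gbps_needed = total_read_gb * 8 / (train_hours * 3600)
    return dataset_gb, gbps_needed

# Hypothetical: 10M images at ~0.5 MB each, 90 epochs, 24-hour training budget
size_gb, gbps = training_io_estimate(10_000_000, 0.5, 90, 24)
```

With these assumed numbers, the dataset is roughly 4.9 TB, but sustaining 90 epochs in 24 hours demands on the order of 40 Gbit/s of read bandwidth, which is why storage and networking deserve as much planning as compute.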

Inference

Inference is the application of what has been learned to new data (usually via an application or service) and making an informed decision regarding the data and its attributes. Once your model is trained, it can make educated assumptions about new data based on the training it has received. These factors should be considered in your inference infrastructure:

  • Inference clusters should be optimized for performance using simpler hardware with less power than the training cluster but with the lowest latency possible.
  • Throughput is critical to inference. The process requires high I/O bandwidth and enough memory to hold both the required training model(s) and the input data without having to make calls back to the storage components of the cluster.
  • Data center resource requirements for a single inference instance are typically lower than for training. That’s because the amount of data or number of users an inference platform can support is limited by the performance of the platform and the requirements of the application. Think of speech recognition software, which can only operate on one clear input stream; feed it several at once and it fails. Inference input streams behave the same way.
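The memory and throughput points above can be sketched numerically. This is a minimal illustration with hypothetical model and accelerator sizes (all numbers are assumptions for the example): it checks whether model weights plus an input batch fit in device memory, so inference never has to call back to cluster storage, and converts batch latency into requests served per second.

```python
def fits_in_memory(model_gb, batch_input_gb, device_mem_gb, headroom=0.1):
    """True if model weights plus one input batch fit in device memory,
    keeping some headroom free (default 10%)."""
    usable = device_mem_gb * (1 - headroom)
    return model_gb + batch_input_gb <= usable

def throughput_per_sec(batch_size, batch_latency_ms):
    """Requests served per second at a given batch size and batch latency."""
    return batch_size * 1000 / batch_latency_ms

# Hypothetical: a 6 GB model and a 2 GB batch on a 16 GB accelerator
ok = fits_in_memory(6, 2, 16)       # 8 GB needed vs 14.4 GB usable
rps = throughput_per_sec(32, 40)    # 32-item batches at 40 ms each
```

Under these assumptions the workload fits with room to spare and serves 800 requests per second; a model that didn’t fit would force round trips to storage, destroying the low latency that inference clusters are built for.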

Inference on the Edge

There are several special considerations for inference on the edge:

  • Edge-based computers are significantly less powerful than the massive compute available in data centers and the cloud. This still works, though, because inference requires much less processing power than training does.
  • If you have hundreds or thousands of instances of the neural network model to support, though, remember that each of these multiple incoming data sources needs sufficient resources to process the data.
  • Normally, you want your storage and memory as close to the processor as possible to reduce latency. But with edge devices, the memory is sometimes nowhere near the processing and storage components of the system. This means you need either a device that supports GPU or FPGA compute and storage at the edge, or access to a high-performance, low-latency network (or both).
  • You could also use a hybrid model, where the edge device gathers data but sends it to the cloud, where the inference model is applied to the new data. If the inherent latency of moving data to the cloud is acceptable (it is not in some real-time applications, such as self-driving cars), this could work for you.
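The edge-versus-hybrid trade-off above comes down to a latency budget. The sketch below uses hypothetical timings (a slow edge chip, a fast cloud GPU, and an assumed network round trip; none of these figures come from the article) to compare the two paths against an application's deadline:

```python
def edge_vs_cloud_latency(edge_infer_ms, cloud_infer_ms, network_rtt_ms):
    """End-to-end latency of inferring locally on the edge device vs
    shipping the data to the cloud and back (the hybrid model)."""
    edge_total = edge_infer_ms                     # no network hop
    cloud_total = network_rtt_ms + cloud_infer_ms  # pay the round trip
    return edge_total, cloud_total

def meets_budget(total_ms, budget_ms):
    """True if the path finishes within the application's deadline."""
    return total_ms <= budget_ms

# Hypothetical: 35 ms on-device inference vs 5 ms in the cloud over a 60 ms RTT
edge_ms, cloud_ms = edge_vs_cloud_latency(35.0, 5.0, 60.0)
```

With these assumed numbers, a tight real-time budget of 50 ms (think self-driving) rules out the cloud path even though the cloud GPU is faster, while a 100 ms voice-assistant-style budget allows either, which is exactly when the hybrid model becomes attractive.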

Achieving DL Technology Goals

Your goals for your DL technology are to drive AI applications that optimize automation and allow you a far greater level of efficiency in your organization. Learn even more about how to build the infrastructure that will accomplish these goals with this white paper from Silicon Mechanics.