

Architecting the Right System for Your AI Application—without the Vendor Fluff

Brett Newman is the author of HPC Tech Tips at Microway.

In this video from the 2019 Stanford HPC Conference, Brett Newman from Microway presents: Architecting the Right System for Your AI Application—without the Vendor Fluff.

Figuring out how to map your dataset or algorithm to the optimal hardware design is one of the hardest tasks in HPC. We'll review what helps steer the selection of one system architecture over another for AI applications, plus the right questions to ask of your collaborators and of a hardware vendor. Honest technical advice, no fluff.

Artificial Intelligence (AI) and, more specifically, Deep Learning (DL) are revolutionizing the way businesses utilize the vast amounts of data they collect and how researchers accelerate their time to discovery. Some of the most significant examples come from the ways AI has already impacted life as we know it, such as smartphone speech recognition, search engine image classification, and cancer detection in biomedical imaging. Most businesses have collected troves of data or incorporated new avenues to collect data in recent years. Through the innovations of deep learning, that same data can be used to gain insight, make accurate predictions, and pave the path to discovery.

Developing a plan to integrate AI workloads into an existing business infrastructure or research group presents many challenges. However, two key elements will drive the decisions involved in customizing an AI cluster. First, understanding the types and volumes of data is paramount to understanding the computational requirements of training the neural network. Second, understanding the business expectation for time to result is equally important. These factors influence the first and second stages of the AI workload, respectively. Underestimating the data characteristics will result in insufficient computational and infrastructure resources to train the networks in a reasonable timeframe. Likewise, underestimating the value of time to results can fail to deliver ROI to the business or hamper research results.

Brett Newman is the VP of Marketing and Customer Engagement at Microway, Inc., a leading systems integrator at the intersection of AI and HPC. Since 1982, customers have trusted Microway to design and deliver solutions that keep them at the bleeding edge of supercomputing. Brett is part of a broad Microway team with proven technical ability that architects and builds unique hardware configurations tuned to each user's applications. Brett has served many roles in HPC: as a cluster architect, as part of the IBM HPC group, and in product marketing focused solely on materials and resources with serious technical "street cred."

See more talks in the Stanford HPC Conference Video Gallery

Check out our insideHPC Events Calendar
