Keys to Success for AI in Modeling and Simulation

In this special guest feature from Scientific Computing World, Robert Roe interviews Loren Dean from MathWorks on the use of AI in modeling and simulation.

Loren Dean from MathWorks

Artificial Intelligence (AI) is surging in popularity and considered a transformative technology that can enhance systems in almost every industry vertical and application. However, AI as a technology is still in its infancy, and so careful steps must be taken to ensure that AI is combined appropriately with the modelling and simulation tools used to build these systems.

At this year's MATLAB Expo, held in the UK in October, Loren Dean, senior director of engineering for MATLAB products at MathWorks, highlighted the importance of creating ‘AI-driven systems’ that require more than just intelligent algorithms.

In his keynote presentation, Dean noted the importance of providing insight from domain experts coupled with implementation details, including data preparation, compute-platform selection, modelling and simulation, and automatic code generation to support the integration of the AI component into the final engineered system. If these points are considered carefully, then modelling and simulation users can adopt AI and deploy the technology across embedded systems, edge devices, on-premise IT/OT systems, and cloud platforms.

Robert Roe: Can you start by defining AI and what you mean by the phrase ‘AI-driven systems’?

Loren Dean: Artificial intelligence is the capability of a computer or machine to match or exceed intelligent human behavior. AI relies on learning algorithms, such as machine learning and deep learning, that are trained to perceive the environment, make decisions, or take actions. AI-driven systems are complex integrations with AI algorithms that enable full or partial automation within complex environments.

We see more and more companies exploring and integrating AI in the systems they create. This is especially true in industries like automotive, aeronautics, industrial machinery, oil and gas, and electric utilities. In these cases, AI is being used to automate a process (e.g. defect detection with visual inspection on an assembly line) or to improve a system (e.g. lane detection in an automated driving application). We come across these a lot at MathWorks since we focus on engineering and science in industry.

Robert Roe: What is the potential for AI to drive changes in system development with modelling and simulation?

Loren Dean: What we see going on with AI is that it’s really transforming engineering. It is taking applications and solutions that engineers have traditionally built and it is enabling them to provide more capable systems as well as new offerings that were previously unavailable. We see this transformation occurring across all industries and applications including robotics, industrial automation, medical devices, electrification, automated driving and autonomous systems.

It is occurring both in systems that are being deployed and also in services and capabilities that augment these systems, for instance with predictive maintenance applications.

There is a lot of interest and activity going on with AI and it is poised to have a dramatic impact on us, but it is still in its infancy – people are still trying to understand it and how to apply it.

Robert Roe: What is required to implement AI?

Loren Dean: Whether you are talking about deep learning, machine learning, or reinforcement learning – all of these are types of AI modelling algorithms – AI is being applied in interesting ways that were previously unachievable and are now practical.

If you just focus on AI algorithms, you generally don’t succeed. It is more than just developing your intelligent algorithms, and it’s more than just adding AI – you really need to look at it in the context of the broader system being built and how to intelligently improve it. For this you need the knowledge of the domain experts, the people who have designed the systems, know how they are used, and are familiar with the modelling and simulation needed to build the overall system.

You then need to pay attention to how AI is integrated and implemented with the system. There are tools that go across the whole system, so that implementation is done across the entire design flow – not just the AI modelling piece.

Finally, there is the interaction of the system. As you are developing your broad AI system you need to be smart about what the interactions are going to be. If I think about an automotive application, many vehicles have a lane detection algorithm built in, which can help the driver steer back into the middle of the lane.

There are many ways to do that – you can instantaneously jerk it right back to the middle or you can bring it in gradually for a smoother ride. That is an example of interaction design where you have to pay attention to the control or the response that you give to the system in order to optimise it for either the person or environment that might be involved.
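The two responses Dean contrasts – an instantaneous jerk back to the lane centre versus a gradual, smoother correction – can be sketched with a toy simulation. This is a minimal illustration, not MathWorks code; the controllers, gains, and function names are all assumptions made for the example.

```python
# Hypothetical sketch: two lane-centering strategies for a vehicle that is
# offset from the lane centre. All names and gains are illustrative.

def instant_correction(offset_m):
    """Jerk the vehicle straight back: cancel the whole error in one step."""
    return -offset_m

def gradual_correction(offset_m, gain=0.2):
    """Proportional controller: remove a fraction of the error each step."""
    return -gain * offset_m

def simulate(controller, offset_m=1.0, steps=20):
    """Apply the controller repeatedly and record the lane offset over time."""
    trajectory = [offset_m]
    for _ in range(steps):
        offset_m += controller(offset_m)
        trajectory.append(offset_m)
    return trajectory

abrupt = simulate(instant_correction)   # back to centre immediately
smooth = simulate(gradual_correction)   # eases in over many steps
```

Both controllers eliminate the offset; the difference is the passenger experience, which is exactly the system-level interaction question the designer has to optimise.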

So what we see in AI is that while it is in its infancy, people tend to be focusing on just the AI piece and not the broader system. By designing the entire system, and not just the AI, organisations can deliver systems that are more impactful.

For instance, if you’re designing an autonomous vehicle, the passengers in the vehicle should have a smooth ride. The vehicle systems that have AI integrated in them will be making decisions about how the vehicle should respond to the environment around it. The system might need to brake, speed up, or turn the steering wheel – there are many different actions of the system that AI can help enhance or automate – but you need to collectively optimize the reaction for the overall system design and requirements.

Robert Roe: How do you create a combined workflow?

Loren Dean: In the context of the bigger system we typically look at four main stages: data preparation, AI modelling, system design, and deployment.

Data preparation is the collection or preparation of the data inputs to the system. Training accurate AI models requires lots of data. Often you have lots of data for normal system operation, but really want to predict anomalies or critical failure conditions. This is especially true for predictive maintenance applications. A failure condition, such as a seal leak in a pump, may happen rarely, and producing that data from physical equipment can be destructive. You can instead use a model of the pump and run simulations to produce signals representing failure behaviour – signals that can then be used to train an AI model.
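The synthetic-data idea above can be sketched in a few lines: a crude stand-in "pump model" generates sensor-like traces, with the simulated seal leak adding a slow drift, and the labelled traces form a training set. The signal model, the drift term, and all parameters are illustrative assumptions, not a real pump model.

```python
# Hypothetical sketch: generate labelled synthetic sensor data for a rare
# failure mode (seal leak) from a toy "pump model". Purely illustrative.
import math
import random

def pump_signal(n=200, leak=False, seed=0):
    """Return a synthetic sensor trace; a leak adds a slow drift over time."""
    rng = random.Random(seed)
    drift = 0.02 if leak else 0.0
    return [math.sin(0.3 * i) + rng.gauss(0, 0.1) + drift * i
            for i in range(n)]

# Build a small labelled dataset: 0 = normal operation, 1 = seal leak
dataset = [(pump_signal(leak=False, seed=s), 0) for s in range(50)] + \
          [(pump_signal(leak=True,  seed=s), 1) for s in range(50)]

def tail_mean(sig):
    """A simple feature: the mean of the last 50 samples of a trace."""
    return sum(sig[-50:]) / 50
```

Even this one feature separates the two classes cleanly, which is the point of the approach: the simulated failure is learnable from generated data without ever breaking a physical pump.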

You then have the design of the AI model itself. For this you need to make it easy for the domain experts to build AI models. This means making it easy to label data, split data for training and validation, select from algorithms that are best suited for the problem, speed up training with high performance computing hardware such as GPUs, and visualize and evaluate the performance of the model. The key here is to provide a guided and automated workflow so the domain expert can train an AI model while focusing on the application area and not intricacies of algorithm implementation or computer science.
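One of the workflow steps Dean lists – splitting labelled data into training and validation sets – can be shown with a plain shuffle-and-split. This is a generic sketch with synthetic placeholder data, not the guided tooling described above.

```python
# Minimal sketch of a train/validation split; data and names are illustrative.
import random

def train_val_split(samples, labels, val_fraction=0.2, seed=42):
    """Shuffle the dataset and hold out a fraction for validation."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    n_val = int(len(idx) * val_fraction)
    val, train = idx[:n_val], idx[n_val:]
    return ([samples[i] for i in train], [labels[i] for i in train],
            [samples[i] for i in val],   [labels[i] for i in val])

# Placeholder dataset: 100 samples with alternating labels
X = [[i, i + 1] for i in range(100)]
y = [i % 2 for i in range(100)]
X_tr, y_tr, X_va, y_va = train_val_split(X, y)
```

Holding out data the model never sees during training is what makes the later "visualize and evaluate the performance" step meaningful.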

System design requires being able to take the AI model you’ve chosen and use it naturally alongside the system simulation and verification tools an engineer is accustomed to using. This makes it easy to integrate the AI model as a natural part of building a bigger system.

Finally, there is the interaction: it is really important to identify where and how the AI fits into the system, and then to design the interaction itself. In the lane detection example, I noted that you might make a drastic change to get yourself back on course – that produces a suboptimal response. The engineer who designs the system from that broader perspective can ensure that the right response occurs.

That is the challenge if you just focus on AI – you miss the bigger system-level design that needs to occur.

Robert Roe: How does AI change modelling and testing?

Loren Dean: When I think about augmenting or extending the way that people have traditionally done modelling and testing, it brings in new challenges. One of those challenges is the creation of data: generating synthetic data, and having a model of the system that can be used to train the AI algorithm.

That brings a lot more simulation, where you are looking at the behaviour of the broader system. While people already do a lot of simulation with traditional environments, we end up having to do even more. In order to train your algorithm, you generally need to scale up the number of runs and the volume of data that you need to process.

This is where you start to see integration with the cloud because people do not necessarily have the resources locally to be able to do that scaling for training purposes – or they do not have time to wait.

Robert Roe: Is the cost of more simulation offset by the benefits?

Loren Dean: When done well AI can add significant value to the system. For instance, if I buy a car today, my family and I really value the safety systems in there, and many of those modern safety systems are made possible with AI.

The value that it brings is significant, but the trade-off of additional simulation for scaling, training and testing, in addition to upskilling teams to be knowledgeable about using and integrating AI, is something that requires time and investment. I have not heard of it being a roadblock; rather, it is essential to remain competitive and innovative while delivering the ever-smarter systems that we have all come to expect.

This story appears here as part of a cross-publishing agreement with Scientific Computing World.
